Message ID | 20230928001956.924301-1-seanjc@google.com
---|---
Series | KVM: x86: Fix breakage in KVM_SET_XSAVE's ABI
On Wed, Oct 04, 2023, Tyler Stachecki wrote:
> On Wed, Oct 04, 2023 at 04:11:52AM -0300, Leonardo Bras wrote:
> > So this patch is supposed to fix migration of a VM from a host with a
> > pre-ad856280ddea (OLD) kernel to a host with ad856280ddea + your set (NEW).
> > Right?
> >
> > Let's get the scenario here, where all machines are the same:
> > 1 - VM created on OLD kernel with a host-supported xfeature F, which is not
> >     guest supported.
> > 2 - VM is migrated to a NEW kernel/host, and KVM_SET_XSAVE loads xfeature F.
> > 3 - VM will be migrated to another host, QEMU requests KVM_GET_XSAVE, which
> >     returns only guest-supported xfeatures, and this is passed to the next host.
> > 4 - VM will be started on the 3rd host with guest-supported xfeatures, meaning
> >     xfeature F is filtered out, which is not good, because the VM will have
> >     fewer features compared to boot.

No, the VM will not have fewer features, because KVM_SET_XSAVE loads *data*, not
features.  On a host that supports xfeature F, the VM is running with garbage data
no matter what, which is perfectly fine because from the guest's perspective, that
xfeature and its associated data do not exist.  And in all likelihood, unless QEMU
is doing something bizarre, the data that is loaded via KVM_SET_XSAVE will be the
exact same data that is already present in the guest FPU state, as both will be in
the init state.

On top of that, the data that is loaded via KVM_SET_XSAVE may not actually be
loaded into hardware, i.e. may never be exposed to the guest.  E.g. IIRC, the
original issue was with PKRU.  If PKU is supported by the host, but not exposed to
the guest, KVM will run the guest with the *host's* PKRU value.

> This is what I was (trying) to convey earlier...
>
> See Sean's response here:
> https://lore.kernel.org/all/ZRMHY83W%2FVPjYyhy@google.com/
>
> I'll copy the pertinent part of his very detailed response inline:
> > KVM *must* "trim" features when servicing KVM_GET_XSAVE{2}, because that's been
> > KVM's ABI for a very long time, and userspace absolutely relies on that
> > functionality to ensure that a VM can be migrated within a pool of heterogeneous
> > systems so long as the features that are *exposed* to the guest are supported
> > on all platforms.
>
> My 2 cents: as an outsider with less familiarity with the KVM code, it is hard
> to understand the contract here with the guest/userspace.  It seems there is a
> fundamental question of whether or not "superfluous" features, those being
> host-supported features which extend that which the guest is actually capable
> of, can be removed between the time that the guest boots and when it terminates,
> through however many live migrations there may be.

KVM's ABI has no formal notion of guest boot => shutdown or live migration.  The
myriad KVM_GET_* APIs allow taking a snapshot of guest state, and the KVM_SET_*
APIs allow loading a snapshot of guest state.  Live migration is probably the most
common use of those APIs, but there are other use cases.

That matters because KVM's contract with userspace for KVM_SET_XSAVE (or any other
state save/load ioctl()) doesn't have a holistic view of the guest, e.g. KVM can't
know that userspace is live migrating a VM, and that userspace's attempt to load
data for an unsupported xfeature is ok because the xfeature isn't exposed to the
guest.  In other words, at the time of KVM_SET_XSAVE, KVM has no way of knowing
that an xfeature is superfluous.
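[For illustration, the snapshot/load pair being discussed looks roughly like the
sketch below from userspace's side.  This is a minimal, hedged example, not QEMU's
actual code: error handling and VM/vCPU setup are elided, vm_fd/vcpu_fd are assumed
to be open descriptors, and it assumes headers/kernel new enough to provide
KVM_CAP_XSAVE2 and KVM_GET_XSAVE2.]

#include <linux/kvm.h>
#include <stdlib.h>
#include <sys/ioctl.h>

/* Save a vCPU's XSAVE snapshot, roughly as a VMM would for live migration. */
struct kvm_xsave *snapshot_xsave(int vm_fd, int vcpu_fd, size_t *size)
{
	struct kvm_xsave *xsave;
	int ret;

	/* KVM_CAP_XSAVE2 reports how large the XSAVE buffer needs to be. */
	ret = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_XSAVE2);
	*size = ret > (int)sizeof(*xsave) ? (size_t)ret : sizeof(*xsave);

	xsave = calloc(1, *size);

	/*
	 * KVM_GET_XSAVE2 saves only xfeatures that are exposed to the guest,
	 * i.e. host-only xfeatures are "trimmed" out of the snapshot.
	 */
	ioctl(vcpu_fd, KVM_GET_XSAVE2, xsave);
	return xsave;
}

/* Load a snapshot into a (possibly different host's) vCPU. */
int restore_xsave(int vcpu_fd, struct kvm_xsave *xsave)
{
	/*
	 * KVM_SET_XSAVE loads *data*; it does not change which xfeatures the
	 * guest sees.  If the snapshot contains data this host/kernel cannot
	 * load, the ioctl fails rather than silently dropping the data.
	 */
	return ioctl(vcpu_fd, KVM_SET_XSAVE, xsave);
}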
Normally, that's a complete non-issue because there is no superfluous xfeature
data, as KVM's contract for KVM_GET_XSAVE{2} is that only necessary data is saved
in the snapshot.  Unfortunately, the original bug that led to this mess broke the
contract for KVM_GET_XSAVE{2}, and I don't see a safe way to work around that bug
in KVM without an opt-in from userspace.

> Ultimately, this problem is not really fixable if said features cannot be
> removed.

It's not about removing features.  The change you're asking for is to have KVM
*silently* drop data.  Aside from the fact that such a change would break KVM's
ABI, silently ignoring data that userspace has explicitly requested be loaded for
a vCPU is incredibly dangerous.  E.g. a not too far fetched scenario would be:

  1. xfeature X is supported on Host A and exposed to a guest
  2. Host B is upgraded to a new kernel that has a bug that causes the kernel to
     disable support for X, even though X is supported in hardware
  3. The guest is live migrated from Host A to Host B

At step #3, what will currently happen is that KVM_SET_XSAVE will fail with
-EINVAL because userspace is attempting to load data that Host B is incapable of
loading.  The change you're suggesting would result in KVM dropping the data for X
and letting KVM_SET_XSAVE succeed, *for an xfeature that is exposed to the guest*.
I.e. for all intents and purposes, KVM would deliberately corrupt guest data.

> Is there an RFC or document which captures expectations of this form?

Not AFAIK. :-/
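[To make the contrast above concrete, here is an illustrative-only sketch, not
KVM's actual implementation, of the two behaviors.  hdr_xfeatures stands in for
the XSTATE_BV-style bitmap from the userspace buffer and supported_xfeatures for
what this host/kernel can load; both names are hypothetical.]

#include <errno.h>
#include <stdint.h>

/* Current behavior: reject the load so userspace knows data would be lost. */
int load_xsave_strict(uint64_t hdr_xfeatures, uint64_t supported_xfeatures)
{
	if (hdr_xfeatures & ~supported_xfeatures)
		return -EINVAL;
	/* ... copy the per-xfeature data into the guest FPU state ... */
	return 0;
}

/*
 * Proposed-but-rejected behavior: silently mask off unsupported xfeatures.
 * If one of the dropped xfeatures is actually exposed to the guest (the
 * Host A -> buggy Host B scenario above), guest state is corrupted without
 * any indication to userspace.
 */
int load_xsave_lossy(uint64_t hdr_xfeatures, uint64_t supported_xfeatures)
{
	hdr_xfeatures &= supported_xfeatures;
	/* ... copy only the surviving xfeatures' data ... */
	return 0;
}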
On Wed, Oct 04, 2023, Tyler Stachecki wrote:
> On Wed, Oct 04, 2023 at 07:51:17AM -0700, Sean Christopherson wrote:
> > It's not about removing features.  The change you're asking for is to have KVM
> > *silently* drop data.  Aside from the fact that such a change would break KVM's
> > ABI, silently ignoring data that userspace has explicitly requested be loaded
> > for a vCPU is incredibly dangerous.
>
> Sorry if it came off that way

No need to apologise, you got bit by a nasty kernel bug and are trying to find a
solution.  There's nothing wrong with that.

> I fully understand and am resigned to the "you break it, you keep both halves"
> nature of what I had initially proposed and that it is not a generally tractable
> solution.

Yeah, the crux of the matter is that we have no control over, or even knowledge of,
who all is using KVM, with what userspace VMM, on what hardware, etc.

E.g. if this bug were affecting our fleet and for some reason we couldn't address
the problem in userspace, carrying a hack in KVM in our internal kernel would
probably be a viable option because we can do a proper risk assessment.  E.g. we
know and control exactly what userspace we're running, the underlying hardware in
affected pools, what features are exposed to the guest, etc.  And we could revert
the hack once all affected VMs had been sanitized.