Message ID: 20250303133011.44095-1-kalyazin@amazon.com
Series: KVM: guest_memfd: support for uffd missing
On Wed, Mar 05, 2025 at 11:35:27AM -0800, James Houghton wrote:
> I think it might be useful to implement an fs-generic MINOR mode. The
> fault handler is already easy enough to do generically (though it
> would become more difficult to determine if the "MINOR" fault is
> actually a MISSING fault, but at least for my userspace, the
> distinction isn't important. :)) So the question becomes: what should
> UFFDIO_CONTINUE look like?
>
> And I think it would be nice if UFFDIO_CONTINUE just called
> vm_ops->fault() to get the page we want to map and then mapped it,
> instead of having shmem-specific and hugetlb-specific versions (though
> maybe we need to keep the hugetlb specialization...). That would avoid
> putting kvm/gmem/etc. symbols in mm/userfaultfd code.
>
> I've actually wanted to do this for a while but haven't had a good
> reason to pursue it. I wonder if it can be done in a
> backwards-compatible fashion...

Yes I also thought about that. :)

When Axel added minor fault, it's not a major concern as it's the only fs
that will consume the feature anyway in the do_fault() path - hugetlbfs has
its own path to take care of.. even until now.

And there's some valid points too if someone would argue to put it there
especially on folio lock - do that in shmem.c can avoid taking folio lock
when generating minor fault message. It might make some difference when
the faults are heavy and when folio lock is frequently taken elsewhere too.

It might boil down to how many more FSes would support minor fault, and
whether we would care about such difference at last to shmem users. If gmem
is the only one after existing ones, IIUC there's still option we implement
it in gmem code. After all, I expect the change should be very under
control (<20 LOCs?)..
On 05/03/2025 20:29, Peter Xu wrote:
> On Wed, Mar 05, 2025 at 11:35:27AM -0800, James Houghton wrote:
>> I think it might be useful to implement an fs-generic MINOR mode. The
>> fault handler is already easy enough to do generically (though it
>> would become more difficult to determine if the "MINOR" fault is
>> actually a MISSING fault, but at least for my userspace, the
>> distinction isn't important. :)) So the question becomes: what should
>> UFFDIO_CONTINUE look like?
>>
>> And I think it would be nice if UFFDIO_CONTINUE just called
>> vm_ops->fault() to get the page we want to map and then mapped it,
>> instead of having shmem-specific and hugetlb-specific versions (though
>> maybe we need to keep the hugetlb specialization...). That would avoid
>> putting kvm/gmem/etc. symbols in mm/userfaultfd code.
>>
>> I've actually wanted to do this for a while but haven't had a good
>> reason to pursue it. I wonder if it can be done in a
>> backwards-compatible fashion...
>
> Yes I also thought about that. :)

Hi Peter, hi James. Thanks for pointing at the race condition!

I did some experimentation and it indeed looks possible to call
vm_ops->fault() from userfault_continue() to make it generic and decouple
from KVM, at least for non-hugetlb cases. One thing is we'd need to prevent
a recursive handle_userfault() invocation, which I believe can be solved by
adding a new VMF flag to ignore the userfault path when the fault handler
is called from userfault_continue(). I'm open to a more elegant solution
though.

Regarding usage of the MINOR notification, in what case do you recommend
sending it? If following the logic implemented in shmem and hugetlb, ie if
the page is _present_ in the pagecache, I can't see how it is going to work
with the write syscall, as we'd like to know when the page is _missing_ in
order to respond with the population via the write.

If going against shmem/hugetlb logic, and sending the MINOR event when the
page is missing from the pagecache, how would it solve the race condition
problem? Also, where would the check for the folio_test_uptodate()
mentioned by James fit into here? Would it only be used for fortifying the
MINOR (present) against the race?

> When Axel added minor fault, it's not a major concern as it's the only fs
> that will consume the feature anyway in the do_fault() path - hugetlbfs has
> its own path to take care of.. even until now.
>
> And there's some valid points too if someone would argue to put it there
> especially on folio lock - do that in shmem.c can avoid taking folio lock
> when generating minor fault message. It might make some difference when
> the faults are heavy and when folio lock is frequently taken elsewhere too.

Peter, could you expand on this? Are you referring to the following
(shmem_get_folio_gfp)?

    if (folio) {
        folio_lock(folio);

        /* Has the folio been truncated or swapped out? */
        if (unlikely(folio->mapping != inode->i_mapping)) {
            folio_unlock(folio);
            folio_put(folio);
            goto repeat;
        }
        if (sgp == SGP_WRITE)
            folio_mark_accessed(folio);
        if (folio_test_uptodate(folio))
            goto out;
        /* fallocated folio */
        if (sgp != SGP_READ)
            goto clear;
        folio_unlock(folio);
        folio_put(folio);
    }

Could you explain in what case the lock can be avoided? AFAIC, the function
is called by both the shmem fault handler and userfault_continue().

> It might boil down to how many more FSes would support minor fault, and
> whether we would care about such difference at last to shmem users. If gmem
> is the only one after existing ones, IIUC there's still option we implement
> it in gmem code. After all, I expect the change should be very under
> control (<20 LOCs?)..
>
> --
> Peter Xu
>
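The recursion-guard idea above might look roughly like this inside a
filesystem's fault handler. This is a sketch only: FAULT_FLAG_USERFAULT_CONTINUE,
example_fault() and example_get_folio() are hypothetical names, not upstream code.

```c
/*
 * Sketch of the proposed VMF flag (all names here are hypothetical).
 * The handler only enters the userfault path for a "real" fault; when
 * it is re-entered from the UFFDIO_CONTINUE ioctl, the flag makes it
 * skip handle_userfault() and just return the page, avoiding recursion.
 */
static vm_fault_t example_fault(struct vm_fault *vmf)
{
	/* fs-specific page cache lookup, returns a locked folio */
	struct folio *folio = example_get_folio(vmf);

	if (folio && userfaultfd_minor(vmf->vma) &&
	    !(vmf->flags & FAULT_FLAG_USERFAULT_CONTINUE)) {
		folio_unlock(folio);
		folio_put(folio);
		return handle_userfault(vmf, VM_UFFD_MINOR);
	}

	/* Normal path: hand the locked page back to the fault core. */
	vmf->page = folio_file_page(folio, vmf->pgoff);
	return VM_FAULT_LOCKED;
}
```

Missing-folio handling is elided; the point is only where the flag check
sits relative to handle_userfault().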
On 10/03/2025 19:57, Peter Xu wrote:
> On Mon, Mar 10, 2025 at 06:12:22PM +0000, Nikita Kalyazin wrote:
>>
>> On 05/03/2025 20:29, Peter Xu wrote:
>>> On Wed, Mar 05, 2025 at 11:35:27AM -0800, James Houghton wrote:
>>>> I think it might be useful to implement an fs-generic MINOR mode. The
>>>> fault handler is already easy enough to do generically (though it
>>>> would become more difficult to determine if the "MINOR" fault is
>>>> actually a MISSING fault, but at least for my userspace, the
>>>> distinction isn't important. :)) So the question becomes: what should
>>>> UFFDIO_CONTINUE look like?
>>>>
>>>> And I think it would be nice if UFFDIO_CONTINUE just called
>>>> vm_ops->fault() to get the page we want to map and then mapped it,
>>>> instead of having shmem-specific and hugetlb-specific versions (though
>>>> maybe we need to keep the hugetlb specialization...). That would avoid
>>>> putting kvm/gmem/etc. symbols in mm/userfaultfd code.
>>>>
>>>> I've actually wanted to do this for a while but haven't had a good
>>>> reason to pursue it. I wonder if it can be done in a
>>>> backwards-compatible fashion...
>>>
>>> Yes I also thought about that. :)
>>
>> Hi Peter, hi James. Thanks for pointing at the race condition!
>>
>> I did some experimentation and it indeed looks possible to call
>> vm_ops->fault() from userfault_continue() to make it generic and decouple
>> from KVM, at least for non-hugetlb cases. One thing is we'd need to prevent
>> a recursive handle_userfault() invocation, which I believe can be solved by
>> adding a new VMF flag to ignore the userfault path when the fault handler is
>> called from userfault_continue(). I'm open to a more elegant solution
>> though.
>
> It sounds working to me. Adding fault flag can also be seen as part of
> extension of vm_operations_struct ops. So we could consider reusing
> fault() API indeed.

Great!

>> Regarding usage of the MINOR notification, in what case do you recommend
>> sending it? If following the logic implemented in shmem and hugetlb, ie if
>> the page is _present_ in the pagecache, I can't see how it is going to work
>
> It could be confusing when reading that chunk of code, because it looks
> like it notifies minor fault when cache hit. But the critical part here is
> that we rely on the pgtable missing causing the fault() to trigger first.
> So it's more like "cache hit && pgtable missing" for minor fault.

Right, but the cache hit still looks like a precondition for the minor
fault event?

>> with the write syscall, as we'd like to know when the page is _missing_ in
>> order to respond with the population via the write. If going against
>> shmem/hugetlb logic, and sending the MINOR event when the page is missing
>> from the pagecache, how would it solve the race condition problem?
>
> Should be easier we stick with mmap() rather than write(). E.g. for shmem
> case of current code base:
>
>     if (folio && vma && userfaultfd_minor(vma)) {
>         if (!xa_is_value(folio))
>             folio_put(folio);
>         *fault_type = handle_userfault(vmf, VM_UFFD_MINOR);
>         return 0;
>     }
>
> vma is only availble if vmf!=NULL, aka in fault context. With that, in
> write() to shmem inodes, nothing will generate a message, because minor
> fault so far is only about pgtable missing. It needs to be mmap()ed first,
> and has nothing yet to do with write() syscalls.

Yes, that's true that write() itself isn't going to generate a message. My
idea was to _respond_ to a message generated by the fault handler (vmf !=
NULL) with a write(). I didn't mean to generate it from write().

What I wanted to achieve was send a message on fault + cache miss and
respond to the message with a write() to fill the cache followed by a
UFFDIO_CONTINUE to set up pagetables. I understand that a MINOR trap (MINOR
+ UFFDIO_CONTINUE) is preferable, but how does it fit into this model?
What/how will guarantee a cache hit that would trigger the MINOR message?

To clarify, I would like to be able to populate pages _on-demand_, not only
proactively (like in the original UFFDIO_CONTINUE cover letter [1]). Do you
think the MINOR trap could still be applicable or would it necessarily
require the MISSING trap?

[1] https://lore.kernel.org/linux-fsdevel/20210301222728.176417-1-axelrasmussen@google.com/T/

>> Also, where would the check for the folio_test_uptodate() mentioned by James
>> fit into here? Would it only be used for fortifying the MINOR (present)
>> against the race?
>>
>>> When Axel added minor fault, it's not a major concern as it's the only fs
>>> that will consume the feature anyway in the do_fault() path - hugetlbfs has
>>> its own path to take care of.. even until now.
>>>
>>> And there's some valid points too if someone would argue to put it there
>>> especially on folio lock - do that in shmem.c can avoid taking folio lock
>>> when generating minor fault message. It might make some difference when
>>> the faults are heavy and when folio lock is frequently taken elsewhere too.
>>
>> Peter, could you expand on this? Are you referring to the following
>> (shmem_get_folio_gfp)?
>>
>>     if (folio) {
>>         folio_lock(folio);
>>
>>         /* Has the folio been truncated or swapped out? */
>>         if (unlikely(folio->mapping != inode->i_mapping)) {
>>             folio_unlock(folio);
>>             folio_put(folio);
>>             goto repeat;
>>         }
>>         if (sgp == SGP_WRITE)
>>             folio_mark_accessed(folio);
>>         if (folio_test_uptodate(folio))
>>             goto out;
>>         /* fallocated folio */
>>         if (sgp != SGP_READ)
>>             goto clear;
>>         folio_unlock(folio);
>>         folio_put(folio);
>>     }
>>
>> Could you explain in what case the lock can be avoided? AFAIC, the function
>> is called by both the shmem fault handler and userfault_continue().
>
> I think you meant the UFFDIO_CONTINUE side of things. I agree with you, we
> always need the folio lock.
>
> What I was saying is the trapping side, where the minor fault message can
> be generated without the folio lock now in case of shmem. It's about
> whether we could generalize the trapping side, so handle_mm_fault() can
> generate the minor fault message instead of by shmem.c.
>
> If the only concern is "referring to a module symbol from core mm", then
> indeed the trapping side should be less of a concern anyway, because the
> trapping side (when in the module codes) should always be able to reference
> mm functions.
>
> Actually.. if we have a fault() flag introduced above, maybe we can
> generalize the trap side altogether without the folio lock overhead. When
> the flag set, if we can always return the folio unlocked (as long as
> refcount held), then in UFFDIO_CONTINUE ioctl we can lock it.

Where does this locking happen exactly during trapping? I was thinking it
was only done when the page was allocated. The trapping part (quoted by you
above) only looks up the page in the cache and calls handle_userfault(). Am
I missing something?

>>> It might boil down to how many more FSes would support minor fault, and
>>> whether we would care about such difference at last to shmem users. If gmem
>>> is the only one after existing ones, IIUC there's still option we implement
>>> it in gmem code. After all, I expect the change should be very under
>>> control (<20 LOCs?)..
>>>
>>> --
>>> Peter Xu
>
> --
> Peter Xu
>
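The unlocked-return idea could shape a generic UFFDIO_CONTINUE along these
lines. This is a sketch under assumptions: the flag name is made up, and the
mfill_atomic_install_pte() call mirrors what the existing shmem CONTINUE
path does in mm/userfaultfd.c rather than a settled API.

```c
/*
 * Sketch: fault() is asked, via a hypothetical flag, to return the
 * folio unlocked but referenced, so the trap side stays lock-free and
 * the UFFDIO_CONTINUE ioctl takes the folio lock itself.
 */
static int mfill_continue_via_fault(struct vm_area_struct *dst_vma,
				    pmd_t *dst_pmd, unsigned long dst_addr,
				    uffd_flags_t flags)
{
	struct vm_fault vmf = {
		.vma   = dst_vma,
		.pgoff = linear_page_index(dst_vma, dst_addr),
		.flags = FAULT_FLAG_USERFAULT_CONTINUE,	/* hypothetical */
	};
	struct folio *folio;
	int ret;

	/* Generic page lookup: no shmem/hugetlb/gmem symbols needed here. */
	if (dst_vma->vm_ops->fault(&vmf) & VM_FAULT_ERROR)
		return -EFAULT;

	folio = page_folio(vmf.page);
	folio_lock(folio);	/* lock deferred from fault() to here */

	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
				       vmf.page, false, flags);

	folio_unlock(folio);
	if (ret)
		folio_put(folio);
	return ret;
}
```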
On Tue, Mar 11, 2025 at 04:56:47PM +0000, Nikita Kalyazin wrote: > > > On 10/03/2025 19:57, Peter Xu wrote: > > On Mon, Mar 10, 2025 at 06:12:22PM +0000, Nikita Kalyazin wrote: > > > > > > > > > On 05/03/2025 20:29, Peter Xu wrote: > > > > On Wed, Mar 05, 2025 at 11:35:27AM -0800, James Houghton wrote: > > > > > I think it might be useful to implement an fs-generic MINOR mode. The > > > > > fault handler is already easy enough to do generically (though it > > > > > would become more difficult to determine if the "MINOR" fault is > > > > > actually a MISSING fault, but at least for my userspace, the > > > > > distinction isn't important. :)) So the question becomes: what should > > > > > UFFDIO_CONTINUE look like? > > > > > > > > > > And I think it would be nice if UFFDIO_CONTINUE just called > > > > > vm_ops->fault() to get the page we want to map and then mapped it, > > > > > instead of having shmem-specific and hugetlb-specific versions (though > > > > > maybe we need to keep the hugetlb specialization...). That would avoid > > > > > putting kvm/gmem/etc. symbols in mm/userfaultfd code. > > > > > > > > > > I've actually wanted to do this for a while but haven't had a good > > > > > reason to pursue it. I wonder if it can be done in a > > > > > backwards-compatible fashion... > > > > > > > > Yes I also thought about that. :) > > > > > > Hi Peter, hi James. Thanks for pointing at the race condition! > > > > > > I did some experimentation and it indeed looks possible to call > > > vm_ops->fault() from userfault_continue() to make it generic and decouple > > > from KVM, at least for non-hugetlb cases. One thing is we'd need to prevent > > > a recursive handle_userfault() invocation, which I believe can be solved by > > > adding a new VMF flag to ignore the userfault path when the fault handler is > > > called from userfault_continue(). I'm open to a more elegant solution > > > though. > > > > It sounds working to me. 
Adding fault flag can also be seen as part of > > extension of vm_operations_struct ops. So we could consider reusing > > fault() API indeed. > > Great! > > > > > > > Regarding usage of the MINOR notification, in what case do you recommend > > > sending it? If following the logic implemented in shmem and hugetlb, ie if > > > the page is _present_ in the pagecache, I can't see how it is going to work > > > > It could be confusing when reading that chunk of code, because it looks > > like it notifies minor fault when cache hit. But the critical part here is > > that we rely on the pgtable missing causing the fault() to trigger first. > > So it's more like "cache hit && pgtable missing" for minor fault. > > Right, but the cache hit still looks like a precondition for the minor fault > event? Yes. > > > > with the write syscall, as we'd like to know when the page is _missing_ in > > > order to respond with the population via the write. If going against > > > shmem/hugetlb logic, and sending the MINOR event when the page is missing > > > from the pagecache, how would it solve the race condition problem? > > > > Should be easier we stick with mmap() rather than write(). E.g. for shmem > > case of current code base: > > > > if (folio && vma && userfaultfd_minor(vma)) { > > if (!xa_is_value(folio)) > > folio_put(folio); > > *fault_type = handle_userfault(vmf, VM_UFFD_MINOR); > > return 0; > > } > > > > vma is only availble if vmf!=NULL, aka in fault context. With that, in > > write() to shmem inodes, nothing will generate a message, because minor > > fault so far is only about pgtable missing. It needs to be mmap()ed first, > > and has nothing yet to do with write() syscalls. > > Yes, that's true that write() itself isn't going to generate a message. My > idea was to _respond_ to a message generated by the fault handler (vmf != > NULL) with a write(). I didn't mean to generate it from write(). 
> > What I wanted to achieve was send a message on fault + cache miss and > respond to the message with a write() to fill the cache followed by a > UFFDIO_CONTINUE to set up pagetables. I understand that a MINOR trap (MINOR > + UFFDIO_CONTINUE) is preferable, but how does it fit into this model? > What/how will guarantee a cache hit that would trigger the MINOR message? > > To clarify, I would like to be able to populate pages _on-demand_, not only > proactively (like in the original UFFDIO_CONTINUE cover letter [1]). Do you > think the MINOR trap could still be applicable or would it necessarily > require the MISSING trap? I think MINOR can also achieve similar things. MINOR traps the pgtable missing event (let's imagine page cache is already populated, or at least when MISSING mode not registered, it'll auto-populate on 1st access). So as long as the content can only be accessed from the pgtable (either via mmap() or GUP on top of it), then afaiu it could work similarly like MISSING faults, because anything trying to access it will be trapped. Said that, we can also choose to implement MISSING first. In that case write() is definitely not enough, because MISSING is at least so far based on top of whether the page cache present, and write() won't be atomic on update a page. We need to implement UFFDIO_COPY for gmemfd MISSING. Either way looks ok to me. > > [1] https://lore.kernel.org/linux-fsdevel/20210301222728.176417-1-axelrasmussen@google.com/T/ > > > > > > > Also, where would the check for the folio_test_uptodate() mentioned by James > > > fit into here? Would it only be used for fortifying the MINOR (present) > > > against the race? > > > > > > > When Axel added minor fault, it's not a major concern as it's the only fs > > > > that will consume the feature anyway in the do_fault() path - hugetlbfs has > > > > its own path to take care of.. even until now. 
> > > > > > > > And there's some valid points too if someone would argue to put it there > > > > especially on folio lock - do that in shmem.c can avoid taking folio lock > > > > when generating minor fault message. It might make some difference when > > > > the faults are heavy and when folio lock is frequently taken elsewhere too. > > > > > > Peter, could you expand on this? Are you referring to the following > > > (shmem_get_folio_gfp)? > > > > > > if (folio) { > > > folio_lock(folio); > > > > > > /* Has the folio been truncated or swapped out? */ > > > if (unlikely(folio->mapping != inode->i_mapping)) { > > > folio_unlock(folio); > > > folio_put(folio); > > > goto repeat; > > > } > > > if (sgp == SGP_WRITE) > > > folio_mark_accessed(folio); > > > if (folio_test_uptodate(folio)) > > > goto out; > > > /* fallocated folio */ > > > if (sgp != SGP_READ) > > > goto clear; > > > folio_unlock(folio); > > > folio_put(folio); > > > } [1] > > > > > > Could you explain in what case the lock can be avoided? AFAIC, the function > > > is called by both the shmem fault handler and userfault_continue(). > > > > I think you meant the UFFDIO_CONTINUE side of things. I agree with you, we > > always need the folio lock. > > > > What I was saying is the trapping side, where the minor fault message can > > be generated without the folio lock now in case of shmem. It's about > > whether we could generalize the trapping side, so handle_mm_fault() can > > generate the minor fault message instead of by shmem.c. > > > > If the only concern is "referring to a module symbol from core mm", then > > indeed the trapping side should be less of a concern anyway, because the > > trapping side (when in the module codes) should always be able to reference > > mm functions. > > > > Actually.. if we have a fault() flag introduced above, maybe we can > > generalize the trap side altogether without the folio lock overhead. 
When > > the flag set, if we can always return the folio unlocked (as long as > > refcount held), then in UFFDIO_CONTINUE ioctl we can lock it. > > Where does this locking happen exactly during trapping? I was thinking it > was only done when the page was allocated. The trapping part (quoted by you > above) only looks up the page in the cache and calls handle_userfault(). Am > I missing something? That's only what I worry if we want to reuse fault() to generalize the trap code in core mm, because fault() by default takes the folio lock at least for shmem. I agree the folio doesn't need locking when trapping the fault and sending the message. Thanks, > > > > > > > > It might boil down to how many more FSes would support minor fault, and > > > > whether we would care about such difference at last to shmem users. If gmem > > > > is the only one after existing ones, IIUC there's still option we implement > > > > it in gmem code. After all, I expect the change should be very under > > > > control (<20 LOCs?).. > > > > > > > > -- > > > > Peter Xu > > > > > > > > > > > -- > > Peter Xu > > >
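The trapping setup discussed here (MINOR armed on an mmap()ed range, so
write() generates nothing) corresponds to the standard userfaultfd
registration flow. Whether a guest_memfd VMA accepts this registration is
exactly the open question of the thread, so gmem_fd below is an assumption;
the ioctls and structures themselves are existing UAPI.

```c
/* Standard MINOR-mode registration (see ioctl_userfaultfd(2)).
 * gmem_fd standing in for a guest_memfd file descriptor is an
 * assumption; error handling elided for brevity. */
int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);

struct uffdio_api api = { .api = UFFD_API };
ioctl(uffd, UFFDIO_API, &api);

char *guest = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_SHARED, gmem_fd, 0);

struct uffdio_register reg = {
	.range = { .start = (unsigned long)guest, .len = len },
	.mode  = UFFDIO_REGISTER_MODE_MINOR,
};
ioctl(uffd, UFFDIO_REGISTER, &reg);

/* From here on, a pgtable-missing access to `guest` raises a MINOR
 * event (given the page cache is populated) instead of being mapped
 * directly; a write() on the fd never generates a message. */
```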
On 12/03/2025 15:45, Peter Xu wrote: > On Tue, Mar 11, 2025 at 04:56:47PM +0000, Nikita Kalyazin wrote: >> >> >> On 10/03/2025 19:57, Peter Xu wrote: >>> On Mon, Mar 10, 2025 at 06:12:22PM +0000, Nikita Kalyazin wrote: >>>> >>>> >>>> On 05/03/2025 20:29, Peter Xu wrote: >>>>> On Wed, Mar 05, 2025 at 11:35:27AM -0800, James Houghton wrote: >>>>>> I think it might be useful to implement an fs-generic MINOR mode. The >>>>>> fault handler is already easy enough to do generically (though it >>>>>> would become more difficult to determine if the "MINOR" fault is >>>>>> actually a MISSING fault, but at least for my userspace, the >>>>>> distinction isn't important. :)) So the question becomes: what should >>>>>> UFFDIO_CONTINUE look like? >>>>>> >>>>>> And I think it would be nice if UFFDIO_CONTINUE just called >>>>>> vm_ops->fault() to get the page we want to map and then mapped it, >>>>>> instead of having shmem-specific and hugetlb-specific versions (though >>>>>> maybe we need to keep the hugetlb specialization...). That would avoid >>>>>> putting kvm/gmem/etc. symbols in mm/userfaultfd code. >>>>>> >>>>>> I've actually wanted to do this for a while but haven't had a good >>>>>> reason to pursue it. I wonder if it can be done in a >>>>>> backwards-compatible fashion... >>>>> >>>>> Yes I also thought about that. :) >>>> >>>> Hi Peter, hi James. Thanks for pointing at the race condition! >>>> >>>> I did some experimentation and it indeed looks possible to call >>>> vm_ops->fault() from userfault_continue() to make it generic and decouple >>>> from KVM, at least for non-hugetlb cases. One thing is we'd need to prevent >>>> a recursive handle_userfault() invocation, which I believe can be solved by >>>> adding a new VMF flag to ignore the userfault path when the fault handler is >>>> called from userfault_continue(). I'm open to a more elegant solution >>>> though. >>> >>> It sounds working to me. 
Adding fault flag can also be seen as part of >>> extension of vm_operations_struct ops. So we could consider reusing >>> fault() API indeed. >> >> Great! >> >>>> >>>> Regarding usage of the MINOR notification, in what case do you recommend >>>> sending it? If following the logic implemented in shmem and hugetlb, ie if >>>> the page is _present_ in the pagecache, I can't see how it is going to work >>> >>> It could be confusing when reading that chunk of code, because it looks >>> like it notifies minor fault when cache hit. But the critical part here is >>> that we rely on the pgtable missing causing the fault() to trigger first. >>> So it's more like "cache hit && pgtable missing" for minor fault. >> >> Right, but the cache hit still looks like a precondition for the minor fault >> event? > > Yes. > >> >>>> with the write syscall, as we'd like to know when the page is _missing_ in >>>> order to respond with the population via the write. If going against >>>> shmem/hugetlb logic, and sending the MINOR event when the page is missing >>>> from the pagecache, how would it solve the race condition problem? >>> >>> Should be easier we stick with mmap() rather than write(). E.g. for shmem >>> case of current code base: >>> >>> if (folio && vma && userfaultfd_minor(vma)) { >>> if (!xa_is_value(folio)) >>> folio_put(folio); >>> *fault_type = handle_userfault(vmf, VM_UFFD_MINOR); >>> return 0; >>> } >>> >>> vma is only availble if vmf!=NULL, aka in fault context. With that, in >>> write() to shmem inodes, nothing will generate a message, because minor >>> fault so far is only about pgtable missing. It needs to be mmap()ed first, >>> and has nothing yet to do with write() syscalls. >> >> Yes, that's true that write() itself isn't going to generate a message. My >> idea was to _respond_ to a message generated by the fault handler (vmf != >> NULL) with a write(). I didn't mean to generate it from write(). 
>> >> What I wanted to achieve was send a message on fault + cache miss and >> respond to the message with a write() to fill the cache followed by a >> UFFDIO_CONTINUE to set up pagetables. I understand that a MINOR trap (MINOR >> + UFFDIO_CONTINUE) is preferable, but how does it fit into this model? >> What/how will guarantee a cache hit that would trigger the MINOR message? >> >> To clarify, I would like to be able to populate pages _on-demand_, not only >> proactively (like in the original UFFDIO_CONTINUE cover letter [1]). Do you >> think the MINOR trap could still be applicable or would it necessarily >> require the MISSING trap? > > I think MINOR can also achieve similar things. MINOR traps the pgtable > missing event (let's imagine page cache is already populated, or at least > when MISSING mode not registered, it'll auto-populate on 1st access). So However if MISSING is not registered, the kernel will auto-populate with a clear page, ie there is no way to inject custom content from userspace. To explain my use case a bit more, the population thread will be trying to copy all guest memory proactively, but there will inevitably be cases where a page is accessed through pgtables _before_ it gets populated. It is not desirable for such access to result in a clear page provided by the kernel. > as long as the content can only be accessed from the pgtable (either via > mmap() or GUP on top of it), then afaiu it could work similarly like > MISSING faults, because anything trying to access it will be trapped. > > Said that, we can also choose to implement MISSING first. In that case > write() is definitely not enough, because MISSING is at least so far based > on top of whether the page cache present, and write() won't be atomic on > update a page. We need to implement UFFDIO_COPY for gmemfd MISSING. > > Either way looks ok to me. Yes, I understand that write() doesn't provide an atomic way of alloc + add + install PTE. 
Supporting UFFDIO_COPY is much more involved as it currently provides implementations specific to anonymous and shared memory, and adding guest_memfd to it brings the problem of the dependency on KVM back. I suppose it's possible to abstract those by introducing extra callbacks in vm_ops somehow and make the code generic, but it would be a significant change. If this is the only right way to address my use case, I will work on it. >> >> [1] https://lore.kernel.org/linux-fsdevel/20210301222728.176417-1-axelrasmussen@google.com/T/ >> >>>> >>>> Also, where would the check for the folio_test_uptodate() mentioned by James >>>> fit into here? Would it only be used for fortifying the MINOR (present) >>>> against the race? >>>> >>>>> When Axel added minor fault, it's not a major concern as it's the only fs >>>>> that will consume the feature anyway in the do_fault() path - hugetlbfs has >>>>> its own path to take care of.. even until now. >>>>> >>>>> And there's some valid points too if someone would argue to put it there >>>>> especially on folio lock - do that in shmem.c can avoid taking folio lock >>>>> when generating minor fault message. It might make some difference when >>>>> the faults are heavy and when folio lock is frequently taken elsewhere too. >>>> >>>> Peter, could you expand on this? Are you referring to the following >>>> (shmem_get_folio_gfp)? >>>> >>>> if (folio) { >>>> folio_lock(folio); >>>> >>>> /* Has the folio been truncated or swapped out? */ >>>> if (unlikely(folio->mapping != inode->i_mapping)) { >>>> folio_unlock(folio); >>>> folio_put(folio); >>>> goto repeat; >>>> } >>>> if (sgp == SGP_WRITE) >>>> folio_mark_accessed(folio); >>>> if (folio_test_uptodate(folio)) >>>> goto out; >>>> /* fallocated folio */ >>>> if (sgp != SGP_READ) >>>> goto clear; >>>> folio_unlock(folio); >>>> folio_put(folio); >>>> } > > [1] > >>>> >>>> Could you explain in what case the lock can be avoided? 
AFAIC, the function >>>> is called by both the shmem fault handler and userfault_continue(). >>> >>> I think you meant the UFFDIO_CONTINUE side of things. I agree with you, we >>> always need the folio lock. >>> >>> What I was saying is the trapping side, where the minor fault message can >>> be generated without the folio lock now in case of shmem. It's about >>> whether we could generalize the trapping side, so handle_mm_fault() can >>> generate the minor fault message instead of by shmem.c. >>> >>> If the only concern is "referring to a module symbol from core mm", then >>> indeed the trapping side should be less of a concern anyway, because the >>> trapping side (when in the module codes) should always be able to reference >>> mm functions. >>> >>> Actually.. if we have a fault() flag introduced above, maybe we can >>> generalize the trap side altogether without the folio lock overhead. When >>> the flag set, if we can always return the folio unlocked (as long as >>> refcount held), then in UFFDIO_CONTINUE ioctl we can lock it. >> >> Where does this locking happen exactly during trapping? I was thinking it >> was only done when the page was allocated. The trapping part (quoted by you >> above) only looks up the page in the cache and calls handle_userfault(). Am >> I missing something? > > That's only what I worry if we want to reuse fault() to generalize the trap > code in core mm, because fault() by default takes the folio lock at least > for shmem. I agree the folio doesn't need locking when trapping the fault > and sending the message. Ok, I think I understand what you mean now. Thanks for explaining that. > > Thanks, > >> >>>> >>>>> It might boil down to how many more FSes would support minor fault, and >>>>> whether we would care about such difference at last to shmem users. If gmem >>>>> is the only one after existing ones, IIUC there's still option we implement >>>>> it in gmem code. 
After all, I expect the change should be very under >>>>> control (<20 LOCs?).. >>>>> >>>>> -- >>>>> Peter Xu >>>>> >>>> >>> >>> -- >>> Peter Xu >>> >> > > -- > Peter Xu >
On 12/03/2025 19:32, Peter Xu wrote: > On Wed, Mar 12, 2025 at 05:07:25PM +0000, Nikita Kalyazin wrote: >> However if MISSING is not registered, the kernel will auto-populate with a >> clear page, ie there is no way to inject custom content from userspace. To >> explain my use case a bit more, the population thread will be trying to copy >> all guest memory proactively, but there will inevitably be cases where a >> page is accessed through pgtables _before_ it gets populated. It is not >> desirable for such access to result in a clear page provided by the kernel. > > IMHO populating with a zero page in the page cache is fine. It needs to > make sure all accesses will go via the pgtable, as discussed below in my > previous email [1], then nobody will be able to see the zero page, not > until someone updates the content then follow up with a CONTINUE to install > the pgtable entry. > > If there is any way that the page can be accessed without the pgtable > installation, minor faults won't work indeed. I think I see what you mean now. I agree, it isn't the end of the world if the kernel clears the page and then userspace overwrites it. The way I see it is: @@ -400,20 +401,26 @@ static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf) if (WARN_ON_ONCE(folio_test_large(folio))) { ret = VM_FAULT_SIGBUS; goto out_folio; } if (!folio_test_uptodate(folio)) { clear_highpage(folio_page(folio, 0)); kvm_gmem_mark_prepared(folio); } + if (userfaultfd_minor(vmf->vma)) { + folio_unlock(folio); + filemap_invalidate_unlock_shared(inode->i_mapping); + return handle_userfault(vmf, VM_UFFD_MISSING); + } + vmf->page = folio_file_page(folio, vmf->pgoff); out_folio: if (ret != VM_FAULT_LOCKED) { folio_unlock(folio); folio_put(folio); } On the first fault (cache miss), the kernel will allocate/add/clear the page (as there is no MISSING trap now), and once the page is in the cache, a MINOR event will be sent for userspace to copy its content. 
Please let me know if this is an acceptable semantics.

Since userspace is getting notified after KVM calls
kvm_gmem_mark_prepared(), which removes the page from the direct map [1],
userspace can't use write() to populate the content because write() relies
on direct map [2]. However userspace can do a plain memcpy that would use
user pagetables instead. This forces userspace to respond to stage-2 and
VMA faults in guest_memfd differently, via write() and memcpy respectively.
It doesn't seem like a significant problem though.

I believe, with this approach the original race condition is gone because
UFFD messages are only sent on cache hit and it is up to userspace to
serialise writes. Please correct me if I'm wrong here.

[1] https://lore.kernel.org/kvm/20250221160728.1584559-1-roypat@amazon.co.uk/T/#mdf41fe2dc33332e9c500febd47e14ae91ad99724
[2] https://lore.kernel.org/kvm/20241129123929.64790-1-kalyazin@amazon.com/T/#mf5d794aa31d753cbc73e193628f31e418051983d

>>
>>> as long as the content can only be accessed from the pgtable (either via
>>> mmap() or GUP on top of it), then afaiu it could work similarly like
>>> MISSING faults, because anything trying to access it will be trapped.
>
> [1]
>
> --
> Peter Xu
On Thu, Mar 13, 2025 at 03:25:16PM +0000, Nikita Kalyazin wrote:
>
>
> On 12/03/2025 19:32, Peter Xu wrote:
> > On Wed, Mar 12, 2025 at 05:07:25PM +0000, Nikita Kalyazin wrote:
> > > However if MISSING is not registered, the kernel will auto-populate with a
> > > clear page, ie there is no way to inject custom content from userspace. To
> > > explain my use case a bit more, the population thread will be trying to copy
> > > all guest memory proactively, but there will inevitably be cases where a
> > > page is accessed through pgtables _before_ it gets populated. It is not
> > > desirable for such access to result in a clear page provided by the kernel.
> >
> > IMHO populating with a zero page in the page cache is fine. It needs to
> > make sure all accesses will go via the pgtable, as discussed below in my
> > previous email [1], then nobody will be able to see the zero page, not
> > until someone updates the content then follow up with a CONTINUE to install
> > the pgtable entry.
> >
> > If there is any way that the page can be accessed without the pgtable
> > installation, minor faults won't work indeed.
>
> I think I see what you mean now. I agree, it isn't the end of the world if
> the kernel clears the page and then userspace overwrites it.
>
> The way I see it is:
>
> @@ -400,20 +401,26 @@ static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
>  	if (WARN_ON_ONCE(folio_test_large(folio))) {
>  		ret = VM_FAULT_SIGBUS;
>  		goto out_folio;
>  	}
>
>  	if (!folio_test_uptodate(folio)) {
>  		clear_highpage(folio_page(folio, 0));
>  		kvm_gmem_mark_prepared(folio);
>  	}
>
> +	if (userfaultfd_minor(vmf->vma)) {
> +		folio_unlock(folio);
> +		filemap_invalidate_unlock_shared(inode->i_mapping);
> +		return handle_userfault(vmf, VM_UFFD_MISSING);
> +	}

I suppose you meant s/MISSING/MINOR/.
> +
>  	vmf->page = folio_file_page(folio, vmf->pgoff);
>
> out_folio:
>  	if (ret != VM_FAULT_LOCKED) {
>  		folio_unlock(folio);
>  		folio_put(folio);
>  	}
>
> On the first fault (cache miss), the kernel will allocate/add/clear the
> page (as there is no MISSING trap now), and once the page is in the cache,
> a MINOR event will be sent for userspace to copy its content. Please let
> me know if this is an acceptable semantics.
>
> Since userspace is getting notified after KVM calls
> kvm_gmem_mark_prepared(), which removes the page from the direct map [1],
> userspace can't use write() to populate the content because write() relies
> on direct map [2]. However userspace can do a plain memcpy that would use
> user pagetables instead. This forces userspace to respond to stage-2 and
> VMA faults in guest_memfd differently, via write() and memcpy respectively.
> It doesn't seem like a significant problem though.

It looks ok in general, but could you remind me why you need to stick with
write() syscall?

IOW, if gmemfd will always need mmap() and it's fully accessible from
userspace in your use case, wouldn't mmap()+memcpy() always work already,
and always better than write()?

Thanks,

>
> I believe, with this approach the original race condition is gone because
> UFFD messages are only sent on cache hit and it is up to userspace to
> serialise writes. Please correct me if I'm wrong here.
>
> [1] https://lore.kernel.org/kvm/20250221160728.1584559-1-roypat@amazon.co.uk/T/#mdf41fe2dc33332e9c500febd47e14ae91ad99724
> [2] https://lore.kernel.org/kvm/20241129123929.64790-1-kalyazin@amazon.com/T/#mf5d794aa31d753cbc73e193628f31e418051983d
>
> > >
> > > > as long as the content can only be accessed from the pgtable (either via
> > > > mmap() or GUP on top of it), then afaiu it could work similarly like
> > > > MISSING faults, because anything trying to access it will be trapped.
> >
> > [1]
> >
> > --
> > Peter Xu
On 13/03/2025 19:12, Peter Xu wrote:
> On Thu, Mar 13, 2025 at 03:25:16PM +0000, Nikita Kalyazin wrote:
>>
>>
>> On 12/03/2025 19:32, Peter Xu wrote:
>>> On Wed, Mar 12, 2025 at 05:07:25PM +0000, Nikita Kalyazin wrote:
>>>> However if MISSING is not registered, the kernel will auto-populate with a
>>>> clear page, ie there is no way to inject custom content from userspace. To
>>>> explain my use case a bit more, the population thread will be trying to copy
>>>> all guest memory proactively, but there will inevitably be cases where a
>>>> page is accessed through pgtables _before_ it gets populated. It is not
>>>> desirable for such access to result in a clear page provided by the kernel.
>>>
>>> IMHO populating with a zero page in the page cache is fine. It needs to
>>> make sure all accesses will go via the pgtable, as discussed below in my
>>> previous email [1], then nobody will be able to see the zero page, not
>>> until someone updates the content then follow up with a CONTINUE to install
>>> the pgtable entry.
>>>
>>> If there is any way that the page can be accessed without the pgtable
>>> installation, minor faults won't work indeed.
>>
>> I think I see what you mean now. I agree, it isn't the end of the world if
>> the kernel clears the page and then userspace overwrites it.
>>
>> The way I see it is:
>>
>> @@ -400,20 +401,26 @@ static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
>>  	if (WARN_ON_ONCE(folio_test_large(folio))) {
>>  		ret = VM_FAULT_SIGBUS;
>>  		goto out_folio;
>>  	}
>>
>>  	if (!folio_test_uptodate(folio)) {
>>  		clear_highpage(folio_page(folio, 0));
>>  		kvm_gmem_mark_prepared(folio);
>>  	}
>>
>> +	if (userfaultfd_minor(vmf->vma)) {
>> +		folio_unlock(folio);
>> +		filemap_invalidate_unlock_shared(inode->i_mapping);
>> +		return handle_userfault(vmf, VM_UFFD_MISSING);
>> +	}
>
> I suppose you meant s/MISSING/MINOR/.

Yes, that's what I meant, thank you.
>> +
>>  	vmf->page = folio_file_page(folio, vmf->pgoff);
>>
>> out_folio:
>>  	if (ret != VM_FAULT_LOCKED) {
>>  		folio_unlock(folio);
>>  		folio_put(folio);
>>  	}
>>
>> On the first fault (cache miss), the kernel will allocate/add/clear the
>> page (as there is no MISSING trap now), and once the page is in the cache,
>> a MINOR event will be sent for userspace to copy its content. Please let
>> me know if this is an acceptable semantics.
>>
>> Since userspace is getting notified after KVM calls
>> kvm_gmem_mark_prepared(), which removes the page from the direct map [1],
>> userspace can't use write() to populate the content because write() relies
>> on direct map [2]. However userspace can do a plain memcpy that would use
>> user pagetables instead. This forces userspace to respond to stage-2 and
>> VMA faults in guest_memfd differently, via write() and memcpy respectively.
>> It doesn't seem like a significant problem though.
>
> It looks ok in general, but could you remind me why you need to stick with
> write() syscall?
>
> IOW, if gmemfd will always need mmap() and it's fully accessible from
> userspace in your use case, wouldn't mmap()+memcpy() always work already,
> and always better than write()?

Yes, that's right, mmap() + memcpy() is functionally sufficient. write() is
an optimisation. Most of the pages in guest_memfd are only ever accessed by
the vCPU (not userspace) via TDP (stage-2 pagetables) so they don't need
userspace pagetables set up. By using write() we can avoid VMA faults,
installing corresponding PTEs and double page initialisation we discussed
earlier. The optimised path only contains pagecache population via write().
Even TDP faults can be avoided if using KVM prefaulting API [1].

[1] https://docs.kernel.org/virt/kvm/api.html#kvm-pre-fault-memory

>
> Thanks,
>
>>
>> I believe, with this approach the original race condition is gone because
>> UFFD messages are only sent on cache hit and it is up to userspace to
>> serialise writes.
>> Please correct me if I'm wrong here.
>>
>> [1] https://lore.kernel.org/kvm/20250221160728.1584559-1-roypat@amazon.co.uk/T/#mdf41fe2dc33332e9c500febd47e14ae91ad99724
>> [2] https://lore.kernel.org/kvm/20241129123929.64790-1-kalyazin@amazon.com/T/#mf5d794aa31d753cbc73e193628f31e418051983d
>>
>>>>
>>>>> as long as the content can only be accessed from the pgtable (either via
>>>>> mmap() or GUP on top of it), then afaiu it could work similarly like
>>>>> MISSING faults, because anything trying to access it will be trapped.
>>>
>>> [1]
>>>
>>> --
>>> Peter Xu
On Thu, Mar 13, 2025 at 10:13:23PM +0000, Nikita Kalyazin wrote:
> Yes, that's right, mmap() + memcpy() is functionally sufficient. write() is
> an optimisation. Most of the pages in guest_memfd are only ever accessed by
> the vCPU (not userspace) via TDP (stage-2 pagetables) so they don't need
> userspace pagetables set up. By using write() we can avoid VMA faults,
> installing corresponding PTEs and double page initialisation we discussed
> earlier. The optimised path only contains pagecache population via write().
> Even TDP faults can be avoided if using KVM prefaulting API [1].
>
> [1] https://docs.kernel.org/virt/kvm/api.html#kvm-pre-fault-memory

Could you elaborate why VMA faults matter in perf?

If we're talking about postcopy-like migrations on top of KVM guest-memfd,
IIUC the VMAs can be pre-faulted too just like the TDP pgtables, e.g. with
MADV_POPULATE_WRITE.

Normally, AFAIU userapp optimizes IOs the other way round.. to change
write()s into mmap()s, which at least avoids one round of copy.

For postcopy using minor traps (and since guest-memfd is always shared and
non-private..), it's also possible to feed the mmap()ed VAs to NIC as
buffers (e.g. in recvmsg(), for example, as part of iovec[]), and as long
as the mmap()ed ranges are not registered by KVM memslots, there's no
concern on non-atomic copy.

Thanks,
On 13/03/2025 22:38, Peter Xu wrote:
> On Thu, Mar 13, 2025 at 10:13:23PM +0000, Nikita Kalyazin wrote:
>> Yes, that's right, mmap() + memcpy() is functionally sufficient. write() is
>> an optimisation. Most of the pages in guest_memfd are only ever accessed by
>> the vCPU (not userspace) via TDP (stage-2 pagetables) so they don't need
>> userspace pagetables set up. By using write() we can avoid VMA faults,
>> installing corresponding PTEs and double page initialisation we discussed
>> earlier. The optimised path only contains pagecache population via write().
>> Even TDP faults can be avoided if using KVM prefaulting API [1].
>>
>> [1] https://docs.kernel.org/virt/kvm/api.html#kvm-pre-fault-memory
>
> Could you elaborate why VMA faults matters in perf?

Based on my experiments, I can populate 3GiB of guest_memfd with write() in
980 ms, while memcpy takes 2140 ms. When I was profiling it, I saw ~63% of
memcpy time spent in the exception handler, which made me think VMA faults
mattered.

> If we're talking about postcopy-like migrations on top of KVM guest-memfd,
> IIUC the VMAs can be pre-faulted too just like the TDP pgtables, e.g. with
> MADV_POPULATE_WRITE.

Yes, I was thinking about MADV_POPULATE_WRITE as well, but AFAIK it isn't
available in guest_memfd, at least with direct map removed due to [1] being
updated in [2]:

diff --git a/mm/gup.c b/mm/gup.c
index 3883b307780e..7ddaf93c5b6a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1283,7 +1283,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
 		return -EOPNOTSUPP;

-	if (vma_is_secretmem(vma))
+	if (vma_is_secretmem(vma) || vma_is_no_direct_map(vma))
 		return -EFAULT;

 	if (write) {

[1] https://elixir.bootlin.com/linux/v6.13.6/source/mm/gup.c#L1286
[2] https://lore.kernel.org/kvm/20250221160728.1584559-1-roypat@amazon.co.uk/T/#m05b5c6366be27c98a86baece52b2f408c455e962

> Normally, AFAIU userapp optimizes IOs the other way round..
> to change write()s into mmap()s, which at least avoids one round of copy.
>
> For postcopy using minor traps (and since guest-memfd is always shared and
> non-private..), it's also possible to feed the mmap()ed VAs to NIC as
> buffers (e.g. in recvmsg(), for example, as part of iovec[]), and as long
> as the mmap()ed ranges are not registered by KVM memslots, there's no
> concern on non-atomic copy.

Yes, I see what you mean. It may be faster depending on the setup, if it's
possible to remove one copy.

Anyway, it looks like the solution we discussed allows to choose between
memcpy-only and memcpy/write-combined userspace implementations. I'm going
to work on the next version of the series that would include MINOR trap and
avoiding KVM dependency in mm via calling vm_ops->fault() in
UFFDIO_CONTINUE.

> Thanks,
>
> --
> Peter Xu
On Fri, Mar 14, 2025 at 05:12:35PM +0000, Nikita Kalyazin wrote:
> Yes, I was thinking about MADV_POPULATE_WRITE as well, but AFAIK it isn't
> available in guest_memfd, at least with direct map removed due to [1] being
> updated in [2]:

I see, so GUP is no-go.

IIUC the userapp can also prefault by writing zeros in a loop after mmap().

Thanks,
On Fri, Mar 14, 2025 at 05:12:35PM +0000, Nikita Kalyazin wrote:
> Anyway, it looks like the solution we discussed allows to choose between
> memcpy-only and memcpy/write-combined userspace implementations. I'm going
> to work on the next version of the series that would include MINOR trap and
> avoiding KVM dependency in mm via calling vm_ops->fault() in
> UFFDIO_CONTINUE.

I'll attach some more context, not directly relevant to this series, but
just FYI.

One thing I am not yet sure is whether ultimately we still need to register
userfaultfd with another fd using offset ranges. The problem is whether
there will be userfaultfd trapping demand on the pure private CoCo use case
later. The only thing I'm not sure is if all guest-memfd use cases allow
mmap(). If true, then maybe we can stick with the current UFFDIO_REGISTER
on VA ranges.

In all cases, I think you can proceed with whatever you plan to do to add
initial guest-memfd userfaultfd supports, as long as acceptable from KVM
list.

The other thing is, what you're looking for indeed looks very close to what
we may need. We want to have purely shared guest-memfd working just like
vanilla memfd_create(), not only for 4K but for huge pages. We also want
GUP working, so it can replace the old hugetlbfs use case. I had a feeling
that all the directions of guest-memfd recently happening on the list will
ultimately need huge pages. It would be the same for you maybe, but only
that your use case does not allow any permanent mapping that is visible to
the kernel. Probably that's why GUP is forbidden but kmap isn't in your
write()s; please bear with me if I made things wrong, I don't understand
your use case well.

Just in case helpful, I have some PoC branches ready allowing 1G pages to
be mapped to userspace.
https://github.com/xzpeter/linux/commits/peter-gmem-v0.2/

The work is based on Ackerley's 1G series, which contains most of the folio
management part (but I fixed quite a few bugs in my tree; I believe
Ackerley should have them fixed in his to-be-posted too). I also have a
QEMU branch ready that can boot with it (I didn't yet test more things).

https://github.com/xzpeter/qemu/commits/peter-gmem-v0.2/

For example, besides guest-memfd alone, we definitely also need guest-memfd
being trappable by userfaultfd, as what you are trying to do here, one way
or another.

Thanks,