| Message ID | 20221214194056.161492-22-michael.roth@amd.com |
|---|---|
| State | New |
| Series | Add AMD Secure Nested Paging (SEV-SNP) Hypervisor Support |
On Wed, 14 Dec 2022, Michael Roth wrote:

> From: Hugh Dickins <hughd@google.com>
>
> When the address is backed by a memfd, the code to split the page does
> nothing more than remove the PMD from the page tables. So immediately
> install a PTE to ensure that any other pages in that 2MB region are
> brought back as in 4K pages.
>
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Cc: Hugh Dickins <hughd@google.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> Signed-off-by: Michael Roth <michael.roth@amd.com>

Hah, it's good to see this again, but it was "Suggested-by" me, not
"Signed-off-by" me. And it was a neat pragmatic one-liner workaround for
the immediate problem we had, but it came with caveats.

The problem is that we have one wind blowing in the split direction, and
another wind (khugepaged) blowing in the collapse direction, and who wins
for how long depends on factors I've not fully got to grips with (and is
liable to differ between kernel releases).

Good and bad timing to see it. I was just yesterday reviewing a patch to
the collapsing wind, which reminded me of an improvement yet to be made
there, thinking I'd like to try it sometime; but recalling that someone
somewhere relies on the splitting wind, and doesn't want the collapsing
wind to blow any harder - now you remind me who!

Bad timing in that I don't have any quick answer on the right thing to do
instead, and can't give it the thought it needs at the moment - perhaps
others can chime in more usefully.

Hugh

p.s. I don't know where "handle_split_page_fault" comes in, but
"x86/fault" in the subject looks wrong, since this appears to be in
generic code; and "memfd" seems inappropriate too, but perhaps you have a
situation where only memfds can reach handle_split_page_fault().
> ---
>  mm/memory.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index e68da7e403c6..33c9020ba1f8 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4999,6 +4999,11 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
>  static int handle_split_page_fault(struct vm_fault *vmf)
>  {
>  	__split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);
> +	/*
> +	 * Install a PTE immediately to ensure that any other pages in
> +	 * this 2MB region are brought back in as 4K pages.
> +	 */
> +	__pte_alloc(vmf->vma->vm_mm, vmf->pmd);
> 	return 0;
> }
>
> --
> 2.25.1
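For reference, this is how the function would read with the patch applied, reconstructed directly from the diff above. `__split_huge_pmd()` and `__pte_alloc()` are kernel-internal mm helpers, so this is a sketch for reading, not standalone-buildable code:

```c
/* mm/memory.c (as patched) */
static int handle_split_page_fault(struct vm_fault *vmf)
{
	/* Split the huge PMD mapping covering the faulting address. */
	__split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);
	/*
	 * Install a PTE immediately to ensure that any other pages in
	 * this 2MB region are brought back in as 4K pages.
	 */
	__pte_alloc(vmf->vma->vm_mm, vmf->pmd);
	return 0;
}
```

Note that `__pte_alloc()` here allocates and installs a page-table page under the just-cleared PMD, so subsequent faults in the 2MB region populate individual 4K PTEs rather than re-faulting a huge mapping; this is the "splitting wind" that Hugh's reply says khugepaged may later blow against by re-collapsing the region.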