Message ID: 20201030100815.2269-1-daniel.vetter@ffwll.ch
Series: follow_pfn and other iomap races
On Sat, Oct 31, 2020 at 3:55 AM John Hubbard <jhubbard@nvidia.com> wrote:
>
> On 10/30/20 3:08 AM, Daniel Vetter wrote:
> > This is used by media/videobuf2 for persistent dma mappings, not just
> > for a single dma operation and then freed again, so needs
> > FOLL_LONGTERM.
> >
> > Unfortunately current pup_locked doesn't support FOLL_LONGTERM due to
> > locking issues. Rework the code to pull the pup path out from the
> > mmap_sem critical section as suggested by Jason.
> >
> > By relying entirely on the vma checks in pin_user_pages and follow_pfn
>
> There are vma checks in pin_user_pages(), but this patch changes things
> to call pin_user_pages_fast(). And that does not have the vma checks.
> More below about this:
>
> > (for vm_flags and vma_is_fsdax) we can also streamline the code a lot.
> >
> > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > Cc: Jason Gunthorpe <jgg@ziepe.ca>
> > Cc: Pawel Osciak <pawel@osciak.com>
> > Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> > Cc: Kyungmin Park <kyungmin.park@samsung.com>
> > Cc: Tomasz Figa <tfiga@chromium.org>
> > Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: John Hubbard <jhubbard@nvidia.com>
> > Cc: Jérôme Glisse <jglisse@redhat.com>
> > Cc: Jan Kara <jack@suse.cz>
> > Cc: Dan Williams <dan.j.williams@intel.com>
> > Cc: linux-mm@kvack.org
> > Cc: linux-arm-kernel@lists.infradead.org
> > Cc: linux-samsung-soc@vger.kernel.org
> > Cc: linux-media@vger.kernel.org
> > Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > --
> > v2: Streamline the code and further simplify the loop checks (Jason)
> >
> > v5: Review from Tomasz:
> > - fix page counting for the follow_pfn case by resetting ret
> > - drop gup_flags parameter, now unused
> > ---
> >  .../media/common/videobuf2/videobuf2-memops.c |  3 +-
> >  include/linux/mm.h                            |  2 +-
> >  mm/frame_vector.c                             | 53 ++++++-------------
> >  3 files changed, 19 insertions(+), 39 deletions(-)
> >
> > diff --git a/drivers/media/common/videobuf2/videobuf2-memops.c b/drivers/media/common/videobuf2/videobuf2-memops.c
> > index 6e9e05153f4e..9dd6c27162f4 100644
> > --- a/drivers/media/common/videobuf2/videobuf2-memops.c
> > +++ b/drivers/media/common/videobuf2/videobuf2-memops.c
> > @@ -40,7 +40,6 @@ struct frame_vector *vb2_create_framevec(unsigned long start,
> >  	unsigned long first, last;
> >  	unsigned long nr;
> >  	struct frame_vector *vec;
> > -	unsigned int flags = FOLL_FORCE | FOLL_WRITE;
> >
> >  	first = start >> PAGE_SHIFT;
> >  	last = (start + length - 1) >> PAGE_SHIFT;
> > @@ -48,7 +47,7 @@ struct frame_vector *vb2_create_framevec(unsigned long start,
> >  	vec = frame_vector_create(nr);
> >  	if (!vec)
> >  		return ERR_PTR(-ENOMEM);
> > -	ret = get_vaddr_frames(start & PAGE_MASK, nr, flags, vec);
> > +	ret = get_vaddr_frames(start & PAGE_MASK, nr, vec);
> >  	if (ret < 0)
> >  		goto out_destroy;
> >  	/* We accept only complete set of PFNs */
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index ef360fe70aaf..d6b8e30dce2e 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -1765,7 +1765,7 @@ struct frame_vector {
> >  struct frame_vector *frame_vector_create(unsigned int nr_frames);
> >  void frame_vector_destroy(struct frame_vector *vec);
> >  int get_vaddr_frames(unsigned long start, unsigned int nr_pfns,
> > -		     unsigned int gup_flags, struct frame_vector *vec);
> > +		     struct frame_vector *vec);
> >  void put_vaddr_frames(struct frame_vector *vec);
> >  int frame_vector_to_pages(struct frame_vector *vec);
> >  void frame_vector_to_pfns(struct frame_vector *vec);
> > diff --git a/mm/frame_vector.c b/mm/frame_vector.c
> > index 10f82d5643b6..f8c34b895c76 100644
> > --- a/mm/frame_vector.c
> > +++ b/mm/frame_vector.c
> > @@ -32,13 +32,12 @@
> >   * This function takes care of grabbing mmap_lock as necessary.
> >   */
> >  int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
> > -		     unsigned int gup_flags, struct frame_vector *vec)
> > +		     struct frame_vector *vec)
> >  {
> >  	struct mm_struct *mm = current->mm;
> >  	struct vm_area_struct *vma;
> >  	int ret = 0;
> >  	int err;
> > -	int locked;
> >
> >  	if (nr_frames == 0)
> >  		return 0;
> >
> > @@ -48,40 +47,26 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
> >
> >  	start = untagged_addr(start);
> >
> > -	mmap_read_lock(mm);
> > -	locked = 1;
> > -	vma = find_vma_intersection(mm, start, start + 1);
> > -	if (!vma) {
> > -		ret = -EFAULT;
> > -		goto out;
> > -	}
> > -
> > -	/*
> > -	 * While get_vaddr_frames() could be used for transient (kernel
> > -	 * controlled lifetime) pinning of memory pages all current
> > -	 * users establish long term (userspace controlled lifetime)
> > -	 * page pinning. Treat get_vaddr_frames() like
> > -	 * get_user_pages_longterm() and disallow it for filesystem-dax
> > -	 * mappings.
> > -	 */
> > -	if (vma_is_fsdax(vma)) {
> > -		ret = -EOPNOTSUPP;
> > -		goto out;
> > -	}
> > -
> > -	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP))) {
>
> By removing this check from this location, and changing from
> pin_user_pages_locked() to pin_user_pages_fast(), I *think* we end up
> losing the check entirely. Is that intended? If so it could use a comment
> somewhere to explain why.

Yeah, this wasn't intentional. I think I needed to drop the _locked
version to prep for FOLL_LONGTERM, and figured _fast is always better.
But I didn't realize that _fast doesn't have the vma checks; gup.c got
me a bit confused. I'll remedy this in all the patches where this
applies (because a VM_IO | VM_PFNMAP vma can point at struct page
backed memory, and that exact use-case is what we want to stop with the
unsafe_follow_pfn work, since it wrecks things like cma or security).

Aside: I do wonder whether the lack of that check isn't a problem.
VM_IO | VM_PFNMAP generally means driver managed, which means the
driver isn't going to consult the page pin count or anything like that
(at least not necessarily) when revoking or moving that memory, since
we're assuming it's totally under driver control. So if pup_fast can
get into such a mapping, we might have a problem.
-Daniel

> thanks,
> --
> John Hubbard
> NVIDIA

> > +	ret = pin_user_pages_fast(start, nr_frames,
> > +				  FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM,
> > +				  (struct page **)(vec->ptrs));
> > +	if (ret > 0) {
> >  		vec->got_ref = true;
> >  		vec->is_pfns = false;
> > -		ret = pin_user_pages_locked(start, nr_frames,
> > -			gup_flags, (struct page **)(vec->ptrs), &locked);
> > -		goto out;
> > +		goto out_unlocked;
> >  	}
> >
> > +	mmap_read_lock(mm);
> >  	vec->got_ref = false;
> >  	vec->is_pfns = true;
> > +	ret = 0;
> >  	do {
> >  		unsigned long *nums = frame_vector_pfns(vec);
> >
> > +		vma = find_vma_intersection(mm, start, start + 1);
> > +		if (!vma)
> > +			break;
> > +
> >  		while (ret < nr_frames && start + PAGE_SIZE <= vma->vm_end) {
> >  			err = follow_pfn(vma, start, &nums[ret]);
> >  			if (err) {
> > @@ -92,17 +77,13 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
> >  			start += PAGE_SIZE;
> >  			ret++;
> >  		}
> > -		/*
> > -		 * We stop if we have enough pages or if VMA doesn't completely
> > -		 * cover the tail page.
> > -		 */
> > -		if (ret >= nr_frames || start < vma->vm_end)
> > +		/* Bail out if VMA doesn't completely cover the tail page. */
> > +		if (start < vma->vm_end)
> >  			break;
> > -		vma = find_vma_intersection(mm, start, start + 1);
> > -	} while (vma && vma->vm_flags & (VM_IO | VM_PFNMAP));
> > +	} while (ret < nr_frames);
> >  out:
> > -	if (locked)
> > -		mmap_read_unlock(mm);
> > +	mmap_read_unlock(mm);
> > +out_unlocked:
> >  	if (!ret)
> >  		ret = -EFAULT;
> >  	if (ret > 0)

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Fri, Oct 30, 2020 at 3:38 PM Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
>
> On Fri, Oct 30, 2020 at 3:11 PM Tomasz Figa <tfiga@chromium.org> wrote:
> >
> > On Fri, Oct 30, 2020 at 11:08 AM Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
> > >
> > > This is used by media/videobuf2 for persistent dma mappings, not just
> > > for a single dma operation and then freed again, so needs
> > > FOLL_LONGTERM.
> > >
> > > Unfortunately current pup_locked doesn't support FOLL_LONGTERM due to
> > > locking issues. Rework the code to pull the pup path out from the
> > > mmap_sem critical section as suggested by Jason.
> > >
> > > By relying entirely on the vma checks in pin_user_pages and follow_pfn
> > > (for vm_flags and vma_is_fsdax) we can also streamline the code a lot.
> > >
> > > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > > Cc: Jason Gunthorpe <jgg@ziepe.ca>
> > > Cc: Pawel Osciak <pawel@osciak.com>
> > > Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> > > Cc: Kyungmin Park <kyungmin.park@samsung.com>
> > > Cc: Tomasz Figa <tfiga@chromium.org>
> > > Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
> > > Cc: Andrew Morton <akpm@linux-foundation.org>
> > > Cc: John Hubbard <jhubbard@nvidia.com>
> > > Cc: Jérôme Glisse <jglisse@redhat.com>
> > > Cc: Jan Kara <jack@suse.cz>
> > > Cc: Dan Williams <dan.j.williams@intel.com>
> > > Cc: linux-mm@kvack.org
> > > Cc: linux-arm-kernel@lists.infradead.org
> > > Cc: linux-samsung-soc@vger.kernel.org
> > > Cc: linux-media@vger.kernel.org
> > > Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > --
> > > v2: Streamline the code and further simplify the loop checks (Jason)
> > >
> > > v5: Review from Tomasz:
> > > - fix page counting for the follow_pfn case by resetting ret
> > > - drop gup_flags parameter, now unused
> > > ---
> > >  .../media/common/videobuf2/videobuf2-memops.c |  3 +-
> > >  include/linux/mm.h                            |  2 +-
> > >  mm/frame_vector.c                             | 53 ++++++-------------
> > >  3 files changed, 19 insertions(+), 39 deletions(-)
> > >
> >
> > Thanks, looks good to me now.
> >
> > Acked-by: Tomasz Figa <tfiga@chromium.org>
> >
> > From reading the code, this is quite unlikely to introduce any
> > behavior changes, but just to be safe, did you have a chance to test
> > this with some V4L2 driver?
>
> Nah, unfortunately not.

I believe we don't have any setup that could exercise the IO/PFNMAP
user pointers, but it should be possible to exercise the basic userptr
path by enabling the virtual (fake) video driver, vivid or
CONFIG_VIDEO_VIVID, in your kernel and then using yavta [1] with
--userptr and --capture=<number of frames> (and possibly some more
options) to grab a couple of frames from the test pattern generator.

Does it sound like something that you could give a try? Feel free to
ping me on IRC (tfiga on #v4l or #dri-devel) if you need any help.

[1] https://git.ideasonboard.org/yavta.git

Best regards,
Tomasz

> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
On Sun, Nov 01, 2020 at 11:50:39PM +0100, Daniel Vetter wrote:
> It's not device drivers, but everyone else. At least my understanding
> is that VM_IO | VM_PFNMAP means "even if it happens to be backed by a
> struct page, do not treat it like normal memory". And gup/pup_fast
> happily break that. I tried to chase the history of that test, didn't
> turn up anything I understood much:

VM_IO isn't supposed to have struct pages, so how can gup_fast return
them? I thought some magic in the PTE flags excluded this?

Jason
On Wed, Nov 04, 2020 at 04:54:19PM +0100, Daniel Vetter wrote:
> I don't really have a box here, but dma_mmap_attrs() and friends to
> mmap dma_alloc_coherent memory is set up as VM_IO | VM_PFNMAP (it's
> actually enforced since underneath it uses remap_pfn_range), and
> usually (except if it's pre-cma carveout) that's just normal struct
> page backed memory. Sometimes from a cma region (so will be caught by
> the cma page check), but if you have an iommu to make it
> device-contiguous, that's not needed.

dma_mmap_* memory may or may not be page backed, but it absolutely
must not be resolved by get_user_pages and friends as it is special.
So yes, not being able to get a struct page back from such an mmap is
a feature.
On Wed, Nov 4, 2020 at 5:21 PM Christoph Hellwig <hch@infradead.org> wrote:
>
> On Wed, Nov 04, 2020 at 04:54:19PM +0100, Daniel Vetter wrote:
> > I don't really have a box here, but dma_mmap_attrs() and friends to
> > mmap dma_alloc_coherent memory is set up as VM_IO | VM_PFNMAP (it's
> > actually enforced since underneath it uses remap_pfn_range), and
> > usually (except if it's pre-cma carveout) that's just normal struct
> > page backed memory. Sometimes from a cma region (so will be caught by
> > the cma page check), but if you have an iommu to make it
> > device-contiguous, that's not needed.
>
> dma_mmap_* memory may or may not be page backed, but it absolutely
> must not be resolved by get_user_pages and friends as it is special.
> So yes, not being able to get a struct page back from such an mmap is
> a feature.

Yes, that's clear. What we're discussing is whether gup_fast and
pup_fast also obey this, or fall over and can give you the struct page
that's backing the dma_mmap_* memory. Since the _fast variant doesn't
check for vma->vm_flags, and afaict that's the only thing which closes
this gap. And like you restate, that would be a bit of a problem. So
where's that check which Jason&me aren't spotting?
-Daniel
On Wed, Nov 04, 2020 at 05:26:58PM +0100, Daniel Vetter wrote:
> What we're discussing is whether gup_fast and pup_fast also obey this,
> or fall over and can give you the struct page that's backing the
> dma_mmap_* memory. Since the _fast variant doesn't check for
> vma->vm_flags, and afaict that's the only thing which closes this gap.
> And like you restate, that would be a bit of a problem. So where's that
> check which Jason&me aren't spotting?

remap_pte_range uses pte_mkspecial to set up the PTEs, and gup_pte_range
errors out on pte_special. Of course this only works for the
CONFIG_ARCH_HAS_PTE_SPECIAL case, for other architectures we do have
a real problem.
On Wed, Nov 04, 2020 at 04:37:58PM +0000, Christoph Hellwig wrote:
> On Wed, Nov 04, 2020 at 05:26:58PM +0100, Daniel Vetter wrote:
> > What we're discussing is whether gup_fast and pup_fast also obey this,
> > or fall over and can give you the struct page that's backing the
> > dma_mmap_* memory. Since the _fast variant doesn't check for
> > vma->vm_flags, and afaict that's the only thing which closes this gap.
> > And like you restate, that would be a bit of a problem. So where's that
> > check which Jason&me aren't spotting?
>
> remap_pte_range uses pte_mkspecial to set up the PTEs, and gup_pte_range
> errors out on pte_special. Of course this only works for the
> CONFIG_ARCH_HAS_PTE_SPECIAL case, for other architectures we do have
> a real problem.

Except that we don't really support pte-level gup-fast without
CONFIG_ARCH_HAS_PTE_SPECIAL, and in fact all architectures selecting
HAVE_FAST_GUP also select ARCH_HAS_PTE_SPECIAL, so we should be fine.
On 11/4/20 10:17 AM, Jason Gunthorpe wrote:
> On Wed, Nov 04, 2020 at 04:41:19PM +0000, Christoph Hellwig wrote:
>> On Wed, Nov 04, 2020 at 04:37:58PM +0000, Christoph Hellwig wrote:
>>> On Wed, Nov 04, 2020 at 05:26:58PM +0100, Daniel Vetter wrote:
>>>> What we're discussing is whether gup_fast and pup_fast also obey this,
>>>> or fall over and can give you the struct page that's backing the
>>>> dma_mmap_* memory. Since the _fast variant doesn't check for
>>>> vma->vm_flags, and afaict that's the only thing which closes this gap.
>>>> And like you restate, that would be a bit of a problem. So where's that
>>>> check which Jason&me aren't spotting?
>>>
>>> remap_pte_range uses pte_mkspecial to set up the PTEs, and gup_pte_range
>>> errors out on pte_special. Of course this only works for the
>>> CONFIG_ARCH_HAS_PTE_SPECIAL case, for other architectures we do have
>>> a real problem.
>>
>> Except that we don't really support pte-level gup-fast without
>> CONFIG_ARCH_HAS_PTE_SPECIAL, and in fact all architectures selecting
>> HAVE_FAST_GUP also select ARCH_HAS_PTE_SPECIAL, so we should be fine.
>
> Mm, I thought it was probably the special flag..
>
> Knowing that CONFIG_HAVE_FAST_GUP can't be set without
> CONFIG_ARCH_HAS_PTE_SPECIAL is pretty insightful, can we put that in
> the Kconfig?
>
> config HAVE_FAST_GUP
>         depends on MMU
>         depends on ARCH_HAS_PTE_SPECIAL
>         bool

Well, the !CONFIG_ARCH_HAS_PTE_SPECIAL case points out in a comment that
gup-fast is not *completely* unavailable there, so I don't think you want
to shut it off like that:

/*
 * If we can't determine whether or not a pte is special, then fail immediately
 * for ptes. Note, we can still pin HugeTLB and THP as these are guaranteed not
 * to be special.
 *
 * For a futex to be placed on a THP tail page, get_futex_key requires a
 * get_user_pages_fast_only implementation that can pin pages. Thus it's still
 * useful to have gup_huge_pmd even if we can't operate on ptes.
 */

thanks,
--
John Hubbard
NVIDIA
On Wed, Nov 04, 2020 at 10:44:56AM -0800, John Hubbard wrote:
> On 11/4/20 10:17 AM, Jason Gunthorpe wrote:
> > On Wed, Nov 04, 2020 at 04:41:19PM +0000, Christoph Hellwig wrote:
> > > On Wed, Nov 04, 2020 at 04:37:58PM +0000, Christoph Hellwig wrote:
> > > > remap_pte_range uses pte_mkspecial to set up the PTEs, and gup_pte_range
> > > > errors out on pte_special. Of course this only works for the
> > > > CONFIG_ARCH_HAS_PTE_SPECIAL case, for other architectures we do have
> > > > a real problem.
> > >
> > > Except that we don't really support pte-level gup-fast without
> > > CONFIG_ARCH_HAS_PTE_SPECIAL, and in fact all architectures selecting
> > > HAVE_FAST_GUP also select ARCH_HAS_PTE_SPECIAL, so we should be fine.
> >
> > Mm, I thought it was probably the special flag..
> >
> > Knowing that CONFIG_HAVE_FAST_GUP can't be set without
> > CONFIG_ARCH_HAS_PTE_SPECIAL is pretty insightful, can we put that in
> > the Kconfig?
> >
> > config HAVE_FAST_GUP
> >         depends on MMU
> >         depends on ARCH_HAS_PTE_SPECIAL
> >         bool
>
> Well, the !CONFIG_ARCH_HAS_PTE_SPECIAL case points out in a comment that
> gup-fast is not *completely* unavailable there, so I don't think you want
> to shut it off like that:
>
> /*
>  * If we can't determine whether or not a pte is special, then fail immediately
>  * for ptes. Note, we can still pin HugeTLB and THP as these are guaranteed not
>  * to be special.
>  *
>  * For a futex to be placed on a THP tail page, get_futex_key requires a
>  * get_user_pages_fast_only implementation that can pin pages. Thus it's still
>  * useful to have gup_huge_pmd even if we can't operate on ptes.
>  */

I saw that once and I really couldn't make sense of it.. What use is
having futexes that only work on THP pages? Confused.

CH said there was no case of HAVE_FAST_GUP && !ARCH_HAS_PTE_SPECIAL, is
one hidden someplace then?

Jason
On Thu, Nov 05, 2020 at 10:25:24AM +0100, Daniel Vetter wrote:
> > /*
> >  * If we can't determine whether or not a pte is special, then fail immediately
> >  * for ptes. Note, we can still pin HugeTLB and THP as these are guaranteed not
> >  * to be special.
> >  *
> >  * For a futex to be placed on a THP tail page, get_futex_key requires a
> >  * get_user_pages_fast_only implementation that can pin pages. Thus it's still
> >  * useful to have gup_huge_pmd even if we can't operate on ptes.
> >  */
>
> We support hugepage faults in gpu drivers since recently, and I'm not
> seeing a pud_mkhugespecial anywhere. So not sure this works, but probably
> just me missing something again.

It means ioremap can't create an IO page PUD, it has to be broken up.

Does ioremap even create anything larger than PTEs?

Jason
On 11/5/20 4:49 AM, Jason Gunthorpe wrote:
> On Thu, Nov 05, 2020 at 10:25:24AM +0100, Daniel Vetter wrote:
>>> /*
>>>  * If we can't determine whether or not a pte is special, then fail immediately
>>>  * for ptes. Note, we can still pin HugeTLB and THP as these are guaranteed not
>>>  * to be special.
>>>  *
>>>  * For a futex to be placed on a THP tail page, get_futex_key requires a
>>>  * get_user_pages_fast_only implementation that can pin pages. Thus it's still
>>>  * useful to have gup_huge_pmd even if we can't operate on ptes.
>>>  */
>>
>> We support hugepage faults in gpu drivers since recently, and I'm not
>> seeing a pud_mkhugespecial anywhere. So not sure this works, but probably
>> just me missing something again.
>
> It means ioremap can't create an IO page PUD, it has to be broken up.
>
> Does ioremap even create anything larger than PTEs?

From my reading, yes. See ioremap_try_huge_pmd().

thanks,
On Fri, Nov 6, 2020 at 11:01 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Fri, Nov 6, 2020 at 5:08 AM John Hubbard <jhubbard@nvidia.com> wrote:
> >
> > On 11/5/20 4:49 AM, Jason Gunthorpe wrote:
> > > On Thu, Nov 05, 2020 at 10:25:24AM +0100, Daniel Vetter wrote:
> > > > > /*
> > > > >  * If we can't determine whether or not a pte is special, then fail immediately
> > > > >  * for ptes. Note, we can still pin HugeTLB and THP as these are guaranteed not
> > > > >  * to be special.
> > > > >  *
> > > > >  * For a futex to be placed on a THP tail page, get_futex_key requires a
> > > > >  * get_user_pages_fast_only implementation that can pin pages. Thus it's still
> > > > >  * useful to have gup_huge_pmd even if we can't operate on ptes.
> > > > >  */
> > > >
> > > > We support hugepage faults in gpu drivers since recently, and I'm not
> > > > seeing a pud_mkhugespecial anywhere. So not sure this works, but probably
> > > > just me missing something again.
> > >
> > > It means ioremap can't create an IO page PUD, it has to be broken up.
> > >
> > > Does ioremap even create anything larger than PTEs?
>
> gpu drivers also tend to use vmf_insert_pfn* directly, so we can do
> on-demand paging and move buffers around. From what I glanced, for the
> lowest level we do the pte_mkspecial correctly (I think I convinced
> myself that vm_insert_pfn does that), but for pud/pmd levels it seems
> just yolo.

So I dug around a bit more and ttm sets PFN_DEV | PFN_MAP to get past
the various pfn_t_devmap checks (see e.g. vmf_insert_pfn_pmd_prot()).
x86-64 has ARCH_HAS_PTE_DEVMAP, and gup.c seems to handle these
specially, but frankly I got totally lost in what this does. The
comment above the pfn_t_devmap check makes me wonder whether doing this
is correct or not.

Also adding Thomas Hellstrom, who implemented the huge map support in
ttm.
-Daniel

> remap_pfn_range seems to indeed split down to pte level always.
>
> > From my reading, yes. See ioremap_try_huge_pmd().
>
> The ioremap here shouldn't matter, since this is for kernel-internal
> mappings. So that's all fine I think.
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
On Fri, Nov 06, 2020 at 11:27:59AM +0100, Daniel Vetter wrote:
> On Fri, Nov 6, 2020 at 11:01 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > On Fri, Nov 6, 2020 at 5:08 AM John Hubbard <jhubbard@nvidia.com> wrote:
> > > On 11/5/20 4:49 AM, Jason Gunthorpe wrote:
> > > > On Thu, Nov 05, 2020 at 10:25:24AM +0100, Daniel Vetter wrote:
> > > > [...]
> > > > It means ioremap can't create an IO page PUD, it has to be broken up.
> > > >
> > > > Does ioremap even create anything larger than PTEs?
> >
> > gpu drivers also tend to use vmf_insert_pfn* directly, so we can do
> > on-demand paging and move buffers around. From what I glanced, for the
> > lowest level we do the pte_mkspecial correctly (I think I convinced
> > myself that vm_insert_pfn does that), but for pud/pmd levels it seems
> > just yolo.
>
> So I dug around a bit more and ttm sets PFN_DEV | PFN_MAP to get past
> the various pfn_t_devmap checks (see e.g. vmf_insert_pfn_pmd_prot()).
> x86-64 has ARCH_HAS_PTE_DEVMAP, and gup.c seems to handle these
> specially, but frankly I got totally lost in what this does.

The fact vmf_insert_pfn_pmd_prot() has all those BUG_ON's to prevent
putting VM_PFNMAP pages into the page tables seems like a big red flag.

The comment seems to confirm what we are talking about here:

	/*
	 * If we had pmd_special, we could avoid all these restrictions,
	 * but we need to be consistent with PTEs and architectures that
	 * can't support a 'special' bit.
	 */

ie without the ability to mark special we can't block fast gup, and
anyone who does O_DIRECT on these ranges will crash the kernel when it
tries to convert an IO page into a struct page.

Should be easy enough to directly test?

Putting non-struct page PTEs into a VMA without setting VM_PFNMAP just
seems horribly wrong to me.

Jason
On Fri, Nov 06, 2020 at 11:01:57AM +0100, Daniel Vetter wrote:
> gpu drivers also tend to use vmf_insert_pfn* directly, so we can do
> on-demand paging and move buffers around. From what I glanced, for the
> lowest level we do the pte_mkspecial correctly (I think I convinced
> myself that vm_insert_pfn does that), but for pud/pmd levels it seems
> just yolo.
>
> remap_pfn_range seems to indeed split down to pte level always.

That's what it looked like to me too.

> > From my reading, yes. See ioremap_try_huge_pmd().
>
> The ioremap here shouldn't matter, since this is for kernel-internal
> mappings. So that's all fine I think.

Right, sorry to be unclear, we are talking about io_remap_pfn_range(),
which is for userspace mappings in VMAs.

Jason
On Fri, 2020-11-06 at 08:55 -0400, Jason Gunthorpe wrote:
> On Fri, Nov 06, 2020 at 11:27:59AM +0100, Daniel Vetter wrote:
> > On Fri, Nov 6, 2020 at 11:01 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > [...]
> > > gpu drivers also tend to use vmf_insert_pfn* directly, so we can do
> > > on-demand paging and move buffers around. From what I glanced, for the
> > > lowest level we do the pte_mkspecial correctly (I think I convinced
> > > myself that vm_insert_pfn does that), but for pud/pmd levels it seems
> > > just yolo.
> >
> > So I dug around a bit more and ttm sets PFN_DEV | PFN_MAP to get past
> > the various pfn_t_devmap checks (see e.g. vmf_insert_pfn_pmd_prot()).
> > x86-64 has ARCH_HAS_PTE_DEVMAP, and gup.c seems to handle these
> > specially, but frankly I got totally lost in what this does.
>
> The fact vmf_insert_pfn_pmd_prot() has all those BUG_ON's to prevent
> putting VM_PFNMAP pages into the page tables seems like a big red
> flag.
>
> The comment seems to confirm what we are talking about here:
>
> 	/*
> 	 * If we had pmd_special, we could avoid all these restrictions,
> 	 * but we need to be consistent with PTEs and architectures that
> 	 * can't support a 'special' bit.
> 	 */
>
> ie without the ability to mark special we can't block fast gup, and
> anyone who does O_DIRECT on these ranges will crash the kernel when it
> tries to convert an IO page into a struct page.
>
> Should be easy enough to directly test?
>
> Putting non-struct page PTEs into a VMA without setting VM_PFNMAP just
> seems horribly wrong to me.

Although core mm special huge-page support is currently quite limited,
some time ago I extended the pre-existing vma_is_dax() to
vma_is_special_huge():

/**
 * vma_is_special_huge - Are transhuge page-table entries considered special?
 * @vma: Pointer to the struct vm_area_struct to consider
 *
 * Whether transhuge page-table entries are considered "special" following
 * the definition in vm_normal_page().
 *
 * Return: true if transhuge page-table entries should be considered special,
 * false otherwise.
 */
static inline bool vma_is_special_huge(const struct vm_area_struct *vma)
{
	return vma_is_dax(vma) || (vma->vm_file &&
				   (vma->vm_flags &
				    (VM_PFNMAP | VM_MIXEDMAP)));
}

meaning that currently all transhuge page-table entries in a PFNMAP or
MIXEDMAP vma are considered "special". The number of calls to this
function (mainly in the page-splitting code) is quite limited, so
replacing it with a more elaborate per-page-table-entry scheme would, I
guess, definitely be possible. Although all functions using it would
need to require a fallback path for architectures not supporting it.

/Thomas

> Jason