Message ID | 20220526235040.678984-1-dmitry.osipenko@collabora.com
---|---
Series | Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>

On Fri, May 27, 2022 at 02:50:22AM +0300, Dmitry Osipenko wrote:
> Calling the madvise IOCTL twice on a BO causes memory shrinker list
> corruption and crashes the kernel, because the BO is already on the list
> and is added to it again; the BO should be removed from the list before
> it is re-added. Fix it.
>
> Cc: stable@vger.kernel.org
> Fixes: 013b65101315 ("drm/panfrost: Add madvise and shrinker support")
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
>  drivers/gpu/drm/panfrost/panfrost_drv.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index 087e69b98d06..b1e6d238674f 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -433,8 +433,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
>
>  	if (args->retained) {
>  		if (args->madv == PANFROST_MADV_DONTNEED)
> -			list_add_tail(&bo->base.madv_list,
> -				      &pfdev->shrinker_list);
> +			list_move_tail(&bo->base.madv_list,
> +				       &pfdev->shrinker_list);
>  		else if (args->madv == PANFROST_MADV_WILLNEED)
>  			list_del_init(&bo->base.madv_list);
>  	}
> --
> 2.35.3
>
On 5/27/22 02:50, Dmitry Osipenko wrote:
> Hello,
>
> This patchset introduces a memory shrinker for the VirtIO-GPU DRM driver
> and adds memory purging and eviction support to the VirtIO-GPU driver.
>
> The new dma-buf locking convention is introduced here as well.
>
> During OOM, the shrinker will release BOs that are marked as "not needed"
> by userspace using the new madvise IOCTL; it will also evict idling BOs
> to swap. The userspace in this case is the Mesa VirGL driver: it will mark
> the cached BOs as "not needed", allowing the kernel driver to release the
> memory of the cached shmem BOs in lowmem situations, preventing OOM kills.
>
> The Panfrost driver is switched to use the generic memory shrinker.
>
> This patchset includes improvements and fixes for various things that
> I found while I was working on the shrinker.
>
> The Mesa and IGT patches will be kept on hold until this kernel series
> is approved and merged.
>
> This patchset was tested using QEMU and crosvm, including both cases of
> IOMMU off/on.
>
> Mesa: https://gitlab.freedesktop.org/digetx/mesa/-/commits/virgl-madvise
> IGT:  https://gitlab.freedesktop.org/digetx/igt-gpu-tools/-/commits/virtio-madvise
>       https://gitlab.freedesktop.org/digetx/igt-gpu-tools/-/commits/panfrost-madvise
>
> Changelog:
>
> v6: - Added a new VirtIO-related fix patch that was previously sent
>       separately and didn't get much attention:
>
>         drm/gem: Properly annotate WW context on drm_gem_lock_reservations() error
>
>     - Added a new patch that fixes mapping of imported dma-bufs for
>       Tegra DRM and other affected drivers.
>       It's also handy to have it for switching to the new dma-buf
>       locking convention scheme:
>
>         drm/gem: Move mapping of imported dma-bufs to drm_gem_mmap_obj()
>
>     - Added a new patch that fixes shrinker list corruption in the stable
>       Panfrost driver:
>
>         drm/panfrost: Fix shrinker list corruption by madvise IOCTL
>
>     - Added a new minor fix for drm-shmem:
>
>         drm/shmem-helper: Add missing vunmap on error
>
>     - Added a Fixes tag to the "Put mapping ..." patch, as suggested by
>       Steven Price.
>
>     - Added a new VirtIO-GPU driver improvement patch:
>
>         drm/virtio: Return proper error codes instead of -1
>
>     - Reworked the shrinker patches, as suggested by Daniel Vetter:
>
>       - Introduced the new locking convention for dma-bufs. Tested on
>         VirtIO-GPU, Panfrost, Lima, Tegra and Intel selftests.
>
>       - Dropped the separate purge() callback. Now a single evict() does
>         everything.
>
>       - Dropped the swap_in() callback from drm-shmem objects. DRM drivers
>         now can and should restore only the required mappings.
>
>       - Dropped dynamic counting of evictable pages. This simplifies the
>         code in exchange for *potentially* burning more CPU time on OOM.
>
> v5: - Added a new for-stable patch "drm/panfrost: Put mapping instead of
>       shmem obj on panfrost_mmu_map_fault_addr() error" that corrects the
>       GEM's refcounting in case of error.
>
>     - drm_gem_shmem_v[un]map() now takes a separate vmap_lock for
>       imported GEMs to avoid recursive locking of DMA reservations.
>       This addresses a v4 comment from Thomas Zimmermann about potential
>       deadlocking of vmapping.
>
>     - Added an ack from Thomas Zimmermann to the "drm/shmem-helper: Correct
>       doc-comment of drm_gem_shmem_get_sg_table()" patch.
>
>     - Dropped explicit shmem states from the generic shrinker patch, as
>       requested by Thomas Zimmermann.
>
>     - Improved variable names and comments of the generic shrinker code.
>
>     - Extended drm_gem_shmem_print_info() with the shrinker-state info in
>       the "drm/virtio: Support memory shrinking" patch.
>     - Moved the evict()/swap_in()/purge() callbacks from drm_gem_object_funcs
>       to drm_gem_shmem_object in the generic shrinker patch, for more
>       consistency.
>
>     - Corrected bisectability of the patches, which was broken in v4
>       by accident.
>
>     - virtio_gpu_plane_prepare_fb() now uses drm_gem_shmem_pin() instead
>       of drm_gem_shmem_set_unpurgeable_and_unevictable(), and does so only
>       for shmem BOs, in the "drm/virtio: Support memory shrinking" patch.
>
>     - Made more functions private to drm_gem_shmem_helper.c, as requested
>       by Thomas Zimmermann. This minimizes the number of public shmem
>       helpers.
>
> v4: - Corrected minor W=1 warnings reported by the kernel test robot
>       for v3.
>
>     - Renamed DRM_GEM_SHMEM_PAGES_STATE_ACTIVE/INACTIVE to PINNED/UNPINNED,
>       for more clarity.
>
> v3: - Hardened the shrinker's count() with READ_ONCE(), since we don't
>       use an atomic type for counting and technically the compiler is
>       free to re-fetch the counter's variable.
>
>     - "Correct drm_gem_shmem_get_sg_table() error handling" now uses
>       PTR_ERR_OR_ZERO(), fixing a typo made in v2.
>
>     - Removed the obsolete shrinker from the Panfrost driver, which I
>       missed in v2 by accident and Alyssa Rosenzweig managed to notice.
>
>     - CCed stable kernels on all patches that make fixes, even the minor
>       ones, as suggested by Emil Velikov, and added his r-b to the patches.
>
>     - Added a t-b from Steven Price to Panfrost's shrinker patch.
>
>     - Corrected the doc-comment of drm_gem_shmem_object.madv, as suggested
>       by Steven Price. The comment now says that madv=1 means "object is
>       purged" instead of saying that the value is unused.
>
>     - Added more doc-comments to the new shmem shrinker API.
>
>     - The "Improve DMA API usage for shmem BOs" patch got more improvements
>       by removing the obsolete drm_dev_set_unique() quirk and its comment.
>
>     - Added a patch that makes the VirtIO-GPU driver use the common
>       dev_is_pci() helper, as suggested by Robin Murphy.
>     - Added a new "drm/shmem-helper: Take GEM reservation lock instead of
>       drm_gem_shmem locks" patch, as suggested by Daniel Vetter.
>
>     - Added a new "drm/virtio: Simplify error handling of
>       virtio_gpu_object_create()" patch.
>
>     - Improved the "Correct doc-comment of drm_gem_shmem_get_sg_table()"
>       patch, as suggested by Daniel Vetter, by saying that the function
>       returns ERR_PTR() and not an errno.
>
>     - virtio_gpu_purge_object() is fenced properly now; it turned out
>       virtio_gpu_notify() doesn't do fencing as I assumed before.
>       Stress testing of memory eviction revealed that.
>
>     - Added a new patch that corrects virtio_gpu_plane_cleanup_fb() to use
>       the appropriate atomic plane state.
>
>     - The SHMEM shrinker got eviction support.
>
>     - The VirtIO-GPU driver now supports memory eviction. It's enabled for
>       non-blob GEMs only, i.e. for VirGL. The blobs don't support dynamic
>       attaching/detaching of guest memory, so it's not trivial to enable
>       them.
>
>     - Added a patch that removes the obsolete drm_gem_shmem_purge().
>
>     - Added a patch that makes drm_gem_shmem_get_pages() private.
>
>     - Added a patch that fixes a lockup on dma_resv_reserve_fences() error.
>
> v2: - Improved the shrinker by using more fine-grained locking to reduce
>       contention during the scan of objects, and dropped locking from the
>       'counting' callback by tracking the count of shrinkable pages. This
>       was suggested by Rob Clark in a comment on v1.
>
>     - Factored out the common shrinker code into drm_gem_shmem_helper.c
>       and switched the Panfrost driver to use the new common memory
>       shrinker. This was proposed by Thomas Zimmermann in the prototype
>       series that he shared with us in a comment on v1. Note that I only
>       compile-tested the Panfrost driver.
>
>     - The shrinker now takes object_name_lock during the scan to prevent
>       racing with dma-buf exporting.
>
>     - The shrinker now takes vmap_lock during the scan to prevent racing
>       with the shmem vmap/unmap code.
>     - Added the "Correct doc-comment of drm_gem_shmem_get_sg_table()"
>       patch, which I sent out previously as a standalone change, since
>       drm_gem_shmem_helper.c is now touched by this patchset anyway and
>       it doesn't hurt to group all the patches together.
>
> Dmitry Osipenko (22):
>   drm/gem: Properly annotate WW context on drm_gem_lock_reservations()
>     error
>   drm/gem: Move mapping of imported dma-bufs to drm_gem_mmap_obj()
>   drm/panfrost: Put mapping instead of shmem obj on
>     panfrost_mmu_map_fault_addr() error
>   drm/panfrost: Fix shrinker list corruption by madvise IOCTL
>   drm/virtio: Correct drm_gem_shmem_get_sg_table() error handling
>   drm/virtio: Check whether transferred 2D BO is shmem
>   drm/virtio: Unlock reservations on virtio_gpu_object_shmem_init()
>     error
>   drm/virtio: Unlock reservations on dma_resv_reserve_fences() error
>   drm/virtio: Use appropriate atomic state in
>     virtio_gpu_plane_cleanup_fb()
>   drm/shmem-helper: Add missing vunmap on error
>   drm/shmem-helper: Correct doc-comment of drm_gem_shmem_get_sg_table()
>   ...

Thomas, do you think it would be possible for you to take the fix patches
1-11 into drm-fixes, or would you prefer me to re-send them separately?
The VirtIO patches 12-13 are also good to go into drm-next, IMO.

I'm going to factor out the new dma-buf convention into a separate
patchset, as suggested by Christian. But it will take me some time to get
the dma-buf patches ready, and I will also be on vacation soon. At
minimum, nothing should hold up the fixes, so it would be great if they
could land sooner.

Thank you!
On 6/28/22 15:31, Robin Murphy wrote:
> ----->8-----
> [   68.295951] ======================================================
> [   68.295956] WARNING: possible circular locking dependency detected
> [   68.295963] 5.19.0-rc3+ #400 Not tainted
> [   68.295972] ------------------------------------------------------
> [   68.295977] cc1/295 is trying to acquire lock:
> [   68.295986] ffff000008d7f1a0 (reservation_ww_class_mutex){+.+.}-{3:3}, at: drm_gem_shmem_free+0x7c/0x198
> [   68.296036]
> [   68.296036] but task is already holding lock:
> [   68.296041] ffff80000c14b820 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x4d8/0x1470
> [   68.296080]
> [   68.296080] which lock already depends on the new lock.
> [   68.296080]
> [   68.296085]
> [   68.296085] the existing dependency chain (in reverse order) is:
> [   68.296090]
> [   68.296090] -> #1 (fs_reclaim){+.+.}-{0:0}:
> [   68.296111]        fs_reclaim_acquire+0xb8/0x150
> [   68.296130]        dma_resv_lockdep+0x298/0x3fc
> [   68.296148]        do_one_initcall+0xe4/0x5f8
> [   68.296163]        kernel_init_freeable+0x414/0x49c
> [   68.296180]        kernel_init+0x2c/0x148
> [   68.296195]        ret_from_fork+0x10/0x20
> [   68.296207]
> [   68.296207] -> #0 (reservation_ww_class_mutex){+.+.}-{3:3}:
> [   68.296229]        __lock_acquire+0x1724/0x2398
> [   68.296246]        lock_acquire+0x218/0x5b0
> [   68.296260]        __ww_mutex_lock.constprop.0+0x158/0x2378
> [   68.296277]        ww_mutex_lock+0x7c/0x4d8
> [   68.296291]        drm_gem_shmem_free+0x7c/0x198
> [   68.296304]        panfrost_gem_free_object+0x118/0x138
> [   68.296318]        drm_gem_object_free+0x40/0x68
> [   68.296334]        drm_gem_shmem_shrinker_run_objects_scan+0x42c/0x5b8
> [   68.296352]        drm_gem_shmem_shrinker_scan_objects+0xa4/0x170
> [   68.296368]        do_shrink_slab+0x220/0x808
> [   68.296381]        shrink_slab+0x11c/0x408
> [   68.296392]        shrink_node+0x6ac/0xb90
> [   68.296403]        do_try_to_free_pages+0x1dc/0x8d0
> [   68.296416]        try_to_free_pages+0x1ec/0x5b0
> [   68.296429]        __alloc_pages_slowpath.constprop.0+0x528/0x1470
> [   68.296444]        __alloc_pages+0x4e0/0x5b8
> [   68.296455]        __folio_alloc+0x24/0x60
> [   68.296467]        vma_alloc_folio+0xb8/0x2f8
> [   68.296483]        alloc_zeroed_user_highpage_movable+0x58/0x68
> [   68.296498]        __handle_mm_fault+0x918/0x12a8
> [   68.296513]        handle_mm_fault+0x130/0x300
> [   68.296527]        do_page_fault+0x1d0/0x568
> [   68.296539]        do_translation_fault+0xa0/0xb8
> [   68.296551]        do_mem_abort+0x68/0xf8
> [   68.296562]        el0_da+0x74/0x100
> [   68.296572]        el0t_64_sync_handler+0x68/0xc0
> [   68.296585]        el0t_64_sync+0x18c/0x190
> [   68.296596]
> [   68.296596] other info that might help us debug this:
> [   68.296596]
> [   68.296601]  Possible unsafe locking scenario:
> [   68.296601]
> [   68.296604]        CPU0                    CPU1
> [   68.296608]        ----                    ----
> [   68.296612]   lock(fs_reclaim);
> [   68.296622]                            lock(reservation_ww_class_mutex);
> [   68.296633]                            lock(fs_reclaim);
> [   68.296644]   lock(reservation_ww_class_mutex);
> [   68.296654]
> [   68.296654]  *** DEADLOCK ***

This splat can be ignored for now. I'm aware of it, though I haven't
looked closely at how to fix it, since it's a kind of lockdep
misreporting.
Hello Robin,

On 6/28/22 15:31, Robin Murphy wrote:
>> Hello,
>>
>> This patchset introduces a memory shrinker for the VirtIO-GPU DRM driver
>> and adds memory purging and eviction support to the VirtIO-GPU driver.
>>
>> The new dma-buf locking convention is introduced here as well.
>>
>> During OOM, the shrinker will release BOs that are marked as "not needed"
>> by userspace using the new madvise IOCTL; it will also evict idling BOs
>> to swap. The userspace in this case is the Mesa VirGL driver: it will
>> mark the cached BOs as "not needed", allowing the kernel driver to
>> release the memory of the cached shmem BOs in lowmem situations,
>> preventing OOM kills.
>>
>> The Panfrost driver is switched to use the generic memory shrinker.
>
> I think we still have some outstanding issues here - Alyssa reported
> some weirdness yesterday, so I just tried provoking a low-memory
> condition locally with this series applied and a few debug options
> enabled, and the results as below were... interesting.

The warning and crash that you got are actually the minor issues. Alyssa
caught an interesting PREEMPT_DEBUG issue in the shrinker that I haven't
seen before. She is also experiencing another problem in the Panfrost
driver with bad shmem pages (I think). It is unrelated to this patchset
and apparently requires an extra setup to reproduce.
On Tue, Jun 28, 2022 at 5:51 AM Dmitry Osipenko
<dmitry.osipenko@collabora.com> wrote:
>
> On 6/28/22 15:31, Robin Murphy wrote:
> > ----->8-----
> > [   68.295951] ======================================================
> > [   68.295956] WARNING: possible circular locking dependency detected
> > [   68.295963] 5.19.0-rc3+ #400 Not tainted
> > [   68.295972] ------------------------------------------------------
> > [   68.295977] cc1/295 is trying to acquire lock:
> > [   68.295986] ffff000008d7f1a0 (reservation_ww_class_mutex){+.+.}-{3:3}, at: drm_gem_shmem_free+0x7c/0x198
> > [   68.296036]
> > [   68.296036] but task is already holding lock:
> > [   68.296041] ffff80000c14b820 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x4d8/0x1470
> >
> > [snip]
> >
> > [   68.296601]  Possible unsafe locking scenario:
> > [   68.296601]
> > [   68.296604]        CPU0                    CPU1
> > [   68.296608]        ----                    ----
> > [   68.296612]   lock(fs_reclaim);
> > [   68.296622]                            lock(reservation_ww_class_mutex);
> > [   68.296633]                            lock(fs_reclaim);
> > [   68.296644]   lock(reservation_ww_class_mutex);
> > [   68.296654]
> > [   68.296654]  *** DEADLOCK ***
>
> This splat can be ignored for now. I'm aware of it, though I haven't
> looked closely at how to fix it, since it's a kind of lockdep
> misreporting.

The lockdep splat could be fixed with something similar to what I've done
in msm, i.e. basically just not acquiring the lock in the finalizer:

https://patchwork.freedesktop.org/patch/489364/

There is one gotcha to watch for, as danvet pointed out (scan_objects()
could still see the obj in the LRU before the finalizer removes it), but
if scan_objects() does the kref_get_unless_zero() trick, it is safe.

BR,
-R