From patchwork Thu Dec 3 14:02:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 337300 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 08C74C433C1 for ; Thu, 3 Dec 2020 14:03:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A89AE20709 for ; Thu, 3 Dec 2020 14:03:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729909AbgLCODs (ORCPT ); Thu, 3 Dec 2020 09:03:48 -0500 Received: from mx2.suse.de ([195.135.220.15]:34676 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726712AbgLCODr (ORCPT ); Thu, 3 Dec 2020 09:03:47 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 7D481AD6B; Thu, 3 Dec 2020 14:03:06 +0000 (UTC) From: Thomas Zimmermann To: airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, hdegoede@redhat.com, christian.koenig@amd.com, sumit.semwal@linaro.org Cc: dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, Thomas Zimmermann Subject: [PATCH v2 1/7] drm/ast: Don't pin cursor source BO explicitly during update Date: Thu, 3 Dec 2020 15:02:53 +0100 Message-Id: <20201203140259.26580-2-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201203140259.26580-1-tzimmermann@suse.de> References: <20201203140259.26580-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Vmapping the cursor source BO contains an implicit pin operation, so there's no need to do this manually. 
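(Editorial aside, not part of the patch: at this point in the series drm_gem_vram_vmap() still pins the BO internally, so the explicit drm_gem_vram_pin()/drm_gem_vram_unpin() calls become redundant and the blit path reduces to a plain vmap/vunmap pair. A rough sketch with error handling trimmed:

	ret = drm_gem_vram_vmap(gbo, &map);	/* pins and maps the BO */
	if (ret)
		return ret;
	/* ... copy the cursor image from map.vaddr ... */
	drm_gem_vram_vunmap(gbo, &map);		/* unmaps and unpins the BO */
)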
Signed-off-by: Thomas Zimmermann --- drivers/gpu/drm/ast/ast_cursor.c | 10 +--------- 1 file changed, 1 insertion(+), 9 deletions(-) diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c index 742d43a7edf4..68bf3d33f1ed 100644 --- a/drivers/gpu/drm/ast/ast_cursor.c +++ b/drivers/gpu/drm/ast/ast_cursor.c @@ -180,12 +180,9 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) gbo = drm_gem_vram_of_gem(fb->obj[0]); - ret = drm_gem_vram_pin(gbo, 0); - if (ret) - return ret; ret = drm_gem_vram_vmap(gbo, &map); if (ret) - goto err_drm_gem_vram_unpin; + return ret; src = map.vaddr; /* TODO: Use mapping abstraction properly */ dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem; @@ -194,13 +191,8 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) update_cursor_image(dst, src, fb->width, fb->height); drm_gem_vram_vunmap(gbo, &map); - drm_gem_vram_unpin(gbo); return 0; - -err_drm_gem_vram_unpin: - drm_gem_vram_unpin(gbo); - return ret; } static void ast_cursor_set_base(struct ast_private *ast, u64 address) From patchwork Thu Dec 3 14:02:54 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 338047 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A74D1C001B0 for ; Thu, 3 Dec 2020 14:03:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5B91B206F6 for ; Thu, 3 Dec 2020 14:03:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2389070AbgLCODs (ORCPT ); Thu, 3 Dec 2020 09:03:48 -0500 Received: from mx2.suse.de ([195.135.220.15]:34710 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728122AbgLCODs (ORCPT ); Thu, 3 Dec 2020 09:03:48 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 12503AD77; Thu, 3 Dec 2020 14:03:07 +0000 (UTC) From: Thomas Zimmermann To: airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, hdegoede@redhat.com, christian.koenig@amd.com, sumit.semwal@linaro.org Cc: dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, Thomas Zimmermann Subject: [PATCH v2 2/7] drm/ast: Only map cursor BOs during updates Date: Thu, 3 Dec 2020 15:02:54 +0100 Message-Id: <20201203140259.26580-3-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201203140259.26580-1-tzimmermann@suse.de> References: <20201203140259.26580-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org The HW cursor's BO used to be mapped permanently into the kernel's address space. GEM's vmap operation will be protected by locks, and we don't want to lock the BOs for an indefinite period of time. Change the cursor code to map the HW BOs only during updates. 
The vmap operation in VRAM helpers is cheap, as a once-established mapping is reused until the BO actually moves. As the HW cursor BOs are permanently pinned, they never move at all. v2: * fix typos in commit description Signed-off-by: Thomas Zimmermann Acked-by: Christian König --- drivers/gpu/drm/ast/ast_cursor.c | 51 ++++++++++++++++++-------------- drivers/gpu/drm/ast/ast_drv.h | 2 -- 2 files changed, 28 insertions(+), 25 deletions(-) diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c index 68bf3d33f1ed..fac1ee79c372 100644 --- a/drivers/gpu/drm/ast/ast_cursor.c +++ b/drivers/gpu/drm/ast/ast_cursor.c @@ -39,7 +39,6 @@ static void ast_cursor_fini(struct ast_private *ast) for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) { gbo = ast->cursor.gbo[i]; - drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]); drm_gem_vram_unpin(gbo); drm_gem_vram_put(gbo); } @@ -53,14 +52,13 @@ static void ast_cursor_release(struct drm_device *dev, void *ptr) } /* - * Allocate cursor BOs and pins them at the end of VRAM. + * Allocate cursor BOs and pin them at the end of VRAM. */ int ast_cursor_init(struct ast_private *ast) { struct drm_device *dev = &ast->base; size_t size, i; struct drm_gem_vram_object *gbo; - struct dma_buf_map map; int ret; size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE); @@ -77,15 +75,7 @@ int ast_cursor_init(struct ast_private *ast) drm_gem_vram_put(gbo); goto err_drm_gem_vram_put; } - ret = drm_gem_vram_vmap(gbo, &map); - if (ret) { - drm_gem_vram_unpin(gbo); - drm_gem_vram_put(gbo); - goto err_drm_gem_vram_put; - } - ast->cursor.gbo[i] = gbo; - ast->cursor.map[i] = map; } return drmm_add_action_or_reset(dev, ast_cursor_release, NULL); @@ -94,7 +84,6 @@ int ast_cursor_init(struct ast_private *ast) while (i) { --i; gbo = ast->cursor.gbo[i]; - drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]); drm_gem_vram_unpin(gbo); drm_gem_vram_put(gbo); } @@ -168,31 +157,38 @@ static void update_cursor_image(u8 __iomem *dst, const u8 *src, int width, int h int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) { struct drm_device *dev = &ast->base; - struct drm_gem_vram_object *gbo; - struct dma_buf_map map; - int ret; - void *src; + struct drm_gem_vram_object *dst_gbo = ast->cursor.gbo[ast->cursor.next_index]; + struct drm_gem_vram_object *src_gbo = drm_gem_vram_of_gem(fb->obj[0]); + struct dma_buf_map src_map, dst_map; void __iomem *dst; + void *src; + int ret; if (drm_WARN_ON_ONCE(dev, fb->width > AST_MAX_HWC_WIDTH) || drm_WARN_ON_ONCE(dev, fb->height > AST_MAX_HWC_HEIGHT)) return -EINVAL; - gbo = drm_gem_vram_of_gem(fb->obj[0]); - - ret = drm_gem_vram_vmap(gbo, &map); + ret = drm_gem_vram_vmap(src_gbo, &src_map); if (ret) return ret; - src = map.vaddr; /* TODO: Use mapping abstraction properly */ + src = src_map.vaddr; /* TODO: Use mapping abstraction properly */ - dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem; + ret = drm_gem_vram_vmap(dst_gbo, &dst_map); + if (ret) + goto err_drm_gem_vram_vunmap; + dst = dst_map.vaddr_iomem; /* TODO: Use mapping abstraction properly */ /* do data transfer to cursor BO */ update_cursor_image(dst, src, fb->width, fb->height); - drm_gem_vram_vunmap(gbo, &map); + drm_gem_vram_vunmap(dst_gbo, &dst_map); + drm_gem_vram_vunmap(src_gbo, &src_map); return 0; + +err_drm_gem_vram_vunmap: + drm_gem_vram_vunmap(src_gbo, &src_map); + return ret; } static void ast_cursor_set_base(struct ast_private *ast, u64 address) @@ -243,17 +239,26 @@ static void ast_cursor_set_location(struct ast_private *ast, u16 x, 
u16 y, void ast_cursor_show(struct ast_private *ast, int x, int y, unsigned int offset_x, unsigned int offset_y) { + struct drm_device *dev = &ast->base; + struct drm_gem_vram_object *gbo = ast->cursor.gbo[ast->cursor.next_index]; + struct dma_buf_map map; u8 x_offset, y_offset; u8 __iomem *dst; u8 __iomem *sig; u8 jreg; + int ret; - dst = ast->cursor.map[ast->cursor.next_index].vaddr; + ret = drm_gem_vram_vmap(gbo, &map); + if (drm_WARN_ONCE(dev, ret, "drm_gem_vram_vmap() failed, ret=%d\n", ret)) + return; + dst = map.vaddr_iomem; /* TODO: Use mapping abstraction properly */ sig = dst + AST_HWC_SIZE; writel(x, sig + AST_HWC_SIGNATURE_X); writel(y, sig + AST_HWC_SIGNATURE_Y); + drm_gem_vram_vunmap(gbo, &map); + if (x < 0) { x_offset = (-x) + offset_x; x = 0; diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h index ccaff81924ee..f871fc36c2f7 100644 --- a/drivers/gpu/drm/ast/ast_drv.h +++ b/drivers/gpu/drm/ast/ast_drv.h @@ -28,7 +28,6 @@ #ifndef __AST_DRV_H__ #define __AST_DRV_H__ -#include #include #include #include @@ -133,7 +132,6 @@ struct ast_private { struct { struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM]; - struct dma_buf_map map[AST_DEFAULT_HWC_NUM]; unsigned int next_index; } cursor; From patchwork Thu Dec 3 14:02:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 337298 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E8B1FC001B1 for ; Thu, 3 Dec 2020 14:03:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B3A69207A4 for ; Thu, 3 Dec 2020 14:03:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2389091AbgLCODu (ORCPT ); Thu, 3 Dec 2020 09:03:50 -0500 Received: from mx2.suse.de ([195.135.220.15]:34738 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388443AbgLCODt (ORCPT ); Thu, 3 Dec 2020 09:03:49 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 9CC83AD8A; Thu, 3 Dec 2020 14:03:07 +0000 (UTC) From: Thomas Zimmermann To: airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, hdegoede@redhat.com, christian.koenig@amd.com, sumit.semwal@linaro.org Cc: dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, Thomas Zimmermann Subject: [PATCH v2 3/7] drm/vram-helper: Move BO locking from vmap code into callers Date: Thu, 3 Dec 2020 15:02:55 +0100 Message-Id: <20201203140259.26580-4-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201203140259.26580-1-tzimmermann@suse.de> References: <20201203140259.26580-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Implementations of the vmap/vunmap callbacks may expect that the caller holds the reservation lock. Therefore push the locking from vmap and vunmap into the callers. 
This affects fbdev emulation, and cursor updates in ast and vboxvideo. Ast and vboxvideo acquire the BO's reservation lock directly. Fbdev emulation uses DRM client helpers for locking. This is solely done for consistency with the rest of the interface. Fbdev emulation tries to avoid calling GEM interfaces. Signed-off-by: Thomas Zimmermann --- drivers/gpu/drm/ast/ast_cursor.c | 21 ++++++++++++++++-- drivers/gpu/drm/drm_client.c | 31 +++++++++++++++++++++++++++ drivers/gpu/drm/drm_fb_helper.c | 10 +++++++-- drivers/gpu/drm/drm_gem_vram_helper.c | 18 +++------------- drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 ++++++---- include/drm/drm_client.h | 2 ++ 6 files changed, 70 insertions(+), 23 deletions(-) diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c index fac1ee79c372..15e5c4fd301d 100644 --- a/drivers/gpu/drm/ast/ast_cursor.c +++ b/drivers/gpu/drm/ast/ast_cursor.c @@ -159,6 +159,8 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) struct drm_device *dev = &ast->base; struct drm_gem_vram_object *dst_gbo = ast->cursor.gbo[ast->cursor.next_index]; struct drm_gem_vram_object *src_gbo = drm_gem_vram_of_gem(fb->obj[0]); + struct drm_gem_object *objs[] = {&src_gbo->bo.base, &dst_gbo->bo.base}; + struct ww_acquire_ctx ctx; struct dma_buf_map src_map, dst_map; void __iomem *dst; void *src; @@ -168,9 +170,13 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) drm_WARN_ON_ONCE(dev, fb->height > AST_MAX_HWC_HEIGHT)) return -EINVAL; - ret = drm_gem_vram_vmap(src_gbo, &src_map); + ret = drm_gem_lock_reservations(objs, ARRAY_SIZE(objs), &ctx); if (ret) return ret; + + ret = drm_gem_vram_vmap(src_gbo, &src_map); + if (ret) + goto err_drm_gem_unlock_reservations; src = src_map.vaddr; /* TODO: Use mapping abstraction properly */ ret = drm_gem_vram_vmap(dst_gbo, &dst_map); @@ -184,10 +190,14 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) drm_gem_vram_vunmap(dst_gbo, &dst_map); drm_gem_vram_vunmap(src_gbo, &src_map); + drm_gem_unlock_reservations(objs, ARRAY_SIZE(objs), &ctx); + return 0; err_drm_gem_vram_vunmap: drm_gem_vram_vunmap(src_gbo, &src_map); +err_drm_gem_unlock_reservations: + drm_gem_unlock_reservations(objs, ARRAY_SIZE(objs), &ctx); return ret; } @@ -241,6 +251,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y, { struct drm_device *dev = &ast->base; struct drm_gem_vram_object *gbo = ast->cursor.gbo[ast->cursor.next_index]; + struct drm_gem_object *obj = &gbo->bo.base; struct dma_buf_map map; u8 x_offset, y_offset; u8 __iomem *dst; @@ -248,9 +259,14 @@ void ast_cursor_show(struct ast_private *ast, int x, int y, u8 jreg; int ret; + ret = dma_resv_lock(obj->resv, NULL); + if (ret) + return; ret = drm_gem_vram_vmap(gbo, &map); - if (drm_WARN_ONCE(dev, ret, "drm_gem_vram_vmap() failed, ret=%d\n", ret)) + if (drm_WARN_ONCE(dev, ret, "drm_gem_vram_vmap() failed, ret=%d\n", ret)) { + dma_resv_unlock(obj->resv); return; + } dst = map.vaddr_iomem; /* TODO: Use mapping abstraction properly */ sig = dst + AST_HWC_SIZE; @@ -258,6 +274,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y, writel(y, sig + AST_HWC_SIGNATURE_Y); drm_gem_vram_vunmap(gbo, &map); + dma_resv_unlock(obj->resv); if (x < 0) { x_offset = (-x) + offset_x; diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c index ce45e380f4a2..82453ca0b3ec 100644 --- a/drivers/gpu/drm/drm_client.c +++ b/drivers/gpu/drm/drm_client.c @@ -288,6 +288,37 @@ drm_client_buffer_create(struct drm_client_dev 
*client, u32 width, u32 height, u return ERR_PTR(ret); } +/** + * drm_client_buffer_lock - Locks the DRM client buffer + * @buffer: DRM client buffer + * + * This function locks the client buffer by acquiring the buffer + * object's reservation lock. + * + * Unlock the buffer with drm_client_buffer_unlock(). + * + * Returns: + * 0 on success, or a negative errno code otherwise. + */ +int +drm_client_buffer_lock(struct drm_client_buffer *buffer) +{ + return dma_resv_lock(buffer->gem->resv, NULL); +} +EXPORT_SYMBOL(drm_client_buffer_lock); + +/** + * drm_client_buffer_unlock - Unlock DRM client buffer + * @buffer: DRM client buffer + * + * Unlocks a client buffer. See drm_client_buffer_lock(). + */ +void drm_client_buffer_unlock(struct drm_client_buffer *buffer) +{ + dma_resv_unlock(buffer->gem->resv); +} +EXPORT_SYMBOL(drm_client_buffer_unlock); + /** * drm_client_buffer_vmap - Map DRM client buffer into address space * @buffer: DRM client buffer diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c index 4b8119510687..97856d9194de 100644 --- a/drivers/gpu/drm/drm_fb_helper.c +++ b/drivers/gpu/drm/drm_fb_helper.c @@ -411,16 +411,22 @@ static int drm_fb_helper_damage_blit(struct drm_fb_helper *fb_helper, */ mutex_lock(&fb_helper->lock); + ret = drm_client_buffer_lock(buffer); + if (ret) + goto out_mutex_unlock; + ret = drm_client_buffer_vmap(buffer, &map); if (ret) - goto out; + goto out_drm_client_buffer_unlock; dst = map; drm_fb_helper_damage_blit_real(fb_helper, clip, &dst); drm_client_buffer_vunmap(buffer); -out: +out_drm_client_buffer_unlock: + drm_client_buffer_unlock(buffer); +out_mutex_unlock: mutex_unlock(&fb_helper->lock); return ret; diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index 02ca22e90290..35a30dafccce 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -440,25 +440,19 @@ int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) { int ret; - ret = ttm_bo_reserve(&gbo->bo, true, false, NULL); - if (ret) - return ret; + dma_resv_assert_held(gbo->bo.base.resv); ret = drm_gem_vram_pin_locked(gbo, 0); if (ret) - goto err_ttm_bo_unreserve; + return ret; ret = drm_gem_vram_kmap_locked(gbo, map); if (ret) goto err_drm_gem_vram_unpin_locked; - ttm_bo_unreserve(&gbo->bo); - return 0; err_drm_gem_vram_unpin_locked: drm_gem_vram_unpin_locked(gbo); -err_ttm_bo_unreserve: - ttm_bo_unreserve(&gbo->bo); return ret; } EXPORT_SYMBOL(drm_gem_vram_vmap); @@ -473,16 +467,10 @@ EXPORT_SYMBOL(drm_gem_vram_vmap); */ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) { - int ret; - - ret = ttm_bo_reserve(&gbo->bo, false, false, NULL); - if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret)) - return; + dma_resv_assert_held(gbo->bo.base.resv); drm_gem_vram_kunmap_locked(gbo, map); drm_gem_vram_unpin_locked(gbo); - - ttm_bo_unreserve(&gbo->bo); } EXPORT_SYMBOL(drm_gem_vram_vunmap); diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c index dbc0dd53c69e..8b1a8522144e 100644 --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c @@ -381,7 +381,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, container_of(plane->dev, struct vbox_private, ddev); struct vbox_crtc *vbox_crtc = to_vbox_crtc(plane->state->crtc); struct drm_framebuffer *fb = plane->state->fb; - struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(fb->obj[0]); + struct 
drm_gem_object *obj = fb->obj[0]; + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(obj); u32 width = plane->state->crtc_w; u32 height = plane->state->crtc_h; size_t data_size, mask_size; @@ -401,11 +402,12 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, vbox_crtc->cursor_enabled = true; + ret = dma_resv_lock(obj->resv, NULL); + if (ret) + return; ret = drm_gem_vram_vmap(gbo, &map); if (ret) { - /* - * BUG: we should have pinned the BO in prepare_fb(). - */ + dma_resv_unlock(obj->resv); mutex_unlock(&vbox->hw_mutex); DRM_WARN("Could not map cursor bo, skipping update\n"); return; @@ -422,6 +424,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, copy_cursor_image(src, vbox->cursor_data, width, height, mask_size); drm_gem_vram_vunmap(gbo, &map); + dma_resv_unlock(obj->resv); flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE | VBOX_MOUSE_POINTER_ALPHA; diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h index f07f2fb02e75..1cf811471fc4 100644 --- a/include/drm/drm_client.h +++ b/include/drm/drm_client.h @@ -156,6 +156,8 @@ struct drm_client_buffer * drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format); void drm_client_framebuffer_delete(struct drm_client_buffer *buffer); int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect); +int drm_client_buffer_lock(struct drm_client_buffer *buffer); +void drm_client_buffer_unlock(struct drm_client_buffer *buffer); int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map); void drm_client_buffer_vunmap(struct drm_client_buffer *buffer); From patchwork Thu Dec 3 14:02:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 337299 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89F9BC00130 for ; Thu, 3 Dec 2020 14:03:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2CEBD2079A for ; Thu, 3 Dec 2020 14:03:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2436468AbgLCODt (ORCPT ); Thu, 3 Dec 2020 09:03:49 -0500 Received: from mx2.suse.de ([195.135.220.15]:34768 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389078AbgLCODt (ORCPT ); Thu, 3 Dec 2020 09:03:49 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 36AC4AD8D; Thu, 3 Dec 2020 14:03:08 +0000 (UTC) From: Thomas Zimmermann To: airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, hdegoede@redhat.com, christian.koenig@amd.com, sumit.semwal@linaro.org Cc: dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, Thomas Zimmermann Subject: [PATCH v2 4/7] drm/vram-helper: Remove pinning from drm_gem_vram_{vmap, vunmap}() Date: Thu, 3 Dec 2020 15:02:56 +0100 Message-Id: 
<20201203140259.26580-5-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201203140259.26580-1-tzimmermann@suse.de> References: <20201203140259.26580-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org BO pinning was never meant to be part of a GEM object's vmap operation. Remove it from the related code in VRAM helpers. Signed-off-by: Thomas Zimmermann --- drivers/gpu/drm/drm_gem_vram_helper.c | 16 +--------------- 1 file changed, 1 insertion(+), 15 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index 35a30dafccce..760d77c6c3c0 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -438,22 +438,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo, */ int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) { - int ret; - dma_resv_assert_held(gbo->bo.base.resv); - ret = drm_gem_vram_pin_locked(gbo, 0); - if (ret) - return ret; - ret = drm_gem_vram_kmap_locked(gbo, map); - if (ret) - goto err_drm_gem_vram_unpin_locked; - - return 0; - -err_drm_gem_vram_unpin_locked: - drm_gem_vram_unpin_locked(gbo); - return ret; + return drm_gem_vram_kmap_locked(gbo, map); } EXPORT_SYMBOL(drm_gem_vram_vmap); @@ -470,7 +457,6 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *ma dma_resv_assert_held(gbo->bo.base.resv); drm_gem_vram_kunmap_locked(gbo, map); - drm_gem_vram_unpin_locked(gbo); } EXPORT_SYMBOL(drm_gem_vram_vunmap); From patchwork Thu Dec 3 14:02:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 337297 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6FC7EC001B0 for ; Thu, 3 Dec 2020 14:04:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1F5D3206D8 for ; Thu, 3 Dec 2020 14:04:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730609AbgLCOEa (ORCPT ); Thu, 3 Dec 2020 09:04:30 -0500 Received: from mx2.suse.de ([195.135.220.15]:35302 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726712AbgLCOE3 (ORCPT ); Thu, 3 Dec 2020 09:04:29 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id C2D9BADB3; Thu, 3 Dec 2020 14:03:08 +0000 (UTC) From: Thomas Zimmermann To: airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, hdegoede@redhat.com, christian.koenig@amd.com, sumit.semwal@linaro.org Cc: dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, Thomas Zimmermann Subject: [PATCH v2 5/7] drm/vram-helper: Remove vmap reference counting Date: Thu, 3 Dec 2020 15:02:57 +0100 Message-Id: <20201203140259.26580-6-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: 
<20201203140259.26580-1-tzimmermann@suse.de> References: <20201203140259.26580-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Overlapping or nested mappings of the same BO are not allowed by the semantics of the GEM vmap/vunmap operations. Concurrent access to the GEM object is prevented by reservation locks. So we don't need the reference counter in the GEM VRAM object. Remove it. Signed-off-by: Thomas Zimmermann --- drivers/gpu/drm/drm_gem_vram_helper.c | 19 ++++--------------- include/drm/drm_gem_vram_helper.h | 17 +++-------------- 2 files changed, 7 insertions(+), 29 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index 760d77c6c3c0..276e8f8ea663 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -113,7 +113,6 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo) * up; only release the GEM object. */ - WARN_ON(gbo->vmap_use_count); WARN_ON(dma_buf_map_is_set(&gbo->map)); drm_gem_object_release(&gbo->bo.base); @@ -384,15 +383,10 @@ static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, { int ret; - if (gbo->vmap_use_count > 0) - goto out; - ret = ttm_bo_vmap(&gbo->bo, &gbo->map); if (ret) return ret; -out: - ++gbo->vmap_use_count; *map = gbo->map; return 0; @@ -403,15 +397,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo, { struct drm_device *dev = gbo->bo.base.dev; - if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count)) - return; - if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map))) return; /* BUG: map not mapped from this BO */ - if (--gbo->vmap_use_count > 0) - return; - /* * Permanently mapping and unmapping buffers adds overhead from * updating the page tables and creates debugging output. Therefore, @@ -545,12 +533,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo, struct ttm_resource *new_mem) { struct ttm_buffer_object *bo = &gbo->bo; - struct drm_device *dev = bo->base.dev; + struct dma_buf_map *map = &gbo->map; - if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count)) + if (dma_buf_map_is_null(map)) return; - ttm_bo_vunmap(bo, &gbo->map); + ttm_bo_vunmap(bo, map); + dma_buf_map_clear(map); } static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo, diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h index a4bac02249c2..48af238b5ca9 100644 --- a/include/drm/drm_gem_vram_helper.h +++ b/include/drm/drm_gem_vram_helper.h @@ -41,25 +41,14 @@ struct vm_area_struct; * dedicated memory. The buffer object can be evicted to system memory if * video memory becomes scarce. * - * GEM VRAM objects perform reference counting for pin and mapping - * operations. So a buffer object that has been pinned N times with - * drm_gem_vram_pin() must be unpinned N times with - * drm_gem_vram_unpin(). The same applies to pairs of - * drm_gem_vram_kmap() and drm_gem_vram_kunmap(), as well as pairs of - * drm_gem_vram_vmap() and drm_gem_vram_vunmap(). + * GEM VRAM objects perform reference counting for pin operations. So a + * buffer object that has been pinned N times with drm_gem_vram_pin() must + * be unpinned N times with drm_gem_vram_unpin(). */ struct drm_gem_vram_object { struct ttm_buffer_object bo; struct dma_buf_map map; - /** - * @vmap_use_count: - * - * Reference count on the virtual address. - * The address are un-mapped when the count reaches zero. 
- */ - unsigned int vmap_use_count; - /* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */ struct ttm_placement placement; struct ttm_place placements[2]; From patchwork Thu Dec 3 14:02:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 338045 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3E809C0018C for ; Thu, 3 Dec 2020 14:04:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E377D20709 for ; Thu, 3 Dec 2020 14:04:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730737AbgLCOEa (ORCPT ); Thu, 3 Dec 2020 09:04:30 -0500 Received: from mx2.suse.de ([195.135.220.15]:35304 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727242AbgLCOE3 (ORCPT ); Thu, 3 Dec 2020 09:04:29 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 5A9A1ADC1; Thu, 3 Dec 2020 14:03:09 +0000 (UTC) From: Thomas Zimmermann To: airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, hdegoede@redhat.com, christian.koenig@amd.com, sumit.semwal@linaro.org Cc: dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, Thomas Zimmermann Subject: [PATCH v2 6/7] drm/vram-helper: Simplify vmap implementation Date: Thu, 3 Dec 2020 15:02:58 +0100 Message-Id: <20201203140259.26580-7-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201203140259.26580-1-tzimmermann@suse.de> References: <20201203140259.26580-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org After removing the pinning operations, the vmap/vunmap code has been reduced to what used to be an internal helper. Inline the helper to simplify the implementation. 
Signed-off-by: Thomas Zimmermann Acked-by: Christian König --- drivers/gpu/drm/drm_gem_vram_helper.c | 52 +++++++++++---------------- 1 file changed, 20 insertions(+), 32 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index 276e8f8ea663..6159f5dc8f1f 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -378,36 +378,6 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) } EXPORT_SYMBOL(drm_gem_vram_unpin); -static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, - struct dma_buf_map *map) -{ - int ret; - - ret = ttm_bo_vmap(&gbo->bo, &gbo->map); - if (ret) - return ret; - - *map = gbo->map; - - return 0; -} - -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo, - struct dma_buf_map *map) -{ - struct drm_device *dev = gbo->bo.base.dev; - - if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map))) - return; /* BUG: map not mapped from this BO */ - - /* - * Permanently mapping and unmapping buffers adds overhead from - * updating the page tables and creates debugging output. Therefore, - * we delay the actual unmap operation until the BO gets evicted - * from memory. See drm_gem_vram_bo_driver_move_notify(). - */ -} - /** * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address * space @@ -426,9 +396,17 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo, */ int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) { + int ret; + dma_resv_assert_held(gbo->bo.base.resv); - return drm_gem_vram_kmap_locked(gbo, map); + ret = ttm_bo_vmap(&gbo->bo, &gbo->map); + if (ret) + return ret; + + *map = gbo->map; + + return 0; } EXPORT_SYMBOL(drm_gem_vram_vmap); @@ -442,9 +420,19 @@ EXPORT_SYMBOL(drm_gem_vram_vmap); */ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) { + struct drm_device *dev = gbo->bo.base.dev; + dma_resv_assert_held(gbo->bo.base.resv); - drm_gem_vram_kunmap_locked(gbo, map); + if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map))) + return; /* BUG: map not mapped from this BO */ + + /* + * Permanently mapping and unmapping buffers adds overhead from + * updating the page tables and creates debugging output. Therefore, + * we delay the actual unmap operation until the BO gets evicted + * from memory. See drm_gem_vram_bo_driver_move_notify(). 
+ */ } EXPORT_SYMBOL(drm_gem_vram_vunmap); From patchwork Thu Dec 3 14:02:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Thomas Zimmermann X-Patchwork-Id: 338046 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0D19FC433C1 for ; Thu, 3 Dec 2020 14:04:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B6443206F9 for ; Thu, 3 Dec 2020 14:04:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730572AbgLCOE3 (ORCPT ); Thu, 3 Dec 2020 09:04:29 -0500 Received: from mx2.suse.de ([195.135.220.15]:35306 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726112AbgLCOE3 (ORCPT ); Thu, 3 Dec 2020 09:04:29 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id E2A74ADD7; Thu, 3 Dec 2020 14:03:09 +0000 (UTC) From: Thomas Zimmermann To: airlied@redhat.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, mripard@kernel.org, hdegoede@redhat.com, christian.koenig@amd.com, sumit.semwal@linaro.org Cc: dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, Thomas Zimmermann Subject: [PATCH v2 7/7] dma-buf: Write down some rules for vmap usage Date: Thu, 3 Dec 2020 15:02:59 +0100 Message-Id: <20201203140259.26580-8-tzimmermann@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201203140259.26580-1-tzimmermann@suse.de> References: <20201203140259.26580-1-tzimmermann@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Dma-buf's vmap and vunmap callbacks are undocumented and various exporters currently have slightly different semantics for them. Add documentation on how to implement and use these interfaces correctly. v2: * document vmap semantics in struct dma_buf_ops * add TODO item for reviewing and maybe fixing dma-buf exporters Signed-off-by: Thomas Zimmermann --- Documentation/gpu/todo.rst | 15 +++++++++++++ include/drm/drm_gem.h | 4 ++++ include/linux/dma-buf.h | 45 ++++++++++++++++++++++++++++++++++++++ 3 files changed, 64 insertions(+) diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst index 009d8e6c7e3c..32bb797a84fc 100644 --- a/Documentation/gpu/todo.rst +++ b/Documentation/gpu/todo.rst @@ -505,6 +505,21 @@ Contact: Thomas Zimmermann , Christian König, Daniel Vette Level: Intermediate +Enforce rules for dma-buf vmap and pin ops +------------------------------------------ + +Exporter implementations of vmap and pin in struct dma_buf_ops (and consequently +struct drm_gem_object_funcs) use a variety of locking semantics. Some rely on +the caller holding the dma-buf's reservation lock, some do their own locking, +some don't require any locking. VRAM helpers even used to pin as part of vmap. + +We need to review each exporter and enforce the documented rules. 
+ +Contact: Christian König, Daniel Vetter, Thomas Zimmermann + +Level: Advanced + + Core refactorings ================= diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index 5e6daa1c982f..1864c6a721b1 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -138,6 +138,8 @@ struct drm_gem_object_funcs { * drm_gem_dmabuf_vmap() helper. * * This callback is optional. + * + * See also struct dma_buf_ops.vmap */ int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map); @@ -148,6 +150,8 @@ struct drm_gem_object_funcs { * drm_gem_dmabuf_vunmap() helper. * * This callback is optional. + * + * See also struct dma_buf_ops.vunmap */ void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map); diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index cf72699cb2bc..dc81fdc01dda 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -267,7 +267,52 @@ struct dma_buf_ops { */ int (*mmap)(struct dma_buf *, struct vm_area_struct *vma); + /** + * @vmap: + * + * Returns a virtual address for the buffer. + * + * Notes to callers: + * + * - Callers must hold the struct dma_buf.resv lock before calling + * this interface. + * + * - Callers must provide means to prevent the mappings from going + * stale, such as holding the reservation lock or providing a + * move-notify callback to the exporter. + * + * Notes to implementors: + * + * - Implementations must expect pairs of @vmap and @vunmap to be + * called frequently and should optimize for this case. + * + * - Implementations should avoid additional operations, such as + * pinning. + * + * - Implementations may expect the caller to hold the dma-buf's + * reservation lock to protect against concurrent calls and + * relocation. + * + * - Implementations may provide additional guarantees, such as working + * without holding the reservation lock. + * + * This callback is optional. + * + * Returns: + * + * 0 on success or a negative error code on failure. + */ int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map); + + /** + * @vunmap: + * + * Releases the address previously returned by @vmap. + * + * This callback is optional. + * + * See also @vmap() + */ void (*vunmap)(struct dma_buf *dmabuf, struct dma_buf_map *map); };
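
(Editorial aside, not part of the patch: a hypothetical importer-side helper that follows the rules documented above could look roughly like the sketch below. example_read_first_byte() is an invented name used purely for illustration, and a real user would add proper error handling and driver context.)

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/io.h>

/* Sketch: read the first byte of a dma-buf while holding its reservation lock. */
static int example_read_first_byte(struct dma_buf *dmabuf, u8 *out)
{
	struct dma_buf_map map;
	int ret;

	/* Per the documented rules, callers hold the reservation lock around vmap/vunmap. */
	ret = dma_resv_lock(dmabuf->resv, NULL);
	if (ret)
		return ret;

	ret = dma_buf_vmap(dmabuf, &map);
	if (ret)
		goto out_unlock;

	/* The mapping may refer to I/O memory or to system memory. */
	if (map.is_iomem)
		*out = readb(map.vaddr_iomem);
	else
		*out = *(const u8 *)map.vaddr;

	dma_buf_vunmap(dmabuf, &map);

out_unlock:
	dma_resv_unlock(dmabuf->resv);
	return ret;
}

Holding the reservation lock for the whole access also satisfies the requirement that the mapping must not go stale while it is in use.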