From patchwork Mon Apr 5 17:45:24 2021
From: Rob Clark <robdclark@gmail.com>
To: dri-devel@lists.freedesktop.org
Cc: Jordan Crouse, Sean Paul, David Airlie, Daniel Vetter,
    linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 1/8] drm/msm: ratelimit GEM related WARN_ON()s
Date: Mon, 5 Apr 2021 10:45:24 -0700
Message-Id: <20210405174532.1441497-2-robdclark@gmail.com>
In-Reply-To: <20210405174532.1441497-1-robdclark@gmail.com>
X-Patchwork-Id: 415384

From: Rob Clark

If you mess something up, you don't really need to see the same
WARN_ON() splat 4000 times pumped out over a slow debug UART port.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c | 66 +++++++++++++++++------------------
 drivers/gpu/drm/msm/msm_gem.h | 19 ++++++----
 2 files changed, 45 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 4e91b095ab77..d5abe8aa9978 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -96,7 +96,7 @@ static struct page **get_pages(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
-	WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
 
 	if (!msm_obj->pages) {
 		struct drm_device *dev = obj->dev;
@@ -180,7 +180,7 @@ struct page **msm_gem_get_pages(struct drm_gem_object *obj)
 
 	msm_gem_lock(obj);
 
-	if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) {
+	if (GEM_WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) {
 		msm_gem_unlock(obj);
 		return ERR_PTR(-EBUSY);
 	}
@@ -256,7 +256,7 @@ static vm_fault_t msm_gem_fault(struct vm_fault *vmf)
 		goto out;
 	}
 
-	if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) {
+	if (GEM_WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) {
 		msm_gem_unlock(obj);
 		return VM_FAULT_SIGBUS;
 	}
@@ -289,7 +289,7 @@ static uint64_t mmap_offset(struct drm_gem_object *obj)
 	struct drm_device *dev = obj->dev;
 	int ret;
 
-	WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
 
 	/* Make it mmapable */
 	ret = drm_gem_create_mmap_offset(obj);
@@ -318,7 +318,7 @@ static struct msm_gem_vma *add_vma(struct drm_gem_object *obj,
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct msm_gem_vma *vma;
 
-	WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
 
 	vma = kzalloc(sizeof(*vma), GFP_KERNEL);
 	if (!vma)
@@ -337,7 +337,7 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct msm_gem_vma *vma;
 
-	WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
 
 	list_for_each_entry(vma, &msm_obj->vmas, list) {
 		if (vma->aspace == aspace)
@@ -363,7 +363,7 @@ put_iova_spaces(struct drm_gem_object *obj)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct msm_gem_vma *vma;
 
-	WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
 
 	list_for_each_entry(vma, &msm_obj->vmas, list) {
 		if (vma->aspace) {
@@ -380,7 +380,7 @@ put_iova_vmas(struct drm_gem_object *obj)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct msm_gem_vma *vma, *tmp;
 
-	WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
 
 	list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
 		del_vma(vma);
@@ -394,7 +394,7 @@ static int get_iova_locked(struct drm_gem_object *obj,
 	struct msm_gem_vma *vma;
 	int ret = 0;
 
-	WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
 
 	vma = lookup_vma(obj, aspace);
 
@@ -429,13 +429,13 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj,
 	if (msm_obj->flags & MSM_BO_MAP_PRIV)
 		prot |= IOMMU_PRIV;
 
-	WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
 
-	if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED))
+	if (GEM_WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED))
 		return -EBUSY;
 
 	vma = lookup_vma(obj, aspace);
-	if (WARN_ON(!vma))
+	if (GEM_WARN_ON(!vma))
 		return -EINVAL;
 
 	pages = get_pages(obj);
@@ -453,7 +453,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
 	u64 local;
 	int ret;
 
-	WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
 
 	ret = get_iova_locked(obj, aspace, &local,
 		range_start, range_end);
@@ -524,7 +524,7 @@ uint64_t msm_gem_iova(struct drm_gem_object *obj,
 	msm_gem_lock(obj);
 	vma = lookup_vma(obj, aspace);
 	msm_gem_unlock(obj);
-	WARN_ON(!vma);
+	GEM_WARN_ON(!vma);
 
 	return vma ? vma->iova : 0;
 }
@@ -537,11 +537,11 @@ void msm_gem_unpin_iova_locked(struct drm_gem_object *obj,
 {
 	struct msm_gem_vma *vma;
 
-	WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
 
 	vma = lookup_vma(obj, aspace);
 
-	if (!WARN_ON(!vma))
+	if (!GEM_WARN_ON(!vma))
 		msm_gem_unmap_vma(aspace, vma);
 }
@@ -593,12 +593,12 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	int ret = 0;
 
-	WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
 
 	if (obj->import_attach)
 		return ERR_PTR(-ENODEV);
 
-	if (WARN_ON(msm_obj->madv > madv)) {
+	if (GEM_WARN_ON(msm_obj->madv > madv)) {
 		DRM_DEV_ERROR(obj->dev->dev, "Invalid madv state: %u vs %u\n",
 			msm_obj->madv, madv);
 		return ERR_PTR(-EBUSY);
@@ -664,8 +664,8 @@ void msm_gem_put_vaddr_locked(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
-	WARN_ON(!msm_gem_is_locked(obj));
-	WARN_ON(msm_obj->vmap_count < 1);
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(msm_obj->vmap_count < 1);
 
 	msm_obj->vmap_count--;
 }
@@ -707,8 +707,8 @@ void msm_gem_purge(struct drm_gem_object *obj)
 	struct drm_device *dev = obj->dev;
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
-	WARN_ON(!is_purgeable(msm_obj));
-	WARN_ON(obj->import_attach);
+	GEM_WARN_ON(!is_purgeable(msm_obj));
+	GEM_WARN_ON(obj->import_attach);
 
 	put_iova_spaces(obj);
 
@@ -739,9 +739,9 @@ void msm_gem_vunmap(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
-	WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
 
-	if (!msm_obj->vaddr || WARN_ON(!is_vunmapable(msm_obj)))
+	if (!msm_obj->vaddr || GEM_WARN_ON(!is_vunmapable(msm_obj)))
 		return;
 
 	vunmap(msm_obj->vaddr);
@@ -789,9 +789,9 @@ void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu)
 	struct msm_drm_private *priv = obj->dev->dev_private;
 
 	might_sleep();
-	WARN_ON(!msm_gem_is_locked(obj));
-	WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED);
-	WARN_ON(msm_obj->dontneed);
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED);
+	GEM_WARN_ON(msm_obj->dontneed);
 
 	if (msm_obj->active_count++ == 0) {
 		mutex_lock(&priv->mm_lock);
@@ -806,7 +806,7 @@ void msm_gem_active_put(struct drm_gem_object *obj)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
 	might_sleep();
-	WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
 
 	if (--msm_obj->active_count == 0) {
 		update_inactive(msm_obj);
@@ -818,7 +818,7 @@ static void update_inactive(struct msm_gem_object *msm_obj)
 	struct msm_drm_private *priv = msm_obj->base.dev->dev_private;
 
 	mutex_lock(&priv->mm_lock);
-	WARN_ON(msm_obj->active_count != 0);
+	GEM_WARN_ON(msm_obj->active_count != 0);
 
 	if (msm_obj->dontneed)
 		mark_unpurgable(msm_obj);
@@ -830,7 +830,7 @@ static void update_inactive(struct msm_gem_object *msm_obj)
 		list_add_tail(&msm_obj->mm_list, &priv->inactive_dontneed);
 		mark_purgable(msm_obj);
 	} else {
-		WARN_ON(msm_obj->madv != __MSM_MADV_PURGED);
+		GEM_WARN_ON(msm_obj->madv != __MSM_MADV_PURGED);
 		list_add_tail(&msm_obj->mm_list, &priv->inactive_purged);
 	}
 
@@ -1010,12 +1010,12 @@ void msm_gem_free_object(struct drm_gem_object *obj)
 	msm_gem_lock(obj);
 
 	/* object should not be on active list: */
-	WARN_ON(is_active(msm_obj));
+	GEM_WARN_ON(is_active(msm_obj));
 
 	put_iova_spaces(obj);
 
 	if (obj->import_attach) {
-		WARN_ON(msm_obj->vaddr);
+		GEM_WARN_ON(msm_obj->vaddr);
 
 		/* Don't drop the pages for imported dmabuf, as they are not
 		 * ours, just free the array we allocated:
@@ -1131,7 +1131,7 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
 	else if ((flags & (MSM_BO_STOLEN | MSM_BO_SCANOUT)) && priv->vram.size)
 		use_vram = true;
 
-	if (WARN_ON(use_vram && !priv->vram.size))
+	if (GEM_WARN_ON(use_vram && !priv->vram.size))
 		return ERR_PTR(-EINVAL);
 
 	/* Disallow zero sized objects as they make the underlying
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 7c7d54bad189..917af526a5c5 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -11,6 +11,11 @@
 #include <linux/dma_resv.h>
 #include "msm_drv.h"
 
+/* Make all GEM related WARN_ON()s ratelimited.. when things go wrong they
+ * tend to go wrong 1000s of times in a short timespan.
+ */
+#define GEM_WARN_ON(x)  WARN_RATELIMIT(x, "%s", __stringify(x))
+
 /* Additional internal-use only BO flags: */
 #define MSM_BO_STOLEN        0x10000000    /* try to use stolen/splash memory */
 #define MSM_BO_MAP_PRIV      0x20000000    /* use IOMMU_PRIV when mapping */
@@ -203,7 +208,7 @@ msm_gem_is_locked(struct drm_gem_object *obj)
 
 static inline bool is_active(struct msm_gem_object *msm_obj)
 {
-	WARN_ON(!msm_gem_is_locked(&msm_obj->base));
+	GEM_WARN_ON(!msm_gem_is_locked(&msm_obj->base));
 	return msm_obj->active_count;
 }
 
@@ -221,7 +226,7 @@ static inline bool is_purgeable(struct msm_gem_object *msm_obj)
 
 static inline bool is_vunmapable(struct msm_gem_object *msm_obj)
 {
-	WARN_ON(!msm_gem_is_locked(&msm_obj->base));
+	GEM_WARN_ON(!msm_gem_is_locked(&msm_obj->base));
 	return (msm_obj->vmap_count == 0) && msm_obj->vaddr;
 }
 
@@ -229,12 +234,12 @@ static inline void mark_purgable(struct msm_gem_object *msm_obj)
 {
 	struct msm_drm_private *priv = msm_obj->base.dev->dev_private;
 
-	WARN_ON(!mutex_is_locked(&priv->mm_lock));
+	GEM_WARN_ON(!mutex_is_locked(&priv->mm_lock));
 
 	if (is_unpurgable(msm_obj))
 		return;
 
-	if (WARN_ON(msm_obj->dontneed))
+	if (GEM_WARN_ON(msm_obj->dontneed))
 		return;
 
 	priv->shrinkable_count += msm_obj->base.size >> PAGE_SHIFT;
@@ -245,16 +250,16 @@ static inline void mark_unpurgable(struct msm_gem_object *msm_obj)
 {
 	struct msm_drm_private *priv = msm_obj->base.dev->dev_private;
 
-	WARN_ON(!mutex_is_locked(&priv->mm_lock));
+	GEM_WARN_ON(!mutex_is_locked(&priv->mm_lock));
 
 	if (is_unpurgable(msm_obj))
 		return;
 
-	if (WARN_ON(!msm_obj->dontneed))
+	if (GEM_WARN_ON(!msm_obj->dontneed))
 		return;
 
 	priv->shrinkable_count -= msm_obj->base.size >> PAGE_SHIFT;
-	WARN_ON(priv->shrinkable_count < 0);
+	GEM_WARN_ON(priv->shrinkable_count < 0);
 
 	msm_obj->dontneed = false;
 }
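A note on the mechanism: WARN_RATELIMIT() keeps a static ratelimit state per
callsite, so the condition is still evaluated (and its value still returned)
on every hit, but the backtrace is only printed while the ratelimit allows
it. A simplified sketch of what the new macro behaves like, modeled on the
kernel's WARN_RATELIMIT() in <linux/ratelimit.h> (details elided, not a
verbatim expansion):

	/*
	 * Rough model of GEM_WARN_ON(x): each use of the macro gets its
	 * own ratelimit state, allowing a bounded burst of splats per
	 * interval; the condition itself is evaluated unconditionally so
	 * callers can still branch on the return value.
	 */
	#define GEM_WARN_ON_SKETCH(x) ({				\
		static DEFINE_RATELIMIT_STATE(_rs,			\
				DEFAULT_RATELIMIT_INTERVAL,		\
				DEFAULT_RATELIMIT_BURST);		\
		int _ret = !!(x);					\
		if (unlikely(_ret && __ratelimit(&_rs)))		\
			WARN(_ret, "%s", __stringify(x));		\
		_ret;							\
	})

So a runaway assertion still fires its first few splats immediately, and
subsequent repeats within the ratelimit interval are suppressed rather than
flooding the console.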
From patchwork Mon Apr 5 17:45:27 2021
From: Rob Clark <robdclark@gmail.com>
To: dri-devel@lists.freedesktop.org
Cc: Jordan Crouse, Sean Paul, David Airlie, Daniel Vetter,
    linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 4/8] drm/msm: Split iova purge and close
Date: Mon, 5 Apr 2021 10:45:27 -0700
Message-Id: <20210405174532.1441497-5-robdclark@gmail.com>
In-Reply-To: <20210405174532.1441497-1-robdclark@gmail.com>
X-Patchwork-Id: 415383

From: Rob Clark

Currently these always go together, either when we purge MADV_DONTNEED
objects or when the object is freed. But for unpin, we want to be able
to purge (unmap from the iommu) the vma while keeping the iova range
allocated (so we can remap back to the same GPU virtual address when
the object is re-pinned).

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 71530a89b675..5f0647adc29d 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -357,9 +357,14 @@ static void del_vma(struct msm_gem_vma *vma)
 	kfree(vma);
 }
 
-/* Called with msm_obj locked */
+/**
+ * If close is true, this also closes the VMA (releasing the allocated
+ * iova range) in addition to removing the iommu mapping.  In the eviction
+ * case (!close), we keep the iova allocated, but only remove the iommu
+ * mapping.
+ */
 static void
-put_iova_spaces(struct drm_gem_object *obj)
+put_iova_spaces(struct drm_gem_object *obj, bool close)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct msm_gem_vma *vma;
@@ -369,7 +374,8 @@ put_iova_spaces(struct drm_gem_object *obj)
 	list_for_each_entry(vma, &msm_obj->vmas, list) {
 		if (vma->aspace) {
 			msm_gem_purge_vma(vma->aspace, vma);
-			msm_gem_close_vma(vma->aspace, vma);
+			if (close)
+				msm_gem_close_vma(vma->aspace, vma);
 		}
 	}
 }
@@ -711,7 +717,8 @@ void msm_gem_purge(struct drm_gem_object *obj)
 	GEM_WARN_ON(!is_purgeable(msm_obj));
 	GEM_WARN_ON(obj->import_attach);
 
-	put_iova_spaces(obj);
+	/* Get rid of any iommu mapping(s): */
+	put_iova_spaces(obj, true);
 
 	msm_gem_vunmap(obj);
 
@@ -1013,7 +1020,7 @@ void msm_gem_free_object(struct drm_gem_object *obj)
 	/* object should not be on active list: */
 	GEM_WARN_ON(is_active(msm_obj));
 
-	put_iova_spaces(obj);
+	put_iova_spaces(obj, true);
 
 	if (obj->import_attach) {
 		GEM_WARN_ON(msm_obj->vaddr);
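The split matters later in the series: free and purge still tear down both
the iommu mapping and the iova allocation, while the eviction path added in
patch 8 only drops the mapping. A sketch of the two call patterns (the
wrapper names here are illustrative, not part of the patch):

	/* free/purge: unmap from the iommu AND release the iova range */
	static void teardown_for_free(struct drm_gem_object *obj)
	{
		put_iova_spaces(obj, true);
	}

	/*
	 * evict: unmap from the iommu but keep the iova range reserved,
	 * so a later re-pin can map the object back at the same GPU
	 * virtual address
	 */
	static void teardown_for_evict(struct drm_gem_object *obj)
	{
		put_iova_spaces(obj, false);
	}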
From patchwork Mon Apr 5 17:45:29 2021
From: Rob Clark <robdclark@gmail.com>
To: dri-devel@lists.freedesktop.org
Cc: Jordan Crouse, Sean Paul, David Airlie, Daniel Vetter,
    linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 6/8] drm/msm: Track potentially evictable objects
Date: Mon, 5 Apr 2021 10:45:29 -0700
Message-Id: <20210405174532.1441497-7-robdclark@gmail.com>
In-Reply-To: <20210405174532.1441497-1-robdclark@gmail.com>
X-Patchwork-Id: 415382

From: Rob Clark

Objects that are potential candidates for swapping out are (1) willneed
(ie. if they are purgeable/MADV_DONTNEED we can just free the pages
without them having to land in swap), (2) not on an active list, (3) not
dma-buf imported or exported, and (4) not vmap'd. This repurposes the
purged list for objects that do not have backing pages (either because
they have not been pinned for the first time yet, or, as of a later
patch in this series, because they have been unpinned/evicted).

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.c |  2 +-
 drivers/gpu/drm/msm/msm_drv.h | 13 ++++++----
 drivers/gpu/drm/msm/msm_gem.c | 44 ++++++++++++++++++++++++++--------
 drivers/gpu/drm/msm/msm_gem.h | 45 +++++++++++++++++++++++++++++++++++
 4 files changed, 89 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index e12d5fbd0a34..d3d6c743b7af 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -451,7 +451,7 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
 
 	INIT_LIST_HEAD(&priv->inactive_willneed);
 	INIT_LIST_HEAD(&priv->inactive_dontneed);
-	INIT_LIST_HEAD(&priv->inactive_purged);
+	INIT_LIST_HEAD(&priv->inactive_unpinned);
 	mutex_init(&priv->mm_lock);
 
 	/* Teach lockdep about lock ordering wrt. shrinker: */
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 6a42cdf4cf7e..2668941df529 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -182,11 +182,15 @@ struct msm_drm_private {
 	struct mutex obj_lock;
 
 	/**
-	 * Lists of inactive GEM objects.  Every bo is either in one of the
+	 * LRUs of inactive GEM objects.  Every bo is either in one of the
 	 * inactive lists (depending on whether or not it is shrinkable) or
 	 * gpu->active_list (for the gpu it is active on[1]), or transiently
 	 * on a temporary list as the shrinker is running.
 	 *
+	 * Note that inactive_willneed also contains pinned and vmap'd bos,
+	 * but the number of pinned-but-not-active objects is small (scanout
+	 * buffers, ringbuffer, etc).
+	 *
 	 * These lists are protected by mm_lock (which should be acquired
 	 * before per GEM object lock).  One should *not* hold mm_lock in
 	 * get_pages()/vmap()/etc paths, as they can trigger the shrinker.
@@ -194,10 +198,11 @@ struct msm_drm_private {
 	 * [1] if someone ever added support for the old 2d cores, there could be
 	 * more than one gpu object
 	 */
-	struct list_head inactive_willneed;  /* inactive + !shrinkable */
-	struct list_head inactive_dontneed;  /* inactive + shrinkable */
-	struct list_head inactive_purged;    /* inactive + purged */
+	struct list_head inactive_willneed;  /* inactive + potentially unpin/evictable */
+	struct list_head inactive_dontneed;  /* inactive + shrinkable */
+	struct list_head inactive_unpinned;  /* inactive + purged or unpinned */
 	long shrinkable_count;               /* write access under mm_lock */
+	long evictable_count;                /* write access under mm_lock */
 	struct mutex mm_lock;
 
 	struct workqueue_struct *wq;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 9ff37904ec2b..9ac89951080c 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -130,6 +130,9 @@ static struct page **get_pages(struct drm_gem_object *obj)
 		 */
 		if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
 			sync_for_device(msm_obj);
+
+		GEM_WARN_ON(msm_obj->active_count);
+		update_inactive(msm_obj);
 	}
 
 	return msm_obj->pages;
@@ -428,7 +431,7 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj,
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct msm_gem_vma *vma;
 	struct page **pages;
-	int prot = IOMMU_READ;
+	int ret, prot = IOMMU_READ;
 
 	if (!(msm_obj->flags & MSM_BO_GPU_READONLY))
 		prot |= IOMMU_WRITE;
@@ -449,8 +452,13 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj,
 	if (IS_ERR(pages))
 		return PTR_ERR(pages);
 
-	return msm_gem_map_vma(aspace, vma, prot,
+	ret = msm_gem_map_vma(aspace, vma, prot,
 			msm_obj->sgt, obj->size >> PAGE_SHIFT);
+
+	if (!ret)
+		msm_obj->pin_count++;
+
+	return ret;
 }
 
 static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
@@ -542,14 +550,21 @@ uint64_t msm_gem_iova(struct drm_gem_object *obj,
 void msm_gem_unpin_iova_locked(struct drm_gem_object *obj,
 		struct msm_gem_address_space *aspace)
 {
+	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct msm_gem_vma *vma;
 
 	GEM_WARN_ON(!msm_gem_is_locked(obj));
 
 	vma = lookup_vma(obj, aspace);
 
-	if (!GEM_WARN_ON(!vma))
+	if (!GEM_WARN_ON(!vma)) {
 		msm_gem_unmap_vma(aspace, vma);
+
+		msm_obj->pin_count--;
+		GEM_WARN_ON(msm_obj->pin_count < 0);
+
+		update_inactive(msm_obj);
+	}
 }
 
 /*
@@ -800,9 +815,12 @@ void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu)
 	GEM_WARN_ON(!msm_gem_is_locked(obj));
 	GEM_WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED);
 	GEM_WARN_ON(msm_obj->dontneed);
+	GEM_WARN_ON(!msm_obj->sgt);
 
 	if (msm_obj->active_count++ == 0) {
 		mutex_lock(&priv->mm_lock);
+		if (msm_obj->evictable)
+			mark_unevictable(msm_obj);
 		list_del(&msm_obj->mm_list);
 		list_add_tail(&msm_obj->mm_list, &gpu->active_list);
 		mutex_unlock(&priv->mm_lock);
@@ -825,21 +843,28 @@ static void update_inactive(struct msm_gem_object *msm_obj)
 {
 	struct msm_drm_private *priv = msm_obj->base.dev->dev_private;
 
+	GEM_WARN_ON(!msm_gem_is_locked(&msm_obj->base));
+
+	if (msm_obj->active_count != 0)
+		return;
+
 	mutex_lock(&priv->mm_lock);
-	GEM_WARN_ON(msm_obj->active_count != 0);
 
 	if (msm_obj->dontneed)
 		mark_unpurgable(msm_obj);
+	if (msm_obj->evictable)
+		mark_unevictable(msm_obj);
 
 	list_del(&msm_obj->mm_list);
-	if (msm_obj->madv == MSM_MADV_WILLNEED) {
+	if ((msm_obj->madv == MSM_MADV_WILLNEED) && msm_obj->sgt) {
 		list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed);
+		mark_evictable(msm_obj);
 	} else if (msm_obj->madv == MSM_MADV_DONTNEED) {
 		list_add_tail(&msm_obj->mm_list, &priv->inactive_dontneed);
 		mark_purgable(msm_obj);
 	} else {
-		GEM_WARN_ON(msm_obj->madv != __MSM_MADV_PURGED);
-		list_add_tail(&msm_obj->mm_list, &priv->inactive_purged);
+		GEM_WARN_ON((msm_obj->madv != __MSM_MADV_PURGED) && msm_obj->sgt);
+		list_add_tail(&msm_obj->mm_list, &priv->inactive_unpinned);
 	}
 
 	mutex_unlock(&priv->mm_lock);
@@ -1201,8 +1226,7 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
 	}
 
 	mutex_lock(&priv->mm_lock);
-	/* Initially obj is idle, obj->madv == WILLNEED: */
-	list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed);
+	list_add_tail(&msm_obj->mm_list, &priv->inactive_unpinned);
 	mutex_unlock(&priv->mm_lock);
 
 	mutex_lock(&priv->obj_lock);
@@ -1276,7 +1300,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 	msm_gem_unlock(obj);
 
 	mutex_lock(&priv->mm_lock);
-	list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed);
+	list_add_tail(&msm_obj->mm_list, &priv->inactive_unpinned);
 	mutex_unlock(&priv->mm_lock);
 
 	mutex_lock(&priv->obj_lock);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index e13a9301b616..39b2e5584f97 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -60,6 +60,11 @@ struct msm_gem_object {
 	 */
 	bool dontneed : 1;
 
+	/**
+	 * Is object evictable (ie. counted in priv->evictable_count)?
+	 */
+	bool evictable : 1;
+
 	/**
 	 * count of active vmap'ing
 	 */
@@ -103,6 +108,7 @@ struct msm_gem_object {
 	char name[32]; /* Identifier to print for the debugfs files */
 
 	int active_count;
+	int pin_count;
 };
 #define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
 
@@ -263,7 +269,46 @@ static inline void mark_unpurgable(struct msm_gem_object *msm_obj)
 	msm_obj->dontneed = false;
 }
 
+static inline bool is_unevictable(struct msm_gem_object *msm_obj)
+{
+	return is_unpurgable(msm_obj) || msm_obj->pin_count || msm_obj->vaddr;
+}
+
+static inline void mark_evictable(struct msm_gem_object *msm_obj)
+{
+	struct msm_drm_private *priv = msm_obj->base.dev->dev_private;
+
+	WARN_ON(!mutex_is_locked(&priv->mm_lock));
+
+	if (is_unevictable(msm_obj))
+		return;
+
+	if (WARN_ON(msm_obj->evictable))
+		return;
+
+	priv->evictable_count += msm_obj->base.size >> PAGE_SHIFT;
+	msm_obj->evictable = true;
+}
+
+static inline void mark_unevictable(struct msm_gem_object *msm_obj)
+{
+	struct msm_drm_private *priv = msm_obj->base.dev->dev_private;
+
+	WARN_ON(!mutex_is_locked(&priv->mm_lock));
+
+	if (is_unevictable(msm_obj))
+		return;
+
+	if (WARN_ON(!msm_obj->evictable))
+		return;
+
+	priv->evictable_count -= msm_obj->base.size >> PAGE_SHIFT;
+	WARN_ON(priv->evictable_count < 0);
+	msm_obj->evictable = false;
+}
+
 void msm_gem_purge(struct drm_gem_object *obj);
+void msm_gem_evict(struct drm_gem_object *obj);
 void msm_gem_vunmap(struct drm_gem_object *obj);
 
 /* Created per submit-ioctl, to track bo's and cmdstream bufs, etc,
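Condensing the four criteria from the commit message into a single test may
help follow the bookkeeping above. The helper below is hypothetical,
written only for illustration; the patch itself expresses these checks
through is_unevictable() plus the madv/sgt/active_count tests in
update_inactive(), and the dma-buf export case is covered by the purgeable
rules added earlier in the series:

	/* Hypothetical: true when an object's pages could be swapped out */
	static bool is_swap_candidate(struct msm_gem_object *msm_obj)
	{
		return (msm_obj->madv == MSM_MADV_WILLNEED) && /* (1) willneed */
			(msm_obj->active_count == 0) &&        /* (2) not active */
			!msm_obj->base.import_attach &&        /* (3) not imported */
			(msm_obj->pin_count == 0) &&           /* not pinned */
			!msm_obj->vaddr &&                     /* (4) not vmap'd */
			msm_obj->sgt;                          /* has backing pages */
	}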
From patchwork Mon Apr 5 17:45:31 2021
From: Rob Clark <robdclark@gmail.com>
To: dri-devel@lists.freedesktop.org
Cc: Jordan Crouse, Sean Paul, David Airlie, Daniel Vetter,
    linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 8/8] drm/msm: Support evicting GEM objects to swap
Date: Mon, 5 Apr 2021 10:45:31 -0700
Message-Id: <20210405174532.1441497-9-robdclark@gmail.com>
In-Reply-To: <20210405174532.1441497-1-robdclark@gmail.com>
X-Patchwork-Id: 415381

From: Rob Clark

Now that tracking is wired up for potentially evictable GEM objects,
wire up the shrinker and the remaining GEM bits for unpinning the
backing pages of inactive objects.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c          | 23 ++++++++++++++++
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 37 +++++++++++++++++++++++++-
 drivers/gpu/drm/msm/msm_gpu_trace.h    | 13 +++++++++
 3 files changed, 72 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 163a1d30b5c9..2b731cf42294 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -759,6 +759,29 @@ void msm_gem_purge(struct drm_gem_object *obj)
 			0, (loff_t)-1);
 }
 
+/**
+ * Unpin the backing pages and make them available to be swapped out.
+ */
+void msm_gem_evict(struct drm_gem_object *obj)
+{
+	struct drm_device *dev = obj->dev;
+	struct msm_gem_object *msm_obj = to_msm_bo(obj);
+
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(is_unevictable(msm_obj));
+	GEM_WARN_ON(!msm_obj->evictable);
+	GEM_WARN_ON(msm_obj->active_count);
+
+	/* Get rid of any iommu mapping(s): */
+	put_iova_spaces(obj, false);
+
+	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+
+	put_pages(obj);
+
+	update_inactive(msm_obj);
+}
+
 void msm_gem_vunmap(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 38bf919f8508..52828028b9d4 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -9,12 +9,26 @@
 #include "msm_gpu.h"
 #include "msm_gpu_trace.h"
 
+bool enable_swap = true;
+MODULE_PARM_DESC(enable_swap, "Enable swappable GEM buffers");
+module_param(enable_swap, bool, 0600);
+
+static bool can_swap(void)
+{
+	return enable_swap && get_nr_swap_pages() > 0;
+}
+
 static unsigned long
 msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 {
 	struct msm_drm_private *priv =
 		container_of(shrinker, struct msm_drm_private, shrinker);
-	return priv->shrinkable_count;
+	unsigned count = priv->shrinkable_count;
+
+	if (can_swap())
+		count += priv->evictable_count;
+
+	return count;
 }
 
 static bool
@@ -32,6 +46,17 @@ purge(struct msm_gem_object *msm_obj)
 	return true;
 }
 
+static bool
+evict(struct msm_gem_object *msm_obj)
+{
+	if (is_unevictable(msm_obj))
+		return false;
+
+	msm_gem_evict(&msm_obj->base);
+
+	return true;
+}
+
 static unsigned long
 scan(struct msm_drm_private *priv, unsigned nr_to_scan, struct list_head *list,
 		bool (*shrink)(struct msm_gem_object *msm_obj))
@@ -104,6 +129,16 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 	if (freed > 0)
 		trace_msm_gem_purge(freed << PAGE_SHIFT);
 
+	if (can_swap() && freed < sc->nr_to_scan) {
+		int evicted = scan(priv, sc->nr_to_scan - freed,
+				&priv->inactive_willneed, evict);
+
+		if (evicted > 0)
+			trace_msm_gem_evict(evicted << PAGE_SHIFT);
+
+		freed += evicted;
+	}
+
 	return (freed > 0) ? freed : SHRINK_STOP;
 }
 
diff --git a/drivers/gpu/drm/msm/msm_gpu_trace.h b/drivers/gpu/drm/msm/msm_gpu_trace.h
index 03e0c2536b94..ca0b08d7875b 100644
--- a/drivers/gpu/drm/msm/msm_gpu_trace.h
+++ b/drivers/gpu/drm/msm/msm_gpu_trace.h
@@ -128,6 +128,19 @@ TRACE_EVENT(msm_gem_purge,
 );
 
 
+TRACE_EVENT(msm_gem_evict,
+		TP_PROTO(u32 bytes),
+		TP_ARGS(bytes),
+		TP_STRUCT__entry(
+			__field(u32, bytes)
+			),
+		TP_fast_assign(
+			__entry->bytes = bytes;
+			),
+		TP_printk("Evicting %u bytes", __entry->bytes)
+);
+
+
 TRACE_EVENT(msm_gem_purge_vmaps,
 		TP_PROTO(u32 unmapped),
 		TP_ARGS(unmapped),
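The scan path is ordered cheapest-first: purging MADV_DONTNEED objects
drops their pages outright, and only when that does not satisfy the request
(and swap space is actually available) does the shrinker fall back to
evicting WILLNEED objects, whose pages must round-trip through swap. A
compressed restatement of msm_gem_shrinker_scan()'s flow (locking, list
handling, and tracepoints elided; not the literal function body):

	static unsigned long
	shrinker_scan_sketch(struct msm_drm_private *priv,
			struct shrink_control *sc)
	{
		/* cheapest first: DONTNEED objects can simply free pages */
		unsigned long freed =
			scan(priv, sc->nr_to_scan, &priv->inactive_dontneed, purge);

		/* touch swap only if purging didn't cover the request */
		if (can_swap() && freed < sc->nr_to_scan)
			freed += scan(priv, sc->nr_to_scan - freed,
					&priv->inactive_willneed, evict);

		return (freed > 0) ? freed : SHRINK_STOP;
	}

Note also that module_param(enable_swap, bool, 0600) makes the knob
writable at runtime; assuming the driver is built as msm.ko, it appears as
/sys/module/msm/parameters/enable_swap.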