From patchwork Mon Apr 11 21:58:31 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 559625
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Dmitry Baryshkov, Dmitry Osipenko, Rob Clark, Sean Paul, Abhinav Kumar,
 David Airlie, Daniel Vetter, Jordan Crouse, Akhil P Oommen, Vladimir Lypak,
 Viresh Kumar, Jonathan Marek, Yangtao Li, Emma Anholt, Christian König,
 Dan Carpenter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 02/10] drm/msm/gpu: Drop duplicate fence counter
Date: Mon, 11 Apr 2022 14:58:31 -0700
Message-Id: <20220411215849.297838-3-robdclark@gmail.com>
In-Reply-To: <20220411215849.297838-1-robdclark@gmail.com>
References: <20220411215849.297838-1-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Rob Clark

The ring seqno counter duplicates the fence-context last_fence counter.
They end up getting incremented in lock-step, on the same scheduler
thread, but the split just makes things less obvious.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c   | 2 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   | 2 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 4 ++--
 drivers/gpu/drm/msm/msm_gpu.c           | 8 ++++----
 drivers/gpu/drm/msm/msm_gpu.h           | 2 +-
 drivers/gpu/drm/msm/msm_ringbuffer.h    | 1 -
 6 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 407f50a15faa..d31aa87c6c8d 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -1235,7 +1235,7 @@ static void a5xx_fault_detect_irq(struct msm_gpu *gpu)
         return;

     DRM_DEV_ERROR(dev->dev, "gpu fault ring %d fence %x status %8.8X rb %4.4x/%4.4x ib1 %16.16llX/%4.4x ib2 %16.16llX/%4.4x\n",
-        ring ? ring->id : -1, ring ? ring->seqno : 0,
+        ring ? ring->id : -1, ring ? ring->fctx->last_fence : 0,
         gpu_read(gpu, REG_A5XX_RBBM_STATUS),
         gpu_read(gpu, REG_A5XX_CP_RB_RPTR),
         gpu_read(gpu, REG_A5XX_CP_RB_WPTR),
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 83c31b2ad865..17de46fc4bf2 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -1390,7 +1390,7 @@ static void a6xx_fault_detect_irq(struct msm_gpu *gpu)

     DRM_DEV_ERROR(&gpu->pdev->dev,
         "gpu fault ring %d fence %x status %8.8X rb %4.4x/%4.4x ib1 %16.16llX/%4.4x ib2 %16.16llX/%4.4x\n",
-        ring ? ring->id : -1, ring ? ring->seqno : 0,
+        ring ? ring->id : -1, ring ? ring->fctx->last_fence : 0,
         gpu_read(gpu, REG_A6XX_RBBM_STATUS),
         gpu_read(gpu, REG_A6XX_CP_RB_RPTR),
         gpu_read(gpu, REG_A6XX_CP_RB_WPTR),
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 45f2c6084aa7..6385ab06632f 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -578,7 +578,7 @@ int adreno_gpu_state_get(struct msm_gpu *gpu, struct msm_gpu_state *state)
         state->ring[i].fence = gpu->rb[i]->memptrs->fence;
         state->ring[i].iova = gpu->rb[i]->iova;
-        state->ring[i].seqno = gpu->rb[i]->seqno;
+        state->ring[i].seqno = gpu->rb[i]->fctx->last_fence;
         state->ring[i].rptr = get_rptr(adreno_gpu, gpu->rb[i]);
         state->ring[i].wptr = get_wptr(gpu->rb[i]);
@@ -828,7 +828,7 @@ void adreno_dump_info(struct msm_gpu *gpu)
         printk("rb %d: fence: %d/%d\n", i,
             ring->memptrs->fence,
-            ring->seqno);
+            ring->fctx->last_fence);

         printk("rptr: %d\n", get_rptr(adreno_gpu, ring));
         printk("rb wptr: %d\n", get_wptr(ring));
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 747b89aa9d13..9480bdf875db 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -534,7 +534,7 @@ static void hangcheck_handler(struct timer_list *t)
     if (fence != ring->hangcheck_fence) {
         /* some progress has been made.. ya! */
         ring->hangcheck_fence = fence;
-    } else if (fence_before(fence, ring->seqno)) {
+    } else if (fence_before(fence, ring->fctx->last_fence)) {
         /* no progress and not done.. hung! */
         ring->hangcheck_fence = fence;
         DRM_DEV_ERROR(dev->dev, "%s: hangcheck detected gpu lockup rb %d!\n",
@@ -542,13 +542,13 @@
         DRM_DEV_ERROR(dev->dev, "%s: completed fence: %u\n",
                 gpu->name, fence);
         DRM_DEV_ERROR(dev->dev, "%s: submitted fence: %u\n",
-                gpu->name, ring->seqno);
+                gpu->name, ring->fctx->last_fence);

         kthread_queue_work(gpu->worker, &gpu->recover_work);
     }

     /* if still more pending work, reset the hangcheck timer: */
-    if (fence_after(ring->seqno, ring->hangcheck_fence))
+    if (fence_after(ring->fctx->last_fence, ring->hangcheck_fence))
         hangcheck_timer_reset(gpu);

     /* workaround for missing irq: */
@@ -770,7 +770,7 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)

     msm_gpu_hw_init(gpu);

-    submit->seqno = ++ring->seqno;
+    submit->seqno = submit->hw_fence->seqno;

     msm_rd_dump_submit(priv->rd, submit, NULL);
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 2c0203fd6ce3..e47a42b1244a 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -291,7 +291,7 @@ static inline bool msm_gpu_active(struct msm_gpu *gpu)
     for (i = 0; i < gpu->nr_rings; i++) {
         struct msm_ringbuffer *ring = gpu->rb[i];

-        if (fence_after(ring->seqno, ring->memptrs->fence))
+        if (fence_after(ring->fctx->last_fence, ring->memptrs->fence))
             return true;
     }
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h
index d8c63df4e9ca..2a5045abe46e 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.h
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.h
@@ -59,7 +59,6 @@ struct msm_ringbuffer {
     spinlock_t submit_lock;

     uint64_t iova;
-    uint32_t seqno;
     uint32_t hangcheck_fence;
     struct msm_rbmemptrs *memptrs;
     uint64_t memptrs_iova;
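For context on the hangcheck changes above: fence_before()/fence_after() compare
32-bit fence seqnos in a wraparound-safe way, so "still pending work" is detected
correctly even after the counter wraps past zero. A minimal standalone sketch of
that style of comparison (paraphrased for illustration; the driver's own inline
helpers live in its headers and may differ in detail):

    #include <stdbool.h>
    #include <stdint.h>

    /* Treat the 32-bit seqno space as circular: "a after b" holds when the
     * signed difference is positive, which survives counter wraparound. */
    static inline bool fence_after(uint32_t a, uint32_t b)
    {
            return (int32_t)(a - b) > 0;
    }

    static inline bool fence_before(uint32_t a, uint32_t b)
    {
            return fence_after(b, a);
    }

With that reading, fence_before(fence, ring->fctx->last_fence) in
hangcheck_handler() means "the last completed fence is still behind the last
submitted fence", i.e. work is outstanding and no progress was made.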
From patchwork Mon Apr 11 21:58:33 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 559624
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Dmitry Baryshkov, Dmitry Osipenko, Rob Clark, Sean Paul, Abhinav Kumar,
 David Airlie, Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 04/10] drm/msm/gem: Split out inuse helper
Date: Mon, 11 Apr 2022 14:58:33 -0700
Message-Id: <20220411215849.297838-5-robdclark@gmail.com>
In-Reply-To: <20220411215849.297838-1-robdclark@gmail.com>
References: <20220411215849.297838-1-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Rob Clark

Prep for a following patch, where it gets a bit more complicated.

Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
---
 drivers/gpu/drm/msm/msm_gem.c     | 2 +-
 drivers/gpu/drm/msm/msm_gem.h     | 1 +
 drivers/gpu/drm/msm/msm_gem_vma.c | 9 +++++++--
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index a4f61972667b..f96d1dc72021 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -938,7 +938,7 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
                 name, comm ? ":" : "", comm ? comm : "",
                 vma->aspace, vma->iova,
                 vma->mapped ? "mapped" : "unmapped",
-                vma->inuse);
+                msm_gem_vma_inuse(vma));
             kfree(comm);
         }
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 947ff7d9b471..1b7f0f0b88bf 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -61,6 +61,7 @@ struct msm_gem_vma {
 int msm_gem_init_vma(struct msm_gem_address_space *aspace,
         struct msm_gem_vma *vma, int npages,
         u64 range_start, u64 range_end);
+bool msm_gem_vma_inuse(struct msm_gem_vma *vma);
 void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
         struct msm_gem_vma *vma);
 void msm_gem_unmap_vma(struct msm_gem_address_space *aspace,
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 64906594fc65..dc2ae097805e 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -37,6 +37,11 @@ msm_gem_address_space_get(struct msm_gem_address_space *aspace)
     return aspace;
 }

+bool msm_gem_vma_inuse(struct msm_gem_vma *vma)
+{
+    return !!vma->inuse;
+}
+
 /* Actually unmap memory for the vma */
 void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
         struct msm_gem_vma *vma)
@@ -44,7 +49,7 @@ void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
     unsigned size = vma->node.size << PAGE_SHIFT;

     /* Print a message if we try to purge a vma in use */
-    if (GEM_WARN_ON(vma->inuse > 0))
+    if (GEM_WARN_ON(msm_gem_vma_inuse(vma)))
         return;

     /* Don't do anything if the memory isn't mapped */
@@ -100,7 +105,7 @@ msm_gem_map_vma(struct msm_gem_address_space *aspace,
 void msm_gem_close_vma(struct msm_gem_address_space *aspace,
         struct msm_gem_vma *vma)
 {
-    if (GEM_WARN_ON(vma->inuse > 0 || vma->mapped))
+    if (GEM_WARN_ON(msm_gem_vma_inuse(vma) || vma->mapped))
         return;

     spin_lock(&aspace->lock);
From patchwork Mon Apr 11 21:58:36 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 559623
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Dmitry Baryshkov, Dmitry Osipenko, Rob Clark, Sean Paul, Abhinav Kumar,
 David Airlie, Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 07/10] drm/msm/gem: Rework vma lookup and pin
Date: Mon, 11 Apr 2022 14:58:36 -0700
Message-Id: <20220411215849.297838-8-robdclark@gmail.com>
In-Reply-To: <20220411215849.297838-1-robdclark@gmail.com>
References: <20220411215849.297838-1-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Rob Clark

Combines duplicate vma lookup in the get_and_pin path.

Signed-off-by: Rob Clark
Reviewed-by: Dmitry Osipenko
---
 drivers/gpu/drm/msm/msm_gem.c | 50 ++++++++++++++++++-----------------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index deafae6feaa8..218744a490a4 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -376,39 +376,40 @@ put_iova_vmas(struct drm_gem_object *obj)
     }
 }

-static int get_iova_locked(struct drm_gem_object *obj,
-        struct msm_gem_address_space *aspace, uint64_t *iova,
+static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
+        struct msm_gem_address_space *aspace,
         u64 range_start, u64 range_end)
 {
     struct msm_gem_vma *vma;
-    int ret = 0;

     GEM_WARN_ON(!msm_gem_is_locked(obj));

     vma = lookup_vma(obj, aspace);

     if (!vma) {
+        int ret;
+
         vma = add_vma(obj, aspace);
         if (IS_ERR(vma))
-            return PTR_ERR(vma);
+            return vma;

         ret = msm_gem_init_vma(aspace, vma, obj->size,
             range_start, range_end);
         if (ret) {
             del_vma(vma);
-            return ret;
+            return ERR_PTR(ret);
         }
+    } else {
+        GEM_WARN_ON(vma->iova < range_start);
+        GEM_WARN_ON((vma->iova + obj->size) > range_end);
     }

-    *iova = vma->iova;
-    return 0;
+    return vma;
 }

-static int msm_gem_pin_iova(struct drm_gem_object *obj,
-        struct msm_gem_address_space *aspace)
+static int msm_gem_pin_iova(struct drm_gem_object *obj, struct msm_gem_vma *vma)
 {
     struct msm_gem_object *msm_obj = to_msm_bo(obj);
-    struct msm_gem_vma *vma;
     struct page **pages;
     int ret, prot = IOMMU_READ;
@@ -426,15 +427,11 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj,
     if (GEM_WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED))
         return -EBUSY;

-    vma = lookup_vma(obj, aspace);
-    if (GEM_WARN_ON(!vma))
-        return -EINVAL;
-
     pages = get_pages(obj);
     if (IS_ERR(pages))
         return PTR_ERR(pages);

-    ret = msm_gem_map_vma(aspace, vma, prot, msm_obj->sgt, obj->size);
+    ret = msm_gem_map_vma(vma->aspace, vma, prot, msm_obj->sgt, obj->size);

     if (!ret)
         msm_obj->pin_count++;
@@ -446,19 +443,18 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
         struct msm_gem_address_space *aspace, uint64_t *iova,
         u64 range_start, u64 range_end)
 {
-    u64 local;
+    struct msm_gem_vma *vma;
     int ret;

     GEM_WARN_ON(!msm_gem_is_locked(obj));

-    ret = get_iova_locked(obj, aspace, &local,
-        range_start, range_end);
-
-    if (!ret)
-        ret = msm_gem_pin_iova(obj, aspace);
+    vma = get_vma_locked(obj, aspace, range_start, range_end);
+    if (IS_ERR(vma))
+        return PTR_ERR(vma);

+    ret = msm_gem_pin_iova(obj, vma);
     if (!ret)
-        *iova = local;
+        *iova = vma->iova;

     return ret;
 }
@@ -500,10 +496,16 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
 int msm_gem_get_iova(struct drm_gem_object *obj,
         struct msm_gem_address_space *aspace, uint64_t *iova)
 {
-    int ret;
+    struct msm_gem_vma *vma;
+    int ret = 0;

     msm_gem_lock(obj);
-    ret = get_iova_locked(obj, aspace, iova, 0, U64_MAX);
+    vma = get_vma_locked(obj, aspace, 0, U64_MAX);
+    if (IS_ERR(vma)) {
+        ret = PTR_ERR(vma);
+    } else {
+        *iova = vma->iova;
+    }
     msm_gem_unlock(obj);

     return ret;
From patchwork Mon Apr 11 21:58:37 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 559622
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Dmitry Baryshkov, Dmitry Osipenko, Rob Clark, Sean Paul, Abhinav Kumar,
 David Airlie, Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 08/10] drm/msm/gem: Split vma lookup and pin
Date: Mon, 11 Apr 2022 14:58:37 -0700
Message-Id: <20220411215849.297838-9-robdclark@gmail.com>
In-Reply-To: <20220411215849.297838-1-robdclark@gmail.com>
References: <20220411215849.297838-1-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Rob Clark

This way we only look up the vma once per object per submit, for both
the submit and retire paths.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c        | 60 +++++++++++++---------------
 drivers/gpu/drm/msm/msm_gem.h        |  9 +++--
 drivers/gpu/drm/msm/msm_gem_submit.c | 17 +++++---
 3 files changed, 44 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 218744a490a4..e8107a22c33a 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -407,7 +407,7 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
     return vma;
 }

-static int msm_gem_pin_iova(struct drm_gem_object *obj, struct msm_gem_vma *vma)
+int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma)
 {
     struct msm_gem_object *msm_obj = to_msm_bo(obj);
     struct page **pages;
@@ -439,6 +439,26 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj, struct msm_gem_vma *vma)
     return ret;
 }

+void msm_gem_unpin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma)
+{
+    struct msm_gem_object *msm_obj = to_msm_bo(obj);
+
+    GEM_WARN_ON(!msm_gem_is_locked(obj));
+
+    msm_gem_unmap_vma(vma->aspace, vma);
+
+    msm_obj->pin_count--;
+    GEM_WARN_ON(msm_obj->pin_count < 0);
+
+    update_inactive(msm_obj);
+}
+
+struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
+        struct msm_gem_address_space *aspace)
+{
+    return get_vma_locked(obj, aspace, 0, U64_MAX);
+}
+
 static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
         struct msm_gem_address_space *aspace, uint64_t *iova,
         u64 range_start, u64 range_end)
@@ -452,7 +472,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
     if (IS_ERR(vma))
         return PTR_ERR(vma);

-    ret = msm_gem_pin_iova(obj, vma);
+    ret = msm_gem_pin_vma_locked(obj, vma);
     if (!ret)
         *iova = vma->iova;
@@ -476,12 +496,6 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
     return ret;
 }

-int msm_gem_get_and_pin_iova_locked(struct drm_gem_object *obj,
-        struct msm_gem_address_space *aspace, uint64_t *iova)
-{
-    return get_and_pin_iova_range_locked(obj, aspace, iova, 0, U64_MAX);
-}
-
 /* get iova and pin it. Should have a matching put */
 int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
         struct msm_gem_address_space *aspace, uint64_t *iova)
@@ -511,29 +525,6 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
     return ret;
 }

-/*
- * Locked variant of msm_gem_unpin_iova()
- */
-void msm_gem_unpin_iova_locked(struct drm_gem_object *obj,
-        struct msm_gem_address_space *aspace)
-{
-    struct msm_gem_object *msm_obj = to_msm_bo(obj);
-    struct msm_gem_vma *vma;
-
-    GEM_WARN_ON(!msm_gem_is_locked(obj));
-
-    vma = lookup_vma(obj, aspace);
-
-    if (!GEM_WARN_ON(!vma)) {
-        msm_gem_unmap_vma(aspace, vma);
-
-        msm_obj->pin_count--;
-        GEM_WARN_ON(msm_obj->pin_count < 0);
-
-        update_inactive(msm_obj);
-    }
-}
-
 /*
  * Unpin a iova by updating the reference counts. The memory isn't actually
  * purged until something else (shrinker, mm_notifier, destroy, etc) decides
@@ -542,8 +533,13 @@
 void msm_gem_unpin_iova(struct drm_gem_object *obj,
         struct msm_gem_address_space *aspace)
 {
+    struct msm_gem_vma *vma;
+
     msm_gem_lock(obj);
-    msm_gem_unpin_iova_locked(obj, aspace);
+    vma = lookup_vma(obj, aspace);
+    if (!GEM_WARN_ON(!vma)) {
+        msm_gem_unpin_vma_locked(obj, vma);
+    }
     msm_gem_unlock(obj);
 }
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 772de010a669..f98264cf130d 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -133,17 +133,17 @@ struct msm_gem_object {
 #define to_msm_bo(x) container_of(x, struct msm_gem_object, base)

 uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
+int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);
+void msm_gem_unpin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);
+struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
+        struct msm_gem_address_space *aspace);
 int msm_gem_get_iova(struct drm_gem_object *obj,
         struct msm_gem_address_space *aspace, uint64_t *iova);
 int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
         struct msm_gem_address_space *aspace, uint64_t *iova,
         u64 range_start, u64 range_end);
-int msm_gem_get_and_pin_iova_locked(struct drm_gem_object *obj,
-        struct msm_gem_address_space *aspace, uint64_t *iova);
 int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
         struct msm_gem_address_space *aspace, uint64_t *iova);
-void msm_gem_unpin_iova_locked(struct drm_gem_object *obj,
-        struct msm_gem_address_space *aspace);
 void msm_gem_unpin_iova(struct drm_gem_object *obj,
         struct msm_gem_address_space *aspace);
 struct page **msm_gem_get_pages(struct drm_gem_object *obj);
@@ -369,6 +369,7 @@ struct msm_gem_submit {
             uint32_t handle;
         };
         uint64_t iova;
+        struct msm_gem_vma *vma;
     } bos[];
 };
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index c6d60c8d286d..91da05af40ee 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -232,7 +232,7 @@ static void submit_cleanup_bo(struct msm_gem_submit *submit, int i,
     unsigned flags = submit->bos[i].flags & cleanup_flags;

     if (flags & BO_PINNED)
-        msm_gem_unpin_iova_locked(obj, submit->aspace);
+        msm_gem_unpin_vma_locked(obj, submit->bos[i].vma);

     if (flags & BO_ACTIVE)
         msm_gem_active_put(obj);
@@ -365,21 +365,26 @@ static int submit_pin_objects(struct msm_gem_submit *submit)

     for (i = 0; i < submit->nr_bos; i++) {
         struct drm_gem_object *obj = &submit->bos[i].obj->base;
-        uint64_t iova;
+        struct msm_gem_vma *vma;

         /* if locking succeeded, pin bo: */
-        ret = msm_gem_get_and_pin_iova_locked(obj,
-                submit->aspace, &iova);
+        vma = msm_gem_get_vma_locked(obj, submit->aspace);
+        if (IS_ERR(vma)) {
+            ret = PTR_ERR(vma);
+            break;
+        }

+        ret = msm_gem_pin_vma_locked(obj, vma);
         if (ret)
             break;

         submit->bos[i].flags |= BO_PINNED;
+        submit->bos[i].vma = vma;

-        if (iova == submit->bos[i].iova) {
+        if (vma->iova == submit->bos[i].iova) {
             submit->bos[i].flags |= BO_VALID;
         } else {
-            submit->bos[i].iova = iova;
+            submit->bos[i].iova = vma->iova;
             /* iova changed, so address in cmdstream is not valid: */
             submit->bos[i].flags &= ~BO_VALID;
             submit->valid = false;
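Seen from a caller's perspective, the intent of the two patches above is the
pattern sketched below (hypothetical helper names; the real users are
submit_pin_objects() and submit_cleanup_bo() in the diff): the vma pointer
obtained at pin time is stashed in submit->bos[i].vma and reused at cleanup
time, so no second lookup_vma() is needed.

    /*
     * Hypothetical caller sketch, not part of the patch: look the vma up
     * once, pin it, remember the pointer, and unpin through that same
     * pointer when the submit retires.
     */
    static int example_pin_bo(struct msm_gem_submit *submit, int i)
    {
            struct drm_gem_object *obj = &submit->bos[i].obj->base;
            struct msm_gem_vma *vma;
            int ret;

            vma = msm_gem_get_vma_locked(obj, submit->aspace);
            if (IS_ERR(vma))
                    return PTR_ERR(vma);

            ret = msm_gem_pin_vma_locked(obj, vma);
            if (ret)
                    return ret;

            submit->bos[i].vma = vma;        /* reused at retire, no re-lookup */
            submit->bos[i].iova = vma->iova;
            return 0;
    }

    static void example_unpin_bo(struct msm_gem_submit *submit, int i)
    {
            struct drm_gem_object *obj = &submit->bos[i].obj->base;

            msm_gem_unpin_vma_locked(obj, submit->bos[i].vma);
    }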
From patchwork Mon Apr 11 21:58:39 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 559621
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Dmitry Baryshkov, Dmitry Osipenko, Rob Clark, Sean Paul, Abhinav Kumar,
 David Airlie, Daniel Vetter, Akhil P Oommen, Jonathan Marek,
 Christian König, Jordan Crouse, Dan Carpenter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 10/10] drm/msm: Add a way for userspace to allocate GPU iova
Date: Mon, 11 Apr 2022 14:58:39 -0700
Message-Id: <20220411215849.297838-11-robdclark@gmail.com>
In-Reply-To: <20220411215849.297838-1-robdclark@gmail.com>
References: <20220411215849.297838-1-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Rob Clark

The motivation at this point is mainly a native userspace mesa driver
running in a VM guest. The one remaining synchronous "hotpath" is buffer
allocation, because the guest needs to wait to learn the bo's iova before
it can start emitting cmdstream/state that references the new bo. By
allocating the iova in guest userspace, we no longer need to wait for a
response from the host, but can rely on the allocation request being
processed before the cmdstream submission. Allocation failures (OoM, etc)
would just be treated as context-lost (i.e. GL_GUILTY_CONTEXT_RESET), or
subsequent allocations (or readpix, etc) can raise GL_OUT_OF_MEMORY.

v2: Fix inuse check
v3: Change mismatched iova case to -EBUSY

Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
Reviewed-by: Dmitry Osipenko
---
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 10 ++++++
 drivers/gpu/drm/msm/msm_drv.c           | 21 +++++++++++
 drivers/gpu/drm/msm/msm_gem.c           | 48 +++++++++++++++++++++++++
 drivers/gpu/drm/msm/msm_gem.h           |  8 +++++
 drivers/gpu/drm/msm/msm_gem_vma.c       |  2 ++
 include/uapi/drm/msm_drm.h              |  3 ++
 6 files changed, 92 insertions(+)

diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 6385ab06632f..4caae0229518 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -281,6 +281,16 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
     case MSM_PARAM_SUSPENDS:
         *value = gpu->suspend_count;
         return 0;
+    case MSM_PARAM_VA_START:
+        if (ctx->aspace == gpu->aspace)
+            return -EINVAL;
+        *value = ctx->aspace->va_start;
+        return 0;
+    case MSM_PARAM_VA_SIZE:
+        if (ctx->aspace == gpu->aspace)
+            return -EINVAL;
+        *value = ctx->aspace->va_size;
+        return 0;
     default:
         DBG("%s: invalid param: %u", gpu->name, param);
         return -EINVAL;
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index d618953d33ea..34e2169308b4 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -722,6 +722,23 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
     return msm_gem_get_iova(obj, ctx->aspace, iova);
 }

+static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
+        struct drm_file *file, struct drm_gem_object *obj,
+        uint64_t iova)
+{
+    struct msm_drm_private *priv = dev->dev_private;
+    struct msm_file_private *ctx = file->driver_priv;
+
+    if (!priv->gpu)
+        return -EINVAL;
+
+    /* Only supported if per-process address space is supported: */
+    if (priv->gpu->aspace == ctx->aspace)
+        return -EOPNOTSUPP;
+
+    return msm_gem_set_iova(obj, ctx->aspace, iova);
+}
+
 static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
         struct drm_file *file)
 {
@@ -736,6 +753,7 @@ static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
     switch (args->info) {
     case MSM_INFO_GET_OFFSET:
     case MSM_INFO_GET_IOVA:
+    case MSM_INFO_SET_IOVA:
         /* value returned as immediate, not pointer, so len==0: */
         if (args->len)
             return -EINVAL;
@@ -760,6 +778,9 @@ static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
     case MSM_INFO_GET_IOVA:
         ret = msm_ioctl_gem_info_iova(dev, file, obj, &args->value);
         break;
+    case MSM_INFO_SET_IOVA:
+        ret = msm_ioctl_gem_info_set_iova(dev, file, obj, args->value);
+        break;
     case MSM_INFO_SET_NAME:
         /* length check should leave room for terminating null: */
         if (args->len >= sizeof(msm_obj->name)) {
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index bf4af17e2f1e..3ee30b8a76bd 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -525,6 +525,54 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
     return ret;
 }

+static int clear_iova(struct drm_gem_object *obj,
+        struct msm_gem_address_space *aspace)
+{
+    struct msm_gem_vma *vma = lookup_vma(obj, aspace);
+
+    if (!vma)
+        return 0;
+
+    if (msm_gem_vma_inuse(vma))
+        return -EBUSY;
+
+    msm_gem_purge_vma(vma->aspace, vma);
+    msm_gem_close_vma(vma->aspace, vma);
+    del_vma(vma);
+
+    return 0;
+}
+
+/*
+ * Get the requested iova but don't pin it.  Fails if the requested iova is
+ * not available.  Doesn't need a put because iovas are currently valid for
+ * the life of the object.
+ *
+ * Setting an iova of zero will clear the vma.
+ */
+int msm_gem_set_iova(struct drm_gem_object *obj,
+        struct msm_gem_address_space *aspace, uint64_t iova)
+{
+    int ret = 0;
+
+    msm_gem_lock(obj);
+    if (!iova) {
+        ret = clear_iova(obj, aspace);
+    } else {
+        struct msm_gem_vma *vma;
+        vma = get_vma_locked(obj, aspace, iova, iova + obj->size);
+        if (IS_ERR(vma)) {
+            ret = PTR_ERR(vma);
+        } else if (GEM_WARN_ON(vma->iova != iova)) {
+            clear_iova(obj, aspace);
+            ret = -EBUSY;
+        }
+    }
+    msm_gem_unlock(obj);
+
+    return ret;
+}
+
 /*
  * Unpin a iova by updating the reference counts. The memory isn't actually
  * purged until something else (shrinker, mm_notifier, destroy, etc) decides
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 580b6eb95edd..c75d3b879a53 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -38,6 +38,12 @@ struct msm_gem_address_space {
     /* @faults: the number of GPU hangs associated with this address space */
     int faults;
+
+    /** @va_start: lowest possible address to allocate */
+    uint64_t va_start;
+
+    /** @va_size: the size of the address space (in bytes) */
+    uint64_t va_size;
 };

 struct msm_gem_address_space *
@@ -144,6 +150,8 @@ struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
         struct msm_gem_address_space *aspace);
 int msm_gem_get_iova(struct drm_gem_object *obj,
         struct msm_gem_address_space *aspace, uint64_t *iova);
+int msm_gem_set_iova(struct drm_gem_object *obj,
+        struct msm_gem_address_space *aspace, uint64_t iova);
 int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
         struct msm_gem_address_space *aspace, uint64_t *iova,
         u64 range_start, u64 range_end);
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 0cd6770faf41..3c1dc9241831 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -184,6 +184,8 @@ msm_gem_address_space_create(struct msm_mmu *mmu, const char *name,
     spin_lock_init(&aspace->lock);
     aspace->name = name;
     aspace->mmu = mmu;
+    aspace->va_start = va_start;
+    aspace->va_size = size;

     drm_mm_init(&aspace->mm, va_start, size);
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 794ad1948497..3c7b097c4e3d 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -84,6 +84,8 @@ struct drm_msm_timespec {
 #define MSM_PARAM_SYSPROF  0x0b  /* WO: 1 preserves perfcntrs, 2 also disables suspend */
 #define MSM_PARAM_COMM     0x0c  /* WO: override for task->comm */
 #define MSM_PARAM_CMDLINE  0x0d  /* WO: override for task cmdline */
+#define MSM_PARAM_VA_START 0x0e  /* RO: start of valid GPU iova range */
+#define MSM_PARAM_VA_SIZE  0x0f  /* RO: size of valid GPU iova range (bytes) */

 /* For backwards compat.  The original support for preemption was based on
  * a single ring per priority level so # of priority levels equals the #
@@ -135,6 +137,7 @@ struct drm_msm_gem_new {
 #define MSM_INFO_GET_IOVA  0x01  /* get iova, returned by value */
 #define MSM_INFO_SET_NAME  0x02  /* set the debug name (by pointer) */
 #define MSM_INFO_GET_NAME  0x03  /* get debug name, returned by pointer */
+#define MSM_INFO_SET_IOVA  0x04  /* set the iova, passed by value */

 struct drm_msm_gem_info {
     __u32 handle;         /* in */
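To make the new uapi above concrete, here is a rough userspace sketch. It is
not from this series: the helper names and the trivial bump allocator are made
up for illustration, and it assumes libdrm's drmCommandWriteRead() together
with the existing DRM_MSM_GET_PARAM / DRM_MSM_GEM_INFO commands and a uapi
header that carries the new MSM_PARAM_VA_* / MSM_INFO_SET_IOVA defines. The
idea is to query the context's valid iova range once, hand out addresses from
it locally, and attach one to a freshly created bo with MSM_INFO_SET_IOVA, so
no round trip to the host is needed before referencing the bo in cmdstream.

    #include <stdint.h>
    #include <xf86drm.h>
    #include "msm_drm.h"   /* uapi header with the new VA_*/SET_IOVA defines */

    static uint64_t get_param(int fd, uint32_t param)
    {
            struct drm_msm_param req = {
                    .pipe  = MSM_PIPE_3D0,
                    .param = param,
            };

            if (drmCommandWriteRead(fd, DRM_MSM_GET_PARAM, &req, sizeof(req)))
                    return 0;   /* older kernel: param not known */
            return req.value;
    }

    /* Assign a caller-chosen iova to a gem bo (needs per-process pagetables). */
    static int set_iova(int fd, uint32_t handle, uint64_t iova)
    {
            struct drm_msm_gem_info req = {
                    .handle = handle,
                    .info   = MSM_INFO_SET_IOVA,
                    .value  = iova,
            };

            return drmCommandWriteRead(fd, DRM_MSM_GEM_INFO, &req, sizeof(req));
    }

    /* Hypothetical bump allocator: carve the next iova out of the context's
     * valid range and attach it to the bo, without waiting on the host. */
    int example_assign_iova(int fd, uint32_t handle, uint64_t size,
                            uint64_t *next_iova)
    {
            uint64_t va_start = get_param(fd, MSM_PARAM_VA_START);
            uint64_t va_size  = get_param(fd, MSM_PARAM_VA_SIZE);
            uint64_t iova;

            if (!va_start || !va_size)
                    return -1;   /* old kernel: fall back to MSM_INFO_GET_IOVA */

            if (*next_iova < va_start)
                    *next_iova = va_start;

            iova = *next_iova;
            if (iova + size > va_start + va_size)
                    return -1;   /* out of address space */

            if (set_iova(fd, handle, iova))
                    return -1;   /* e.g. -EBUSY if the bo already has an iova */

            *next_iova = iova + size;   /* bump for the next allocation */
            return 0;
    }

A real implementation (e.g. in mesa's freedreno/virtio code) would of course
use a proper allocator and recycle freed ranges; the sketch only shows the
ioctl flow the patch enables.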