From patchwork Thu Jul 8 17:37:36 2021
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 471581
From: Daniel Vetter
To: DRI Development
Subject: [PATCH v3 02/20] drm/sched: Split drm_sched_job_init
Date: Thu, 8 Jul 2021 19:37:36 +0200
Message-Id: <20210708173754.3877540-3-daniel.vetter@ffwll.ch>
In-Reply-To: <20210708173754.3877540-1-daniel.vetter@ffwll.ch>

This is a very confusingly named function, because not only does it init an object, it also arms it and provides a point of no return for pushing a job into the scheduler. It would be nice if that were a bit clearer in the interface.

But the real reason is that I want to push the dependency tracking helpers into the scheduler code, and that means drm_sched_job_init must be called a lot earlier, without arming the job.

v2:
- don't change .gitignore (Steven)
- don't forget v3d (Emma)

v3: Emma noticed that I leak the memory allocated in drm_sched_job_init if we bail out before the point of no return in subsequent driver patches. To be able to fix this, change drm_sched_job_cleanup() so it can handle being called both before and after drm_sched_job_arm(). Also improve the kerneldoc for this.

v4:
- Fix the drm_sched_job_cleanup logic, I inverted the booleans, as usual (Melissa)
- Christian pointed out that drm_sched_entity_select_rq() also needs to be moved into drm_sched_job_arm, which made me realize that the job->id definitely needs to be moved too. Shuffle things to fit between job_init and job_arm.
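To make the intended flow concrete, here is a minimal sketch of a driver submit path after this split. It is illustrative only: my_driver_submit, my_driver_validate and struct my_job are made-up placeholders, and drm_sched_entity_push_job() still takes the entity at this point in the series.

static int my_driver_submit(struct my_job *job, struct drm_sched_entity *entity)
{
	int ret;

	/* Allocates job->base.s_fence; no commitment to run the job yet. */
	ret = drm_sched_job_init(&job->base, entity, NULL);
	if (ret)
		return ret;

	ret = my_driver_validate(job);
	if (ret)
		goto err_cleanup;	/* safe to abort, the job is not armed */

	/* Point of no return: selects the rq, assigns job->id, inits the fences. */
	drm_sched_job_arm(&job->base);

	job->done_fence = dma_fence_get(&job->base.s_fence->finished);
	drm_sched_entity_push_job(&job->base, entity);
	return 0;

err_cleanup:
	/* After this patch, cleanup also handles the not-yet-armed case. */
	drm_sched_job_cleanup(&job->base);
	return ret;
}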
Cc: Melissa Wen Acked-by: Steven Price (v2) Signed-off-by: Daniel Vetter Cc: Lucas Stach Cc: Russell King Cc: Christian Gmeiner Cc: Qiang Yu Cc: Rob Herring Cc: Tomeu Vizoso Cc: Steven Price Cc: Alyssa Rosenzweig Cc: David Airlie Cc: Daniel Vetter Cc: Sumit Semwal Cc: "Christian König" Cc: Masahiro Yamada Cc: Kees Cook Cc: Adam Borowski Cc: Nick Terrell Cc: Mauro Carvalho Chehab Cc: Paul Menzel Cc: Sami Tolvanen Cc: Viresh Kumar Cc: Alex Deucher Cc: Dave Airlie Cc: Nirmoy Das Cc: Deepak R Varma Cc: Lee Jones Cc: Kevin Wang Cc: Chen Li Cc: Luben Tuikov Cc: "Marek Olšák" Cc: Dennis Li Cc: Maarten Lankhorst Cc: Andrey Grodzovsky Cc: Sonny Jiang Cc: Boris Brezillon Cc: Tian Tao Cc: etnaviv@lists.freedesktop.org Cc: lima@lists.freedesktop.org Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org Cc: Emma Anholt --- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 2 + drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 + drivers/gpu/drm/etnaviv/etnaviv_sched.c | 2 + drivers/gpu/drm/lima/lima_sched.c | 2 + drivers/gpu/drm/panfrost/panfrost_job.c | 2 + drivers/gpu/drm/scheduler/sched_entity.c | 6 +-- drivers/gpu/drm/scheduler/sched_fence.c | 19 ++++--- drivers/gpu/drm/scheduler/sched_main.c | 64 ++++++++++++++++++++---- drivers/gpu/drm/v3d/v3d_gem.c | 2 + include/drm/gpu_scheduler.h | 7 ++- 10 files changed, 86 insertions(+), 22 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index c5386d13eb4a..a4ec092af9a7 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -1226,6 +1226,8 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, if (r) goto error_unlock; + drm_sched_job_arm(&job->base); + /* No memory allocation is allowed while holding the notifier lock. * The lock is held until amdgpu_cs_submit is finished and fence is * added to BOs. 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c index d33e6d97cc89..5ddb955d2315 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c @@ -170,6 +170,8 @@ int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity, if (r) return r; + drm_sched_job_arm(&job->base); + *f = dma_fence_get(&job->base.s_fence->finished); amdgpu_job_free_resources(job); drm_sched_entity_push_job(&job->base, entity); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c index feb6da1b6ceb..05f412204118 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c @@ -163,6 +163,8 @@ int etnaviv_sched_push_job(struct drm_sched_entity *sched_entity, if (ret) goto out_unlock; + drm_sched_job_arm(&submit->sched_job); + submit->out_fence = dma_fence_get(&submit->sched_job.s_fence->finished); submit->out_fence_id = idr_alloc_cyclic(&submit->gpu->fence_idr, submit->out_fence, 0, diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c index dba8329937a3..38f755580507 100644 --- a/drivers/gpu/drm/lima/lima_sched.c +++ b/drivers/gpu/drm/lima/lima_sched.c @@ -129,6 +129,8 @@ int lima_sched_task_init(struct lima_sched_task *task, return err; } + drm_sched_job_arm(&task->base); + task->num_bos = num_bos; task->vm = lima_vm_get(vm); diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c index 71a72fb50e6b..2992dc85325f 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.c +++ b/drivers/gpu/drm/panfrost/panfrost_job.c @@ -288,6 +288,8 @@ int panfrost_job_push(struct panfrost_job *job) goto unlock; } + drm_sched_job_arm(&job->base); + job->render_done_fence = dma_fence_get(&job->base.s_fence->finished); ret = panfrost_acquire_object_fences(job->bos, job->bo_count, diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c index 6fc116ee7302..64d398166644 100644 --- a/drivers/gpu/drm/scheduler/sched_entity.c +++ b/drivers/gpu/drm/scheduler/sched_entity.c @@ -485,9 +485,9 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) * @sched_job: job to submit * @entity: scheduler entity * - * Note: To guarantee that the order of insertion to queue matches - * the job's fence sequence number this function should be - * called with drm_sched_job_init under common lock. + * Note: To guarantee that the order of insertion to queue matches the job's + * fence sequence number this function should be called with drm_sched_job_arm() + * under common lock. * * Returns 0 for success, negative error code otherwise. */ diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c index 69de2c76731f..bcea035cf4c6 100644 --- a/drivers/gpu/drm/scheduler/sched_fence.c +++ b/drivers/gpu/drm/scheduler/sched_fence.c @@ -90,7 +90,7 @@ static const char *drm_sched_fence_get_timeline_name(struct dma_fence *f) * * Free up the fence memory after the RCU grace period. 
*/ -static void drm_sched_fence_free(struct rcu_head *rcu) +void drm_sched_fence_free(struct rcu_head *rcu) { struct dma_fence *f = container_of(rcu, struct dma_fence, rcu); struct drm_sched_fence *fence = to_drm_sched_fence(f); @@ -152,27 +152,32 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f) } EXPORT_SYMBOL(to_drm_sched_fence); -struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity, - void *owner) +struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity, + void *owner) { struct drm_sched_fence *fence = NULL; - unsigned seq; fence = kmem_cache_zalloc(sched_fence_slab, GFP_KERNEL); if (fence == NULL) return NULL; fence->owner = owner; - fence->sched = entity->rq->sched; spin_lock_init(&fence->lock); + return fence; +} + +void drm_sched_fence_init(struct drm_sched_fence *fence, + struct drm_sched_entity *entity) +{ + unsigned seq; + + fence->sched = entity->rq->sched; seq = atomic_inc_return(&entity->fence_seq); dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled, &fence->lock, entity->fence_context, seq); dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished, &fence->lock, entity->fence_context + 1, seq); - - return fence; } module_init(drm_sched_fence_slab_init); diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index 01dd47154181..7e94754eb34c 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -48,9 +48,11 @@ #include #include #include +#include #include #include +#include #include #include @@ -569,7 +571,6 @@ EXPORT_SYMBOL(drm_sched_resubmit_jobs_ext); /** * drm_sched_job_init - init a scheduler job - * * @job: scheduler job to init * @entity: scheduler entity to use * @owner: job owner for debugging @@ -577,13 +578,45 @@ EXPORT_SYMBOL(drm_sched_resubmit_jobs_ext); * Refer to drm_sched_entity_push_job() documentation * for locking considerations. * + * Drivers must make sure drm_sched_job_cleanup() if this function returns + * successfully, even when @job is aborted before drm_sched_job_arm() is called. + * * Returns 0 for success, negative error code otherwise. */ int drm_sched_job_init(struct drm_sched_job *job, struct drm_sched_entity *entity, void *owner) +{ + job->entity = entity; + job->s_fence = drm_sched_fence_alloc(entity, owner); + if (!job->s_fence) + return -ENOMEM; + + INIT_LIST_HEAD(&job->list); + + return 0; +} +EXPORT_SYMBOL(drm_sched_job_init); + +/** + * drm_sched_job_arm - arm a scheduler job for execution + * @job: scheduler job to arm + * + * This arms a scheduler job for execution. Specifically it initializes the + * &drm_sched_job.s_fence of @job, so that it can be attached to struct dma_resv + * or other places that need to track the completion of this job. + * + * Refer to drm_sched_entity_push_job() documentation for locking + * considerations. + * + * This can only be called if drm_sched_job_init() succeeded. 
+ */ +void drm_sched_job_arm(struct drm_sched_job *job) { struct drm_gpu_scheduler *sched; + struct drm_sched_entity *entity = job->entity; + + BUG_ON(!entity); drm_sched_entity_select_rq(entity); BUG_ON(!entity->rq); @@ -591,27 +624,36 @@ int drm_sched_job_init(struct drm_sched_job *job, sched = entity->rq->sched; job->sched = sched; - job->entity = entity; job->s_priority = entity->rq - sched->sched_rq; - job->s_fence = drm_sched_fence_create(entity, owner); - if (!job->s_fence) - return -ENOMEM; job->id = atomic64_inc_return(&sched->job_id_count); - INIT_LIST_HEAD(&job->list); - - return 0; + drm_sched_fence_init(job->s_fence, job->entity); } -EXPORT_SYMBOL(drm_sched_job_init); +EXPORT_SYMBOL(drm_sched_job_arm); /** * drm_sched_job_cleanup - clean up scheduler job resources - * * @job: scheduler job to clean up + * + * Cleans up the resources allocated with drm_sched_job_init(). + * + * Drivers should call this from their error unwind code if @job is aborted + * before drm_sched_job_arm() is called. + * + * After that point of no return @job is committed to be executed by the + * scheduler, and this function should be called from the + * &drm_sched_backend_ops.free_job callback. */ void drm_sched_job_cleanup(struct drm_sched_job *job) { - dma_fence_put(&job->s_fence->finished); + if (kref_read(&job->s_fence->finished.refcount)) { + /* drm_sched_job_arm() has been called */ + dma_fence_put(&job->s_fence->finished); + } else { + /* aborted job before committing to run it */ + drm_sched_fence_free(&job->s_fence->finished.rcu); + } + job->s_fence = NULL; } EXPORT_SYMBOL(drm_sched_job_cleanup); diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c index 4eb354226972..5c3a99027ecd 100644 --- a/drivers/gpu/drm/v3d/v3d_gem.c +++ b/drivers/gpu/drm/v3d/v3d_gem.c @@ -475,6 +475,8 @@ v3d_push_job(struct v3d_file_priv *v3d_priv, if (ret) return ret; + drm_sched_job_arm(&job->base); + job->done_fence = dma_fence_get(&job->base.s_fence->finished); /* put by scheduler job completion */ diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index 88ae7f331bb1..83afc3aa8e2f 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -348,6 +348,7 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched); int drm_sched_job_init(struct drm_sched_job *job, struct drm_sched_entity *entity, void *owner); +void drm_sched_job_arm(struct drm_sched_job *job); void drm_sched_entity_modify_sched(struct drm_sched_entity *entity, struct drm_gpu_scheduler **sched_list, unsigned int num_sched_list); @@ -387,8 +388,12 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity, enum drm_sched_priority priority); bool drm_sched_entity_is_ready(struct drm_sched_entity *entity); -struct drm_sched_fence *drm_sched_fence_create( +struct drm_sched_fence *drm_sched_fence_alloc( struct drm_sched_entity *s_entity, void *owner); +void drm_sched_fence_init(struct drm_sched_fence *fence, + struct drm_sched_entity *entity); +void drm_sched_fence_free(struct rcu_head *rcu); + void drm_sched_fence_scheduled(struct drm_sched_fence *fence); void drm_sched_fence_finished(struct drm_sched_fence *fence); From patchwork Thu Jul 8 17:37:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Daniel Vetter X-Patchwork-Id: 472130 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 
From: Daniel Vetter
To: DRI Development
Subject: [PATCH v3 04/20] drm/sched: Add dependency tracking
Date: Thu, 8 Jul 2021 19:37:38 +0200
Message-Id: <20210708173754.3877540-5-daniel.vetter@ffwll.ch>
In-Reply-To: <20210708173754.3877540-1-daniel.vetter@ffwll.ch>
List-ID: X-Mailing-List: linux-media@vger.kernel.org Instead of just a callback we can just glue in the gem helpers that panfrost, v3d and lima currently use. There's really not that many ways to skin this cat. On the naming bikeshed: The idea for using _await_ to denote adding dependencies to a job comes from i915, where that's used quite extensively all over the place, in lots of datastructures. v2/3: Rebased. Reviewed-by: Steven Price (v1) Signed-off-by: Daniel Vetter Cc: David Airlie Cc: Daniel Vetter Cc: Sumit Semwal Cc: "Christian König" Cc: Andrey Grodzovsky Cc: Lee Jones Cc: Nirmoy Das Cc: Boris Brezillon Cc: Luben Tuikov Cc: Alex Deucher Cc: Jack Zhang Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org --- drivers/gpu/drm/scheduler/sched_entity.c | 18 +++- drivers/gpu/drm/scheduler/sched_main.c | 103 +++++++++++++++++++++++ include/drm/gpu_scheduler.h | 31 ++++++- 3 files changed, 146 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c index 4e1124ed80e0..c7e6d29c9a33 100644 --- a/drivers/gpu/drm/scheduler/sched_entity.c +++ b/drivers/gpu/drm/scheduler/sched_entity.c @@ -211,6 +211,19 @@ static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f, job->sched->ops->free_job(job); } +static struct dma_fence * +drm_sched_job_dependency(struct drm_sched_job *job, + struct drm_sched_entity *entity) +{ + if (!xa_empty(&job->dependencies)) + return xa_erase(&job->dependencies, job->last_dependency++); + + if (job->sched->ops->dependency) + return job->sched->ops->dependency(job, entity); + + return NULL; +} + /** * drm_sched_entity_kill_jobs - Make sure all remaining jobs are killed * @@ -229,7 +242,7 @@ static void drm_sched_entity_kill_jobs(struct drm_sched_entity *entity) struct drm_sched_fence *s_fence = job->s_fence; /* Wait for all dependencies to avoid data corruptions */ - while ((f = job->sched->ops->dependency(job, entity))) + while ((f = drm_sched_job_dependency(job, entity))) dma_fence_wait(f, false); drm_sched_fence_scheduled(s_fence); @@ -419,7 +432,6 @@ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity) */ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity) { - struct drm_gpu_scheduler *sched = entity->rq->sched; struct drm_sched_job *sched_job; sched_job = to_drm_sched_job(spsc_queue_peek(&entity->job_queue)); @@ -427,7 +439,7 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity) return NULL; while ((entity->dependency = - sched->ops->dependency(sched_job, entity))) { + drm_sched_job_dependency(sched_job, entity))) { trace_drm_sched_job_wait_dep(sched_job, entity->dependency); if (drm_sched_entity_add_dependency_cb(entity)) diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index 7e94754eb34c..ad62f1d2991c 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -594,6 +594,8 @@ int drm_sched_job_init(struct drm_sched_job *job, INIT_LIST_HEAD(&job->list); + xa_init_flags(&job->dependencies, XA_FLAGS_ALLOC); + return 0; } EXPORT_SYMBOL(drm_sched_job_init); @@ -631,6 +633,98 @@ void drm_sched_job_arm(struct drm_sched_job *job) } EXPORT_SYMBOL(drm_sched_job_arm); +/** + * drm_sched_job_await_fence - adds the fence as a job dependency + * @job: scheduler job to add the dependencies to + * @fence: the dma_fence to add to the list of dependencies. 
+ * + * Note that @fence is consumed in both the success and error cases. + * + * Returns: + * 0 on success, or an error on failing to expand the array. + */ +int drm_sched_job_await_fence(struct drm_sched_job *job, + struct dma_fence *fence) +{ + struct dma_fence *entry; + unsigned long index; + u32 id = 0; + int ret; + + if (!fence) + return 0; + + /* Deduplicate if we already depend on a fence from the same context. + * This lets the size of the array of deps scale with the number of + * engines involved, rather than the number of BOs. + */ + xa_for_each(&job->dependencies, index, entry) { + if (entry->context != fence->context) + continue; + + if (dma_fence_is_later(fence, entry)) { + dma_fence_put(entry); + xa_store(&job->dependencies, index, fence, GFP_KERNEL); + } else { + dma_fence_put(fence); + } + return 0; + } + + ret = xa_alloc(&job->dependencies, &id, fence, xa_limit_32b, GFP_KERNEL); + if (ret != 0) + dma_fence_put(fence); + + return ret; +} +EXPORT_SYMBOL(drm_sched_job_await_fence); + +/** + * drm_sched_job_await_implicit - adds implicit dependencies as job dependencies + * @job: scheduler job to add the dependencies to + * @obj: the gem object to add new dependencies from. + * @write: whether the job might write the object (so we need to depend on + * shared fences in the reservation object). + * + * This should be called after drm_gem_lock_reservations() on your array of + * GEM objects used in the job but before updating the reservations with your + * own fences. + * + * Returns: + * 0 on success, or an error on failing to expand the array. + */ +int drm_sched_job_await_implicit(struct drm_sched_job *job, + struct drm_gem_object *obj, + bool write) +{ + int ret; + struct dma_fence **fences; + unsigned int i, fence_count; + + if (!write) { + struct dma_fence *fence = dma_resv_get_excl_unlocked(obj->resv); + + return drm_sched_job_await_fence(job, fence); + } + + ret = dma_resv_get_fences(obj->resv, NULL, &fence_count, &fences); + if (ret || !fence_count) + return ret; + + for (i = 0; i < fence_count; i++) { + ret = drm_sched_job_await_fence(job, fences[i]); + if (ret) + break; + } + + for (; i < fence_count; i++) + dma_fence_put(fences[i]); + kfree(fences); + return ret; +} +EXPORT_SYMBOL(drm_sched_job_await_implicit); + + /** * drm_sched_job_cleanup - clean up scheduler job resources * @job: scheduler job to clean up @@ -646,6 +740,9 @@ EXPORT_SYMBOL(drm_sched_job_arm); */ void drm_sched_job_cleanup(struct drm_sched_job *job) { + struct dma_fence *fence; + unsigned long index; + if (kref_read(&job->s_fence->finished.refcount)) { /* drm_sched_job_arm() has been called */ dma_fence_put(&job->s_fence->finished); @@ -655,6 +752,12 @@ void drm_sched_job_cleanup(struct drm_sched_job *job) } job->s_fence = NULL; + + xa_for_each(&job->dependencies, index, fence) { + dma_fence_put(fence); + } + xa_destroy(&job->dependencies); + } EXPORT_SYMBOL(drm_sched_job_cleanup); diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index 83afc3aa8e2f..74fb321dbc44 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -27,9 +27,12 @@ #include #include #include +#include #define MAX_WAIT_SCHED_ENTITY_Q_EMPTY msecs_to_jiffies(1000) +struct drm_gem_object; + struct drm_gpu_scheduler; struct drm_sched_rq; @@ -198,6 +201,16 @@ struct drm_sched_job { enum drm_sched_priority s_priority; struct drm_sched_entity *entity; struct dma_fence_cb cb; + /** + * @dependencies: + * + * Contains the dependencies as struct dma_fence for this job, see + * 
drm_sched_job_await_fence() and drm_sched_job_await_implicit(). + */ + struct xarray dependencies; + + /** @last_dependency: tracks @dependencies as they signal */ + unsigned long last_dependency; }; static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job, @@ -220,9 +233,14 @@ enum drm_gpu_sched_stat { */ struct drm_sched_backend_ops { /** - * @dependency: Called when the scheduler is considering scheduling - * this job next, to get another struct dma_fence for this job to - * block on. Once it returns NULL, run_job() may be called. + * @dependency: + * + * Called when the scheduler is considering scheduling this job next, to + * get another struct dma_fence for this job to block on. Once it + * returns NULL, run_job() may be called. + * + * If a driver exclusively uses drm_sched_job_await_fence() and + * drm_sched_job_await_implicit() this can be ommitted and left as NULL. */ struct dma_fence *(*dependency)(struct drm_sched_job *sched_job, struct drm_sched_entity *s_entity); @@ -349,6 +367,13 @@ int drm_sched_job_init(struct drm_sched_job *job, struct drm_sched_entity *entity, void *owner); void drm_sched_job_arm(struct drm_sched_job *job); +int drm_sched_job_await_fence(struct drm_sched_job *job, + struct dma_fence *fence); +int drm_sched_job_await_implicit(struct drm_sched_job *job, + struct drm_gem_object *obj, + bool write); + + void drm_sched_entity_modify_sched(struct drm_sched_entity *entity, struct drm_gpu_scheduler **sched_list, unsigned int num_sched_list); From patchwork Thu Jul 8 17:37:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Daniel Vetter X-Patchwork-Id: 472129 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 138F8C07E99 for ; Thu, 8 Jul 2021 17:38:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EBF7B6191D for ; Thu, 8 Jul 2021 17:38:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229842AbhGHRk4 (ORCPT ); Thu, 8 Jul 2021 13:40:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35384 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229650AbhGHRk4 (ORCPT ); Thu, 8 Jul 2021 13:40:56 -0400 Received: from mail-wr1-x436.google.com (mail-wr1-x436.google.com [IPv6:2a00:1450:4864:20::436]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CAF57C061574 for ; Thu, 8 Jul 2021 10:38:13 -0700 (PDT) Received: by mail-wr1-x436.google.com with SMTP id a13so8504665wrf.10 for ; Thu, 08 Jul 2021 10:38:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ffwll.ch; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=1pvMr02hIHq2AEIJDWAC7sVPUBJ1dbEEa4aJ2D+xVCw=; b=BMeONXb5sS1kIcrHUY6v/jOtLtXPvpzXsLrkqKS0rhufnuy0mCjpIHYEKQrIVgsSP7 EYz10n+o0MEqnFESxpTj95W4GZpvGXvrO1UYLYkRVuMdeGzd42nUjFbrcj9vdxPF7zwv paVASeOml48hFOo0JLW6rd/1enG1JDhLqMp28= X-Google-DKIM-Signature: v=1; 
From: Daniel Vetter
To: DRI Development
Subject: [PATCH v3 05/20] drm/sched: drop entity parameter from drm_sched_push_job
Date: Thu, 8 Jul 2021 19:37:39 +0200
Message-Id: <20210708173754.3877540-6-daniel.vetter@ffwll.ch>
In-Reply-To: <20210708173754.3877540-1-daniel.vetter@ffwll.ch>

Originally a job was only bound to the queue when we pushed it, but now that's done in drm_sched_job_init, making that parameter entirely redundant. Remove it.

The same applies to the context parameter in lima_sched_context_queue_task, so simplify that too.
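To illustrate the resulting call pattern (not part of the diff; my_queue_task and struct my_task are made-up names in the style of lima_sched_context_queue_task):

static struct dma_fence *my_queue_task(struct my_task *task,
				       struct drm_sched_entity *entity)
{
	struct dma_fence *fence;
	int ret;

	ret = drm_sched_job_init(&task->base, entity, NULL);
	if (ret)
		return ERR_PTR(ret);

	drm_sched_job_arm(&task->base);
	fence = dma_fence_get(&task->base.s_fence->finished);

	/*
	 * The entity was recorded by drm_sched_job_init(), so it no longer
	 * has to be passed here (previously this was
	 * drm_sched_entity_push_job(&task->base, entity)).
	 */
	drm_sched_entity_push_job(&task->base);

	return fence;
}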
Reviewed-by: Steven Price (v1) Signed-off-by: Daniel Vetter Cc: Lucas Stach Cc: Russell King Cc: Christian Gmeiner Cc: Qiang Yu Cc: Rob Herring Cc: Tomeu Vizoso Cc: Steven Price Cc: Alyssa Rosenzweig Cc: Emma Anholt Cc: David Airlie Cc: Daniel Vetter Cc: Sumit Semwal Cc: "Christian König" Cc: Alex Deucher Cc: Nirmoy Das Cc: Dave Airlie Cc: Chen Li Cc: Lee Jones Cc: Deepak R Varma Cc: Kevin Wang Cc: Luben Tuikov Cc: "Marek Olšák" Cc: Maarten Lankhorst Cc: Andrey Grodzovsky Cc: Dennis Li Cc: Boris Brezillon Cc: etnaviv@lists.freedesktop.org Cc: lima@lists.freedesktop.org Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +- drivers/gpu/drm/etnaviv/etnaviv_sched.c | 2 +- drivers/gpu/drm/lima/lima_gem.c | 3 +-- drivers/gpu/drm/lima/lima_sched.c | 5 ++--- drivers/gpu/drm/lima/lima_sched.h | 3 +-- drivers/gpu/drm/panfrost/panfrost_job.c | 2 +- drivers/gpu/drm/scheduler/sched_entity.c | 6 ++---- drivers/gpu/drm/v3d/v3d_gem.c | 2 +- include/drm/gpu_scheduler.h | 3 +-- 10 files changed, 12 insertions(+), 18 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index a4ec092af9a7..18f63567fb69 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -1267,7 +1267,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, trace_amdgpu_cs_ioctl(job); amdgpu_vm_bo_trace_cs(&fpriv->vm, &p->ticket); - drm_sched_entity_push_job(&job->base, entity); + drm_sched_entity_push_job(&job->base); amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c index 5ddb955d2315..b8609cccc9c1 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c @@ -174,7 +174,7 @@ int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity, *f = dma_fence_get(&job->base.s_fence->finished); amdgpu_job_free_resources(job); - drm_sched_entity_push_job(&job->base, entity); + drm_sched_entity_push_job(&job->base); return 0; } diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c index 05f412204118..180bb633d5c5 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c @@ -178,7 +178,7 @@ int etnaviv_sched_push_job(struct drm_sched_entity *sched_entity, /* the scheduler holds on to the job now */ kref_get(&submit->refcount); - drm_sched_entity_push_job(&submit->sched_job, sched_entity); + drm_sched_entity_push_job(&submit->sched_job); out_unlock: mutex_unlock(&submit->gpu->fence_lock); diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index de62966243cd..c528f40981bb 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -359,8 +359,7 @@ int lima_gem_submit(struct drm_file *file, struct lima_submit *submit) goto err_out2; } - fence = lima_sched_context_queue_task( - submit->ctx->context + submit->pipe, submit->task); + fence = lima_sched_context_queue_task(submit->task); for (i = 0; i < submit->nr_bos; i++) { if (submit->bos[i].flags & LIMA_SUBMIT_BO_WRITE) diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c index 38f755580507..e968b5a8f0b0 100644 --- a/drivers/gpu/drm/lima/lima_sched.c +++ b/drivers/gpu/drm/lima/lima_sched.c @@ -177,13 +177,12 @@ void lima_sched_context_fini(struct lima_sched_pipe *pipe, 
drm_sched_entity_fini(&context->base); } -struct dma_fence *lima_sched_context_queue_task(struct lima_sched_context *context, - struct lima_sched_task *task) +struct dma_fence *lima_sched_context_queue_task(struct lima_sched_task *task) { struct dma_fence *fence = dma_fence_get(&task->base.s_fence->finished); trace_lima_task_submit(task); - drm_sched_entity_push_job(&task->base, &context->base); + drm_sched_entity_push_job(&task->base); return fence; } diff --git a/drivers/gpu/drm/lima/lima_sched.h b/drivers/gpu/drm/lima/lima_sched.h index 90f03c48ef4a..ac70006b0e26 100644 --- a/drivers/gpu/drm/lima/lima_sched.h +++ b/drivers/gpu/drm/lima/lima_sched.h @@ -98,8 +98,7 @@ int lima_sched_context_init(struct lima_sched_pipe *pipe, atomic_t *guilty); void lima_sched_context_fini(struct lima_sched_pipe *pipe, struct lima_sched_context *context); -struct dma_fence *lima_sched_context_queue_task(struct lima_sched_context *context, - struct lima_sched_task *task); +struct dma_fence *lima_sched_context_queue_task(struct lima_sched_task *task); int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char *name); void lima_sched_pipe_fini(struct lima_sched_pipe *pipe); diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c index 2992dc85325f..4bc962763e1f 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.c +++ b/drivers/gpu/drm/panfrost/panfrost_job.c @@ -301,7 +301,7 @@ int panfrost_job_push(struct panfrost_job *job) kref_get(&job->refcount); /* put by scheduler job completion */ - drm_sched_entity_push_job(&job->base, entity); + drm_sched_entity_push_job(&job->base); mutex_unlock(&pfdev->sched_lock); diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c index c7e6d29c9a33..e2a6803910ce 100644 --- a/drivers/gpu/drm/scheduler/sched_entity.c +++ b/drivers/gpu/drm/scheduler/sched_entity.c @@ -516,9 +516,7 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) /** * drm_sched_entity_push_job - Submit a job to the entity's job queue - * * @sched_job: job to submit - * @entity: scheduler entity * * Note: To guarantee that the order of insertion to queue matches the job's * fence sequence number this function should be called with drm_sched_job_arm() @@ -526,9 +524,9 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) * * Returns 0 for success, negative error code otherwise. 
*/ -void drm_sched_entity_push_job(struct drm_sched_job *sched_job, - struct drm_sched_entity *entity) +void drm_sched_entity_push_job(struct drm_sched_job *sched_job) { + struct drm_sched_entity *entity = sched_job->entity; bool first; trace_drm_sched_job(sched_job, entity); diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c index 5c3a99027ecd..69ac20e11b09 100644 --- a/drivers/gpu/drm/v3d/v3d_gem.c +++ b/drivers/gpu/drm/v3d/v3d_gem.c @@ -482,7 +482,7 @@ v3d_push_job(struct v3d_file_priv *v3d_priv, /* put by scheduler job completion */ kref_get(&job->refcount); - drm_sched_entity_push_job(&job->base, &v3d_priv->sched_entity[queue]); + drm_sched_entity_push_job(&job->base); return 0; } diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index 74fb321dbc44..2bb1869f2352 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -407,8 +407,7 @@ void drm_sched_entity_fini(struct drm_sched_entity *entity); void drm_sched_entity_destroy(struct drm_sched_entity *entity); void drm_sched_entity_select_rq(struct drm_sched_entity *entity); struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity); -void drm_sched_entity_push_job(struct drm_sched_job *sched_job, - struct drm_sched_entity *entity); +void drm_sched_entity_push_job(struct drm_sched_job *sched_job); void drm_sched_entity_set_priority(struct drm_sched_entity *entity, enum drm_sched_priority priority); bool drm_sched_entity_is_ready(struct drm_sched_entity *entity); From patchwork Thu Jul 8 17:37:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Daniel Vetter X-Patchwork-Id: 471580 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EA83EC07E99 for ; Thu, 8 Jul 2021 17:38:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D596A6192D for ; Thu, 8 Jul 2021 17:38:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229846AbhGHRk7 (ORCPT ); Thu, 8 Jul 2021 13:40:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35392 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229650AbhGHRk6 (ORCPT ); Thu, 8 Jul 2021 13:40:58 -0400 Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com [IPv6:2a00:1450:4864:20::329]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5F891C061574 for ; Thu, 8 Jul 2021 10:38:16 -0700 (PDT) Received: by mail-wm1-x329.google.com with SMTP id k16-20020a05600c1c90b02901f4ed0fcfe7so4547357wms.5 for ; Thu, 08 Jul 2021 10:38:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ffwll.ch; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=6CoSDQimINzuQMNmfQzI9m6jHRTl1DpQg1i8VsEPplE=; b=comVH3WXZENRluWhsm3XZUuXprJE4MTtXErDzvdFDjEBmtKavAYOFvOHcxB/8zZFez eJHyAnowlUdkk0kOKeEeEBC9y/TgUsglfEu+Ee3ynGoiOOYpxS/pdTUJsLKajwgYkTs4 
From: Daniel Vetter
To: DRI Development
Subject: [PATCH v3 07/20] drm/panfrost: use scheduler dependency tracking
Date: Thu, 8 Jul 2021 19:37:41 +0200
Message-Id: <20210708173754.3877540-8-daniel.vetter@ffwll.ch>
In-Reply-To: <20210708173754.3877540-1-daniel.vetter@ffwll.ch>

Just deletes some code that's now more shared.

Note that thanks to the split into drm_sched_job_init/arm we can now easily pull the _init() part out from under the submission lock, way ahead to where we're adding the sync file in-fences as dependencies.

v2: Correctly clean up the partially set up job, now that job_init() and job_arm() are apart (Emma).
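For reference, the overall pattern after this conversion looks roughly like the sketch below. It is simplified and uses made-up names (my_job_push, struct my_job); reservation locking and the render_done_fence handling are omitted.

static int my_job_push(struct my_job *job, struct drm_sched_entity *entity,
		       struct dma_fence *in_fence,
		       struct drm_gem_object **bos, int bo_count)
{
	int i, ret;

	/* Can now be done early, outside the submission lock. */
	ret = drm_sched_job_init(&job->base, entity, NULL);
	if (ret)
		return ret;

	/* Explicit in-fence from the sync file; the helper consumes it. */
	ret = drm_sched_job_await_fence(&job->base, in_fence);
	if (ret)
		goto err_cleanup;

	/* Implicit fences from the BO reservations (panfrost always writes). */
	for (i = 0; i < bo_count; i++) {
		ret = drm_sched_job_await_implicit(&job->base, bos[i], true);
		if (ret)
			goto err_cleanup;
	}

	drm_sched_job_arm(&job->base);
	drm_sched_entity_push_job(&job->base);
	return 0;

err_cleanup:
	/* Also drops any dependencies already added above. */
	drm_sched_job_cleanup(&job->base);
	return ret;
}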
Reviewed-by: Steven Price (v1) Signed-off-by: Daniel Vetter Cc: Rob Herring Cc: Tomeu Vizoso Cc: Steven Price Cc: Alyssa Rosenzweig Cc: Sumit Semwal Cc: "Christian König" Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org Reviewed-by: Steven Price --- drivers/gpu/drm/panfrost/panfrost_drv.c | 16 ++++++++--- drivers/gpu/drm/panfrost/panfrost_job.c | 37 +++---------------------- drivers/gpu/drm/panfrost/panfrost_job.h | 5 +--- 3 files changed, 17 insertions(+), 41 deletions(-) diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c index 1ffaef5ec5ff..9f53bea07d61 100644 --- a/drivers/gpu/drm/panfrost/panfrost_drv.c +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c @@ -218,7 +218,7 @@ panfrost_copy_in_sync(struct drm_device *dev, if (ret) goto fail; - ret = drm_gem_fence_array_add(&job->deps, fence); + ret = drm_sched_job_await_fence(&job->base, fence); if (ret) goto fail; @@ -236,7 +236,7 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data, struct drm_panfrost_submit *args = data; struct drm_syncobj *sync_out = NULL; struct panfrost_job *job; - int ret = 0; + int ret = 0, slot; if (!args->jc) return -EINVAL; @@ -258,14 +258,20 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data, kref_init(&job->refcount); - xa_init_flags(&job->deps, XA_FLAGS_ALLOC); - job->pfdev = pfdev; job->jc = args->jc; job->requirements = args->requirements; job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev); job->file_priv = file->driver_priv; + slot = panfrost_job_get_slot(job); + + ret = drm_sched_job_init(&job->base, + &job->file_priv->sched_entity[slot], + NULL); + if (ret) + goto fail_job_put; + ret = panfrost_copy_in_sync(dev, file, args, job); if (ret) goto fail_job; @@ -283,6 +289,8 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data, drm_syncobj_replace_fence(sync_out, job->render_done_fence); fail_job: + drm_sched_job_cleanup(&job->base); +fail_job_put: panfrost_job_put(job); fail_out_sync: if (sync_out) diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c index 4bc962763e1f..86c843d8822e 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.c +++ b/drivers/gpu/drm/panfrost/panfrost_job.c @@ -102,7 +102,7 @@ static struct dma_fence *panfrost_fence_create(struct panfrost_device *pfdev, in return &fence->base; } -static int panfrost_job_get_slot(struct panfrost_job *job) +int panfrost_job_get_slot(struct panfrost_job *job) { /* JS0: fragment jobs. 
* JS1: vertex/tiler jobs @@ -242,13 +242,13 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js) static int panfrost_acquire_object_fences(struct drm_gem_object **bos, int bo_count, - struct xarray *deps) + struct drm_sched_job *job) { int i, ret; for (i = 0; i < bo_count; i++) { /* panfrost always uses write mode in its current uapi */ - ret = drm_gem_fence_array_add_implicit(deps, bos[i], true); + ret = drm_sched_job_await_implicit(job, bos[i], true); if (ret) return ret; } @@ -269,31 +269,21 @@ static void panfrost_attach_object_fences(struct drm_gem_object **bos, int panfrost_job_push(struct panfrost_job *job) { struct panfrost_device *pfdev = job->pfdev; - int slot = panfrost_job_get_slot(job); - struct drm_sched_entity *entity = &job->file_priv->sched_entity[slot]; struct ww_acquire_ctx acquire_ctx; int ret = 0; - ret = drm_gem_lock_reservations(job->bos, job->bo_count, &acquire_ctx); if (ret) return ret; mutex_lock(&pfdev->sched_lock); - - ret = drm_sched_job_init(&job->base, entity, NULL); - if (ret) { - mutex_unlock(&pfdev->sched_lock); - goto unlock; - } - drm_sched_job_arm(&job->base); job->render_done_fence = dma_fence_get(&job->base.s_fence->finished); ret = panfrost_acquire_object_fences(job->bos, job->bo_count, - &job->deps); + &job->base); if (ret) { mutex_unlock(&pfdev->sched_lock); goto unlock; @@ -318,15 +308,8 @@ static void panfrost_job_cleanup(struct kref *ref) { struct panfrost_job *job = container_of(ref, struct panfrost_job, refcount); - struct dma_fence *fence; - unsigned long index; unsigned int i; - xa_for_each(&job->deps, index, fence) { - dma_fence_put(fence); - } - xa_destroy(&job->deps); - dma_fence_put(job->done_fence); dma_fence_put(job->render_done_fence); @@ -365,17 +348,6 @@ static void panfrost_job_free(struct drm_sched_job *sched_job) panfrost_job_put(job); } -static struct dma_fence *panfrost_job_dependency(struct drm_sched_job *sched_job, - struct drm_sched_entity *s_entity) -{ - struct panfrost_job *job = to_panfrost_job(sched_job); - - if (!xa_empty(&job->deps)) - return xa_erase(&job->deps, job->last_dep++); - - return NULL; -} - static struct dma_fence *panfrost_job_run(struct drm_sched_job *sched_job) { struct panfrost_job *job = to_panfrost_job(sched_job); @@ -765,7 +737,6 @@ static void panfrost_reset_work(struct work_struct *work) } static const struct drm_sched_backend_ops panfrost_sched_ops = { - .dependency = panfrost_job_dependency, .run_job = panfrost_job_run, .timedout_job = panfrost_job_timedout, .free_job = panfrost_job_free diff --git a/drivers/gpu/drm/panfrost/panfrost_job.h b/drivers/gpu/drm/panfrost/panfrost_job.h index 82306a03b57e..77e6d0e6f612 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.h +++ b/drivers/gpu/drm/panfrost/panfrost_job.h @@ -19,10 +19,6 @@ struct panfrost_job { struct panfrost_device *pfdev; struct panfrost_file_priv *file_priv; - /* Contains both explicit and implicit fences */ - struct xarray deps; - unsigned long last_dep; - /* Fence to be signaled by IRQ handler when the job is complete. 
*/ struct dma_fence *done_fence; @@ -42,6 +38,7 @@ int panfrost_job_init(struct panfrost_device *pfdev); void panfrost_job_fini(struct panfrost_device *pfdev); int panfrost_job_open(struct panfrost_file_priv *panfrost_priv); void panfrost_job_close(struct panfrost_file_priv *panfrost_priv); +int panfrost_job_get_slot(struct panfrost_job *job); int panfrost_job_push(struct panfrost_job *job); void panfrost_job_put(struct panfrost_job *job); void panfrost_job_enable_interrupts(struct panfrost_device *pfdev); From patchwork Thu Jul 8 17:37:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Daniel Vetter X-Patchwork-Id: 472128 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 42D4EC07E9C for ; Thu, 8 Jul 2021 17:38:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2D25561879 for ; Thu, 8 Jul 2021 17:38:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229866AbhGHRlA (ORCPT ); Thu, 8 Jul 2021 13:41:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35402 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229815AbhGHRk7 (ORCPT ); Thu, 8 Jul 2021 13:40:59 -0400 Received: from mail-wm1-x331.google.com (mail-wm1-x331.google.com [IPv6:2a00:1450:4864:20::331]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A5C6EC061574 for ; Thu, 8 Jul 2021 10:38:17 -0700 (PDT) Received: by mail-wm1-x331.google.com with SMTP id l18-20020a1ced120000b029014c1adff1edso7348999wmh.4 for ; Thu, 08 Jul 2021 10:38:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ffwll.ch; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=mls2ejKErj4+c/cv4r8BLVM3Y5XNHv+hyokEAjrvlQQ=; b=IIF9iyvY6exRnC2Tldk5pPDZhe3//qmZGbSZqZVy4/VVvTCp6g6CRT/VmThqsvLEUc orLhRYsA2UKPlNNwkZwpYweBJMqmvrqVyMT4i72pFQwnFVgEVDKjXxceQ18N7NNjHRko TxrnLU5FP7l84aSxsVgvPpnwu0IGIwtcmVWuo= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=mls2ejKErj4+c/cv4r8BLVM3Y5XNHv+hyokEAjrvlQQ=; b=aLL7eS6yJs6z0D42UVC2BnWMSRqyvB+mc1BVZQceZc3gi8dwhJpGGVvr3mPSdJmNVY i/botjyRG7/XJdHRQx3+1SdZFQdPtLrG5oarmn7kHaZY5O8/4A39LuMtPXO1Pei3fKqt V0MbjpztWy2o1O9FFPJpab7cpmynVwgXQp37LagExDrg5NqqGtb13Is4PlltyBpG3Y7M Gin3r/iPPP4IzuYp27o3xnbwWROkAfMvcftI1MuGQcAMO8Pw1PqQexYNfJvisWVM7agK tSpG+BnVX7D4Eh0X8hWqQgv49DD6D24j8pW6pGYVpq12eem0vbgx3aJrlymRxEX1nIPz wYyQ== X-Gm-Message-State: AOAM531snyHScA0K/EETv4be62FpTw0HdOUP4M1h0sLUPxBY/a8ZgB0p 0nNa1BFXWdNMu21hJ8fPz96i8A== X-Google-Smtp-Source: ABdhPJxYS1PxW6pEpvMukPEy2FkcrmGfCw1y8tVGm6EpHZsEKDmHQd/ZnKUJ5sKhK1zprMgVk0lvEA== X-Received: by 2002:a7b:c955:: with SMTP id i21mr6605382wml.147.1625765896273; Thu, 08 Jul 2021 10:38:16 -0700 (PDT) Received: from phenom.ffwll.local 
([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa]) by smtp.gmail.com with ESMTPSA id l4sm2521853wme.26.2021.07.08.10.38.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 08 Jul 2021 10:38:15 -0700 (PDT) From: Daniel Vetter To: DRI Development Cc: Intel Graphics Development , Daniel Vetter , Daniel Vetter , Qiang Yu , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , lima@lists.freedesktop.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org Subject: [PATCH v3 08/20] drm/lima: use scheduler dependency tracking Date: Thu, 8 Jul 2021 19:37:42 +0200 Message-Id: <20210708173754.3877540-9-daniel.vetter@ffwll.ch> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20210708173754.3877540-1-daniel.vetter@ffwll.ch> References: <20210708173754.3877540-1-daniel.vetter@ffwll.ch> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Nothing special going on here. Aside reviewing the code, it seems like drm_sched_job_arm() should be moved into lima_sched_context_queue_task and put under some mutex together with drm_sched_push_job(). See the kerneldoc for drm_sched_push_job(). Signed-off-by: Daniel Vetter Cc: Qiang Yu Cc: Sumit Semwal Cc: "Christian König" Cc: lima@lists.freedesktop.org Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org --- drivers/gpu/drm/lima/lima_gem.c | 4 ++-- drivers/gpu/drm/lima/lima_sched.c | 21 --------------------- drivers/gpu/drm/lima/lima_sched.h | 3 --- 3 files changed, 2 insertions(+), 26 deletions(-) diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index c528f40981bb..e54a88d5037a 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -267,7 +267,7 @@ static int lima_gem_sync_bo(struct lima_sched_task *task, struct lima_bo *bo, if (explicit) return 0; - return drm_gem_fence_array_add_implicit(&task->deps, &bo->base.base, write); + return drm_sched_job_await_implicit(&task->base, &bo->base.base, write); } static int lima_gem_add_deps(struct drm_file *file, struct lima_submit *submit) @@ -285,7 +285,7 @@ static int lima_gem_add_deps(struct drm_file *file, struct lima_submit *submit) if (err) return err; - err = drm_gem_fence_array_add(&submit->task->deps, fence); + err = drm_sched_job_await_fence(&submit->task->base, fence); if (err) { dma_fence_put(fence); return err; diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c index e968b5a8f0b0..99d5f6f1a882 100644 --- a/drivers/gpu/drm/lima/lima_sched.c +++ b/drivers/gpu/drm/lima/lima_sched.c @@ -134,24 +134,15 @@ int lima_sched_task_init(struct lima_sched_task *task, task->num_bos = num_bos; task->vm = lima_vm_get(vm); - xa_init_flags(&task->deps, XA_FLAGS_ALLOC); - return 0; } void lima_sched_task_fini(struct lima_sched_task *task) { - struct dma_fence *fence; - unsigned long index; int i; drm_sched_job_cleanup(&task->base); - xa_for_each(&task->deps, index, fence) { - dma_fence_put(fence); - } - xa_destroy(&task->deps); - if (task->bos) { for (i = 0; i < task->num_bos; i++) drm_gem_object_put(&task->bos[i]->base.base); @@ -186,17 +177,6 @@ struct dma_fence *lima_sched_context_queue_task(struct lima_sched_task *task) return fence; } -static struct dma_fence *lima_sched_dependency(struct drm_sched_job *job, - struct drm_sched_entity *entity) -{ - struct lima_sched_task *task = to_lima_task(job); - - if (!xa_empty(&task->deps)) - return xa_erase(&task->deps, task->last_dep++); - - return NULL; -} - static int lima_pm_busy(struct lima_device *ldev) { int ret; @@ -472,7 +452,6 @@ 
static void lima_sched_free_job(struct drm_sched_job *job) } static const struct drm_sched_backend_ops lima_sched_ops = { - .dependency = lima_sched_dependency, .run_job = lima_sched_run_job, .timedout_job = lima_sched_timedout_job, .free_job = lima_sched_free_job, diff --git a/drivers/gpu/drm/lima/lima_sched.h b/drivers/gpu/drm/lima/lima_sched.h index ac70006b0e26..6a11764d87b3 100644 --- a/drivers/gpu/drm/lima/lima_sched.h +++ b/drivers/gpu/drm/lima/lima_sched.h @@ -23,9 +23,6 @@ struct lima_sched_task { struct lima_vm *vm; void *frame; - struct xarray deps; - unsigned long last_dep; - struct lima_bo **bos; int num_bos; From patchwork Thu Jul 8 17:37:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Daniel Vetter X-Patchwork-Id: 472127 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 25DE9C11F67 for ; Thu, 8 Jul 2021 17:38:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 05B2461921 for ; Thu, 8 Jul 2021 17:38:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229875AbhGHRlE (ORCPT ); Thu, 8 Jul 2021 13:41:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35412 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229815AbhGHRlC (ORCPT ); Thu, 8 Jul 2021 13:41:02 -0400 Received: from mail-wm1-x32b.google.com (mail-wm1-x32b.google.com [IPv6:2a00:1450:4864:20::32b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 99D01C061574 for ; Thu, 8 Jul 2021 10:38:20 -0700 (PDT) Received: by mail-wm1-x32b.google.com with SMTP id g10so4614441wmh.2 for ; Thu, 08 Jul 2021 10:38:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ffwll.ch; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=hjNW4u1I0U0Q7toaCUK63SQ1n8cRfb0yAV1MC4gWi04=; b=RdVtO26kKykbuARG+kcM5sPhsNaIfc6/4+aL71+B0Iudfror1HlkQkTm9U3ziJQ02S mdYfxXt4zcJCBHe/R69CBAczNjFW9qG+cRooM1yaROf+I+pr3TJvSGJQSgMhtOo9Pu1X dKSZOU36/TenDGQGVSciBtUFxUoFfg+J0JnQA= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=hjNW4u1I0U0Q7toaCUK63SQ1n8cRfb0yAV1MC4gWi04=; b=dF/lzC1TLDM3kJM3vCZolEG4CghuU/r4tzqtvuFNrgOnunChGDFkY57JU87RAiHLQh dU17ZBV++bcfGDkqGRggUplMStAum4Krqs09vP5RvSoDXVQ6u+KYdMuSfWGEWsAZXRhP rFbSgKx3p+6J/ZMiDc6fRUpRcwsyGKPRsZ7hRjNIT30UP29sGEdBhXpkWro96+1WQ4jY Ok7ulJajt2kU4XaWAvxU63k4k3qmdQyiLXgE6h0kwx50Npcsr1a5KSpPDX/Z0WqXVG/h ZsNllSdXrOfYTWIas7JWp/zNn7B3vHINiiZolYJnASshQXk1oqPWie7jhQRBtNzUqq/F nRtA== X-Gm-Message-State: AOAM532cq3+Rxt9og6+MOWJ+Ns9Z7rtQNVvBVF4crA3AE7esVZ4TZx0C n+EKAdGvcffdtv7g5BJ0htquBQ== X-Google-Smtp-Source: ABdhPJzeSdQ8WqEeuo+D61tem+xEHOYWy9du2bmluQAeHGxaTOG7LUFORkddeLlac2VtuV5cIKUW2g== X-Received: by 2002:a7b:c4ca:: with SMTP id g10mr6647151wmk.148.1625765899284; Thu, 08 Jul 2021 
10:38:19 -0700 (PDT) Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa]) by smtp.gmail.com with ESMTPSA id l4sm2521853wme.26.2021.07.08.10.38.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 08 Jul 2021 10:38:18 -0700 (PDT) From: Daniel Vetter To: DRI Development Cc: Intel Graphics Development , Daniel Vetter , Daniel Vetter , Lucas Stach , Russell King , Christian Gmeiner , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , etnaviv@lists.freedesktop.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org Subject: [PATCH v3 11/20] drm/etnaviv: Use scheduler dependency handling Date: Thu, 8 Jul 2021 19:37:45 +0200 Message-Id: <20210708173754.3877540-12-daniel.vetter@ffwll.ch> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20210708173754.3877540-1-daniel.vetter@ffwll.ch> References: <20210708173754.3877540-1-daniel.vetter@ffwll.ch> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org We need to pull the drm_sched_job_init much earlier, but that's very minor surgery. v2: Actually fix up cleanup paths by calling drm_sched_job_init, which I wanted to to in the previous round (and did, for all other drivers). Spotted by Lucas. Signed-off-by: Daniel Vetter Cc: Lucas Stach Cc: Russell King Cc: Christian Gmeiner Cc: Sumit Semwal Cc: "Christian König" Cc: etnaviv@lists.freedesktop.org Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org --- drivers/gpu/drm/etnaviv/etnaviv_gem.h | 5 +- drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c | 58 +++++++++--------- drivers/gpu/drm/etnaviv/etnaviv_sched.c | 63 +------------------- drivers/gpu/drm/etnaviv/etnaviv_sched.h | 3 +- 4 files changed, 35 insertions(+), 94 deletions(-) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.h b/drivers/gpu/drm/etnaviv/etnaviv_gem.h index 98e60df882b6..63688e6e4580 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.h @@ -80,9 +80,6 @@ struct etnaviv_gem_submit_bo { u64 va; struct etnaviv_gem_object *obj; struct etnaviv_vram_mapping *mapping; - struct dma_fence *excl; - unsigned int nr_shared; - struct dma_fence **shared; }; /* Created per submit-ioctl, to track bo's and cmdstream bufs, etc, @@ -95,7 +92,7 @@ struct etnaviv_gem_submit { struct etnaviv_file_private *ctx; struct etnaviv_gpu *gpu; struct etnaviv_iommu_context *mmu_context, *prev_mmu_context; - struct dma_fence *out_fence, *in_fence; + struct dma_fence *out_fence; int out_fence_id; struct list_head node; /* GPU active submit list */ struct etnaviv_cmdbuf cmdbuf; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c index 4dd7d9d541c0..5b97ce1299ad 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c @@ -188,16 +188,10 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit) if (submit->flags & ETNA_SUBMIT_NO_IMPLICIT) continue; - if (bo->flags & ETNA_SUBMIT_BO_WRITE) { - ret = dma_resv_get_fences(robj, &bo->excl, - &bo->nr_shared, - &bo->shared); - if (ret) - return ret; - } else { - bo->excl = dma_resv_get_excl_unlocked(robj); - } - + ret = drm_sched_job_await_implicit(&submit->sched_job, &bo->obj->base, + bo->flags & ETNA_SUBMIT_BO_WRITE); + if (ret) + return ret; } return ret; @@ -403,8 +397,6 @@ static void submit_cleanup(struct kref *kref) wake_up_all(&submit->gpu->fence_event); - if (submit->in_fence) - dma_fence_put(submit->in_fence); if (submit->out_fence) { /* first remove from IDR, so fence can not be 
found anymore */ mutex_lock(&submit->gpu->fence_lock); @@ -529,7 +521,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, ret = etnaviv_cmdbuf_init(priv->cmdbuf_suballoc, &submit->cmdbuf, ALIGN(args->stream_size, 8) + 8); if (ret) - goto err_submit_objects; + goto err_submit_put; submit->ctx = file->driver_priv; etnaviv_iommu_context_get(submit->ctx->mmu); @@ -537,51 +529,61 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, submit->exec_state = args->exec_state; submit->flags = args->flags; + ret = drm_sched_job_init(&submit->sched_job, + &ctx->sched_entity[args->pipe], + submit->ctx); + if (ret) + goto err_submit_put; + ret = submit_lookup_objects(submit, file, bos, args->nr_bos); if (ret) - goto err_submit_objects; + goto err_submit_job; if ((priv->mmu_global->version != ETNAVIV_IOMMU_V2) && !etnaviv_cmd_validate_one(gpu, stream, args->stream_size / 4, relocs, args->nr_relocs)) { ret = -EINVAL; - goto err_submit_objects; + goto err_submit_job; } if (args->flags & ETNA_SUBMIT_FENCE_FD_IN) { - submit->in_fence = sync_file_get_fence(args->fence_fd); - if (!submit->in_fence) { + struct dma_fence *in_fence = sync_file_get_fence(args->fence_fd); + if (!in_fence) { ret = -EINVAL; - goto err_submit_objects; + goto err_submit_job; } + + ret = drm_sched_job_await_fence(&submit->sched_job, in_fence); + if (ret) + goto err_submit_job; } ret = submit_pin_objects(submit); if (ret) - goto err_submit_objects; + goto err_submit_job; ret = submit_reloc(submit, stream, args->stream_size / 4, relocs, args->nr_relocs); if (ret) - goto err_submit_objects; + goto err_submit_job; ret = submit_perfmon_validate(submit, args->exec_state, pmrs); if (ret) - goto err_submit_objects; + goto err_submit_job; memcpy(submit->cmdbuf.vaddr, stream, args->stream_size); ret = submit_lock_objects(submit, &ticket); if (ret) - goto err_submit_objects; + goto err_submit_job; ret = submit_fence_sync(submit); if (ret) - goto err_submit_objects; + goto err_submit_job; - ret = etnaviv_sched_push_job(&ctx->sched_entity[args->pipe], submit); + ret = etnaviv_sched_push_job(submit); if (ret) - goto err_submit_objects; + goto err_submit_job; submit_attach_object_fences(submit); @@ -595,7 +597,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, sync_file = sync_file_create(submit->out_fence); if (!sync_file) { ret = -ENOMEM; - goto err_submit_objects; + goto err_submit_job; } fd_install(out_fence_fd, sync_file->file); } @@ -603,7 +605,9 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, args->fence_fd = out_fence_fd; args->fence = submit->out_fence_id; -err_submit_objects: +err_submit_job: + drm_sched_job_cleanup(&submit->sched_job); +err_submit_put: etnaviv_submit_put(submit); err_submit_ww_acquire: diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c index 180bb633d5c5..2bbbd6ccc95e 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c @@ -17,58 +17,6 @@ module_param_named(job_hang_limit, etnaviv_job_hang_limit, int , 0444); static int etnaviv_hw_jobs_limit = 4; module_param_named(hw_job_limit, etnaviv_hw_jobs_limit, int , 0444); -static struct dma_fence * -etnaviv_sched_dependency(struct drm_sched_job *sched_job, - struct drm_sched_entity *entity) -{ - struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job); - struct dma_fence *fence; - int i; - - if (unlikely(submit->in_fence)) { - fence = submit->in_fence; - submit->in_fence = NULL; - - if (!dma_fence_is_signaled(fence)) 
- return fence; - - dma_fence_put(fence); - } - - for (i = 0; i < submit->nr_bos; i++) { - struct etnaviv_gem_submit_bo *bo = &submit->bos[i]; - int j; - - if (bo->excl) { - fence = bo->excl; - bo->excl = NULL; - - if (!dma_fence_is_signaled(fence)) - return fence; - - dma_fence_put(fence); - } - - for (j = 0; j < bo->nr_shared; j++) { - if (!bo->shared[j]) - continue; - - fence = bo->shared[j]; - bo->shared[j] = NULL; - - if (!dma_fence_is_signaled(fence)) - return fence; - - dma_fence_put(fence); - } - kfree(bo->shared); - bo->nr_shared = 0; - bo->shared = NULL; - } - - return NULL; -} - static struct dma_fence *etnaviv_sched_run_job(struct drm_sched_job *sched_job) { struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job); @@ -140,29 +88,22 @@ static void etnaviv_sched_free_job(struct drm_sched_job *sched_job) } static const struct drm_sched_backend_ops etnaviv_sched_ops = { - .dependency = etnaviv_sched_dependency, .run_job = etnaviv_sched_run_job, .timedout_job = etnaviv_sched_timedout_job, .free_job = etnaviv_sched_free_job, }; -int etnaviv_sched_push_job(struct drm_sched_entity *sched_entity, - struct etnaviv_gem_submit *submit) +int etnaviv_sched_push_job(struct etnaviv_gem_submit *submit) { int ret = 0; /* * Hold the fence lock across the whole operation to avoid jobs being * pushed out of order with regard to their sched fence seqnos as - * allocated in drm_sched_job_init. + * allocated in drm_sched_job_arm. */ mutex_lock(&submit->gpu->fence_lock); - ret = drm_sched_job_init(&submit->sched_job, sched_entity, - submit->ctx); - if (ret) - goto out_unlock; - drm_sched_job_arm(&submit->sched_job); submit->out_fence = dma_fence_get(&submit->sched_job.s_fence->finished); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.h b/drivers/gpu/drm/etnaviv/etnaviv_sched.h index c0a6796e22c9..baebfa069afc 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.h @@ -18,7 +18,6 @@ struct etnaviv_gem_submit *to_etnaviv_submit(struct drm_sched_job *sched_job) int etnaviv_sched_init(struct etnaviv_gpu *gpu); void etnaviv_sched_fini(struct etnaviv_gpu *gpu); -int etnaviv_sched_push_job(struct drm_sched_entity *sched_entity, - struct etnaviv_gem_submit *submit); +int etnaviv_sched_push_job(struct etnaviv_gem_submit *submit); #endif /* __ETNAVIV_SCHED_H__ */ From patchwork Thu Jul 8 17:37:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Daniel Vetter X-Patchwork-Id: 471579 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B1A60C11F66 for ; Thu, 8 Jul 2021 17:38:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 98BFB61921 for ; Thu, 8 Jul 2021 17:38:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229949AbhGHRlF (ORCPT ); Thu, 8 Jul 2021 13:41:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35420 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with 
ESMTP id S229815AbhGHRlE (ORCPT ); Thu, 8 Jul 2021 13:41:04 -0400
From: Daniel Vetter
To: DRI Development
Cc: Intel Graphics Development, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Sumit Semwal, Christian König, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH v3 12/20] drm/gem: Delete gem array fencing helpers
Date: Thu, 8 Jul 2021 19:37:46 +0200
Message-Id: <20210708173754.3877540-13-daniel.vetter@ffwll.ch>
In-Reply-To: <20210708173754.3877540-1-daniel.vetter@ffwll.ch>
References: <20210708173754.3877540-1-daniel.vetter@ffwll.ch>
X-Mailing-List: linux-media@vger.kernel.org

Integrated into the scheduler now and all users converted over.
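[Editor's note: a minimal sketch, not part of this patch, of how a converted driver gathers its dependencies after this series. struct foo_submit and foo_submit_add_deps() are made-up names; drm_sched_job_await_fence() and drm_sched_job_await_implicit() are the scheduler helpers introduced earlier in the series.]

#include <drm/drm_gem.h>
#include <drm/gpu_scheduler.h>
#include <linux/dma-fence.h>

/*
 * Illustrative only: "foo_submit" stands in for a driver's submit object;
 * it just needs to embed the drm_sched_job set up with drm_sched_job_init().
 */
struct foo_submit {
	struct drm_sched_job sched_job;
};

static int foo_submit_add_deps(struct foo_submit *submit,
			       struct drm_gem_object *obj, bool write,
			       struct dma_fence *in_fence)
{
	int ret;

	/* Implicit sync: pull the relevant fences out of obj->resv. */
	ret = drm_sched_job_await_implicit(&submit->sched_job, obj, write);
	if (ret)
		return ret;

	/*
	 * Explicit sync: an optional in-fence handed in by userspace
	 * (reference handling elided for brevity).
	 */
	if (in_fence)
		ret = drm_sched_job_await_fence(&submit->sched_job, in_fence);

	return ret;
}

Compared to the deleted helpers there is no per-driver xarray to allocate, walk and destroy, and no .dependency callback: the scheduler owns the dependency list for the whole lifetime of the job.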
Signed-off-by: Daniel Vetter Cc: Maarten Lankhorst Cc: Maxime Ripard Cc: Thomas Zimmermann Cc: David Airlie Cc: Daniel Vetter Cc: Sumit Semwal Cc: "Christian König" Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org --- drivers/gpu/drm/drm_gem.c | 96 --------------------------------------- include/drm/drm_gem.h | 5 -- 2 files changed, 101 deletions(-) diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index 68deb1de8235..24d49a2636e0 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -1294,99 +1294,3 @@ drm_gem_unlock_reservations(struct drm_gem_object **objs, int count, ww_acquire_fini(acquire_ctx); } EXPORT_SYMBOL(drm_gem_unlock_reservations); - -/** - * drm_gem_fence_array_add - Adds the fence to an array of fences to be - * waited on, deduplicating fences from the same context. - * - * @fence_array: array of dma_fence * for the job to block on. - * @fence: the dma_fence to add to the list of dependencies. - * - * This functions consumes the reference for @fence both on success and error - * cases. - * - * Returns: - * 0 on success, or an error on failing to expand the array. - */ -int drm_gem_fence_array_add(struct xarray *fence_array, - struct dma_fence *fence) -{ - struct dma_fence *entry; - unsigned long index; - u32 id = 0; - int ret; - - if (!fence) - return 0; - - /* Deduplicate if we already depend on a fence from the same context. - * This lets the size of the array of deps scale with the number of - * engines involved, rather than the number of BOs. - */ - xa_for_each(fence_array, index, entry) { - if (entry->context != fence->context) - continue; - - if (dma_fence_is_later(fence, entry)) { - dma_fence_put(entry); - xa_store(fence_array, index, fence, GFP_KERNEL); - } else { - dma_fence_put(fence); - } - return 0; - } - - ret = xa_alloc(fence_array, &id, fence, xa_limit_32b, GFP_KERNEL); - if (ret != 0) - dma_fence_put(fence); - - return ret; -} -EXPORT_SYMBOL(drm_gem_fence_array_add); - -/** - * drm_gem_fence_array_add_implicit - Adds the implicit dependencies tracked - * in the GEM object's reservation object to an array of dma_fences for use in - * scheduling a rendering job. - * - * This should be called after drm_gem_lock_reservations() on your array of - * GEM objects used in the job but before updating the reservations with your - * own fences. - * - * @fence_array: array of dma_fence * for the job to block on. - * @obj: the gem object to add new dependencies from. - * @write: whether the job might write the object (so we need to depend on - * shared fences in the reservation object). 
- */ -int drm_gem_fence_array_add_implicit(struct xarray *fence_array, - struct drm_gem_object *obj, - bool write) -{ - int ret; - struct dma_fence **fences; - unsigned int i, fence_count; - - if (!write) { - struct dma_fence *fence = - dma_resv_get_excl_unlocked(obj->resv); - - return drm_gem_fence_array_add(fence_array, fence); - } - - ret = dma_resv_get_fences(obj->resv, NULL, - &fence_count, &fences); - if (ret || !fence_count) - return ret; - - for (i = 0; i < fence_count; i++) { - ret = drm_gem_fence_array_add(fence_array, fences[i]); - if (ret) - break; - } - - for (; i < fence_count; i++) - dma_fence_put(fences[i]); - kfree(fences); - return ret; -} -EXPORT_SYMBOL(drm_gem_fence_array_add_implicit); diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index 240049566592..6d5e33b89074 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -409,11 +409,6 @@ int drm_gem_lock_reservations(struct drm_gem_object **objs, int count, struct ww_acquire_ctx *acquire_ctx); void drm_gem_unlock_reservations(struct drm_gem_object **objs, int count, struct ww_acquire_ctx *acquire_ctx); -int drm_gem_fence_array_add(struct xarray *fence_array, - struct dma_fence *fence); -int drm_gem_fence_array_add_implicit(struct xarray *fence_array, - struct drm_gem_object *obj, - bool write); int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, u32 handle, u64 *offset); From patchwork Thu Jul 8 17:37:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Daniel Vetter X-Patchwork-Id: 471578 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 06C0BC07E96 for ; Thu, 8 Jul 2021 17:38:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E43E361879 for ; Thu, 8 Jul 2021 17:38:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230106AbhGHRlO (ORCPT ); Thu, 8 Jul 2021 13:41:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35454 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230038AbhGHRlM (ORCPT ); Thu, 8 Jul 2021 13:41:12 -0400 Received: from mail-wm1-x334.google.com (mail-wm1-x334.google.com [IPv6:2a00:1450:4864:20::334]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D13DFC061574 for ; Thu, 8 Jul 2021 10:38:29 -0700 (PDT) Received: by mail-wm1-x334.google.com with SMTP id j16-20020a05600c1c10b0290204b096b0caso4548534wms.1 for ; Thu, 08 Jul 2021 10:38:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ffwll.ch; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=2ZhKyW2iXAQnZB65HuPkrPJPq0kH1huhhVBcvQ5De2U=; b=LHatzWvSbADBpiEA60FUUoMCtMvyXCOO28RppY4XXcC5auDMB/2eJZraPfm2U9lb+V mv9LlVNKbTezQoIpN9BEemLw+d8ejGze2bBr4t+4XJyJk9p4RXklJsloIz3qX9iVe7PG uMaK+LE/T5tR3Y2Kwconjw0AKsvkTRXrj3o8Q= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
From: Daniel Vetter
To: DRI Development
Cc: Intel Graphics Development, Daniel Vetter, Christian König, Jason Ekstrand, Matthew Auld, Sumit Semwal, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH v3 20/20] dma-resv: Give the docs a do-over
Date: Thu, 8 Jul 2021 19:37:54 +0200
Message-Id: <20210708173754.3877540-21-daniel.vetter@ffwll.ch>
In-Reply-To: <20210708173754.3877540-1-daniel.vetter@ffwll.ch>
References: <20210708173754.3877540-1-daniel.vetter@ffwll.ch>
X-Mailing-List: linux-media@vger.kernel.org

Specifically document the new/clarified rules around how the shared fences do not have any ordering requirements against the exclusive fence. But also document all the things a bit better: given how central struct dma_resv is to dynamic buffer management, the docs have been very inadequate so far.

- Lots more links to other pieces of the puzzle. Unfortunately ttm_buffer_object has no docs, so no links :-(

- Explain/complain a bit about dma_resv_locking_ctx(). I still don't like that one, but fixing the ttm call chains is going to be horrible. Plus we want to plug in real slowpath locking when we do that anyway.

- Main part of the patch is some actual docs for struct dma_resv.

Overall I think we still have a lot of bad naming in this area (e.g. dma_resv.fence is singular, but contains the multiple shared fences), but I think that's more indicative of how the semantics and rules are just not great. Another thing that's really awkward is how chaining exclusive fences right now means direct dma_resv.exclusive_fence pointer access with an rcu_assign_pointer. Not so great either.
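[Editor's note: the -EDEADLK/slowpath dance documented below is easier to see as code. A minimal sketch, not part of this patch, of taking the reservation locks of two buffers; foo_lock_pair() is a made-up name, while dma_resv_lock(), dma_resv_lock_slow(), dma_resv_unlock(), reservation_ww_class and the ww_acquire_* calls are the interfaces the kerneldoc refers to.]

#include <linux/dma-resv.h>
#include <linux/ww_mutex.h>

/*
 * Illustrative only: lock two reservation objects in a deadlock-safe way.
 * On -EDEADLK every lock held in @ctx is dropped, the contended object is
 * then taken with the sleeping "slow" variant, and the remaining lock is
 * retried.
 */
static int foo_lock_pair(struct dma_resv *a, struct dma_resv *b,
			 struct ww_acquire_ctx *ctx)
{
	struct dma_resv *tmp;
	int ret;

	ww_acquire_init(ctx, &reservation_ww_class);

	ret = dma_resv_lock(a, ctx);
	if (ret)
		goto err_fini;

	while ((ret = dma_resv_lock(b, ctx)) == -EDEADLK) {
		dma_resv_unlock(a);
		dma_resv_lock_slow(b, ctx);
		/* b is now held, a still needs to be (re)acquired */
		tmp = a;
		a = b;
		b = tmp;
	}
	if (ret) {
		dma_resv_unlock(a);
		goto err_fini;
	}

	ww_acquire_done(ctx);
	return 0;

err_fini:
	ww_acquire_fini(ctx);
	return ret;
}

Real code usually loops over an array of buffers (see drm_gem_lock_reservations()), but the backoff structure is the same: on -EDEADLK drop everything held in the context, sleep on the contended object with the _slow variant, then retry the rest.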
v2: - Fix a pile of typos (Matt, Jason) - Hammer it in that breaking the rules leads to use-after-free issues around dma-buf sharing (Christian) Reviewed-by: Christian König Cc: Jason Ekstrand Cc: Matthew Auld Reviewed-by: Matthew Auld Signed-off-by: Daniel Vetter Cc: Sumit Semwal Cc: "Christian König" Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org --- drivers/dma-buf/dma-resv.c | 24 ++++++--- include/linux/dma-buf.h | 7 +++ include/linux/dma-resv.h | 104 +++++++++++++++++++++++++++++++++++-- 3 files changed, 124 insertions(+), 11 deletions(-) diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index f26c71747d43..a3acb6479ddf 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -48,6 +48,8 @@ * write operations) or N shared fences (read operations). The RCU * mechanism is used to protect read access to fences from locked * write-side updates. + * + * See struct dma_resv for more details. */ DEFINE_WD_CLASS(reservation_ww_class); @@ -137,7 +139,11 @@ EXPORT_SYMBOL(dma_resv_fini); * @num_fences: number of fences we want to add * * Should be called before dma_resv_add_shared_fence(). Must - * be called with obj->lock held. + * be called with @obj locked through dma_resv_lock(). + * + * Note that the preallocated slots need to be re-reserved if @obj is unlocked + * at any time before calling dma_resv_add_shared_fence(). This is validated + * when CONFIG_DEBUG_MUTEXES is enabled. * * RETURNS * Zero for success, or -errno @@ -234,8 +240,10 @@ EXPORT_SYMBOL(dma_resv_reset_shared_max); * @obj: the reservation object * @fence: the shared fence to add * - * Add a fence to a shared slot, obj->lock must be held, and + * Add a fence to a shared slot, @obj must be locked with dma_resv_lock(), and * dma_resv_reserve_shared() has been called. + * + * See also &dma_resv.fence for a discussion of the semantics. */ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence) { @@ -278,9 +286,11 @@ EXPORT_SYMBOL(dma_resv_add_shared_fence); /** * dma_resv_add_excl_fence - Add an exclusive fence. * @obj: the reservation object - * @fence: the shared fence to add + * @fence: the exclusive fence to add * - * Add a fence to the exclusive slot. The obj->lock must be held. + * Add a fence to the exclusive slot. @obj must be locked with dma_resv_lock(). + * Note that this function replaces all fences attached to @obj, see also + * &dma_resv.fence_excl for a discussion of the semantics. */ void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence) { @@ -609,9 +619,11 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence) * fence * * Callers are not required to hold specific locks, but maybe hold - * dma_resv_lock() already + * dma_resv_lock() already. + * * RETURNS - * true if all fences signaled, else false + * + * True if all fences signaled, else false. */ bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all) { diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 2b814fde0d11..8cc0c55877a6 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -420,6 +420,13 @@ struct dma_buf { * - Dynamic importers should set fences for any access that they can't * disable immediately from their &dma_buf_attach_ops.move_notify * callback. + * + * IMPORTANT: + * + * All drivers must obey the struct dma_resv rules, specifically the + * rules for updating fences, see &dma_resv.fence_excl and + * &dma_resv.fence. 
If these dependency rules are broken access tracking + * can be lost resulting in use after free issues. */ struct dma_resv *resv; diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index e1ca2080a1ff..9100dd3dc21f 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -62,16 +62,90 @@ struct dma_resv_list { /** * struct dma_resv - a reservation object manages fences for a buffer - * @lock: update side lock - * @seq: sequence count for managing RCU read-side synchronization - * @fence_excl: the exclusive fence, if there is one currently - * @fence: list of current shared fences + * + * There are multiple uses for this, with sometimes slightly different rules in + * how the fence slots are used. + * + * One use is to synchronize cross-driver access to a struct dma_buf, either for + * dynamic buffer management or just to handle implicit synchronization between + * different users of the buffer in userspace. See &dma_buf.resv for a more + * in-depth discussion. + * + * The other major use is to manage access and locking within a driver in a + * buffer based memory manager. struct ttm_buffer_object is the canonical + * example here, since this is where reservation objects originated from. But + * use in drivers is spreading and some drivers also manage struct + * drm_gem_object with the same scheme. */ struct dma_resv { + /** + * @lock: + * + * Update side lock. Don't use directly, instead use the wrapper + * functions like dma_resv_lock() and dma_resv_unlock(). + * + * Drivers which use the reservation object to manage memory dynamically + * also use this lock to protect buffer object state like placement, + * allocation policies or throughout command submission. + */ struct ww_mutex lock; + + /** + * @seq: + * + * Sequence count for managing RCU read-side synchronization, allows + * read-only access to @fence_excl and @fence while ensuring we take a + * consistent snapshot. + */ seqcount_ww_mutex_t seq; + /** + * @fence_excl: + * + * The exclusive fence, if there is one currently. + * + * There are two ways to update this fence: + * + * - First by calling dma_resv_add_excl_fence(), which replaces all + * fences attached to the reservation object. To guarantee that no + * fences are lost, this new fence must signal only after all previous + * fences, both shared and exclusive, have signalled. In some cases it + * is convenient to achieve that by attaching a struct dma_fence_array + * with all the new and old fences. + * + * - Alternatively the fence can be set directly, which leaves the + * shared fences unchanged. To guarantee that no fences are lost, this + * new fence must signal only after the previous exclusive fence has + * signalled. Since the shared fences are staying intact, it is not + * necessary to maintain any ordering against those. If semantically + * only a new access is added without actually treating the previous + * one as a dependency the exclusive fences can be strung together + * using struct dma_fence_chain. + * + * Note that actual semantics of what an exclusive or shared fence mean + * is defined by the user, for reservation objects shared across drivers + * see &dma_buf.resv. + */ struct dma_fence __rcu *fence_excl; + + /** + * @fence: + * + * List of current shared fences. + * + * There are no ordering constraints of shared fences against the + * exclusive fence slot. If a waiter needs to wait for all access, it + * has to wait for both sets of fences to signal. + * + * A new fence is added by calling dma_resv_add_shared_fence(). 
Since + * this often needs to be done past the point of no return in command + * submission it cannot fail, and therefore sufficient slots need to be + * reserved by calling dma_resv_reserve_shared(). + * + * Note that actual semantics of what an exclusive or shared fence mean + * is defined by the user, for reservation objects shared across drivers + * see &dma_buf.resv. + */ struct dma_resv_list __rcu *fence; }; @@ -98,6 +172,13 @@ static inline void dma_resv_reset_shared_max(struct dma_resv *obj) {} * undefined order, a #ww_acquire_ctx is passed to unwind if a cycle * is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation * object may be locked by itself by passing NULL as @ctx. + * + * When a die situation is indicated by returning -EDEADLK all locks held by + * @ctx must be unlocked and then dma_resv_lock_slow() called on @obj. + * + * Unlocked by calling dma_resv_unlock(). + * + * See also dma_resv_lock_interruptible() for the interruptible variant. */ static inline int dma_resv_lock(struct dma_resv *obj, struct ww_acquire_ctx *ctx) @@ -119,6 +200,12 @@ static inline int dma_resv_lock(struct dma_resv *obj, * undefined order, a #ww_acquire_ctx is passed to unwind if a cycle * is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation * object may be locked by itself by passing NULL as @ctx. + * + * When a die situation is indicated by returning -EDEADLK all locks held by + * @ctx must be unlocked and then dma_resv_lock_slow_interruptible() called on + * @obj. + * + * Unlocked by calling dma_resv_unlock(). */ static inline int dma_resv_lock_interruptible(struct dma_resv *obj, struct ww_acquire_ctx *ctx) @@ -134,6 +221,8 @@ static inline int dma_resv_lock_interruptible(struct dma_resv *obj, * Acquires the reservation object after a die case. This function * will sleep until the lock becomes available. See dma_resv_lock() as * well. + * + * See also dma_resv_lock_slow_interruptible() for the interruptible variant. */ static inline void dma_resv_lock_slow(struct dma_resv *obj, struct ww_acquire_ctx *ctx) @@ -167,7 +256,7 @@ static inline int dma_resv_lock_slow_interruptible(struct dma_resv *obj, * if they overlap with a writer. * * Also note that since no context is provided, no deadlock protection is - * possible. + * possible, which is also not needed for a trylock. * * Returns true if the lock was acquired, false otherwise. */ @@ -193,6 +282,11 @@ static inline bool dma_resv_is_locked(struct dma_resv *obj) * * Returns the context used to lock a reservation object or NULL if no context * was used or the object is not locked at all. + * + * WARNING: This interface is pretty horrible, but TTM needs it because it + * doesn't pass the struct ww_acquire_ctx around in some very long callchains. + * Everyone else just uses it to check whether they're holding a reservation or + * not. */ static inline struct ww_acquire_ctx *dma_resv_locking_ctx(struct dma_resv *obj) {