From patchwork Thu Jan  9 13:37:10 2025
X-Patchwork-Submitter: Philipp Stanner
X-Patchwork-Id: 856097
From: Philipp Stanner
To: Luben Tuikov, Matthew Brost, Danilo Krummrich, Philipp Stanner,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
    Simona Vetter, Sumit Semwal, Christian König
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 1/3] drm/sched: Document run_job() refcount hazard
Date: Thu, 9 Jan 2025 14:37:10 +0100
Message-ID: <20250109133710.39404-4-phasta@kernel.org>
In-Reply-To: <20250109133710.39404-2-phasta@kernel.org>
References: <20250109133710.39404-2-phasta@kernel.org>

drm_sched_backend_ops.run_job() returns a dma_fence for the scheduler.
That fence is signalled by the driver once the hardware has completed
the associated job. The scheduler does not increment the reference
count on that fence, but implicitly expects to inherit this fence from
run_job(). This is relatively subtle and prone to misunderstandings:
to keep a reference for itself, a driver needs to call dma_fence_get()
in addition to dma_fence_init() in that callback.
Matters are further complicated by the fact that the scheduler then
decrements that refcount in drm_sched_run_job_work(), relying on the
new reference it created in drm_sched_fence_scheduled(). It still uses
its pointer to the fence after calling dma_fence_put(), however. That
is safe because of the aforementioned new reference, but it still
violates the refcounting rules.

Improve the explanatory comment for that decrement.

Move the call to dma_fence_put() behind the last usage of the fence.

Document the necessity to increment the reference count in
drm_sched_backend_ops.run_job().

Signed-off-by: Philipp Stanner
---
 drivers/gpu/drm/scheduler/sched_main.c | 10 +++++++---
 include/drm/gpu_scheduler.h            | 19 +++++++++++++++----
 2 files changed, 22 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 57da84908752..5f46c01eb01e 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1218,15 +1218,19 @@ static void drm_sched_run_job_work(struct work_struct *w)
 	drm_sched_fence_scheduled(s_fence, fence);
 
 	if (!IS_ERR_OR_NULL(fence)) {
-		/* Drop for original kref_init of the fence */
-		dma_fence_put(fence);
-
 		r = dma_fence_add_callback(fence, &sched_job->cb,
 					   drm_sched_job_done_cb);
 		if (r == -ENOENT)
 			drm_sched_job_done(sched_job, fence->error);
 		else if (r)
 			DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", r);
+
+		/*
+		 * s_fence took a new reference to fence in the call to
+		 * drm_sched_fence_scheduled() above. The reference passed by
+		 * run_job() is no longer needed. Drop it.
+		 */
+		dma_fence_put(fence);
 	} else {
 		drm_sched_job_done(sched_job, IS_ERR(fence) ?
 				   PTR_ERR(fence) : 0);

diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 95e17504e46a..d5cd2a78f27c 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -420,10 +420,21 @@ struct drm_sched_backend_ops {
 					 struct drm_sched_entity *s_entity);
 
 	/**
-	 * @run_job: Called to execute the job once all of the dependencies
-	 * have been resolved. This may be called multiple times, if
-	 * timedout_job() has happened and drm_sched_job_recovery()
-	 * decides to try it again.
+	 * @run_job: Called to execute the job once all of the dependencies
+	 * have been resolved. This may be called multiple times, if
+	 * timedout_job() has happened and drm_sched_job_recovery() decides to
+	 * try it again.
+	 *
+	 * @sched_job: the job to run
+	 *
+	 * Returns: dma_fence the driver must signal once the hardware has
+	 * completed the job ("hardware fence").
+	 *
+	 * Note that the scheduler expects to 'inherit' its own reference to
+	 * this fence from the callback. It does not invoke an extra
+	 * dma_fence_get() on it. Consequently, this callback must take a
+	 * reference for the scheduler, and additional ones for the driver's
+	 * respective needs.
 	 */
 	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
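[Editor's illustration, not part of the patch: a minimal driver-side
sketch of the rule the new docstring states. All my_* identifiers,
to_my_job() and the struct my_job fields are invented for this example;
only dma_fence_init() and dma_fence_get() reflect the real dma_fence
API.]

/*
 * Hypothetical drm_sched_backend_ops.run_job() implementation showing
 * the refcounting rule. Sketch only; my_* names are invented.
 */
static struct dma_fence *my_driver_run_job(struct drm_sched_job *sched_job)
{
	struct my_job *job = to_my_job(sched_job);

	/* dma_fence_init() starts the fence with a refcount of 1. */
	dma_fence_init(&job->hw_fence, &my_fence_ops, &job->fence_lock,
		       job->fence_context, job->fence_seqno);

	/*
	 * The scheduler inherits the reference created above. A driver
	 * that keeps its own pointer to the fence beyond this callback
	 * must therefore take an additional reference now.
	 */
	dma_fence_get(&job->hw_fence);

	my_driver_submit_to_hw(job);	/* driver-specific, invented here */

	/* This reference is now owned by the scheduler. */
	return &job->hw_fence;
}

Forgetting the dma_fence_get() here is exactly the hazard the commit
message describes: the scheduler's dma_fence_put() in
drm_sched_run_job_work() could then drop the last reference while the
driver still holds a pointer to the fence.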
From patchwork Thu Jan  9 13:37:12 2025
X-Patchwork-Submitter: Philipp Stanner
X-Patchwork-Id: 856096
From: Philipp Stanner
To: Luben Tuikov, Matthew Brost, Danilo Krummrich, Philipp Stanner,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
    Simona Vetter, Sumit Semwal, Christian König
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org,
    Philipp Stanner
Subject: [PATCH 3/3] drm/sched: Update timedout_job()'s documentation
Date: Thu, 9 Jan 2025 14:37:12 +0100
Message-ID: <20250109133710.39404-6-phasta@kernel.org>
In-Reply-To: <20250109133710.39404-2-phasta@kernel.org>
References: <20250109133710.39404-2-phasta@kernel.org>

drm_sched_backend_ops.timedout_job()'s documentation is outdated: it
mentions the deprecated function drm_sched_resubmit_jobs().
Furthermore, it does not point out the important distinction between
hardware and firmware schedulers. Since firmware schedulers typically
use only one entity per scheduler, their timeout handling is
significantly simpler, because the entity the faulted job came from
can simply be killed without affecting innocent processes.

Update the documentation with that distinction and other details.

Reformat the docstring so it follows a unified style with the other
callbacks.

Signed-off-by: Philipp Stanner
---
 include/drm/gpu_scheduler.h | 83 +++++++++++++++++++++++--------------
 1 file changed, 52 insertions(+), 31 deletions(-)

diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index c4e65f9f7f22..380b8840c591 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -445,43 +445,64 @@ struct drm_sched_backend_ops {
 	 * @timedout_job: Called when a job has taken too long to execute,
 	 * to trigger GPU recovery.
 	 *
-	 * This method is called in a workqueue context.
+	 * @sched_job: The job that has timed out
 	 *
-	 * Drivers typically issue a reset to recover from GPU hangs, and this
-	 * procedure usually follows the following workflow:
+	 * Returns:
+	 * - DRM_GPU_SCHED_STAT_NOMINAL, on success, i.e., if the underlying
+	 *   driver has started or completed recovery.
+	 * - DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
+	 *   available, i.e., has been unplugged.
 	 *
-	 * 1. Stop the scheduler using drm_sched_stop(). This will park the
-	 *    scheduler thread and cancel the timeout work, guaranteeing that
-	 *    nothing is queued while we reset the hardware queue
-	 * 2. Try to gracefully stop non-faulty jobs (optional)
-	 * 3. Issue a GPU reset (driver-specific)
-	 * 4. Re-submit jobs using drm_sched_resubmit_jobs()
+	 * Drivers typically issue a reset to recover from GPU hangs.
+	 * This procedure looks very different depending on whether a firmware
+	 * or a hardware scheduler is being used.
+	 *
+	 * For a FIRMWARE SCHEDULER, each (pseudo-)ring has one scheduler, and
+	 * each scheduler (typically) has one entity. Hence, you typically
+	 * follow these steps:
+	 *
+	 * 1. Stop the scheduler using drm_sched_stop(). This will pause the
+	 *    scheduler workqueues and cancel the timeout work, guaranteeing
+	 *    that nothing is queued while we reset the hardware queue.
+	 * 2. Try to gracefully stop non-faulty jobs (optional).
+	 * TODO: RFC ^ Folks, should we remove this step? What does it even mean
+	 * precisely to "stop" those jobs? Is that even helpful to userspace in
+	 * any way?
+	 * 3. Issue a GPU reset (driver-specific).
+	 * 4. Kill the entity the faulted job stems from, and the associated
+	 *    scheduler.
 	 * 5. Restart the scheduler using drm_sched_start(). At that point, new
-	 *    jobs can be queued, and the scheduler thread is unblocked
+	 *    jobs can be queued, and the scheduler workqueues wake up again.
+	 *
+	 * For a HARDWARE SCHEDULER, each ring also has one scheduler, but each
+	 * scheduler typically has many attached entities. This implies that
+	 * you cannot tear down all entities associated with the affected
+	 * scheduler, because this would effectively also kill innocent
+	 * userspace processes which did not submit faulty jobs (for example).
+	 *
+	 * Consequently, the procedure to recover with a hardware scheduler
+	 * should look like this:
+	 *
+	 * 1. Stop all schedulers impacted by the reset using drm_sched_stop().
+	 * 2. Figure out to which entity the faulted job belongs.
+	 * 3. Try to gracefully stop non-faulty jobs (optional).
+	 * TODO: RFC ^ Folks, should we remove this step? What does it even mean
+	 * precisely to "stop" those jobs? Is that even helpful to userspace in
+	 * any way?
+	 * 4. Kill that entity.
+	 * 5. Issue a GPU reset on all faulty rings (driver-specific).
+	 * 6. Re-submit jobs on all impacted schedulers by re-submitting them
+	 *    to the entities which are still alive.
+	 * 7. Restart all schedulers that were stopped in step #1 using
+	 *    drm_sched_start().
 	 *
 	 * Note that some GPUs have distinct hardware queues but need to reset
 	 * the GPU globally, which requires extra synchronization between the
-	 * timeout handler of the different &drm_gpu_scheduler. One way to
-	 * achieve this synchronization is to create an ordered workqueue
-	 * (using alloc_ordered_workqueue()) at the driver level, and pass this
-	 * queue to drm_sched_init(), to guarantee that timeout handlers are
-	 * executed sequentially. The above workflow needs to be slightly
-	 * adjusted in that case:
-	 *
-	 * 1. Stop all schedulers impacted by the reset using drm_sched_stop()
-	 * 2. Try to gracefully stop non-faulty jobs on all queues impacted by
-	 *    the reset (optional)
-	 * 3. Issue a GPU reset on all faulty queues (driver-specific)
-	 * 4. Re-submit jobs on all schedulers impacted by the reset using
-	 *    drm_sched_resubmit_jobs()
-	 * 5. Restart all schedulers that were stopped in step #1 using
-	 *    drm_sched_start()
-	 *
-	 * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
-	 * and the underlying driver has started or completed recovery.
-	 *
-	 * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
-	 * available, i.e. has been unplugged.
+	 * timeout handlers of different schedulers. One way to achieve this
+	 * synchronization is to create an ordered workqueue (using
+	 * alloc_ordered_workqueue()) at the driver level, and pass this queue
+	 * as drm_sched_init()'s @timeout_wq parameter. This will guarantee
+	 * that timeout handlers are executed sequentially.
 	 */
 	enum drm_gpu_sched_stat (*timedout_job)(struct drm_sched_job *sched_job);
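[Editor's illustration, not part of the patch: a minimal, hypothetical
sketch of the firmware-scheduler recovery flow described above. my_ring,
to_my_ring(), my_ring_reset() and my_device_unplugged() are invented
names; the drm_sched_*() calls and DRM_GPU_SCHED_STAT_* values are the
real API, but exact signatures (e.g. drm_sched_start()'s second
parameter) vary between kernel versions.]

/* Hypothetical firmware-scheduler timeout handler; a sketch only. */
static enum drm_gpu_sched_stat
my_timedout_job(struct drm_sched_job *sched_job)
{
	struct my_ring *ring = to_my_ring(sched_job->sched);

	if (my_device_unplugged(ring))
		return DRM_GPU_SCHED_STAT_ENODEV;

	/* Step 1: stop the scheduler; nothing may be queued during reset. */
	drm_sched_stop(sched_job->sched, sched_job);

	/* Step 3: driver-specific reset of the affected (pseudo-)ring. */
	my_ring_reset(ring);

	/*
	 * Step 4: the firmware scheduler's single entity belongs to the
	 * faulted context, so killing it harms no innocent process.
	 */
	drm_sched_entity_fini(&ring->entity);

	/* Step 5: allow new jobs to be queued again. */
	drm_sched_start(sched_job->sched, 0);

	return DRM_GPU_SCHED_STAT_NOMINAL;
}

For the global-reset case in the note above, the driver would
additionally create one ordered workqueue at init time with
alloc_ordered_workqueue() and pass it as drm_sched_init()'s @timeout_wq,
so that the timeout handlers of all affected schedulers run strictly
one after another.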