
[1/7] dma-buf: add some more kerneldoc to dma_resv_add_shared_fence

Message ID 20210616082655.111001-2-christian.koenig@amd.com
State New
Series [1/7] dma-buf: add some more kerneldoc to dma_resv_add_shared_fence

Commit Message

Christian König June 16, 2021, 8:26 a.m. UTC
Explicitly document that code can't assume that shared fences
signal after the exclusive fence.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/dma-resv.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Daniel Vetter June 17, 2021, 7:26 p.m. UTC | #1
On Wed, Jun 16, 2021 at 10:26:49AM +0200, Christian König wrote:
> Explicitly document that code can't assume that shared fences
> signal after the exclusive fence.
> 
> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>  drivers/dma-buf/dma-resv.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> index f26c71747d43..4ab02b6c387a 100644
> --- a/drivers/dma-buf/dma-resv.c
> +++ b/drivers/dma-buf/dma-resv.c
> @@ -235,7 +235,10 @@ EXPORT_SYMBOL(dma_resv_reset_shared_max);
>   * @fence: the shared fence to add
>   *
>   * Add a fence to a shared slot, obj->lock must be held, and
> - * dma_resv_reserve_shared() has been called.
> + * dma_resv_reserve_shared() has been called. The shared fences can signal in
> + * any order and there is especially no guarantee that shared fences signal
> + * after the exclusive one. Code relying on any signaling order is broken and
> + * needs to be fixed.

So I agree these are reasonable semantics, but you need to audit drivers
first, because currently that's not how at least a bunch of them work.
There are way more drivers than the handful you've looked at.

Imo the gold standard here is what I've tried doing for the "how do we set
fences" side, which is going through all of them. The trouble is that this
is a bit nastier, because a) drivers play many more tricks here and b)
understanding each driver's scheduling logic is more work than understanding
how they set fences for a request/cs.

Unfortunately I haven't gotten around to doing that yet, because it means
a few days of uninterrupted time crawling through way too much code. I
haven't even found time to respin my old series to make the fence setting
more consistent (since I found a few more issues there than just the amdgpu
one that sparked it all).
-Daniel

>   */
>  void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
>  {
> -- 
> 2.25.1
>
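
The practical upshot of the wording being added here is that a caller who
needs a buffer to be completely idle has to wait for the exclusive fence
and every shared fence, rather than waiting only for the exclusive fence
and assuming the shared ones have already signaled. A minimal sketch of
that pattern, using the dma_resv wait helper as it was named at the time
of this series; the wrapper example_wait_buffer_idle() is made up for
illustration:

#include <linux/dma-resv.h>
#include <linux/sched.h>        /* MAX_SCHEDULE_TIMEOUT */

/*
 * Sketch: block until all work attached to @obj has finished.
 *
 * Shared fences may signal in any order, and in particular may signal
 * after the exclusive fence, so waiting only for the exclusive fence is
 * not enough.  wait_all = true makes the core wait for the exclusive
 * fence and all shared fences.
 */
static long example_wait_buffer_idle(struct dma_resv *obj)
{
        return dma_resv_wait_timeout_rcu(obj, true /* wait_all */,
                                         true /* intr */,
                                         MAX_SCHEDULE_TIMEOUT);
}

A driver that waited only for the exclusive fence here would be relying on
exactly the ordering the new comment rules out.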

Patch

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index f26c71747d43..4ab02b6c387a 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -235,7 +235,10 @@  EXPORT_SYMBOL(dma_resv_reset_shared_max);
  * @fence: the shared fence to add
  *
  * Add a fence to a shared slot, obj->lock must be held, and
- * dma_resv_reserve_shared() has been called.
+ * dma_resv_reserve_shared() has been called. The shared fences can signal in
+ * any order and there is especially no guarantee that shared fences signal
+ * after the exclusive one. Code relying on any signaling order is broken and
+ * needs to be fixed.
  */
 void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
 {
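
For reference, the calling convention the kerneldoc spells out (hold
obj->lock, make sure dma_resv_reserve_shared() has been called, then add
the fence) looks roughly like this; the wrapper
example_attach_shared_fence() and its error handling are illustrative
only and not part of the patch:

#include <linux/dma-resv.h>
#include <linux/dma-fence.h>

/* Sketch: attach @fence as a shared (read) fence to @obj. */
static int example_attach_shared_fence(struct dma_resv *obj,
                                       struct dma_fence *fence)
{
        int ret;

        ret = dma_resv_lock(obj, NULL);         /* obj->lock must be held */
        if (ret)
                return ret;

        ret = dma_resv_reserve_shared(obj, 1);  /* reserve one slot first */
        if (ret)
                goto unlock;

        dma_resv_add_shared_fence(obj, fence);  /* cannot fail after reserve */

unlock:
        dma_resv_unlock(obj);
        return ret;
}

dma_resv_add_shared_fence() itself cannot fail, which is why the slot has
to be reserved up front, while errors can still be returned to the caller.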