[v4,0/6] dma-buf: Check status of enable-signaling bit on debug

Message ID 20220914164321.2156-1-Arvind.Yadav@amd.com

Message

Arvind Yadav Sept. 14, 2022, 4:43 p.m. UTC
Fence signaling must be enabled to make sure that
dma_fence_is_signaled() can ever return true.
Since drivers and implementations sometimes mess this up,
this series ensures correct behaviour when DEBUG_WW_MUTEX_SLOWPATH
is used during debugging.
This should make any implementation bugs that result in
fences never being signaled much more obvious.
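
A rough sketch of the idea (illustration only, not the literal patch;
the helper name below is made up, and the real series adds its own
Kconfig entry as the diffstat shows):

#include <linux/dma-fence.h>

/*
 * Sketch: complain when a fence is treated as signaled although nobody
 * ever enabled signaling on it.  The enable-signaling bit is set by
 * dma_fence_enable_sw_signaling() or the ops->enable_signaling callback.
 */
static bool fence_signaling_was_enabled(struct dma_fence *fence)
{
	if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags))
		return true;

	WARN(1, "fence signaled but signaling was never enabled\n");
	return false;
}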

Arvind Yadav (6):
  [PATCH v4 1/6] dma-buf: Remove the signaled bit status check
  [PATCH v4 2/6] dma-buf: set signaling bit for the stub fence
  [PATCH v4 3/6] dma-buf: Enable signaling on fence for selftests
  [PATCH v4 4/6] dma-buf: dma_fence_wait must enable signaling
  [PATCH v4 5/6] drm/sched: Use parent fence instead of finished
  [PATCH v4 6/6] dma-buf: Check status of enable-signaling bit on debug

 drivers/dma-buf/Kconfig                |  7 +++++++
 drivers/dma-buf/dma-fence.c            | 16 ++++++++++------
 drivers/dma-buf/st-dma-fence-chain.c   |  4 ++++
 drivers/dma-buf/st-dma-fence-unwrap.c  | 22 ++++++++++++++++++++++
 drivers/dma-buf/st-dma-fence.c         | 16 ++++++++++++++++
 drivers/dma-buf/st-dma-resv.c          | 10 ++++++++++
 drivers/gpu/drm/scheduler/sched_main.c |  4 ++--
 include/linux/dma-fence.h              |  5 +++++
 8 files changed, 76 insertions(+), 8 deletions(-)

Comments

Christian König Sept. 17, 2022, 3:18 p.m. UTC | #1
On 17.09.22 08:17, Ville Syrjälä wrote:
> On Thu, Sep 15, 2022 at 06:05:30PM +0200, Christian König wrote:
>> On 15.09.22 15:02, Yadav, Arvind wrote:
>>> On 9/15/2022 5:37 PM, Christian König wrote:
>>>> Is that sufficient to allow running a desktop on amdgpu with the
>>>> extra check enabled? If yes that would be quite a milestone.
>>>>
>>> Yes, it is running on amdgpu with the extra config enabled.
>> In this case I will start pushing the patches to drm-misc-next. I'm just
>> going to leave out the last one until the IGT tests are working as well.
> ffs Christian. intel CI blew up yet again:
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12146/shard-glk7/igt@kms_plane_lowres@tiling-y@pipe-c-hdmi-a-2.html
>
> The last time (some ttm thing) was just a week or two ago,
> so it's really getting tiresome watching you push entirely
> untested stuff all the time. Would be really helpful if you
> finally started to do/require premerge testing.

Well first of all sorry for causing trouble, but as I wrote above I 
intentionally left out the last one to *not* break the IGT tests.

The patches pushed so far were just updating a bunch of corner cases 
and fixing the selftests.

Do you have any more insight why that should affect the IGT tests?

Regards,
Christian.
Steven Price Sept. 29, 2022, 2:53 p.m. UTC | #2
On 14/09/2022 17:43, Arvind Yadav wrote:
> Using the parent fence instead of the finished fence
> to get the job status. This change is to avoid GPU
> scheduler timeout error which can cause GPU reset.

I'm able to reproduce crashes on Panfrost and I believe this commit is
the cause. Specifically it's possible for job->s_fence->parent to be NULL.

The underlying issue seems to involve drm_sched_resubmit_jobs_ext() - if
the run_job() callback returns an error it will set s_fence->parent to
NULL after signalling s_fence->finished:

> 		fence = sched->ops->run_job(s_job);
> 		i++;
> 
> 		if (IS_ERR_OR_NULL(fence)) {
> 			if (IS_ERR(fence))
> 				dma_fence_set_error(&s_fence->finished, PTR_ERR(fence));
> 
> 			s_job->s_fence->parent = NULL;

I don't understand the reasoning behind this change, but it doesn't seem
right to be using the parent fence when we have code which can be
setting that pointer to NULL.

Since I don't understand the reasoning my only suggestion is to revert
this patch (and potentially the dependent patch "dma-buf: Check status
of enable-signaling bit on debug"?).

Can anyone suggest a better fix?

Thanks,

Steve

> Signed-off-by: Arvind Yadav <Arvind.Yadav@amd.com>
> Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> ---
> 
> changes in v1,v2 - Enable signaling for finished fence in sche_main()
> is removed
> 
> ---
>  drivers/gpu/drm/scheduler/sched_main.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index e0ab14e0fb6b..2ac28ad11432 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -829,7 +829,7 @@ drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
>  	job = list_first_entry_or_null(&sched->pending_list,
>  				       struct drm_sched_job, list);
>  
> -	if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
> +	if (job && dma_fence_is_signaled(job->s_fence->parent)) {
>  		/* remove job from pending_list */
>  		list_del_init(&job->list);
>  
> @@ -841,7 +841,7 @@ drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
>  
>  		if (next) {
>  			next->s_fence->scheduled.timestamp =
> -				job->s_fence->finished.timestamp;
> +				job->s_fence->parent->timestamp;
>  			/* start TO timer for next job */
>  			drm_sched_start_timeout(sched);
>  		}
Christian König Sept. 29, 2022, 2:57 p.m. UTC | #3
On 29.09.22 16:53, Steven Price wrote:
> On 14/09/2022 17:43, Arvind Yadav wrote:
>> Using the parent fence instead of the finished fence
>> to get the job status. This change is to avoid GPU
>> scheduler timeout error which can cause GPU reset.
> I'm able to reproduce crashes on Panfrost and I believe this commit is
> the cause. Specifically it's possible for job->s_fence->parent to be NULL.
>
> The underlying issue seems to involve drm_sched_resubmit_jobs_ext() - if
> the run_jobs() callback returns an error it will set s_fence->parent to
> NULL after signalling s_fence->finished:
>
>> 		fence = sched->ops->run_job(s_job);
>> 		i++;
>>
>> 		if (IS_ERR_OR_NULL(fence)) {
>> 			if (IS_ERR(fence))
>> 				dma_fence_set_error(&s_fence->finished, PTR_ERR(fence));
>>
>> 			s_job->s_fence->parent = NULL;
> I don't understand the reasoning behind this change, but it doesn't seem
> right to be using the parent fence when we have code which can be
> setting that pointer to NULL.
>
> Since I don't understand the reasoning my only suggestion is to revert
> this patch (and potentially the dependent patch "dma-buf: Check status
> of enable-signaling bit on debug"?).
>
> Can anyone suggest a better fix?

Well, first of all please absolutely don't use 
drm_sched_resubmit_jobs_ext()!

It was an extremely bad idea in amdgpu to approach GPU resets by 
re-submitting jobs and it was an even worse idea to push this into the 
scheduler.

The design of dma_fence is that you submit that once and *only* once and 
then get a result for this submission. If re-submission is desirable it 
should be done in userspace or at least higher levels.

Apart from that, yes a NULL check is missing here but that should be 
trivial to fix.
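
For illustration, such a check could be as small as falling back to the
finished fence when the parent is NULL, in drm_sched_get_cleanup_job()
(a sketch only, assuming that run_job() failure is the only way parent
ends up NULL here; not a posted patch):

	job = list_first_entry_or_null(&sched->pending_list,
				       struct drm_sched_job, list);

	/*
	 * Sketch: if run_job() failed, parent was left NULL but finished
	 * has already been signaled, so test the finished fence instead.
	 */
	if (job && dma_fence_is_signaled(job->s_fence->parent ?:
					 &job->s_fence->finished)) {
		/* remove job from pending_list */
		list_del_init(&job->list);
		...
	}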

Thanks,
Christian.

>
> Thanks,
>
> Steve
>
>> Signed-off-by: Arvind Yadav <Arvind.Yadav@amd.com>
>> Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>> ---
>>
>> changes in v1,v2 - Enable signaling for finished fence in sche_main()
>> is removed
>>
>> ---
>>   drivers/gpu/drm/scheduler/sched_main.c | 4 ++--
>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>> index e0ab14e0fb6b..2ac28ad11432 100644
>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>> @@ -829,7 +829,7 @@ drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
>>   	job = list_first_entry_or_null(&sched->pending_list,
>>   				       struct drm_sched_job, list);
>>   
>> -	if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
>> +	if (job && dma_fence_is_signaled(job->s_fence->parent)) {
>>   		/* remove job from pending_list */
>>   		list_del_init(&job->list);
>>   
>> @@ -841,7 +841,7 @@ drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
>>   
>>   		if (next) {
>>   			next->s_fence->scheduled.timestamp =
>> -				job->s_fence->finished.timestamp;
>> +				job->s_fence->parent->timestamp;
>>   			/* start TO timer for next job */
>>   			drm_sched_start_timeout(sched);
>>   		}
Steven Price Sept. 29, 2022, 3:31 p.m. UTC | #4
On 29/09/2022 15:57, Christian König wrote:
> On 29.09.22 16:53, Steven Price wrote:
>> On 14/09/2022 17:43, Arvind Yadav wrote:
>>> Using the parent fence instead of the finished fence
>>> to get the job status. This change is to avoid GPU
>>> scheduler timeout error which can cause GPU reset.
>> I'm able to reproduce crashes on Panfrost and I believe this commit is
>> the cause. Specifically it's possible for job->s_fence->parent to be
>> NULL.
>>
>> The underlying issue seems to involve drm_sched_resubmit_jobs_ext() - if
>> the run_jobs() callback returns an error it will set s_fence->parent to
>> NULL after signalling s_fence->finished:
>>
>>>         fence = sched->ops->run_job(s_job);
>>>         i++;
>>>
>>>         if (IS_ERR_OR_NULL(fence)) {
>>>             if (IS_ERR(fence))
>>>                 dma_fence_set_error(&s_fence->finished, PTR_ERR(fence));
>>>
>>>             s_job->s_fence->parent = NULL;
>> I don't understand the reasoning behind this change, but it doesn't seem
>> right to be using the parent fence when we have code which can be
>> setting that pointer to NULL.
>>
>> Since I don't understand the reasoning my only suggestion is to revert
>> this patch (and potentially the dependent patch "dma-buf: Check status
>> of enable-signaling bit on debug"?).
>>
>> Can anyone suggest a better fix?
> 
> Well, first of all please absolutely don't use
> drm_sched_resubmit_jobs_ext()!

Panfrost isn't using drm_sched_resubmit_jobs_ext() directly but via
drm_sched_resubmit_jobs().

> It was an extremely bad idea in amdgpu to approach GPU by re-submitting
> jobs and it was an even worse idea to push this into the scheduler.
> 
> The design of dma_fence is that you submit that once and *only* once and
> then get a result for this submission. If re-submission is desirable it
> should be done in userspace or at least higher levels.

Panfrost has an interesting feature where it's possible to rescue a job
during a GPU reset. Because jobs are queued on the GPU, if the job hasn't
actually started executing then it's quite possible to safely resubmit
it from the kernel driver, and user space doesn't need to be involved.

The benefit of this is that if another process has hung the GPU, that
process's jobs can be killed off without affecting any other innocent
processes.

One option would be to hide all this from the scheduler, but I can't see
how to do that without also hiding the actual reset from the scheduler.
Admittedly at the moment Panfrost is far too aggressive at resetting and
will perform a GPU reset in conditions where it's completely
unnecessary. There's work to do there but I haven't had the time to look
at it yet.

> Apart from that, yes a NULL check is missing here but that should be
> trivial to fix.

What I'm struggling to get my head round is whether it's correct to
always treat the job as signalled just because s_fence->parent is NULL?

Thanks,

Steve

> Thanks,
> Christian.
> 
>>
>> Thanks,
>>
>> Steve
>>
>>> Signed-off-by: Arvind Yadav <Arvind.Yadav@amd.com>
>>> Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>> ---
>>>
>>> changes in v1,v2 - Enable signaling for finished fence in sche_main()
>>> is removed
>>>
>>> ---
>>>   drivers/gpu/drm/scheduler/sched_main.c | 4 ++--
>>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c
>>> b/drivers/gpu/drm/scheduler/sched_main.c
>>> index e0ab14e0fb6b..2ac28ad11432 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>> @@ -829,7 +829,7 @@ drm_sched_get_cleanup_job(struct
>>> drm_gpu_scheduler *sched)
>>>       job = list_first_entry_or_null(&sched->pending_list,
>>>                          struct drm_sched_job, list);
>>>   -    if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
>>> +    if (job && dma_fence_is_signaled(job->s_fence->parent)) {
>>>           /* remove job from pending_list */
>>>           list_del_init(&job->list);
>>>   @@ -841,7 +841,7 @@ drm_sched_get_cleanup_job(struct
>>> drm_gpu_scheduler *sched)
>>>             if (next) {
>>>               next->s_fence->scheduled.timestamp =
>>> -                job->s_fence->finished.timestamp;
>>> +                job->s_fence->parent->timestamp;
>>>               /* start TO timer for next job */
>>>               drm_sched_start_timeout(sched);
>>>           }
>
Christian König Sept. 29, 2022, 5:07 p.m. UTC | #5
On 29.09.22 17:31, Steven Price wrote:
> On 29/09/2022 15:57, Christian König wrote:
>> On 29.09.22 16:53, Steven Price wrote:
>>> On 14/09/2022 17:43, Arvind Yadav wrote:
>>>> Using the parent fence instead of the finished fence
>>>> to get the job status. This change is to avoid GPU
>>>> scheduler timeout error which can cause GPU reset.
>>> I'm able to reproduce crashes on Panfrost and I believe this commit is
>>> the cause. Specifically it's possible for job->s_fence->parent to be
>>> NULL.
>>>
>>> The underlying issue seems to involve drm_sched_resubmit_jobs_ext() - if
>>> the run_jobs() callback returns an error it will set s_fence->parent to
>>> NULL after signalling s_fence->finished:
>>>
>>>>          fence = sched->ops->run_job(s_job);
>>>>          i++;
>>>>
>>>>          if (IS_ERR_OR_NULL(fence)) {
>>>>              if (IS_ERR(fence))
>>>>                  dma_fence_set_error(&s_fence->finished, PTR_ERR(fence));
>>>>
>>>>              s_job->s_fence->parent = NULL;
>>> I don't understand the reasoning behind this change, but it doesn't seem
>>> right to be using the parent fence when we have code which can be
>>> setting that pointer to NULL.
>>>
>>> Since I don't understand the reasoning my only suggestion is to revert
>>> this patch (and potentially the dependent patch "dma-buf: Check status
>>> of enable-signaling bit on debug"?).
>>>
>>> Can anyone suggest a better fix?
>> Well, first of all please absolutely don't use
>> drm_sched_resubmit_jobs_ext()!
> Panfrost isn't using drm_sched_resubmit_jobs_ext() directly but via
> drm_sched_resubmit_jobs().

Yeah, but it's the same problem: this isn't designed very well.

>> It was an extremely bad idea in amdgpu to approach GPU by re-submitting
>> jobs and it was an even worse idea to push this into the scheduler.
>>
>> The design of dma_fence is that you submit that once and *only* once and
>> then get a result for this submission. If re-submission is desirable it
>> should be done in userspace or at least higher levels.
> Panfrost has an interesting feature where it's possible to rescue a job
> during a GPU reset. Because jobs are queued on the GPU if the job hasn't
> actually started executing then it's quite possible to safely resubmit
> it from the kernel driver and user space doesn't need to be involved.

That's actually fine. E.g. when you can save the hardware state and 
restart it, there is nothing as far as I can see which speaks against that.

The problem is rather pushing this into the scheduler and trying to fit 
a square peg through a round hole.

You either end up allocating memory while inside a GPU reset (which is 
illegal because allocating memory could need to wait for the reset to 
finish). Or you end up re-using the same dma_fence object twice (which 
in turn is illegal from the dma_fence design).

> The benefit of this is if another process has hung the GPU that
> processes jobs can be killed off without affecting any other innocent
> processes.
>
> One option would be to hide all this from the scheduler, but I can't see
> how to do that without also hiding the actual reset from the scheduler.
> Admittedly at the moment Panfrost is far too aggressive at resetting and
> will perform a GPU reset in conditions where it's completely
> unnecessary. There's work to do there but I haven't had the time to look
> at it yet.
>
>> Apart from that, yes a NULL check is missing here but that should be
>> trivial to fix.
> What I'm struggling to get my head round is whether it's correct to
> always treat the job as signalled just because s_fence->parent is NULL?

Well, s_fence->parent will never be set to anything other than NULL in 
this situation, will it?

The problem with using the finished fence is that this is actually the 
public interface of the scheduler instead of the internal state.

In other words s_fence->parent is what the scheduler deals with to 
produce the s_fence->finished state.
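
For reference, the three fences involved live in struct drm_sched_fence
(abridged from include/drm/gpu_scheduler.h, comments paraphrased):

struct drm_sched_fence {
	struct dma_fence	scheduled;	/* signaled when the scheduler picks the job up */
	struct dma_fence	finished;	/* public fence, signaled when the job is done */
	struct dma_fence	*parent;	/* hardware fence returned by run_job() */
	/* ... */
};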

Christian.

>
> Thanks,
>
> Steve
>
>> Thanks,
>> Christian.
>>
>>> Thanks,
>>>
>>> Steve
>>>
>>>> Signed-off-by: Arvind Yadav <Arvind.Yadav@amd.com>
>>>> Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>> ---
>>>>
>>>> changes in v1,v2 - Enable signaling for finished fence in sche_main()
>>>> is removed
>>>>
>>>> ---
>>>>    drivers/gpu/drm/scheduler/sched_main.c | 4 ++--
>>>>    1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c
>>>> b/drivers/gpu/drm/scheduler/sched_main.c
>>>> index e0ab14e0fb6b..2ac28ad11432 100644
>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>> @@ -829,7 +829,7 @@ drm_sched_get_cleanup_job(struct
>>>> drm_gpu_scheduler *sched)
>>>>        job = list_first_entry_or_null(&sched->pending_list,
>>>>                           struct drm_sched_job, list);
>>>>    -    if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
>>>> +    if (job && dma_fence_is_signaled(job->s_fence->parent)) {
>>>>            /* remove job from pending_list */
>>>>            list_del_init(&job->list);
>>>>    @@ -841,7 +841,7 @@ drm_sched_get_cleanup_job(struct
>>>> drm_gpu_scheduler *sched)
>>>>              if (next) {
>>>>                next->s_fence->scheduled.timestamp =
>>>> -                job->s_fence->finished.timestamp;
>>>> +                job->s_fence->parent->timestamp;
>>>>                /* start TO timer for next job */
>>>>                drm_sched_start_timeout(sched);
>>>>            }