[0/2] drm/msm: Add the MSM_WAIT_IOVA ioctl

Message ID 20200113153605.52350-1-brian@brkho.com

Message

Brian Ho Jan. 13, 2020, 3:36 p.m. UTC
This patch set implements the MSM_WAIT_IOVA ioctl, which lets
userspace sleep until the value at a given iova satisfies a specified
condition. This is needed in turnip to implement the
VK_QUERY_RESULT_WAIT_BIT flag for vkGetQueryPoolResults.

First, we add a GPU-wide wait queue that is signaled on all IRQs.
MSM_WAIT_IOVA then sleeps on this wait queue, re-checking the value
at the iova on every wakeup, until the condition is met.
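
As a rough illustration (not the actual patch 2/2 body), the core of
the wait could look like the sketch below: sleep on gpu->event with
wait_event_interruptible_timeout() and re-check the value at a kernel
mapping of the iova on every wakeup. The helper name, the 'vaddr'
parameter, and the simple >= comparison are assumptions made for the
sake of the example.

    /* Hypothetical sketch only; 'vaddr' is assumed to be a kernel
     * mapping of the GEM object backing the iova. */
    static int wait_iova_sketch(struct msm_gpu *gpu, u32 *vaddr,
                                u32 ref, unsigned long timeout_jiffies)
    {
            long ret;

            /* Woken by wake_up_all(&gpu->event) in irq_handler();
             * the condition is re-evaluated on each wakeup. */
            ret = wait_event_interruptible_timeout(gpu->event,
                            READ_ONCE(*vaddr) >= ref,
                            timeout_jiffies);
            if (ret == 0)
                    return -ETIMEDOUT;

            return ret < 0 ? ret : 0;  /* -ERESTARTSYS on a signal */
    }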

The corresponding merge request in mesa can be found at:
https://gitlab.freedesktop.org/mesa/mesa/merge_requests/3279
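
For context, a userspace caller (e.g. turnip) might invoke the ioctl
roughly as below. The struct fields and values are assumptions, since
the actual uapi struct is defined in patch 2/2; drmCommandWriteRead()
is the usual libdrm entry point.

    /* Hypothetical usage; field names are assumptions. */
    struct drm_msm_wait_iova req = {
            .iova = query_bo_iova + availability_offset,
            .ref = 1,                /* wait until *iova >= 1 */
            .timeout = 1000000000,   /* e.g. 1s, in ns */
    };

    int ret = drmCommandWriteRead(fd, DRM_MSM_WAIT_IOVA,
                                  &req, sizeof(req));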

Brian Ho (2):
  drm/msm: Add a GPU-wide wait queue
  drm/msm: Add MSM_WAIT_IOVA ioctl

 drivers/gpu/drm/msm/msm_drv.c | 63 +++++++++++++++++++++++++++++++++--
 drivers/gpu/drm/msm/msm_gpu.c |  4 +++
 drivers/gpu/drm/msm/msm_gpu.h |  3 ++
 include/uapi/drm/msm_drm.h    | 13 ++++++++
 4 files changed, 81 insertions(+), 2 deletions(-)

Comments

Jordan Crouse Jan. 13, 2020, 5:55 p.m. UTC | #1
On Mon, Jan 13, 2020 at 10:36:04AM -0500, Brian Ho wrote:
> This wait queue is signaled on all IRQs for a given GPU and will be
> used as part of the new MSM_WAIT_IOVA ioctl so userspace can sleep
> until the value at a given iova reaches a certain condition.
> 
> Signed-off-by: Brian Ho <brian@brkho.com>
> ---
>  drivers/gpu/drm/msm/msm_gpu.c | 4 ++++
>  drivers/gpu/drm/msm/msm_gpu.h | 3 +++
>  2 files changed, 7 insertions(+)
> 
> diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
> index a052364a5d74..d7310c1336e5 100644
> --- a/drivers/gpu/drm/msm/msm_gpu.c
> +++ b/drivers/gpu/drm/msm/msm_gpu.c
> @@ -779,6 +779,8 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit,
>  static irqreturn_t irq_handler(int irq, void *data)
>  {
>  	struct msm_gpu *gpu = data;
> +	wake_up_all(&gpu->event);
> +

I suppose it is intentional to have this happen on *all* interrupts
because you might be using the CP interrupts for fun and profit and
you don't want to plumb in callbacks? It is probably okay to do this
for all interrupts (including errors), but if we end up spending a
lot of time here we might want to only trigger on certain IRQs.
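
Narrowing the wakeup could look something like the sketch below; the
->irq_status() callback and wake_irq_mask field are hypothetical, and
neither exists in msm_gpu today.

    static irqreturn_t irq_handler(int irq, void *data)
    {
            struct msm_gpu *gpu = data;

            /* Hypothetical: only wake waiters for status bits the
             * per-GPU backend flags as relevant (e.g. CP IRQs). */
            if (gpu->funcs->irq_status(gpu) & gpu->wake_irq_mask)
                    wake_up_all(&gpu->event);

            return gpu->funcs->irq(gpu);
    }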


>  	return gpu->funcs->irq(gpu);
>  }
>  
> @@ -871,6 +873,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
>  
>  	spin_lock_init(&gpu->perf_lock);
>  
> +	init_waitqueue_head(&gpu->event);
> +
>  
>  	/* Map registers: */
>  	gpu->mmio = msm_ioremap(pdev, config->ioname, name);
> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
> index ab8f0f9c9dc8..60562f065dbc 100644
> --- a/drivers/gpu/drm/msm/msm_gpu.h
> +++ b/drivers/gpu/drm/msm/msm_gpu.h
> @@ -104,6 +104,9 @@ struct msm_gpu {
>  
>  	struct msm_gem_address_space *aspace;
>  
> +	/* GPU-wide wait queue that is signaled on all IRQs */
> +	wait_queue_head_t event;
> +
>  	/* Power Control: */
>  	struct regulator *gpu_reg, *gpu_cx;
>  	struct clk_bulk_data *grp_clks;
> -- 
> 2.25.0.rc1.283.g88dfdc4193-goog
> 
Rob Clark Jan. 13, 2020, 6:23 p.m. UTC | #2
On Mon, Jan 13, 2020 at 9:55 AM Jordan Crouse <jcrouse@codeaurora.org> wrote:
>
> On Mon, Jan 13, 2020 at 10:36:04AM -0500, Brian Ho wrote:
> > This wait queue is signaled on all IRQs for a given GPU and will be
> > used as part of the new MSM_WAIT_IOVA ioctl so userspace can sleep
> > until the value at a given iova reaches a certain condition.
> >
> > Signed-off-by: Brian Ho <brian@brkho.com>
> > ---
> >  drivers/gpu/drm/msm/msm_gpu.c | 4 ++++
> >  drivers/gpu/drm/msm/msm_gpu.h | 3 +++
> >  2 files changed, 7 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
> > index a052364a5d74..d7310c1336e5 100644
> > --- a/drivers/gpu/drm/msm/msm_gpu.c
> > +++ b/drivers/gpu/drm/msm/msm_gpu.c
> > @@ -779,6 +779,8 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit,
> >  static irqreturn_t irq_handler(int irq, void *data)
> >  {
> >       struct msm_gpu *gpu = data;
> > +     wake_up_all(&gpu->event);
> > +
>
> I suppose it is intentional to have this happen on *all* interrupts
> because you might be using the CP interrupts for fun and profit and
> you don't want to plumb in callbacks? It is probably okay to do this
> for all interrupts (including errors), but if we end up spending a
> lot of time here we might want to only trigger on certain IRQs.

Was just talking to Kristian about GPU hangs... and I suspect we might
want the ioctl to return an error if there is a GPU reset (so that
userspace can use the robustness uapi to check whether the reset was
something it cares about, etc.)

Which is as good a reason as I can think of for the wake_up_all() on
all IRQs.
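
A minimal sketch of that, assuming a hypothetical 'in_recovery' flag
that the recover path would set before doing a wake_up_all() on
gpu->event (the flag name and the -EIO choice are illustrative, and
'vaddr'/'ref' follow the earlier sketch):

    ret = wait_event_interruptible_timeout(gpu->event,
                    READ_ONCE(*vaddr) >= ref ||
                    READ_ONCE(gpu->in_recovery),
                    timeout_jiffies);
    if (ret > 0 && READ_ONCE(gpu->in_recovery))
            return -EIO;  /* userspace can then query robustness state */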

BR,
-R
