[v2,0/2] drm: fdinfo memory stats

Message ID 20230410210608.1873968-1-robdclark@gmail.com

Message

Rob Clark April 10, 2023, 9:06 p.m. UTC
From: Rob Clark <robdclark@chromium.org>

Similar motivation to another similar recent attempt[1], but with an attempt
to have some shared code for this, as well as documentation.

It is probably a bit UMA-centric, I guess devices with VRAM might want
some placement stats as well.  But this seems like a reasonable start.

Basic gputop support: https://patchwork.freedesktop.org/series/116236/
And already nvtop support: https://github.com/Syllo/nvtop/pull/204

[1] https://patchwork.freedesktop.org/series/112397/

Rob Clark (2):
  drm: Add fdinfo memory stats
  drm/msm: Add memory stats to fdinfo

 Documentation/gpu/drm-usage-stats.rst | 21 +++++++
 drivers/gpu/drm/drm_file.c            | 79 +++++++++++++++++++++++++++
 drivers/gpu/drm/msm/msm_drv.c         | 25 ++++++++-
 drivers/gpu/drm/msm/msm_gpu.c         |  2 -
 include/drm/drm_file.h                | 10 ++++
 5 files changed, 134 insertions(+), 3 deletions(-)

Comments

Daniel Vetter April 11, 2023, 10:43 a.m. UTC | #1
On Mon, Apr 10, 2023 at 02:06:06PM -0700, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
> 
> Add a helper to dump memory stats to fdinfo.  For the things the drm
> core isn't aware of, use a callback.
> 
> v2: Fix typos, change size units to match docs, use div_u64
> 
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>

Uh can't we wire this up by default? Having this as a per-driver opt-in
sounds like we'll get maximally fragmented drm fd_info, and since that's
uapi I don't think that's any good at all.

I think it's time we have
- drm_fd_info
- rolled out to all drivers in their fops
- with feature checks as appropriate
- push the driver-specific things into a drm_driver callback

And I guess we should start giving people a hard time for making things
needlessly driver-specific ... there's really no reason at all for this
not to be consistent across drivers.
-Daniel
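
A rough sketch of the kind of shared handler being suggested here, for
illustration only (the show_fdinfo hook on struct drm_driver and the exact
wiring are assumptions, not something this series implements):

/*
 * Hypothetical common fdinfo handler in drm_file.c: the core prints the
 * standard keys for every driver, then chains into an optional driver
 * callback for driver-specific keys.
 */
static void drm_show_fdinfo(struct seq_file *m, struct file *f)
{
	struct drm_file *file = f->private_data;
	struct drm_device *dev = file->minor->dev;
	struct drm_printer p = drm_seq_file_printer(m);

	drm_printf(&p, "drm-driver:\t%s\n", dev->driver->name);

	if (dev->driver->show_fdinfo)
		dev->driver->show_fdinfo(&p, file);
}

Drivers would then just point their fops at the common handler
(.show_fdinfo = drm_show_fdinfo) instead of each rolling their own.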

> ---
>  Documentation/gpu/drm-usage-stats.rst | 21 +++++++
>  drivers/gpu/drm/drm_file.c            | 79 +++++++++++++++++++++++++++
>  include/drm/drm_file.h                | 10 ++++
>  3 files changed, 110 insertions(+)
> 
> diff --git a/Documentation/gpu/drm-usage-stats.rst b/Documentation/gpu/drm-usage-stats.rst
> index b46327356e80..b5e7802532ed 100644
> --- a/Documentation/gpu/drm-usage-stats.rst
> +++ b/Documentation/gpu/drm-usage-stats.rst
> @@ -105,6 +105,27 @@ object belong to this client, in the respective memory region.
>  Default unit shall be bytes with optional unit specifiers of 'KiB' or 'MiB'
>  indicating kibi- or mebi-bytes.
>  
> +- drm-shared-memory: <uint> [KiB|MiB]
> +
> +The total size of buffers that are shared with another file (ie. have more
> +than a single handle).
> +
> +- drm-private-memory: <uint> [KiB|MiB]
> +
> +The total size of buffers that are not shared with another file.
> +
> +- drm-resident-memory: <uint> [KiB|MiB]
> +
> +The total size of buffers that are resident in system memory.
> +
> +- drm-purgeable-memory: <uint> [KiB|MiB]
> +
> +The total size of buffers that are purgeable.
> +
> +- drm-active-memory: <uint> [KiB|MiB]
> +
> +The total size of buffers that are active on one or more rings.
> +
>  - drm-cycles-<str> <uint>
>  
>  Engine identifier string must be the same as the one specified in the
> diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
> index a51ff8cee049..085b01842a87 100644
> --- a/drivers/gpu/drm/drm_file.c
> +++ b/drivers/gpu/drm/drm_file.c
> @@ -42,6 +42,7 @@
>  #include <drm/drm_client.h>
>  #include <drm/drm_drv.h>
>  #include <drm/drm_file.h>
> +#include <drm/drm_gem.h>
>  #include <drm/drm_print.h>
>  
>  #include "drm_crtc_internal.h"
> @@ -868,6 +869,84 @@ void drm_send_event(struct drm_device *dev, struct drm_pending_event *e)
>  }
>  EXPORT_SYMBOL(drm_send_event);
>  
> +static void print_size(struct drm_printer *p, const char *stat, size_t sz)
> +{
> +	const char *units[] = {"", " KiB", " MiB"};
> +	unsigned u;
> +
> +	for (u = 0; u < ARRAY_SIZE(units) - 1; u++) {
> +		if (sz < SZ_1K)
> +			break;
> +		sz = div_u64(sz, SZ_1K);
> +	}
> +
> +	drm_printf(p, "%s:\t%zu%s\n", stat, sz, units[u]);
> +}
> +
> +/**
> + * drm_print_memory_stats - Helper to print standard fdinfo memory stats
> + * @file: the DRM file
> + * @p: the printer to print output to
> + * @status: callback to get driver tracked object status
> + *
> + * Helper to iterate over GEM objects with a handle allocated in the specified
> + * file.  The optional status callback can return additional object state which
> + * determines which stats the object is counted against.  The callback is called
> + * under table_lock.  Racing against object status change is "harmless", and the
> + * callback can expect to not race against object destruction.
> + */
> +void drm_print_memory_stats(struct drm_file *file, struct drm_printer *p,
> +			    enum drm_gem_object_status (*status)(struct drm_gem_object *))
> +{
> +	struct drm_gem_object *obj;
> +	struct {
> +		size_t shared;
> +		size_t private;
> +		size_t resident;
> +		size_t purgeable;
> +		size_t active;
> +	} size = {0};
> +	int id;
> +
> +	spin_lock(&file->table_lock);
> +	idr_for_each_entry (&file->object_idr, obj, id) {
> +		enum drm_gem_object_status s = 0;
> +
> +		if (status)
> +			s = status(obj);
> +
> +		if (obj->handle_count > 1) {
> +			size.shared += obj->size;
> +		} else {
> +			size.private += obj->size;
> +		}
> +
> +		if (s & DRM_GEM_OBJECT_RESIDENT) {
> +			size.resident += obj->size;
> +			s &= ~DRM_GEM_OBJECT_PURGEABLE;
> +		}
> +
> +		if (s & DRM_GEM_OBJECT_ACTIVE) {
> +			size.active += obj->size;
> +			s &= ~DRM_GEM_OBJECT_PURGEABLE;
> +		}
> +
> +		if (s & DRM_GEM_OBJECT_PURGEABLE)
> +			size.purgeable += obj->size;
> +	}
> +	spin_unlock(&file->table_lock);
> +
> +	print_size(p, "drm-shared-memory", size.shared);
> +	print_size(p, "drm-private-memory", size.private);
> +
> +	if (status) {
> +		print_size(p, "drm-resident-memory", size.resident);
> +		print_size(p, "drm-purgeable-memory", size.purgeable);
> +		print_size(p, "drm-active-memory", size.active);
> +	}
> +}
> +EXPORT_SYMBOL(drm_print_memory_stats);
> +
>  /**
>   * mock_drm_getfile - Create a new struct file for the drm device
>   * @minor: drm minor to wrap (e.g. #drm_device.primary)
> diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
> index 0d1f853092ab..7bd8a1374f39 100644
> --- a/include/drm/drm_file.h
> +++ b/include/drm/drm_file.h
> @@ -41,6 +41,7 @@
>  struct dma_fence;
>  struct drm_file;
>  struct drm_device;
> +struct drm_printer;
>  struct device;
>  struct file;
>  
> @@ -438,6 +439,15 @@ void drm_send_event_timestamp_locked(struct drm_device *dev,
>  				     struct drm_pending_event *e,
>  				     ktime_t timestamp);
>  
> +enum drm_gem_object_status {
> +	DRM_GEM_OBJECT_RESIDENT  = BIT(0),
> +	DRM_GEM_OBJECT_PURGEABLE = BIT(1),
> +	DRM_GEM_OBJECT_ACTIVE    = BIT(2),
> +};
> +
> +void drm_print_memory_stats(struct drm_file *file, struct drm_printer *p,
> +			    enum drm_gem_object_status (*status)(struct drm_gem_object *));
> +
>  struct file *mock_drm_getfile(struct drm_minor *minor, unsigned int flags);
>  
>  #endif /* _DRM_FILE_H_ */
> -- 
> 2.39.2
>
Rob Clark April 11, 2023, 3:02 p.m. UTC | #2
On Tue, Apr 11, 2023 at 3:43 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Mon, Apr 10, 2023 at 02:06:06PM -0700, Rob Clark wrote:
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Add a helper to dump memory stats to fdinfo.  For the things the drm
> > core isn't aware of, use a callback.
> >
> > v2: Fix typos, change size units to match docs, use div_u64
> >
> > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
>
> Uh can't we wire this up by default? Having this as a per-driver opt-in
> sounds like we'll get maximally fragmented drm fd_info, and since that's
> uapi I don't think that's any good at all.

That is the reason for the centralized documentation of the props (and
why for this one I added a helper, rather than continuing the current
pattern of everyone rolling their own)..

We _could_ (and I had contemplated) do this all in core if (a) we
move madv to drm_gem_object, and (b) track
drm_gem_get_pages()/drm_gem_put_pages().  I guess neither is totally
unreasonable; pretty much all the non-ttm/non-cma GEM drivers have
some form of madvise ioctl and use
drm_gem_get_pages()/drm_gem_put_pages()..
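
For reference, a minimal sketch of what the callback approach from this
series looks like on the driver side (the foo_* names are made up for
illustration, this is not the actual msm patch):

/*
 * Illustrative driver-side usage of the drm_print_memory_stats() helper
 * from patch 1/2.  The foo_gem_is_*() helpers stand in for whatever
 * driver-private state (madv, pin count, fencing) the driver tracks.
 */
static enum drm_gem_object_status foo_gem_status(struct drm_gem_object *obj)
{
	enum drm_gem_object_status status = 0;

	if (foo_gem_is_resident(obj))
		status |= DRM_GEM_OBJECT_RESIDENT;
	if (foo_gem_is_purgeable(obj))
		status |= DRM_GEM_OBJECT_PURGEABLE;
	if (foo_gem_is_active(obj))
		status |= DRM_GEM_OBJECT_ACTIVE;

	return status;
}

static void foo_show_fdinfo(struct drm_printer *p, struct drm_file *file)
{
	drm_print_memory_stats(file, p, foo_gem_status);
}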

BR,
-R

Rob Clark April 11, 2023, 4:47 p.m. UTC | #3
On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
>
> From: Rob Clark <robdclark@chromium.org>
>
> Similar motivation to other similar recent attempt[1].  But with an
> attempt to have some shared code for this.  As well as documentation.
>
> It is probably a bit UMA-centric, I guess devices with VRAM might want
> some placement stats as well.  But this seems like a reasonable start.
>
> Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> And already nvtop support: https://github.com/Syllo/nvtop/pull/204

On a related topic, I'm wondering if it would make sense to report
some more global things (temp, freq, etc) via fdinfo?  Some of this,
tools like nvtop could get by trawling sysfs or other driver-specific
ways.  But maybe it makes sense to have these sorts of things reported
in a standardized way (even though they aren't really per-drm_file).

BR,
-R


Daniel Vetter April 11, 2023, 4:53 p.m. UTC | #4
On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> >
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Similar motivation to other similar recent attempt[1].  But with an
> > attempt to have some shared code for this.  As well as documentation.
> >
> > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > some placement stats as well.  But this seems like a reasonable start.
> >
> > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> 
> On a related topic, I'm wondering if it would make sense to report
> some more global things (temp, freq, etc) via fdinfo?  Some of this,
> tools like nvtop could get by trawling sysfs or other driver specific
> ways.  But maybe it makes sense to have these sort of things reported
> in a standardized way (even though they aren't really per-drm_file)

I think that's a bit much layering violation, we'd essentially have to
reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
be in :-)

What might be needed is better glue to go from the fd or fdinfo to the
right hw device and then crawl around the hwmon in sysfs automatically. I
would not be surprised at all if we really suck on this, probably more
likely on SoC than pci gpus where at least everything should be under the
main pci sysfs device.
-Daniel
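
For what it's worth, the fd-to-device half of that glue is already workable
on PCI; a userspace sketch, illustrative only, with error handling omitted:

#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

/*
 * From an open DRM fd, derive the sysfs path of the underlying device
 * and the hwmon directory beneath it.  This works for PCI GPUs, where
 * hwmon hangs off the PCI device; on many SoCs there is nothing at that
 * location, which is the gap being discussed here.
 */
static void hwmon_path_for_drm_fd(int fd, char *buf, size_t len)
{
	struct stat st;

	if (fstat(fd, &st) == 0 && S_ISCHR(st.st_mode))
		snprintf(buf, len, "/sys/dev/char/%u:%u/device/hwmon",
			 major(st.st_rdev), minor(st.st_rdev));
}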

Rob Clark April 11, 2023, 5:13 p.m. UTC | #5
On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> > >
> > > From: Rob Clark <robdclark@chromium.org>
> > >
> > > Similar motivation to other similar recent attempt[1].  But with an
> > > attempt to have some shared code for this.  As well as documentation.
> > >
> > > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > > some placement stats as well.  But this seems like a reasonable start.
> > >
> > > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> >
> > On a related topic, I'm wondering if it would make sense to report
> > some more global things (temp, freq, etc) via fdinfo?  Some of this,
> > tools like nvtop could get by trawling sysfs or other driver specific
> > ways.  But maybe it makes sense to have these sort of things reported
> > in a standardized way (even though they aren't really per-drm_file)
>
> I think that's a bit much layering violation, we'd essentially have to
> reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
> be in :-)

I guess this is true for temp (where there are thermal zones with
potentially multiple temp sensors.. but I'm still digging my way thru
the thermal_cooling_device stuff)

But what about freq?  I think, esp for cases where some "fw thing" is
controlling the freq we end up needing to use gpu counters to measure
the freq.

> What might be needed is better glue to go from the fd or fdinfo to the
> right hw device and then crawl around the hwmon in sysfs automatically. I
> would not be surprised at all if we really suck on this, probably more
> likely on SoC than pci gpus where at least everything should be under the
> main pci sysfs device.

yeah, I *think* userspace would have to look at /proc/device-tree to
find the cooling device(s) associated with the gpu.. at least I don't
see a straightforward way to figure it out just from sysfs

BR,
-R

Dmitry Baryshkov April 11, 2023, 5:35 p.m. UTC | #6
On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
>
> On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> >
> > On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > > On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> > > >
> > > > From: Rob Clark <robdclark@chromium.org>
> > > >
> > > > Similar motivation to other similar recent attempt[1].  But with an
> > > > attempt to have some shared code for this.  As well as documentation.
> > > >
> > > > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > > > some placement stats as well.  But this seems like a reasonable start.
> > > >
> > > > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > > > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> > >
> > > On a related topic, I'm wondering if it would make sense to report
> > > some more global things (temp, freq, etc) via fdinfo?  Some of this,
> > > tools like nvtop could get by trawling sysfs or other driver specific
> > > ways.  But maybe it makes sense to have these sort of things reported
> > > in a standardized way (even though they aren't really per-drm_file)
> >
> > I think that's a bit much layering violation, we'd essentially have to
> > reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
> > be in :-)
>
> I guess this is true for temp (where there are thermal zones with
> potentially multiple temp sensors.. but I'm still digging my way thru
> the thermal_cooling_device stuff)

It is slightly ugly. All thermal zones and cooling devices are virtual
devices (so, even no connection to the particular tsens device). One
can either enumerate them by checking
/sys/class/thermal/thermal_zoneN/type or enumerate them through
/sys/class/hwmon. For cooling devices again the only enumeration is
through /sys/class/thermal/cooling_deviceN/type.

Probably it should be possible to push cooling devices and thermal
zones under corresponding providers. However I do not know if there is
a good way to correlate cooling device (ideally a part of GPU) to the
thermal_zone (which in our case is provided by tsens / temp_alarm
rather than GPU itself).
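
A small userspace sketch of the enumeration described above (illustrative;
matching a zone to the GPU by its type string is only a heuristic, since
there is no authoritative link):

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/*
 * Walk /sys/class/thermal and print each thermal zone's "type", which is
 * currently the only handle userspace has for identifying zones.
 */
static void list_thermal_zones(void)
{
	struct dirent *de;
	DIR *d = opendir("/sys/class/thermal");

	if (!d)
		return;

	while ((de = readdir(d)) != NULL) {
		char path[256], type[64];
		FILE *f;

		if (strncmp(de->d_name, "thermal_zone", 12) != 0)
			continue;

		snprintf(path, sizeof(path), "/sys/class/thermal/%s/type",
			 de->d_name);
		f = fopen(path, "r");
		if (f) {
			if (fgets(type, sizeof(type), f))
				printf("%s: %s", de->d_name, type);
			fclose(f);
		}
	}
	closedir(d);
}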

>
> But what about freq?  I think, esp for cases where some "fw thing" is
> controlling the freq we end up needing to use gpu counters to measure
> the freq.

For the freq it is slightly easier: /sys/class/devfreq/*, devices are
registered under proper parent (IOW, GPU). So one can read
/sys/class/devfreq/3d00000.gpu/cur_freq or
/sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.

However because of the components usage, there is no link from
/sys/class/drm/card0
(/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.

Getting all these items together in a platform-independent way would
be definitely an important but complex topic.

Daniel Vetter April 11, 2023, 6:26 p.m. UTC | #7
On Tue, Apr 11, 2023 at 08:35:48PM +0300, Dmitry Baryshkov wrote:
> On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
> >
> > On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > >
> > > On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > > > On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > >
> > > > > From: Rob Clark <robdclark@chromium.org>
> > > > >
> > > > > Similar motivation to other similar recent attempt[1].  But with an
> > > > > attempt to have some shared code for this.  As well as documentation.
> > > > >
> > > > > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > > > > some placement stats as well.  But this seems like a reasonable start.
> > > > >
> > > > > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > > > > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> > > >
> > > > On a related topic, I'm wondering if it would make sense to report
> > > > some more global things (temp, freq, etc) via fdinfo?  Some of this,
> > > > tools like nvtop could get by trawling sysfs or other driver specific
> > > > ways.  But maybe it makes sense to have these sort of things reported
> > > > in a standardized way (even though they aren't really per-drm_file)
> > >
> > > I think that's a bit much layering violation, we'd essentially have to
> > > reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
> > > be in :-)
> >
> > I guess this is true for temp (where there are thermal zones with
> > potentially multiple temp sensors.. but I'm still digging my way thru
> > the thermal_cooling_device stuff)
> 
> It is slightly ugly. All thermal zones and cooling devices are virtual
> devices (so, even no connection to the particular tsens device). One
> can either enumerate them by checking
> /sys/class/thermal/thermal_zoneN/type or enumerate them through
> /sys/class/hwmon. For cooling devices again the only enumeration is
> through /sys/class/thermal/cooling_deviceN/type.
> 
> Probably it should be possible to push cooling devices and thermal
> zones under corresponding providers. However I do not know if there is
> a good way to correlate cooling device (ideally a part of GPU) to the
> thermal_zone (which in our case is provided by tsens / temp_alarm
> rather than GPU itself).

There aren't even sysfs links to connect the pieces in both directions?

> > But what about freq?  I think, esp for cases where some "fw thing" is
> > controlling the freq we end up needing to use gpu counters to measure
> > the freq.
> 
> For the freq it is slightly easier: /sys/class/devfreq/*, devices are
> registered under proper parent (IOW, GPU). So one can read
> /sys/class/devfreq/3d00000.gpu/cur_freq or
> /sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
> 
> However because of the components usage, there is no link from
> /sys/class/drm/card0
> (/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
> to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.

Hm ... do we need to make component more visible in sysfs, with _looooots_
of links? Atm it's just not even there.

> Getting all these items together in a platform-independent way would
> be definitely an important but complex topic.

Yeah this sounds like some work. But also sounds like it's all generic
issues (thermal zones above and component here) that really should be
fixed at that level?

Cheers, Daniel


Rodrigo Vivi April 12, 2023, 12:47 p.m. UTC | #8
On Wed, Apr 12, 2023 at 10:11:32AM +0200, Daniel Vetter wrote:
> On Wed, Apr 12, 2023 at 01:36:52AM +0300, Dmitry Baryshkov wrote:
> > On 11/04/2023 21:28, Rob Clark wrote:
> > > On Tue, Apr 11, 2023 at 10:36 AM Dmitry Baryshkov
> > > <dmitry.baryshkov@linaro.org> wrote:
> > > > 
> > > > On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
> > > > > 
> > > > > On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > > > > 
> > > > > > On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > > > > > > On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > > 
> > > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > > > 
> > > > > > > > Similar motivation to other similar recent attempt[1].  But with an
> > > > > > > > attempt to have some shared code for this.  As well as documentation.
> > > > > > > > 
> > > > > > > > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > > > > > > > some placement stats as well.  But this seems like a reasonable start.
> > > > > > > > 
> > > > > > > > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > > > > > > > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> > > > > > > 
> > > > > > > On a related topic, I'm wondering if it would make sense to report
> > > > > > > some more global things (temp, freq, etc) via fdinfo?  Some of this,
> > > > > > > tools like nvtop could get by trawling sysfs or other driver specific
> > > > > > > ways.  But maybe it makes sense to have these sort of things reported
> > > > > > > in a standardized way (even though they aren't really per-drm_file)
> > > > > > 
> > > > > > I think that's a bit much layering violation, we'd essentially have to
> > > > > > reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
> > > > > > be in :-)
> > > > > 
> > > > > I guess this is true for temp (where there are thermal zones with
> > > > > potentially multiple temp sensors.. but I'm still digging my way thru
> > > > > the thermal_cooling_device stuff)
> > > > 
> > > > It is slightly ugly. All thermal zones and cooling devices are virtual
> > > > devices (so, even no connection to the particular tsens device). One
> > > > can either enumerate them by checking
> > > > /sys/class/thermal/thermal_zoneN/type or enumerate them through
> > > > /sys/class/hwmon. For cooling devices again the only enumeration is
> > > > through /sys/class/thermal/cooling_deviceN/type.
> > > > 
> > > > Probably it should be possible to push cooling devices and thermal
> > > > zones under corresponding providers. However I do not know if there is
> > > > a good way to correlate cooling device (ideally a part of GPU) to the
> > > > thermal_zone (which in our case is provided by tsens / temp_alarm
> > > > rather than GPU itself).
> > > > 
> > > > > 
> > > > > But what about freq?  I think, esp for cases where some "fw thing" is
> > > > > controlling the freq we end up needing to use gpu counters to measure
> > > > > the freq.
> > > > 
> > > > For the freq it is slightly easier: /sys/class/devfreq/*, devices are
> > > > registered under proper parent (IOW, GPU). So one can read
> > > > /sys/class/devfreq/3d00000.gpu/cur_freq or
> > > > /sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
> > > > 
> > > > However because of the components usage, there is no link from
> > > > /sys/class/drm/card0
> > > > (/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
> > > > to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.
> > > > 
> > > > Getting all these items together in a platform-independent way would
> > > > be definitely an important but complex topic.
> > > 
> > > But I don't believe any of the pci gpu's use devfreq ;-)
> > > 
> > > And also, you can't expect the CPU to actually know the freq when fw
> > > is the one controlling freq.  We can, currently, have a reasonable
> > > approximation from devfreq but that stops if IFPC is implemented.  And
> > > other GPUs have even less direct control.  So freq is a thing that I
> > > don't think we should try to get from "common frameworks"
> > 
> > I think it might be useful to add another passive devfreq governor type for
> > external frequencies. This way we can use the same interface to export
> > non-CPU-controlled frequencies.
> 
> Yeah this sounds like a decent idea to me too. It might also solve the fun
> of various pci devices having very non-standard freq controls in sysfs
> (looking at least at i915 here ...)

I also like the idea of having some common infrastructure for the GPU freq.

hwmon has a good infrastructure, but it is more focused on individual
monitoring devices and is not very welcoming to embedded monitoring and
control. I still want to look into whether at least some freq control
could be aligned there.

Another thing that complicates that is that there are multiple frequency
domains and controls with multipliers in Intel GPU that are not very
standard or easy to integrate.

On a quick glance this devfreq seems neat because it aligns with cpufreq
and its governors. But again it would be hard to align with the multiple
domains and controls. Still, it deserves a look.

I will take a look at both fronts for Xe: hwmon and devfreq. Right now on
Xe we have a lot fewer controls than i915, but I can imagine that soon there
will be requirements that make that grow, and I fear that we will end up just
like i915. So I will take a look before that happens.

> 
> I guess it would minimally be a good idea if we could document this, or
> maybe have a reference implementation in nvtop or whatever the cool thing
> is rn.
> -Daniel
> 
Rob Clark April 12, 2023, 8:09 p.m. UTC | #9
On Wed, Apr 12, 2023 at 5:47 AM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
>
> On Wed, Apr 12, 2023 at 10:11:32AM +0200, Daniel Vetter wrote:
> > On Wed, Apr 12, 2023 at 01:36:52AM +0300, Dmitry Baryshkov wrote:
> > > On 11/04/2023 21:28, Rob Clark wrote:
> > > > On Tue, Apr 11, 2023 at 10:36 AM Dmitry Baryshkov
> > > > <dmitry.baryshkov@linaro.org> wrote:
> > > > >
> > > > > On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
> > > > > >
> > > > > > On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > > > > >
> > > > > > > On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > > > > > > > On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > > >
> > > > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > > > >
> > > > > > > > > Similar motivation to other similar recent attempt[1].  But with an
> > > > > > > > > attempt to have some shared code for this.  As well as documentation.
> > > > > > > > >
> > > > > > > > > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > > > > > > > > some placement stats as well.  But this seems like a reasonable start.
> > > > > > > > >
> > > > > > > > > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > > > > > > > > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> > > > > > > >
> > > > > > > > On a related topic, I'm wondering if it would make sense to report
> > > > > > > > some more global things (temp, freq, etc) via fdinfo?  Some of this,
> > > > > > > > tools like nvtop could get by trawling sysfs or other driver specific
> > > > > > > > ways.  But maybe it makes sense to have these sort of things reported
> > > > > > > > in a standardized way (even though they aren't really per-drm_file)
> > > > > > >
> > > > > > > I think that's a bit much layering violation, we'd essentially have to
> > > > > > > reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
> > > > > > > be in :-)
> > > > > >
> > > > > > I guess this is true for temp (where there are thermal zones with
> > > > > > potentially multiple temp sensors.. but I'm still digging my way thru
> > > > > > the thermal_cooling_device stuff)
> > > > >
> > > > > It is slightly ugly. All thermal zones and cooling devices are virtual
> > > > > devices (so, even no connection to the particular tsens device). One
> > > > > can either enumerate them by checking
> > > > > /sys/class/thermal/thermal_zoneN/type or enumerate them through
> > > > > /sys/class/hwmon. For cooling devices again the only enumeration is
> > > > > through /sys/class/thermal/cooling_deviceN/type.
> > > > >
> > > > > Probably it should be possible to push cooling devices and thermal
> > > > > zones under corresponding providers. However I do not know if there is
> > > > > a good way to correlate cooling device (ideally a part of GPU) to the
> > > > > thermal_zone (which in our case is provided by tsens / temp_alarm
> > > > > rather than GPU itself).
> > > > >
> > > > > >
> > > > > > But what about freq?  I think, esp for cases where some "fw thing" is
> > > > > > controlling the freq we end up needing to use gpu counters to measure
> > > > > > the freq.
> > > > >
> > > > > For the freq it is slightly easier: /sys/class/devfreq/*, devices are
> > > > > registered under proper parent (IOW, GPU). So one can read
> > > > > /sys/class/devfreq/3d00000.gpu/cur_freq or
> > > > > /sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
> > > > >
> > > > > However because of the components usage, there is no link from
> > > > > /sys/class/drm/card0
> > > > > (/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
> > > > > to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.
> > > > >
> > > > > Getting all these items together in a platform-independent way would
> > > > > be definitely an important but complex topic.
> > > >
> > > > But I don't believe any of the pci gpu's use devfreq ;-)
> > > >
> > > > And also, you can't expect the CPU to actually know the freq when fw
> > > > is the one controlling freq.  We can, currently, have a reasonable
> > > > approximation from devfreq but that stops if IFPC is implemented.  And
> > > > other GPUs have even less direct control.  So freq is a thing that I
> > > > don't think we should try to get from "common frameworks"
> > >
> > > I think it might be useful to add another passive devfreq governor type for
> > > external frequencies. This way we can use the same interface to export
> > > non-CPU-controlled frequencies.
> >
> > Yeah this sounds like a decent idea to me too. It might also solve the fun
> > of various pci devices having very non-standard freq controls in sysfs
> > (looking at least at i915 here ...)
>
> I also like the idea of having some common infrastructure for the GPU freq.
>
> hwmon have a good infrastructure, but they are more focused on individual
> monitoring devices and not very welcomed to embedded monitoring and control.
> I still want to check the opportunity to see if at least some freq control
> could be aligned there.
>
> Another thing that complicates that is that there are multiple frequency
> domains and controls with multipliers in Intel GPU that are not very
> standard or easy to integrate.
>
> On a quick glace this devfreq seems neat because it aligns with the cpufreq
> and governors. But again it would be hard to align with the multiple domains
> and controls. But it deserves a look.
>
> I will take a look to both fronts for Xe: hwmon and devfreq. Right now on
> Xe we have a lot less controls than i915, but I can imagine soon there
> will be requirements to make that to grow and I fear that we end up just
> like i915. So I will take a look before that happens.

So it looks like i915 (dgpu only) and nouveau already use hwmon.. so
maybe this is a good way to expose temp.  Maybe we can wire up some
sort of helper for drivers which use thermal_cooling_device (which can
be composed of multiple sensors) to give back an aggregate temp for
hwmon to report?

Freq could possibly be added to hwmon (ie. seems like a reasonable
attribute to add).  Devfreq might also be an option but on arm it
isn't necessarily associated with the drm device, whereas we could
associate the hwmon with the drm device to make it easier for
userspace to find.
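
To make that concrete, a sketch of registering a hwmon device for a GPU
temperature next to the GPU's struct device (the foo_* names and the
temperature source are assumptions, not actual driver code):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/hwmon.h>

struct foo_gpu;	/* assumed driver-private struct */

static umode_t foo_hwmon_is_visible(const void *data,
				    enum hwmon_sensor_types type,
				    u32 attr, int channel)
{
	return 0444;
}

static int foo_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
			  u32 attr, int channel, long *val)
{
	struct foo_gpu *gpu = dev_get_drvdata(dev);

	if (type != hwmon_temp || attr != hwmon_temp_input)
		return -EOPNOTSUPP;

	/* assumed driver helper returning millidegrees C, as hwmon expects */
	*val = foo_gpu_read_temp(gpu);
	return 0;
}

static const struct hwmon_channel_info * const foo_hwmon_info[] = {
	HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT),
	NULL
};

static const struct hwmon_ops foo_hwmon_ops = {
	.is_visible = foo_hwmon_is_visible,
	.read = foo_hwmon_read,
};

static const struct hwmon_chip_info foo_hwmon_chip_info = {
	.ops = &foo_hwmon_ops,
	.info = foo_hwmon_info,
};

/* called from driver probe, with dev being the GPU's struct device */
static int foo_hwmon_register(struct device *dev, struct foo_gpu *gpu)
{
	struct device *hwmon;

	hwmon = devm_hwmon_device_register_with_info(dev, "foo_gpu", gpu,
						      &foo_hwmon_chip_info,
						      NULL);
	return PTR_ERR_OR_ZERO(hwmon);
}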

BR,
-R
