[v2,0/4] drm/nvdla: Add driver support for NVDLA

Message ID: 20220426060808.78225-1-cai.huoqing@linux.dev

Message

Cai Huoqing April 26, 2022, 6:07 a.m. UTC
The NVIDIA Deep Learning Accelerator (NVDLA) is an open-source IP core
which is integrated into the NVIDIA Jetson AGX Xavier,
so add driver support for this accelerator.

v1->v2:
*Rename nvdla_drm.[ch] to nvdla_drv.[ch], rename nvdla_ioctl.h to nvdla_drm.h,
 and move it to uapi.
 comments link: https://lore.kernel.org/lkml/20bac605-97e6-e5cd-c4e4-83a8121645d8@amd.com/
*Remove the nonexistent filename in Makefile.
 comments link: https://lore.kernel.org/lkml/202204201512.pp20MXT5-lkp@intel.com/
*Sort file names alphabetically in Makefile.
*Rearrange the error messages, and use drm_err/_dbg() instead of pr_err/_dbg().
*Replace the "dla_" prefix with "nvdla_".
*Check the iosys_map with iosys_map_is_null(), and check "ret" directly.
*Use iosys_map_memcpy_to/_from() for iosys_map instead of memcpy().
*Fix the parameter error in "dma_buf_vunmap(buf, ptr)": use "&map" instead of "ptr".
*Use iosys_map instead of kvaddr, and use iosys_map_set_vaddr() to initialize the iosys_map.
*Use "vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node)" to update vm_pgoff, which is cleaner.
*Remove the unused nvdla_drm_gem_mmap, and register drm_gem_mmap in file_operations directly.
*Use DEFINE_DRM_GEM_FOPS() to define nvdla_drm_fops.
*Remove the unused nvdla_drm_gem_mmap_buf, and register drm_gem_prime_mmap in drm_driver directly.
 comments link: https://lore.kernel.org/lkml/7fa19996-5830-af3d-ab24-08c76e1d5604@suse.de/
*Fix typos and some code style issues.
*Remove the unused function nvdla_get_time_us().
 comments link: https://lore.kernel.org/lkml/0fa9ab41-c18e-a569-e6fe-a0e9d965905e@stargateuniverse.net/

Cai Huoqing (4):
  MAINTAINERS: Add the driver info of the NVDLA
  drm/nvdla: Add driver support for NVDLA
  drm/nvdla: Add register head file of NVDLA
  drm/nvdla/uapi: Add UAPI of NVDLA driver

 MAINTAINERS                             |    7 +
 drivers/gpu/drm/Kconfig                 |    2 +
 drivers/gpu/drm/Makefile                |    1 +
 drivers/gpu/drm/nvdla/Kconfig           |    8 +
 drivers/gpu/drm/nvdla/Makefile          |   17 +
 drivers/gpu/drm/nvdla/nvdla_bdma.c      |  198 +
 drivers/gpu/drm/nvdla/nvdla_cache.c     |  202 +
 drivers/gpu/drm/nvdla/nvdla_cdp.c       |  299 ++
 drivers/gpu/drm/nvdla/nvdla_common.c    |  293 ++
 drivers/gpu/drm/nvdla/nvdla_common.h    |  835 +++
 drivers/gpu/drm/nvdla/nvdla_conv.c      |  684 +++
 drivers/gpu/drm/nvdla/nvdla_drv.c       |  694 +++
 drivers/gpu/drm/nvdla/nvdla_drv.h       |  129 +
 drivers/gpu/drm/nvdla/nvdla_engine.c    |  233 +
 drivers/gpu/drm/nvdla/nvdla_engine.h    |  272 +
 drivers/gpu/drm/nvdla/nvdla_gem.c       |  358 ++
 drivers/gpu/drm/nvdla/nvdla_pdp.c       |  448 ++
 drivers/gpu/drm/nvdla/nvdla_reg.h       | 6411 +++++++++++++++++++++++
 drivers/gpu/drm/nvdla/nvdla_rubik.c     |  214 +
 drivers/gpu/drm/nvdla/nvdla_sched.h     |   37 +
 drivers/gpu/drm/nvdla/nvdla_scheduler.c | 1012 ++++
 drivers/gpu/drm/nvdla/nvdla_sdp.c       |  723 +++
 include/uapi/drm/nvdla_drm.h            |   99 +
 23 files changed, 13176 insertions(+)
 create mode 100644 drivers/gpu/drm/nvdla/Kconfig
 create mode 100644 drivers/gpu/drm/nvdla/Makefile
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_bdma.c
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_cache.c
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_cdp.c
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_common.c
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_common.h
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_conv.c
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_drv.c
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_drv.h
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_engine.c
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_engine.h
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_gem.c
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_pdp.c
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_reg.h
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_rubik.c
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_sched.h
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_scheduler.c
 create mode 100644 drivers/gpu/drm/nvdla/nvdla_sdp.c
 create mode 100644 include/uapi/drm/nvdla_drm.h

Comments

Thierry Reding April 28, 2022, 2:10 p.m. UTC | #1
On Tue, Apr 26, 2022 at 02:07:57PM +0800, Cai Huoqing wrote:
> The NVIDIA Deep Learning Accelerator (NVDLA) is an open source IP
> which is integrated into NVIDIA Jetson AGX Xavier,
> so add driver support for this accelerator.

Hi,

nice to see this work going on. For subsequent revisions, can you please
also Cc the Tegra mailing list (linux-tegra@vger.kernel.org) as well as
the Tegra platform maintainers (that's Jon Hunter and myself). This will
make sure that more people with an interest in this will see your work.
Not everyone follows dri-devel, linaro-mm-sig or linux-media.

Thanks,
Thierry

> [...]
Thierry Reding April 28, 2022, 2:40 p.m. UTC | #2
On Tue, Apr 26, 2022 at 02:08:01PM +0800, Cai Huoqing wrote:
> The NVIDIA Deep Learning Accelerator (NVDLA) is an open source IP
> which is integrated into NVIDIA Jetson AGX Xavier,
> so add UAPI of this driver.
> 
> Signed-off-by: Cai Huoqing <cai.huoqing@linux.dev>
> ---
> v1->v2:
> *Rename nvdla_drm.[ch] to nvdla_drv.[ch] and rename nvdla_ioctl.h to nvdla_drm.h,
>  move it to uapi.
>  comments link: https://lore.kernel.org/lkml/20bac605-97e6-e5cd-c4e4-83a8121645d8@amd.com/
> 
>  include/uapi/drm/nvdla_drm.h | 99 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 99 insertions(+)
>  create mode 100644 include/uapi/drm/nvdla_drm.h
> 
> diff --git a/include/uapi/drm/nvdla_drm.h b/include/uapi/drm/nvdla_drm.h
> new file mode 100644
> index 000000000000..984635285525
> --- /dev/null
> +++ b/include/uapi/drm/nvdla_drm.h
> @@ -0,0 +1,99 @@
> +/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
> +/*
> + * Copyright (C) 2017-2018 NVIDIA CORPORATION.
> + * Copyright (C) 2022 Cai Huoqing
> + */
> +
> +#ifndef __LINUX_NVDLA_IOCTL_H
> +#define __LINUX_NVDLA_IOCTL_H
> +
> +#include <linux/ioctl.h>
> +#include <linux/types.h>
> +
> +#if !defined(__KERNEL__)
> +#define __user
> +#endif
> +
> +/**
> + * struct nvdla_mem_handle structure for memory handles
> + *
> + * @handle		handle to DMA buffer allocated in userspace
> + * @reserved		Reserved for padding
> + * @offset		offset in bytes from start address of buffer
> + *
> + */
> +struct nvdla_mem_handle {
> +	__u32 handle;
> +	__u32 reserved;
> +	__u64 offset;
> +};
> +
> +/**
> + * struct nvdla_ioctl_submit_task structure for single task information
> + *
> + * @num_addresses		total number of entries in address_list
> + * @reserved			Reserved for padding
> + * @address_list		pointer to array of struct nvdla_mem_handle
> + *
> + */
> +struct nvdla_ioctl_submit_task {
> +#define NVDLA_MAX_BUFFERS_PER_TASK (6144)

This is an odd number. Can you clarify where this limitation comes from?
I say "limitation" here because, again, I'm no expert on DLA and I don't
know what a typical workload would look like. 6144 is a lot of buffers,
but are these tasks typically using a few large buffers or many small
buffers?

> +	__u32 num_addresses;
> +#define NVDLA_NO_TIMEOUT    (0xffffffff)
> +	__u32 timeout;
> +	__u64 address_list;
> +};

So if a task is basically just a collection of DMA buffers, is the
userspace supposed to fill some of those buffers with metadata to
determine what the task is about? If so, is this something that the
DLA firmware/hardware knows how to parse?

> +/**
> + * struct nvdla_submit_args structure for task submit
> + *
> + * @tasks		pointer to array of struct nvdla_ioctl_submit_task
> + * @num_tasks		number of entries in tasks
> + * @flags		flags for task submit, no flags defined yet
> + * @version		version of task structure
> + *
> + */
> +struct nvdla_submit_args {
> +	__u64 tasks;
> +	__u16 num_tasks;
> +#define NVDLA_MAX_TASKS_PER_SUBMIT	24

Perhaps worth clarifying if this is a hardware restriction or an
arbitrary software limit. Is this perhaps worth parameterizing somehow
if this can potentially change in newer versions of DLA?

> +#define NVDLA_SUBMIT_FLAGS_ATOMIC	(1 << 0)

What exactly does atomicity imply here? Should this be described in a
comment?

Thierry

> +	__u16 flags;
> +	__u32 version;
> +};
> +
> +/**
> + * struct nvdla_gem_create_args for allocating DMA buffer through GEM
> + *
> + * @handle		handle updated by kernel after allocation
> + * @flags		implementation specific flags
> + * @size		size of buffer to allocate
> + */
> +struct nvdla_gem_create_args {
> +	__u32 handle;
> +	__u32 flags;
> +	__u64 size;
> +};
> +
> +/**
> + * struct nvdla_gem_map_offset_args for mapping DMA buffer
> + *
> + * @handle		handle of the buffer
> + * @reserved		reserved for padding
> + * @offset		offset updated by kernel after mapping
> + */
> +struct nvdla_gem_map_offset_args {
> +	__u32 handle;
> +	__u32 reserved;
> +	__u64 offset;
> +};
> +
> +#define DRM_NVDLA_SUBMIT		0x00
> +#define DRM_NVDLA_GEM_CREATE	0x01
> +#define DRM_NVDLA_GEM_MMAP		0x02
> +
> +#define DRM_IOCTL_NVDLA_SUBMIT DRM_IOWR(DRM_COMMAND_BASE + DRM_NVDLA_SUBMIT, struct nvdla_submit_args)
> +#define DRM_IOCTL_NVDLA_GEM_CREATE DRM_IOWR(DRM_COMMAND_BASE + DRM_NVDLA_GEM_CREATE, struct nvdla_gem_create_args)
> +#define DRM_IOCTL_NVDLA_GEM_MMAP DRM_IOWR(DRM_COMMAND_BASE + DRM_NVDLA_GEM_MMAP, struct nvdla_gem_map_offset_args)
> +
> +#endif
> -- 
> 2.25.1
>
Mikko Perttunen April 28, 2022, 3:56 p.m. UTC | #3
On 4/28/22 17:10, Thierry Reding wrote:
> On Tue, Apr 26, 2022 at 02:07:57PM +0800, Cai Huoqing wrote:
>> The NVIDIA Deep Learning Accelerator (NVDLA) is an open source IP
>> which is integrated into NVIDIA Jetson AGX Xavier,
>> so add driver support for this accelerator.
> 
> Hi,
> 
> nice to see this work going on. For subsequent revisions, can you please
> also Cc the Tegra mailing list (linux-tegra@vger.kernel.org) as well as
> the Tegra platform maintainers (that's Jon Hunter and myself). This will
> make sure that more people with an interest in this will see your work.
> Not everyone follows dri-devel, linaro-mm-sig or linux-media.
> 
> Thanks,
> Thierry

 From a quick glance it looks like this driver pokes DLA hardware 
directly which is not the intended programming model on Tegra hardware 
(there are Falcon microcontrollers that offload task scheduling and 
synchronization from the CPU). The hardware is also behind the Host1x 
bus so a simple platform device is not sufficient.

Was this driver developed against some platform with OpenDLA hardware 
(i.e. not Tegra)?

If so, we'd need to verify if the hardware matches the hardware in 
Tegra194. Also, this driver may not be ideal for Tegra platforms since 
we would lack the hardware scheduling and synchronization facilities. It 
is likely necessary to have separate drivers for OpenDLA and Tegra's DLA 
integration.

Thanks,
Mikko

>> [...]
Thierry Reding May 2, 2022, 5:04 p.m. UTC | #4
On Fri, Apr 29, 2022 at 11:28:10AM +0800, Cai Huoqing wrote:
> On 28 4月 22 18:56:07, Mikko Perttunen wrote:
> > On 4/28/22 17:10, Thierry Reding wrote:
> > > On Tue, Apr 26, 2022 at 02:07:57PM +0800, Cai Huoqing wrote:
> > > > The NVIDIA Deep Learning Accelerator (NVDLA) is an open source IP
> > > > which is integrated into NVIDIA Jetson AGX Xavier,
> > > > so add driver support for this accelerator.
> > > 
> > > Hi,
> > > 
> > > nice to see this work going on. For subsequent revisions, can you please
> > > also Cc the Tegra mailing list (linux-tegra@vger.kernel.org) as well as
> > > the Tegra platform maintainers (that's Jon Hunter and myself). This will
> > > make sure that more people with an interest in this will see your work.
> > > Not everyone follows dri-devel, linaro-mm-sig or linux-media.
> > > 
> > > Thanks,
> > > Thierry
> > 
> > From a quick glance it looks like this driver pokes DLA hardware directly
> > which is not the intended programming model on Tegra hardware (there are
> > Falcon microcontrollers that offload task scheduling and synchronization
> > from the CPU). The hardware is also behind the Host1x bus so a simple
> > platform device is not sufficient.
> > 
> > Was this driver developed against some platform with OpenDLA hardware (i.e.
> > not Tegra)?
> > 
> > If so, we'd need to verify if the hardware matches the hardware in Tegra194.
> > Also, this driver may not be ideal for Tegra platforms since we would lack
> > the hardware scheduling and synchronization facilities. It is likely
> > necessary to have separate drivers for OpenDLA and Tegra's DLA integration.
> > 
> > Thanks,
> > Mikko
> > 
> Tegra's DLA seems to work with a slave coprocessor; the host driver just
> implements a message queue, shared buffers, notifications... The hardware
> details of the DLA may live in the slave driver (not Linux?).
> 
> Sure, this driver only supports SoCs or FPGAs with OpenDLA inside. I will
> change this kind of description ("integrated into NVIDIA Jetson AGX Xavier");
> this driver doesn't support Tegra directly.

Yes, I think it would be good to make it clear that this is not going to
work with the Tegra instantiations so that people don't get confused.

I think it would be ideal, though, if we could reuse as much of this
driver as possible to work with other instantiations. The only reference
to OpenDLA that I can find and which seems somehow relevant to this is
here:

	https://github.com/SCLUO/ITRI-OpenDLA

Is that the version that you're using? Or is the version that you're
using at least compatible with that one? Apart from that and the Tegra
instantiations, are you aware of any other derivatives that we need to
account for? I'm worried that this might fragment to the point where it
becomes unmaintainable in upstream Linux.

Even if this doesn't concern the Tegra instantiation, I think most of my
other comments remain valid. Things like global variables will get in
the way of multiple FPGA instantiations as well, for example.

You will also need to provide the device tree bindings for the
particular instantiation that you're working on. Typically this would be
identified by a vendor-specific compatible string for your particular
board, but if it stems from a "canonical" FPGA mapping, matching on that
compatible string might also be an option. In either case, when you send
out the DT bindings, please include the devicetree@vger.kernel.org
mailing list so that they can be properly reviewed.

Thierry
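To illustrate the binding work Thierry asks for, an FPGA instantiation of OpenDLA might be described by a node along these lines. This is a placeholder sketch only: the compatible string, addresses, interrupt, and clock name are invented for illustration and are not a reviewed binding:

```dts
dla@40000000 {
	/* "myvendor,opendla-1.0" is a placeholder, not a reviewed binding */
	compatible = "myvendor,opendla-1.0";
	reg = <0x40000000 0x40000>;	/* example register window */
	interrupts = <0 32 4>;		/* example SPI interrupt */
	clocks = <&dla_core_clk>;	/* example clock reference */
	clock-names = "core";
};
```

Matching on a vendor-specific compatible string like this would also remove the need for driver-global state, since each node probes its own device instance.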
Cai Huoqing May 7, 2022, 9:05 a.m. UTC | #5
On 02 5月 22 19:04:13, Thierry Reding wrote:
> On Fri, Apr 29, 2022 at 11:28:10AM +0800, Cai Huoqing wrote:
> > On 28 4月 22 18:56:07, Mikko Perttunen wrote:
> > > On 4/28/22 17:10, Thierry Reding wrote:
> > > > On Tue, Apr 26, 2022 at 02:07:57PM +0800, Cai Huoqing wrote:
> > > > > The NVIDIA Deep Learning Accelerator (NVDLA) is an open source IP
> > > > > which is integrated into NVIDIA Jetson AGX Xavier,
> > > > > so add driver support for this accelerator.
> > > > 
> > > > Hi,
> > > > 
> > > > nice to see this work going on. For subsequent revisions, can you please
> > > > also Cc the Tegra mailing list (linux-tegra@vger.kernel.org) as well as
> > > > the Tegra platform maintainers (that's Jon Hunter and myself). This will
> > > > make sure that more people with an interest in this will see your work.
> > > > Not everyone follows dri-devel, linaro-mm-sig or linux-media.
> > > > 
> > > > Thanks,
> > > > Thierry
> > > 
> > > From a quick glance it looks like this driver pokes DLA hardware directly
> > > which is not the intended programming model on Tegra hardware (there are
> > > Falcon microcontrollers that offload task scheduling and synchronization
> > > from the CPU). The hardware is also behind the Host1x bus so a simple
> > > platform device is not sufficient.
> > > 
> > > Was this driver developed against some platform with OpenDLA hardware (i.e.
> > > not Tegra)?
> > > 
> > > If so, we'd need to verify if the hardware matches the hardware in Tegra194.
> > > Also, this driver may not be ideal for Tegra platforms since we would lack
> > > the hardware scheduling and synchronization facilities. It is likely
> > > necessary to have separate drivers for OpenDLA and Tegra's DLA integration.
> > > 
> > > Thanks,
> > > Mikko
> > > 
> > Tegra's DLA seems to work with a slave coprocessor; the host driver just
> > implements a message queue, shared buffers, notifications... The hardware
> > details of the DLA may live in the slave driver (not Linux?).
> > 
> > Sure, this driver only supports SoCs or FPGAs with OpenDLA inside. I will
> > change this kind of description ("integrated into NVIDIA Jetson AGX Xavier");
> > this driver doesn't support Tegra directly.
> 
> Yes, I think it would be good to make it clear that this is not going to
> work with the Tegra instantiations so that people don't get confused.
> 
> I think it would be ideal, though, if we could reuse as much of this
> driver as possible to work with other instantiations. The only reference
> to OpenDLA that I can find and which seems somehow relevant to this is
> here:
> 
> 	https://github.com/SCLUO/ITRI-OpenDLA
Hi, thanks for your reply.

The hardware code is here:
https://github.com/caihuoq/nvdla-hw
or https://github.com/nvdla/hw
which includes the C-model and RTL.

I also made a docker image to run the C-model simulator (based on QEMU):
https://github.com/caihuoq/nvdla_docker
It can be used to test this driver.

Thanks,
Cai
> [...]