Message ID: 20230911023038.30649-4-yong.wu@mediatek.com
State: New
Series: [1/9] dma-buf: heaps: Deduplicate docs and adopt common format
On 11.09.23 at 20:29, John Stultz wrote:
> On Mon, Sep 11, 2023 at 3:14 AM Christian König <christian.koenig@amd.com> wrote:
>> On 11.09.23 at 04:30, Yong Wu wrote:
>>> From: John Stultz <jstultz@google.com>
>>>
>>> This allows drivers who don't want to create their own
>>> DMA-BUF exporter to be able to allocate DMA-BUFs directly
>>> from existing DMA-BUF Heaps.
>>>
>>> There is some concern that the premise of DMA-BUF heaps is
>>> that userland knows better about what type of heap memory
>>> is needed for a pipeline, so it would likely be best for
>>> drivers to import and fill DMA-BUFs allocated by userland
>>> instead of allocating one themselves, but this is still
>>> up for debate.
>> The main design goal of having DMA-heaps in the first place is to avoid
>> per-driver allocation, and this is not because userland knows
>> better what type of memory it wants.
>>
>> The background is rather that we generally want to decouple allocation
>> from having a device driver connection, so that we have a better chance
>> that multiple devices can work with the same memory.
> Yep, very much agreed, and this is what the comment above is trying to describe.
>
> Ideally user-allocated buffers would be used to ensure drivers don't
> create buffers with constraints that limit which devices the buffers
> might later be shared with.
>
> However, this patch was created as a hold-over from the old ION logic
> to help vendors transition to dmabuf heaps, as vendors had situations
> where they still wanted to export dmabufs that were not to be
> generally shared, and folks wanted to avoid duplicating logic
> already in existing heaps. At the time, I never pushed it upstream as
> there were no upstream users. But I think if there is now a potential
> upstream user, it's worth having the discussion to better understand
> the need.

Yeah, that indeed makes much more sense.

When existing drivers want to avoid their own handling and move their
memory management over to using DMA-heaps, even for internal allocations,
then no objections from my side. That is certainly something we should
aim for if possible.

But what we should try to avoid is newly merged drivers providing both a
driver-specific UAPI and DMA-heaps. The justification that this makes it
easier to transition userspace to the new UAPI doesn't really count.

That would be adding UAPI already with a plan to deprecate it, and that
is most likely not helpful considering that UAPI must be supported
forever as soon as it is upstream.

> So I think this patch is a little confusing in this series, as I don't
> see much of it actually being used here (though forgive me if I'm
> missing it).
>
> Instead, it seems it gets used in a separate patch series here:
> https://lore.kernel.org/all/20230911125936.10648-1-yunfei.dong@mediatek.com/

Please try to avoid stuff like that; it is really confusing and eats
reviewers' time.

Regards,
Christian.

> Yong, I appreciate you sending this out! But maybe if the secure heap
> submission doesn't depend on this functionality, I might suggest
> moving this patch (or at least the majority of it) to be part of the
> vcodec series instead? That way reviewers will have more context for
> how the code being added is used?
>
> thanks
> -john
On Tue, 2023-09-12 at 09:06 +0200, Christian König wrote:
> On 11.09.23 at 20:29, John Stultz wrote:
> [snip]
>
> When existing drivers want to avoid their own handling and move their
> memory management over to using DMA-heaps, even for internal allocations,
> then no objections from my side. That is certainly something we should
> aim for if possible.

Thanks.

> But what we should try to avoid is newly merged drivers providing both
> a driver-specific UAPI and DMA-heaps. The justification that this
> makes it easier to transition userspace to the new UAPI doesn't really
> count.
>
> That would be adding UAPI already with a plan to deprecate it, and that
> is most likely not helpful considering that UAPI must be supported
> forever as soon as it is upstream.

Sorry, I didn't understand this. I don't think we have changed the UAPI.
Which code are you referring to?

> > So I think this patch is a little confusing in this series, as I don't
> > see much of it actually being used here (though forgive me if I'm
> > missing it).
> >
> > Instead, it seems it gets used in a separate patch series here:
> > https://lore.kernel.org/all/20230911125936.10648-1-yunfei.dong@mediatek.com/
>
> Please try to avoid stuff like that; it is really confusing and eats
> reviewers' time.

My fault. I thought dma-buf and media belonged to different trees, so I
sent them separately. The cover letter just said "The consumers of the
new heap and new interface are our codecs and DRM, which will be sent
upstream soon", and there was no vcodec link at that time. In the next
version, we will put the first three patches into the vcodec patchset.
Thanks.

> Regards,
> Christian.
>
> > Yong, I appreciate you sending this out! But maybe if the secure heap
> > submission doesn't depend on this functionality, I might suggest
> > moving this patch (or at least the majority of it) to be part of the
> > vcodec series instead? That way reviewers will have more context for
> > how the code being added is used?

Will do. Thanks.

> > thanks
> > -john
On Monday, September 11, 2023 at 12:13 +0200, Christian König wrote:
> On 11.09.23 at 04:30, Yong Wu wrote:
> > From: John Stultz <jstultz@google.com>
> >
> > This allows drivers who don't want to create their own
> > DMA-BUF exporter to be able to allocate DMA-BUFs directly
> > from existing DMA-BUF Heaps.
> >
> > There is some concern that the premise of DMA-BUF heaps is
> > that userland knows better about what type of heap memory
> > is needed for a pipeline, so it would likely be best for
> > drivers to import and fill DMA-BUFs allocated by userland
> > instead of allocating one themselves, but this is still
> > up for debate.
>
> The main design goal of having DMA-heaps in the first place is to avoid
> per-driver allocation, and this is not because userland knows
> better what type of memory it wants.

If the memory is user visible, yes. When I look at the MTK VCODEC
changes, this seems to be used for internal codec state and SHM buffers
used to communicate with firmware.

> The background is rather that we generally want to decouple allocation
> from having a device driver connection, so that we have a better chance
> that multiple devices can work with the same memory.
>
> I once created a prototype which gives userspace a hint about which
> DMA-heap to use for which device:
> https://patchwork.kernel.org/project/linux-media/patch/20230123123756.401692-2-christian.koenig@amd.com/
>
> Problem is that I don't really have time to look into it and maintain
> that stuff, but I think from the high-level design that is rather the
> general direction we should push at.
>
> Regards,
> Christian.
>
> > Signed-off-by: John Stultz <jstultz@google.com>
> > Signed-off-by: T.J. Mercier <tjmercier@google.com>
> > Signed-off-by: Yong Wu <yong.wu@mediatek.com>
> > [Yong: Fix the checkpatch alignment warning]
> > ---
> >  drivers/dma-buf/dma-heap.c | 60 ++++++++++++++++++++++++++++----------
> >  include/linux/dma-heap.h   | 25 ++++++++++++++++
> >  2 files changed, 69 insertions(+), 16 deletions(-)
> >
> > [snip]
On Tuesday, September 12, 2023 at 08:47 +0000, Yong Wu (吴勇) wrote:
> On Mon, 2023-09-11 at 12:12 -0400, Nicolas Dufresne wrote:
> > Hi,
> >
> > On Monday, September 11, 2023 at 10:30 +0800, Yong Wu wrote:
> > > From: John Stultz <jstultz@google.com>
> > > [snip]
> >
> > Would be nice for the reviewers to provide the information about the
> > user of this new in-kernel API. I noticed it because I was CCed, but
> > strangely it didn't make it to the mailing list yet, and it's not
> > clear in the cover letter what this is used with.
> >
> > I can explain in my own words though: my read is that this is used to
> > allocate both user-visible and driver-internal memory segments in the
> > MTK VCODEC driver.
> >
> > I'm somewhat concerned that DMABuf objects are used to abstract
> > secure memory allocation from the TEE. For framebuffers that are
> > going to be exported and shared, it's probably fair use, but it seems
> > that internal shared memory and codec-specific reference buffers also
> > end up with a dmabuf fd (often called a secure fd in the v4l2
> > patchset) for data that is not being shared, and require a 1:1
> > mapping to a TEE handle anyway. Is that the design we'd like to
> > follow?
>
> Yes, basically this is right.
>
> > Can't we directly allocate from the TEE, adding the needed helpers to
> > make this as simple as allocating from a heap?
>
> If this happens, the memory will always be inside the TEE. Here we
> create a new CMA heap; it will cma_alloc/free dynamically: reserve it
> before SVP starts, and release it back to the kernel after SVP is done.

Ok, I see the benefit of having a common driver then. It would add to
the complexity, but would having a driver for the tee allocator and
v4l2/heaps be another option?

> Secondly, v4l2/drm has a mature driver control flow, like
> drm_gem_prime_import_dev, that always uses dma_buf ops. So we can use
> the current flow as much as possible without having to re-plan a flow
> in the TEE.
On Tue, 2023-09-12 at 11:05 -0400, Nicolas Dufresne wrote:
> On Tuesday, September 12, 2023 at 08:47 +0000, Yong Wu (吴勇) wrote:
> > [snip]
> >
> > If this happens, the memory will always be inside the TEE. Here we
> > create a new CMA heap; it will cma_alloc/free dynamically: reserve it
> > before SVP starts, and release it back to the kernel after SVP is
> > done.
>
> Ok, I see the benefit of having a common driver then. It would add to
> the complexity, but would having a driver for the tee allocator and
> v4l2/heaps be another option?

It's ok for v4l2. But our DRM also uses this new heap, and it will be
sent upstream in the next few days.

> > Secondly, v4l2/drm has a mature driver control flow, like
> > drm_gem_prime_import_dev, that always uses dma_buf ops. So we can use
> > the current flow as much as possible without having to re-plan a flow
> > in the TEE.
>
> From what I've read of Yunfei's series, this is only partially true for
> V4L2. The vb2 queue MMAP feature has dmabuf exportation as optional,
> but it's not a problem to always back it up with a dmabuf object. But
> for internal SHM buffers used for firmware communication, I've never
> seen any driver use a DMABuf.
>
> Same applies for primary decode buffers when frame buffer compression
> or post-processing is used (or reconstruction buffers in encoders);
> these are not user visible and are usually not DMABuf.

If they aren't dmabuf, of course it is ok. I guess we haven't used
these. The SHM buffer is obtained by tee_shm_register_kernel_buf in this
case, and we just use the existing dmabuf ops to complete SVP. In our
case, the vcodec input/output/working buffers and the DRM input buffer
all use this new secure heap during secure video playback.

> [snip]
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index dcc0e38c61fa..908bb30dc864 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -53,12 +53,15 @@ static dev_t dma_heap_devt;
 static struct class *dma_heap_class;
 static DEFINE_XARRAY_ALLOC(dma_heap_minors);
 
-static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
-				 unsigned int fd_flags,
-				 unsigned int heap_flags)
+struct dma_buf *dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
+				      unsigned int fd_flags,
+				      unsigned int heap_flags)
 {
-	struct dma_buf *dmabuf;
-	int fd;
+	if (fd_flags & ~DMA_HEAP_VALID_FD_FLAGS)
+		return ERR_PTR(-EINVAL);
+
+	if (heap_flags & ~DMA_HEAP_VALID_HEAP_FLAGS)
+		return ERR_PTR(-EINVAL);
 
 	/*
 	 * Allocations from all heaps have to begin
@@ -66,9 +69,20 @@ static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
 	 */
 	len = PAGE_ALIGN(len);
 	if (!len)
-		return -EINVAL;
+		return ERR_PTR(-EINVAL);
 
-	dmabuf = heap->ops->allocate(heap, len, fd_flags, heap_flags);
+	return heap->ops->allocate(heap, len, fd_flags, heap_flags);
+}
+EXPORT_SYMBOL_GPL(dma_heap_buffer_alloc);
+
+static int dma_heap_bufferfd_alloc(struct dma_heap *heap, size_t len,
+				   unsigned int fd_flags,
+				   unsigned int heap_flags)
+{
+	struct dma_buf *dmabuf;
+	int fd;
+
+	dmabuf = dma_heap_buffer_alloc(heap, len, fd_flags, heap_flags);
 	if (IS_ERR(dmabuf))
 		return PTR_ERR(dmabuf);
 
@@ -106,15 +120,9 @@ static long dma_heap_ioctl_allocate(struct file *file, void *data)
 	if (heap_allocation->fd)
 		return -EINVAL;
 
-	if (heap_allocation->fd_flags & ~DMA_HEAP_VALID_FD_FLAGS)
-		return -EINVAL;
-
-	if (heap_allocation->heap_flags & ~DMA_HEAP_VALID_HEAP_FLAGS)
-		return -EINVAL;
-
-	fd = dma_heap_buffer_alloc(heap, heap_allocation->len,
-				   heap_allocation->fd_flags,
-				   heap_allocation->heap_flags);
+	fd = dma_heap_bufferfd_alloc(heap, heap_allocation->len,
+				     heap_allocation->fd_flags,
+				     heap_allocation->heap_flags);
 	if (fd < 0)
 		return fd;
 
@@ -205,6 +213,7 @@ const char *dma_heap_get_name(struct dma_heap *heap)
 {
 	return heap->name;
 }
+EXPORT_SYMBOL_GPL(dma_heap_get_name);
 
 struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
 {
@@ -290,6 +299,24 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
 	kfree(heap);
 	return err_ret;
 }
+EXPORT_SYMBOL_GPL(dma_heap_add);
+
+struct dma_heap *dma_heap_find(const char *name)
+{
+	struct dma_heap *h;
+
+	mutex_lock(&heap_list_lock);
+	list_for_each_entry(h, &heap_list, list) {
+		if (!strcmp(h->name, name)) {
+			kref_get(&h->refcount);
+			mutex_unlock(&heap_list_lock);
+			return h;
+		}
+	}
+	mutex_unlock(&heap_list_lock);
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(dma_heap_find);
 
 static void dma_heap_release(struct kref *ref)
 {
@@ -315,6 +342,7 @@ void dma_heap_put(struct dma_heap *h)
 	kref_put(&h->refcount, dma_heap_release);
 	mutex_unlock(&heap_list_lock);
 }
+EXPORT_SYMBOL_GPL(dma_heap_put);
 
 static char *dma_heap_devnode(const struct device *dev, umode_t *mode)
 {
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index f3c678892c5c..59e70f6c7a60 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -64,10 +64,35 @@ const char *dma_heap_get_name(struct dma_heap *heap);
  */
 struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
 
+/**
+ * dma_heap_find - get the heap registered with the specified name
+ * @name: Name of the DMA-Heap to find
+ *
+ * Returns:
+ * The DMA-Heap with the provided name.
+ *
+ * NOTE: DMA-Heaps returned from this function MUST be released using
+ * dma_heap_put() when the user is done to enable the heap to be unloaded.
+ */
+struct dma_heap *dma_heap_find(const char *name);
+
 /**
  * dma_heap_put - drops a reference to a dmabuf heap, potentially freeing it
  * @heap: the heap whose reference count to decrement
  */
 void dma_heap_put(struct dma_heap *heap);
 
+/**
+ * dma_heap_buffer_alloc - Allocate dma-buf from a dma_heap
+ * @heap: DMA-Heap to allocate from
+ * @len: size to allocate in bytes
+ * @fd_flags: flags to set on returned dma-buf fd
+ * @heap_flags: flags to pass to the dma heap
+ *
+ * This is for internal dma-buf allocations only. Free returned buffers with dma_buf_put().
+ */
+struct dma_buf *dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
+				      unsigned int fd_flags,
+				      unsigned int heap_flags);
+
 #endif /* _DMA_HEAPS_H */