Message ID: 20190906184712.91980-1-john.stultz@linaro.org
Series: DMA-BUF Heaps (destaging ION)
On Mon, Sep 30, 2019 at 12:43 AM Hillf Danton <hdanton@sina.com> wrote:
>
> On Fri, 6 Sep 2019 18:47:09 +0000 John Stultz wrote:
> >
> > +static int system_heap_allocate(struct dma_heap *heap,
> > +                               unsigned long len,
> > +                               unsigned long fd_flags,
> > +                               unsigned long heap_flags)
> > +{
> > +     struct heap_helper_buffer *helper_buffer;
> > +     struct dma_buf *dmabuf;
> > +     int ret = -ENOMEM;
> > +     pgoff_t pg;
> > +
> > +     helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
> > +     if (!helper_buffer)
> > +             return -ENOMEM;
> > +
> > +     init_heap_helper_buffer(helper_buffer, system_heap_free);
> > +     helper_buffer->flags = heap_flags;
> > +     helper_buffer->heap = heap;
> > +     helper_buffer->size = len;
> > +
> A couple of lines looks needed to handle len if it is not
> PAGE_SIZE aligned.

Hey! Thanks so much for the review!

dma_heap_buffer_alloc() sets "len = PAGE_ALIGN(len);" before calling
into the heap allocation hook. So hopefully this isn't a concern, or
am I missing something?

thanks
-john
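As background for readers following along: the rounding John mentions means a heap's allocate hook only ever sees page-multiple lengths. A small userspace sketch of that rounding (PAGE_SIZE hardcoded to 4 KiB here; both macros are stand-ins reimplemented for illustration, not the kernel headers):

```c
#include <assert.h>

/* Userspace stand-ins for the kernel's PAGE_SIZE/PAGE_ALIGN(). The real
 * PAGE_SIZE is arch-dependent; 4 KiB is assumed for this sketch. */
#define PAGE_SIZE 4096UL
#define PAGE_ALIGN(len) (((len) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* Mimics what dma_heap_buffer_alloc() does to len before it calls the
 * heap's allocate hook: round up to the next page boundary, so exact
 * multiples pass through and anything else grows to the next page. */
static unsigned long heap_request_size(unsigned long len)
{
	return PAGE_ALIGN(len);
}
```

So a 1-byte request reaches system_heap_allocate() as one full page, which is why the hook itself needs no extra alignment handling.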
On Mon, Sep 30, 2019 at 1:14 AM Hillf Danton <hdanton@sina.com> wrote:
> On Fri, 6 Sep 2019 18:47:09 +0000 John Stultz wrote:
> >
> > +     cma_pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
> > +     if (!cma_pages)
> > +             goto free_buf;
> > +
> > +     if (PageHighMem(cma_pages)) {
> > +             unsigned long nr_clear_pages = nr_pages;
> > +             struct page *page = cma_pages;
> > +
> > +             while (nr_clear_pages > 0) {
> > +                     void *vaddr = kmap_atomic(page);
> > +
> > +                     memset(vaddr, 0, PAGE_SIZE);
> > +                     kunmap_atomic(vaddr);
> > +                     page++;
> > +                     nr_clear_pages--;
> > +             }
> > +     } else {
> > +             memset(page_address(cma_pages), 0, size);
> > +     }
>
> Take a breath after zeroing a page, and a peep at pending signal.

Ok. Took a swing at this. It will be in the next revision.

Thanks again for the review!
-john
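For context, what Hillf is suggesting is the usual kernel pattern of checking fatal_signal_pending() and/or calling cond_resched() inside the per-page loop, so zeroing a large highmem allocation can be interrupted and doesn't hog the CPU. A userspace simulation of that loop structure, with should_bail() standing in as a placeholder for those kernel checks:

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Placeholder for the kernel-side check (fatal_signal_pending() plus a
 * cond_resched()); always false in this userspace sketch. */
static bool should_bail(void)
{
	return false;
}

/* Zero nr_pages pages one page at a time, checking between pages whether
 * to stop, the way the kernel loop would after Hillf's suggested change.
 * Returns 0 on success, -1 if interrupted (the kernel code would then
 * unwind the allocation). */
static int clear_pages(unsigned char *buf, unsigned long nr_pages)
{
	for (unsigned long pg = 0; pg < nr_pages; pg++) {
		memset(buf + pg * PAGE_SIZE, 0, PAGE_SIZE);
		if (should_bail())
			return -1;
	}
	return 0;
}
```

The key design point is that the check sits between page-sized chunks, so the work done per iteration is bounded regardless of the allocation size.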
On Tue, Sep 24, 2019 at 04:22:18PM +0000, Ayan Halder wrote:
> On Thu, Sep 19, 2019 at 10:21:52PM +0530, Sumit Semwal wrote:
> > Hello Christoph, everyone,
> >
> > On Sat, 7 Sep 2019 at 00:17, John Stultz <john.stultz@linaro.org> wrote:
> > >
> > > Here is yet another pass at the dma-buf heaps patchset Andrew
> > > and I have been working on which tries to destage a fair chunk
> > > of ION functionality.
> > >
> > > The patchset implements per-heap devices which can be opened
> > > directly and then an ioctl is used to allocate a dmabuf from the
> > > heap.
> > >
> > > The interface is similar, but much simpler then IONs, only
> > > providing an ALLOC ioctl.
> > >
> > > Also, I've provided relatively simple system and cma heaps.
> > >
> > > I've booted and tested these patches with AOSP on the HiKey960
> > > using the kernel tree here:
> > > https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-heap
> > >
> > > And the userspace changes here:
> > > https://android-review.googlesource.com/c/device/linaro/hikey/+/909436
> > >
> > > Compared to ION, this patchset is missing the system-contig,
> > > carveout and chunk heaps, as I don't have a device that uses
> > > those, so I'm unable to do much useful validation there.
> > > Additionally we have no upstream users of chunk or carveout,
> > > and the system-contig has been deprecated in the common/andoid-*
> > > kernels, so this should be ok.
> > >
> > > I've also removed the stats accounting, since any such accounting
> > > should be implemented by dma-buf core or the heaps themselves.
> > >
> > > Most of the changes in this revision are adddressing the more
> > > concrete feedback from Christoph (many thanks!). Though I'm not
> > > sure if some of the less specific feedback was completely resolved
> > > in discussion last time around. Please let me know!
> >
> > It looks like most of the feedback has been taken care of. If there's
> > no more objection to this series, I'd like to merge it in soon.
> >
> > If there are any more review comments, may I request you to please
> > provide them?
>
> I tested these patches using our internal test suite with Arm,komeda
> driver and the following node in dts
>
> reserved-memory {
>         #address-cells = <0x2>;
>         #size-cells = <0x2>;
>         ranges;
>
>         framebuffer@60000000 {
>                 compatible = "shared-dma-pool";
>                 linux,cma-default;
>                 reg = <0x0 0x60000000 0x0 0x8000000>;
>         };
> }

Apologies for the confusion, this dts node is irrelevant as our tests
were using the cma heap (via /dev/dma_heap/reserved).

That raises a question. How do we represent the reserved-memory nodes
(as shown above) via the dma-buf heaps framework?

> The tests went fine. Our tests allocates framebuffers of different
> sizes, posts them on screen and the driver writes back to one of the
> framebuffers. I havenot tested for any performance, latency or
> cache management related stuff. So, it that looks appropriate, feel
> free to add:-
> Tested-by:- Ayan Kumar Halder <ayan.halder@arm.com>
>
> Are you planning to write some igt tests for it ?
> >
> > > New in v8:
> > > * Make struct dma_heap_ops consts (Suggested by Christoph)
> > > * Add flush_kernel_vmap_range/invalidate_kernel_vmap_range calls
> > >   (suggested by Christoph)
> > > * Condense dma_heap_buffer and heap_helper_buffer (suggested by
> > >   Christoph)
> > > * Get rid of needless struct system_heap (suggested by Christoph)
> > > * Fix indentation by using shorter argument names (suggested by
> > >   Christoph)
> > > * Remove unused private_flags value
> > > * Add forgotten include file to fix build issue on x86
> > > * Checkpatch whitespace fixups
> > >
> > > Thoughts and feedback would be greatly appreciated!
> > >
> > > thanks
> > > -john
> >
> > Best,
> > Sumit.
> > >
> > > Cc: Laura Abbott <labbott@redhat.com>
> > > Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> > > Cc: Sumit Semwal <sumit.semwal@linaro.org>
> > > Cc: Liam Mark <lmark@codeaurora.org>
> > > Cc: Pratik Patel <pratikp@codeaurora.org>
> > > Cc: Brian Starkey <Brian.Starkey@arm.com>
> > > Cc: Vincent Donnefort <Vincent.Donnefort@arm.com>
> > > Cc: Sudipto Paul <Sudipto.Paul@arm.com>
> > > Cc: Andrew F. Davis <afd@ti.com>
> > > Cc: Christoph Hellwig <hch@infradead.org>
> > > Cc: Chenbo Feng <fengc@google.com>
> > > Cc: Alistair Strachan <astrachan@google.com>
> > > Cc: Hridya Valsaraju <hridya@google.com>
> > > Cc: dri-devel@lists.freedesktop.org
> > >
> > >
> > > Andrew F. Davis (1):
> > >   dma-buf: Add dma-buf heaps framework
> > >
> > > John Stultz (4):
> > >   dma-buf: heaps: Add heap helpers
> > >   dma-buf: heaps: Add system heap to dmabuf heaps
> > >   dma-buf: heaps: Add CMA heap to dmabuf heaps
> > >   kselftests: Add dma-heap test
> > >
> > >  MAINTAINERS                                   |  18 ++
> > >  drivers/dma-buf/Kconfig                       |  11 +
> > >  drivers/dma-buf/Makefile                      |   2 +
> > >  drivers/dma-buf/dma-heap.c                    | 250 ++++++++++++++++
> > >  drivers/dma-buf/heaps/Kconfig                 |  14 +
> > >  drivers/dma-buf/heaps/Makefile                |   4 +
> > >  drivers/dma-buf/heaps/cma_heap.c              | 164 +++++++++++
> > >  drivers/dma-buf/heaps/heap-helpers.c          | 269 ++++++++++++++++++
> > >  drivers/dma-buf/heaps/heap-helpers.h          |  55 ++++
> > >  drivers/dma-buf/heaps/system_heap.c           | 122 ++++++++
> > >  include/linux/dma-heap.h                      |  59 ++++
> > >  include/uapi/linux/dma-heap.h                 |  55 ++++
> > >  tools/testing/selftests/dmabuf-heaps/Makefile |   9 +
> > >  .../selftests/dmabuf-heaps/dmabuf-heap.c      | 230 +++++++++++++++
> > >  14 files changed, 1262 insertions(+)
> > >  create mode 100644 drivers/dma-buf/dma-heap.c
> > >  create mode 100644 drivers/dma-buf/heaps/Kconfig
> > >  create mode 100644 drivers/dma-buf/heaps/Makefile
> > >  create mode 100644 drivers/dma-buf/heaps/cma_heap.c
> > >  create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
> > >  create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
> > >  create mode 100644 drivers/dma-buf/heaps/system_heap.c
> > >  create mode 100644 include/linux/dma-heap.h
> > >  create mode 100644 include/uapi/linux/dma-heap.h
> > >  create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
> > >  create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> > >
> > > --
> > > 2.17.1
> >
> > --
> > Thanks and regards,
> >
> > Sumit Semwal
> > Linaro Consumer Group - Kernel Team Lead
> > Linaro.org │ Open source software for ARM SoCs
> >
> > _______________________________________________
> > dri-devel mailing list
> > dri-devel@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/dri-devel
++ john.stultz@linaro.org (Sorry, somehow I am missing your email while
sending. :( )

On Fri, Oct 18, 2019 at 06:41:24PM +0000, Ayan Halder wrote:
> On Fri, Oct 18, 2019 at 09:55:17AM +0000, Brian Starkey wrote:
> > On Thu, Oct 17, 2019 at 01:57:45PM -0700, John Stultz wrote:
> > > On Thu, Oct 17, 2019 at 12:29 PM Andrew F. Davis <afd@ti.com> wrote:
> > > > On 10/17/19 3:14 PM, John Stultz wrote:
> > > > > But if the objection stands, do you have a proposal for an alternative
> > > > > way to enumerate a subset of CMA heaps?
> > > > >
> > > > When in staging ION had to reach into the CMA framework as the other
> > > > direction would not be allowed, so cma_for_each_area() was added. If
> > > > DMA-BUF heaps is not in staging then we can do the opposite, and have
> > > > the CMA framework register heaps itself using our framework. That way
> > > > the CMA system could decide what areas to export or not (maybe based on
> > > > a DT property or similar).
> > >
> > > Ok. Though the CMA core doesn't have much sense of DT details either,
> > > so it would probably have to be done in the reserved_mem logic, which
> > > doesn't feel right to me.
> > >
> > > I'd probably guess we should have some sort of dt binding to describe
> > > a dmabuf cma heap and from that node link to a CMA node via a
> > > memory-region phandle. Along with maybe the default heap as well? Not
> > > eager to get into another binding review cycle, and I'm not sure what
> > > non-DT systems will do yet, but I'll take a shot at it and iterate.
> > >
> > > > The end result is the same so we can make this change later (it has to
> > > > come after DMA-BUF heaps is in anyway).
> > >
> > > Well, I'm hesitant to merge code that exposes all the CMA heaps and
> > > then add patches that becomes more selective, should anyone depend on
> > > the initial behavior. :/
> >
> > How about only auto-adding the system default CMA region (cma->name ==
> > "reserved")?
> >
> > And/or the CMA auto-add could be behind a config option? It seems a
> > shame to further delay this, and the CMA heap itself really is useful.
>
> A bit of a detour, comming back to the issue why the following node
> was not getting detected by the dma-buf heaps framework.
>
> reserved-memory {
>         #address-cells = <2>;
>         #size-cells = <2>;
>         ranges;
>
>         display_reserved: framebuffer@60000000 {
>                 compatible = "shared-dma-pool";
>                 linux,cma-default;
>                 reusable;  <<<<<<<<<<<<----------- This was missing in our earlier node
>                 reg = <0 0x60000000 0 0x08000000>;
>         };
>
> Quoting reserved-memory.txt :-
> "The operating system can use the memory in this region with the limitation that
> the device driver(s) owning the region need to be able to reclaim it back"
>
> Thus as per my observation, without 'reusable', rmem_cma_setup()
> returns -EINVAL and the reserved-memory is not added as a cma region.
>
> With 'reusable', rmem_cma_setup() succeeds, but the kernel crashes as follows :-
>
> [ 0.450562] WARNING: CPU: 2 PID: 1 at mm/cma.c:110 cma_init_reserved_areas+0xec/0x22c
> [ 0.458415] Modules linked in:
> [ 0.461470] CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.3.0-rc4-01377-g51dbcf03884c-dirty #15
> [ 0.470017] Hardware name: ARM Juno development board (r0) (DT)
> [ 0.475953] pstate: 80000005 (Nzcv daif -PAN -UAO)
> [ 0.480755] pc : cma_init_reserved_areas+0xec/0x22c
> [ 0.485643] lr : cma_init_reserved_areas+0xe8/0x22c
> <----snip register dump --->
>
> [ 0.600646] Unable to handle kernel paging request at virtual address ffff7dffff800000
> [ 0.608591] Mem abort info:
> [ 0.611386]   ESR = 0x96000006
> <---snip uninteresting bits --->
> [ 0.681069] pc : cma_init_reserved_areas+0x114/0x22c
> [ 0.686043] lr : cma_init_reserved_areas+0xe8/0x22c
>
> I am looking into this now. My final objective is to get
> "/dev/dma_heap/framebuffer" (as a cma heap).
> Any leads?
>
> > Cheers,
> > -Brian
> >
> > > So, <sigh>, I'll start on the rework for the CMA bits.
> > >
> > > That said, I'm definitely wanting to make some progress on this patch
> > > series, so maybe we can still merge the core/helpers/system heap and
> > > just hold the cma heap for a rework on the enumeration bits. That way
> > > we can at least get other folks working on switching their vendor
> > > heaps from ION.
> > >
> > > Sumit: Does that sound ok? Assuming no other objections, can you take
> > > the v11 set minus the CMA heap patch?
> > >
> > > thanks
> > > -john
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
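To make the binding idea John sketches above a little more concrete, a purely hypothetical devicetree fragment in that direction might look like the following. This is illustration only: the "dma-heap,cma" compatible string and the heap node are invented here and were never part of any reviewed or accepted binding.

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	display_reserved: framebuffer@60000000 {
		compatible = "shared-dma-pool";
		reusable;
		reg = <0 0x60000000 0 0x08000000>;
	};
};

/* Hypothetical heap node linking to the CMA region above via a
 * memory-region phandle, as John suggests; the compatible string
 * is made up for this sketch. */
framebuffer-heap {
	compatible = "dma-heap,cma";
	memory-region = <&display_reserved>;
};
```

The attraction of this shape is that the reserved-memory node stays a plain shared-dma-pool that the CMA code already understands, while the separate heap node carries the "export this region as /dev/dma_heap/framebuffer" decision, so regions not referenced by a heap node would simply not be exposed.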