From patchwork Fri Mar 29 00:16:00 2019
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 161378
From: John Stultz
To: lkml
Cc: John Stultz, Laura Abbott, Benjamin Gaignard, Sumit Semwal, Liam Mark,
 Pratik Patel, Brian Starkey, Vincent Donnefort, Sudipto Paul,
 "Andrew F. Davis", Xu YiPing, "Chenfeng (puck)", butao, "Xiaqing (A)",
 Yudongbin, Christoph Hellwig, Chenbo Feng, Alistair Strachan,
 dri-devel@lists.freedesktop.org
Subject: [RFC][PATCH 4/6 v3] dma-buf: heaps: Add CMA heap to dmabuf heaps
Date: Thu, 28 Mar 2019 17:16:00 -0700
Message-Id: <1553818562-2516-5-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1553818562-2516-1-git-send-email-john.stultz@linaro.org>
References: <1553818562-2516-1-git-send-email-john.stultz@linaro.org>

This adds a CMA heap, which allows userspace to allocate a dma-buf of
contiguous memory out of a CMA region.

This code is an evolution of the Android ION implementation, so
thanks to its original author and maintainers:
Benjamin Gaignard, Laura Abbott, and others!

Cc: Laura Abbott
Cc: Benjamin Gaignard
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Pratik Patel
Cc: Brian Starkey
Cc: Vincent Donnefort
Cc: Sudipto Paul
Cc: Andrew F. Davis
Cc: Xu YiPing
Cc: "Chenfeng (puck)"
Cc: butao
Cc: "Xiaqing (A)"
Cc: Yudongbin
Cc: Christoph Hellwig
Cc: Chenbo Feng
Cc: Alistair Strachan
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
v2:
* Switch allocate to return dmabuf fd
* Simplify init code
* Checkpatch fixups
v3:
* Switch to inline function for to_cma_heap()
* Minor cleanups suggested by Brian
* Fold in new registration style from Andrew
* Folded in changes from Andrew to use simplified page list from the
  heap helpers
---
 drivers/dma-buf/heaps/Kconfig    |   8 ++
 drivers/dma-buf/heaps/Makefile   |   1 +
 drivers/dma-buf/heaps/cma_heap.c | 170 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 179 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/cma_heap.c

-- 
2.7.4

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index 2050527..a5eef06 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -4,3 +4,11 @@ config DMABUF_HEAPS_SYSTEM
 	help
 	  Choose this option to enable the system dmabuf heap. The system heap
 	  is backed by pages from the buddy allocator. If in doubt, say Y.
+
+config DMABUF_HEAPS_CMA
+	bool "DMA-BUF CMA Heap"
+	depends on DMABUF_HEAPS && DMA_CMA
+	help
+	  Choose this option to enable dma-buf CMA heap. This heap is backed
+	  by the Contiguous Memory Allocator (CMA). If your system has these
+	  regions, you should say Y here.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index d1808ec..6e54cde 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-y += heap-helpers.o
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
new file mode 100644
index 0000000..f4485c60
--- /dev/null
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -0,0 +1,170 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF CMA heap exporter
+ *
+ * Copyright (C) 2012, 2019 Linaro Ltd.
+ * Author: Benjamin Gaignard for ST-Ericsson.
+ */
+
+#include <linux/cma.h>
+#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/scatterlist.h>
+
+#include "heap-helpers.h"
+
+struct cma_heap {
+	struct dma_heap *heap;
+	struct cma *cma;
+};
+
+static void cma_heap_free(struct heap_helper_buffer *buffer)
+{
+	struct cma_heap *cma_heap = dma_heap_get_data(buffer->heap_buffer.heap);
+	struct page *pages = buffer->priv_virt;
+	unsigned long nr_pages;
+
+	nr_pages = buffer->heap_buffer.size >> PAGE_SHIFT;
+
+	/* free page list */
+	kfree(buffer->pages);
+	/* release memory */
+	cma_release(cma_heap->cma, pages, nr_pages);
+	kfree(buffer);
+}
+
+/* dmabuf heap CMA operations functions */
+static int cma_heap_allocate(struct dma_heap *heap,
+			     unsigned long len,
+			     unsigned long flags)
+{
+	struct cma_heap *cma_heap = dma_heap_get_data(heap);
+	struct heap_helper_buffer *helper_buffer;
+	struct page *pages;
+	size_t size = PAGE_ALIGN(len);
+	unsigned long nr_pages = size >> PAGE_SHIFT;
+	unsigned long align = get_order(size);
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	struct dma_buf *dmabuf;
+	int ret = -ENOMEM;
+	pgoff_t pg;
+
+	if (align > CONFIG_CMA_ALIGNMENT)
+		align = CONFIG_CMA_ALIGNMENT;
+
+	helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
+	if (!helper_buffer)
+		return -ENOMEM;
+
+	INIT_HEAP_HELPER_BUFFER(helper_buffer, cma_heap_free);
+	helper_buffer->heap_buffer.flags = flags;
+	helper_buffer->heap_buffer.heap = heap;
+	helper_buffer->heap_buffer.size = len;
+
+	pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
+	if (!pages)
+		goto free_buf;
+
+	if (PageHighMem(pages)) {
+		unsigned long nr_clear_pages = nr_pages;
+		struct page *page = pages;
+
+		while (nr_clear_pages > 0) {
+			void *vaddr = kmap_atomic(page);
+
+			memset(vaddr, 0, PAGE_SIZE);
+			kunmap_atomic(vaddr);
+			page++;
+			nr_clear_pages--;
+		}
+	} else {
+		memset(page_address(pages), 0, size);
+	}
+
+	helper_buffer->pagecount = nr_pages;
+	helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
+					     sizeof(*helper_buffer->pages),
+					     GFP_KERNEL);
+	if (!helper_buffer->pages) {
+		ret = -ENOMEM;
+		goto free_cma;
+	}
+
+	for (pg = 0; pg < helper_buffer->pagecount; pg++) {
+		helper_buffer->pages[pg] = &pages[pg];
+		if (!helper_buffer->pages[pg])
+			goto free_pages;
+	}
+
+	/* create the dmabuf */
+	exp_info.ops = &heap_helper_ops;
+	exp_info.size = len;
+	exp_info.flags = O_RDWR;
+	exp_info.priv = &helper_buffer->heap_buffer;
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf)) {
+		ret = PTR_ERR(dmabuf);
+		goto free_pages;
+	}
+
+	helper_buffer->heap_buffer.dmabuf = dmabuf;
+	helper_buffer->priv_virt = pages;
+
+	ret = dma_buf_fd(dmabuf, O_CLOEXEC);
+	if (ret < 0) {
+		dma_buf_put(dmabuf);
+		/* just return, as put will call release and that will free */
+		return ret;
+	}
+
+	return ret;
+
+free_pages:
+	kfree(helper_buffer->pages);
+free_cma:
+	cma_release(cma_heap->cma, pages, nr_pages);
+free_buf:
+	kfree(helper_buffer);
+	return ret;
+}
+
+static struct dma_heap_ops cma_heap_ops = {
+	.allocate = cma_heap_allocate,
+};
+
+static int __add_cma_heap(struct cma *cma, void *data)
+{
+	struct cma_heap *cma_heap;
+	struct dma_heap_export_info exp_info;
+
+	cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL);
+	if (!cma_heap)
+		return -ENOMEM;
+	cma_heap->cma = cma;
+
+	exp_info.name = cma_get_name(cma);
+	exp_info.ops = &cma_heap_ops;
+	exp_info.priv = cma_heap;
+
+	cma_heap->heap = dma_heap_add(&exp_info);
+	if (IS_ERR(cma_heap->heap)) {
+		int ret = PTR_ERR(cma_heap->heap);
+
+		kfree(cma_heap);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int add_cma_heaps(void)
+{
+	cma_for_each_area(__add_cma_heap, NULL);
+	return 0;
+}
+device_initcall(add_cma_heaps);
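
[Editor's note] For readers following along, here is a minimal userspace sketch of allocating from a heap like this one. It assumes the uapi introduced in patch 1 of this series resembles the <linux/dma-heap.h> interface that later merged upstream (one chardev per heap under /dev/dma_heap/, named via cma_get_name(), and a DMA_HEAP_IOCTL_ALLOC ioctl); the heap name "reserved" is a placeholder for whatever your CMA region is called, and the exact ioctl name in this RFC may differ.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>	/* assumed uapi header, per patch 1 of this series */

/* Allocate a contiguous dma-buf of 'len' bytes; returns the dma-buf fd or -1. */
int alloc_cma_dmabuf(size_t len)
{
	struct dma_heap_allocation_data data = {
		.len = len,			/* heap rounds this up via PAGE_ALIGN() */
		.fd_flags = O_RDWR | O_CLOEXEC,	/* flags for the returned dma-buf fd */
	};
	int heap_fd, ret;

	/* "reserved" is a hypothetical CMA region name */
	heap_fd = open("/dev/dma_heap/reserved", O_RDONLY);
	if (heap_fd < 0)
		return -1;

	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(heap_fd);		/* the allocation outlives the heap fd */
	if (ret < 0)
		return -1;

	return data.fd;		/* ready for mmap() or sharing with a driver */
}
```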
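[Editor's note] On the kernel side, the exported buffer is consumed through the standard dma-buf import path that heap_helper_ops serves; nothing below is specific to this patch. A rough importer sketch using the long-standing dma-buf core API (the function name and error handling are illustrative only):

```c
#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/* Hypothetical importer: map a heap-allocated dma-buf for DMA by 'dev'. */
static struct sg_table *import_cma_buffer(struct device *dev, int fd)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	dmabuf = dma_buf_get(fd);		/* takes a reference via the fd */
	if (IS_ERR(dmabuf))
		return ERR_CAST(dmabuf);

	attach = dma_buf_attach(dmabuf, dev);	/* tell the exporter who will map */
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return ERR_CAST(attach);
	}

	/* For a CMA-backed buffer this should coalesce to one contiguous entry. */
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, attach);
		dma_buf_put(dmabuf);
	}
	/* a real driver would keep dmabuf/attach around to unmap and detach later */
	return sgt;
}
```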