From patchwork Sat Apr 14 11:52:10 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Prathyush
X-Patchwork-Id: 7875
From: Prathyush
To: dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Date: Sat, 14 Apr 2012 17:22:10 +0530
Message-id: <1334404333-24592-2-git-send-email-prathyush.k@samsung.com>
X-Mailer: git-send-email 1.7.0.4
In-reply-to: <1334404333-24592-1-git-send-email-prathyush.k@samsung.com>
References: <1334404333-24592-1-git-send-email-prathyush.k@samsung.com>
Cc: inki.dae@samsung.com, sunilm@samsung.com, subash.rp@samsung.com,
 prashanth.g@samsung.com, prathyush.k@samsung.com
Subject: [Linaro-mm-sig] [PATCH 1/4] [RFC] drm/exynos: DMABUF: Added support
 for exporting non-contig buffers
List-Id: "Unified memory management interest group."

With this change, the exynos drm dmabuf module can export and import
dma-bufs of GEM objects backed by non-contiguous memory. The
exynos_map_dmabuf function creates an SGT for a non-contiguous buffer by
calling dma_get_pages to retrieve the allocated pages, and then maps the
SGT into the caller's address space.
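(Editor's note, not part of the patch.) The core of the SGT construction below is a chunk-merging pass: walk the page array and start a new scatterlist entry whenever two neighbouring pages are not physically adjacent. As a minimal userspace sketch of that idea, count_chunks and the pfns array are hypothetical stand-ins for struct page pointers; consecutive values model physically consecutive pages:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Userspace model of the chunk-counting step in drm_dc_pages_to_sgt:
 * a "chunk" is a maximal run of physically contiguous pages, and each
 * chunk becomes one scatterlist entry. Assumes n_pages >= 1, as the
 * kernel code does. Hypothetical helper, not the patch's code.
 */
static int count_chunks(const uintptr_t *pfns, size_t n_pages)
{
	int chunks = 1;
	size_t i;

	for (i = 1; i < n_pages; ++i)
		if (pfns[i] != pfns[i - 1] + 1)
			++chunks;	/* gap found: start a new chunk */
	return chunks;
}
```

For example, the page-frame sequence 10, 11, 12, 20, 21, 40 contains three contiguous runs, so it would need a three-entry scatterlist rather than one entry per page.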
Signed-off-by: Prathyush K
---
 drivers/gpu/drm/exynos/exynos_drm_dmabuf.c |   98 +++++++++++++++++++++++-----
 1 files changed, 81 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c b/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c
index cbb6ad4..54b88bd 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_dmabuf.c
@@ -56,6 +56,59 @@ static void exynos_dmabuf_detach(struct dma_buf *dmabuf,
 	dma_buf_put(dmabuf);
 }
+
+static struct sg_table *drm_dc_pages_to_sgt(struct page **pages,
+	unsigned long n_pages, size_t offset, size_t offset2, dma_addr_t daddr)
+{
+	struct sg_table *sgt;
+	struct scatterlist *s;
+	int i, j, cur_page, chunks, ret;
+
+	sgt = kzalloc(sizeof *sgt, GFP_KERNEL);
+	if (!sgt)
+		return ERR_PTR(-ENOMEM);
+
+	/* compute number of chunks */
+	chunks = 1;
+	for (i = 1; i < n_pages; ++i)
+		if (pages[i] != pages[i - 1] + 1)
+			++chunks;
+
+	ret = sg_alloc_table(sgt, chunks, GFP_KERNEL);
+	if (ret) {
+		kfree(sgt);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	/* merging chunks and putting them into the scatterlist */
+	cur_page = 0;
+	for_each_sg(sgt->sgl, s, sgt->orig_nents, i) {
+		size_t size = PAGE_SIZE;
+
+		for (j = cur_page + 1; j < n_pages; ++j) {
+			if (pages[j] != pages[j - 1] + 1)
+				break;
+			size += PAGE_SIZE;
+		}
+
+		/* cut offset if chunk starts at the first page */
+		if (cur_page == 0)
+			size -= offset;
+		/* cut offset2 if chunk ends at the last page */
+		if (j == n_pages)
+			size -= offset2;
+
+		sg_set_page(s, pages[cur_page], size, offset);
+		s->dma_address = daddr;
+		daddr += size;
+		offset = 0;
+		cur_page = j;
+	}
+
+	return sgt;
+}
+
+
 static struct sg_table *exynos_map_dmabuf(struct dma_buf_attachment *attach,
 					enum dma_data_direction direction)
 {
@@ -64,6 +117,8 @@ static struct sg_table *exynos_map_dmabuf(struct dma_buf_attachment *attach,
 	struct exynos_drm_gem_buf *buffer;
 	struct sg_table *sgt;
 	int ret;
+	int size, n_pages;
+	struct page **pages = NULL;
 
 	DRM_DEBUG_KMS("%s\n", __FILE__);
@@ -71,27 +126,37 @@ static struct sg_table *exynos_map_dmabuf(struct dma_buf_attachment *attach,
 
 	buffer = exynos_gem_obj->buffer;
 
-	/* TODO. consider physically non-continuous memory with IOMMU. */
+	size = buffer->size;
+	n_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
 
-	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
-	if (!sgt) {
-		DRM_DEBUG_KMS("failed to allocate sg table.\n");
-		return ERR_PTR(-ENOMEM);
+	pages = kmalloc(n_pages * sizeof pages[0], GFP_KERNEL);
+	if (!pages) {
+		DRM_DEBUG_KMS("failed to alloc page table\n");
+		return NULL;
 	}
 
-	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+	ret = dma_get_pages(attach->dev, buffer->kvaddr,
+			buffer->dma_addr, pages, n_pages);
 	if (ret < 0) {
-		DRM_DEBUG_KMS("failed to allocate scatter list.\n");
-		kfree(sgt);
-		sgt = NULL;
-		return ERR_PTR(-ENOMEM);
+		DRM_DEBUG_KMS("failed to get buffer pages from DMA API\n");
+		return NULL;
 	}
+	if (ret != n_pages) {
+		DRM_DEBUG_KMS("failed to get all pages from DMA API\n");
+		return NULL;
+	}
+
+	sgt = drm_dc_pages_to_sgt(pages, n_pages, 0, 0, buffer->dma_addr);
+	if (IS_ERR(sgt)) {
+		DRM_DEBUG_KMS("failed to prepare sg table\n");
+		return NULL;
+	}
+
+	sgt->nents = dma_map_sg(attach->dev, sgt->sgl,
+			sgt->orig_nents, DMA_BIDIRECTIONAL);
 
-	sg_init_table(sgt->sgl, 1);
-	sg_dma_len(sgt->sgl) = buffer->size;
-	sg_set_page(sgt->sgl, pfn_to_page(PFN_DOWN(buffer->dma_addr)),
-			buffer->size, 0);
-	sg_dma_address(sgt->sgl) = buffer->dma_addr;
+	/* pages are no longer needed */
+	kfree(pages);
 
 	/*
 	 * increase reference count of this buffer.
@@ -303,8 +368,6 @@ int exynos_dmabuf_prime_fd_to_handle(struct drm_device *drm_dev,
 	if (ret < 0)
 		goto fail_handle;
 
-	/* consider physically non-continuous memory with IOMMU. */
-
 	buffer->dma_addr = sg_dma_address(sgt->sgl);
 	buffer->size = sg_dma_len(sgt->sgl);
@@ -316,6 +379,7 @@ int exynos_dmabuf_prime_fd_to_handle(struct drm_device *drm_dev,
 	atomic_set(&buffer->shared_refcount, 1);
 
 	exynos_gem_obj->base.import_attach = attach;
+	exynos_gem_obj->buffer = buffer;
 
 	ret = drm_prime_insert_fd_handle_mapping(&file_priv->prime,
 			dmabuf, *handle);
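(Editor's note, not part of the patch.) The n_pages computation in exynos_map_dmabuf, PAGE_ALIGN(size) >> PAGE_SHIFT, rounds the buffer size up to a whole number of pages before asking the DMA API for the page array. A userspace model of that arithmetic, assuming a 4 KiB page; the MODEL_* names are hypothetical stand-ins for the kernel's per-architecture macros:

```c
#include <assert.h>
#include <stddef.h>

/* Assumed 4 KiB page; the kernel defines these per architecture. */
#define MODEL_PAGE_SHIFT 12
#define MODEL_PAGE_SIZE  (1UL << MODEL_PAGE_SHIFT)
#define MODEL_PAGE_ALIGN(x) \
	(((x) + MODEL_PAGE_SIZE - 1) & ~(MODEL_PAGE_SIZE - 1))

/* Number of pages needed to cover a buffer of the given byte size. */
static size_t buffer_n_pages(size_t size)
{
	return MODEL_PAGE_ALIGN(size) >> MODEL_PAGE_SHIFT;
}
```

So a 4097-byte buffer needs two pages, and even a one-byte buffer still occupies a full page; this is the count exynos_map_dmabuf uses to size the pages array it passes to dma_get_pages.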