From patchwork Mon Jul 14 08:28:06 2014
X-Patchwork-Submitter: Marek Szyprowski
X-Patchwork-Id: 33556
From: Marek Szyprowski <m.szyprowski@samsung.com>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: Jon Medhurst, devicetree@vger.kernel.org, Andrew Morton, Arnd Bergmann, Josh Cartwright, Catalin Marinas, Tomasz Figa, Will Deacon, Michal Nazarewicz, linaro-mm-sig@lists.linaro.org, Paul Mackerras, "Aneesh Kumar K.V.", Grant Likely, Joonsoo Kim, Sascha Hauer
Date: Mon, 14 Jul 2014 10:28:06 +0200
Message-id: <1405326487-15346-4-git-send-email-m.szyprowski@samsung.com>
In-reply-to: <1405326487-15346-1-git-send-email-m.szyprowski@samsung.com>
References: <1405326487-15346-1-git-send-email-m.szyprowski@samsung.com>
Subject: [Linaro-mm-sig] [PATCH v2 RESEND 3/4] drivers: dma-coherent: add initialization from device tree

The initialization procedure of the dma coherent pool has been split into
two parts, so the memory pool can now be initialized without assigning it
to a particular struct device. The initialized region can then be assigned
to more than one struct device. To protect against concurrent allocations
from different devices, a spinlock has been added to the dma_coherent_mem
structure. The last part of this patch adds support for handling
'shared-dma-pool' reserved-memory device tree nodes.
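For reference, a 'shared-dma-pool' reserved-memory node that this code handles could look like the sketch below (addresses, sizes, labels and the consumer device node are illustrative only, not part of this patch):

```dts
/ {
	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* dedicated coherent pool, picked up by rmem_dma_setup() */
		multimedia_pool: multimedia@77000000 {
			compatible = "shared-dma-pool";
			reg = <0x77000000 0x600000>;
		};
	};

	/* a device that allocates from the pool above */
	video-codec@12300000 {
		memory-region = <&multimedia_pool>;
	};
};
```

Note that rmem_dma_setup() rejects nodes that also carry the 'reusable' property, since a reusable region cannot be handed out as a dedicated coherent pool.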
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
 drivers/base/dma-coherent.c | 137 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 118 insertions(+), 19 deletions(-)

diff --git a/drivers/base/dma-coherent.c b/drivers/base/dma-coherent.c
index 7d6e84a51424..7185a4f247e1 100644
--- a/drivers/base/dma-coherent.c
+++ b/drivers/base/dma-coherent.c
@@ -14,11 +14,14 @@ struct dma_coherent_mem {
 	int		size;
 	int		flags;
 	unsigned long	*bitmap;
+	spinlock_t	spinlock;
 };
 
-int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
-				dma_addr_t device_addr, size_t size, int flags)
+static int dma_init_coherent_memory(phys_addr_t phys_addr, dma_addr_t device_addr,
+				    size_t size, int flags,
+				    struct dma_coherent_mem **mem)
 {
+	struct dma_coherent_mem *dma_mem = NULL;
 	void __iomem *mem_base = NULL;
 	int pages = size >> PAGE_SHIFT;
 	int bitmap_size = BITS_TO_LONGS(pages) * sizeof(long);
@@ -27,27 +30,26 @@ int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
 		goto out;
 	if (!size)
 		goto out;
-	if (dev->dma_mem)
-		goto out;
-
-	/* FIXME: this routine just ignores DMA_MEMORY_INCLUDES_CHILDREN */
 
 	mem_base = ioremap(phys_addr, size);
 	if (!mem_base)
 		goto out;
 
-	dev->dma_mem = kzalloc(sizeof(struct dma_coherent_mem), GFP_KERNEL);
-	if (!dev->dma_mem)
+	dma_mem = kzalloc(sizeof(struct dma_coherent_mem), GFP_KERNEL);
+	if (!dma_mem)
 		goto out;
-	dev->dma_mem->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-	if (!dev->dma_mem->bitmap)
+	dma_mem->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
+	if (!dma_mem->bitmap)
 		goto free1_out;
 
-	dev->dma_mem->virt_base = mem_base;
-	dev->dma_mem->device_base = device_addr;
-	dev->dma_mem->pfn_base = PFN_DOWN(phys_addr);
-	dev->dma_mem->size = pages;
-	dev->dma_mem->flags = flags;
+	dma_mem->virt_base = mem_base;
+	dma_mem->device_base = device_addr;
+	dma_mem->pfn_base = PFN_DOWN(phys_addr);
+	dma_mem->size = pages;
+	dma_mem->flags = flags;
+	spin_lock_init(&dma_mem->spinlock);
+
+	*mem = dma_mem;
 
 	if (flags & DMA_MEMORY_MAP)
 		return DMA_MEMORY_MAP;
@@ -55,12 +57,51 @@ int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
 	return DMA_MEMORY_IO;
 
 free1_out:
-	kfree(dev->dma_mem);
+	kfree(dma_mem);
 out:
 	if (mem_base)
 		iounmap(mem_base);
 	return 0;
 }
+
+static void dma_release_coherent_memory(struct dma_coherent_mem *mem)
+{
+	if (!mem)
+		return;
+	iounmap(mem->virt_base);
+	kfree(mem->bitmap);
+	kfree(mem);
+}
+
+static int dma_assign_coherent_memory(struct device *dev,
+				      struct dma_coherent_mem *mem)
+{
+	if (dev->dma_mem)
+		return -EBUSY;
+
+	dev->dma_mem = mem;
+	/* FIXME: this routine just ignores DMA_MEMORY_INCLUDES_CHILDREN */
+
+	return 0;
+}
+
+int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
+				dma_addr_t device_addr, size_t size, int flags)
+{
+	struct dma_coherent_mem *mem;
+	int ret;
+
+	ret = dma_init_coherent_memory(phys_addr, device_addr, size, flags,
+				       &mem);
+	if (ret == 0)
+		return 0;
+
+	if (dma_assign_coherent_memory(dev, mem) == 0)
+		return ret;
+
+	dma_release_coherent_memory(mem);
+	return 0;
+}
 EXPORT_SYMBOL(dma_declare_coherent_memory);
 
 void dma_release_declared_memory(struct device *dev)
@@ -69,10 +110,8 @@ void dma_release_declared_memory(struct device *dev)
 
 	if (!mem)
 		return;
+	dma_release_coherent_memory(mem);
 	dev->dma_mem = NULL;
-	iounmap(mem->virt_base);
-	kfree(mem->bitmap);
-	kfree(mem);
 }
 EXPORT_SYMBOL(dma_release_declared_memory);
 
@@ -80,6 +119,7 @@ void *dma_mark_declared_memory_occupied(struct device *dev,
 					dma_addr_t device_addr, size_t size)
 {
 	struct dma_coherent_mem *mem = dev->dma_mem;
+	unsigned long flags;
 	int pos, err;
 
 	size += device_addr & ~PAGE_MASK;
@@ -87,8 +127,11 @@ void *dma_mark_declared_memory_occupied(struct device *dev,
 	if (!mem)
 		return ERR_PTR(-EINVAL);
 
+	spin_lock_irqsave(&mem->spinlock, flags);
 	pos = (device_addr - mem->device_base) >> PAGE_SHIFT;
 	err = bitmap_allocate_region(mem->bitmap, pos, get_order(size));
+	spin_unlock_irqrestore(&mem->spinlock, flags);
+
 	if (err != 0)
 		return ERR_PTR(err);
 	return mem->virt_base + (pos << PAGE_SHIFT);
@@ -115,6 +158,7 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
 {
 	struct dma_coherent_mem *mem;
 	int order = get_order(size);
+	unsigned long flags;
 	int pageno;
 
 	if (!dev)
@@ -124,6 +168,7 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
 		return 0;
 
 	*ret = NULL;
+	spin_lock_irqsave(&mem->spinlock, flags);
 	if (unlikely(size > (mem->size << PAGE_SHIFT)))
 		goto err;
 
@@ -138,10 +183,12 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
 	*dma_handle = mem->device_base + (pageno << PAGE_SHIFT);
 	*ret = mem->virt_base + (pageno << PAGE_SHIFT);
 	memset(*ret, 0, size);
+	spin_unlock_irqrestore(&mem->spinlock, flags);
 
 	return 1;
 
 err:
+	spin_unlock_irqrestore(&mem->spinlock, flags);
 	/*
 	 * In the case where the allocation can not be satisfied from the
 	 * per-device area, try to fall back to generic memory if the
@@ -171,8 +218,11 @@ int dma_release_from_coherent(struct device *dev, int order, void *vaddr)
 	if (mem && vaddr >= mem->virt_base && vaddr <
 		   (mem->virt_base + (mem->size << PAGE_SHIFT))) {
 		int page = (vaddr - mem->virt_base) >> PAGE_SHIFT;
+		unsigned long flags;
 
+		spin_lock_irqsave(&mem->spinlock, flags);
 		bitmap_release_region(mem->bitmap, page, order);
+		spin_unlock_irqrestore(&mem->spinlock, flags);
 		return 1;
 	}
 	return 0;
@@ -218,3 +268,52 @@ int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
 	return 0;
 }
 EXPORT_SYMBOL(dma_mmap_from_coherent);
+
+/*
+ * Support for reserved memory regions defined in device tree
+ */
+#ifdef CONFIG_OF_RESERVED_MEM
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/sizes.h>
+
+static void rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
+{
+	struct dma_coherent_mem *mem = rmem->priv;
+	if (!mem &&
+	    dma_init_coherent_memory(rmem->base, rmem->base, rmem->size,
+				     DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE,
+				     &mem) != DMA_MEMORY_MAP) {
+		pr_info("Reserved memory: failed to init DMA memory pool at %pa, size %ld MiB\n",
+			&rmem->base, (unsigned long)rmem->size / SZ_1M);
+		return;
+	}
+	rmem->priv = mem;
+	dma_assign_coherent_memory(dev, mem);
+}
+
+static void rmem_dma_device_release(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	dev->dma_mem = NULL;
+}
+
+static const struct reserved_mem_ops rmem_dma_ops = {
+	.device_init	= rmem_dma_device_init,
+	.device_release	= rmem_dma_device_release,
+};
+
+static int __init rmem_dma_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL))
+		return -EINVAL;
+
+	rmem->ops = &rmem_dma_ops;
+	pr_info("Reserved memory: created DMA memory pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+RESERVEDMEM_OF_DECLARE(dma, "shared-dma-pool", rmem_dma_setup);
+#endif