From patchwork Fri Jul 13 18:01:46 2012
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 9994
From: Laura Abbott
To: linaro-mm-sig@lists.linaro.org, Marek Szyprowski, Russell King
Date: Fri, 13 Jul 2012 11:01:46 -0700
Message-Id: <1342202506-12449-2-git-send-email-lauraa@codeaurora.org>
X-Mailer: git-send-email 1.7.8.3
In-Reply-To: <1342202506-12449-1-git-send-email-lauraa@codeaurora.org>
References: <1342202506-12449-1-git-send-email-lauraa@codeaurora.org>
Cc: linux-arm-msm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [Linaro-mm-sig] [PATCH][RFC] arm: dma-mapping: Add
 support for allocating/mapping cached buffers

There are currently no DMA allocation APIs that support cached buffers.
For some use cases, caching provides a significant performance boost
that beats write-combining regions. Add APIs to allocate and map a
cached DMA region.

Signed-off-by: Laura Abbott
---
 arch/arm/include/asm/dma-mapping.h |   21 +++++++++++++++++++++
 arch/arm/mm/dma-mapping.c          |   21 +++++++++++++++++++++
 2 files changed, 42 insertions(+), 0 deletions(-)

diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index dc988ff..1565403 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -239,12 +239,33 @@ int dma_mmap_coherent(struct device *, struct vm_area_struct *,
 extern void *dma_alloc_writecombine(struct device *, size_t, dma_addr_t *,
 		gfp_t);
 
+/**
+ * dma_alloc_cached - allocate cached memory for DMA
+ * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
+ * @size: required memory size
+ * @handle: bus-specific DMA address
+ *
+ * Allocate some cached memory for a device for
+ * performing DMA. This function allocates pages, and will
+ * return the CPU-viewed address, and sets @handle to be the
+ * device-viewed address.
+ */
+extern void *dma_alloc_cached(struct device *, size_t, dma_addr_t *,
+		gfp_t);
+
 #define dma_free_writecombine(dev,size,cpu_addr,handle) \
 	dma_free_coherent(dev,size,cpu_addr,handle)
 
+#define dma_free_cached(dev,size,cpu_addr,handle) \
+	dma_free_coherent(dev,size,cpu_addr,handle)
+
 int dma_mmap_writecombine(struct device *, struct vm_area_struct *,
 		void *, dma_addr_t, size_t);
 
+
+int dma_mmap_cached(struct device *, struct vm_area_struct *,
+		void *, dma_addr_t, size_t);
+
 /*
  * This can be called during boot to increase the size of the consistent
  * DMA region above it's default value of 2MB. It must be called before the
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index b1911c4..f396ddc 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -633,6 +633,20 @@ dma_alloc_writecombine(struct device *dev, size_t size, dma_addr_t *handle, gfp_
 }
 EXPORT_SYMBOL(dma_alloc_writecombine);
 
+/*
+ * Allocate a cached DMA region
+ */
+void *
+dma_alloc_cached(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp)
+{
+	return __dma_alloc(dev, size, handle, gfp,
+			   pgprot_kernel,
+			   __builtin_return_address(0));
+}
+EXPORT_SYMBOL(dma_alloc_cached);
+
+
+
 static int dma_mmap(struct device *dev, struct vm_area_struct *vma,
 		    void *cpu_addr, dma_addr_t dma_addr, size_t size)
 {
@@ -664,6 +678,13 @@ int dma_mmap_writecombine(struct device *dev, struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL(dma_mmap_writecombine);
 
+int dma_mmap_cached(struct device *dev, struct vm_area_struct *vma,
+		    void *cpu_addr, dma_addr_t dma_addr, size_t size)
+{
+	return dma_mmap(dev, vma, cpu_addr, dma_addr, size);
+}
+EXPORT_SYMBOL(dma_mmap_cached);
+
 /*
  * Free a buffer as defined by the above mapping.
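
[Editor's note, not part of the patch: a minimal usage sketch of how a
driver might call the proposed APIs, mirroring the existing
dma_alloc_writecombine() pattern. The function name, PAGE_SIZE buffer
size, and the choice of dma_sync_single_for_device() for cache
maintenance are illustrative assumptions only; since the returned
buffer is cacheable, the caller owns the problem of flushing CPU
writes before the device reads them, and the appropriate maintenance
call is outside the scope of this patch.]

```c
/* Hypothetical driver fragment (sketch only, not from the patch). */
static int example_use_cached_buffer(struct device *dev)
{
	void *cpu_addr;
	dma_addr_t handle;

	/* Allocate a cacheable DMA buffer instead of a write-combined one. */
	cpu_addr = dma_alloc_cached(dev, PAGE_SIZE, &handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* CPU writes land in the cache; assumed maintenance step so the
	 * device observes them before DMA starts. */
	memset(cpu_addr, 0, PAGE_SIZE);
	dma_sync_single_for_device(dev, handle, PAGE_SIZE, DMA_TO_DEVICE);

	/* ... program the device with 'handle' and start DMA ... */

	dma_free_cached(dev, PAGE_SIZE, cpu_addr, handle);
	return 0;
}
```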