From patchwork Thu Aug 23 06:10:29 2012
X-Patchwork-Submitter: Hiroshi Doyu
X-Patchwork-Id: 10896
From: Hiroshi Doyu <hdoyu@nvidia.com>
Date: Thu, 23 Aug 2012 09:10:29 +0300
Message-ID: <1345702229-9539-5-git-send-email-hdoyu@nvidia.com>
X-Mailer: git-send-email 1.7.5.4
In-Reply-To: <1345702229-9539-1-git-send-email-hdoyu@nvidia.com>
References: <1345702229-9539-1-git-send-email-hdoyu@nvidia.com>
Cc: linux@arm.linux.org.uk, arnd@arndb.de, konrad.wilk@oracle.com,
 minchan@kernel.org, linux-kernel@vger.kernel.org,
 linaro-mm-sig@lists.linaro.org, linux-mm@kvack.org,
 kyungmin.park@samsung.com, pullip.cho@samsung.com,
 linux-arm-kernel@lists.infradead.org
Subject: [Linaro-mm-sig] [v2 4/4] ARM: dma-mapping: IOMMU allocates pages
 from atomic_pool with GFP_ATOMIC
List-Id: "Unified memory management interest group."

Make use of the same atomic pool as the regular DMA path, and skip the
kernel page mapping, which can involve sleepable operations when
allocating a kernel page table.
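For illustration, here is a hypothetical caller showing the situation this
change targets (the function and the surrounding driver are made up; only
the DMA API calls are standard):

	/*
	 * Hypothetical caller (not part of this patch): a coherent DMA
	 * allocation made from atomic context, e.g. a tasklet, on a
	 * device that sits behind an ARM IOMMU.
	 */
	#include <linux/device.h>
	#include <linux/dma-mapping.h>

	static void example_refill(struct device *dev)
	{
		dma_addr_t dma;
		void *cpu;

		/* Atomic context: must not sleep, so GFP_ATOMIC is mandatory. */
		cpu = dma_alloc_coherent(dev, PAGE_SIZE, &dma, GFP_ATOMIC);
		if (!cpu)
			return;

		/* ... hand 'dma' to the device ... */

		dma_free_coherent(dev, PAGE_SIZE, cpu, dma);
	}

Before this patch, such a request on the IOMMU path could reach the kernel
mapping setup, which may allocate page tables and hence sleep; with it,
GFP_ATOMIC requests are served from the atomic pool and the kernel mapping
is skipped.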
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
---
 arch/arm/mm/dma-mapping.c |   30 +++++++++++++++++++++++++-----
 1 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 7ab016b..433312a 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1063,7 +1063,6 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
 	struct page **pages;
 	int count = size >> PAGE_SHIFT;
 	int array_size = count * sizeof(struct page *);
-	int err;
 
 	if ((array_size <= PAGE_SIZE) || (gfp & GFP_ATOMIC))
 		pages = kzalloc(array_size, gfp);
@@ -1072,9 +1071,20 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
 	if (!pages)
 		return NULL;
 
-	err = __alloc_fill_pages(&pages, count, gfp);
-	if (err)
-		goto error
+	if (gfp & GFP_ATOMIC) {
+		struct page *page;
+		int i;
+		void *addr = __alloc_from_pool(size, &page);
+		if (!addr)
+			goto error;
+
+		for (i = 0; i < count; i++)
+			pages[i] = page + i;
+	} else {
+		int err = __alloc_fill_pages(&pages, count, gfp);
+		if (err)
+			goto error;
+	}
 
 	return pages;
 
@@ -1091,9 +1101,15 @@ static int __iommu_free_buffer(struct device *dev, struct page **pages, size_t s
 	int count = size >> PAGE_SHIFT;
 	int array_size = count * sizeof(struct page *);
 	int i;
+
+	if (__free_from_pool(page_address(pages[0]), size))
+		goto out;
+
 	for (i = 0; i < count; i++)
 		if (pages[i])
 			__free_pages(pages[i], 0);
+
+out:
 	if ((array_size <= PAGE_SIZE) ||
 	    __in_atomic_pool(page_address(pages[0]), size))
 		kfree(pages);
@@ -1221,6 +1237,9 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
 	if (*handle == DMA_ERROR_CODE)
 		goto err_buffer;
 
+	if (gfp & GFP_ATOMIC)
+		return page_address(pages[0]);
+
 	if (dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs))
 		return pages;
 
@@ -1279,7 +1298,8 @@ void arm_iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
 		return;
 	}
 
-	if (!dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs)) {
+	if (!dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs) ||
+	    !__in_atomic_pool(cpu_addr, size)) {
 		unmap_kernel_range((unsigned long)cpu_addr, size);
 		vunmap(cpu_addr);
 	}
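As a side note on the free path above, a condensed sketch of the intended
ordering (it reuses the static helpers local to arch/arm/mm/dma-mapping.c,
so it is illustrative only and does not compile standalone):

	/*
	 * The buffer is first offered back to the atomic pool; only a
	 * buffer that did not come from the pool has its pages returned
	 * to the page allocator one by one.
	 */
	static void __iommu_free_sketch(struct page **pages, size_t size,
					int count)
	{
		int i;

		/* Pool buffers are contiguous and freed as one chunk. */
		if (__free_from_pool(page_address(pages[0]), size))
			return;

		for (i = 0; i < count; i++)
			if (pages[i])
				__free_pages(pages[i], 0);
	}

Probing the pool first matters because pool pages were never handed out by
the page allocator individually, so they must not be freed page by page.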