From patchwork Tue Nov 20 14:31:45 2012
X-Patchwork-Submitter: Marek Szyprowski
X-Patchwork-Id: 13005
From: Marek Szyprowski
To: linux-arm-kernel@lists.infradead.org, linaro-mm-sig@lists.linaro.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Thomas Petazzoni, Andrew Lunn, Jason Cooper, Arnd Bergmann,
    Michal Hocko, Kyungmin Park, Soren Moch, Mel Gorman, Andrew Morton,
    KAMEZAWA Hiroyuki, Sebastian Hesselbarth
Subject: [Linaro-mm-sig] [PATCH v2] mm: dmapool: use provided gfp flags for
    all dma_alloc_coherent() calls
Date: Tue, 20 Nov 2012 15:31:45 +0100
Message-id: <1353421905-3112-1-git-send-email-m.szyprowski@samsung.com>
X-Mailer: git-send-email 1.7.9.5
In-reply-to: <20121119144826.f59667b2.akpm@linux-foundation.org>
References: <20121119144826.f59667b2.akpm@linux-foundation.org>

dmapool always calls dma_alloc_coherent() with the GFP_ATOMIC flag,
regardless of the flags provided by the caller. This causes excessive
pruning of emergency memory pools without any good reason. Additionally,
on ARM architecture any driver which is using dmapools will sooner or
later trigger the following error: "ERROR: 256 KiB atomic DMA coherent
pool is too small! Please increase it with coherent_pool= kernel
parameter!". Increasing the coherent pool size usually doesn't help much
and only delays such an error, because all GFP_ATOMIC DMA allocations are
always served from the special, very limited memory pool.

This patch changes the dmapool code to correctly use the gfp flags
provided by the dmapool caller.

Reported-by: Soeren Moch
Reported-by: Thomas Petazzoni
Signed-off-by: Marek Szyprowski
Tested-by: Andrew Lunn
Tested-by: Soeren Moch
---
changelog v2:
- removed all waitq related stuff
- extended commit message

 mm/dmapool.c | 31 +++++++------------------------
 1 file changed, 7 insertions(+), 24 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index c5ab33b..da1b0f0 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -50,7 +50,6 @@ struct dma_pool {	/* the pool */
 	size_t allocation;
 	size_t boundary;
 	char name[32];
-	wait_queue_head_t waitq;
 	struct list_head pools;
 };
 
@@ -62,8 +61,6 @@ struct dma_page {	/* cacheable header for 'allocation' bytes */
 	unsigned int offset;
 };
 
-#define	POOL_TIMEOUT_JIFFIES	((100 /* msec */ * HZ) / 1000)
-
 static DEFINE_MUTEX(pools_lock);
 
 static ssize_t
@@ -172,7 +169,6 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
 	retval->size = size;
 	retval->boundary = boundary;
 	retval->allocation = allocation;
-	init_waitqueue_head(&retval->waitq);
 
 	if (dev) {
 		int ret;
@@ -227,7 +223,6 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
 		memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
 #endif
 		pool_initialise_page(pool, page);
-		list_add(&page->page_list, &pool->page_list);
 		page->in_use = 0;
 		page->offset = 0;
 	} else {
@@ -315,30 +310,21 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 	might_sleep_if(mem_flags & __GFP_WAIT);
 
 	spin_lock_irqsave(&pool->lock, flags);
- restart:
 	list_for_each_entry(page, &pool->page_list, page_list) {
 		if (page->offset < pool->allocation)
 			goto ready;
 	}
-	page = pool_alloc_page(pool, GFP_ATOMIC);
-	if (!page) {
-		if (mem_flags & __GFP_WAIT) {
-			DECLARE_WAITQUEUE(wait, current);
 
-			__set_current_state(TASK_UNINTERRUPTIBLE);
-			__add_wait_queue(&pool->waitq, &wait);
-			spin_unlock_irqrestore(&pool->lock, flags);
+	/* pool_alloc_page() might sleep, so temporarily drop &pool->lock */
+	spin_unlock_irqrestore(&pool->lock, flags);
 
-			schedule_timeout(POOL_TIMEOUT_JIFFIES);
+	page = pool_alloc_page(pool, mem_flags);
+	if (!page)
+		return NULL;
 
-			spin_lock_irqsave(&pool->lock, flags);
-			__remove_wait_queue(&pool->waitq, &wait);
-			goto restart;
-		}
-		retval = NULL;
-		goto done;
-	}
+	spin_lock_irqsave(&pool->lock, flags);
 
+	list_add(&page->page_list, &pool->page_list);
  ready:
 	page->in_use++;
 	offset = page->offset;
@@ -348,7 +334,6 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 #ifdef	DMAPOOL_DEBUG
 	memset(retval, POOL_POISON_ALLOCATED, pool->size);
 #endif
- done:
 	spin_unlock_irqrestore(&pool->lock, flags);
 	return retval;
 }
@@ -435,8 +420,6 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
 	page->in_use--;
 	*(int *)vaddr = page->offset;
 	page->offset = offset;
-	if (waitqueue_active(&pool->waitq))
-		wake_up_locked(&pool->waitq);
 	/*
 	 * Resist a temptation to do
 	 *    if (!is_page_busy(page)) pool_free_page(pool, page);
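
As an illustration of the call pattern this fixes, here is a minimal,
hypothetical driver sketch (the device, names and sizes are invented and
are not part of the patch): a dma_pool user allocating descriptors in
process context passes GFP_KERNEL, yet before this patch every
pool-growing dma_alloc_coherent() call was issued with GFP_ATOMIC and,
on ARM, served from the small atomic coherent pool.

#include <linux/device.h>
#include <linux/dmapool.h>
#include <linux/gfp.h>

#define EXAMPLE_DESC_SIZE	64	/* bytes per descriptor (made up) */
#define EXAMPLE_DESC_ALIGN	64	/* hardware alignment (made up) */
#define EXAMPLE_NUM_DESC	256	/* ring length (made up) */

static int example_setup_ring(struct device *dev)
{
	struct dma_pool *pool;
	dma_addr_t dma;
	void *desc;
	int i;

	pool = dma_pool_create("example-ring", dev, EXAMPLE_DESC_SIZE,
			       EXAMPLE_DESC_ALIGN, 0);
	if (!pool)
		return -ENOMEM;

	for (i = 0; i < EXAMPLE_NUM_DESC; i++) {
		/*
		 * Process context, so GFP_KERNEL is correct here.  Before
		 * this patch the flag was ignored whenever the pool had to
		 * grow: pool_alloc_page() called dma_alloc_coherent() with
		 * GFP_ATOMIC.  With the patch, mem_flags is passed through.
		 */
		desc = dma_pool_alloc(pool, GFP_KERNEL, &dma);
		if (!desc) {
			/* real code would dma_pool_free() earlier blocks */
			dma_pool_destroy(pool);
			return -ENOMEM;
		}
		/* ... program 'dma' into the hardware ring ... */
	}
	return 0;
}

Callers that genuinely allocate from atomic context are unaffected: they
already pass GFP_ATOMIC, and with this patch the pool keeps honouring it.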