From patchwork Thu Jul 5 10:45:20 2012
X-Patchwork-Submitter: Rabin Vincent
X-Patchwork-Id: 9854
Date: Thu, 5 Jul 2012 16:15:20 +0530
From: Rabin Vincent
To: Marek Szyprowski
Cc: linux-arm-msm@vger.kernel.org, 'LKML', 'Michal Nazarewicz',
 linaro-mm-sig@lists.linaro.org, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org
Subject: Re: [Linaro-mm-sig] Bad use of highmem with buffer_migrate_page?
Message-ID: <20120705104520.GA6773@latitude>
In-Reply-To: <015f01cd5a95$c1525dc0$43f71940$%szyprowski@samsung.com>

On Thu, Jul 05, 2012 at 12:05:45PM +0200, Marek Szyprowski wrote:
> On Thursday, July 05, 2012 11:28 AM Rabin Vincent wrote:
> > The problem is still present on latest mainline. The filesystem layer
> > expects that the pages in the block device's mapping are not in highmem
> > (the mapping's gfp mask is set in bdget()), but CMA replaces lowmem
> > pages with highmem pages, leading to the crashes.
> >
> > The above fix should work, but perhaps the following is preferable since
> > it should allow moving highmem pages to other highmem pages?
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 4403009..4a4f921 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -5635,7 +5635,12 @@ static struct page *
> >  __alloc_contig_migrate_alloc(struct page *page, unsigned long private,
> >  				  int **resultp)
> >  {
> > -	return alloc_page(GFP_HIGHUSER_MOVABLE);
> > +	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;
> > +
> > +	if (PageHighMem(page))
> > +		gfp_mask |= __GFP_HIGHMEM;
> > +
> > +	return alloc_page(gfp_mask);
> >  }
> >
> >  /* [start, end) must belong to a single zone. */
>
> The patch looks fine and does its job well. Could you resend it as a complete
> patch with commit message and signed-off-by/reported-by lines? I will handle
> merging it to mainline then.

Thanks, here it is:

8<----
From 8a94126eb3aa2824866405fb78bb0b8316f8fd00 Mon Sep 17 00:00:00 2001
From: Rabin Vincent
Date: Thu, 5 Jul 2012 15:52:23 +0530
Subject: [PATCH] mm: cma: don't replace lowmem pages with highmem

The filesystem layer expects pages in the block device's mapping to not
be in highmem (the mapping's gfp mask is set in bdget()), but CMA can
currently replace lowmem pages with highmem pages, leading to crashes in
filesystem code such as the one below:

  Unable to handle kernel NULL pointer dereference at virtual address 00000400
  pgd = c0c98000
  [00000400] *pgd=00c91831, *pte=00000000, *ppte=00000000
  Internal error: Oops: 817 [#1] PREEMPT SMP ARM
  CPU: 0    Not tainted  (3.5.0-rc5+ #80)
  PC is at __memzero+0x24/0x80
  ...
  Process fsstress (pid: 323, stack limit = 0xc0cbc2f0)
  Backtrace:
  [] (ext4_getblk+0x0/0x180) from [] (ext4_bread+0x1c/0x98)
  [] (ext4_bread+0x0/0x98) from [] (ext4_mkdir+0x160/0x3bc)
   r4:c15337f0
  [] (ext4_mkdir+0x0/0x3bc) from [] (vfs_mkdir+0x8c/0x98)
  [] (vfs_mkdir+0x0/0x98) from [] (sys_mkdirat+0x74/0xac)
   r6:00000000 r5:c152eb40 r4:000001ff r3:c14b43f0
  [] (sys_mkdirat+0x0/0xac) from [] (sys_mkdir+0x20/0x24)
   r6:beccdcf0 r5:00074000 r4:beccdbbc
  [] (sys_mkdir+0x0/0x24) from [] (ret_fast_syscall+0x0/0x30)

Fix this by replacing only highmem pages with highmem.

Reported-by: Laura Abbott
Signed-off-by: Rabin Vincent
Acked-by: Michal Nazarewicz
---
 mm/page_alloc.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4403009..4a4f921 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5635,7 +5635,12 @@ static struct page *
 __alloc_contig_migrate_alloc(struct page *page, unsigned long private,
 				  int **resultp)
 {
-	return alloc_page(GFP_HIGHUSER_MOVABLE);
+	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;
+
+	if (PageHighMem(page))
+		gfp_mask |= __GFP_HIGHMEM;
+
+	return alloc_page(gfp_mask);
 }

 /* [start, end) must belong to a single zone. */