From patchwork Wed Jun 12 04:22:49 2013
From: John Stultz
To: LKML
Cc: Minchan Kim, Andrew Morton, Android Kernel Team, Robert Love, Mel Gorman, Hugh Dickins, Dave Hansen, Rik van Riel, Dmitry Adamushko, Dave Chinner, Neil Brown, Andrea Righi, Andrea Arcangeli, "Aneesh Kumar K.V", Mike Hommey, Taras Glek, Dhaval Giani, Jan Kara, KOSAKI Motohiro, Michel Lespinasse, "linux-mm@kvack.org", John Stultz
Subject: [PATCH 6/8] vrange: Add GFP_NO_VRANGE allocation flag
Date: Tue, 11 Jun 2013 21:22:49 -0700
Message-Id: <1371010971-15647-7-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1371010971-15647-1-git-send-email-john.stultz@linaro.org>
References: <1371010971-15647-1-git-send-email-john.stultz@linaro.org>

From: Minchan Kim

When cloning the vroot tree during a fork, we have to allocate memory while holding the vroot lock. This is problematic, as the memory allocation can trigger reclaim, which might in turn need to grab a vroot lock in order to find purgable pages.

Thus this patch introduces GFP_NO_VRANGE, which allows us to avoid having an allocation made on behalf of vrange trigger any volatile range purging.
XXX: We're not yet using this flag in the later purge paths, so we still get the lockdep warnings.

Cc: Andrew Morton
Cc: Android Kernel Team
Cc: Robert Love
Cc: Mel Gorman
Cc: Hugh Dickins
Cc: Dave Hansen
Cc: Rik van Riel
Cc: Dmitry Adamushko
Cc: Dave Chinner
Cc: Neil Brown
Cc: Andrea Righi
Cc: Andrea Arcangeli
Cc: Aneesh Kumar K.V
Cc: Mike Hommey
Cc: Taras Glek
Cc: Dhaval Giani
Cc: Jan Kara
Cc: KOSAKI Motohiro
Cc: Michel Lespinasse
Cc: Minchan Kim
Cc: linux-mm@kvack.org
Signed-off-by: Minchan Kim
[jstultz: Split out from a different patch, created new commit message]
Signed-off-by: John Stultz
---
 include/linux/gfp.h | 7 +++++--
 mm/vrange.c         | 2 +-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 0f615eb..fa52199 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -35,6 +35,7 @@ struct vm_area_struct;
 #define ___GFP_NO_KSWAPD	0x400000u
 #define ___GFP_OTHER_NODE	0x800000u
 #define ___GFP_WRITE		0x1000000u
+#define ___GFP_NO_VRANGE	0x2000000u
 /* If the above are modified, __GFP_BITS_SHIFT may need updating */
 
 /*
@@ -70,6 +71,7 @@ struct vm_area_struct;
 #define __GFP_HIGH	((__force gfp_t)___GFP_HIGH)	/* Should access emergency pools? */
 #define __GFP_IO	((__force gfp_t)___GFP_IO)	/* Can start physical IO? */
 #define __GFP_FS	((__force gfp_t)___GFP_FS)	/* Can call down to low-level FS? */
+#define __GFP_NO_VRANGE	((__force gfp_t)___GFP_NO_VRANGE)	/* Can't reclaim volatile pages */
 #define __GFP_COLD	((__force gfp_t)___GFP_COLD)	/* Cache-cold page required */
 #define __GFP_NOWARN	((__force gfp_t)___GFP_NOWARN)	/* Suppress page allocation failure warning */
 #define __GFP_REPEAT	((__force gfp_t)___GFP_REPEAT)	/* See above */
@@ -99,7 +101,7 @@ struct vm_area_struct;
  */
 #define __GFP_NOTRACK_FALSE_POSITIVE (__GFP_NOTRACK)
 
-#define __GFP_BITS_SHIFT 25	/* Room for N __GFP_FOO bits */
+#define __GFP_BITS_SHIFT 26	/* Room for N __GFP_FOO bits */
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
 
 /* This equals 0, but use constants in case they ever change */
@@ -134,7 +136,8 @@ struct vm_area_struct;
 /* Control page allocator reclaim behavior */
 #define GFP_RECLAIM_MASK (__GFP_WAIT|__GFP_HIGH|__GFP_IO|__GFP_FS|\
 			__GFP_NOWARN|__GFP_REPEAT|__GFP_NOFAIL|\
-			__GFP_NORETRY|__GFP_MEMALLOC|__GFP_NOMEMALLOC)
+			__GFP_NORETRY|__GFP_MEMALLOC|__GFP_NOMEMALLOC|\
+			__GFP_NO_VRANGE)
 
 /* Control slab gfp mask during early boot */
 #define GFP_BOOT_MASK (__GFP_BITS_MASK & ~(__GFP_WAIT|__GFP_IO|__GFP_FS))

diff --git a/mm/vrange.c b/mm/vrange.c
index f3c2465..5278939 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -204,7 +204,7 @@ int vrange_fork(struct mm_struct *new_mm, struct mm_struct *old_mm)
 		range = vrange_entry(next);
 		next = rb_next(next);
 
-		new_range = __vrange_alloc(GFP_KERNEL);
+		new_range = __vrange_alloc(GFP_KERNEL|__GFP_NO_VRANGE);
 		if (!new_range)
 			goto fail;
 		__vrange_set(new_range, range->node.start,