From patchwork Wed Jun 12 04:22:46 2013
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 17806
From: John Stultz <john.stultz@linaro.org>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Minchan Kim, Andrew Morton, Android Kernel Team, Robert Love,
    Mel Gorman, Hugh Dickins, Dave Hansen, Rik van Riel,
    Dmitry Adamushko, Dave Chinner, Neil Brown, Andrea Righi,
    Andrea Arcangeli, "Aneesh Kumar K.V", Mike Hommey, Taras Glek,
    Dhaval Giani, Jan Kara, KOSAKI Motohiro, Michel Lespinasse,
    linux-mm@kvack.org, John Stultz
Subject: [PATCH 3/8] vrange: Add vrange support to mm_structs
Date: Tue, 11 Jun 2013 21:22:46 -0700
Message-Id: <1371010971-15647-4-git-send-email-john.stultz@linaro.org>
In-Reply-To: <1371010971-15647-1-git-send-email-john.stultz@linaro.org>
References: <1371010971-15647-1-git-send-email-john.stultz@linaro.org>

From: Minchan Kim

Allows vranges to be managed against mm_structs. Includes support for
copying vrange trees on fork, as well as clearing them on exec.

Cc: Andrew Morton
Cc: Android Kernel Team
Cc: Robert Love
Cc: Mel Gorman
Cc: Hugh Dickins
Cc: Dave Hansen
Cc: Rik van Riel
Cc: Dmitry Adamushko
Cc: Dave Chinner
Cc: Neil Brown
Cc: Andrea Righi
Cc: Andrea Arcangeli
Cc: Aneesh Kumar K.V
Cc: Mike Hommey
Cc: Taras Glek
Cc: Dhaval Giani
Cc: Jan Kara
Cc: KOSAKI Motohiro
Cc: Michel Lespinasse
Cc: Minchan Kim
Cc: linux-mm@kvack.org
Signed-off-by: Minchan Kim
[jstultz: Heavy refactoring.]
Signed-off-by: John Stultz
---
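A note for readers following the series: the copy-on-fork semantics added
here are easy to model outside the kernel. The snippet below is a minimal
standalone sketch, not kernel code: a plain linked list stands in for the
kernel's rb-tree of ranges, and the model_* names are invented for
illustration. It mirrors the shape of vrange_fork() in the diff: walk the
parent's ranges, duplicate each (start, last, purged) triple into the
child, and discard any partial copy if an allocation fails.

#include <stdio.h>
#include <stdlib.h>

/* Standalone model of a per-mm vrange set: each range carries a
 * [start, last] interval plus the "purged" flag, as in the patch. */
struct model_vrange {
	unsigned long start, last;
	int purged;
	struct model_vrange *next;
};

struct model_vroot {		/* stands in for struct vrange_root */
	struct model_vrange *head;
};

static void model_cleanup(struct model_vroot *root)
{
	struct model_vrange *r;

	while ((r = root->head)) {
		root->head = r->next;
		free(r);
	}
}

/* Mirrors vrange_fork(): copy every parent range into the child,
 * undoing any partial copy if an allocation fails. */
static int model_fork(struct model_vroot *child, struct model_vroot *parent)
{
	struct model_vrange *range, *copy;

	for (range = parent->head; range; range = range->next) {
		copy = malloc(sizeof(*copy));
		if (!copy) {
			model_cleanup(child);
			return -1;
		}
		*copy = *range;		/* start, last, purged travel together */
		copy->next = child->head;
		child->head = copy;
	}
	return 0;
}

int main(void)
{
	struct model_vroot parent = { NULL }, child = { NULL };
	struct model_vrange r = { 0x1000, 0x1fff, 0, NULL };

	parent.head = &r;
	if (model_fork(&child, &parent) == 0)
		printf("child got [%lx, %lx], purged=%d\n",
		       child.head->start, child.head->last,
		       child.head->purged);
	model_cleanup(&child);
	return 0;
}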
 include/linux/mm_types.h |  5 +++++
 include/linux/vrange.h   |  7 ++++++-
 kernel/fork.c            |  6 ++++++
 mm/vrange.c              | 30 ++++++++++++++++++++++++++++++
 4 files changed, 47 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index ace9a5f..2e02a6d 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -13,6 +13,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
@@ -351,6 +353,9 @@ struct mm_struct {
 					 */
+#ifdef CONFIG_MMU
+	struct vrange_root vroot;
+#endif
 	unsigned long hiwater_rss;	/* High-watermark of RSS usage */
 	unsigned long hiwater_vm;	/* High-water virtual memory usage */
diff --git a/include/linux/vrange.h b/include/linux/vrange.h
index 2064cb0..13f4887 100644
--- a/include/linux/vrange.h
+++ b/include/linux/vrange.h
@@ -33,12 +33,17 @@ static inline int vrange_type(struct vrange *vrange)
 
 void vrange_init(void);
 extern void vrange_root_cleanup(struct vrange_root *vroot);
-
+extern int vrange_fork(struct mm_struct *new,
+			struct mm_struct *old);
 #else
 
 static inline void vrange_init(void) {};
 static inline void vrange_root_init(struct vrange_root *vroot, int type) {};
 static inline void vrange_root_cleanup(struct vrange_root *vroot) {};
+static inline int vrange_fork(struct mm_struct *new, struct mm_struct *old)
+{
+	return 0;
+}
 #endif
 
 #endif /* _LINIUX_VRANGE_H */
diff --git a/kernel/fork.c b/kernel/fork.c
index 987b28a..6d22625 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -71,6 +71,7 @@
 #include
 #include
 #include
+#include <linux/vrange.h>
 #include
 #include
@@ -379,6 +380,9 @@ static int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
 	retval = khugepaged_fork(mm, oldmm);
 	if (retval)
 		goto out;
+	retval = vrange_fork(mm, oldmm);
+	if (retval)
+		goto out;
 
 	prev = NULL;
 	for (mpnt = oldmm->mmap; mpnt; mpnt = mpnt->vm_next) {
@@ -542,6 +546,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p)
 	spin_lock_init(&mm->page_table_lock);
 	mm->free_area_cache = TASK_UNMAPPED_BASE;
 	mm->cached_hole_size = ~0UL;
+	vrange_root_init(&mm->vroot, VRANGE_MM);
 	mm_init_aio(mm);
 	mm_init_owner(mm, p);
@@ -613,6 +618,7 @@ void mmput(struct mm_struct *mm)
 	if (atomic_dec_and_test(&mm->mm_users)) {
 		uprobe_clear_state(mm);
+		vrange_root_cleanup(&mm->vroot);
 		exit_aio(mm);
 		ksm_exit(mm);
 		khugepaged_exit(mm); /* must run before exit_mmap */
diff --git a/mm/vrange.c b/mm/vrange.c
index e3042e0..bbaa184 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -4,6 +4,7 @@
 
 #include
 #include
+#include
 
 static struct kmem_cache *vrange_cachep;
@@ -179,3 +180,32 @@ void vrange_root_cleanup(struct vrange_root *vroot)
 	vrange_unlock(vroot);
 }
 
+int vrange_fork(struct mm_struct *new_mm, struct mm_struct *old_mm)
+{
+	struct vrange_root *new, *old;
+	struct vrange *range, *new_range;
+	struct rb_node *next;
+
+	new = &new_mm->vroot;
+	old = &old_mm->vroot;
+
+	vrange_lock(old);
+	next = rb_first(&old->v_rb);
+	while (next) {
+		range = vrange_entry(next);
+		next = rb_next(next);
+
+		new_range = __vrange_alloc(GFP_KERNEL);
+		if (!new_range)
+			goto fail;
+		__vrange_set(new_range, range->node.start,
+				range->node.last, range->purged);
+		__vrange_add(new_range, new);
+
+	}
+	vrange_unlock(old);
+	return 0;
+fail:
+	vrange_root_cleanup(new);
+	return -ENOMEM;
+}
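A closing observation on vrange_fork() as posted: the walk of the parent
tree runs under vrange_lock(old), and the success path pairs it with
vrange_unlock(old), but the fail: path returns -ENOMEM without unlocking,
so the parent's vrange mutex appears to remain held on allocation failure.
The standalone sketch below shows the balanced form, with a pthread mutex
standing in for the vrange mutex and invented model_* names; it is one
reading of the intended error path, not a revision from the series.

#include <errno.h>
#include <pthread.h>
#include <stdlib.h>

struct model_vrange {
	unsigned long start, last;
	int purged;
	struct model_vrange *next;
};

struct model_vroot {
	pthread_mutex_t lock;	/* stands in for vrange_lock()/vrange_unlock() */
	struct model_vrange *head;
};

static void model_cleanup(struct model_vroot *root)
{
	struct model_vrange *r;

	pthread_mutex_lock(&root->lock);
	while ((r = root->head)) {
		root->head = r->next;
		free(r);
	}
	pthread_mutex_unlock(&root->lock);
}

static int model_fork(struct model_vroot *child, struct model_vroot *parent)
{
	struct model_vrange *range, *copy;
	int ret = 0;

	pthread_mutex_lock(&parent->lock);
	for (range = parent->head; range; range = range->next) {
		copy = malloc(sizeof(*copy));
		if (!copy) {
			ret = -ENOMEM;
			break;
		}
		*copy = *range;
		copy->next = child->head;
		child->head = copy;
	}
	/* Drop the parent's lock on both the success and failure paths
	 * before tearing down any partial copy. */
	pthread_mutex_unlock(&parent->lock);
	if (ret)
		model_cleanup(child);
	return ret;
}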