From patchwork Tue Jun 11 02:11:22 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 17762
From: John Stultz <john.stultz@linaro.org>
To: minchan.kim@lge.com
Cc: Minchan Kim, John Stultz
Subject: [PATCH 03/13] vrange: Add vrange support to mm_structs
Date: Mon, 10 Jun 2013 19:11:22 -0700
Message-Id: <1370916692-9576-4-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1370916692-9576-1-git-send-email-john.stultz@linaro.org>
References: <1370916692-9576-1-git-send-email-john.stultz@linaro.org>

From: Minchan Kim

Allows for vranges to be managed against mm_structs. Includes support
for copying vrange trees on fork, as well as clearing them on exec.

Signed-off-by: Minchan Kim
[jstultz: Heavy refactoring.]
Signed-off-by: John Stultz
---
 include/linux/mm_types.h |  5 +++++
 include/linux/vrange.h   |  7 ++++++-
 kernel/fork.c            |  6 ++++++
 mm/vrange.c              | 30 ++++++++++++++++++++++++++++++
 4 files changed, 47 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index ace9a5f..2e02a6d 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -13,6 +13,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 #include 
 #include 
@@ -351,6 +353,9 @@ struct mm_struct {
 					 */
 
+#ifdef CONFIG_MMU
+	struct vrange_root vroot;
+#endif
 	unsigned long hiwater_rss;	/* High-watermark of RSS usage */
 	unsigned long hiwater_vm;	/* High-water virtual memory usage */
diff --git a/include/linux/vrange.h b/include/linux/vrange.h
index 2064cb0..13f4887 100644
--- a/include/linux/vrange.h
+++ b/include/linux/vrange.h
@@ -33,12 +33,17 @@ static inline int vrange_type(struct vrange *vrange)
 
 void vrange_init(void);
 extern void vrange_root_cleanup(struct vrange_root *vroot);
-
+extern int vrange_fork(struct mm_struct *new,
+				struct mm_struct *old);
 #else
 
 static inline void vrange_init(void) {};
 static inline void vrange_root_init(struct vrange_root *vroot, int type) {};
 static inline void vrange_root_cleanup(struct vrange_root *vroot) {};
+static inline int vrange_fork(struct mm_struct *new, struct mm_struct *old)
+{
+	return 0;
+}
 #endif
 
 #endif /* _LINIUX_VRANGE_H */
diff --git a/kernel/fork.c b/kernel/fork.c
index 987b28a..6d22625 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -71,6 +71,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
@@ -379,6 +380,9 @@ static int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
 	retval = khugepaged_fork(mm, oldmm);
 	if (retval)
 		goto out;
+	retval = vrange_fork(mm, oldmm);
+	if (retval)
+		goto out;
 
 	prev = NULL;
 	for (mpnt = oldmm->mmap; mpnt; mpnt = mpnt->vm_next) {
@@ -542,6 +546,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p)
 	spin_lock_init(&mm->page_table_lock);
 	mm->free_area_cache = TASK_UNMAPPED_BASE;
 	mm->cached_hole_size = ~0UL;
+	vrange_root_init(&mm->vroot, VRANGE_MM);
 	mm_init_aio(mm);
 	mm_init_owner(mm, p);
 
@@ -613,6 +618,7 @@ void mmput(struct mm_struct *mm)
 	if (atomic_dec_and_test(&mm->mm_users)) {
 		uprobe_clear_state(mm);
+		vrange_root_cleanup(&mm->vroot);
 		exit_aio(mm);
 		ksm_exit(mm);
 		khugepaged_exit(mm); /* must run before exit_mmap */
diff --git a/mm/vrange.c b/mm/vrange.c
index e3042e0..bbaa184 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -4,6 +4,7 @@
 
 #include 
 #include 
+#include 
 
 static struct kmem_cache *vrange_cachep;
 
@@ -179,3 +180,32 @@ void vrange_root_cleanup(struct vrange_root *vroot)
 
 	vrange_unlock(vroot);
 }
+int vrange_fork(struct mm_struct *new_mm, struct mm_struct *old_mm)
+{
+	struct vrange_root *new, *old;
+	struct vrange *range, *new_range;
+	struct rb_node *next;
+
+	new = &new_mm->vroot;
+	old = &old_mm->vroot;
+
+	vrange_lock(old);
+	next = rb_first(&old->v_rb);
+	while (next) {
+		range = vrange_entry(next);
+		next = rb_next(next);
+
+		new_range = __vrange_alloc(GFP_KERNEL);
+		if (!new_range)
+			goto fail;
+		__vrange_set(new_range, range->node.start,
+				range->node.last, range->purged);
+		__vrange_add(new_range, new);
+
+	}
+	vrange_unlock(old);
+	return 0;
+fail:
+	vrange_root_cleanup(new);
+	return -ENOMEM;
+}
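
For readers who want to poke at the copy-on-fork logic outside the kernel
tree, below is a minimal user-space sketch of the shape of vrange_fork():
walk the parent's set of ranges, duplicate each (start, last, purged)
triple into the child's set, and unwind everything already copied if an
allocation fails. All names here (sketch_vrange, sketch_vroot,
sketch_fork, sketch_cleanup) are hypothetical stand-ins, not kernel API:
a singly linked list replaces the rbtree-backed vrange_root, malloc()/
free() replace the vrange slab cache, and the vroot locking is omitted.

/*
 * Minimal user-space sketch of the vrange_fork() copy loop above.
 * Hypothetical types and helpers only; not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct sketch_vrange {
	unsigned long start;
	unsigned long last;
	bool purged;
	struct sketch_vrange *next;
};

struct sketch_vroot {
	struct sketch_vrange *head;
};

/* Mirrors the role of vrange_root_cleanup(): free every range. */
static void sketch_cleanup(struct sketch_vroot *vroot)
{
	while (vroot->head) {
		struct sketch_vrange *victim = vroot->head;

		vroot->head = victim->next;
		free(victim);
	}
}

/* Mirrors the role of vrange_fork(): duplicate parent ranges into the child. */
static int sketch_fork(struct sketch_vroot *child, const struct sketch_vroot *parent)
{
	const struct sketch_vrange *range;

	for (range = parent->head; range; range = range->next) {
		struct sketch_vrange *copy = malloc(sizeof(*copy));

		if (!copy) {
			sketch_cleanup(child);	/* unwind the partial copy */
			return -1;		/* stands in for -ENOMEM */
		}
		copy->start = range->start;
		copy->last = range->last;
		copy->purged = range->purged;
		copy->next = child->head;	/* order not preserved; irrelevant here */
		child->head = copy;
	}
	return 0;
}

int main(void)
{
	struct sketch_vrange r = { .start = 0x1000, .last = 0x1fff, .purged = false };
	struct sketch_vroot parent = { .head = &r };
	struct sketch_vroot child = { .head = NULL };

	if (sketch_fork(&child, &parent) == 0)
		printf("copied range [%lx, %lx]\n", child.head->start, child.head->last);
	sketch_cleanup(&child);
	return 0;
}

In the actual patch the walk runs under vrange_lock(old) and iterates the
interval tree with rb_first()/rb_next(); the sketch only illustrates the
duplicate-then-unwind structure of the loop.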