From patchwork Tue Oct 1 18:38:45 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 20726
From: John Stultz <john.stultz@linaro.org>
To: Minchan Kim, Dhaval Giani
Subject: [PATCH 01/14] vrange: Add basic data structure and functions
Date: Tue, 1 Oct 2013 11:38:45 -0700
Message-Id: <1380652738-8000-2-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1380652738-8000-1-git-send-email-john.stultz@linaro.org>
References: <1380652738-8000-1-git-send-email-john.stultz@linaro.org>

From: Minchan Kim

This patch adds the vrange data structure and its core management
functions. vrange uses the generic interval tree as its main data
structure, since it tracks address ranges and the interval tree fits
that purpose well.

vrange_add() and vrange_remove() are the core operations behind the
vrange() system call that will be introduced in a following patch.

vrange_add() inserts a new address range into the interval tree. If the
new range overlaps an existing volatile range, the existing range is
expanded to cover the new one, and if the existing range had already
been purged, the extended range inherits that purged state. If the new
range lies entirely inside an existing range, it is ignored.

vrange_remove() removes an address range and returns the purged state
of the ranges it removed.
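To make the coalescing rule above concrete, here is a small,
self-contained userspace illustration. It is not part of the patch and
does not use the kernel code; the demo_range/demo_merge names are
hypothetical and only mirror the semantics described above:

#include <stdbool.h>
#include <stdio.h>

struct demo_range {
	unsigned long start, last;	/* inclusive, like interval_tree_node */
	bool purged;
};

/* Merge 'new' into 'old' if they overlap; returns false when disjoint. */
static bool demo_merge(struct demo_range *old, const struct demo_range *new)
{
	if (new->last < old->start || new->start > old->last)
		return false;			/* disjoint: would be inserted separately */
	if (old->start <= new->start && old->last >= new->last)
		return true;			/* fully covered: new range is ignored */
	if (new->start < old->start)
		old->start = new->start;	/* expand to the left */
	if (new->last > old->last)
		old->last = new->last;		/* expand to the right */
	old->purged |= new->purged;		/* extended range inherits purged state */
	return true;
}

int main(void)
{
	struct demo_range old = { .start = 0x1000, .last = 0x1fff, .purged = true };
	struct demo_range new = { .start = 0x1800, .last = 0x2fff, .purged = false };

	if (demo_merge(&old, &new))
		printf("merged: [0x%lx, 0x%lx] purged=%d\n",
		       old.start, old.last, (int)old.purged);
	return 0;
}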
Cc: Andrew Morton
Cc: Android Kernel Team
Cc: Robert Love
Cc: Mel Gorman
Cc: Hugh Dickins
Cc: Dave Hansen
Cc: Rik van Riel
Cc: Dmitry Adamushko
Cc: Dave Chinner
Cc: Neil Brown
Cc: Andrea Righi
Cc: Andrea Arcangeli
Cc: Aneesh Kumar K.V
Cc: Mike Hommey
Cc: Taras Glek
Cc: Dhaval Giani
Cc: Jan Kara
Cc: KOSAKI Motohiro
Cc: Michel Lespinasse
Cc: Rob Clark
Cc: Minchan Kim
Cc: linux-mm@kvack.org
Signed-off-by: Minchan Kim
[jstultz: Heavy rework and cleanups to make this infrastructure more easily reused for both file and anonymous pages]
Signed-off-by: John Stultz
---
 include/linux/vrange.h       |  48 ++++++++++++
 include/linux/vrange_types.h |  25 ++++++
 lib/Makefile                 |   2 +-
 mm/Makefile                  |   2 +-
 mm/vrange.c                  | 183 +++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 258 insertions(+), 2 deletions(-)
 create mode 100644 include/linux/vrange.h
 create mode 100644 include/linux/vrange_types.h
 create mode 100644 mm/vrange.c

diff --git a/include/linux/vrange.h b/include/linux/vrange.h
new file mode 100644
index 0000000..0d378a5
--- /dev/null
+++ b/include/linux/vrange.h
@@ -0,0 +1,48 @@
+#ifndef _LINUX_VRANGE_H
+#define _LINUX_VRANGE_H
+
+#include <linux/vrange_types.h>
+#include <linux/mm.h>
+
+#define vrange_from_node(node_ptr) \
+	container_of(node_ptr, struct vrange, node)
+
+#define vrange_entry(ptr) \
+	container_of(ptr, struct vrange, node.rb)
+
+#ifdef CONFIG_MMU
+
+static inline void vrange_root_init(struct vrange_root *vroot, int type,
+					void *object)
+{
+	vroot->type = type;
+	vroot->v_rb = RB_ROOT;
+	mutex_init(&vroot->v_lock);
+	vroot->object = object;
+}
+
+static inline void vrange_lock(struct vrange_root *vroot)
+{
+	mutex_lock(&vroot->v_lock);
+}
+
+static inline void vrange_unlock(struct vrange_root *vroot)
+{
+	mutex_unlock(&vroot->v_lock);
+}
+
+static inline int vrange_type(struct vrange *vrange)
+{
+	return vrange->owner->type;
+}
+
+extern void vrange_root_cleanup(struct vrange_root *vroot);
+
+#else
+
+static inline void vrange_root_init(struct vrange_root *vroot,
+					int type, void *object) {};
+static inline void vrange_root_cleanup(struct vrange_root *vroot) {};
+
+#endif
+#endif /* _LINUX_VRANGE_H */
diff --git a/include/linux/vrange_types.h b/include/linux/vrange_types.h
new file mode 100644
index 0000000..0d48b42
--- /dev/null
+++ b/include/linux/vrange_types.h
@@ -0,0 +1,25 @@
+#ifndef _LINUX_VRANGE_TYPES_H
+#define _LINUX_VRANGE_TYPES_H
+
+#include <linux/mutex.h>
+#include <linux/interval_tree.h>
+
+enum vrange_type {
+	VRANGE_MM,
+	VRANGE_FILE,
+};
+
+struct vrange_root {
+	struct rb_root v_rb;		/* vrange rb tree */
+	struct mutex v_lock;		/* Protect v_rb */
+	enum vrange_type type;		/* range root type */
+	void *object;			/* pointer to mm_struct or mapping */
+};
+
+struct vrange {
+	struct interval_tree_node node;
+	struct vrange_root *owner;
+	int purged;
+};
+#endif
+
diff --git a/lib/Makefile b/lib/Makefile
index 7baccfd..c8739ee 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -13,7 +13,7 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 	 sha1.o md5.o irq_regs.o reciprocal_div.o argv_split.o \
 	 proportions.o flex_proportions.o prio_heap.o ratelimit.o show_mem.o \
 	 is_single_threaded.o plist.o decompress.o kobject_uevent.o \
-	 earlycpio.o percpu-refcount.o
+	 earlycpio.o percpu-refcount.o interval_tree.o
 
 obj-$(CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS) += usercopy.o
 lib-$(CONFIG_MMU) += ioremap.o
diff --git a/mm/Makefile b/mm/Makefile
index f008033..54928af 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -5,7 +5,7 @@
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= fremap.o highmem.o madvise.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
-			   vmalloc.o pagewalk.o pgtable-generic.o
+			   vmalloc.o pagewalk.o pgtable-generic.o vrange.o
 
 ifdef CONFIG_CROSS_MEMORY_ATTACH
 mmu-$(CONFIG_MMU)	+= process_vm_access.o
diff --git a/mm/vrange.c b/mm/vrange.c
new file mode 100644
index 0000000..866566c
--- /dev/null
+++ b/mm/vrange.c
@@ -0,0 +1,183 @@
+/*
+ * mm/vrange.c
+ */
+
+#include <linux/vrange.h>
+#include <linux/slab.h>
+
+static struct kmem_cache *vrange_cachep;
+
+static int __init vrange_init(void)
+{
+	vrange_cachep = KMEM_CACHE(vrange, SLAB_PANIC);
+	return 0;
+}
+module_init(vrange_init);
+
+static struct vrange *__vrange_alloc(gfp_t flags)
+{
+	struct vrange *vrange = kmem_cache_alloc(vrange_cachep, flags);
+	if (!vrange)
+		return vrange;
+	vrange->owner = NULL;
+	return vrange;
+}
+
+static void __vrange_free(struct vrange *range)
+{
+	WARN_ON(range->owner);
+	kmem_cache_free(vrange_cachep, range);
+}
+
+static void __vrange_add(struct vrange *range, struct vrange_root *vroot)
+{
+	range->owner = vroot;
+	interval_tree_insert(&range->node, &vroot->v_rb);
+}
+
+static void __vrange_remove(struct vrange *range)
+{
+	interval_tree_remove(&range->node, &range->owner->v_rb);
+	range->owner = NULL;
+}
+
+static inline void __vrange_set(struct vrange *range,
+		unsigned long start_idx, unsigned long end_idx,
+		bool purged)
+{
+	range->node.start = start_idx;
+	range->node.last = end_idx;
+	range->purged = purged;
+}
+
+static inline void __vrange_resize(struct vrange *range,
+		unsigned long start_idx, unsigned long end_idx)
+{
+	struct vrange_root *vroot = range->owner;
+	bool purged = range->purged;
+
+	__vrange_remove(range);
+	__vrange_set(range, start_idx, end_idx, purged);
+	__vrange_add(range, vroot);
+}
+
+static int vrange_add(struct vrange_root *vroot,
+		unsigned long start_idx, unsigned long end_idx)
+{
+	struct vrange *new_range, *range;
+	struct interval_tree_node *node, *next;
+	int purged = 0;
+
+	new_range = __vrange_alloc(GFP_KERNEL);
+	if (!new_range)
+		return -ENOMEM;
+
+	vrange_lock(vroot);
+
+	node = interval_tree_iter_first(&vroot->v_rb, start_idx, end_idx);
+	while (node) {
+		next = interval_tree_iter_next(node, start_idx, end_idx);
+		range = vrange_from_node(node);
+		/* old range covers new range fully */
+		if (node->start <= start_idx && node->last >= end_idx) {
+			__vrange_free(new_range);
+			goto out;
+		}
+
+		start_idx = min_t(unsigned long, start_idx, node->start);
+		end_idx = max_t(unsigned long, end_idx, node->last);
+		purged |= range->purged;
+
+		__vrange_remove(range);
+		__vrange_free(range);
+
+		node = next;
+	}
+
+	__vrange_set(new_range, start_idx, end_idx, purged);
+	__vrange_add(new_range, vroot);
+out:
+	vrange_unlock(vroot);
+	return 0;
+}
+
+static int vrange_remove(struct vrange_root *vroot,
+		unsigned long start_idx, unsigned long end_idx,
+		int *purged)
+{
+	struct vrange *new_range, *range;
+	struct interval_tree_node *node, *next;
+	bool used_new = false;
+
+	if (!purged)
+		return -EINVAL;
+
+	*purged = 0;
+
+	new_range = __vrange_alloc(GFP_KERNEL);
+	if (!new_range)
+		return -ENOMEM;
+
+	vrange_lock(vroot);
+
+	node = interval_tree_iter_first(&vroot->v_rb, start_idx, end_idx);
+	while (node) {
+		next = interval_tree_iter_next(node, start_idx, end_idx);
+		range = vrange_from_node(node);
+
+		*purged |= range->purged;
+
+		if (start_idx <= node->start && end_idx >= node->last) {
+			/* passed-in range covers this range fully */
+			__vrange_remove(range);
+			__vrange_free(range);
+		} else if (node->start >= start_idx) {
+			/*
+			 * Passed-in range covers the left part of this
+			 * range
+			 */
+			__vrange_resize(range, end_idx + 1,
+					node->last);
+		} else if (node->last <= end_idx) {
+			/*
+			 * Passed-in range covers the right part of this
+			 * range
+			 */
+			__vrange_resize(range, node->start, start_idx - 1);
+		} else {
+			/*
+			 * Passed-in range is in the middle of this range
+			 */
+			unsigned long last = node->last;
+			used_new = true;
+			__vrange_resize(range, node->start, start_idx - 1);
+			__vrange_set(new_range, end_idx + 1, last,
+					range->purged);
+			__vrange_add(new_range, vroot);
+			break;
+		}
+
+		node = next;
+	}
+	vrange_unlock(vroot);
+
+	if (!used_new)
+		__vrange_free(new_range);
+
+	return 0;
+}
+
+void vrange_root_cleanup(struct vrange_root *vroot)
+{
+	struct vrange *range;
+	struct rb_node *node;
+
+	vrange_lock(vroot);
+	/* We should remove node by post-order traversal */
+	while ((node = rb_first(&vroot->v_rb))) {
+		range = vrange_entry(node);
+		__vrange_remove(range);
+		__vrange_free(range);
+	}
+	vrange_unlock(vroot);
+}
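
For reviewers, a similar userspace-only sketch (again with hypothetical
demo_range/demo_remove names, not kernel code) of the four cases that
vrange_remove() distinguishes when a removal request [start, end]
intersects an existing range; the interval tree iterator guarantees the
intersection, so the sketch assumes it too:

#include <stdio.h>

struct demo_range { unsigned long start, last; };

static void demo_remove(struct demo_range r, unsigned long start, unsigned long end)
{
	if (start <= r.start && end >= r.last) {
		/* request covers the range fully: range is dropped */
		printf("[0x%lx,0x%lx] removed entirely\n", r.start, r.last);
	} else if (r.start >= start) {
		/* request covers the left part: keep the right remainder */
		printf("[0x%lx,0x%lx] trimmed to [0x%lx,0x%lx]\n",
		       r.start, r.last, end + 1, r.last);
	} else if (r.last <= end) {
		/* request covers the right part: keep the left remainder */
		printf("[0x%lx,0x%lx] trimmed to [0x%lx,0x%lx]\n",
		       r.start, r.last, r.start, start - 1);
	} else {
		/* request punches a hole in the middle: two ranges remain */
		printf("[0x%lx,0x%lx] split into [0x%lx,0x%lx] and [0x%lx,0x%lx]\n",
		       r.start, r.last, r.start, start - 1, end + 1, r.last);
	}
}

int main(void)
{
	struct demo_range r = { 0x1000, 0x4fff };

	demo_remove(r, 0x0000, 0x5fff);	/* full cover   */
	demo_remove(r, 0x0000, 0x1fff);	/* left trim    */
	demo_remove(r, 0x4000, 0x5fff);	/* right trim   */
	demo_remove(r, 0x2000, 0x2fff);	/* middle split */
	return 0;
}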