From patchwork Fri May 3 18:27:06 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 16697
From: John Stultz <john.stultz@linaro.org>
To: Minchan Kim
Cc: John Stultz
Subject: [PATCH 02/12] vrange: Add basic data structure and functions
Date: Fri, 3 May 2013 11:27:06 -0700
Message-Id: <1367605636-18284-3-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1367605636-18284-1-git-send-email-john.stultz@linaro.org>
References: <1367605636-18284-1-git-send-email-john.stultz@linaro.org>

From: Minchan Kim

This patch adds the vrange data structure (an interval tree) and
related functions. vrange is built on the generic interval tree
because it manages address ranges, which that structure fits well.

add_vrange and remove_vrange are the core functions behind the system
call that will be introduced in the next patch:

1. add_vrange inserts a new address range into the interval tree. If
   the new range overlaps an existing volatile range, the existing
   range is expanded to cover both, and if the existing range was in
   the purged state, the merged range inherits it. This is too
   coarse, and we need more fine-grained purged-state handling within
   a vrange (TODO). If the new range lies entirely inside an existing
   range, it is ignored.

2. remove_vrange removes an address range and returns the purged
   state of the removed ranges.

This patch copies some parts from John Stultz's work, but with
different semantics.
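To make the coalescing rules above concrete, here is a small
userspace toy model (illustration only, not part of the patch;
struct toy_range and toy_add are hypothetical names, and it tracks a
single range rather than an interval tree):

#include <stdbool.h>
#include <stdio.h>

struct toy_range {
	unsigned long start, last;	/* inclusive, like interval_tree_node */
	bool purged;
	bool valid;
};

static void toy_add(struct toy_range *r, unsigned long start,
		    unsigned long last)
{
	if (!r->valid) {
		r->start = start;
		r->last = last;
		r->purged = false;
		r->valid = true;
		return;
	}
	/* new range fully inside the existing one: ignored */
	if (start >= r->start && last <= r->last)
		return;
	/* overlap: the existing range grows to cover both, and its
	 * purged state is kept for the whole merged range -- the
	 * coarse behaviour the changelog marks as a TODO */
	if (start <= r->last && last >= r->start) {
		if (start < r->start)
			r->start = start;
		if (last > r->last)
			r->last = last;
	}
	/* disjoint adds are omitted here; the real code inserts
	 * another tree node for them */
}

int main(void)
{
	struct toy_range r = { .valid = false };

	toy_add(&r, 100, 200);
	r.purged = true;		/* pretend reclaim purged it */
	toy_add(&r, 150, 300);		/* overlaps: coalesced */
	printf("range [%lu, %lu] purged=%d\n", r.start, r.last, r.purged);
	/* prints: range [100, 300] purged=1 */
	return 0;
}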
Signed-off-by: Minchan Kim
[jstultz: Heavy rework and cleanups to make this infrastructure more
 easily reused for both file and anonymous pages]
Signed-off-by: John Stultz
---
 include/linux/vrange.h       |  45 ++++++++++++
 include/linux/vrange_types.h |  20 ++++++
 init/main.c                  |   2 +
 mm/Makefile                  |   2 +-
 mm/vrange.c                  | 165 +++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 233 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/vrange.h
 create mode 100644 include/linux/vrange_types.h
 create mode 100644 mm/vrange.c

diff --git a/include/linux/vrange.h b/include/linux/vrange.h
new file mode 100644
index 0000000..2c1c58a
--- /dev/null
+++ b/include/linux/vrange.h
@@ -0,0 +1,45 @@
+#ifndef _LINUX_VRANGE_H
+#define _LINUX_VRANGE_H
+
+#include <linux/vrange_types.h>
+#include <linux/mm.h>
+
+#define vrange_entry(ptr) \
+	container_of(ptr, struct vrange, node.rb)
+
+#ifdef CONFIG_MMU
+
+static inline void vrange_root_init(struct vrange_root *vroot, int type)
+{
+	vroot->type = type;
+	vroot->v_rb = RB_ROOT;
+	mutex_init(&vroot->v_lock);
+}
+
+static inline void vrange_lock(struct vrange_root *vroot)
+{
+	mutex_lock(&vroot->v_lock);
+}
+
+static inline void vrange_unlock(struct vrange_root *vroot)
+{
+	mutex_unlock(&vroot->v_lock);
+}
+
+static inline int vrange_type(struct vrange *vrange)
+{
+	return vrange->owner->type;
+}
+
+void vrange_init(void);
+extern void vrange_root_cleanup(struct vrange_root *vroot);
+
+#else
+
+static inline void vrange_init(void) {};
+static inline void vrange_root_init(struct vrange_root *vroot, int type) {};
+static inline void vrange_root_cleanup(struct vrange_root *vroot) {};
+
+#endif
+#endif /* _LINUX_VRANGE_H */
diff --git a/include/linux/vrange_types.h b/include/linux/vrange_types.h
new file mode 100644
index 0000000..e46942c
--- /dev/null
+++ b/include/linux/vrange_types.h
@@ -0,0 +1,20 @@
+#ifndef _LINUX_VRANGE_TYPES_H
+#define _LINUX_VRANGE_TYPES_H
+
+#include <linux/mutex.h>
+#include <linux/interval_tree.h>
+
+struct vrange_root {
+	struct rb_root v_rb;		/* vrange rb tree */
+	struct mutex v_lock;		/* Protect v_rb */
+	enum {VRANGE_MM, VRANGE_FILE} type; /* range root type */
+};
+
+struct vrange {
+	struct interval_tree_node node;
+	struct vrange_root *owner;
+	bool purged;
+};
+#endif
diff --git a/init/main.c b/init/main.c
index 63534a1..0b9e0b5 100644
--- a/init/main.c
+++ b/init/main.c
@@ -72,6 +72,7 @@
 #include
 #include
 #include
+#include <linux/vrange.h>
 #include
 #include
@@ -605,6 +606,7 @@ asmlinkage void __init start_kernel(void)
 	calibrate_delay();
 	pidmap_init();
 	anon_vma_init();
+	vrange_init();
 #ifdef CONFIG_X86
 	if (efi_enabled(EFI_RUNTIME_SERVICES))
 		efi_enter_virtual_mode();
diff --git a/mm/Makefile b/mm/Makefile
index 3a46287..a31235e 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -5,7 +5,7 @@
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= fremap.o highmem.o madvise.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
-			   vmalloc.o pagewalk.o pgtable-generic.o
+			   vmalloc.o pagewalk.o pgtable-generic.o vrange.o

 ifdef CONFIG_CROSS_MEMORY_ATTACH
 mmu-$(CONFIG_MMU)	+= process_vm_access.o
diff --git a/mm/vrange.c b/mm/vrange.c
new file mode 100644
index 0000000..565bca34
--- /dev/null
+++ b/mm/vrange.c
@@ -0,0 +1,165 @@
+/*
+ * mm/vrange.c
+ */
+
+#include <linux/vrange.h>
+#include <linux/slab.h>
+
+static struct kmem_cache *vrange_cachep;
+
+void __init vrange_init(void)
+{
+	vrange_cachep = KMEM_CACHE(vrange, SLAB_PANIC);
+}
+
+static struct vrange *__vrange_alloc(void)
+{
+	struct vrange *vrange = kmem_cache_alloc(vrange_cachep, GFP_KERNEL);
+	if (!vrange)
+		return vrange;
+	vrange->owner = NULL;
+	return vrange;
+}
+
+static void __vrange_free(struct vrange *range)
+{
+	WARN_ON(range->owner);
+	kmem_cache_free(vrange_cachep, range);
+}
+
+static void __vrange_add(struct vrange *range, struct vrange_root *vroot)
+{
+	range->owner = vroot;
+	interval_tree_insert(&range->node, &vroot->v_rb);
+}
+
+static void __vrange_remove(struct vrange *range)
+{
+	interval_tree_remove(&range->node, &range->owner->v_rb);
+	range->owner = NULL;
+}
+
+static inline void __vrange_set(struct vrange *range,
+		unsigned long start_idx, unsigned long end_idx,
+		bool purged)
+{
+	range->node.start = start_idx;
+	range->node.last = end_idx;
+	range->purged = purged;
+}
+
+static inline void __vrange_resize(struct vrange *range,
+		unsigned long start, unsigned long end)
+{
+	struct vrange_root *vroot = range->owner;
+	bool purged = range->purged;
+
+	__vrange_remove(range);
+	__vrange_set(range, start, end, purged);
+	__vrange_add(range, vroot);
+}
+
+static int vrange_add(struct vrange_root *vroot,
+			unsigned long start, unsigned long end)
+{
+	struct vrange *new_range, *range;
+	struct interval_tree_node *node, *next;
+	int purged = 0;
+
+	new_range = __vrange_alloc();
+	if (!new_range)
+		return -ENOMEM;
+
+	vrange_lock(vroot);
+	node = interval_tree_iter_first(&vroot->v_rb, start, end);
+	while (node) {
+		next = interval_tree_iter_next(node, start, end);
+
+		range = container_of(node, struct vrange, node);
+		if (node->start < start && node->last > end) {
+			__vrange_free(new_range);
+			goto out;
+		}
+
+		start = min_t(unsigned long, start, node->start);
+		end = max_t(unsigned long, end, node->last);
+
+		purged |= range->purged;
+		__vrange_remove(range);
+		__vrange_free(range);
+
+		node = next;
+	}
+	__vrange_set(new_range, start, end, purged);
+	__vrange_add(new_range, vroot);
+out:
+	vrange_unlock(vroot);
+	return 0;
+}
+
+static int vrange_remove(struct vrange_root *vroot,
+			unsigned long start, unsigned long end,
+			int *purged)
+{
+	struct vrange *new_range, *range;
+	struct interval_tree_node *node, *next;
+	bool used_new = false;
+
+	if (!purged)
+		return -EINVAL;
+	*purged = 0;
+
+	new_range = __vrange_alloc();
+	if (!new_range)
+		return -ENOMEM;
+
+	vrange_lock(vroot);
+	node = interval_tree_iter_first(&vroot->v_rb, start, end);
+	while (node) {
+		next = interval_tree_iter_next(node, start, end);
+
+		range = container_of(node, struct vrange, node);
+		*purged |= range->purged;
+
+		if (start <= node->start && end >= node->last) {
+			__vrange_remove(range);
+			__vrange_free(range);
+		} else if (node->start >= start) {
+			__vrange_resize(range, end, node->last);
+		} else if (node->last <= end) {
+			__vrange_resize(range, node->start, start);
+		} else {
+			used_new = true;
+			__vrange_set(new_range, end, node->last,
+					range->purged);
+			__vrange_resize(range, node->start, start);
+			__vrange_add(new_range, vroot);
+			break;
+		}
+
+		node = next;
+	}
+	vrange_unlock(vroot);
+
+	if (!used_new)
+		__vrange_free(new_range);
+
+	return 0;
+}
+
+void vrange_root_cleanup(struct vrange_root *vroot)
+{
+	struct vrange *range;
+	struct rb_node *next;
+
+	vrange_lock(vroot);
+	next = rb_first(&vroot->v_rb);
+	while (next) {
+		range = vrange_entry(next);
+		next = rb_next(next);
+		__vrange_remove(range);
+		__vrange_free(range);
+	}
+	vrange_unlock(vroot);
+}
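
For context, a hypothetical sketch of how a follow-on patch in the
series might pair these entry points end to end. Note that
vrange_add() and vrange_remove() are file-local (static) in this
patch, so calling them from outside mm/vrange.c assumes they are
exported by the later system-call patch; this is illustration only:

/* Hypothetical usage sketch -- assumes vrange_add()/vrange_remove()
 * are later made non-static; they are static in this patch. */
#include <linux/vrange.h>

static void vrange_usage_example(struct vrange_root *vroot)
{
	int purged = 0;

	vrange_root_init(vroot, VRANGE_MM);

	/* Overlapping volatile ranges coalesce into one tree node
	 * spanning 0x1000-0x4fff (see vrange_add()). */
	vrange_add(vroot, 0x1000, 0x2fff);
	vrange_add(vroot, 0x2000, 0x4fff);

	/* Carving out the middle splits the node in two; *purged
	 * reports whether any part of the removed span had been
	 * purged in the meantime. */
	vrange_remove(vroot, 0x2000, 0x2fff, &purged);

	/* Drop whatever ranges remain when the owner goes away. */
	vrange_root_cleanup(vroot);
}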