From: John Stultz <john.stultz@linaro.org>
To: LKML
Cc: John Stultz, Andrew Morton, Android Kernel Team, Robert Love,
	Mel Gorman, Hugh Dickins, Dave Hansen, Rik van Riel,
	Dmitry Adamushko, Dave Chinner, Neil Brown, Andrea Righi,
	"Aneesh Kumar K.V", Taras Glek, Mike Hommey, Jan Kara,
	KOSAKI Motohiro
Subject: [PATCH 1/3] [RFC] Interval tree implementation
Date: Fri, 1 Jun 2012 16:38:44 -0700
Message-Id: <1338593926-17344-2-git-send-email-john.stultz@linaro.org>
In-Reply-To: <1338593926-17344-1-git-send-email-john.stultz@linaro.org>
References: <1338593926-17344-1-git-send-email-john.stultz@linaro.org>

After Andrew suggested something like his mumbletree idea to better
store a list of intervals, I worked on a few different approaches, and
this is what I've finally managed to get working.

The idea of storing intervals in a tree is nice, but it has a number of
complications. When adding an interval, it's possible that a large
interval will consume and merge a number of smaller intervals. When
removing an interval, it's possible you may end up splitting an
existing interval, causing one interval to become two. This makes it
very difficult to provide generic list_head-like behavior, as the
parent structures would need to be duplicated and removed, and that
has lots of memory-ownership issues.

So this is a much simplified and more list_head-like implementation.
You can add a node to a tree or remove a node from a tree, but the
generic implementation doesn't do the merging or splitting for you.
It does, however, provide helpers to find overlapping and adjacent
intervals.

Andrew also really wanted this interval-tree implementation to be
reusable, so we don't duplicate the file-locking logic. I'm not
totally convinced that the requirements of volatile intervals and
file locking are really equivalent, but this reduced implementation
may make it possible.
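For illustration, here is roughly how a caller would insert intervals
and then walk every stored interval overlapping a given range (a
hypothetical usage sketch, not part of this patch; the my_* names are
made up, and my_root is assumed to have been set up once with
interval_tree_init()):

#include <linux/intervaltree.h>

static struct interval_tree_root my_root;

/* Insert one interval; the caller must ensure [start, end] does not
 * intersect anything already in the tree.
 */
static void my_insert(struct interval_tree_node *n, u64 start, u64 end)
{
	interval_tree_node_init(n);
	n->start = start;
	n->end = end;
	interval_tree_add(&my_root, n);
}

/* Visit every stored interval intersecting [start, end]: find the
 * earliest overlap, then step forward until overlaps stop.
 */
static void my_walk(u64 start, u64 end)
{
	struct interval_tree_node *n;

	n = interval_tree_in_interval(&my_root, start, end);
	while (n) {
		/* ... use n->start / n->end here ... */
		n = interval_tree_next_in_interval(n, start, end);
	}
}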
Changelog:
v2:
* Reworked code to use an rbtree instead of splaying

v3:
* Added range_tree_next_in_range() to avoid having to start lookups
  from the root every time.
* Fixed some comments and return NULL instead of 0, as suggested by
  Aneesh Kumar K.V

v6:
* Fixed range_tree_in_range() so that it finds the earliest range,
  rather than the first. This allows the next_in_range() function to
  properly cover all the ranges in the tree.
* Minor cleanups to simplify some of the functions

v7:
* Changed terminology from rangetree to intervaltree as suggested by
  Jan Kara

CC: Andrew Morton
CC: Android Kernel Team
CC: Robert Love
CC: Mel Gorman
CC: Hugh Dickins
CC: Dave Hansen
CC: Rik van Riel
CC: Dmitry Adamushko
CC: Dave Chinner
CC: Neil Brown
CC: Andrea Righi
CC: Aneesh Kumar K.V
CC: Taras Glek
CC: Mike Hommey
CC: Jan Kara
CC: KOSAKI Motohiro
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 include/linux/intervaltree.h |   55 +++++++++++++++++++
 lib/Makefile                 |    2 +-
 lib/intervaltree.c           |  119 ++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 175 insertions(+), 1 deletions(-)
 create mode 100644 include/linux/intervaltree.h
 create mode 100644 lib/intervaltree.c

diff --git a/include/linux/intervaltree.h b/include/linux/intervaltree.h
new file mode 100644
index 0000000..cfaa174
--- /dev/null
+++ b/include/linux/intervaltree.h
@@ -0,0 +1,55 @@
+#ifndef _LINUX_INTERVALTREE_H
+#define _LINUX_INTERVALTREE_H
+
+#include <linux/rbtree.h>
+#include <linux/types.h>
+
+struct interval_tree_node {
+	struct rb_node rb;
+	u64 start;
+	u64 end;
+};
+
+struct interval_tree_root {
+	struct rb_root head;
+};
+
+static inline void interval_tree_init(struct interval_tree_root *root)
+{
+	root->head = RB_ROOT;
+}
+
+static inline void interval_tree_node_init(struct interval_tree_node *node)
+{
+	rb_init_node(&node->rb);
+	node->start = 0;
+	node->end = 0;
+}
+
+static inline int interval_tree_empty(struct interval_tree_root *root)
+{
+	return RB_EMPTY_ROOT(&root->head);
+}
+
+static inline
+struct interval_tree_node *interval_tree_root_node(
+				struct interval_tree_root *root)
+{
+	struct interval_tree_node *ret;
+	ret = container_of(root->head.rb_node, struct interval_tree_node, rb);
+	return ret;
+}
+
+extern struct interval_tree_node *interval_tree_in_interval(
+				struct interval_tree_root *root,
+				u64 start, u64 end);
+extern struct interval_tree_node *interval_tree_next_in_interval(
+				struct interval_tree_node *node,
+				u64 start, u64 end);
+extern void interval_tree_add(struct interval_tree_root *root,
+				struct interval_tree_node *node);
+extern void interval_tree_remove(struct interval_tree_root *root,
+				struct interval_tree_node *node);
+#endif
diff --git a/lib/Makefile b/lib/Makefile
index 8c31a0c..2bbad25 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -12,7 +12,7 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 	 idr.o int_sqrt.o extable.o prio_tree.o \
 	 sha1.o md5.o irq_regs.o reciprocal_div.o argv_split.o \
 	 proportions.o prio_heap.o ratelimit.o show_mem.o \
-	 is_single_threaded.o plist.o decompress.o
+	 is_single_threaded.o plist.o decompress.o intervaltree.o
 
 lib-$(CONFIG_MMU) += ioremap.o
 lib-$(CONFIG_SMP) += cpumask.o
diff --git a/lib/intervaltree.c b/lib/intervaltree.c
new file mode 100644
index 0000000..47c52e0
--- /dev/null
+++ b/lib/intervaltree.c
@@ -0,0 +1,119 @@
+#include <linux/intervaltree.h>
+#include <linux/kernel.h>
+#include <linux/rbtree.h>
+
+/* This code implements a naive interval tree, which stores a series of
+ * non-intersecting intervals.
+ * More complex interval trees can be read about here:
+ * http://en.wikipedia.org/wiki/Interval_tree
+ */
+
+
+/**
+ * interval_tree_in_interval - Returns the earliest node that intersects
+ *				with the given interval
+ * @root: interval_tree root
+ * @start: interval start
+ * @end: interval end
+ */
+struct interval_tree_node *interval_tree_in_interval(
+				struct interval_tree_root *root,
+				u64 start, u64 end)
+{
+	struct rb_node *p = root->head.rb_node;
+	struct interval_tree_node *candidate, *match = NULL;
+
+	while (p) {
+		candidate = rb_entry(p, struct interval_tree_node, rb);
+		if (end < candidate->start)
+			p = p->rb_left;
+		else if (start > candidate->end)
+			p = p->rb_right;
+		else {
+			/* We found one, but try to find an earlier match */
+			match = candidate;
+			p = p->rb_left;
+		}
+	}
+
+	return match;
+}
+
+
+/**
+ * interval_tree_next_in_interval - Returns the next node in an interval
+ *				tree that intersects with a specified interval
+ * @node: node to continue the search from
+ * @start: interval start
+ * @end: interval end
+ */
+struct interval_tree_node *interval_tree_next_in_interval(
+				struct interval_tree_node *node,
+				u64 start, u64 end)
+{
+	struct rb_node *next;
+	struct interval_tree_node *candidate;
+
+	if (!node)
+		return NULL;
+	next = rb_next(&node->rb);
+	if (!next)
+		return NULL;
+
+	candidate = container_of(next, struct interval_tree_node, rb);
+
+	if ((candidate->start > end) || (candidate->end < start))
+		return NULL;
+
+	return candidate;
+}
+
+/**
+ * interval_tree_add - Add a node to an interval tree
+ * @root: interval tree to be added to
+ * @node: interval_tree_node to be added
+ *
+ * Adds a node to the interval tree. The added interval must not
+ * intersect with existing intervals in the tree.
+ */
+void interval_tree_add(struct interval_tree_root *root,
+			struct interval_tree_node *node)
+{
+	struct rb_node **p = &root->head.rb_node;
+	struct rb_node *parent = NULL;
+	struct interval_tree_node *ptr;
+
+	WARN_ON_ONCE(!RB_EMPTY_NODE(&node->rb));
+
+	/* XXX might want to conditionalize this on debugging checks */
+	WARN_ON_ONCE(!!interval_tree_in_interval(root, node->start, node->end));
+
+	while (*p) {
+		parent = *p;
+		ptr = rb_entry(parent, struct interval_tree_node, rb);
+		if (node->start < ptr->start)
+			p = &(*p)->rb_left;
+		else
+			p = &(*p)->rb_right;
+	}
+	rb_link_node(&node->rb, parent, p);
+	rb_insert_color(&node->rb, &root->head);
+}
+
+
+/**
+ * interval_tree_remove - Removes a given node from the tree
+ * @root: root of tree
+ * @node: node to be removed
+ *
+ * Removes a node from the interval tree.
+ */
+void interval_tree_remove(struct interval_tree_root *root,
+			struct interval_tree_node *node)
+{
+	WARN_ON_ONCE(RB_EMPTY_NODE(&node->rb));
+
+	rb_erase(&node->rb, &root->head);
+	RB_CLEAR_NODE(&node->rb);
+}
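
Since the tree itself never merges or splits, a caller that wants
add-with-merge semantics has to build it from these helpers. A rough
caller-side sketch of that (again hypothetical, not part of this
patch):

/* Coalesce any existing intervals intersecting [new->start, new->end]
 * into *new, then insert it. Assumes *new was prepared with
 * interval_tree_node_init(); ownership of each removed node returns
 * to the caller.
 */
static void my_add_merged(struct interval_tree_root *root,
			  struct interval_tree_node *new)
{
	struct interval_tree_node *old;

	while ((old = interval_tree_in_interval(root, new->start,
						new->end))) {
		/* grow the new interval to cover the old one */
		if (old->start < new->start)
			new->start = old->start;
		if (old->end > new->end)
			new->end = old->end;
		interval_tree_remove(root, old);
	}
	interval_tree_add(root, new);
}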