From patchwork Sat Apr 14 01:08:00 2012
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 7808
From: John Stultz
To: linux-kernel@vger.kernel.org
Cc: John Stultz, Andrew Morton, Android Kernel Team, Robert Love,
	Mel Gorman, Hugh Dickins, Dave Hansen, Rik van Riel,
	Dmitry Adamushko, Dave Chinner, Neil Brown, Andrea Righi,
	"Aneesh Kumar K.V"
Subject: [PATCH 1/2] [RFC] Range tree implementation
Date: Fri, 13 Apr 2012 18:08:00 -0700
Message-Id: <1334365681-21586-2-git-send-email-john.stultz@linaro.org>
In-Reply-To: <1334365681-21586-1-git-send-email-john.stultz@linaro.org>
References: <1334365681-21586-1-git-send-email-john.stultz@linaro.org>

After Andrew suggested something like his mumbletree idea to better
store a list of ranges, I worked on a few different approaches, and
this is what I've finally managed to get working.

I suspect range-tree isn't a totally accurate name, but I couldn't
quite make out the difference between range trees and interval trees,
so I just picked one to call it. Do let me know if you have a better
name.

The idea of storing ranges in a tree is nice, but has a number of
complications. When adding a range, it's possible that a large range
will consume and merge a number of smaller ranges. When removing a
range, it's possible you may end up splitting an existing range,
causing one range to become two. This makes it very difficult to
provide generic list_head-like behavior, as the parent structures
would need to be duplicated and removed, and that has lots of memory
ownership issues.

So, this is a much simplified and more list_head-like implementation.
You can add a node to a tree, or remove a node from a tree, but the
generic implementation doesn't do the merging or splitting for you.
It does, however, provide helpers to find overlapping and adjacent
ranges.

Andrew also really wanted this range-tree implementation to be
reusable, so we don't duplicate the file locking logic. I'm not
totally convinced that the requirements between the volatile ranges
and file locking are really equivalent, but this reduced
implementation may make it possible.
Do let me know what you think, or if you have other ideas for better
ways to do the same.

Changelog:
v2:
* Reworked code to use an rbtree instead of splaying

v3:
* Added range_tree_next_in_range() to avoid having to start lookups
  from the root every time.
* Fixed some comments and return NULL instead of 0, as suggested by
  Aneesh Kumar K.V

v6:
* Fixed range_tree_in_range() so that it finds the earliest range,
  rather than the first. This allows the next_in_range() function to
  properly cover all the ranges in the tree.
* Minor cleanups to simplify some of the functions

CC: Andrew Morton
CC: Android Kernel Team
CC: Robert Love
CC: Mel Gorman
CC: Hugh Dickins
CC: Dave Hansen
CC: Rik van Riel
CC: Dmitry Adamushko
CC: Dave Chinner
CC: Neil Brown
CC: Andrea Righi
CC: Aneesh Kumar K.V
Signed-off-by: John Stultz
---
 include/linux/rangetree.h |   56 ++++++++++++++++++++
 lib/Makefile              |    2 +-
 lib/rangetree.c           |  128 +++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 185 insertions(+), 1 deletions(-)
 create mode 100644 include/linux/rangetree.h
 create mode 100644 lib/rangetree.c

diff --git a/include/linux/rangetree.h b/include/linux/rangetree.h
new file mode 100644
index 0000000..c61ce7c
--- /dev/null
+++ b/include/linux/rangetree.h
@@ -0,0 +1,56 @@
+#ifndef _LINUX_RANGETREE_H
+#define _LINUX_RANGETREE_H
+
+#include <linux/types.h>
+#include <linux/rbtree.h>
+
+struct range_tree_node {
+	struct rb_node rb;
+	u64 start;
+	u64 end;
+};
+
+struct range_tree_root {
+	struct rb_root head;
+};
+
+static inline void range_tree_init(struct range_tree_root *root)
+{
+	root->head = RB_ROOT;
+}
+
+static inline void range_tree_node_init(struct range_tree_node *node)
+{
+	rb_init_node(&node->rb);
+	node->start = 0;
+	node->end = 0;
+}
+
+static inline int range_tree_empty(struct range_tree_root *root)
+{
+	return RB_EMPTY_ROOT(&root->head);
+}
+
+static inline
+struct range_tree_node *range_tree_root_node(struct range_tree_root *root)
+{
+	struct range_tree_node *ret;
+	ret = container_of(root->head.rb_node, struct range_tree_node, rb);
+	return ret;
+}
+
+extern struct range_tree_node *range_tree_in_range(struct range_tree_root *root,
+						   u64 start, u64 end);
+extern struct range_tree_node *range_tree_in_range_adjacent(
+						struct range_tree_root *root,
+						u64 start, u64 end);
+extern struct range_tree_node *range_tree_next_in_range(
+						struct range_tree_node *node,
+						u64 start, u64 end);
+extern void range_tree_add(struct range_tree_root *root,
+			   struct range_tree_node *node);
+extern void range_tree_remove(struct range_tree_root *root,
+			      struct range_tree_node *node);
+#endif

diff --git a/lib/Makefile b/lib/Makefile
index 18515f0..f43ef0d 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -12,7 +12,7 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
	 idr.o int_sqrt.o extable.o prio_tree.o \
	 sha1.o md5.o irq_regs.o reciprocal_div.o argv_split.o \
	 proportions.o prio_heap.o ratelimit.o show_mem.o \
-	 is_single_threaded.o plist.o decompress.o
+	 is_single_threaded.o plist.o decompress.o rangetree.o

 lib-$(CONFIG_MMU) += ioremap.o
 lib-$(CONFIG_SMP) += cpumask.o
diff --git a/lib/rangetree.c b/lib/rangetree.c
new file mode 100644
index 0000000..08185bc
--- /dev/null
+++ b/lib/rangetree.c
@@ -0,0 +1,128 @@
+#include <linux/rangetree.h>
+#include <linux/rbtree.h>
+#include <linux/kernel.h>
+
+/**
+ * range_tree_in_range - Returns the first node that overlaps with the
+ *                       given range
+ * @root: range_tree root
+ * @start: range start
+ * @end: range end
+ */
+struct range_tree_node *range_tree_in_range(struct range_tree_root *root,
+					    u64 start, u64 end)
+{
+	struct rb_node *p = root->head.rb_node;
+	struct range_tree_node *candidate, *match = NULL;
+
+	while (p) {
+		candidate = rb_entry(p, struct range_tree_node, rb);
+		if (end < candidate->start)
+			p = p->rb_left;
+		else if (start > candidate->end)
+			p = p->rb_right;
+		else {
+			/* We found one, but try to find an earlier match */
+			match = candidate;
+			p = p->rb_left;
+		}
+	}
+
+	return match;
+}
+
+/**
+ * range_tree_in_range_adjacent - Returns the first node that overlaps or
+ *                                is adjacent with the given range.
+ * @root: range_tree root
+ * @start: range start
+ * @end: range end
+ */
+struct range_tree_node *range_tree_in_range_adjacent(
+					struct range_tree_root *root,
+					u64 start, u64 end)
+{
+	struct rb_node *p = root->head.rb_node;
+	struct range_tree_node *candidate;
+
+	while (p) {
+		candidate = rb_entry(p, struct range_tree_node, rb);
+		if (end + 1 < candidate->start)
+			p = p->rb_left;
+		else if (start > candidate->end + 1)
+			p = p->rb_right;
+		else
+			return candidate;
+	}
+	return NULL;
+}
+
+/**
+ * range_tree_next_in_range - Returns the next node after @node that
+ *                            overlaps with the given range
+ * @node: current range_tree_node
+ * @start: range start
+ * @end: range end
+ */
+struct range_tree_node *range_tree_next_in_range(struct range_tree_node *node,
+						 u64 start, u64 end)
+{
+	struct rb_node *next;
+	struct range_tree_node *candidate;
+
+	if (!node)
+		return NULL;
+	next = rb_next(&node->rb);
+	if (!next)
+		return NULL;
+
+	candidate = container_of(next, struct range_tree_node, rb);
+
+	if ((candidate->start > end) || (candidate->end < start))
+		return NULL;
+
+	return candidate;
+}
+
+/**
+ * range_tree_add - Add a node to a range tree
+ * @root: range tree to be added to
+ * @node: range_tree_node to be added
+ *
+ * Adds a node to the range tree.
+ */
+void range_tree_add(struct range_tree_root *root,
+		    struct range_tree_node *node)
+{
+	struct rb_node **p = &root->head.rb_node;
+	struct rb_node *parent = NULL;
+	struct range_tree_node *ptr;
+
+	WARN_ON_ONCE(!RB_EMPTY_NODE(&node->rb));
+
+	while (*p) {
+		parent = *p;
+		ptr = rb_entry(parent, struct range_tree_node, rb);
+		if (node->start < ptr->start)
+			p = &(*p)->rb_left;
+		else
+			p = &(*p)->rb_right;
+	}
+	rb_link_node(&node->rb, parent, p);
+	rb_insert_color(&node->rb, &root->head);
+}
+
+/**
+ * range_tree_remove - Removes a given node from the tree
+ * @root: root of tree
+ * @node: Node to be removed
+ *
+ * Removes a node from the range tree.
+ */
+void range_tree_remove(struct range_tree_root *root,
+		       struct range_tree_node *node)
+{
+	WARN_ON_ONCE(RB_EMPTY_NODE(&node->rb));
+
+	rb_erase(&node->rb, &root->head);
+	RB_CLEAR_NODE(&node->rb);
+}