From patchwork Thu Oct 3 00:51:38 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 20759
From: John Stultz
To: LKML
Cc: Minchan Kim, Andrew Morton, Android Kernel Team, Robert Love, Mel Gorman, Hugh Dickins, Dave Hansen, Rik van Riel, Dmitry Adamushko, Dave Chinner, Neil Brown, Andrea Righi, Andrea Arcangeli, "Aneesh Kumar K.V", Mike Hommey, Taras Glek, Dhaval Giani, Jan Kara, KOSAKI Motohiro, Michel Lespinasse, Rob Clark, "linux-mm@kvack.org", John Stultz
Subject: [PATCH 09/14] vrange: Add vrange LRU list for purging
Date: Wed, 2 Oct 2013 17:51:38 -0700
Message-Id: <1380761503-14509-10-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1380761503-14509-1-git-send-email-john.stultz@linaro.org>
References: <1380761503-14509-1-git-send-email-john.stultz@linaro.org>

From: Minchan Kim

This patch adds a vrange LRU list for managing which vranges to purge (in this implementation, purging will be driven by the slab shrinker introduced in upcoming patches). This is necessary for purging vranges on swapless systems, because the VM currently only ages anonymous pages if the system has a swap device. Since we would otherwise be duplicating the page LRU's tracking of hot/cold pages, we use a vrange LRU to manage the shrinking order.
Thus the shrinker will discard the entire vrange at once, and vranges are purged in the order they were marked volatile.

Cc: Andrew Morton
Cc: Android Kernel Team
Cc: Robert Love
Cc: Mel Gorman
Cc: Hugh Dickins
Cc: Dave Hansen
Cc: Rik van Riel
Cc: Dmitry Adamushko
Cc: Dave Chinner
Cc: Neil Brown
Cc: Andrea Righi
Cc: Andrea Arcangeli
Cc: Aneesh Kumar K.V
Cc: Mike Hommey
Cc: Taras Glek
Cc: Dhaval Giani
Cc: Jan Kara
Cc: KOSAKI Motohiro
Cc: Michel Lespinasse
Cc: Rob Clark
Cc: Minchan Kim
Cc: linux-mm@kvack.org
Signed-off-by: Minchan Kim
Signed-off-by: John Stultz
---
 include/linux/vrange_types.h |  2 ++
 mm/vrange.c                  | 61 ++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 58 insertions(+), 5 deletions(-)

diff --git a/include/linux/vrange_types.h b/include/linux/vrange_types.h
index 0d48b42..d7d451c 100644
--- a/include/linux/vrange_types.h
+++ b/include/linux/vrange_types.h
@@ -20,6 +20,8 @@ struct vrange {
 	struct interval_tree_node node;
 	struct vrange_root *owner;
 	int purged;
+	struct list_head lru;
+	atomic_t refcount;
 };
 
 #endif
diff --git a/mm/vrange.c b/mm/vrange.c
index c19a966..33e3ac1 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -14,8 +14,21 @@
 static struct kmem_cache *vrange_cachep;
 
+static struct vrange_list {
+	struct list_head list;
+	unsigned long size;
+	struct mutex lock;
+} vrange_list;
+
+static inline unsigned int vrange_size(struct vrange *range)
+{
+	return range->node.last + 1 - range->node.start;
+}
+
 static int __init vrange_init(void)
 {
+	INIT_LIST_HEAD(&vrange_list.list);
+	mutex_init(&vrange_list.lock);
 	vrange_cachep = KMEM_CACHE(vrange, SLAB_PANIC);
 	return 0;
 }
 
@@ -27,19 +40,56 @@ static struct vrange *__vrange_alloc(gfp_t flags)
 	if (!vrange)
 		return vrange;
 	vrange->owner = NULL;
+	INIT_LIST_HEAD(&vrange->lru);
+	atomic_set(&vrange->refcount, 1);
+
 	return vrange;
 }
 
 static void __vrange_free(struct vrange *range)
 {
 	WARN_ON(range->owner);
+	WARN_ON(atomic_read(&range->refcount) != 0);
+	WARN_ON(!list_empty(&range->lru));
+
 	kmem_cache_free(vrange_cachep, range);
 }
 
+static inline void __vrange_lru_add(struct vrange *range)
+{
+	mutex_lock(&vrange_list.lock);
+	WARN_ON(!list_empty(&range->lru));
+	list_add(&range->lru, &vrange_list.list);
+	vrange_list.size += vrange_size(range);
+	mutex_unlock(&vrange_list.lock);
+}
+
+static inline void __vrange_lru_del(struct vrange *range)
+{
+	mutex_lock(&vrange_list.lock);
+	if (!list_empty(&range->lru)) {
+		list_del_init(&range->lru);
+		vrange_list.size -= vrange_size(range);
+		WARN_ON(range->owner);
+	}
+	mutex_unlock(&vrange_list.lock);
+}
+
 static void __vrange_add(struct vrange *range, struct vrange_root *vroot)
 {
 	range->owner = vroot;
 	interval_tree_insert(&range->node, &vroot->v_rb);
+
+	WARN_ON(atomic_read(&range->refcount) <= 0);
+	__vrange_lru_add(range);
+}
+
+static inline void __vrange_put(struct vrange *range)
+{
+	if (atomic_dec_and_test(&range->refcount)) {
+		__vrange_lru_del(range);
+		__vrange_free(range);
+	}
 }
 
 static void __vrange_remove(struct vrange *range)
@@ -64,6 +114,7 @@ static inline void __vrange_resize(struct vrange *range,
 	bool purged = range->purged;
 
 	__vrange_remove(range);
+	__vrange_lru_del(range);
 	__vrange_set(range, start_idx, end_idx, purged);
 	__vrange_add(range, vroot);
 }
@@ -100,7 +151,7 @@ static int vrange_add(struct vrange_root *vroot,
 		range = vrange_from_node(node);
 		/* old range covers new range fully */
 		if (node->start <= start_idx && node->last >= end_idx) {
-			__vrange_free(new_range);
+			__vrange_put(new_range);
 			goto out;
 		}
 
@@ -109,7 +160,7 @@ static int vrange_add(struct vrange_root *vroot,
 		purged |= range->purged;
 
 		__vrange_remove(range);
-		__vrange_free(range);
+		__vrange_put(range);
 
 		node = next;
 	}
@@ -150,7 +201,7 @@ static int vrange_remove(struct vrange_root *vroot,
 		if (start_idx <= node->start && end_idx >= node->last) {
 			/* argumented range covers the range fully */
 			__vrange_remove(range);
-			__vrange_free(range);
+			__vrange_put(range);
 		} else if (node->start >= start_idx) {
 			/*
 			 * Argumented range covers over the left of the
@@ -181,7 +232,7 @@ static int vrange_remove(struct vrange_root *vroot,
 	vrange_unlock(vroot);
 
 	if (!used_new)
-		__vrange_free(new_range);
+		__vrange_put(new_range);
 
 	return 0;
 }
@@ -204,7 +255,7 @@ void vrange_root_cleanup(struct vrange_root *vroot)
 	while ((node = rb_first(&vroot->v_rb))) {
 		range = vrange_entry(node);
 		__vrange_remove(range);
-		__vrange_free(range);
+		__vrange_put(range);
 	}
 	vrange_unlock(vroot);
 }