From patchwork Tue Jun 11 02:11:28 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 17768
From: John Stultz
To: minchan.kim@lge.com
Cc: Minchan Kim, John Stultz
Subject: [PATCH 09/13] vrange: Add LRU handling for victim vrange
Date: Mon, 10 Jun 2013 19:11:28 -0700
Message-Id: <1370916692-9576-10-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1370916692-9576-1-git-send-email-john.stultz@linaro.org>
References: <1370916692-9576-1-git-send-email-john.stultz@linaro.org>

From: Minchan Kim

This patch adds an LRU data structure for selecting a victim vrange when
memory pressure happens. Basically, the VM will select an old vrange, but
if the user has recently tried to access a purged page, the vrange that
includes that page will be activated, because a page fault means one of
two things: either the user process will be killed, or it will recover
from the SIGBUS and continue its work. In the latter case, we have to
keep the vrange out of victim selection.

I admit LRU might not be the best policy, but I can't think of a better
idea, so I wanted to keep it simple. I think user space can handle this
better given enough information, so I hope it will manage it via the
mempressure notifier. Otherwise, if you have a better idea, it's welcome!
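For context: get_victim_vrange(), mentioned in a comment below, is not
part of this patch and presumably lands later in the series. A minimal
sketch of the consumer side this LRU is meant for -- a hypothetical shape,
not the real implementation -- assuming victims are taken from the tail
(oldest first), using the lru_vrange list and lru_lock introduced below:

/*
 * Hypothetical sketch, not the implementation from this series.
 * New and recently-faulted vranges sit at the head of lru_vrange,
 * so the oldest candidate is at the tail.
 */
static struct vrange *get_victim_vrange(void)
{
	struct vrange *vrange;

	spin_lock(&lru_lock);
	if (list_empty(&lru_vrange)) {
		spin_unlock(&lru_lock);
		return NULL;
	}
	/*
	 * Take the victim off the list so a concurrent page fault
	 * cannot re-activate it while it is being purged; see the
	 * race comment in lru_move_vrange_to_head().
	 */
	vrange = list_entry(lru_vrange.prev, struct vrange, lru);
	list_del_init(&vrange->lru);
	spin_unlock(&lru_lock);

	return vrange;
}

After purging, the caller would re-insert the vrange at the head with
lru_add_vrange(), which is exactly what the race comment in
lru_move_vrange_to_head() relies on.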
Signed-off-by: Minchan Kim
Signed-off-by: John Stultz
---
 include/linux/vrange.h       |  3 +++
 include/linux/vrange_types.h |  1 +
 mm/memory.c                  |  1 +
 mm/vrange.c                  | 49 ++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 54 insertions(+)

diff --git a/include/linux/vrange.h b/include/linux/vrange.h
index 75754d1..fb101c6 100644
--- a/include/linux/vrange.h
+++ b/include/linux/vrange.h
@@ -43,6 +43,9 @@ bool vrange_address(struct mm_struct *mm, unsigned long start,
 
 extern bool is_purged_vrange(struct mm_struct *mm, unsigned long address);
 
+unsigned int discard_vrange_pages(struct zone *zone, int nr_to_discard);
+void lru_move_vrange_to_head(struct mm_struct *mm, unsigned long address);
+
 #else
 
 static inline void vrange_init(void) {};
diff --git a/include/linux/vrange_types.h b/include/linux/vrange_types.h
index 7f44c01..71ebc70 100644
--- a/include/linux/vrange_types.h
+++ b/include/linux/vrange_types.h
@@ -14,6 +14,7 @@ struct vrange {
 	struct interval_tree_node node;
 	struct vrange_root *owner;
 	int purged;
+	struct list_head lru;		/* protected by lru_lock */
 };
 
 #endif
diff --git a/mm/memory.c b/mm/memory.c
index f9bc45c..341c794 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3724,6 +3724,7 @@ anon:
 
 	if (unlikely(pte_vrange(entry))) {
 		if (!is_purged_vrange(mm, address)) {
+			lru_move_vrange_to_head(mm, address);
 			/* zap pte */
 			ptl = pte_lockptr(mm, pmd);
 			spin_lock(ptl);
diff --git a/mm/vrange.c b/mm/vrange.c
index 603057e..c686960 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -14,8 +14,53 @@
 #include
 #include
 
+static LIST_HEAD(lru_vrange);
+static DEFINE_SPINLOCK(lru_lock);
+
 static struct kmem_cache *vrange_cachep;
+
+
+void lru_add_vrange(struct vrange *vrange)
+{
+	spin_lock(&lru_lock);
+	WARN_ON(!list_empty(&vrange->lru));
+	list_add(&vrange->lru, &lru_vrange);
+	spin_unlock(&lru_lock);
+}
+
+void lru_remove_vrange(struct vrange *vrange)
+{
+	spin_lock(&lru_lock);
+	if (!list_empty(&vrange->lru))
+		list_del_init(&vrange->lru);
+	spin_unlock(&lru_lock);
+}
+
+void lru_move_vrange_to_head(struct mm_struct *mm, unsigned long address)
+{
+	struct vrange_root *vroot = &mm->vroot;
+	struct interval_tree_node *node;
+	struct vrange *vrange;
+
+	vrange_lock(vroot);
+	node = interval_tree_iter_first(&vroot->v_rb, address,
+					address + PAGE_SIZE - 1);
+	if (node) {
+		vrange = container_of(node, struct vrange, node);
+		spin_lock(&lru_lock);
+		/*
+		 * This can race with get_victim_vrange(): if the victim
+		 * path already took this vrange off the LRU we cannot
+		 * move it here, but get_victim_vrange() puts it back at
+		 * the head once purging finishes, so this is safe.
+		 */
+		if (!list_empty(&vrange->lru))
+			list_move(&vrange->lru, &lru_vrange);
+		spin_unlock(&lru_lock);
+	}
+	vrange_unlock(vroot);
+}
+
 void __init vrange_init(void)
 {
 	vrange_cachep = KMEM_CACHE(vrange, SLAB_PANIC);
@@ -27,24 +72,28 @@ static struct vrange *__vrange_alloc(gfp_t flags)
 
 	if (!vrange)
 		return vrange;
 	vrange->owner = NULL;
+	INIT_LIST_HEAD(&vrange->lru);
 	return vrange;
 }
 
 static void __vrange_free(struct vrange *range)
 {
 	WARN_ON(range->owner);
+	lru_remove_vrange(range);
 	kmem_cache_free(vrange_cachep, range);
 }
 
 static void __vrange_add(struct vrange *range, struct vrange_root *vroot)
 {
 	range->owner = vroot;
+	lru_add_vrange(range);
 	interval_tree_insert(&range->node, &vroot->v_rb);
 }
 
 static void __vrange_remove(struct vrange *range)
 {
 	interval_tree_remove(&range->node, &range->owner->v_rb);
+	lru_remove_vrange(range);
 	range->owner = NULL;
 }
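
A note on discard_vrange_pages(): it is declared in vrange.h above but not
defined in this patch; presumably the implementation arrives later in the
series. A rough sketch of the shape it might take on top of this LRU --
discard_vrange() here is a hypothetical helper that purges one vrange and
returns the number of pages it freed:

/*
 * Rough sketch only, not the implementation from this series.
 * Pulls the oldest vranges off the LRU until enough pages have been
 * discarded, re-inserting each one at the LRU head when it is done
 * (see the race comment in lru_move_vrange_to_head()).
 */
unsigned int discard_vrange_pages(struct zone *zone, int nr_to_discard)
{
	int nr_discarded = 0;
	struct vrange *vrange;

	while (nr_discarded < nr_to_discard) {
		vrange = get_victim_vrange();	/* oldest first, off-list */
		if (!vrange)
			break;			/* no candidates left */

		/* discard_vrange() is hypothetical; see lead-in above. */
		nr_discarded += discard_vrange(zone, vrange,
					       nr_to_discard - nr_discarded);

		/* Back at the head: it was just touched by the purger. */
		lru_add_vrange(vrange);
	}

	return nr_discarded;
}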