From patchwork Tue Jun 11 01:12:15 2013
From: John Stultz <john.stultz@linaro.org>
To: minchan@kernel.org
Cc: dgiani@mozilla.com, John Stultz <john.stultz@linaro.org>
Subject: [PATCH 09/13] vrange: Add LRU handling for victim vrange
Date: Mon, 10 Jun 2013 18:12:15 -0700
Message-Id: <1370913139-9320-10-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1370913139-9320-1-git-send-email-john.stultz@linaro.org>
References: <1370913139-9320-1-git-send-email-john.stultz@linaro.org>

From: Minchan Kim <minchan@kernel.org>

This patch adds an LRU data structure for selecting a victim vrange when
memory pressure occurs. The VM normally selects the oldest vrange, but if
a user process has recently touched a purged page, the vrange containing
that page is reactivated (moved to the head of the LRU). A fault on a
purged page means one of two things: either the process will be killed,
or it will handle the SIGBUS, recover, and continue its work. In the
latter case, we have to keep that vrange out of victim selection.

I admit an LRU may not be the best policy, but I can't think of a better
one, so I wanted to keep this simple. User space can likely do better
with enough information, so I hope it handles this via the mempressure
notifier. Otherwise, if you have a better idea, it is welcome!
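To make the policy concrete, here is a small userspace C sketch of the
idea (not the kernel code itself; the list helpers below are simplified
stand-ins for the kernel's list_head API, and pick_victim() is a
hypothetical illustration of how the coldest range would be chosen from
the tail):

#include <stddef.h>
#include <stdio.h>

/* Simplified stand-in for the kernel's intrusive list_head. */
struct lru_node {
	struct lru_node *prev, *next;
};

struct vrange {
	struct lru_node lru;
	const char *name;
};

/* Global LRU: head = most recently activated, tail = coldest. */
static struct lru_node lru_vrange = { &lru_vrange, &lru_vrange };

static void lru_del(struct lru_node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

static void lru_add_head(struct lru_node *n)
{
	n->next = lru_vrange.next;
	n->prev = &lru_vrange;
	lru_vrange.next->prev = n;
	lru_vrange.next = n;
}

/* A fault on a purged page reactivates the containing vrange. */
static void lru_move_to_head(struct vrange *v)
{
	lru_del(&v->lru);
	lru_add_head(&v->lru);
}

/* Hypothetical: under pressure, the victim is the LRU tail. */
static struct vrange *pick_victim(void)
{
	if (lru_vrange.prev == &lru_vrange)
		return NULL;
	return (struct vrange *)((char *)lru_vrange.prev -
				 offsetof(struct vrange, lru));
}

int main(void)
{
	struct vrange a = { .name = "A" }, b = { .name = "B" },
		      c = { .name = "C" };

	lru_add_head(&a.lru);	/* A is now the oldest ... */
	lru_add_head(&b.lru);
	lru_add_head(&c.lru);

	lru_move_to_head(&a);	/* ... until a fault touches it */

	printf("victim: %s\n", pick_victim()->name);	/* prints "B" */
	return 0;
}

The patch below keeps the same invariant under lru_lock, finding the
vrange via the interval tree before moving it to the head.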
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 include/linux/vrange.h       |  3 +++
 include/linux/vrange_types.h |  1 +
 mm/memory.c                  |  1 +
 mm/vrange.c                  | 49 ++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 54 insertions(+)

diff --git a/include/linux/vrange.h b/include/linux/vrange.h
index 75754d1..fb101c6 100644
--- a/include/linux/vrange.h
+++ b/include/linux/vrange.h
@@ -43,6 +43,9 @@ bool vrange_address(struct mm_struct *mm, unsigned long start,
 
 extern bool is_purged_vrange(struct mm_struct *mm, unsigned long address);
 
+unsigned int discard_vrange_pages(struct zone *zone, int nr_to_discard);
+void lru_move_vrange_to_head(struct mm_struct *mm, unsigned long address);
+
 #else
 
 static inline void vrange_init(void) {};
diff --git a/include/linux/vrange_types.h b/include/linux/vrange_types.h
index 7f44c01..71ebc70 100644
--- a/include/linux/vrange_types.h
+++ b/include/linux/vrange_types.h
@@ -14,6 +14,7 @@ struct vrange {
 	struct interval_tree_node node;
 	struct vrange_root *owner;
 	int purged;
+	struct list_head lru;	/* protected by lru_lock */
 };
 
 #endif
diff --git a/mm/memory.c b/mm/memory.c
index f9bc45c..341c794 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3724,6 +3724,7 @@ anon:
 
 	if (unlikely(pte_vrange(entry))) {
 		if (!is_purged_vrange(mm, address)) {
+			lru_move_vrange_to_head(mm, address);
 			/* zap pte */
 			ptl = pte_lockptr(mm, pmd);
 			spin_lock(ptl);
diff --git a/mm/vrange.c b/mm/vrange.c
index 603057e..c686960 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -14,8 +14,53 @@
 #include 
 #include 
 
+static LIST_HEAD(lru_vrange);
+static DEFINE_SPINLOCK(lru_lock);
+
 static struct kmem_cache *vrange_cachep;
 
+
+void lru_add_vrange(struct vrange *vrange)
+{
+	spin_lock(&lru_lock);
+	WARN_ON(!list_empty(&vrange->lru));
+	list_add(&vrange->lru, &lru_vrange);
+	spin_unlock(&lru_lock);
+}
+
+void lru_remove_vrange(struct vrange *vrange)
+{
+	spin_lock(&lru_lock);
+	if (!list_empty(&vrange->lru))
+		list_del_init(&vrange->lru);
+	spin_unlock(&lru_lock);
+}
+
+void lru_move_vrange_to_head(struct mm_struct *mm, unsigned long address)
+{
+	struct vrange_root *vroot = &mm->vroot;
+	struct interval_tree_node *node;
+	struct vrange *vrange;
+
+	vrange_lock(vroot);
+	node = interval_tree_iter_first(&vroot->v_rb, address,
+					address + PAGE_SIZE - 1);
+	if (node) {
+		vrange = container_of(node, struct vrange, node);
+		spin_lock(&lru_lock);
+		/*
+		 * Race happens with get_victim_vrange so in such case,
+		 * we can't move but it can put the vrange into head
+		 * after finishing purging work so no problem.
+		 */
+		if (!list_empty(&vrange->lru))
+			list_move(&vrange->lru, &lru_vrange);
+		spin_unlock(&lru_lock);
+	}
+	vrange_unlock(vroot);
+}
+
 void __init vrange_init(void)
 {
 	vrange_cachep = KMEM_CACHE(vrange, SLAB_PANIC);
@@ -27,24 +72,28 @@ static struct vrange *__vrange_alloc(gfp_t flags)
 
 	if (!vrange)
 		return vrange;
 
 	vrange->owner = NULL;
+	INIT_LIST_HEAD(&vrange->lru);
 	return vrange;
 }
 
 static void __vrange_free(struct vrange *range)
 {
 	WARN_ON(range->owner);
+	lru_remove_vrange(range);
 	kmem_cache_free(vrange_cachep, range);
 }
 
 static void __vrange_add(struct vrange *range, struct vrange_root *vroot)
 {
 	range->owner = vroot;
+	lru_add_vrange(range);
 	interval_tree_insert(&range->node, &vroot->v_rb);
 }
 
 static void __vrange_remove(struct vrange *range)
 {
 	interval_tree_remove(&range->node, &range->owner->v_rb);
+	lru_remove_vrange(range);
 	range->owner = NULL;
 }
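A note on the race comment in lru_move_vrange_to_head():
get_victim_vrange() is not part of this patch, but the comment implies
the purger detaches a vrange from the LRU (so list_empty() becomes true)
while purging, and re-adds it at the head afterwards. Below is a toy
userspace model of that protocol; all names are hypothetical stand-ins,
with on_lru playing the role of !list_empty(&vrange->lru).

#include <stdbool.h>
#include <stdio.h>

struct toy_vrange {
	bool on_lru;	/* stands in for !list_empty(&vrange->lru) */
};

/*
 * Fault side, mirroring lru_move_vrange_to_head(): if the purger has
 * already detached the vrange, skip the move. That is harmless because
 * the purger puts the vrange back at the LRU head when it finishes,
 * which is where the fault wanted it anyway.
 */
static void fault_side(struct toy_vrange *v)
{
	if (v->on_lru)
		printf("fault: moved vrange to LRU head\n");
	else
		printf("fault: purger holds it; it lands at head later\n");
}

/* Purger side, as the comment describes it (assumed behavior). */
static void purger_side(struct toy_vrange *v)
{
	v->on_lru = false;	/* detach under lru_lock, then purge */
	fault_side(v);		/* a racing fault sees an empty list entry */
	v->on_lru = true;	/* re-add at the head once purging is done */
}

int main(void)
{
	struct toy_vrange v = { .on_lru = true };

	fault_side(&v);		/* normal case: move to head */
	purger_side(&v);	/* raced case: fault skips harmlessly */
	return 0;
}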