From patchwork Tue Oct 1 18:38:55 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 20736
From: John Stultz
To: Minchan Kim, Dhaval Giani
Subject: [PATCH 11/14] vrange: Purging vrange-anon pages from shrinker
Date: Tue, 1 Oct 2013 11:38:55 -0700
Message-Id: <1380652738-8000-12-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1380652738-8000-1-git-send-email-john.stultz@linaro.org>
References: <1380652738-8000-1-git-send-email-john.stultz@linaro.org>

From: Minchan Kim

This patch provides the logic to discard anonymous vranges from the
shrinker: it generates the page list for the volatile ranges, sets the
ptes volatile, and discards the pages.

Cc: Andrew Morton
Cc: Android Kernel Team
Cc: Robert Love
Cc: Mel Gorman
Cc: Hugh Dickins
Cc: Dave Hansen
Cc: Rik van Riel
Cc: Dmitry Adamushko
Cc: Dave Chinner
Cc: Neil Brown
Cc: Andrea Righi
Cc: Andrea Arcangeli
Cc: Aneesh Kumar K.V
Cc: Mike Hommey
Cc: Taras Glek
Cc: Dhaval Giani
Cc: Jan Kara
Cc: KOSAKI Motohiro
Cc: Michel Lespinasse
Cc: Rob Clark
Cc: Minchan Kim
Cc: linux-mm@kvack.org
Signed-off-by: Minchan Kim
[jstultz: Code tweaks and commit log rewording]
Signed-off-by: John Stultz
---
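For context, here is a hypothetical userspace sketch of how this shrinker
path is meant to be exercised, assuming the vrange() syscall interface
proposed earlier in this series (start, length, mode, and a "purged"
out-parameter). The wrapper, the syscall number, and the mode values
below are illustrative assumptions, not definitions taken from this
patch:

/*
 * Hypothetical sketch; not part of this patch. __NR_vrange, the mode
 * values, and the vrange() wrapper are assumptions for illustration.
 */
#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#define __NR_vrange		314	/* illustrative; arch-specific in the series */
#define VRANGE_VOLATILE		0	/* assumed mode values */
#define VRANGE_NONVOLATILE	1

static long vrange(void *start, size_t len, int mode, int *purged)
{
	return syscall(__NR_vrange, (unsigned long)start, len, mode, purged);
}

int main(void)
{
	size_t len = 16 * 4096;
	int purged = 0;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	buf[0] = 42;	/* fault a page in so there is something to purge */

	/*
	 * Mark the range volatile: under memory pressure, the shrinker
	 * added below may walk these ptes and discard the backing pages.
	 */
	vrange(buf, len, VRANGE_VOLATILE, &purged);

	/* ... later, before touching the data again ... */
	vrange(buf, len, VRANGE_NONVOLATILE, &purged);
	if (purged)
		printf("range was purged; contents must be regenerated\n");

	munmap(buf, len);
	return 0;
}

A nonzero "purged" after VRANGE_NONVOLATILE tells the application the
kernel discarded the pages, so it must regenerate the data rather than
reuse it.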
 mm/vrange.c | 179 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 178 insertions(+), 1 deletion(-)

diff --git a/mm/vrange.c b/mm/vrange.c
index 688ddb8..7e55ff3 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -11,6 +11,8 @@
 #include <...>
 #include "internal.h"
 #include <...>
+#include <...>
+#include <...>
 
 static struct kmem_cache *vrange_cachep;
 
@@ -20,6 +22,11 @@ static struct vrange_list {
 	struct mutex lock;
 } vrange_list;
 
+struct vrange_walker {
+	struct vm_area_struct *vma;
+	struct list_head *pagelist;
+};
+
 static inline unsigned int vrange_size(struct vrange *range)
 {
 	return range->node.last + 1 - range->node.start;
@@ -690,11 +697,181 @@ static struct vrange *vrange_isolate(void)
 	return vrange;
 }
 
-static unsigned int discard_vrange(struct vrange *vrange)
+static unsigned int discard_vrange_pagelist(struct list_head *page_list)
+{
+	struct page *page;
+	unsigned int nr_discard = 0;
+	LIST_HEAD(ret_pages);
+	LIST_HEAD(free_pages);
+
+	while (!list_empty(page_list)) {
+		int err;
+		page = list_entry(page_list->prev, struct page, lru);
+		list_del(&page->lru);
+		if (!trylock_page(page)) {
+			list_add(&page->lru, &ret_pages);
+			continue;
+		}
+
+		/*
+		 * discard_vpage() returns an unlocked page
+		 * if it is successful
+		 */
+		err = discard_vpage(page);
+		if (err) {
+			unlock_page(page);
+			list_add(&page->lru, &ret_pages);
+			continue;
+		}
+
+		ClearPageActive(page);
+		list_add(&page->lru, &free_pages);
+		dec_zone_page_state(page, NR_ISOLATED_ANON);
+		nr_discard++;
+	}
+
+	free_hot_cold_page_list(&free_pages, 1);
+	list_splice(&ret_pages, page_list);
+	return nr_discard;
+}
+
+static void vrange_pte_entry(pte_t pteval, unsigned long address,
+			     unsigned ptent_size, struct mm_walk *walk)
+{
+	struct page *page;
+	struct vrange_walker *vw = walk->private;
+	struct vm_area_struct *vma = vw->vma;
+	struct list_head *pagelist = vw->pagelist;
+
+	if (pte_none(pteval))
+		return;
+
+	if (!pte_present(pteval))
+		return;
+
+	page = vm_normal_page(vma, address, pteval);
+	if (unlikely(!page))
+		return;
+
+	if (!PageLRU(page) || PageLocked(page))
+		return;
+
+	/* TODO: Support THP */
+	if (unlikely(PageCompound(page)))
+		return;
+
+	if (isolate_lru_page(page))
+		return;
+
+	list_add(&page->lru, pagelist);
+
+	VM_BUG_ON(page_is_file_cache(page));
+	inc_zone_page_state(page, NR_ISOLATED_ANON);
+}
+
+static int vrange_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
+			    struct mm_walk *walk)
 {
+	pte_t *pte;
+	spinlock_t *ptl;
+
+	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	for (; addr != end; pte++, addr += PAGE_SIZE)
+		vrange_pte_entry(*pte, addr, PAGE_SIZE, walk);
+	pte_unmap_unlock(pte - 1, ptl);
+	cond_resched();
 	return 0;
 }
 
+static unsigned int discard_vma_pages(struct mm_struct *mm,
+		struct vm_area_struct *vma, unsigned long start,
+		unsigned long end)
+{
+	unsigned int ret = 0;
+	LIST_HEAD(pagelist);
+	struct vrange_walker vw;
+	struct mm_walk vrange_walk = {
+		.pmd_entry = vrange_pte_range,
+		.mm = vma->vm_mm,
+		.private = &vw,
+	};
+
+	vw.pagelist = &pagelist;
+	vw.vma = vma;
+
+	walk_page_range(start, end, &vrange_walk);
+
+	if (!list_empty(&pagelist))
+		ret = discard_vrange_pagelist(&pagelist);
+
+	putback_lru_pages(&pagelist);
+	return ret;
+}
+
+/*
+ * vrange->owner isn't stable because the caller doesn't hold
+ * vrange_lock, so avoid touching vrange->owner.
+ */
+static int __discard_vrange_anon(struct mm_struct *mm, struct vrange *vrange,
+				 unsigned int *ret_discard)
+{
+	struct vm_area_struct *vma;
+	unsigned int nr_discard = 0;
+	unsigned long start = vrange->node.start;
+	unsigned long end = vrange->node.last + 1;
+	int ret = 0;
+
+	/* Take a reference so the mm isn't destroyed if the process exits */
+	if (!atomic_inc_not_zero(&mm->mm_users))
+		return ret;
+
+	if (!down_read_trylock(&mm->mmap_sem)) {
+		mmput(mm);
+		ret = -EBUSY;
+		goto out; /* this vrange could be retried */
+	}
+
+	vma = find_vma(mm, start);
+	if (!vma || (vma->vm_start >= end))
+		goto out_unlock;
+
+	for (; vma; vma = vma->vm_next) {
+		if (vma->vm_start >= end)
+			break;
+		BUG_ON(vma->vm_flags & (VM_SPECIAL|VM_LOCKED|VM_MIXEDMAP|
+					VM_HUGETLB));
+		cond_resched();
+		nr_discard += discard_vma_pages(mm, vma,
+				max_t(unsigned long, start, vma->vm_start),
+				min_t(unsigned long, end, vma->vm_end));
+	}
+out_unlock:
+	up_read(&mm->mmap_sem);
+	mmput(mm);
+	*ret_discard = nr_discard;
+out:
+	return ret;
+}
+
+static int discard_vrange(struct vrange *vrange)
+{
+	int ret = 0;
+	struct mm_struct *mm;
+	struct vrange_root *vroot;
+	unsigned int nr_discard = 0;
+	vroot = vrange->owner;
+
+	/* TODO: handle VRANGE_FILE */
+	if (vroot->type != VRANGE_MM)
+		goto out;
+
+	mm = vroot->object;
+	ret = __discard_vrange_anon(mm, vrange, &nr_discard);
+out:
+	return nr_discard;
+}
+
 static int shrink_vrange(struct shrinker *s, struct shrink_control *sc)
 {
 	struct vrange *range = NULL;
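The diff above ends inside shrink_vrange(); the rest of the hunk is
truncated in this archive. For orientation, here is a minimal sketch of
how a shrinker using the old-style (pre-3.12, single ->shrink callback)
interface that shrink_vrange() matches is typically registered. The
vrange_shrinker object and the init function below are assumptions for
illustration, not code from this patch:

/*
 * Illustrative sketch; not from this patch. shrink_vrange() above uses
 * the old shrinker interface, so wiring it up would look roughly like:
 */
static struct shrinker vrange_shrinker = {
	.shrink = shrink_vrange,	/* invoked by vmscan under memory pressure */
	.seeks = DEFAULT_SEEKS,
};

static int __init vrange_shrink_init(void)
{
	register_shrinker(&vrange_shrinker);
	return 0;
}
late_initcall(vrange_shrink_init);

Under this interface, ->shrink is called with sc->nr_to_scan == 0 to
query the size of the reclaimable pool, and with a nonzero nr_to_scan to
actually reclaim, which is where discard_vrange() gets driven from.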