From patchwork Thu Oct 3 00:51:36 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 20757
From: John Stultz
To: LKML
Cc: Minchan Kim, Andrew Morton, Android Kernel Team, Robert Love, Mel Gorman,
    Hugh Dickins, Dave Hansen, Rik van Riel, Dmitry Adamushko, Dave Chinner,
    Neil Brown, Andrea Righi, Andrea Arcangeli, "Aneesh Kumar K.V",
    Mike Hommey, Taras Glek, Dhaval Giani, Jan Kara, KOSAKI Motohiro,
    Michel Lespinasse, Rob Clark, "linux-mm@kvack.org", John Stultz
Subject: [PATCH 07/14] vrange: Purge volatile pages when memory is tight
Date: Wed, 2 Oct 2013 17:51:36 -0700
Message-Id: <1380761503-14509-8-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1380761503-14509-1-git-send-email-john.stultz@linaro.org>
References: <1380761503-14509-1-git-send-email-john.stultz@linaro.org>

From: Minchan Kim

This patch adds purging logic for volatile pages into the direct reclaim
path, so that if vrange pages are selected as victims by the VM, they can
be discarded rather than swapped out.

Direct purging doesn't consider a volatile page's age, because it is
better to free such a page than to swap out other working-set pages. This
makes sense because userspace has specified "please free these pages when
memory is tight" via the vrange syscall.

This is, however, in-kernel behavior, and the purging logic could change
later. Applications should not assume anything about the volatile page
purging order, much as they shouldn't assume anything about the page
swapout order.
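For illustration only (not part of this patch): a minimal userspace sketch
of the intended usage, assuming the vrange() prototype described earlier in
this series -- int vrange(unsigned long start, size_t len, int mode,
int *purged) -- with placeholder syscall number and mode constants, since
the syscall is not part of any released kernel ABI:

	#include <stdio.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Placeholder values: the syscall number and mode constants below
	 * are assumptions based on this patch series, not a merged ABI. */
	#define __NR_vrange		314
	#define VRANGE_VOLATILE		0
	#define VRANGE_NONVOLATILE	1

	int main(void)
	{
		size_t len = 16 * 4096;
		int purged = 0;

		/* An application-managed cache that is cheap to regenerate. */
		void *cache = mmap(NULL, len, PROT_READ | PROT_WRITE,
				   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		/* "Please free these pages when memory is tight." */
		syscall(__NR_vrange, (unsigned long)cache, len,
			VRANGE_VOLATILE, &purged);

		/* ... reclaim may purge any part of the range at any time ... */

		/* Take the range back before touching it again. */
		syscall(__NR_vrange, (unsigned long)cache, len,
			VRANGE_NONVOLATILE, &purged);
		if (purged)
			printf("cache was discarded under pressure; regenerate it\n");

		munmap(cache, len);
		return 0;
	}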
Cc: Andrew Morton
Cc: Android Kernel Team
Cc: Robert Love
Cc: Mel Gorman
Cc: Hugh Dickins
Cc: Dave Hansen
Cc: Rik van Riel
Cc: Dmitry Adamushko
Cc: Dave Chinner
Cc: Neil Brown
Cc: Andrea Righi
Cc: Andrea Arcangeli
Cc: Aneesh Kumar K.V
Cc: Mike Hommey
Cc: Taras Glek
Cc: Dhaval Giani
Cc: Jan Kara
Cc: KOSAKI Motohiro
Cc: Michel Lespinasse
Cc: Rob Clark
Cc: Minchan Kim
Cc: linux-mm@kvack.org
Signed-off-by: Minchan Kim
Signed-off-by: John Stultz
---
 include/linux/rmap.h | 11 +++++++----
 mm/ksm.c             |  2 +-
 mm/rmap.c            | 28 ++++++++++++++++++++--------
 mm/vmscan.c          | 17 +++++++++++++++--
 4 files changed, 43 insertions(+), 15 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 6dacb93..f38185d 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -181,10 +181,11 @@ static inline void page_dup_rmap(struct page *page)
 /*
  * Called from mm/vmscan.c to handle paging out
  */
-int page_referenced(struct page *, int is_locked,
-			struct mem_cgroup *memcg, unsigned long *vm_flags);
+int page_referenced(struct page *, int is_locked, struct mem_cgroup *memcg,
+			unsigned long *vm_flags, int *is_vrange);
 int page_referenced_one(struct page *, struct vm_area_struct *,
-	unsigned long address, unsigned int *mapcount, unsigned long *vm_flags);
+			unsigned long address, unsigned int *mapcount,
+			unsigned long *vm_flags, int *is_vrange);
 
 #define TTU_ACTION(x) ((x) & TTU_ACTION_MASK)
 
@@ -249,9 +250,11 @@ int rmap_walk(struct page *page, int (*rmap_one)(struct page *,
 static inline int page_referenced(struct page *page, int is_locked,
 				  struct mem_cgroup *memcg,
-				  unsigned long *vm_flags)
+				  unsigned long *vm_flags,
+				  int *is_vrange)
 {
 	*vm_flags = 0;
+	*is_vrange = 0;
 	return 0;
 }
 
diff --git a/mm/ksm.c b/mm/ksm.c
index b6afe0c..debc20c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1932,7 +1932,7 @@ again:
 			continue;
 
 		referenced += page_referenced_one(page, vma,
-				rmap_item->address, &mapcount, vm_flags);
+				rmap_item->address, &mapcount, vm_flags, NULL);
 		if (!search_new_forks || !mapcount)
 			break;
 	}
diff --git a/mm/rmap.c b/mm/rmap.c
index b2e29ac..f929f22 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -57,6 +57,7 @@
 #include
 #include
 #include
+#include <linux/vrange.h>
 #include
 
@@ -662,7 +663,7 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
  */
 int page_referenced_one(struct page *page, struct vm_area_struct *vma,
 			unsigned long address, unsigned int *mapcount,
-			unsigned long *vm_flags)
+			unsigned long *vm_flags, int *is_vrange)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	int referenced = 0;
@@ -724,6 +725,11 @@ int page_referenced_one(struct page *page, struct vm_area_struct *vma,
 			referenced++;
 		}
 		pte_unmap_unlock(pte, ptl);
+		if (is_vrange && vrange_addr_volatile(vma, address)) {
+			*is_vrange = 1;
+			*mapcount = 0;	/* break early from loop */
+			goto out;
+		}
 	}
 
 	(*mapcount)--;
@@ -736,7 +742,7 @@ out:
 
 static int page_referenced_anon(struct page *page,
 				struct mem_cgroup *memcg,
-				unsigned long *vm_flags)
+				unsigned long *vm_flags, int *is_vrange)
 {
 	unsigned int mapcount;
 	struct anon_vma *anon_vma;
@@ -761,7 +767,8 @@ static int page_referenced_anon(struct page *page,
 		if (memcg && !mm_match_cgroup(vma->vm_mm, memcg))
 			continue;
 		referenced += page_referenced_one(page, vma, address,
-						  &mapcount, vm_flags);
+						  &mapcount, vm_flags,
+						  is_vrange);
 		if (!mapcount)
 			break;
 	}
@@ -785,7 +792,7 @@ static int page_referenced_anon(struct page *page,
  */
 static int page_referenced_file(struct page *page,
 				struct mem_cgroup *memcg,
-				unsigned long *vm_flags)
+				unsigned long *vm_flags, int *is_vrange)
 {
 	unsigned int mapcount;
 	struct address_space *mapping = page->mapping;
@@ -826,7 +833,8 @@ static int page_referenced_file(struct page *page,
 		if (memcg && !mm_match_cgroup(vma->vm_mm, memcg))
 			continue;
 		referenced += page_referenced_one(page, vma, address,
-						  &mapcount, vm_flags);
+						  &mapcount, vm_flags,
+						  is_vrange);
 		if (!mapcount)
 			break;
 	}
@@ -841,6 +849,7 @@ static int page_referenced_file(struct page *page,
 * @is_locked: caller holds lock on the page
 * @memcg: target memory cgroup
 * @vm_flags: collect encountered vma->vm_flags who actually referenced the page
+ * @is_vrange: Is @page in vrange?
 *
 * Quick test_and_clear_referenced for all mappings to a page,
 * returns the number of ptes which referenced the page.
@@ -848,7 +857,8 @@ static int page_referenced_file(struct page *page,
 int page_referenced(struct page *page,
 		    int is_locked,
 		    struct mem_cgroup *memcg,
-		    unsigned long *vm_flags)
+		    unsigned long *vm_flags,
+		    int *is_vrange)
 {
 	int referenced = 0;
 	int we_locked = 0;
@@ -867,10 +877,12 @@ int page_referenced(struct page *page,
 							vm_flags);
 		else if (PageAnon(page))
 			referenced += page_referenced_anon(page, memcg,
-							vm_flags);
+							vm_flags,
+							is_vrange);
 		else if (page->mapping)
 			referenced += page_referenced_file(page, memcg,
-							vm_flags);
+							vm_flags,
+							is_vrange);
 
 		if (we_locked)
 			unlock_page(page);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2cff0d4..ab377b6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -43,6 +43,7 @@
 #include
 #include
 #include
+#include <linux/vrange.h>
 #include
 #include
 
@@ -610,17 +611,19 @@ enum page_references {
 	PAGEREF_RECLAIM,
 	PAGEREF_RECLAIM_CLEAN,
 	PAGEREF_KEEP,
+	PAGEREF_DISCARD,
 	PAGEREF_ACTIVATE,
 };
 
 static enum page_references page_check_references(struct page *page,
 						  struct scan_control *sc)
 {
+	int is_vrange = 0;
 	int referenced_ptes, referenced_page;
 	unsigned long vm_flags;
 
 	referenced_ptes = page_referenced(page, 1, sc->target_mem_cgroup,
-					  &vm_flags);
+					  &vm_flags, &is_vrange);
 	referenced_page = TestClearPageReferenced(page);
 
 	/*
@@ -630,6 +633,13 @@ static enum page_references page_check_references(struct page *page,
 	if (vm_flags & VM_LOCKED)
 		return PAGEREF_RECLAIM;
 
+	/*
+	 * If a volatile page is reached at the LRU's tail, we discard the
+	 * page without considering whether to recycle it.
+	 */
+	if (is_vrange)
+		return PAGEREF_DISCARD;
+
 	if (referenced_ptes) {
 		if (PageSwapBacked(page))
 			return PAGEREF_ACTIVATE;
@@ -859,6 +869,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			goto activate_locked;
 		case PAGEREF_KEEP:
 			goto keep_locked;
+		case PAGEREF_DISCARD:
+			if (may_enter_fs && !discard_vpage(page))
+				goto free_it;
 		case PAGEREF_RECLAIM:
 		case PAGEREF_RECLAIM_CLEAN:
 			; /* try to reclaim the page below */
@@ -1614,7 +1627,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 		}
 
 		if (page_referenced(page, 0, sc->target_mem_cgroup,
-				    &vm_flags)) {
+				    &vm_flags, NULL)) {
 			nr_rotated += hpage_nr_pages(page);
 			/*
 			 * Identify referenced, file-backed active pages and
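
[Editor's illustration, not part of the patch]: the decision order that the
modified page_check_references() implements can be shown with a small
stand-alone mock (plain userspace C with simplified stand-in types, not
kernel code, and with the referenced-page handling heavily reduced). The
point from the changelog is that a page in a volatile range is discarded
regardless of how recently it was referenced, while mlocked and ordinary
referenced pages keep their existing treatment:

	#include <stdio.h>

	enum page_references {
		PAGEREF_RECLAIM,
		PAGEREF_RECLAIM_CLEAN,
		PAGEREF_KEEP,
		PAGEREF_DISCARD,
		PAGEREF_ACTIVATE,
	};

	struct mock_page {
		int vm_locked;		/* mapped by a VM_LOCKED vma */
		int is_vrange;		/* lies in a volatile range */
		int referenced_ptes;	/* young ptes seen by page_referenced() */
		int swap_backed;	/* anonymous/shmem page */
	};

	static enum page_references check_references(const struct mock_page *p)
	{
		if (p->vm_locked)
			return PAGEREF_RECLAIM;
		/* Volatile pages at the LRU tail are purged, age notwithstanding. */
		if (p->is_vrange)
			return PAGEREF_DISCARD;
		if (p->referenced_ptes)
			return p->swap_backed ? PAGEREF_ACTIVATE : PAGEREF_KEEP;
		return PAGEREF_RECLAIM;
	}

	int main(void)
	{
		struct mock_page recently_used_volatile = {
			.is_vrange = 1, .referenced_ptes = 2, .swap_backed = 1,
		};
		struct mock_page recently_used_normal = {
			.referenced_ptes = 2, .swap_backed = 1,
		};

		printf("volatile page -> %d (PAGEREF_DISCARD=%d)\n",
		       check_references(&recently_used_volatile), PAGEREF_DISCARD);
		printf("normal page   -> %d (PAGEREF_ACTIVATE=%d)\n",
		       check_references(&recently_used_normal), PAGEREF_ACTIVATE);
		return 0;
	}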