From patchwork Wed Jun 27 04:17:15 2012
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 9641
From: John Stultz <john.stultz@linaro.org>
To: LKML
Cc: John Stultz, Andrew Morton, Android Kernel Team, Robert Love,
    Mel Gorman, Hugh Dickins, Dave Hansen, Rik van Riel, Dmitry Adamushko,
    Dave Chinner, Neil Brown, Andrea Righi, "Aneesh Kumar K.V", Taras Glek,
    Mike Hommey, Jan Kara, KOSAKI Motohiro, Michel Lespinasse, Minchan Kim,
    linux-mm@kvack.org
Subject: [PATCH 5/5] [RFC][HACK] mm: Change memory management of anonymous
 pages on swapless systems
Date: Wed, 27 Jun 2012 00:17:15 -0400
Message-Id: <1340770635-9909-6-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1340770635-9909-1-git-send-email-john.stultz@linaro.org>
References: <1340770635-9909-1-git-send-email-john.stultz@linaro.org>

Due to my newbie-ness, the following may not be precise, but I think
it conveys the intent of what I'm trying to do here.

Anonymous memory is tracked on two LRU lists: LRU_INACTIVE_ANON and
LRU_ACTIVE_ANON. This split is useful when we need to free up pages
and are trying to decide what to swap out. However, on systems that
do not have swap, this split is less clear. In many cases the code
avoids aging active anonymous pages onto the inactive list, but in
some cases pages do get moved to the inactive list, and we then never
call writepage on them, as there isn't anything to swap out.

This patch changes some of the active/inactive list management of
anonymous memory when there is no swap. In that case pages are always
added to the active LRU: since anonymous pages cannot be swapped out,
they should all be active. The one exception is volatile pages, which
can be moved to the inactive LRU by calling deactivate_page().

In addition, I've changed the logic so that we still try to shrink
the inactive anonymous LRU and call writepage. This should only be
done when there are volatile pages on the inactive LRU; it allows us
to purge volatile pages in writepage when the system does not have
swap.
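To restate the placement policy as code, here is a minimal sketch
(not part of the patch; the real changes are in the diff below.
total_swap_pages and the lru_list values are the actual kernel
symbols, but this helper itself is only illustrative):

	/*
	 * Sketch of the placement policy, not patch code: without
	 * swap, every anonymous page starts (and stays) active; the
	 * only path to the inactive list is an explicit
	 * deactivate_page(), used for volatile pages that may be
	 * purged.
	 */
	static enum lru_list anon_lru_for_new_page(void)
	{
		if (!total_swap_pages)
			return LRU_ACTIVE_ANON;
		return LRU_INACTIVE_ANON;
	}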
CC: Andrew Morton
CC: Android Kernel Team
CC: Robert Love
CC: Mel Gorman
CC: Hugh Dickins
CC: Dave Hansen
CC: Rik van Riel
CC: Dmitry Adamushko
CC: Dave Chinner
CC: Neil Brown
CC: Andrea Righi
CC: Aneesh Kumar K.V
CC: Taras Glek
CC: Mike Hommey
CC: Jan Kara
CC: KOSAKI Motohiro
CC: Michel Lespinasse
CC: Minchan Kim
CC: linux-mm@kvack.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 include/linux/pagevec.h |    5 +----
 include/linux/swap.h    |   23 +++++++++++++++--------
 mm/swap.c               |   13 ++++++++++++-
 mm/vmscan.c             |    9 ---------
 4 files changed, 28 insertions(+), 22 deletions(-)

diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 2aa12b8..e1312a5 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -22,6 +22,7 @@ struct pagevec {
 
 void __pagevec_release(struct pagevec *pvec);
 void __pagevec_lru_add(struct pagevec *pvec, enum lru_list lru);
+void __pagevec_lru_add_anon(struct pagevec *pvec);
 unsigned pagevec_lookup(struct pagevec *pvec, struct address_space *mapping,
 		pgoff_t start, unsigned nr_pages);
 unsigned pagevec_lookup_tag(struct pagevec *pvec,
@@ -64,10 +65,6 @@ static inline void pagevec_release(struct pagevec *pvec)
 		__pagevec_release(pvec);
 }
 
-static inline void __pagevec_lru_add_anon(struct pagevec *pvec)
-{
-	__pagevec_lru_add(pvec, LRU_INACTIVE_ANON);
-}
 
 static inline void __pagevec_lru_add_active_anon(struct pagevec *pvec)
 {
diff --git a/include/linux/swap.h b/include/linux/swap.h
index c84ec68..639936f 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -238,14 +238,6 @@ extern void swap_setup(void);
 
 extern void add_page_to_unevictable_list(struct page *page);
 
-/**
- * lru_cache_add: add a page to the page lists
- * @page: the page to add
- */
-static inline void lru_cache_add_anon(struct page *page)
-{
-	__lru_cache_add(page, LRU_INACTIVE_ANON);
-}
 
 static inline void lru_cache_add_file(struct page *page)
 {
@@ -474,5 +466,20 @@ mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent)
 }
 #endif /* CONFIG_SWAP */
+
+/**
+ * lru_cache_add: add a page to the page lists
+ * @page: the page to add
+ */
+static inline void lru_cache_add_anon(struct page *page)
+{
+	int lru = LRU_INACTIVE_ANON;
+	if (!total_swap_pages)
+		lru = LRU_ACTIVE_ANON;
+
+	__lru_cache_add(page, lru);
+}
+
+
 #endif /* __KERNEL__*/
 #endif /* _LINUX_SWAP_H */
diff --git a/mm/swap.c b/mm/swap.c
index 4e7e2ec..f35df46 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -691,7 +691,7 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
 	SetPageLRU(page_tail);
 
 	if (page_evictable(page_tail, NULL)) {
-		if (PageActive(page)) {
+		if (PageActive(page) || !total_swap_pages) {
 			SetPageActive(page_tail);
 			active = 1;
 			lru = LRU_ACTIVE_ANON;
@@ -755,6 +755,17 @@ void __pagevec_lru_add(struct pagevec *pvec, enum lru_list lru)
 }
 EXPORT_SYMBOL(__pagevec_lru_add);
 
+
+void __pagevec_lru_add_anon(struct pagevec *pvec)
+{
+	if (!total_swap_pages)
+		__pagevec_lru_add(pvec, LRU_ACTIVE_ANON);
+	else
+		__pagevec_lru_add(pvec, LRU_INACTIVE_ANON);
+}
+EXPORT_SYMBOL(__pagevec_lru_add_anon);
+
+
 /**
  * pagevec_lookup - gang pagecache lookup
  * @pvec:	Where the resulting pages are placed
diff --git a/mm/vmscan.c b/mm/vmscan.c
index eeb3bc9..52d8ad9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1597,15 +1597,6 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 	if (!global_reclaim(sc))
 		force_scan = true;
 
-	/* If we have no swap space, do not bother scanning anon pages. */
-	if (!sc->may_swap || (nr_swap_pages <= 0)) {
-		noswap = 1;
-		fraction[0] = 0;
-		fraction[1] = 1;
-		denominator = 1;
-		goto out;
-	}
-
 	anon  = get_lru_size(lruvec, LRU_ACTIVE_ANON) +
 		get_lru_size(lruvec, LRU_INACTIVE_ANON);
 	file  = get_lru_size(lruvec, LRU_ACTIVE_FILE) +
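For context on the vmscan change: removing the no-swap short-circuit
above means get_scan_count() now computes the anon/file scan split
even when there is no swap, so the inactive anon list gets scanned
and pages on it can reach writepage. A rough sketch of what a purging
writepage could then do (illustrative only; is_volatile_page() and
purge_volatile_page() are hypothetical names, not functions from this
series):

	/*
	 * Sketch only: how a writepage implementation could purge a
	 * volatile page instead of swapping it out. The two helpers
	 * named here are hypothetical.
	 */
	static int example_writepage(struct page *page,
				     struct writeback_control *wbc)
	{
		if (is_volatile_page(page)) {
			purge_volatile_page(page);	/* discard contents */
			unlock_page(page);
			return 0;
		}
		/*
		 * Nothing to write the page to without swap; ask
		 * reclaim to put it back on the active list. Per the
		 * writepage contract the page stays locked here.
		 */
		return AOP_WRITEPAGE_ACTIVATE;
	}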