From patchwork Fri Mar 21 21:17:35 2014
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 26884
From: John Stultz
To: LKML
Cc: John Stultz, Andrew Morton, Android Kernel Team, Johannes Weiner,
	Robert Love, Mel Gorman, Hugh Dickins, Dave Hansen, Rik van Riel,
	Dmitry Adamushko, Neil Brown, Andrea Arcangeli, Mike Hommey,
	Taras Glek, Jan Kara, KOSAKI Motohiro, Michel Lespinasse,
	Minchan Kim, linux-mm@kvack.org
Subject: [PATCH 5/5] vmscan: Age anonymous memory even when swap is off.
Date: Fri, 21 Mar 2014 14:17:35 -0700
Message-Id: <1395436655-21670-6-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1395436655-21670-1-git-send-email-john.stultz@linaro.org>
References: <1395436655-21670-1-git-send-email-john.stultz@linaro.org>

Currently we don't shrink/scan the anonymous LRUs when swap is off.
This is problematic for volatile range purging on swapless systems.

This patch naively changes the vmscan code to continue scanning and
shrinking the anonymous LRUs even when there is no swap. It obviously
has performance issues; thoughts on how best to implement this would
be appreciated. (A minimal userspace sketch of the resulting
accounting change is appended after the diff.)

Cc: Andrew Morton
Cc: Android Kernel Team
Cc: Johannes Weiner
Cc: Robert Love
Cc: Mel Gorman
Cc: Hugh Dickins
Cc: Dave Hansen
Cc: Rik van Riel
Cc: Dmitry Adamushko
Cc: Neil Brown
Cc: Andrea Arcangeli
Cc: Mike Hommey
Cc: Taras Glek
Cc: Jan Kara
Cc: KOSAKI Motohiro
Cc: Michel Lespinasse
Cc: Minchan Kim
Cc: linux-mm@kvack.org
Signed-off-by: John Stultz
---
 mm/vmscan.c | 26 ++++----------------------
 1 file changed, 4 insertions(+), 22 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 34f159a..07b0a8c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -155,9 +155,8 @@ static unsigned long zone_reclaimable_pages(struct zone *zone)
 	nr = zone_page_state(zone, NR_ACTIVE_FILE) +
 	     zone_page_state(zone, NR_INACTIVE_FILE);
 
-	if (get_nr_swap_pages() > 0)
-		nr += zone_page_state(zone, NR_ACTIVE_ANON) +
-		      zone_page_state(zone, NR_INACTIVE_ANON);
+	nr += zone_page_state(zone, NR_ACTIVE_ANON) +
+	      zone_page_state(zone, NR_INACTIVE_ANON);
 
 	return nr;
 }
@@ -1764,13 +1763,6 @@ static int inactive_anon_is_low_global(struct zone *zone)
  */
 static int inactive_anon_is_low(struct lruvec *lruvec)
 {
-	/*
-	 * If we don't have swap space, anonymous page deactivation
-	 * is pointless.
-	 */
-	if (!total_swap_pages)
-		return 0;
-
 	if (!mem_cgroup_disabled())
 		return mem_cgroup_inactive_anon_is_low(lruvec);
 
@@ -1880,12 +1872,6 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 	if (!global_reclaim(sc))
 		force_scan = true;
 
-	/* If we have no swap space, do not bother scanning anon pages. */
-	if (!sc->may_swap || (get_nr_swap_pages() <= 0)) {
-		scan_balance = SCAN_FILE;
-		goto out;
-	}
-
 	/*
 	 * Global reclaim will swap to prevent OOM even with no
 	 * swappiness, but memcg users want to use this knob to
@@ -2048,7 +2034,6 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 			if (nr[lru]) {
 				nr_to_scan = min(nr[lru], SWAP_CLUSTER_MAX);
 				nr[lru] -= nr_to_scan;
-
 				nr_reclaimed += shrink_list(lru, nr_to_scan,
 							    lruvec, sc);
 			}
@@ -2181,8 +2166,8 @@ static inline bool should_continue_reclaim(struct zone *zone,
 	 */
 	pages_for_compaction = (2UL << sc->order);
 	inactive_lru_pages = zone_page_state(zone, NR_INACTIVE_FILE);
-	if (get_nr_swap_pages() > 0)
-		inactive_lru_pages += zone_page_state(zone, NR_INACTIVE_ANON);
+	inactive_lru_pages += zone_page_state(zone, NR_INACTIVE_ANON);
+
 	if (sc->nr_reclaimed < pages_for_compaction &&
 			inactive_lru_pages > pages_for_compaction)
 		return true;
@@ -2726,9 +2711,6 @@ static void age_active_anon(struct zone *zone, struct scan_control *sc)
 {
 	struct mem_cgroup *memcg;
 
-	if (!total_swap_pages)
-		return;
-
 	memcg = mem_cgroup_iter(NULL, NULL, NULL);
 	do {
 		struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg);
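
For illustration, here is a minimal userspace sketch of how the first
hunk changes the zone_reclaimable_pages() accounting on a swapless
system. This is not kernel code: struct zone_stats and its fields are
hypothetical stand-ins for the zone_page_state() counters and
get_nr_swap_pages() used above.

#include <stdio.h>

/* Hypothetical stand-in for the per-zone LRU counters in vmscan.c. */
struct zone_stats {
	unsigned long active_file, inactive_file;
	unsigned long active_anon, inactive_anon;
	unsigned long nr_swap_pages;	/* free swap, as get_nr_swap_pages() */
};

/* Old behavior: anon pages count as reclaimable only when swap exists. */
static unsigned long reclaimable_old(const struct zone_stats *z)
{
	unsigned long nr = z->active_file + z->inactive_file;

	if (z->nr_swap_pages > 0)
		nr += z->active_anon + z->inactive_anon;
	return nr;
}

/* Patched behavior: the anon LRUs always count, so they keep being aged. */
static unsigned long reclaimable_new(const struct zone_stats *z)
{
	return z->active_file + z->inactive_file +
	       z->active_anon + z->inactive_anon;
}

int main(void)
{
	/* A swapless system whose memory is mostly anonymous. */
	struct zone_stats z = {
		.active_file = 100, .inactive_file = 50,
		.active_anon = 400, .inactive_anon = 200,
		.nr_swap_pages = 0,
	};

	printf("old: %lu reclaimable pages\n", reclaimable_old(&z)); /* 150 */
	printf("new: %lu reclaimable pages\n", reclaimable_new(&z)); /* 750 */
	return 0;
}

With swap off, the old accounting sees only the 150 file pages, so
reclaim never scans or ages the 600 anonymous pages and volatile
ranges backed by them would never be purged. The patched accounting
counts all 750 pages, at the cost of repeatedly scanning anon pages
that cannot actually be reclaimed without swap, which is the
performance issue noted above.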