From patchwork Thu Jun 23 10:05:17 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 102113
From: Arnd Bergmann
To: Mel Gorman
Cc: Vlastimil Babka, Johannes Weiner, Rik van Riel, Andrew Morton,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arnd Bergmann
Subject: [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state
Date: Thu, 23 Jun 2016 12:05:17 +0200
Message-Id: <20160623100518.156662-1-arnd@arndb.de>
X-Mailer: git-send-email 2.9.0
X-Mailing-List: linux-kernel@vger.kernel.org

I see some new warnings from a recent mm change:

mm/filemap.c: In function '__delete_from_page_cache':
include/linux/vmstat.h:116:2: error: array subscript is above array bounds [-Werror=array-bounds]
  atomic_long_add(x, &zone->vm_stat[item]);
  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/vmstat.h:116:35:
error: array subscript is above array bounds [-Werror=array-bounds]
  atomic_long_add(x, &zone->vm_stat[item]);
                     ~~~~~~~~~~~~~^~~~~~
include/linux/vmstat.h:116:35: error: array subscript is above array bounds [-Werror=array-bounds]
include/linux/vmstat.h:117:2: error: array subscript is above array bounds [-Werror=array-bounds]

Looking deeper into it, I find that we pass the wrong enum into some
functions after the type of the symbol has changed.

This changes the code to use the corresponding node function for those
callers that were using the incorrect type. I've done this blindly,
going only by the warnings I got from a debug patch I wrote for this,
so it's likely that some cases are more subtle and need a different
change; please treat this as a bug report rather than a patch to apply.

Signed-off-by: Arnd Bergmann
Fixes: e426f7b4ade5 ("mm: move most file-based accounting to the node")
---
 mm/filemap.c    |  4 ++--
 mm/page_alloc.c | 15 ++++++++-------
 mm/rmap.c       |  4 ++--
 mm/shmem.c      |  4 ++--
 mm/vmscan.c     |  2 +-
 5 files changed, 15 insertions(+), 14 deletions(-)

-- 
2.9.0

diff --git a/mm/filemap.c b/mm/filemap.c
index 6cb19e012887..77e902bf04f4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -218,9 +218,9 @@ void __delete_from_page_cache(struct page *page, void *shadow)
 
 	/* hugetlb pages do not participate in page cache accounting. */
 	if (!PageHuge(page))
-		__mod_zone_page_state(page_zone(page), NR_FILE_PAGES, -nr);
+		__mod_node_page_state(page_zone(page)->zone_pgdat, NR_FILE_PAGES, -nr);
 	if (PageSwapBacked(page)) {
-		__mod_zone_page_state(page_zone(page), NR_SHMEM, -nr);
+		__mod_node_page_state(page_zone(page)->zone_pgdat, NR_SHMEM, -nr);
 		if (PageTransHuge(page))
 			__dec_zone_page_state(page, NR_SHMEM_THPS);
 	} else {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 23b5044f5ced..d5287011ed27 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3484,9 +3484,10 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 			unsigned long writeback;
 			unsigned long dirty;
 
-			writeback = zone_page_state_snapshot(zone,
+			writeback = node_page_state_snapshot(zone->zone_pgdat,
 							NR_WRITEBACK);
-			dirty = zone_page_state_snapshot(zone, NR_FILE_DIRTY);
+			dirty = node_page_state_snapshot(zone->zone_pgdat,
+							NR_FILE_DIRTY);
 
 			if (2*(writeback + dirty) > reclaimable) {
 				congestion_wait(BLK_RW_ASYNC, HZ/10);
@@ -4396,9 +4397,9 @@ void show_free_areas(unsigned int filter)
 			K(zone->present_pages),
 			K(zone->managed_pages),
 			K(zone_page_state(zone, NR_MLOCK)),
-			K(zone_page_state(zone, NR_FILE_DIRTY)),
-			K(zone_page_state(zone, NR_WRITEBACK)),
-			K(zone_page_state(zone, NR_SHMEM)),
+			K(node_page_state(zone->zone_pgdat, NR_FILE_DIRTY)),
+			K(node_page_state(zone->zone_pgdat, NR_WRITEBACK)),
+			K(node_page_state(zone->zone_pgdat, NR_SHMEM)),
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 			K(zone_page_state(zone, NR_SHMEM_THPS) * HPAGE_PMD_NR),
 			K(zone_page_state(zone, NR_SHMEM_PMDMAPPED)
@@ -4410,12 +4411,12 @@ void show_free_areas(unsigned int filter)
 			zone_page_state(zone, NR_KERNEL_STACK) *
 				THREAD_SIZE / 1024,
 			K(zone_page_state(zone, NR_PAGETABLE)),
-			K(zone_page_state(zone, NR_UNSTABLE_NFS)),
+			K(node_page_state(zone->zone_pgdat, NR_UNSTABLE_NFS)),
 			K(zone_page_state(zone, NR_BOUNCE)),
 			K(free_pcp),
 			K(this_cpu_read(zone->pageset->pcp.count)),
 			K(zone_page_state(zone, NR_FREE_CMA_PAGES)),
-			K(zone_page_state(zone, NR_WRITEBACK_TEMP)),
+			K(node_page_state(zone->zone_pgdat, NR_WRITEBACK_TEMP)),
 			K(node_page_state(zone->zone_pgdat, NR_PAGES_SCANNED)));
 		printk("lowmem_reserve[]:");
 		for (i = 0; i < MAX_NR_ZONES; i++)
diff --git a/mm/rmap.c b/mm/rmap.c
index 4deff963ea8a..898b2b7806ca 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1296,7 +1296,7 @@ void page_add_file_rmap(struct page *page, bool compound)
 		if (!atomic_inc_and_test(&page->_mapcount))
 			goto out;
 	}
-	__mod_zone_page_state(page_zone(page), NR_FILE_MAPPED, nr);
+	__mod_node_page_state(page_zone(page)->zone_pgdat, NR_FILE_MAPPED, nr);
 	mem_cgroup_inc_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
 out:
 	unlock_page_memcg(page);
@@ -1336,7 +1336,7 @@ static void page_remove_file_rmap(struct page *page, bool compound)
 	 * these counters are not modified in interrupt context, and
 	 * pte lock(a spinlock) is held, which implies preemption disabled.
 	 */
-	__mod_zone_page_state(page_zone(page), NR_FILE_MAPPED, -nr);
+	__mod_node_page_state(page_zone(page)->zone_pgdat, NR_FILE_MAPPED, -nr);
 	mem_cgroup_dec_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
 
 	if (unlikely(PageMlocked(page)))
diff --git a/mm/shmem.c b/mm/shmem.c
index e5c50fb0d4a4..99dcb8e5642d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -576,8 +576,8 @@ static int shmem_add_to_page_cache(struct page *page,
 		mapping->nrpages += nr;
 		if (PageTransHuge(page))
 			__inc_zone_page_state(page, NR_SHMEM_THPS);
-		__mod_zone_page_state(page_zone(page), NR_FILE_PAGES, nr);
-		__mod_zone_page_state(page_zone(page), NR_SHMEM, nr);
+		__mod_node_page_state(page_zone(page)->zone_pgdat, NR_FILE_PAGES, nr);
+		__mod_node_page_state(page_zone(page)->zone_pgdat, NR_SHMEM, nr);
 		spin_unlock_irq(&mapping->tree_lock);
 	} else {
 		page->mapping = NULL;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 07e17dac1793..4702069cc80b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2079,7 +2079,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 		int z;
 		unsigned long total_high_wmark = 0;
 
-		pgdatfree = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
+		pgdatfree = global_page_state(NR_FREE_PAGES);
 		pgdatfile = node_page_state(pgdat, NR_ACTIVE_FILE) +
 			   node_page_state(pgdat, NR_INACTIVE_FILE);
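As an aside for anyone reproducing this: the mechanism behind the
-Warray-bounds errors can be sketched outside the kernel with simplified
stand-in types. The enum members, array sizes, and function names below
are illustrative assumptions for the demo, not the real vmstat
definitions:

```c
#include <assert.h>

/* Stand-ins: after the node/zone split, per-zone and per-node counters
 * use separate enums that index separate backing arrays. */
enum zone_stat_item { NR_MLOCK, NR_BOUNCE, NR_VM_ZONE_STAT_ITEMS };
enum node_stat_item { NR_FILE_PAGES, NR_SHMEM, NR_FILE_MAPPED,
		      NR_VM_NODE_STAT_ITEMS };

struct zone { long vm_stat[NR_VM_ZONE_STAT_ITEMS]; };
struct pglist_data { long vm_stat[NR_VM_NODE_STAT_ITEMS]; };

/* Correct pairing: a node_stat_item updates the per-node array. */
static void mod_node_page_state(struct pglist_data *pgdat,
				enum node_stat_item item, long delta)
{
	pgdat->vm_stat[item] += delta;
}

/* If a caller passes a node_stat_item such as NR_FILE_MAPPED (value 2)
 * here, it indexes zone->vm_stat[2], one past the end of this 2-entry
 * array; with a compile-time-constant item gcc can prove the overflow,
 * which is what -Warray-bounds reports. */
static void mod_zone_page_state(struct zone *zone,
				enum zone_stat_item item, long delta)
{
	zone->vm_stat[item] += delta;
}
```

C converts between enum types silently, so the mismatched calls compile
without complaint unless the constant's value lets the compiler prove an
out-of-bounds access; that is why the wrong-enum callers only surface as
array-bounds warnings rather than type errors.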