From patchwork Thu Jun 23 13:18:38 2016
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 102110
From: Arnd Bergmann <arnd@arndb.de>
To: Mel Gorman
Cc: Vlastimil Babka, Johannes Weiner, Rik van Riel, Andrew Morton,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arnd Bergmann
Subject: [RFC, DEBUGGING v2 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state
Date: Thu, 23 Jun 2016 15:18:38 +0200
Message-Id: <20160623131839.3579472-1-arnd@arndb.de>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <3817461.6pThRKgN9N@wuerfel>
References: <3817461.6pThRKgN9N@wuerfel>
X-Mailing-List: linux-kernel@vger.kernel.org

I see some new warnings from a recent mm change:

mm/filemap.c: In function '__delete_from_page_cache':
include/linux/vmstat.h:116:2: error: array subscript is above array bounds [-Werror=array-bounds]
  atomic_long_add(x, &zone->vm_stat[item]);
  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/vmstat.h:116:35: error: array subscript is above array bounds [-Werror=array-bounds]
  atomic_long_add(x, &zone->vm_stat[item]);
                      ~~~~~~~~~~~~~^~~~~~
include/linux/vmstat.h:116:35: error: array subscript is above array bounds [-Werror=array-bounds]
include/linux/vmstat.h:117:2: error: array subscript is above array bounds [-Werror=array-bounds]

Looking deeper into it, I find that we pass the wrong enum into some of
these functions: NR_FILE_PAGES, NR_SHMEM and related counters are now
per-node items (enum node_stat_item) rather than per-zone items
(enum zone_stat_item), but several callers still hand them to the
zone-based accessors. This patch switches those callers over to the
matching node-based functions.
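To make the failure mode concrete, here is a minimal standalone sketch
of the mixup. These are not the actual <linux/mmzone.h>/<linux/vmstat.h>
definitions; in particular, the node items are deliberately offset past
the zone items here so that the overrun is provable at compile time:

/*
 * Standalone sketch; names and layout simplified from the kernel
 * headers.  NR_FILE_PAGES/NR_SHMEM moved to the node enum, so their
 * values can lie beyond the end of the per-zone counter array.
 */
enum zone_stat_item { NR_FREE_PAGES, NR_MLOCK, NR_VM_ZONE_STAT_ITEMS };
enum node_stat_item {
	NR_FILE_PAGES = NR_VM_ZONE_STAT_ITEMS,	/* node item, not a zone item */
	NR_SHMEM,
	NR_VM_NODE_STAT_ITEMS
};

struct zone {
	long vm_stat[NR_VM_ZONE_STAT_ITEMS];	/* sized for zone items only */
};

static inline void __mod_zone_page_state(struct zone *zone,
					 enum zone_stat_item item, long x)
{
	zone->vm_stat[item] += x;	/* vmstat.h:116 in the real tree */
}

static void buggy_caller(struct zone *zone, long nr)
{
	/*
	 * Wrong: NR_FILE_PAGES is an enum node_stat_item value, so this
	 * indexes past the end of vm_stat[] and gcc's -Warray-bounds
	 * fires.  The patch below switches such callers to the node
	 * accessors instead.
	 */
	__mod_zone_page_state(zone, NR_FILE_PAGES, -nr);
}

Building a file like this with gcc -O2 -Werror=array-bounds produces the
same class of report as shown above.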
I've done this blindly, going only by the warnings I got from a debug
patch I wrote for this, so it's likely that some cases are more subtle
and need a different change. Please treat this as a bug report rather
than a patch for applying.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: e426f7b4ade5 ("mm: move most file-based accounting to the node")
---
 mm/filemap.c    |  4 ++--
 mm/khugepaged.c |  4 ++--
 mm/page_alloc.c | 15 ++++++++-------
 mm/rmap.c       |  4 ++--
 mm/shmem.c      |  4 ++--
 mm/vmscan.c     |  2 +-
 6 files changed, 17 insertions(+), 16 deletions(-)

-- 
2.9.0

diff --git a/mm/filemap.c b/mm/filemap.c
index 6cb19e012887..e0fe47e9ea44 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -218,9 +218,9 @@ void __delete_from_page_cache(struct page *page, void *shadow)
 
 	/* hugetlb pages do not participate in page cache accounting. */
 	if (!PageHuge(page))
-		__mod_zone_page_state(page_zone(page), NR_FILE_PAGES, -nr);
+		__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, -nr);
 	if (PageSwapBacked(page)) {
-		__mod_zone_page_state(page_zone(page), NR_SHMEM, -nr);
+		__mod_node_page_state(page_pgdat(page), NR_SHMEM, -nr);
 		if (PageTransHuge(page))
 			__dec_zone_page_state(page, NR_SHMEM_THPS);
 	} else {
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index af256d599080..0efda0345aed 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1476,8 +1476,8 @@ tree_unlocked:
 	local_irq_save(flags);
 	__inc_zone_page_state(new_page, NR_SHMEM_THPS);
 	if (nr_none) {
-		__mod_zone_page_state(zone, NR_FILE_PAGES, nr_none);
-		__mod_zone_page_state(zone, NR_SHMEM, nr_none);
+		__mod_node_page_state(zone->zone_pgdat, NR_FILE_PAGES, nr_none);
+		__mod_node_page_state(zone->zone_pgdat, NR_SHMEM, nr_none);
 	}
 	local_irq_restore(flags);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 23b5044f5ced..277dc0cbe780 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3484,9 +3484,10 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 			unsigned long writeback;
 			unsigned long dirty;
 
-			writeback = zone_page_state_snapshot(zone,
+			writeback = node_page_state_snapshot(zone->zone_pgdat,
 							NR_WRITEBACK);
-			dirty = zone_page_state_snapshot(zone, NR_FILE_DIRTY);
+			dirty = node_page_state_snapshot(zone->zone_pgdat,
+							NR_FILE_DIRTY);
 
 			if (2*(writeback + dirty) > reclaimable) {
 				congestion_wait(BLK_RW_ASYNC, HZ/10);
@@ -4396,9 +4397,9 @@ void show_free_areas(unsigned int filter)
 			K(zone->present_pages),
 			K(zone->managed_pages),
 			K(zone_page_state(zone, NR_MLOCK)),
-			K(zone_page_state(zone, NR_FILE_DIRTY)),
-			K(zone_page_state(zone, NR_WRITEBACK)),
-			K(zone_page_state(zone, NR_SHMEM)),
+			K(node_page_state(zone->zone_pgdat, NR_FILE_DIRTY)),
+			K(node_page_state(zone->zone_pgdat, NR_WRITEBACK)),
+			K(node_page_state(zone->zone_pgdat, NR_SHMEM)),
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 			K(zone_page_state(zone, NR_SHMEM_THPS) * HPAGE_PMD_NR),
 			K(zone_page_state(zone, NR_SHMEM_PMDMAPPED)
@@ -4410,12 +4411,12 @@ void show_free_areas(unsigned int filter)
 			zone_page_state(zone, NR_KERNEL_STACK) *
 				THREAD_SIZE / 1024,
 			K(zone_page_state(zone, NR_PAGETABLE)),
-			K(zone_page_state(zone, NR_UNSTABLE_NFS)),
+			K(node_page_state(zone->zone_pgdat,
+					  NR_UNSTABLE_NFS)),
 			K(zone_page_state(zone, NR_BOUNCE)),
 			K(free_pcp),
 			K(this_cpu_read(zone->pageset->pcp.count)),
 			K(zone_page_state(zone, NR_FREE_CMA_PAGES)),
-			K(zone_page_state(zone, NR_WRITEBACK_TEMP)),
+			K(node_page_state(zone->zone_pgdat, NR_WRITEBACK_TEMP)),
 			K(node_page_state(zone->zone_pgdat, NR_PAGES_SCANNED)));
 		printk("lowmem_reserve[]:");
 		for (i = 0; i < MAX_NR_ZONES; i++)
diff --git a/mm/rmap.c b/mm/rmap.c
index 4deff963ea8a..a66f80bc8703 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1296,7 +1296,7 @@ void page_add_file_rmap(struct page *page, bool compound)
 		if (!atomic_inc_and_test(&page->_mapcount))
 			goto out;
 	}
-	__mod_zone_page_state(page_zone(page), NR_FILE_MAPPED, nr);
+	__mod_node_page_state(page_pgdat(page), NR_FILE_MAPPED, nr);
 	mem_cgroup_inc_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
 out:
 	unlock_page_memcg(page);
@@ -1336,7 +1336,7 @@ static void page_remove_file_rmap(struct page *page, bool compound)
 	 * these counters are not modified in interrupt context, and
 	 * pte lock(a spinlock) is held, which implies preemption disabled.
 	 */
-	__mod_zone_page_state(page_zone(page), NR_FILE_MAPPED, -nr);
+	__mod_node_page_state(page_pgdat(page), NR_FILE_MAPPED, -nr);
 	mem_cgroup_dec_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
 
 	if (unlikely(PageMlocked(page)))
diff --git a/mm/shmem.c b/mm/shmem.c
index e5c50fb0d4a4..a03c087f71fe 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -576,8 +576,8 @@ static int shmem_add_to_page_cache(struct page *page,
 		mapping->nrpages += nr;
 		if (PageTransHuge(page))
 			__inc_zone_page_state(page, NR_SHMEM_THPS);
-		__mod_zone_page_state(page_zone(page), NR_FILE_PAGES, nr);
-		__mod_zone_page_state(page_zone(page), NR_SHMEM, nr);
+		__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
+		__mod_node_page_state(page_pgdat(page), NR_SHMEM, nr);
 		spin_unlock_irq(&mapping->tree_lock);
 	} else {
 		page->mapping = NULL;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 07e17dac1793..4702069cc80b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2079,7 +2079,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 		int z;
 		unsigned long total_high_wmark = 0;
 
-		pgdatfree = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
+		pgdatfree = global_page_state(NR_FREE_PAGES);
 		pgdatfile = node_page_state(pgdat, NR_ACTIVE_FILE) +
 			   node_page_state(pgdat, NR_INACTIVE_FILE);