From patchwork Sat Jun 20 20:04:02 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 223615
Date: Sat, 20 Jun 2020 13:04:02 -0700
From: akpm@linux-foundation.org
To: mm-commits@vger.kernel.org, vdavydov.dev@gmail.com,
 stable@vger.kernel.org, shakeelb@google.com, rientjes@google.com,
 penberg@kernel.org, mhocko@kernel.org, iamjoonsoo.kim@lge.com,
 hannes@cmpxchg.org, guro@fb.com, cl@linux.com, longman@redhat.com
Subject: + mm-slab-fix-sign-conversion-problem-in-memcg_uncharge_slab.patch added to -mm tree
Message-ID: <20200620200402.Cd06t%akpm@linux-foundation.org>

The patch titled
     Subject: mm, slab: fix sign conversion problem in memcg_uncharge_slab()
has been added to the -mm tree.  Its filename is
     mm-slab-fix-sign-conversion-problem-in-memcg_uncharge_slab.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-slab-fix-sign-conversion-problem-in-memcg_uncharge_slab.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-slab-fix-sign-conversion-problem-in-memcg_uncharge_slab.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Waiman Long
Subject: mm, slab: fix sign conversion problem in memcg_uncharge_slab()

It was found that running the LTP test on a PowerPC system could produce
erroneous values in /proc/meminfo, like:

  MemTotal:       531915072 kB
  MemFree:        507962176 kB
  MemAvailable:   1100020596352 kB

Using bisection, the problem was tracked down to commit 9c315e4d7d8c
("mm: memcg/slab: cache page number in memcg_(un)charge_slab()").

In memcg_uncharge_slab() with an "int order" argument:

	unsigned int nr_pages = 1 << order;
	  :
	mod_lruvec_state(lruvec, cache_vmstat_idx(s), -nr_pages);

The mod_lruvec_state() function eventually calls __mod_zone_page_state(),
which accepts a long argument.  Depending on the compiler and how inlining
is done, "-nr_pages" may be treated as a negative number or as a very
large positive number.  Apparently, it was treated as a large positive
number on that PowerPC system, leading to incorrect stat counts.  The
problem has not been seen on x86-64 yet; perhaps the gcc compiler there
behaves slightly differently.

Fix the problem by making nr_pages a signed value.  For consistency, a
similar change is applied to memcg_charge_slab() as well.

Link: http://lkml.kernel.org/r/20200620184719.10994-1-longman@redhat.com
Fixes: 9c315e4d7d8c ("mm: memcg/slab: cache page number in memcg_(un)charge_slab()")
Signed-off-by: Waiman Long
Acked-by: Roman Gushchin
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Shakeel Butt
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vladimir Davydov
Cc:
Signed-off-by: Andrew Morton
---

 mm/slab.h |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/mm/slab.h~mm-slab-fix-sign-conversion-problem-in-memcg_uncharge_slab
+++ a/mm/slab.h
@@ -348,7 +348,7 @@ static __always_inline int memcg_charge_
 					    gfp_t gfp, int order,
 					    struct kmem_cache *s)
 {
-	unsigned int nr_pages = 1 << order;
+	int nr_pages = 1 << order;
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
 	int ret;
@@ -388,7 +388,7 @@ out:
 static __always_inline void memcg_uncharge_slab(struct page *page, int order,
 						struct kmem_cache *s)
 {
-	unsigned int nr_pages = 1 << order;
+	int nr_pages = 1 << order;
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;