From patchwork Sat Mar 13 05:08:33 2021
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 400067
Date: Fri, 12 Mar 2021 21:08:33 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, chenweilong@huawei.com, dingtianhong@huawei.com,
 guohanjun@huawei.com, hannes@cmpxchg.org, hughd@google.com,
 kirill.shutemov@linux.intel.com, linux-mm@kvack.org, mhocko@suse.com,
 mm-commits@vger.kernel.org, npiggin@gmail.com, rui.xiang@huawei.com,
 shakeelb@google.com, stable@vger.kernel.org, torvalds@linux-foundation.org,
 wangkefeng.wang@huawei.com, zhouguanghui1@huawei.com, ziy@nvidia.com
Subject: [patch 27/29] mm/memcg: set memcg when splitting page
Message-ID: <20210313050833.w4W0nDDFK%akpm@linux-foundation.org>
In-Reply-To: <20210312210632.9b7d62973d72a56fb13c7a03@linux-foundation.org>
X-Mailing-List: stable@vger.kernel.org

From: Zhou Guanghui
Subject: mm/memcg: set memcg when splitting page

As described in the split_page() comment, the sub-pages of a non-compound
high-order page must be freed individually.  If only the memcg of the first
page is valid, the tail pages cannot be uncharged when they are freed.

For example, when alloc_pages_exact() is used to allocate 1MB of contiguous
physical memory, 2MB is charged (kmemcg is enabled and __GFP_ACCOUNT is
set).  When make_alloc_exact() frees the unused 1MB and free_pages_exact()
frees the requested 1MB, only 4KB (one page) is actually uncharged.

Therefore, the memcg of the tail pages needs to be set when splitting a
page.

Michel:

There are at least two explicit users of __GFP_ACCOUNT with
alloc_pages_exact() added recently.  See 7efe8ef274024 ("KVM: arm64:
Allocate stage-2 pgd pages with GFP_KERNEL_ACCOUNT") and c419621873713
("KVM: s390: Add memcg accounting to KVM allocations"), so this is not
just a theoretical issue.
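To make the charging imbalance concrete, here is a minimal, hypothetical
kernel-side sketch of the scenario described above.  The function name
memcg_split_example and the exact request size are illustrative only and
are not part of this patch:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/sizes.h>

static void memcg_split_example(void)
{
	/*
	 * Request a size that is not a power of two (a little over 1MB
	 * here).  alloc_pages_exact() rounds the request up to the next
	 * power-of-two order (2MB), and with __GFP_ACCOUNT that whole
	 * 2MB is charged to the current memcg.
	 */
	void *buf = alloc_pages_exact(SZ_1M + SZ_4K,
				      GFP_KERNEL | __GFP_ACCOUNT);

	if (!buf)
		return;

	/*
	 * Internally, make_alloc_exact() calls split_page() and frees the
	 * unused tail of the allocation page by page.  Before this patch,
	 * only the head page carried memcg information, so those tail
	 * pages were freed without being uncharged.
	 */

	/*
	 * Freeing the requested region likewise uncharges only the head
	 * page (4KB); the rest of the 2MB charge is leaked.
	 */
	free_pages_exact(buf, SZ_1M + SZ_4K);
}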
Link: https://lkml.kernel.org/r/20210304074053.65527-3-zhouguanghui1@huawei.com
Signed-off-by: Zhou Guanghui
Acked-by: Johannes Weiner
Reviewed-by: Zi Yan
Reviewed-by: Shakeel Butt
Acked-by: Michal Hocko
Cc: Hanjun Guo
Cc: Hugh Dickins
Cc: Kefeng Wang
Cc: "Kirill A. Shutemov"
Cc: Nicholas Piggin
Cc: Rui Xiang
Cc: Tianhong Ding
Cc: Weilong Chen
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
---

 mm/page_alloc.c |    1 +
 1 file changed, 1 insertion(+)

--- a/mm/page_alloc.c~mm-memcg-set-memcg-when-split-page
+++ a/mm/page_alloc.c
@@ -3314,6 +3314,7 @@ void split_page(struct page *page, unsig
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
 	split_page_owner(page, 1 << order);
+	split_page_memcg(page, 1 << order);
 }
 EXPORT_SYMBOL_GPL(split_page);
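For context, a rough sketch of what split_page_memcg() does on kernels of
this vintage (the authoritative implementation lives in mm/memcontrol.c;
this is an approximation of the v5.12-era code, not part of this patch):
it propagates the head page's memcg binding to every tail page and takes
matching css references, so each sub-page can later be uncharged on its
own when it is freed.

void split_page_memcg(struct page *head, unsigned int nr)
{
	struct mem_cgroup *memcg = page_memcg(head);
	int i;

	if (mem_cgroup_disabled() || !memcg)
		return;

	/* Copy the head page's charge information to each tail page. */
	for (i = 1; i < nr; i++)
		head[i].memcg_data = head->memcg_data;

	/*
	 * Take one extra css reference per tail page so that every
	 * sub-page can drop its reference independently when freed.
	 */
	css_get_many(&memcg->css, nr - 1);
}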