From patchwork Fri Mar 5 12:22:03 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 394941
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Zi Yan, Mike Kravetz,
    Davidlohr Bueso, "Kirill A . Shutemov", Andrea Arcangeli,
    Matthew Wilcox, Oscar Salvador, Joao Martins, Andrew Morton,
    Linus Torvalds
Subject: [PATCH 4.14 04/39] hugetlb: fix update_and_free_page contig page struct assumption
Date: Fri, 5 Mar 2021 13:22:03 +0100
Message-Id: <20210305120851.973504079@linuxfoundation.org>
In-Reply-To: <20210305120851.751937389@linuxfoundation.org>
References: <20210305120851.751937389@linuxfoundation.org>

From: Mike Kravetz

commit dbfee5aee7e54f83d96ceb8e3e80717fac62ad63 upstream.

page structs are not guaranteed to be contiguous for gigantic pages.  The
routine update_and_free_page can encounter a gigantic page, yet it assumes
page structs are contiguous when setting page flags in subpages.

If update_and_free_page encounters non-contiguous page structs, we can
see "BUG: Bad page state in process ..." errors.

Non-contiguous page structs are generally not an issue.  However, they
can exist with a specific kernel configuration and hotplug operations.
For example: Configure the kernel with CONFIG_SPARSEMEM and
!CONFIG_SPARSEMEM_VMEMMAP.  Then, hotplug add memory for the area where
the gigantic page will be allocated.  Zi Yan outlined steps to reproduce
here [1].
[1] https://lore.kernel.org/linux-mm/16F7C58B-4D79-41C5-9B64-A1A1628F4AF2@nvidia.com/

Link: https://lkml.kernel.org/r/20210217184926.33567-1-mike.kravetz@oracle.com
Fixes: 944d9fec8d7a ("hugetlb: add support for gigantic page allocation at runtime")
Signed-off-by: Zi Yan
Signed-off-by: Mike Kravetz
Cc: Zi Yan
Cc: Davidlohr Bueso
Cc: "Kirill A . Shutemov"
Cc: Andrea Arcangeli
Cc: Matthew Wilcox
Cc: Oscar Salvador
Cc: Joao Martins
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/hugetlb.c |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1208,14 +1208,16 @@ static inline int alloc_fresh_gigantic_p
 static void update_and_free_page(struct hstate *h, struct page *page)
 {
 	int i;
+	struct page *subpage = page;
 
 	if (hstate_is_gigantic(h) && !gigantic_page_supported())
 		return;
 
 	h->nr_huge_pages--;
 	h->nr_huge_pages_node[page_to_nid(page)]--;
-	for (i = 0; i < pages_per_huge_page(h); i++) {
-		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
+	for (i = 0; i < pages_per_huge_page(h);
+	     i++, subpage = mem_map_next(subpage, page, i)) {
+		subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
 				1 << PG_active | 1 << PG_private |
 				1 << PG_writeback);
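
For context on the helper the fix relies on: mem_map_next() advances the
iteration by recomputing the struct page pointer from the pfn whenever it
crosses a MAX_ORDER boundary, which is exactly where the memmap can be
discontiguous on SPARSEMEM kernels without VMEMMAP.  As a non-authoritative
sketch, the helper in mm/internal.h of this era looks approximately like
this (comments added here for illustration):

static inline struct page *mem_map_next(struct page *iter,
					struct page *base, int i)
{
	/*
	 * struct pages are only guaranteed contiguous within a
	 * MAX_ORDER-aligned block; on a boundary, recompute the
	 * pointer from the pfn instead of doing iter + 1.
	 */
	if (unlikely((i & (MAX_ORDER_NR_PAGES - 1)) == 0)) {
		unsigned long pfn = page_to_pfn(base) + i;

		if (!pfn_valid(pfn))
			return NULL;
		return pfn_to_page(pfn);
	}
	return iter + 1;
}

The common case remains a cheap iter + 1; the pfn_valid()/pfn_to_page()
lookup is paid only once per MAX_ORDER_NR_PAGES subpages, so the fixed
loop costs essentially the same as the old pointer arithmetic while
staying correct across section boundaries.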