From patchwork Wed Feb 24 15:39:49 2021
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 387657
From: Mike Rapoport
To: Andrew Morton
Cc: Andrea Arcangeli, Baoquan He, Borislav Petkov, Chris Wilson,
    David Hildenbrand, "H. Peter Anvin", Ingo Molnar, Linus Torvalds,
    Łukasz Majczak, Mel Gorman, Michal Hocko, Mike Rapoport, Qian Cai,
    "Sarvela, Tomi P", Thomas Gleixner, Vlastimil Babka,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    stable@vger.kernel.org, x86@kernel.org
Subject: [PATCH v7 0/1] mm: fix initialization of struct page for holes in memory layout
Date: Wed, 24 Feb 2021 17:39:49 +0200
Message-Id: <20210224153950.20789-1-rppt@kernel.org>

From: Mike Rapoport

Hi,

@Andrew, this is based on v5.11-mmotm-2021-02-18-18-29 with the previous
version reverted.

Commit 73a6e474cb37 ("mm: memmap_init: iterate over memblock regions
rather that check each PFN") exposed several issues with the memory map
initialization, and these patches fix those issues.

Initially there were crashes during compaction that Qian Cai reported
back in April [1]. It seemed back then that the problem was fixed, but a
few weeks ago Andrea Arcangeli hit the same bug [2] and there was an
additional discussion at [3].

I didn't appreciate the variety of ways BIOSes can report memory in the
first megabyte, so previous versions of this set caused all kinds of
trouble. The last version, which implicitly extended the node/zone to
cover the complete section, might also have unexpected side effects, so
this time I'm trying to move forward in baby steps.
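For context, here is a condensed sketch (not the exact kernel code) of
the loop that commit 73a6e474cb37 introduced: memmap_init() walks only
the ranges memblock knows about instead of probing every PFN, which is
why struct pages in the holes between regions are never visited and keep
their zeroed zone/node links:

	void __meminit memmap_init(unsigned long size, int nid,
				   unsigned long zone,
				   unsigned long range_start_pfn)
	{
		unsigned long range_end_pfn = range_start_pfn + size;
		unsigned long start_pfn, end_pfn;
		int i;

		for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
			/* clip the memblock range to the zone being initialized */
			start_pfn = clamp(start_pfn, range_start_pfn, range_end_pfn);
			end_pfn = clamp(end_pfn, range_start_pfn, range_end_pfn);

			if (end_pfn > start_pfn)
				memmap_init_zone(end_pfn - start_pfn, nid, zone,
						 start_pfn, MEMINIT_EARLY, NULL);
			/*
			 * PFNs that fall into holes between memblock regions
			 * are never visited here, so their struct pages keep
			 * the zeroed (zone 0, node 0) links they started with.
			 */
		}
	}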
This is mostly a return to the first version that simply merges
init_unavailable_pages() into memmap_init(), so that the only effective
change would be more sensible zone/node links in unavailable struct
pages (a condensed sketch of the resulting function follows the diffstat
below).

For now, I've dropped the patch that tried to make ZONE_DMA span pfn 0
because it didn't cause any issues for a really long time and there are
way too many hidden mines around this. I have an ugly workaround for the
"pfn 0" issue that IMHO is the safest way to deal with it until it can
be gradually fixed properly:

https://git.kernel.org/pub/scm/linux/kernel/git/rppt/linux.git/commit/?h=meminit/pfn0&id=90272f37151c6e1bc2610997310c51f4e984cf2f

v7:
* add handling of a section end that spans beyond the populated zones

v6: https://lore.kernel.org/lkml/20210222105728.28636-1-rppt@kernel.org
* only interleave initialization of unavailable pages in memmap_init(),
  so that it essentially includes init_unavailable_pages()

v5: https://lore.kernel.org/lkml/20210208110820.6269-1-rppt@kernel.org
* extend node/zone spans to cover complete sections; this allows
  interleaving the initialization of unavailable pages with the "normal"
  memory map init
* drop modifications to x86 early setup

v4: https://lore.kernel.org/lkml/20210130221035.4169-1-rppt@kernel.org/
* make sure pages in the range 0 - start_pfn_of_lowest_zone are
  initialized even if an architecture hides them from the generic mm
* finally make pfn 0 on x86 a part of the memory visible to the generic
  mm as reserved memory

v3: https://lore.kernel.org/lkml/20210111194017.22696-1-rppt@kernel.org
* use architectural zone constraints to set zone links for struct pages
  corresponding to the holes
* drop implicit update of memblock.memory
* add a patch that sets pfn 0 to E820_TYPE_RAM on x86

v2: https://lore.kernel.org/lkml/20201209214304.6812-1-rppt@kernel.org/
* added a patch that adds all regions in memblock.reserved that do not
  overlap with memblock.memory to memblock.memory at the beginning of
  free_area_init()

[1] https://lore.kernel.org/lkml/8C537EB7-85EE-4DCF-943E-3CC0ED0DF56D@lca.pw
[2] https://lore.kernel.org/lkml/20201121194506.13464-1-aarcange@redhat.com
[3] https://lore.kernel.org/mm-commits/20201206005401.qKuAVgOXr%akpm@linux-foundation.org

Mike Rapoport (1):
  mm/page_alloc.c: refactor initialization of struct page for holes in
    memory layout

 mm/page_alloc.c | 147 +++++++++++++++++++++---------------------------
 1 file changed, 64 insertions(+), 83 deletions(-)
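As promised above, here is a condensed sketch of the shape memmap_init()
takes with init_unavailable_pages() folded into it. It follows the shape
of the actual patch but is not its verbatim text: holes between memblock
regions now get their struct pages initialized with the zone/node being
set up, and the v7 addition handles the tail between the zone end and
the end of the last section:

	void __init memmap_init(unsigned long size, int nid,
				unsigned long zone,
				unsigned long range_start_pfn)
	{
		static unsigned long hole_pfn;	/* end of the last range seen */
		unsigned long range_end_pfn = range_start_pfn + size;
		unsigned long start_pfn, end_pfn;
		int i;

		for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
			start_pfn = clamp(start_pfn, range_start_pfn, range_end_pfn);
			end_pfn = clamp(end_pfn, range_start_pfn, range_end_pfn);

			if (end_pfn > start_pfn)
				memmap_init_zone(end_pfn - start_pfn, nid, zone,
						 start_pfn, MEMINIT_EARLY, NULL);

			/* link the pages in the hole before this range to us */
			if (hole_pfn < start_pfn)
				init_unavailable_range(hole_pfn, start_pfn,
						       zone, nid);
			hole_pfn = end_pfn;
		}

	#ifdef CONFIG_SPARSEMEM
		/*
		 * v7: if the zone ends in the middle of a section, initialize
		 * the remainder of that section as well; a higher zone will
		 * re-initialize it if it actually owns those pages.
		 */
		end_pfn = round_up(range_end_pfn, PAGES_PER_SECTION);
		if (hole_pfn < end_pfn)
			init_unavailable_range(hole_pfn, end_pfn, zone, nid);
	#endif
	}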