From patchwork Tue Sep 13 19:54:48 2022
X-Patchwork-Submitter: Doug Berger
X-Patchwork-Id: 605435
From: Doug Berger
To: Andrew Morton
Cc: Jonathan Corbet, Rob Herring, Krzysztof Kozlowski, Frank Rowand, Mike Kravetz, Muchun Song, Mike Rapoport, Christoph Hellwig, Marek Szyprowski, Robin Murphy, Borislav Petkov, "Paul E. McKenney", Neeraj Upadhyay, Randy Dunlap, Damien Le Moal, Doug Berger, Florian Fainelli, David Hildenbrand, Zi Yan, Oscar Salvador, Hari Bathini, Kees Cook, KOSAKI Motohiro, Mel Gorman, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux.dev
Subject: [PATCH 01/21] mm/page_isolation: protect cma from isolate_single_pageblock
Date: Tue, 13 Sep 2022 12:54:48 -0700
Message-Id: <20220913195508.3511038-2-opendmb@gmail.com>
In-Reply-To: <20220913195508.3511038-1-opendmb@gmail.com>
References: <20220913195508.3511038-1-opendmb@gmail.com>

The function set_migratetype_isolate() has special handling for pageblocks of MIGRATE_CMA type that protects them from being isolated for MIGRATE_MOVABLE requests.

Since isolate_single_pageblock() does not receive the migratetype argument of start_isolate_page_range(), it used the migratetype of the pageblock instead of the requested migratetype, which defeats this MIGRATE_CMA check. This allows an attempt to create a gigantic page within a CMA region to change the migratetype of the first and last pageblocks from MIGRATE_CMA to MIGRATE_MOVABLE when they are restored after failure, which corrupts the CMA region.

The calls to (un)set_migratetype_isolate() for the first and last pageblocks of start_isolate_page_range() are moved back into that function to allow access to its migratetype argument and to make it easier to see how all of the pageblocks in the range are isolated.
Fixes: b2c9e2fbba32 ("mm: make alloc_contig_range work at pageblock granularity")
Signed-off-by: Doug Berger
Reported-by: kernel test robot
---
 mm/page_isolation.c | 75 +++++++++++++++++++++------------------------
 1 file changed, 35 insertions(+), 40 deletions(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 9d73dc38e3d7..8e16aa22cb61 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -286,8 +286,6 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
  * @flags:		isolation flags
  * @gfp_flags:		GFP flags used for migrating pages
  * @isolate_before:	isolate the pageblock before the boundary_pfn
- * @skip_isolation:	the flag to skip the pageblock isolation in second
- *			isolate_single_pageblock()
  *
  * Free and in-use pages can be as big as MAX_ORDER-1 and contain more than one
  * pageblock. When not all pageblocks within a page are isolated at the same
@@ -302,9 +300,8 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
  * the in-use page then splitting the free page.
  */
 static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
-			gfp_t gfp_flags, bool isolate_before, bool skip_isolation)
+			gfp_t gfp_flags, bool isolate_before)
 {
-	unsigned char saved_mt;
 	unsigned long start_pfn;
 	unsigned long isolate_pageblock;
 	unsigned long pfn;
@@ -328,18 +325,6 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
 	start_pfn = max(ALIGN_DOWN(isolate_pageblock, MAX_ORDER_NR_PAGES),
				zone->zone_start_pfn);
 
-	saved_mt = get_pageblock_migratetype(pfn_to_page(isolate_pageblock));
-
-	if (skip_isolation)
-		VM_BUG_ON(!is_migrate_isolate(saved_mt));
-	else {
-		ret = set_migratetype_isolate(pfn_to_page(isolate_pageblock), saved_mt, flags,
-				isolate_pageblock, isolate_pageblock + pageblock_nr_pages);
-
-		if (ret)
-			return ret;
-	}
-
 	/*
 	 * Bail out early when the to-be-isolated pageblock does not form
 	 * a free or in-use page across boundary_pfn:
@@ -428,7 +413,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
 				ret = set_migratetype_isolate(page, page_mt, flags,
							head_pfn, head_pfn + nr_pages);
 				if (ret)
-					goto failed;
+					return ret;
 			}
 
 			ret = __alloc_contig_migrate_range(&cc, head_pfn,
@@ -443,7 +428,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
 				unset_migratetype_isolate(page, page_mt);
 
 			if (ret)
-				goto failed;
+				return -EBUSY;
 			/*
 			 * reset pfn to the head of the free page, so
 			 * that the free page handling code above can split
@@ -459,24 +444,19 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
 				while (!PageBuddy(pfn_to_page(outer_pfn))) {
 					/* stop if we cannot find the free page */
 					if (++order >= MAX_ORDER)
-						goto failed;
+						return -EBUSY;
 					outer_pfn &= ~0UL << order;
 				}
 				pfn = outer_pfn;
 				continue;
 			} else
 #endif
-				goto failed;
+				return -EBUSY;
 		}
 
 		pfn++;
 	}
 	return 0;
-failed:
-	/* restore the original migratetype */
-	if (!skip_isolation)
-		unset_migratetype_isolate(pfn_to_page(isolate_pageblock), saved_mt);
-	return -EBUSY;
 }
 
 /**
@@ -534,21 +514,30 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 	unsigned long isolate_start = ALIGN_DOWN(start_pfn, pageblock_nr_pages);
 	unsigned long isolate_end = ALIGN(end_pfn, pageblock_nr_pages);
 	int ret;
-	bool skip_isolation = false;
 
 	/* isolate [isolate_start, isolate_start + pageblock_nr_pages) pageblock */
-	ret = isolate_single_pageblock(isolate_start, flags, gfp_flags, false, skip_isolation);
+	ret = set_migratetype_isolate(pfn_to_page(isolate_start), migratetype,
			flags, isolate_start, isolate_start + pageblock_nr_pages);
 	if (ret)
 		return ret;
-
-	if (isolate_start == isolate_end - pageblock_nr_pages)
-		skip_isolation = true;
+	ret = isolate_single_pageblock(isolate_start, flags, gfp_flags, false);
+	if (ret)
+		goto unset_start_block;
 
 	/* isolate [isolate_end - pageblock_nr_pages, isolate_end) pageblock */
-	ret = isolate_single_pageblock(isolate_end, flags, gfp_flags, true, skip_isolation);
+	pfn = isolate_end - pageblock_nr_pages;
+	if (isolate_start != pfn) {
+		ret = set_migratetype_isolate(pfn_to_page(pfn), migratetype,
+				flags, pfn, pfn + pageblock_nr_pages);
+		if (ret)
+			goto unset_start_block;
+	}
+	ret = isolate_single_pageblock(isolate_end, flags, gfp_flags, true);
 	if (ret) {
-		unset_migratetype_isolate(pfn_to_page(isolate_start), migratetype);
-		return ret;
+		if (isolate_start != pfn)
+			goto unset_end_block;
+		else
+			goto unset_start_block;
 	}
 
 	/* skip isolated pageblocks at the beginning and end */
@@ -557,15 +546,21 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 	     pfn += pageblock_nr_pages) {
 		page = __first_valid_page(pfn, pageblock_nr_pages);
 		if (page && set_migratetype_isolate(page, migratetype, flags,
-					start_pfn, end_pfn)) {
-			undo_isolate_page_range(isolate_start, pfn, migratetype);
-			unset_migratetype_isolate(
-				pfn_to_page(isolate_end - pageblock_nr_pages),
-				migratetype);
-			return -EBUSY;
-		}
+					start_pfn, end_pfn))
+			goto unset_isolated_blocks;
 	}
 	return 0;
+
+unset_isolated_blocks:
+	ret = -EBUSY;
+	undo_isolate_page_range(isolate_start + pageblock_nr_pages, pfn,
+				migratetype);
+unset_end_block:
+	unset_migratetype_isolate(pfn_to_page(isolate_end - pageblock_nr_pages),
+				  migratetype);
+unset_start_block:
+	unset_migratetype_isolate(pfn_to_page(isolate_start), migratetype);
+	return ret;
 }
 
 /*

From patchwork Tue Sep 13 19:54:50 2022
X-Patchwork-Submitter: Doug Berger
X-Patchwork-Id: 605434
From: Doug Berger
To: Andrew Morton
Subject: [PATCH 03/21] mm/hugetlb: correct demote page offset logic
Date: Tue, 13 Sep 2022 12:54:50 -0700
Message-Id: <20220913195508.3511038-4-opendmb@gmail.com>
In-Reply-To: <20220913195508.3511038-1-opendmb@gmail.com>
References: <20220913195508.3511038-1-opendmb@gmail.com>

With gigantic pages it may not be true that struct page structures are contiguous across the entire gigantic page. The mem_map_offset function is used here in place of direct pointer arithmetic to correct for this.

Fixes: 8531fc6f52f5 ("hugetlb: add hugetlb demote page support")
Signed-off-by: Doug Berger
Acked-by: Muchun Song
---
 mm/hugetlb.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 79949893ac12..a1d51a1f0404 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3420,6 +3420,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 {
 	int i, nid = page_to_nid(page);
 	struct hstate *target_hstate;
+	struct page *subpage;
 	int rc = 0;
 
 	target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);
@@ -3453,15 +3454,16 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 	mutex_lock(&target_hstate->resize_lock);
 	for (i = 0; i < pages_per_huge_page(h);
				i += pages_per_huge_page(target_hstate)) {
+		subpage = mem_map_offset(page, i);
 		if (hstate_is_gigantic(target_hstate))
-			prep_compound_gigantic_page_for_demote(page + i,
+			prep_compound_gigantic_page_for_demote(subpage,
							target_hstate->order);
 		else
-			prep_compound_page(page + i, target_hstate->order);
-		set_page_private(page + i, 0);
-		set_page_refcounted(page + i);
-		prep_new_huge_page(target_hstate, page + i, nid);
-		put_page(page + i);
+			prep_compound_page(subpage, target_hstate->order);
+		set_page_private(subpage, 0);
+		set_page_refcounted(subpage);
+		prep_new_huge_page(target_hstate, subpage, nid);
+		put_page(subpage);
 	}
 	mutex_unlock(&target_hstate->resize_lock);

From patchwork Tue Sep 13 19:54:51 2022
X-Patchwork-Submitter: Doug Berger
X-Patchwork-Id: 605433
From: Doug Berger
To: Andrew Morton
Subject: [PATCH 04/21] mm/hugetlb: refactor alloc_and_dissolve_huge_page
Date: Tue, 13 Sep 2022 12:54:51 -0700
Message-Id: <20220913195508.3511038-5-opendmb@gmail.com>
In-Reply-To: <20220913195508.3511038-1-opendmb@gmail.com>
References: <20220913195508.3511038-1-opendmb@gmail.com>

The alloc_replacement_page() and replace_hugepage() functions are created from code in the alloc_and_dissolve_huge_page() function to allow their reuse by the next commit.

Signed-off-by: Doug Berger
---
 mm/hugetlb.c | 84 +++++++++++++++++++++++++++++++---------------------
 1 file changed, 51 insertions(+), 33 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a1d51a1f0404..f232a37df4b6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2709,32 +2709,22 @@ void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
 }
 
 /*
- * alloc_and_dissolve_huge_page - Allocate a new page and dissolve the old one
- * @h: struct hstate old page belongs to
- * @old_page: Old page to dissolve
- * @list: List to isolate the page in case we need to
- * Returns 0 on success, otherwise negated error.
+ * Before dissolving the page, we need to allocate a new one for the
+ * pool to remain stable.  Here, we allocate the page and 'prep' it
+ * by doing everything but actually updating counters and adding to
+ * the pool.  This simplifies and let us do most of the processing
+ * under the lock.
 */
-static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
-					struct list_head *list)
+static struct page *alloc_replacement_page(struct hstate *h, int nid)
 {
 	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
-	int nid = page_to_nid(old_page);
 	bool alloc_retry = false;
 	struct page *new_page;
-	int ret = 0;
 
-	/*
-	 * Before dissolving the page, we need to allocate a new one for the
-	 * pool to remain stable.  Here, we allocate the page and 'prep' it
-	 * by doing everything but actually updating counters and adding to
-	 * the pool.  This simplifies and let us do most of the processing
-	 * under the lock.
-	 */
alloc_retry:
 	new_page = alloc_buddy_huge_page(h, gfp_mask, nid, NULL, NULL);
 	if (!new_page)
-		return -ENOMEM;
+		return ERR_PTR(-ENOMEM);
 	/*
 	 * If all goes well, this page will be directly added to the free
 	 * list in the pool. For this the ref count needs to be zero.
@@ -2748,7 +2738,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 	SetHPageTemporary(new_page);
 	if (!put_page_testzero(new_page)) {
 		if (alloc_retry)
-			return -EBUSY;
+			return ERR_PTR(-EBUSY);
 
 		alloc_retry = true;
 		goto alloc_retry;
@@ -2757,6 +2747,48 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 
 	__prep_new_huge_page(h, new_page);
 
+	return new_page;
+}
+
+static void replace_hugepage(struct hstate *h, int nid, struct page *old_page,
+			     struct page *new_page)
+{
+	lockdep_assert_held(&hugetlb_lock);
+	/*
+	 * Ok, old_page is still a genuine free hugepage. Remove it from
+	 * the freelist and decrease the counters. These will be
+	 * incremented again when calling __prep_account_new_huge_page()
+	 * and enqueue_huge_page() for new_page. The counters will remain
+	 * stable since this happens under the lock.
+	 */
+	remove_hugetlb_page(h, old_page, false);
+
+	/*
+	 * Ref count on new page is already zero as it was dropped
+	 * earlier. It can be directly added to the pool free list.
+	 */
+	__prep_account_new_huge_page(h, nid);
+	enqueue_huge_page(h, new_page);
+}
+
+/*
+ * alloc_and_dissolve_huge_page - Allocate a new page and dissolve the old one
+ * @h: struct hstate old page belongs to
+ * @old_page: Old page to dissolve
+ * @list: List to isolate the page in case we need to
+ * Returns 0 on success, otherwise negated error.
+ */
+static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
+					struct list_head *list)
+{
+	int nid = page_to_nid(old_page);
+	struct page *new_page;
+	int ret = 0;
+
+	new_page = alloc_replacement_page(h, nid);
+	if (IS_ERR(new_page))
+		return PTR_ERR(new_page);
+
 retry:
 	spin_lock_irq(&hugetlb_lock);
 	if (!PageHuge(old_page)) {
@@ -2783,21 +2815,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 		cond_resched();
 		goto retry;
 	} else {
-		/*
-		 * Ok, old_page is still a genuine free hugepage. Remove it from
-		 * the freelist and decrease the counters. These will be
-		 * incremented again when calling __prep_account_new_huge_page()
-		 * and enqueue_huge_page() for new_page. The counters will remain
-		 * stable since this happens under the lock.
-		 */
-		remove_hugetlb_page(h, old_page, false);
-
-		/*
-		 * Ref count on new page is already zero as it was dropped
-		 * earlier. It can be directly added to the pool free list.
-		 */
-		__prep_account_new_huge_page(h, nid);
-		enqueue_huge_page(h, new_page);
+		replace_hugepage(h, nid, old_page, new_page);
 
 		/*
 		 * Pages have been replaced, we can safely free the old one.
From patchwork Tue Sep 13 19:54:54 2022
X-Patchwork-Submitter: Doug Berger
X-Patchwork-Id: 605432
From: Doug Berger
To: Andrew Morton
Subject: [PATCH 07/21] lib/show_mem.c: display MovableOnly
Date: Tue, 13 Sep 2022 12:54:54 -0700
Message-Id: <20220913195508.3511038-8-opendmb@gmail.com>
In-Reply-To: <20220913195508.3511038-1-opendmb@gmail.com>
References: <20220913195508.3511038-1-opendmb@gmail.com>

The commit message for c78e93630d15 ("mm: do not walk all of system memory during show_mem") indicates it "also corrects the reporting of HighMem as HighMem/MovableOnly as ZONE_MOVABLE has similar problems to HighMem with respect to lowmem/highmem exhaustion."

Presuming the similar problems are with regard to the general exclusion of kernel allocations from either zone, I believe it makes sense to include all ZONE_MOVABLE memory even on systems without HighMem.

To the extent that this was the intent of the original commit I have included a "Fixes" tag, but it seems unnecessary to submit to linux-stable.
Fixes: c78e93630d15 ("mm: do not walk all of system memory during show_mem")
Signed-off-by: Doug Berger
---
 lib/show_mem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/show_mem.c b/lib/show_mem.c
index 1c26c14ffbb9..337c870a5e59 100644
--- a/lib/show_mem.c
+++ b/lib/show_mem.c
@@ -27,7 +27,7 @@ void show_mem(unsigned int filter, nodemask_t *nodemask)
 			total += zone->present_pages;
 			reserved += zone->present_pages - zone_managed_pages(zone);
 
-			if (is_highmem_idx(zoneid))
+			if (zoneid == ZONE_MOVABLE || is_highmem_idx(zoneid))
 				highmem += zone->present_pages;
 		}
 	}

From patchwork Tue Sep 13 19:54:56 2022
X-Patchwork-Submitter: Doug Berger
X-Patchwork-Id: 605431
From: Doug Berger
To: Andrew Morton
Subject: [PATCH 09/21] mm/page_alloc: calculate node_spanned_pages from pfns
Date: Tue, 13 Sep 2022 12:54:56 -0700
Message-Id: <20220913195508.3511038-10-opendmb@gmail.com>
In-Reply-To: <20220913195508.3511038-1-opendmb@gmail.com>
References: <20220913195508.3511038-1-opendmb@gmail.com>

Since the start and end pfns of the node are passed as arguments to calculate_node_totalpages(), they might as well be used to specify the node_spanned_pages value for the node rather than accumulating the spans of member zones. This avoids the need for additional adjustments if zones are allowed to overlap.
Signed-off-by: Doug Berger
---
 mm/page_alloc.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6bf76bbc0308..b6074961fb59 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7452,7 +7452,7 @@ static void __init calculate_node_totalpages(struct pglist_data *pgdat,
 						unsigned long node_start_pfn,
 						unsigned long node_end_pfn)
 {
-	unsigned long realtotalpages = 0, totalpages = 0;
+	unsigned long realtotalpages = 0;
 	enum zone_type i;
 
 	for (i = 0; i < MAX_NR_ZONES; i++) {
@@ -7483,11 +7483,10 @@ static void __init calculate_node_totalpages(struct pglist_data *pgdat,
 		zone->present_early_pages = real_size;
 #endif
 
-		totalpages += size;
 		realtotalpages += real_size;
 	}
 
-	pgdat->node_spanned_pages = totalpages;
+	pgdat->node_spanned_pages = node_end_pfn - node_start_pfn;
 	pgdat->node_present_pages = realtotalpages;
 	pr_debug("On node %d totalpages: %lu\n", pgdat->node_id, realtotalpages);
 }

From patchwork Tue Sep 13 19:54:58 2022
X-Patchwork-Submitter: Doug Berger
X-Patchwork-Id: 605430
From: Doug Berger
To: Andrew Morton
Subject: [PATCH 11/21] mm/page_alloc: introduce init_reserved_pageblock()
Date: Tue, 13 Sep 2022 12:54:58 -0700
Message-Id: <20220913195508.3511038-12-opendmb@gmail.com>
In-Reply-To: <20220913195508.3511038-1-opendmb@gmail.com>

Most of the implementation of init_cma_reserved_pageblock() is common
to the initialization of any reserved pageblock for use by the page
allocator. This commit breaks that functionality out into the new
common function init_reserved_pageblock() for use by code other than
CMA. The CMA-specific code is relocated from page_alloc to the point
where init_cma_reserved_pageblock() was invoked, and the new function
is used there instead. The error path is also updated to use the new
function to operate on pageblocks rather than individual pages.
Signed-off-by: Doug Berger
---
 include/linux/gfp.h |  5 +----
 mm/cma.c            | 15 +++++++++++----
 mm/page_alloc.c     |  8 ++------
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index f314be58fa77..71ed687be406 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -367,9 +367,6 @@ extern struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
 #endif
 void free_contig_range(unsigned long pfn, unsigned long nr_pages);
 
-#ifdef CONFIG_CMA
-/* CMA stuff */
-extern void init_cma_reserved_pageblock(struct page *page);
-#endif
+extern void init_reserved_pageblock(struct page *page);
 
 #endif /* __LINUX_GFP_H */

diff --git a/mm/cma.c b/mm/cma.c
index 4a978e09547a..6208a3e1cd9d 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "cma.h"
@@ -116,8 +117,13 @@ static void __init cma_activate_area(struct cma *cma)
 	}
 
 	for (pfn = base_pfn; pfn < base_pfn + cma->count;
-	     pfn += pageblock_nr_pages)
-		init_cma_reserved_pageblock(pfn_to_page(pfn));
+	     pfn += pageblock_nr_pages) {
+		struct page *page = pfn_to_page(pfn);
+
+		set_pageblock_migratetype(page, MIGRATE_CMA);
+		init_reserved_pageblock(page);
+		page_zone(page)->cma_pages += pageblock_nr_pages;
+	}
 
 	spin_lock_init(&cma->lock);
@@ -133,8 +139,9 @@ static void __init cma_activate_area(struct cma *cma)
 out_error:
 	/* Expose all pages to the buddy, they are useless for CMA. */
 	if (!cma->reserve_pages_on_error) {
-		for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
-			free_reserved_page(pfn_to_page(pfn));
+		for (pfn = base_pfn; pfn < base_pfn + cma->count;
+		     pfn += pageblock_nr_pages)
+			init_reserved_pageblock(pfn_to_page(pfn));
 	}
 	totalcma_pages -= cma->count;
 	cma->count = 0;

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ad38a81203e5..1682d8815efa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2302,9 +2302,8 @@ void __init page_alloc_init_late(void)
 		set_zone_contiguous(zone);
 }
 
-#ifdef CONFIG_CMA
-/* Free whole pageblock and set its migration type to MIGRATE_CMA. */
-void __init init_cma_reserved_pageblock(struct page *page)
+/* Free whole pageblock */
+void __init init_reserved_pageblock(struct page *page)
 {
 	unsigned i = pageblock_nr_pages;
 	struct page *p = page;
@@ -2314,14 +2313,11 @@ void __init init_cma_reserved_pageblock(struct page *page)
 		set_page_count(p, 0);
 	} while (++p, --i);
 
-	set_pageblock_migratetype(page, MIGRATE_CMA);
 	set_page_refcounted(page);
 	__free_pages(page, pageblock_order);
 	adjust_managed_page_count(page, pageblock_nr_pages);
-	page_zone(page)->cma_pages += pageblock_nr_pages;
 }
-#endif
 
 /*
  * The order of subdivision here is critical for the IO subsystem.
From patchwork Tue Sep 13 19:54:59 2022
X-Patchwork-Submitter: Doug Berger
X-Patchwork-Id: 605429
From: Doug Berger
To: Andrew Morton
Subject: [PATCH 12/21] memblock: introduce MEMBLOCK_MOVABLE flag
Date: Tue, 13 Sep 2022 12:54:59 -0700
Message-Id: <20220913195508.3511038-13-opendmb@gmail.com>
In-Reply-To: <20220913195508.3511038-1-opendmb@gmail.com>

The MEMBLOCK_MOVABLE flag is introduced to designate a memblock region
as supporting only movable allocations by the page allocator.

Signed-off-by: Doug Berger
---
 include/linux/memblock.h |  8 ++++++++
 mm/memblock.c            | 24 ++++++++++++++++++++++++
 2 files changed, 32 insertions(+)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 50ad19662a32..8eb3ca32dfa7 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -47,6 +47,7 @@ enum memblock_flags {
 	MEMBLOCK_MIRROR = 0x2,		/* mirrored region */
 	MEMBLOCK_NOMAP = 0x4,		/* don't add to kernel direct mapping */
 	MEMBLOCK_DRIVER_MANAGED = 0x8,	/* always detected via a driver */
+	MEMBLOCK_MOVABLE = 0x10,	/* designated movable block */
 };
 
 /**
@@ -125,6 +126,8 @@ int memblock_clear_hotplug(phys_addr_t base, phys_addr_t size);
 int memblock_mark_mirror(phys_addr_t base, phys_addr_t size);
 int memblock_mark_nomap(phys_addr_t base, phys_addr_t size);
 int memblock_clear_nomap(phys_addr_t base, phys_addr_t size);
+int memblock_mark_movable(phys_addr_t base, phys_addr_t size);
+int memblock_clear_movable(phys_addr_t base, phys_addr_t size);
 
 void memblock_free_all(void);
 void memblock_free(void *ptr, size_t size);
@@ -265,6 +268,11 @@ static inline bool memblock_is_driver_managed(struct
memblock_region *m)
 {
 	return m->flags & MEMBLOCK_DRIVER_MANAGED;
 }
 
+static inline bool memblock_is_movable(struct memblock_region *m)
+{
+	return m->flags & MEMBLOCK_MOVABLE;
+}
+
 int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
 			    unsigned long *end_pfn);
 void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,

diff --git a/mm/memblock.c b/mm/memblock.c
index b5d3026979fc..5d6a210d98ec 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -979,6 +979,30 @@ int __init_memblock memblock_clear_nomap(phys_addr_t base, phys_addr_t size)
 	return memblock_setclr_flag(base, size, 0, MEMBLOCK_NOMAP);
 }
 
+/**
+ * memblock_mark_movable - Mark designated movable block with MEMBLOCK_MOVABLE.
+ * @base: the base phys addr of the region
+ * @size: the size of the region
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int __init_memblock memblock_mark_movable(phys_addr_t base, phys_addr_t size)
+{
+	return memblock_setclr_flag(base, size, 1, MEMBLOCK_MOVABLE);
+}
+
+/**
+ * memblock_clear_movable - Clear flag MEMBLOCK_MOVABLE for a specified region.
+ * @base: the base phys addr of the region
+ * @size: the size of the region
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int __init_memblock memblock_clear_movable(phys_addr_t base, phys_addr_t size)
+{
+	return memblock_setclr_flag(base, size, 0, MEMBLOCK_MOVABLE);
+}
+
 static bool should_skip_region(struct memblock_type *type,
 			       struct memblock_region *m,
 			       int nid, int flags)

From patchwork Tue Sep 13 19:55:02 2022
X-Patchwork-Submitter: Doug Berger
X-Patchwork-Id: 605428
From: Doug Berger
To: Andrew Morton
Subject: [PATCH 15/21] mm/page_alloc: allow base for movablecore
Date: Tue, 13 Sep 2022 12:55:02 -0700
Message-Id: <20220913195508.3511038-16-opendmb@gmail.com>
In-Reply-To: <20220913195508.3511038-1-opendmb@gmail.com>

A Designated Movable Block can be created by including the base address
of the block when specifying a movablecore range on the kernel command
line.

Signed-off-by: Doug Berger
---
 .../admin-guide/kernel-parameters.txt | 14 ++++++-
 mm/page_alloc.c                       | 38 ++++++++++++++++---
 2 files changed, 45 insertions(+), 7 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 426fa892d311..8141fac7c7cb 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3312,7 +3312,7 @@
 			reporting absolute coordinates, such as tablets
 
 	movablecore=	[KNL,X86,IA-64,PPC]
-			Format: nn[KMGTPE] | nn%
+			Format: nn[KMGTPE] | nn[KMGTPE]@ss[KMGTPE] | nn%
 			This parameter is the complement to kernelcore=, it
 			specifies the amount of memory used for migratable
 			allocations. If both kernelcore and movablecore is
@@ -3322,6 +3322,18 @@
 			that the amount of memory usable for all allocations
 			is not too small.
 
+			If @ss[KMGTPE] is included, memory within the region
+			from ss to ss+nn will be designated as a movable block
+			and included in ZONE_MOVABLE. Designated Movable Blocks
+			must be aligned to pageblock_order. Designated Movable
+			Blocks take priority over values of kernelcore= and are
+			considered part of any memory specified by more general
+			movablecore= values.
+			Multiple Designated Movable Blocks may be specified,
+			comma delimited.
+			Example:
+			movablecore=100M@2G,100M@3G,1G@1024G
+
 	movable_node	[KNL] Boot-time switch to make hotplugable memory
 			NUMA nodes to be movable. This means that the memory of
 			such nodes will be usable only for movable

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 69753cc51e19..e38dd1b32771 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8370,9 +8370,9 @@ void __init free_area_init(unsigned long *max_zone_pfn)
 }
 
 static int __init cmdline_parse_core(char *p, unsigned long *core,
-				     unsigned long *percent)
+				     unsigned long *percent, bool movable)
 {
-	unsigned long long coremem;
+	unsigned long long coremem, address;
 	char *endptr;
 
 	if (!p)
@@ -8387,6 +8387,17 @@ static int __init cmdline_parse_core(char *p, unsigned long *core,
 		*percent = coremem;
 	} else {
 		coremem = memparse(p, &p);
+		if (movable && *p == '@') {
+			address = memparse(++p, &p);
+			if (*p != '\0' ||
+			    !memblock_is_region_memory(address, coremem) ||
+			    memblock_is_region_reserved(address, coremem))
+				return -EINVAL;
+			memblock_reserve(address, coremem);
+			return dmb_reserve(address, coremem, NULL);
+		} else if (*p != '\0') {
+			return -EINVAL;
+		}
 
 		/* Paranoid check that UL is enough for the coremem value */
 		WARN_ON((coremem >> PAGE_SHIFT) > ULONG_MAX);
@@ -8409,17 +8420,32 @@ static int __init cmdline_parse_kernelcore(char *p)
 	}
 
 	return cmdline_parse_core(p, &required_kernelcore,
-				  &required_kernelcore_percent);
+				  &required_kernelcore_percent, false);
 }
 
 /*
 * movablecore=size sets the amount of memory for use for allocations that
- * can be reclaimed or migrated.
+ * can be reclaimed or migrated. movablecore=size@base defines a Designated
+ * Movable Block.
 */
 static int __init cmdline_parse_movablecore(char *p)
 {
-	return cmdline_parse_core(p, &required_movablecore,
-				  &required_movablecore_percent);
+	int ret = -EINVAL;
+
+	while (p) {
+		char *k = strchr(p, ',');
+
+		if (k)
+			*k++ = 0;
+
+		ret = cmdline_parse_core(p, &required_movablecore,
+					 &required_movablecore_percent, true);
+		if (ret)
+			break;
+		p = k;
+	}
+
+	return ret;
 }
 
 early_param("kernelcore", cmdline_parse_kernelcore);

From patchwork Tue Sep 13 19:55:04 2022
X-Patchwork-Submitter: Doug Berger
X-Patchwork-Id: 605427
From: Doug Berger
To: Andrew Morton
Subject: [PATCH 17/21] mm/dmb: introduce rmem designated-movable-block
Date: Tue, 13 Sep 2022 12:55:04 -0700
Message-Id: <20220913195508.3511038-18-opendmb@gmail.com>
In-Reply-To: <20220913195508.3511038-1-opendmb@gmail.com>

This commit allows Designated Movable Blocks to be created by including
reserved-memory child nodes in the device tree with the
"designated-movable-block" compatible string.

Signed-off-by: Doug Berger
---
 drivers/of/of_reserved_mem.c | 15 ++++++---
 mm/dmb.c                     | 64 ++++++++++++++++++++++++++++++++++++
 2 files changed, 74 insertions(+), 5 deletions(-)

diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
index 65f3b02a0e4e..0eb9e8898d7b 100644
--- a/drivers/of/of_reserved_mem.c
+++ b/drivers/of/of_reserved_mem.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 
 #include "of_private.h"
@@ -113,12 +114,16 @@ static int __init __reserved_mem_alloc_size(unsigned long node,
 	nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL;
 
-	/* Need adjust the alignment to satisfy the CMA requirement */
-	if (IS_ENABLED(CONFIG_CMA)
-	    && of_flat_dt_is_compatible(node, "shared-dma-pool")
-	    && of_get_flat_dt_prop(node, "reusable", NULL)
-	    && !nomap)
+	if (of_flat_dt_is_compatible(node, "designated-movable-block")) {
+		/* Need adjust the alignment to satisfy the DMB requirement */
+		align = max_t(phys_addr_t, align, DMB_MIN_ALIGNMENT_BYTES);
+	} else if (IS_ENABLED(CONFIG_CMA)
+	    && of_flat_dt_is_compatible(node, "shared-dma-pool")
+	    && of_get_flat_dt_prop(node, "reusable", NULL)
+	    && !nomap) {
+		/* Need adjust the alignment to satisfy the CMA requirement */
 		align = max_t(phys_addr_t, align, CMA_MIN_ALIGNMENT_BYTES);
+	}
 
 	prop = of_get_flat_dt_prop(node, "alloc-ranges", &len);
 	if (prop) {

diff --git a/mm/dmb.c b/mm/dmb.c
index 9d9fd31089d2..8132d18542a0 100644
--- a/mm/dmb.c
+++ b/mm/dmb.c
@@ -90,3 +90,67 @@ void __init dmb_init_region(struct memblock_region *region)
 		init_reserved_pageblock(page);
 	}
 }
+
+/*
+ * Support for reserved memory regions defined in device tree
+ */
+#ifdef CONFIG_OF_RESERVED_MEM
+#include
+#include
+#include
+
+#undef pr_fmt
+#define pr_fmt(fmt) fmt
+
+static int rmem_dmb_device_init(struct reserved_mem *rmem, struct device *dev)
+{
+	struct dmb *dmb;
+
+	dmb = (struct dmb *)rmem->priv;
+	if (dmb->owner)
+		return -EBUSY;
+
+	dmb->owner = dev;
+	return 0;
+}
+
+static void rmem_dmb_device_release(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	struct dmb *dmb;
+
+	dmb = (struct dmb *)rmem->priv;
+	if (dmb->owner == (void *)dev)
+		dmb->owner = NULL;
+}
+
+static const struct reserved_mem_ops rmem_dmb_ops = {
+	.device_init	= rmem_dmb_device_init,
+	.device_release	= rmem_dmb_device_release,
+};
+
+static int __init rmem_dmb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+	struct dmb *dmb;
+	int err;
+
+	if (!of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	err = dmb_reserve(rmem->base, rmem->size, &dmb);
+	if (err) {
+		pr_err("Reserved memory: unable to setup DMB region\n");
+		return err;
+	}
+
+	rmem->priv = dmb;
+	rmem->ops = &rmem_dmb_ops;
+	pr_info("Reserved memory: created DMB at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+
+	return 0;
+}
+RESERVEDMEM_OF_DECLARE(dmb, "designated-movable-block", rmem_dmb_setup);
+#endif

From patchwork Tue Sep 13 19:55:06 2022
7bit X-Patchwork-Submitter: Doug Berger X-Patchwork-Id: 605426 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2CDD1C54EE9 for ; Tue, 13 Sep 2022 20:00:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230130AbiIMUAd (ORCPT ); Tue, 13 Sep 2022 16:00:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47056 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229982AbiIMT7v (ORCPT ); Tue, 13 Sep 2022 15:59:51 -0400 Received: from mail-qk1-x731.google.com (mail-qk1-x731.google.com [IPv6:2607:f8b0:4864:20::731]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 36D9A7437A; Tue, 13 Sep 2022 12:59:05 -0700 (PDT) Received: by mail-qk1-x731.google.com with SMTP id u28so6371503qku.2; Tue, 13 Sep 2022 12:59:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date; bh=Iv6LAHuQtTXeYWyq0DEDKZbOD/1nb3ztqrePD2HtkIE=; b=YrAFBuuyJkwRgGb5yJxtoTNNKh/7y4BOHe54UWWbwGw/7zVyvD5YS3xh6wOizb8P5y WPvQ65cSNAdgAQQ5wDi83N9jAKpum/P9kqfXIHyP2JYI4r5D1linVF+pAeO7tMTGeNJd s10XvBpPDdlS4NQdOIfsc4Y86zu8zbnj/iRe9JIr+LJQn5lsr3jvpDq8YkHXv8CVoVQ9 l1Y7VxuX7aqVyF4T9IwYMUdPvuZE/+WFrgImMS1687EPItdw+VwpQNOwUoolbQ3Hf9aR Ox2qAlcyZj/+2EVV8HvGsz7p+nWwMiSQtDYItBS1H3jfaKi/gJ7kAUpfhR4jWWYZpfwT Vmow== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date; bh=Iv6LAHuQtTXeYWyq0DEDKZbOD/1nb3ztqrePD2HtkIE=; b=A5hoLNND822FCudX2ihAYymlfGK7yfBKgzASY9nWsW4RyOUx9pJrGxe1oWXoPLDGrn 
From: Doug Berger
To: Andrew Morton
Cc: Jonathan Corbet, Rob Herring, Krzysztof Kozlowski, Frank Rowand,
 Mike Kravetz, Muchun Song, Mike Rapoport, Christoph Hellwig,
 Marek Szyprowski, Robin Murphy, Borislav Petkov, "Paul E.
McKenney", Neeraj Upadhyay, Randy Dunlap, Damien Le Moal,
 Doug Berger, Florian Fainelli, David Hildenbrand, Zi Yan,
 Oscar Salvador, Hari Bathini, Kees Cook, KOSAKI Motohiro, Mel Gorman,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 devicetree@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux.dev
Subject: [PATCH 19/21] dt-bindings: reserved-memory: shared-dma-pool: support DMB
Date: Tue, 13 Sep 2022 12:55:06 -0700
Message-Id: <20220913195508.3511038-20-opendmb@gmail.com>
In-Reply-To: <20220913195508.3511038-1-opendmb@gmail.com>
References: <20220913195508.3511038-1-opendmb@gmail.com>
X-Mailing-List: devicetree@vger.kernel.org

The shared-dmb-pool compatible string creates a Designated Movable
Block to contain a shared pool of DMA buffers.

Signed-off-by: Doug Berger
---
 .../bindings/reserved-memory/shared-dma-pool.yaml | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/Documentation/devicetree/bindings/reserved-memory/shared-dma-pool.yaml b/Documentation/devicetree/bindings/reserved-memory/shared-dma-pool.yaml
index 618105f079be..85824fe05ac9 100644
--- a/Documentation/devicetree/bindings/reserved-memory/shared-dma-pool.yaml
+++ b/Documentation/devicetree/bindings/reserved-memory/shared-dma-pool.yaml
@@ -22,6 +22,14 @@ properties:
             operating system to instantiate the necessary pool management
             subsystem if necessary.
 
+      - const: shared-dmb-pool
+        description: >
+          This indicates a shared-dma-pool region that is located within
+          a Designated Movable Block. The operating system is free to
+          use unallocated memory for movable allocations in this region.
+          Devices need to be tolerant of allocation latency to use this
+          pool.
+
       - const: restricted-dma-pool
         description: >
           This indicates a region of memory meant to be used as a pool

From patchwork Tue Sep 13 19:55:08 2022
X-Patchwork-Submitter: Doug Berger
X-Patchwork-Id: 605425
From: Doug Berger
To: Andrew Morton
Cc: Jonathan Corbet, Rob Herring, Krzysztof Kozlowski, Frank Rowand,
 Mike Kravetz, Muchun Song, Mike Rapoport, Christoph Hellwig,
 Marek Szyprowski, Robin Murphy, Borislav Petkov, "Paul E.
McKenney", Neeraj Upadhyay, Randy Dunlap, Damien Le Moal,
 Doug Berger, Florian Fainelli, David Hildenbrand, Zi Yan,
 Oscar Salvador, Hari Bathini, Kees Cook, KOSAKI Motohiro, Mel Gorman,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 devicetree@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux.dev
Subject: [PATCH 21/21] mm/hugetlb: introduce hugetlb_dmb
Date: Tue, 13 Sep 2022 12:55:08 -0700
Message-Id: <20220913195508.3511038-22-opendmb@gmail.com>
In-Reply-To: <20220913195508.3511038-1-opendmb@gmail.com>
References: <20220913195508.3511038-1-opendmb@gmail.com>
X-Mailing-List: devicetree@vger.kernel.org

If specified on the kernel command line, the hugetlb_dmb parameter
modifies the behavior of the hugetlb_cma parameter to use the
Contiguous Memory Allocator within Designated Movable Blocks for
gigantic page allocation. This allows the kernel page allocator to
use the memory more aggressively than traditional CMA memory pools
at the cost of potentially increased allocation latency.

Signed-off-by: Doug Berger
---
 Documentation/admin-guide/kernel-parameters.txt |  3 +++
 mm/hugetlb.c                                    | 16 +++++++++++++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 8141fac7c7cb..b29d1fa253d6 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1732,6 +1732,9 @@
 			hugepages using the CMA allocator. If enabled, the
 			boot-time allocation of gigantic hugepages is skipped.
 
+	hugetlb_dmb	[HW,CMA] Causes hugetlb_cma to use Designated Movable
+			Blocks for any CMA areas it reserves.
+
 	hugetlb_free_vmemmap=
 			[KNL] Requires CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
 			enabled.
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2f354423f50f..d3fb8b1f443f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -54,6 +54,7 @@ struct hstate hstates[HUGE_MAX_HSTATE];
 #ifdef CONFIG_CMA
 static struct cma *hugetlb_cma[MAX_NUMNODES];
 static unsigned long hugetlb_cma_size_in_node[MAX_NUMNODES] __initdata;
+static bool hugetlb_dmb __initdata;
 static bool hugetlb_cma_page(struct page *page, unsigned int order)
 {
 	return cma_pages_valid(hugetlb_cma[page_to_nid(page)], page,
@@ -7321,6 +7322,14 @@ static int __init cmdline_parse_hugetlb_cma(char *p)
 early_param("hugetlb_cma", cmdline_parse_hugetlb_cma);
 
+static int __init cmdline_parse_hugetlb_dmb(char *p)
+{
+	hugetlb_dmb = true;
+	return 0;
+}
+
+early_param("hugetlb_dmb", cmdline_parse_hugetlb_dmb);
+
 void __init hugetlb_cma_reserve(int order)
 {
 	unsigned long size, reserved, per_node;
@@ -7396,10 +7405,11 @@ void __init hugetlb_cma_reserve(int order)
 		 * may be returned to CMA allocator in the case of
 		 * huge page demotion.
 		 */
-		res = cma_declare_contiguous_nid(0, size, 0,
+		res = __cma_declare_contiguous_nid(0, size, 0,
 					PAGE_SIZE << HUGETLB_PAGE_ORDER,
-					0, false, name,
-					&hugetlb_cma[nid], nid);
+					0, false, name,
+					&hugetlb_cma[nid], nid,
+					hugetlb_dmb);
 		if (res) {
 			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
 				res, nid);
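[Archive note: for readers unfamiliar with the bindings touched by this series, the fragment below sketches what a consumer device tree might look like. It is illustrative only and not taken from the patches; the node name, label, address, and size are hypothetical. The constraints it follows come from the series itself: rmem_dmb_setup() rejects regions without "reusable" or with "no-map", and the shared-dmb-pool compatible is the one added to shared-dma-pool.yaml in patch 19.]

```
/ {
	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* Hypothetical DMB-backed shared DMA pool: "reusable" is
		 * required and "no-map" must be absent for DMB setup to
		 * succeed; base and size are example values only.
		 */
		dmb0: dmb@20000000 {
			compatible = "shared-dmb-pool";
			reg = <0x20000000 0x4000000>; /* 64 MiB */
			reusable;
		};
	};
};
```

With patch 21 applied, gigantic pages could be steered into such blocks at boot by combining the existing hugetlb_cma= parameter with the new flag, e.g. booting with "hugetlb_cma=4G hugetlb_dmb" (sizes again illustrative).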