From patchwork Tue Jun 14 12:02:26 2022
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Borislav Petkov, Andy Lutomirski, Sean Christopherson, Andrew Morton,
	Joerg Roedel, Ard Biesheuvel
Cc: Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes, Vlastimil Babka,
	Tom Lendacky, Thomas Gleixner, Peter Zijlstra, Paolo Bonzini,
	Ingo Molnar, Varad Gautam, Dario Faggioli, Dave Hansen, Mike Rapoport,
	David Hildenbrand, marcelo.cerri@canonical.com,
	tim.gardner@canonical.com, khalid.elmously@canonical.com,
	philip.cox@canonical.com, x86@kernel.org, linux-mm@kvack.org,
	linux-coco@lists.linux.dev, linux-efi@vger.kernel.org,
	linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCHv7 09/14] x86/mm: Provide helpers for unaccepted memory
Date: Tue, 14 Jun 2022 15:02:26 +0300
Message-Id: <20220614120231.48165-10-kirill.shutemov@linux.intel.com>
In-Reply-To: <20220614120231.48165-1-kirill.shutemov@linux.intel.com>
References: <20220614120231.48165-1-kirill.shutemov@linux.intel.com>

Core-mm requires a few helpers to support unaccepted memory:

 - accept_memory() checks the range of addresses against the bitmap and
   accepts the memory if needed.

 - range_contains_unaccepted_memory() checks if anything within the
   range requires acceptance.
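For illustration, a minimal sketch of how a core-mm caller might use the
two helpers; accept_page() is a hypothetical wrapper invented for this
example, only range_contains_unaccepted_memory() and accept_memory()
come from this patch:

/* Hypothetical caller sketch -- accept_page() is illustrative only */
static void accept_page(struct page *page, unsigned int order)
{
	phys_addr_t start = page_to_phys(page);
	phys_addr_t end = start + (PAGE_SIZE << order);

	/* Only take the slow acceptance path when the range needs it */
	if (range_contains_unaccepted_memory(start, end))
		accept_memory(start, end);
}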
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/include/asm/page.h              |  3 ++
 arch/x86/include/asm/unaccepted_memory.h |  4 ++
 arch/x86/mm/Makefile                     |  2 +
 arch/x86/mm/unaccepted_memory.c          | 61 ++++++++++++++++++++++++
 4 files changed, 70 insertions(+)
 create mode 100644 arch/x86/mm/unaccepted_memory.c

diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 9cc82f305f4b..df4ec3a988dc 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -19,6 +19,9 @@ struct page;
 
 #include <linux/range.h>
+
+#include <asm/unaccepted_memory.h>
+
 extern struct range pfn_mapped[];
 extern int nr_pfn_mapped;
 
diff --git a/arch/x86/include/asm/unaccepted_memory.h b/arch/x86/include/asm/unaccepted_memory.h
index 41fbfc798100..89fc91c61560 100644
--- a/arch/x86/include/asm/unaccepted_memory.h
+++ b/arch/x86/include/asm/unaccepted_memory.h
@@ -7,6 +7,10 @@ struct boot_params;
 
 void process_unaccepted_memory(struct boot_params *params, u64 start, u64 num);
 
+#ifdef CONFIG_UNACCEPTED_MEMORY
+
 void accept_memory(phys_addr_t start, phys_addr_t end);
+bool range_contains_unaccepted_memory(phys_addr_t start, phys_addr_t end);
 
 #endif
+#endif
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index f8220fd2c169..96fc2766b6d7 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -59,3 +59,5 @@ obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_amd.o
 
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_identity.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_boot.o
+
+obj-$(CONFIG_UNACCEPTED_MEMORY)	+= unaccepted_memory.o
diff --git a/arch/x86/mm/unaccepted_memory.c b/arch/x86/mm/unaccepted_memory.c
new file mode 100644
index 000000000000..1df918b21469
--- /dev/null
+++ b/arch/x86/mm/unaccepted_memory.c
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/pfn.h>
+#include <linux/spinlock.h>
+
+#include <asm/io.h>
+#include <asm/setup.h>
+#include <asm/unaccepted_memory.h>
+
+/* Protects unaccepted memory bitmap */
+static DEFINE_SPINLOCK(unaccepted_memory_lock);
+
+void accept_memory(phys_addr_t start, phys_addr_t end)
+{
+	unsigned long range_start, range_end;
+	unsigned long *bitmap;
+	unsigned long flags;
+
+	if (!boot_params.unaccepted_memory)
+		return;
+
+	bitmap = __va(boot_params.unaccepted_memory);
+	range_start = start / PMD_SIZE;
+
+	spin_lock_irqsave(&unaccepted_memory_lock, flags);
+	for_each_set_bitrange_from(range_start, range_end, bitmap,
+				   DIV_ROUND_UP(end, PMD_SIZE)) {
+		unsigned long len = range_end - range_start;
+
+		/* Platform-specific memory-acceptance call goes here */
+		panic("Cannot accept memory: unknown platform\n");
+		bitmap_clear(bitmap, range_start, len);
+	}
+	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
+}
+
+bool range_contains_unaccepted_memory(phys_addr_t start, phys_addr_t end)
+{
+	unsigned long *bitmap;
+	unsigned long flags;
+	bool ret = false;
+
+	if (!boot_params.unaccepted_memory)
+		return 0;
+
+	bitmap = __va(boot_params.unaccepted_memory);
+
+	spin_lock_irqsave(&unaccepted_memory_lock, flags);
+	while (start < end) {
+		if (test_bit(start / PMD_SIZE, bitmap)) {
+			ret = true;
+			break;
+		}
+
+		start += PMD_SIZE;
+	}
+	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
+
+	return ret;
+}
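
The panic() above is only a placeholder until a confidential-computing
platform wires in its own acceptance method. As a rough, hypothetical
sketch (tdx_accept_memory() is assumed here and is not provided by this
patch; the real hook arrives later in the series), the loop body could
dispatch like this:

		/* Hypothetical dispatch to a platform acceptance hook (assumed API) */
		if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
			tdx_accept_memory((phys_addr_t)range_start * PMD_SIZE,
					  (phys_addr_t)range_end * PMD_SIZE);
		else
			panic("Cannot accept memory: unknown platform\n");
		bitmap_clear(bitmap, range_start, len);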