From patchwork Sun Jun 2 12:39:03 2024
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 801101
From: "Kirill A. Shutemov"
To: bp@alien8.de
Cc: adrian.hunter@intel.com, ardb@kernel.org, ashish.kalra@amd.com,
    bhe@redhat.com, dave.hansen@linux.intel.com, elena.reshetova@intel.com,
    haiyangz@microsoft.com, hpa@zytor.com, jun.nakajima@intel.com,
    kai.huang@intel.com, kexec@lists.infradead.org,
    kirill.shutemov@linux.intel.com, kys@microsoft.com,
    linux-acpi@vger.kernel.org, linux-coco@lists.linux.dev,
    linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
    ltao@redhat.com, mingo@redhat.com, nik.borisov@suse.com,
    peterz@infradead.org, rafael@kernel.org, rick.p.edgecombe@intel.com,
    sathyanarayanan.kuppuswamy@linux.intel.com, seanjc@google.com,
    tglx@linutronix.de, thomas.lendacky@amd.com, x86@kernel.org
Subject: [PATCHv11.1 10/19] x86/mm: Add callbacks to prepare encrypted memory for kexec
Date: Sun, 2 Jun 2024 15:39:03 +0300
Message-ID: <20240602123903.2121883-1-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240529104257.GIZlcGsTkJHVBblkrY@fat_crate.local>
References: <20240529104257.GIZlcGsTkJHVBblkrY@fat_crate.local>
X-Mailing-List: linux-acpi@vger.kernel.org

AMD SEV and Intel TDX guests allocate shared buffers for performing I/O.
This is done by allocating pages normally from the buddy allocator and
then converting them to shared using set_memory_decrypted().

On kexec, the second kernel is unaware of which memory has been converted
in this manner. It only sees E820_TYPE_RAM. Accessing shared memory as
private is fatal.

Therefore, the memory state must be reset to its original state before
starting the new kernel with kexec.

The process of converting shared memory back to private occurs in two
steps:

- enc_kexec_begin() stops new conversions.

- enc_kexec_finish() unshares all existing shared memory, reverting it
  back to private.

Signed-off-by: Kirill A. Shutemov
Reviewed-by: Nikolay Borisov
Reviewed-by: Kai Huang
Tested-by: Tao Liu
---
 arch/x86/include/asm/x86_init.h | 9 +++++++++
 arch/x86/kernel/crash.c         | 12 ++++++++++++
 arch/x86/kernel/reboot.c        | 12 ++++++++++++
 arch/x86/kernel/x86_init.c      | 4 ++++
 4 files changed, 37 insertions(+)
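As a rough illustration of the contract these hooks are expected to
implement, a guest backend could be structured along the lines of the
sketch below. This is not code from this patch: the conversion_allowed
flag and the two helpers are hypothetical placeholders, and a real
backend (TDX, SEV) has to track its shared ranges itself and convert
them back using its platform-specific mechanism.

	static bool conversion_allowed = true;		/* hypothetical flag */

	static void example_enc_kexec_begin(bool crash)
	{
		/* Refuse to start new private<->shared conversions from here on. */
		WRITE_ONCE(conversion_allowed, false);

		/*
		 * Outside of crash, scheduling still works and interrupts are
		 * enabled, so in-flight conversions can be waited for. In the
		 * crash path there is nothing safe to wait on.
		 */
		if (!crash)
			wait_for_pending_conversions();	/* hypothetical helper */
	}

	static void example_enc_kexec_finish(void)
	{
		/*
		 * Called with a single CPU and interrupts disabled: walk the
		 * backend's own record of shared ranges and convert them back
		 * to private.
		 */
		unshare_all_tracked_ranges();		/* hypothetical helper */
	}
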
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 28ac3cb9b987..6cade48811cc 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -149,12 +149,21 @@ struct x86_init_acpi {
  * @enc_status_change_finish	Notify HV after the encryption status of a range is changed
  * @enc_tlb_flush_required	Returns true if a TLB flush is needed before changing page encryption status
  * @enc_cache_flush_required	Returns true if a cache flush is needed before changing page encryption status
+ * @enc_kexec_begin		Begin the two-step process of converting shared memory back
+ *				to private. It stops new conversions from being started
+ *				and waits for in-flight conversions to finish, if possible.
+ * @enc_kexec_finish		Finish the two-step process of converting shared memory
+ *				back to private. All memory is private after the call.
+ *				It is called with all CPUs but one shut down and
+ *				interrupts disabled.
  */
 struct x86_guest {
 	int (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);
 	int (*enc_status_change_finish)(unsigned long vaddr, int npages, bool enc);
 	bool (*enc_tlb_flush_required)(bool enc);
 	bool (*enc_cache_flush_required)(void);
+	void (*enc_kexec_begin)(bool crash);
+	void (*enc_kexec_finish)(void);
 };
 
 /**
diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index f06501445cd9..74f6305eb9ec 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -128,6 +128,18 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
 #ifdef CONFIG_HPET_TIMER
 	hpet_disable();
 #endif
+
+	/*
+	 * Non-crash kexec calls enc_kexec_begin() while scheduling is still
+	 * active. This allows the callback to wait until all in-flight
+	 * shared<->private conversions are complete. In a crash scenario,
+	 * enc_kexec_begin() gets called after all but one CPU has been shut
+	 * down and interrupts have been disabled. This only allows the
+	 * callback to detect a race with the conversion and report it.
+	 */
+	x86_platform.guest.enc_kexec_begin(true);
+	x86_platform.guest.enc_kexec_finish();
+
 	crash_save_cpu(regs, safe_smp_processor_id());
 }
 
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index f3130f762784..097313147ad3 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include <linux/kexec.h>
 #include
 #include
 #include
@@ -716,6 +717,14 @@ static void native_machine_emergency_restart(void)
 
 void native_machine_shutdown(void)
 {
+	/*
+	 * Call enc_kexec_begin() while all CPUs are still active and
+	 * interrupts are enabled. This will allow all in-flight memory
+	 * conversions to finish cleanly.
+	 */
+	if (kexec_in_progress)
+		x86_platform.guest.enc_kexec_begin(false);
+
 	/* Stop the cpus and apics */
 #ifdef CONFIG_X86_IO_APIC
 	/*
@@ -752,6 +761,9 @@ void native_machine_shutdown(void)
 #ifdef CONFIG_X86_64
 	x86_platform.iommu_shutdown();
 #endif
+
+	if (kexec_in_progress)
+		x86_platform.guest.enc_kexec_finish();
 }
 
 static void __machine_emergency_restart(int emergency)
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index a7143bb7dd93..8a79fb505303 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -138,6 +138,8 @@ static int enc_status_change_prepare_noop(unsigned long vaddr, int npages, bool
 static int enc_status_change_finish_noop(unsigned long vaddr, int npages, bool enc) { return 0; }
 static bool enc_tlb_flush_required_noop(bool enc) { return false; }
 static bool enc_cache_flush_required_noop(void) { return false; }
+static void enc_kexec_begin_noop(bool crash) {}
+static void enc_kexec_finish_noop(void) {}
 static bool is_private_mmio_noop(u64 addr) {return false; }
 
 struct x86_platform_ops x86_platform __ro_after_init = {
@@ -161,6 +163,8 @@ struct x86_platform_ops x86_platform __ro_after_init = {
 	.enc_status_change_finish	= enc_status_change_finish_noop,
 	.enc_tlb_flush_required		= enc_tlb_flush_required_noop,
 	.enc_cache_flush_required	= enc_cache_flush_required_noop,
+	.enc_kexec_begin		= enc_kexec_begin_noop,
+	.enc_kexec_finish		= enc_kexec_finish_noop,
 	},
 };
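
A guest backend opts in simply by overriding the no-op defaults from its
early init path. The function names below are placeholders used for
illustration only, not something added by this patch:

	void __init example_guest_early_init(void)
	{
		/* Replace the no-op stubs installed by x86_init.c. */
		x86_platform.guest.enc_kexec_begin  = example_enc_kexec_begin;
		x86_platform.guest.enc_kexec_finish = example_enc_kexec_finish;
	}

Because the defaults are no-op stubs, the call sites in crash.c and
reboot.c can invoke the hooks unconditionally, so bare metal and non-CoCo
guests need no NULL checks on the kexec path.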