From patchwork Wed Jul 17 21:31:20 2024
X-Patchwork-Submitter: Steve Wahl
X-Patchwork-Id: 813258
From: Steve Wahl
To: Steve Wahl, Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, Pavin Joseph, Eric Hagberg
Cc: Simon Horman, Eric Biederman, Dave Young, Sarah Brofeldt, Russ Anderson,
    Dimitri Sivanich, Hou Wenlong, Andrew Morton, Baoquan He, Yuntao Wang,
    Bjorn Helgaas, Joerg Roedel, Michael Roth, Tao Liu, kexec@lists.infradead.org,
    "Kalra, Ashish", Ard Biesheuvel, linux-efi@vger.kernel.org
Subject: [PATCH v3 1/2] x86/kexec: Add EFI config table identity mapping for kexec kernel
Date: Wed, 17 Jul 2024 16:31:20 -0500
Message-Id: <20240717213121.3064030-2-steve.wahl@hpe.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20240717213121.3064030-1-steve.wahl@hpe.com>
References: <20240717213121.3064030-1-steve.wahl@hpe.com>

From: Tao Liu

A kexec kernel boot failure is sometimes observed on AMD CPUs due to an
unmapped EFI config table array. This can be seen when "nogbpages" is on the
kernel command line, and has been observed as a full BIOS reboot rather than
a successful kexec. This was also the cause of the reported regressions
attributed to commit 7143c5f4cf20 ("x86/mm/ident_map: Use gbpages only where
full GB page should be mapped."), which was subsequently reverted.

To avoid this page fault, explicitly include the EFI config table array in
the kexec identity map.

Further explanation:

The following two commits caused the EFI config table array to be accessed
when enabling SEV at kernel startup:

  commit ec1c66af3a30 ("x86/compressed/64: Detect/setup SEV/SME features earlier during boot")
  commit c01fce9cef84 ("x86/compressed: Add SEV-SNP feature detection/setup")

This happens in the code that examines whether SEV should be enabled or not,
so it can affect even systems that are not SEV-capable. It may result in a
page fault if the EFI config table array's address is unmapped. Since the
page fault occurs before the new kernel establishes its own identity map and
page fault routines, it is unrecoverable and kexec fails.

Most often, this problem is not seen because the EFI config table array
happens to be included in the map anyway, by the luck of being placed at a
memory address close enough to other memory areas that *are* included in the
map created by kexec. Both the "nogbpages" command line option and the
"use gbpages only where full GB page should be mapped" patch greatly reduce
the chance of the array being included in the map by luck, which is why the
problem appears.

Signed-off-by: Tao Liu
Signed-off-by: Steve Wahl
Tested-by: Pavin Joseph
Tested-by: Sarah Brofeldt
Tested-by: Eric Hagberg
Reviewed-by: Ard Biesheuvel
---
Version 3: Do not rename map_efi_systab to map_efi_tables, and do not add
'config table' to the comments, per Ard Biesheuvel's request.
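For reference, the extent the patch below adds to the identity map is simply
nr_tables entries of the per-bitness config table type, starting at the
system table's config_table pointer. The standalone sketch that follows only
restates that size arithmetic; the struct names and field layout are
simplified, illustrative stand-ins, not the kernel's real efi_system_table_64_t
and efi_config_table_64_t definitions from include/linux/efi.h:

  #include <stdint.h>
  #include <stdio.h>

  /*
   * Simplified stand-ins for the 64-bit EFI system table and config table
   * entry; only the fields needed for the size calculation are shown.
   */
  struct demo_config_table_64 {
          uint8_t  guid[16];      /* vendor GUID */
          uint64_t table;         /* physical address of the vendor table */
  };

  struct demo_system_table_64 {
          /* header and firmware fields omitted */
          uint64_t nr_tables;     /* number of config table entries */
          uint64_t config_table;  /* physical address of the entry array */
  };

  /*
   * The identity map must cover the whole entry array:
   * [config_table, config_table + nr_tables * sizeof(entry)).
   */
  static void config_table_range(const struct demo_system_table_64 *systab,
                                 uint64_t *start, uint64_t *end)
  {
          *start = systab->config_table;
          *end = *start + sizeof(struct demo_config_table_64) * systab->nr_tables;
  }

  int main(void)
  {
          /* Hypothetical firmware values, just to exercise the math. */
          struct demo_system_table_64 systab = {
                  .nr_tables    = 11,
                  .config_table = 0x7f000000,
          };
          uint64_t start, end;

          config_table_range(&systab, &start, &end);
          printf("identity-map %#llx-%#llx (%llu bytes)\n",
                 (unsigned long long)start, (unsigned long long)end,
                 (unsigned long long)(end - start));
          return 0;
  }

The patch itself does the same computation on the memremap()ed system table
and then feeds the resulting range to kernel_ident_mapping_init().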
 arch/x86/kernel/machine_kexec_64.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index cc0f7f70b17b..9c9ac606893e 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include

 #ifdef CONFIG_ACPI
 /*
@@ -87,6 +88,8 @@ map_efi_systab(struct x86_mapping_info *info, pgd_t *level4p)
 {
 #ifdef CONFIG_EFI
 	unsigned long mstart, mend;
+	void *kaddr;
+	int ret;

 	if (!efi_enabled(EFI_BOOT))
 		return 0;
@@ -102,6 +105,30 @@ map_efi_systab(struct x86_mapping_info *info, pgd_t *level4p)
 	if (!mstart)
 		return 0;

+	ret = kernel_ident_mapping_init(info, level4p, mstart, mend);
+	if (ret)
+		return ret;
+
+	kaddr = memremap(mstart, mend - mstart, MEMREMAP_WB);
+	if (!kaddr) {
+		pr_err("Could not map UEFI system table\n");
+		return -ENOMEM;
+	}
+
+	mstart = efi_config_table;
+
+	if (efi_enabled(EFI_64BIT)) {
+		efi_system_table_64_t *stbl = (efi_system_table_64_t *)kaddr;
+
+		mend = mstart + sizeof(efi_config_table_64_t) * stbl->nr_tables;
+	} else {
+		efi_system_table_32_t *stbl = (efi_system_table_32_t *)kaddr;
+
+		mend = mstart + sizeof(efi_config_table_32_t) * stbl->nr_tables;
+	}
+
+	memunmap(kaddr);
+
 	return kernel_ident_mapping_init(info, level4p, mstart, mend);
 #endif
 	return 0;

From patchwork Wed Jul 17 21:31:21 2024
X-Patchwork-Submitter: Steve Wahl
X-Patchwork-Id: 813259
From: Steve Wahl
To: Steve Wahl, Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, Pavin Joseph, Eric Hagberg
Cc: Simon Horman, Eric Biederman, Dave Young, Sarah Brofeldt, Russ Anderson,
    Dimitri Sivanich, Hou Wenlong, Andrew Morton, Baoquan He, Yuntao Wang,
    Bjorn Helgaas, Joerg Roedel, Michael Roth, Tao Liu, kexec@lists.infradead.org,
    "Kalra, Ashish", Ard Biesheuvel, linux-efi@vger.kernel.org
Subject: [PATCH v3 2/2] x86/mm/ident_map: Use gbpages only where full GB page should be mapped.
Date: Wed, 17 Jul 2024 16:31:21 -0500
Message-Id: <20240717213121.3064030-3-steve.wahl@hpe.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20240717213121.3064030-1-steve.wahl@hpe.com>
References: <20240717213121.3064030-1-steve.wahl@hpe.com>

When ident_pud_init() uses only gbpages to create identity maps, large
ranges of addresses not actually requested can be included in the resulting
table; a 4K request will map a full GB. This can include a lot of extra
address space past that requested, including areas marked reserved by the
BIOS. That allows processor speculation into reserved regions, which on UV
systems can cause system halts.

Only use gbpages when map creation requests include the full GB page of
space. Fall back to using smaller 2M pages when only portions of a GB page
are included in the request.

No attempt is made to coalesce mapping requests.
If a request requires a map entry at the 2M (pmd) level, subsequent mapping
requests within the same 1G region will also be at the pmd level, even if
adjacent or overlapping such requests could have been combined to map a full
gbpage. Existing usage starts with larger regions and then adds smaller
regions, so this should not have any significant consequence.

Signed-off-by: Steve Wahl
Tested-by: Pavin Joseph
Tested-by: Sarah Brofeldt
Tested-by: Eric Hagberg
---
 arch/x86/mm/ident_map.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
index 968d7005f4a7..a204a332c71f 100644
--- a/arch/x86/mm/ident_map.c
+++ b/arch/x86/mm/ident_map.c
@@ -26,18 +26,31 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
 	for (; addr < end; addr = next) {
 		pud_t *pud = pud_page + pud_index(addr);
 		pmd_t *pmd;
+		bool use_gbpage;

 		next = (addr & PUD_MASK) + PUD_SIZE;
 		if (next > end)
 			next = end;

-		if (info->direct_gbpages) {
-			pud_t pudval;
+		/* if this is already a gbpage, this portion is already mapped */
+		if (pud_leaf(*pud))
+			continue;
+
+		/* Is using a gbpage allowed? */
+		use_gbpage = info->direct_gbpages;

-			if (pud_present(*pud))
-				continue;
+		/* Don't use gbpage if it maps more than the requested region. */
+		/* at the beginning: */
+		use_gbpage &= ((addr & ~PUD_MASK) == 0);
+		/* ... or at the end: */
+		use_gbpage &= ((next & ~PUD_MASK) == 0);
+
+		/* Never overwrite existing mappings */
+		use_gbpage &= !pud_present(*pud);
+
+		if (use_gbpage) {
+			pud_t pudval;

-			addr &= PUD_MASK;
 			pudval = __pud((addr - info->offset) | info->page_flag);
 			set_pud(pud, pudval);
 			continue;
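To make the new criterion concrete, here is a small, self-contained sketch of
the decision the loop now makes for each PUD entry. It is not kernel code:
PUD_SIZE/PUD_MASK are redefined locally, and can_use_gbpage() with its
gbpages_allowed/already_mapped parameters is an illustrative stand-in for
info->direct_gbpages and pud_present(*pud):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define PUD_SHIFT 30
  #define PUD_SIZE  (1ULL << PUD_SHIFT)
  #define PUD_MASK  (~(PUD_SIZE - 1))

  /*
   * A 1 GiB page is used only when the chunk [addr, next) covers the whole,
   * aligned gigabyte and nothing is mapped there yet; otherwise the caller
   * falls back to 2 MiB (pmd-level) mappings.
   */
  static bool can_use_gbpage(uint64_t addr, uint64_t next,
                             bool gbpages_allowed, bool already_mapped)
  {
          bool use_gbpage = gbpages_allowed;

          use_gbpage &= ((addr & ~PUD_MASK) == 0); /* starts on a GB boundary */
          use_gbpage &= ((next & ~PUD_MASK) == 0); /* ends on a GB boundary */
          use_gbpage &= !already_mapped;           /* never overwrite */

          return use_gbpage;
  }

  int main(void)
  {
          /* Full GB requested: a gbpage is used (prints 1). */
          printf("%d\n", can_use_gbpage(0x40000000ULL, 0x80000000ULL, true, false));
          /* Only 4 KiB requested: falls back to 2 MiB pages (prints 0). */
          printf("%d\n", can_use_gbpage(0x40000000ULL, 0x40001000ULL, true, false));
          return 0;
  }

This is why a 4K request no longer maps a full GB: unless both ends of the
chunk land on gigabyte boundaries, the pmd-level path is taken instead.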