From patchwork Tue Nov 8 18:21:58 2022
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, keescook@chromium.org, Will Deacon,
    Catalin Marinas, Marc Zyngier, Mark Rutland
Subject: [PATCH v5 1/7] arm64: head: Move all finalise_el2 calls to after __enable_mmu
Date: Tue, 8 Nov 2022 19:21:58 +0100
Message-Id: <20221108182204.2447664-2-ardb@kernel.org>
In-Reply-To: <20221108182204.2447664-1-ardb@kernel.org>

In the primary boot path, finalise_el2() is called much later than on
the secondary boot or resume-from-suspend paths, and this does not
appear to be intentional. Since we aim to do as little as possible
before enabling the MMU and caches, align secondary and resume with
primary boot, and defer the call to after the MMU is turned on. This
also removes the need to clean finalise_el2() to the PoC once we enable
support for booting with the MMU on.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/head.S  | 5 ++++-
 arch/arm64/kernel/sleep.S | 5 ++++-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 2196aad7b55bcef0..c59e0d95b44d0901 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -584,7 +584,6 @@ SYM_FUNC_START_LOCAL(secondary_startup)
 	 * Common entry point for secondary CPUs.
 	 */
 	mov	x20, x0				// preserve boot mode
-	bl	finalise_el2
 	bl	__cpu_secondary_check52bitva
 #if VA_BITS > 48
 	ldr_l	x0, vabits_actual
@@ -600,6 +599,10 @@ SYM_FUNC_END(secondary_startup)
 SYM_FUNC_START_LOCAL(__secondary_switched)
 	mov	x0, x20
 	bl	set_cpu_boot_mode_flag
+
+	mov	x0, x20
+	bl	finalise_el2
+
 	str_l	xzr, __early_cpu_boot_status, x3
 	adr_l	x5, vectors
 	msr	vbar_el1, x5
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index 97c9de57725dfddb..7b7c56e048346e97 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -100,7 +100,7 @@ SYM_FUNC_END(__cpu_suspend_enter)
 	.pushsection ".idmap.text", "awx"
 SYM_CODE_START(cpu_resume)
 	bl	init_kernel_el
-	bl	finalise_el2
+	mov	x19, x0			// preserve boot mode
 #if VA_BITS > 48
 	ldr_l	x0, vabits_actual
 #endif
@@ -116,6 +116,9 @@ SYM_CODE_END(cpu_resume)
 	.popsection

 SYM_FUNC_START(_cpu_resume)
+	mov	x0, x19
+	bl	finalise_el2
+
 	mrs	x1, mpidr_el1
 	adr_l	x8, mpidr_hash		// x8 = struct mpidr_hash virt address
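As a rough illustration of the reordering, here is a compilable C model
of the secondary boot flow after this patch. The stub functions and
main() are hypothetical stand-ins for the assembly routines in head.S;
only the call order is meaningful, and the BOOT_CPU_MODE_* values are
those from asm/virt.h.

    /* Illustrative C model of the reordered secondary boot path. */
    #include <stdio.h>

    #define BOOT_CPU_MODE_EL1 0xe11                 /* from asm/virt.h */
    #define BOOT_CPU_MODE_EL2 0xe12

    static int init_kernel_el(void) { return BOOT_CPU_MODE_EL2; }  /* stub */
    static void enable_mmu_and_caches(void) { }     /* stub for __enable_mmu */
    static void set_cpu_boot_mode_flag(int mode) { printf("mode %#x\n", mode); }
    static void finalise_el2(int mode) { (void)mode; }              /* stub */

    int main(void)
    {
            int mode = init_kernel_el();    /* MMU off: keep this path minimal */

            enable_mmu_and_caches();        /* __enable_mmu + __secondary_switched */
            set_cpu_boot_mode_flag(mode);
            finalise_el2(mode);             /* now runs with MMU and caches on */
            return 0;
    }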
From patchwork Tue Nov 8 18:21:59 2022
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, keescook@chromium.org, Will Deacon,
    Catalin Marinas, Marc Zyngier, Mark Rutland
Subject: [PATCH v5 2/7] arm64: kernel: move identity map out of .text mapping
Date: Tue, 8 Nov 2022 19:21:59 +0100
Message-Id: <20221108182204.2447664-3-ardb@kernel.org>
In-Reply-To: <20221108182204.2447664-1-ardb@kernel.org>

Reorganize the ID map slightly so that only code that is executed with
the MMU off or via the 1:1 mapping remains. This allows us to move the
identity map out of the .text segment, as it will no longer need
executable permissions via the kernel mapping.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/head.S        | 28 +++++++++++---------
 arch/arm64/kernel/vmlinux.lds.S |  2 +-
 arch/arm64/mm/proc.S            |  2 --
 3 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index c59e0d95b44d0901..272877c5b4fa1203 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -540,19 +540,6 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
 	eret
 SYM_FUNC_END(init_kernel_el)

-/*
- * Sets the __boot_cpu_mode flag depending on the CPU boot mode passed
- * in w0. See arch/arm64/include/asm/virt.h for more info.
- */
-SYM_FUNC_START_LOCAL(set_cpu_boot_mode_flag)
-	adr_l	x1, __boot_cpu_mode
-	cmp	w0, #BOOT_CPU_MODE_EL2
-	b.ne	1f
-	add	x1, x1, #4
-1:	str	w0, [x1]			// Save CPU boot mode
-	ret
-SYM_FUNC_END(set_cpu_boot_mode_flag)
-
 /*
  * This provides a "holding pen" for platforms to hold all secondary
  * cores are held until we're ready for them to initialise.
@@ -596,6 +583,7 @@ SYM_FUNC_START_LOCAL(secondary_startup)
 	br	x8
 SYM_FUNC_END(secondary_startup)

+	.text
 SYM_FUNC_START_LOCAL(__secondary_switched)
 	mov	x0, x20
 	bl	set_cpu_boot_mode_flag
@@ -628,6 +616,19 @@ SYM_FUNC_START_LOCAL(__secondary_too_slow)
 	b	__secondary_too_slow
 SYM_FUNC_END(__secondary_too_slow)

+/*
+ * Sets the __boot_cpu_mode flag depending on the CPU boot mode passed
+ * in w0. See arch/arm64/include/asm/virt.h for more info.
+ */
+SYM_FUNC_START_LOCAL(set_cpu_boot_mode_flag)
+	adr_l	x1, __boot_cpu_mode
+	cmp	w0, #BOOT_CPU_MODE_EL2
+	b.ne	1f
+	add	x1, x1, #4
+1:	str	w0, [x1]			// Save CPU boot mode
+	ret
+SYM_FUNC_END(set_cpu_boot_mode_flag)
+
 /*
  * The booting CPU updates the failed status @__early_cpu_boot_status,
  * with MMU turned off.
@@ -659,6 +660,7 @@ SYM_FUNC_END(__secondary_too_slow)
  * Checks if the selected granule size is supported by the CPU.
  * If it isn't, park the CPU
  */
+	.section ".idmap.text","awx"
 SYM_FUNC_START(__enable_mmu)
 	mrs	x3, ID_AA64MMFR0_EL1
 	ubfx	x3, x3, #ID_AA64MMFR0_EL1_TGRAN_SHIFT, 4
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 45131e354e27f1f8..c7727a1740ce11f5 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -168,7 +168,6 @@ SECTIONS
 			LOCK_TEXT
 			KPROBES_TEXT
 			HYPERVISOR_TEXT
-			IDMAP_TEXT
 			*(.gnu.warning)
 		. = ALIGN(16);
 		*(.got)			/* Global offset table */
@@ -195,6 +194,7 @@ SECTIONS
 		TRAMP_TEXT
 		HIBERNATE_TEXT
 		KEXEC_TEXT
+		IDMAP_TEXT
 		. = ALIGN(PAGE_SIZE);
 	}
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index b9ecbbae1e1abca1..d7ca6f23fb0d1334 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -110,7 +110,6 @@ SYM_FUNC_END(cpu_do_suspend)
  *
  * x0: Address of context pointer
  */
-	.pushsection ".idmap.text", "awx"
 SYM_FUNC_START(cpu_do_resume)
 	ldp	x2, x3, [x0]
 	ldp	x4, x5, [x0, #16]
@@ -166,7 +165,6 @@ alternative_else_nop_endif
 	isb
 	ret
 SYM_FUNC_END(cpu_do_resume)
-	.popsection
 #endif

 	.pushsection ".idmap.text", "awx"
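The moved routine is simple enough to model in C. A sketch (illustrative
only; __boot_cpu_mode is really a pair of 32-bit words defined in
assembly, with slot 0 recording non-EL2 boots and slot 1 recording EL2
boots):

    #include <stdint.h>

    #define BOOT_CPU_MODE_EL1 0xe11         /* from asm/virt.h */
    #define BOOT_CPU_MODE_EL2 0xe12

    static uint32_t __boot_cpu_mode[2];     /* slot 0: non-EL2, slot 1: EL2 */

    static void set_cpu_boot_mode_flag(uint32_t mode)
    {
            uint32_t *p = __boot_cpu_mode;  /* adr_l x1, __boot_cpu_mode */
            if (mode == BOOT_CPU_MODE_EL2)  /* cmp w0, #BOOT_CPU_MODE_EL2 */
                    p++;                    /* add x1, x1, #4 */
            *p = mode;                      /* str w0, [x1] */
    }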
From patchwork Tue Nov 8 18:22:00 2022
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, keescook@chromium.org, Will Deacon,
    Catalin Marinas, Marc Zyngier, Mark Rutland
Subject: [PATCH v5 3/7] arm64: head: record the MMU state at primary entry
Date: Tue, 8 Nov 2022 19:22:00 +0100
Message-Id: <20221108182204.2447664-4-ardb@kernel.org>
In-Reply-To: <20221108182204.2447664-1-ardb@kernel.org>

Prepare for being able to deal with primary entry with the MMU and
caches enabled, by recording whether or not we entered with the MMU on
in register x19 and in a global variable. (Note that setting this
variable to '1' does not require cache invalidation, nor is it required
for storing the bootargs in that case, so omit the cache maintenance).

Since boot with the MMU enabled is not permitted by the bare metal boot
protocol, ensure that a diagnostic is emitted and a taint bit set if
the MMU was found to be enabled on a non-EFI boot. We will make an
exception for EFI boot later, which has strict requirements for the
mapping of system memory, permitting us to relax the boot protocol and
hand over from the EFI stub to the core kernel with MMU and caches left
enabled.

While at it, add 'pre_disable_mmu_workaround' macro invocations to
init_kernel_el, as its manipulation of SCTLR_ELx may amount to disabling
of the MMU after subsequent patches.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/head.S  | 21 ++++++++++++++++++++
 arch/arm64/kernel/setup.c |  9 +++++++--
 2 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 272877c5b4fa1203..3e654e43fa115947 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -77,6 +77,7 @@
  * primary lowlevel boot path:
  *
  * Register   Scope                                    Purpose
+ * x19        primary_entry() .. start_kernel()        whether we entered with the MMU on
  * x20        primary_entry() .. __primary_switch()    CPU boot mode
  * x21        primary_entry() .. start_kernel()        FDT pointer passed at boot in x0
  * x22        create_idmap() .. start_kernel()         ID map VA of the DT blob
@@ -86,6 +87,7 @@
  * x28        create_idmap()                           callee preserved temp register
  */
 SYM_CODE_START(primary_entry)
+	bl	record_mmu_state
 	bl	preserve_boot_args
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
@@ -109,6 +111,19 @@ SYM_CODE_START(primary_entry)
 	b	__primary_switch
 SYM_CODE_END(primary_entry)

+SYM_CODE_START_LOCAL(record_mmu_state)
+	mrs	x19, CurrentEL
+	cmp	x19, #CurrentEL_EL2
+	mrs	x19, sctlr_el1
+	b.ne	0f
+	mrs	x19, sctlr_el2
+0:	tst	x19, #SCTLR_ELx_C		// Z := (C == 0)
+	and	x19, x19, #SCTLR_ELx_M		// isolate M bit
+	ccmp	x19, xzr, #4, ne		// Z |= (M == 0)
+	cset	x19, ne				// set x19 if !Z
+	ret
+SYM_CODE_END(record_mmu_state)
+
 /*
  * Preserve the arguments passed by the bootloader in x0 .. x3
  */
@@ -119,11 +134,14 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 	stp	x21, x1, [x0]			// x0 .. x3 at kernel entry
 	stp	x2, x3, [x0, #16]

+	cbnz	x19, 0f				// skip cache invalidation if MMU is on
 	dmb	sy				// needed before dc ivac with
 						// MMU off

 	add	x1, x0, #0x20			// 4 x 8 bytes
 	b	dcache_inval_poc		// tail call
+0:	str_l	x19, mmu_enabled_at_boot, x0
+	ret
 SYM_CODE_END(preserve_boot_args)

 SYM_FUNC_START_LOCAL(clear_page_tables)
@@ -494,6 +512,7 @@ SYM_FUNC_START(init_kernel_el)

 SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
 	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
+	pre_disable_mmu_workaround
 	msr	sctlr_el1, x0
 	isb
 	mov_q	x0, INIT_PSTATE_EL1
@@ -526,11 +545,13 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
 	cbz	x0, 1f

 	/* Set a sane SCTLR_EL1, the VHE way */
+	pre_disable_mmu_workaround
 	msr_s	SYS_SCTLR_EL12, x1
 	mov	x2, #BOOT_CPU_FLAG_E2H
 	b	2f

 1:
+	pre_disable_mmu_workaround
 	msr	sctlr_el1, x1
 	mov	x2, xzr
 2:
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index fea3223704b6339a..11cf21afafa9f852 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -56,6 +56,7 @@ static int num_standard_resources;
 static struct resource *standard_resources;

 phys_addr_t __fdt_pointer __initdata;
+u64 mmu_enabled_at_boot __initdata;

 /*
  * Standard memory resources
@@ -328,8 +329,12 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
 	xen_early_init();
 	efi_init();

-	if (!efi_enabled(EFI_BOOT) && ((u64)_text % MIN_KIMG_ALIGN) != 0)
-	     pr_warn(FW_BUG "Kernel image misaligned at boot, please fix your bootloader!");
+	if (!efi_enabled(EFI_BOOT)) {
+		if ((u64)_text % MIN_KIMG_ALIGN)
+			pr_warn(FW_BUG "Kernel image misaligned at boot, please fix your bootloader!");
+		WARN_TAINT(mmu_enabled_at_boot, TAINT_FIRMWARE_WORKAROUND,
+			   FW_BUG "Booted with MMU enabled!");
+	}

 	arm64_memblock_init();
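The tst/ccmp/cset sequence in record_mmu_state computes a single boolean
from SCTLR_ELx: both the MMU (M, bit 0) and the data cache (C, bit 2)
must be enabled for x19 to end up set. A C model of the predicate
(illustrative only):

    #include <stdbool.h>
    #include <stdint.h>

    #define SCTLR_ELx_M (UINT64_C(1) << 0)  /* MMU enable */
    #define SCTLR_ELx_C (UINT64_C(1) << 2)  /* data/unified cache enable */

    static bool record_mmu_state_model(uint64_t sctlr)
    {
            /* tst/and/ccmp/cset: x19 := (M set) && (C set) */
            return (sctlr & SCTLR_ELx_M) && (sctlr & SCTLR_ELx_C);
    }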
From patchwork Tue Nov 8 18:22:01 2022
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, keescook@chromium.org, Will Deacon,
    Catalin Marinas, Marc Zyngier, Mark Rutland
Subject: [PATCH v5 4/7] arm64: head: avoid cache invalidation when entering with the MMU on
Date: Tue, 8 Nov 2022 19:22:01 +0100
Message-Id: <20221108182204.2447664-5-ardb@kernel.org>
In-Reply-To: <20221108182204.2447664-1-ardb@kernel.org>

If we enter with the MMU on, there is no need for explicit cache
invalidation for stores to memory, as they will be coherent with the
caches. Let's take advantage of this, and create the ID map with the
MMU still enabled if that is how we entered, and avoid any cache
invalidation calls in that case.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/head.S | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 3e654e43fa115947..a7c84cde67c5c652 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -89,9 +89,9 @@ SYM_CODE_START(primary_entry)
 	bl	record_mmu_state
 	bl	preserve_boot_args
+	bl	create_idmap
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
-	bl	create_idmap

 	/*
 	 * The following calls CPU setup code, see arch/arm64/mm/proc.S for
@@ -378,12 +378,13 @@ SYM_FUNC_START_LOCAL(create_idmap)
 	 * accesses (MMU disabled), invalidate those tables again to
 	 * remove any speculatively loaded cache lines.
 	 */
+	cbnz	x19, 0f				// skip cache invalidation if MMU is on
 	dmb	sy

 	adrp	x0, init_idmap_pg_dir
 	adrp	x1, init_idmap_pg_end
 	bl	dcache_inval_poc
-	ret	x28
+0:	ret	x28
 SYM_FUNC_END(create_idmap)

 SYM_FUNC_START_LOCAL(create_kernel_mapping)
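A sketch of the resulting pattern in create_idmap, in C (illustrative;
the helper name dcache_inval_poc stands in for the assembly routine of
the same name): if the page tables were written with the MMU and caches
on, the stores are already coherent and the invalidation can be skipped.

    #include <stdbool.h>
    #include <stdint.h>

    /* stand-in for the assembly routine of the same name */
    static void dcache_inval_poc(uintptr_t start, uintptr_t end)
    {
            (void)start; (void)end;
    }

    /* Model of the tail of create_idmap() after this patch. */
    static void finish_idmap(uintptr_t pg_dir, uintptr_t pg_end, bool mmu_on)
    {
            if (mmu_on)             /* cbnz x19, 0f */
                    return;         /* stores were coherent: nothing to do */
            /* dmb sy, then discard stale clean lines covering the tables */
            dcache_inval_poc(pg_dir, pg_end);
    }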
From patchwork Tue Nov 8 18:22:02 2022
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, keescook@chromium.org, Will Deacon,
    Catalin Marinas, Marc Zyngier, Mark Rutland
Subject: [PATCH v5 5/7] arm64: head: Clean the ID map and the HYP text to the PoC if needed
Date: Tue, 8 Nov 2022 19:22:02 +0100
Message-Id: <20221108182204.2447664-6-ardb@kernel.org>
In-Reply-To: <20221108182204.2447664-1-ardb@kernel.org>

If we enter with the MMU and caches enabled, the bootloader may not
have performed any cache maintenance to the PoC. So clean the ID mapped
page to the PoC, to ensure that instruction and data accesses with the
MMU off see the correct data. For similar reasons, clean all the HYP
text to the PoC as well when entering at EL2 with the MMU and caches
enabled.

Note that this means primary_entry() itself needs to be moved into the
ID map as well, as we will return from init_kernel_el() with the MMU
and caches off.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/head.S  | 31 +++++++++++++++++---
 arch/arm64/kernel/sleep.S |  1 +
 2 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index a7c84cde67c5c652..825f1d0549661030 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -70,7 +70,7 @@
 	__EFI_PE_HEADER

-	__INIT
+	.section ".idmap.text","awx"

 /*
  * The following callee saved general purpose registers are used on the
@@ -90,6 +90,17 @@ SYM_CODE_START(primary_entry)
 	bl	record_mmu_state
 	bl	preserve_boot_args
 	bl	create_idmap
+
+	/*
+	 * If we entered with the MMU and caches on, clean the ID mapped part
+	 * of the primary boot code to the PoC so we can safely execute it
+	 * with the MMU off.
+	 */
+	cbz	x19, 0f
+	adrp	x0, __idmap_text_start
+	adr_l	x1, __idmap_text_end
+	bl	dcache_clean_poc
+0:	mov	x0, x19
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
@@ -111,6 +122,7 @@ SYM_CODE_START(primary_entry)
 	b	__primary_switch
 SYM_CODE_END(primary_entry)

+	__INIT
 SYM_CODE_START_LOCAL(record_mmu_state)
 	mrs	x19, CurrentEL
 	cmp	x19, #CurrentEL_EL2
@@ -505,10 +517,12 @@
  * Returns either BOOT_CPU_MODE_EL1 or BOOT_CPU_MODE_EL2 in x0 if
  * booted in EL1 or EL2 respectively, with the top 32 bits containing
  * potential context flags. These flags are *not* stored in __boot_cpu_mode.
+ *
+ * x0: whether we are being called from the primary boot path with the MMU on
  */
 SYM_FUNC_START(init_kernel_el)
-	mrs	x0, CurrentEL
-	cmp	x0, #CurrentEL_EL2
+	mrs	x1, CurrentEL
+	cmp	x1, #CurrentEL_EL2
 	b.eq	init_el2

 SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
@@ -523,6 +537,14 @@
 	eret

 SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
+	msr	elr_el2, lr
+
+	// clean all HYP code to the PoC if we booted at EL2 with the MMU on
+	cbz	x0, 0f
+	adrp	x0, __hyp_idmap_text_start
+	adr_l	x1, __hyp_text_end
+	bl	dcache_clean_poc
+0:
 	mov_q	x0, HCR_HOST_NVHE_FLAGS
 	msr	hcr_el2, x0
 	isb
@@ -556,7 +578,6 @@
 	msr	sctlr_el1, x1
 	mov	x2, xzr
 2:
-	msr	elr_el2, lr
 	mov	w0, #BOOT_CPU_MODE_EL2
 	orr	x0, x0, x2
 	eret
@@ -567,6 +588,7 @@
  * cores are held until we're ready for them to initialise.
  */
 SYM_FUNC_START(secondary_holding_pen)
+	mov	x0, xzr
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mrs	x2, mpidr_el1
 	mov_q	x1, MPIDR_HWID_BITMASK
@@ -584,6 +606,7 @@
  * be used where CPUs are brought online dynamically by the kernel.
  */
 SYM_FUNC_START(secondary_entry)
+	mov	x0, xzr
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	b	secondary_startup
 SYM_FUNC_END(secondary_entry)
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index 7b7c56e048346e97..2ae7cff1953aaf87 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -99,6 +99,7 @@ SYM_FUNC_END(__cpu_suspend_enter)

 	.pushsection ".idmap.text", "awx"
 SYM_CODE_START(cpu_resume)
+	mov	x0, xzr
 	bl	init_kernel_el
 	mov	x19, x0			// preserve boot mode
 #if VA_BITS > 48
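In C terms, the new init_el2 prologue amounts to the following
(illustrative sketch; the array stands in for the region
[__hyp_idmap_text_start, __hyp_text_end), and secondaries and
cpu_resume pass x0 == 0 so they skip the clean):

    #include <stdbool.h>

    static char hyp_text[1];        /* stand-in for the HYP text region */
    static void dcache_clean_poc(char *s, char *e) { (void)s; (void)e; }  /* stub */

    /* Only a primary boot that entered with the MMU on needs the clean. */
    static void init_el2_prologue(bool primary_with_mmu_on)
    {
            if (primary_with_mmu_on)        /* cbz x0, 0f */
                    dcache_clean_poc(hyp_text, hyp_text + sizeof(hyp_text));
    }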
From patchwork Tue Nov 8 18:22:03 2022
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, keescook@chromium.org, Will Deacon,
    Catalin Marinas, Marc Zyngier, Mark Rutland
Subject: [PATCH v5 6/7] arm64: lds: reduce effective minimum image alignment to 64k
Date: Tue, 8 Nov 2022 19:22:03 +0100
Message-Id: <20221108182204.2447664-7-ardb@kernel.org>
In-Reply-To: <20221108182204.2447664-1-ardb@kernel.org>

Our segment alignment is 64k for all configurations, and coincidentally,
this is the largest alignment supported by the PE/COFF executable format
used by EFI. This means that generally, there is no need to move the
image around in memory after it has been loaded by the firmware, which
can be advantageous as it also permits us to rely on the memory
attributes set by the firmware (R-X for [_text, __inittext_end] and RW-
for [__initdata_begin, _end]).

However, the minimum alignment of the image is actually 128k on 64k
pages configurations with CONFIG_VMAP_STACK=y, due to the existence of
a single 128k aligned object in the image, which is the stack of the
init task.

Let's work around this by adding some padding before the init stack
allocation, so we can round down the stack pointer to a suitably aligned
value if the image is not aligned to 128k in memory.

Note that this does not affect the boot protocol, which still requires
2 MiB alignment for bare metal boot, but is only part of the internal
contract between the EFI stub and the kernel proper.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/efi.h    |  9 +--------
 arch/arm64/kernel/head.S        |  3 +++
 arch/arm64/kernel/vmlinux.lds.S | 11 ++++++++++-
 include/linux/efi.h             |  6 +-----
 4 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 108b115dbf5b7436..7ed7a0e621a5b0b6 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -54,13 +54,6 @@ efi_status_t __efi_rt_asm_wrapper(void *, const char *, ...);

 /* arch specific definitions used by the stub code */

-/*
- * In some configurations (e.g. VMAP_STACK && 64K pages), stacks built into the
- * kernel need greater alignment than we require the segments to be padded to.
- */
-#define EFI_KIMG_ALIGN	\
-	(SEGMENT_ALIGN > THREAD_ALIGN ? SEGMENT_ALIGN : THREAD_ALIGN)
-
 /*
  * On arm64, we have to ensure that the initrd ends up in the linear region,
  * which is a 1 GB aligned region of size '1UL << (VA_BITS_MIN - 1)' that is
@@ -88,7 +81,7 @@ static inline unsigned long efi_get_kimg_min_align(void)
 	 * 2M alignment if KASLR was explicitly disabled, even if it was not
 	 * going to be activated to begin with.
 	 */
-	return efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;
+	return efi_nokaslr ? MIN_KIMG_ALIGN : SEGMENT_ALIGN;
 }

 #define EFI_ALLOC_ALIGN		SZ_64K
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 825f1d0549661030..8d7c6155da59e215 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -429,6 +429,9 @@ SYM_FUNC_END(create_kernel_mapping)
 	msr	sp_el0, \tsk

 	ldr	\tmp1, [\tsk, #TSK_STACK]
+#if THREAD_ALIGN > SEGMENT_ALIGN
+	bic	\tmp1, \tmp1, #THREAD_ALIGN - 1
+#endif
 	add	sp, \tmp1, #THREAD_SIZE
 	sub	sp, sp, #PT_REGS_SIZE
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index c7727a1740ce11f5..5002d869fa7f1767 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -274,7 +274,16 @@ SECTIONS
 	_data = .;
 	_sdata = .;
-	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)
+#if THREAD_ALIGN > SEGMENT_ALIGN
+	/*
+	 * Add some padding for the init stack so we can fix up any potential
+	 * misalignment at runtime. In practice, this can only occur on 64k
+	 * pages configurations with CONFIG_VMAP_STACK=y.
+	 */
+	. += THREAD_ALIGN - SEGMENT_ALIGN;
+	ASSERT(. == init_stack, "init_stack not at start of RW_DATA as expected")
+#endif
+	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, SEGMENT_ALIGN)

 	/*
 	 * Data written with the MMU off but read with the MMU on requires
diff --git a/include/linux/efi.h b/include/linux/efi.h
index 16b7318957b0709f..19eda0bb4617a4cf 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -421,11 +421,7 @@ void efi_native_runtime_setup(void);
 /*
  * This GUID may be installed onto the kernel image's handle as a NULL protocol
  * to signal to the stub that the placement of the image should be respected,
- * and moving the image in physical memory is undesirable. To ensure
- * compatibility with 64k pages kernels with virtually mapped stacks, and to
- * avoid defeating physical randomization, this protocol should only be
- * installed if the image was placed at a randomized 128k aligned address in
- * memory.
+ * and moving the image in physical memory is undesirable.
 */
#define LINUX_EFI_LOADED_IMAGE_FIXED_GUID	EFI_GUID(0xf5a37b6d, 0x3344, 0x42a5, 0xb6, 0xbb, 0x97, 0x86, 0x48, 0xc1, 0x89, 0x0a)
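To see why a 64k pad suffices, consider the fix-up in C (illustrative
sketch; the constants are assumptions for the 64k pages +
CONFIG_VMAP_STACK=y case). Rounding the stack base down by at most
THREAD_ALIGN - SEGMENT_ALIGN bytes always lands inside the padding the
linker script reserves just before init_stack:

    #include <stdint.h>

    #define SEGMENT_ALIGN 0x10000UL         /* 64k: guaranteed image alignment */
    #define THREAD_SIZE   0x10000UL         /* 64k stack on 64k pages (assumed) */
    #define THREAD_ALIGN  (2 * THREAD_SIZE) /* 128k with CONFIG_VMAP_STACK=y */

    /* Model of the init_cpu_task fix-up added to head.S */
    static uintptr_t initial_sp(uintptr_t init_stack_addr)
    {
            /* bic \tmp1, \tmp1, #THREAD_ALIGN - 1 */
            uintptr_t base = init_stack_addr & ~(THREAD_ALIGN - 1);

            /* add sp, \tmp1, #THREAD_SIZE; the pad guarantees that
             * base >= init_stack_addr - (THREAD_ALIGN - SEGMENT_ALIGN) */
            return base + THREAD_SIZE;
    }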
From patchwork Tue Nov 8 18:22:04 2022
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, keescook@chromium.org, Will Deacon,
    Catalin Marinas, Marc Zyngier, Mark Rutland
Subject: [PATCH v5 7/7] efi: arm64: enter with MMU and caches enabled
Date: Tue, 8 Nov 2022 19:22:04 +0100
Message-Id: <20221108182204.2447664-8-ardb@kernel.org>
In-Reply-To: <20221108182204.2447664-1-ardb@kernel.org>

Instead of cleaning the entire loaded kernel image to the PoC and
disabling the MMU and caches before branching to the kernel's bare
metal entry point, we can leave the MMU and caches enabled, and rely on
EFI's cacheable 1:1 mapping of all of system RAM (which is mandated by
the spec) to populate the initial page tables.

This removes the need for managing coherency in software, which is
tedious and error prone.

Note that we still need to clean the executable region of the image to
the PoU if this is required for I/D coherency, but only if we actually
decided to move the image in memory, as otherwise, this will have been
taken care of by the loader.

This change affects both the builtin EFI stub as well as the zboot
decompressor, which now carries the entire EFI stub along with the
decompression code and the compressed image.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/image-vars.h             |  5 +-
 arch/arm64/mm/cache.S                      |  5 +-
 drivers/firmware/efi/libstub/Makefile      |  4 +-
 drivers/firmware/efi/libstub/arm64-entry.S | 67 --------------------
 drivers/firmware/efi/libstub/arm64-stub.c  | 26 +++++---
 drivers/firmware/efi/libstub/arm64.c       | 41 ++++++++++--
 6 files changed, 61 insertions(+), 87 deletions(-)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index f31130ba02331060..40ebb882d2d8c97b 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -10,7 +10,7 @@
 #error This file should only be included in vmlinux.lds.S
 #endif

-PROVIDE(__efistub_primary_entry_offset = primary_entry - _text);
+PROVIDE(__efistub_primary_entry = primary_entry);

 /*
  * The EFI stub has its own symbol namespace prefixed by __efistub_, to
@@ -21,10 +21,11 @@
 * linked at. The routines below are all implemented in assembler in a
 * position independent manner
 */
-PROVIDE(__efistub_dcache_clean_poc = __pi_dcache_clean_poc);
+PROVIDE(__efistub_caches_clean_inval_pou = __pi_caches_clean_inval_pou);

 PROVIDE(__efistub__text = _text);
 PROVIDE(__efistub__end = _end);
+PROVIDE(__efistub___inittext_end = __inittext_end);
 PROVIDE(__efistub__edata = _edata);
 PROVIDE(__efistub_screen_info = screen_info);
 PROVIDE(__efistub__ctype = _ctype);
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 081058d4e4366edb..8c3b3ee9b1d725c8 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -52,10 +52,11 @@ alternative_else_nop_endif
 *	- start   - virtual start address of region
 *	- end     - virtual end address of region
 */
-SYM_FUNC_START(caches_clean_inval_pou)
+SYM_FUNC_START(__pi_caches_clean_inval_pou)
 	caches_clean_inval_pou_macro
 	ret
-SYM_FUNC_END(caches_clean_inval_pou)
+SYM_FUNC_END(__pi_caches_clean_inval_pou)
+SYM_FUNC_ALIAS(caches_clean_inval_pou, __pi_caches_clean_inval_pou)

 /*
 *	caches_clean_inval_user_pou(start,end)
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index 402dfb30ddc7a01e..f838ab98978f1038 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -86,7 +86,7 @@ lib-$(CONFIG_EFI_GENERIC_STUB)	+= efi-stub.o string.o intrinsics.o systable.o \
				   screen_info.o efi-stub-entry.o

 lib-$(CONFIG_ARM)		+= arm32-stub.o
-lib-$(CONFIG_ARM64)		+= arm64.o arm64-stub.o arm64-entry.o
+lib-$(CONFIG_ARM64)		+= arm64.o arm64-stub.o
 lib-$(CONFIG_X86)		+= x86-stub.o
 lib-$(CONFIG_RISCV)		+= riscv.o riscv-stub.o
 lib-$(CONFIG_LOONGARCH)	+= loongarch.o loongarch-stub.o
@@ -140,7 +140,7 @@ STUBCOPY_RELOC-$(CONFIG_ARM)	:= R_ARM_ABS
 #
 STUBCOPY_FLAGS-$(CONFIG_ARM64)	+= --prefix-alloc-sections=.init \
				   --prefix-symbols=__efistub_
-STUBCOPY_RELOC-$(CONFIG_ARM64)	:= R_AARCH64_ABS64
+STUBCOPY_RELOC-$(CONFIG_ARM64)	:= R_AARCH64_ABS

 # For RISC-V, we don't need anything special other than arm64. Keep all the
 # symbols in .init section and make sure that no absolute symbols references
diff --git a/drivers/firmware/efi/libstub/arm64-entry.S b/drivers/firmware/efi/libstub/arm64-entry.S
deleted file mode 100644
index b5c17e89a4fc0c21..0000000000000000
--- a/drivers/firmware/efi/libstub/arm64-entry.S
+++ /dev/null
@@ -1,67 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * EFI entry point.
- *
- * Copyright (C) 2013, 2014 Red Hat, Inc.
- * Author: Mark Salter
- */
-#include <linux/linkage.h>
-#include <asm/assembler.h>
-
-	/*
-	 * The entrypoint of a arm64 bare metal image is at offset #0 of the
-	 * image, so this is a reasonable default for primary_entry_offset.
-	 * Only when the EFI stub is integrated into the core kernel, it is not
-	 * guaranteed that the PE/COFF header has been copied to memory too, so
-	 * in this case, primary_entry_offset should be overridden by the
-	 * linker and point to primary_entry() directly.
-	 */
-	.weak	primary_entry_offset
-
-SYM_CODE_START(efi_enter_kernel)
-	/*
-	 * efi_pe_entry() will have copied the kernel image if necessary and we
-	 * end up here with device tree address in x1 and the kernel entry
-	 * point stored in x0. Save those values in registers which are
-	 * callee preserved.
-	 */
-	ldr	w2, =primary_entry_offset
-	add	x19, x0, x2		// relocated Image entrypoint
-
-	mov	x0, x1			// DTB address
-	mov	x1, xzr
-	mov	x2, xzr
-	mov	x3, xzr
-
-	/*
-	 * Clean the remainder of this routine to the PoC
-	 * so that we can safely disable the MMU and caches.
-	 */
-	adr	x4, 1f
-	dc	civac, x4
-	dsb	sy
-
-	/* Turn off Dcache and MMU */
-	mrs	x4, CurrentEL
-	cmp	x4, #CurrentEL_EL2
-	mrs	x4, sctlr_el1
-	b.ne	0f
-	mrs	x4, sctlr_el2
-0:	bic	x4, x4, #SCTLR_ELx_M
-	bic	x4, x4, #SCTLR_ELx_C
-	b.eq	1f
-	b	2f
-
-	.balign	32
-1:	pre_disable_mmu_workaround
-	msr	sctlr_el2, x4
-	isb
-	br	x19		// jump to kernel entrypoint
-
-2:	pre_disable_mmu_workaround
-	msr	sctlr_el1, x4
-	isb
-	br	x19		// jump to kernel entrypoint
-
-	.org	1b + 32
-SYM_CODE_END(efi_enter_kernel)
diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
index 7f0aab3a8ab302d6..00fb2eab6d0c74ef 100644
--- a/drivers/firmware/efi/libstub/arm64-stub.c
+++ b/drivers/firmware/efi/libstub/arm64-stub.c
@@ -58,7 +58,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
				 efi_handle_t image_handle)
 {
 	efi_status_t status;
-	unsigned long kernel_size, kernel_memsize = 0;
+	unsigned long kernel_size, kernel_codesize, kernel_memsize;
 	u32 phys_seed = 0;
 	u64 min_kimg_align = efi_get_kimg_min_align();
@@ -93,6 +93,7 @@
			    SEGMENT_ALIGN >> 10);

 	kernel_size = _edata - _text;
+	kernel_codesize = __inittext_end - _text;
 	kernel_memsize = kernel_size + (_end - _edata);
 	*reserve_size = kernel_memsize;
@@ -120,7 +121,7 @@
		 */
		*image_addr = (u64)_text;
		*reserve_size = 0;
-		goto clean_image_to_poc;
+		return EFI_SUCCESS;
	}

	status = efi_allocate_pages_aligned(*reserve_size, reserve_addr,
@@ -136,14 +137,21 @@
	*image_addr = *reserve_addr;
	memcpy((void *)*image_addr, _text, kernel_size);
+	caches_clean_inval_pou(*image_addr, *image_addr + kernel_codesize);

-clean_image_to_poc:
+	return EFI_SUCCESS;
+}
+
+asmlinkage void primary_entry(void);
+
+unsigned long primary_entry_offset(void)
+{
 	/*
-	 * Clean the copied Image to the PoC, and ensure it is not shadowed by
-	 * stale icache entries from before relocation.
+	 * When built as part of the kernel, the EFI stub cannot branch to the
+	 * kernel proper via the image header, as the PE/COFF header is
+	 * strictly not part of the in-memory presentation of the image, only
+	 * of the file representation. So instead, we need to jump to the
+	 * actual entrypoint in the .text region of the image.
 	 */
-	dcache_clean_poc(*image_addr, *image_addr + kernel_size);
-	asm("ic ialluis");
-
-	return EFI_SUCCESS;
+	return (char *)primary_entry - _text;
 }
diff --git a/drivers/firmware/efi/libstub/arm64.c b/drivers/firmware/efi/libstub/arm64.c
index d2e94972c5fad523..99f86ddc91cf10cf 100644
--- a/drivers/firmware/efi/libstub/arm64.c
+++ b/drivers/firmware/efi/libstub/arm64.c
@@ -41,6 +41,12 @@ efi_status_t check_platform_features(void)
 	return EFI_SUCCESS;
 }

+#ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE
+#define DCTYPE	"civac"
+#else
+#define DCTYPE	"cvau"
+#endif
+
 void efi_cache_sync_image(unsigned long image_base,
			  unsigned long alloc_size,
			  unsigned long code_size)
@@ -49,13 +55,38 @@
	u64 lsize = 4 << cpuid_feature_extract_unsigned_field(ctr,
						CTR_EL0_DminLine_SHIFT);

-	do {
-		asm("dc civac, %0" :: "r"(image_base));
-		image_base += lsize;
-		alloc_size -= lsize;
-	} while (alloc_size >= lsize);
+	/* only perform the cache maintenance if needed for I/D coherency */
+	if (!(ctr & BIT(CTR_EL0_IDC_SHIFT))) {
+		do {
+			asm("dc " DCTYPE ", %0" :: "r"(image_base));
+			image_base += lsize;
+			code_size -= lsize;
+		} while (code_size >= lsize);
+	}

	asm("ic ialluis");
	dsb(ish);
	isb();
 }
+
+unsigned long __weak primary_entry_offset(void)
+{
+	/*
+	 * By default, we can invoke the kernel via the branch instruction in
+	 * the image header, so offset #0. This will be overridden by the EFI
+	 * stub build that is linked into the core kernel, as in that case, the
+	 * image header may not have been loaded into memory, or may be mapped
+	 * with non-executable permissions.
+	 */
+	return 0;
+}
+
+void __noreturn efi_enter_kernel(unsigned long entrypoint,
+				 unsigned long fdt_addr,
+				 unsigned long fdt_size)
+{
+	void (* __noreturn enter_kernel)(u64, u64, u64, u64);
+
+	enter_kernel = (void *)entrypoint + primary_entry_offset();
+	enter_kernel(fdt_addr, 0, 0, 0);
+}
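For reference, a C model of the two CTR_EL0 fields the new
efi_cache_sync_image() logic keys off (illustrative only; field
positions per the ARM ARM: DminLine is bits [19:16], IDC is bit 28):

    #include <stdint.h>

    #define CTR_EL0_DminLine_SHIFT 16
    #define CTR_EL0_IDC_SHIFT      28

    /* smallest D-cache line size, in bytes (DminLine is log2 of words) */
    static uint64_t dcache_line_size(uint64_t ctr)
    {
            return 4u << ((ctr >> CTR_EL0_DminLine_SHIFT) & 0xf);
    }

    /* CTR_EL0.IDC == 1: no D-cache clean to the PoU is required for
     * instruction-to-data coherency, so the clean loop can be skipped */
    static int needs_dcache_clean(uint64_t ctr)
    {
            return !((ctr >> CTR_EL0_IDC_SHIFT) & 1);
    }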