From patchwork Mon Apr 11 09:47:55 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 559790
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon,
    Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 01/30] arm64: head: move kimage_vaddr variable into C file
Date: Mon, 11 Apr 2022 11:47:55 +0200
Message-Id: <20220411094824.4176877-2-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>
References: <20220411094824.4176877-1-ardb@kernel.org>
List-ID: linux-efi@vger.kernel.org

This variable definition does not need to be in head.S, so move it out.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S | 7 -------
 arch/arm64/mm/mmu.c      | 3 +++
 2 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 6a98f1a38c29..1cdecce552bb 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -469,13 +469,6 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 	ASM_BUG()
 SYM_FUNC_END(__primary_switched)
 
-	.pushsection ".rodata", "a"
-SYM_DATA_START(kimage_vaddr)
-	.quad	_text
-SYM_DATA_END(kimage_vaddr)
-EXPORT_SYMBOL(kimage_vaddr)
-	.popsection
-
 /*
  * end early head section, begin head code that is also used for
  * hotplug and needs to have the same protections as the text region
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 626ec32873c6..fde2b326419a 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -49,6 +49,9 @@ u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 u64 __section(".mmuoff.data.write") vabits_actual;
 EXPORT_SYMBOL(vabits_actual);
 
+u64 kimage_vaddr __ro_after_init = (u64)&_text;
+EXPORT_SYMBOL(kimage_vaddr);
+
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);

From patchwork Mon Apr 11 09:47:56 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 561273
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon,
    Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 02/30] arm64: mm: make vabits_actual a build time constant if possible
Date: Mon, 11 Apr 2022 11:47:56 +0200
Message-Id: <20220411094824.4176877-3-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>
References: <20220411094824.4176877-1-ardb@kernel.org>

Currently, we only support 52-bit virtual addressing on 64k pages
configurations, and in all other cases, vabits_actual is guaranteed to
equal VA_BITS (== VA_BITS_MIN). So get rid of the variable entirely in
that case.

While at it, move the assignment out of the asm entry code - it has no
need to be there.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/memory.h |  4 ++++
 arch/arm64/kernel/head.S        | 15 +--------------
 arch/arm64/mm/mmu.c             | 15 ++++++++++++++-
 3 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 0af70d9abede..c751cd9b94f8 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -174,7 +174,11 @@
 #include
 #include
 
+#if VA_BITS > 48
 extern u64 vabits_actual;
+#else
+#define vabits_actual	((u64)VA_BITS)
+#endif
 
 extern s64 memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 1cdecce552bb..dc07858eb673 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -293,19 +293,6 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	adrp	x0, idmap_pg_dir
 	adrp	x3, __idmap_text_start		// __pa(__idmap_text_start)
 
-#ifdef CONFIG_ARM64_VA_BITS_52
-	mrs_s	x6, SYS_ID_AA64MMFR2_EL1
-	and	x6, x6, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
-	mov	x5, #52
-	cbnz	x6, 1f
-#endif
-	mov	x5, #VA_BITS_MIN
-1:
-	adr_l	x6, vabits_actual
-	str	x5, [x6]
-	dmb	sy
-	dc	ivac, x6		// Invalidate potentially stale cache line
-
 	/*
 	 * VA_BITS may be too small to allow for an ID mapping to be created
 	 * that covers system RAM if that is located sufficiently high in the
@@ -713,7 +700,7 @@ SYM_FUNC_START(__enable_mmu)
 SYM_FUNC_END(__enable_mmu)
 
 SYM_FUNC_START(__cpu_secondary_check52bitva)
-#ifdef CONFIG_ARM64_VA_BITS_52
+#if VA_BITS > 48
 	ldr_l	x0, vabits_actual
 	cmp	x0, #52
 	b.ne	2f
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index fde2b326419a..2018e75974ca 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -46,8 +46,10 @@ u64 idmap_t0sz = TCR_T0SZ(VA_BITS_MIN);
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 
-u64 __section(".mmuoff.data.write") vabits_actual;
+#if VA_BITS > 48
+u64 vabits_actual __ro_after_init = VA_BITS_MIN;
 EXPORT_SYMBOL(vabits_actual);
+#endif
 
 u64 kimage_vaddr __ro_after_init = (u64)&_text;
 EXPORT_SYMBOL(kimage_vaddr);
@@ -769,6 +771,17 @@ void __init paging_init(void)
 {
 	pgd_t *pgdp = pgd_set_fixmap(__pa_symbol(swapper_pg_dir));
 
+#if VA_BITS > 48
+	if (cpuid_feature_extract_unsigned_field(
+				read_sysreg_s(SYS_ID_AA64MMFR2_EL1),
+				ID_AA64MMFR2_LVA_SHIFT))
+		vabits_actual = VA_BITS;
+
+	/* make the variable visible to secondaries with the MMU off */
+	dcache_clean_inval_poc((u64)&vabits_actual,
+			       (u64)&vabits_actual + sizeof(vabits_actual));
+#endif
+
 	map_kernel(pgdp);
 	map_mem(pgdp);

From patchwork Mon Apr 11 09:47:57 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 561282
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon,
    Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 03/30] arm64: head: move assignment of idmap_t0sz to C code
Date: Mon, 11 Apr 2022 11:47:57 +0200
Message-Id: <20220411094824.4176877-4-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>
References: <20220411094824.4176877-1-ardb@kernel.org>

Setting idmap_t0sz involves fiddling with the caches if done with the
MMU off. Since we will be creating an initial ID map with the MMU and
caches off, and the permanent ID map with the MMU and caches on, let's
move this assignment of idmap_t0sz out of the startup code, and replace
it with a macro that simply issues the three instructions needed to
calculate the value wherever it is needed before the MMU is turned on.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/assembler.h   | 14 ++++++++++++++
 arch/arm64/include/asm/mmu_context.h |  2 +-
 arch/arm64/kernel/head.S             | 13 +------------
 arch/arm64/mm/mmu.c                  |  5 ++++-
 arch/arm64/mm/proc.S                 |  2 +-
 5 files changed, 21 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 8c5a61aeaf8e..9468f45c07a6 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -359,6 +359,20 @@ alternative_cb_end
 	bfi	\valreg, \t1sz, #TCR_T1SZ_OFFSET, #TCR_TxSZ_WIDTH
 .endm
 
+/*
+ * idmap_get_t0sz - get the T0SZ value needed to cover the ID map
+ *
+ * Calculate the maximum allowed value for TCR_EL1.T0SZ so that the
+ * entire ID map region can be mapped. As T0SZ == (64 - #bits used),
+ * this number conveniently equals the number of leading zeroes in
+ * the physical address of _end.
+ */
+	.macro	idmap_get_t0sz, reg
+	adrp	\reg, _end
+	orr	\reg, \reg, #(1 << VA_BITS_MIN) - 1
+	clz	\reg, \reg
+	.endm
+
 /*
  * tcr_compute_pa_size - set TCR.(I)PS to the highest supported
  * ID_AA64MMFR0_EL1.PARange value
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 6770667b34a3..6ac0086ebb1a 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -60,7 +60,7 @@ static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm)
  * TCR_T0SZ(VA_BITS), unless system RAM is positioned very high in
  * physical memory, in which case it will be smaller.
  */
-extern u64 idmap_t0sz;
+extern int idmap_t0sz;
 extern u64 idmap_ptrs_per_pgd;
 
 /*
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index dc07858eb673..7f361bc72d12 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -299,22 +299,11 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	 * physical address space. So for the ID map, use an extended virtual
 	 * range in that case, and configure an additional translation level
 	 * if needed.
-	 *
-	 * Calculate the maximum allowed value for TCR_EL1.T0SZ so that the
-	 * entire ID map region can be mapped. As T0SZ == (64 - #bits used),
-	 * this number conveniently equals the number of leading zeroes in
-	 * the physical address of __idmap_text_end.
 	 */
-	adrp	x5, __idmap_text_end
-	clz	x5, x5
+	idmap_get_t0sz x5
 	cmp	x5, TCR_T0SZ(VA_BITS_MIN)	// default T0SZ small enough?
 	b.ge	1f			// .. then skip VA range extension
 
-	adr_l	x6, idmap_t0sz
-	str	x5, [x6]
-	dmb	sy
-	dc	ivac, x6		// Invalidate potentially stale cache line
-
 #if (VA_BITS < 48)
 #define EXTRA_SHIFT	(PGDIR_SHIFT + PAGE_SHIFT - 3)
 #define EXTRA_PTRS	(1 << (PHYS_MASK_SHIFT - EXTRA_SHIFT))
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 2018e75974ca..a6732da20874 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -43,7 +43,7 @@
 #define NO_CONT_MAPPINGS	BIT(1)
 #define NO_EXEC_MAPPINGS	BIT(2)	/* assumes FEAT_HPDS is not used */
 
-u64 idmap_t0sz = TCR_T0SZ(VA_BITS_MIN);
+int idmap_t0sz __ro_after_init;
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 
 #if VA_BITS > 48
@@ -782,6 +782,9 @@ void __init paging_init(void)
 			       (u64)&vabits_actual + sizeof(vabits_actual));
 #endif
 
+	idmap_t0sz = min(63UL - __fls(__pa_symbol(_end)),
+			 TCR_T0SZ(VA_BITS_MIN));
+
 	map_kernel(pgdp);
 	map_mem(pgdp);
 
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 50bbed947bec..e802badf9ac0 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -469,7 +469,7 @@ SYM_FUNC_START(__cpu_setup)
 	add	x9, x9, #64
 	tcr_set_t1sz	tcr, x9
 #else
-	ldr_l	x9, idmap_t0sz
+	idmap_get_t0sz x9
 #endif
 	tcr_set_t0sz	tcr, x9

From patchwork Mon Apr 11 09:47:58 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 559789
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon,
    Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 04/30] arm64: head: drop idmap_ptrs_per_pgd
Date: Mon, 11 Apr 2022 11:47:58 +0200
Message-Id: <20220411094824.4176877-5-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>
References: <20220411094824.4176877-1-ardb@kernel.org>

The assignment of idmap_ptrs_per_pgd lacks any cache invalidation, even
though it is updated with the MMU and caches disabled. However, we never
bother to read the value again except in the very next instruction, and
so we can just drop the variable entirely.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/mmu_context.h | 1 -
 arch/arm64/kernel/head.S             | 7 +++----
 arch/arm64/mm/mmu.c                  | 1 -
 3 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 6ac0086ebb1a..7b387c3b312a 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -61,7 +61,6 @@ static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm)
  * physical memory, in which case it will be smaller.
  */
 extern int idmap_t0sz;
-extern u64 idmap_ptrs_per_pgd;
 
 /*
  * Ensure TCR.T0SZ is set to the provided value.
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 7f361bc72d12..53126a35d73c 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -300,6 +300,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	 * range in that case, and configure an additional translation level
 	 * if needed.
 	 */
+	mov	x4, #PTRS_PER_PGD
 	idmap_get_t0sz x5
 	cmp	x5, TCR_T0SZ(VA_BITS_MIN)	// default T0SZ small enough?
 	b.ge	1f			// .. then skip VA range extension
@@ -319,18 +320,16 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 #error "Mismatch between VA_BITS and page size/number of translation levels"
 #endif
 
-	mov	x4, EXTRA_PTRS
-	create_table_entry x0, x3, EXTRA_SHIFT, x4, x5, x6
+	mov	x2, EXTRA_PTRS
+	create_table_entry x0, x3, EXTRA_SHIFT, x2, x5, x6
 #else
 	/*
 	 * If VA_BITS == 48, we don't have to configure an additional
 	 * translation level, but the top-level table has more entries.
 	 */
 	mov	x4, #1 << (PHYS_MASK_SHIFT - PGDIR_SHIFT)
-	str_l	x4, idmap_ptrs_per_pgd, x5
 #endif
1:
-	ldr_l	x4, idmap_ptrs_per_pgd
 	adr_l	x6, __idmap_text_end		// __pa(__idmap_text_end)
 	map_memory x0, x1, x3, x6, x7, x3, x4, x10, x11, x12, x13, x14
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index a6732da20874..0618ece00b7e 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -44,7 +44,6 @@
 #define NO_EXEC_MAPPINGS	BIT(2)	/* assumes FEAT_HPDS is not used */
 
 int idmap_t0sz __ro_after_init;
-u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 
 #if VA_BITS > 48
 u64 vabits_actual __ro_after_init = VA_BITS_MIN;

From patchwork Mon Apr 11 09:47:59 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 559781
smtp.lore.kernel.org (Postfix) with ESMTP id BB513C4332F for ; Mon, 11 Apr 2022 09:50:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344709AbiDKJw1 (ORCPT ); Mon, 11 Apr 2022 05:52:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38778 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344826AbiDKJvy (ORCPT ); Mon, 11 Apr 2022 05:51:54 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 66CB2419A1 for ; Mon, 11 Apr 2022 02:48:57 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 57E166115F for ; Mon, 11 Apr 2022 09:48:55 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id CDA45C385AD; Mon, 11 Apr 2022 09:48:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1649670534; bh=CrPCfA3V4PBM2GL+sj5SJDi9iVpmtimBzj/JgVJ0+Lw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=G85PtY6NVLLrpuoR3LPfgT8UT15JuDeXNijSMKWFLUEwHqlz+buRDQUNOJGH8TCtt ByumpKdGuQLP4orGFNlmDWW4KWRl98pojbS/Iz7dV5GjijHwGyW+EWJxOaxkjKXMjC vd+CgeQ7E6RjWbNQ3n69c8J2vIuWotcvLwJp8/Hwqk9jBpECtobnaeXj8Dy85hAVLd 4Qsa89oiV3cKAvoTfDjcBvoArRP5jNsXNe3wS1iUCYsRyTP7P+Pqg0i+rK6uEHZYBV 806I9Qx2ogbSg7oEEawgVxB13ecIfKMW/3t960+oJLZ7oZQygJ9oPEfK3AR8r8lmel 6MJzZlrAsKBnA== From: Ard Biesheuvel To: linux-arm-kernel@lists.infradead.org Cc: linux-efi@vger.kernel.org, Ard Biesheuvel , Marc Zyngier , Will Deacon , Mark Rutland , Kees Cook , Catalin Marinas , Mark Brown Subject: [PATCH v3 05/30] arm64: head: simplify page table mapping macros (slightly) Date: Mon, 11 Apr 2022 11:47:59 +0200 Message-Id: <20220411094824.4176877-6-ardb@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: 
<20220411094824.4176877-1-ardb@kernel.org> References: <20220411094824.4176877-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=6906; h=from:subject; bh=CrPCfA3V4PBM2GL+sj5SJDi9iVpmtimBzj/JgVJ0+Lw=; b=owEB7QES/pANAwAKAcNPIjmS2Y8kAcsmYgBiU/k9FXSc03bKRkbZZivAv4z6021AB3/YP9xUV9nI oLbsa+OJAbMEAAEKAB0WIQT72WJ8QGnJQhU3VynDTyI5ktmPJAUCYlP5PQAKCRDDTyI5ktmPJBJzC/ 9Ro+R6drJNZXHy8IEK8YalAoFVJCt3N+Wt5e6WIx75e9A2wT6Z/Ok18MPFcj1zrh6CiHjg566alNtf chcCM+t/ctY+Dej6MFTwN9hfae+E/JFcXI+alkgZVVcpg+I8ymyzHRnR3/poOrhFPLHulOvl+HafY5 V1lr2oUfZwa4MUaLVWR5FV41dgcH8jVMdMJTuWgKg9s0/iesnxwO/Kb18rrNSLMGfbdtzi6laOEzq9 wpmIprb1VRYr0fd+hUKKLx4FkNQGrgFRdlKv3mAJPYGR5h7p/03xtrk0Uvao1eq6QBPR2HVTA61fqj HQY0uIbhykZJlmzLSkTXwVkoL/EFJuATHtxcyzu9ycwTz/oBFpO16OR5u/ylbolDuDCz//FCHgIF2u OgStc/AVgkPsOwQa3i6GbFxutEZmvBzH3TIn+NOuMvqKCprGWjmTCPRVks9H7HPwJDemCJgdooi0bU uNJH15qhOaBGwuNTLk2t0QntBEbTiDOfdLu6iuLGYahgg= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 Precedence: bulk List-ID: X-Mailing-List: linux-efi@vger.kernel.org Simplify the macros in head.S that are used to set up the early page tables, by switching to immediates for the number of bits that are interpreted as the table index at each level. This makes it much easier to infer from the instruction stream what is going on, and reduces the number of instructions emitted substantially. Note that the extended ID map for cases where no additional level needs to be configured now uses a compile time size as well, which means that we interpret up to 10 bits as the table index at the root level (for 52-bit physical addressing), without taking into account whether or not this is supported on the current system. 
However, those bits can only be set if we are executing the image from an address that exceeds the 48-bit PA range, and are guaranteed to be cleared otherwise, and given that we are dealing with a mapping in the lower TTBR0 range of the address space, the result is therefore the same as if we'd mask off only 6 bits. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/head.S | 55 ++++++++------------ 1 file changed, 22 insertions(+), 33 deletions(-) diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 53126a35d73c..9fdde2f9cc0f 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -179,31 +179,20 @@ SYM_CODE_END(preserve_boot_args) * vstart: virtual address of start of range * vend: virtual address of end of range - we map [vstart, vend] * shift: shift used to transform virtual address into index - * ptrs: number of entries in page table + * order: #imm 2log(number of entries in page table) * istart: index in table corresponding to vstart * iend: index in table corresponding to vend * count: On entry: how many extra entries were required in previous level, scales * our end index. 
* On exit: returns how many extra entries required for next page table level * - * Preserves: vstart, vend, shift, ptrs + * Preserves: vstart, vend * Returns: istart, iend, count */ - .macro compute_indices, vstart, vend, shift, ptrs, istart, iend, count - lsr \iend, \vend, \shift - mov \istart, \ptrs - sub \istart, \istart, #1 - and \iend, \iend, \istart // iend = (vend >> shift) & (ptrs - 1) - mov \istart, \ptrs - mul \istart, \istart, \count - add \iend, \iend, \istart // iend += count * ptrs - // our entries span multiple tables - - lsr \istart, \vstart, \shift - mov \count, \ptrs - sub \count, \count, #1 - and \istart, \istart, \count - + .macro compute_indices, vstart, vend, shift, order, istart, iend, count + ubfx \istart, \vstart, \shift, \order + ubfx \iend, \vend, \shift, \order + add \iend, \iend, \count, lsl \order sub \count, \iend, \istart .endm @@ -218,38 +207,39 @@ SYM_CODE_END(preserve_boot_args) * vend: virtual address of end of range - we map [vstart, vend - 1] * flags: flags to use to map last level entries * phys: physical address corresponding to vstart - physical memory is contiguous - * pgds: the number of pgd entries + * order: #imm 2log(number of entries in PGD table) * * Temporaries: istart, iend, tmp, count, sv - these need to be different registers * Preserves: vstart, flags * Corrupts: tbl, rtbl, vend, istart, iend, tmp, count, sv */ - .macro map_memory, tbl, rtbl, vstart, vend, flags, phys, pgds, istart, iend, tmp, count, sv + .macro map_memory, tbl, rtbl, vstart, vend, flags, phys, order, istart, iend, tmp, count, sv sub \vend, \vend, #1 add \rtbl, \tbl, #PAGE_SIZE - mov \sv, \rtbl mov \count, #0 - compute_indices \vstart, \vend, #PGDIR_SHIFT, \pgds, \istart, \iend, \count + + compute_indices \vstart, \vend, #PGDIR_SHIFT, #\order, \istart, \iend, \count + mov \sv, \rtbl populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp mov \tbl, \sv - mov \sv, \rtbl #if SWAPPER_PGTABLE_LEVELS > 3 - compute_indices 
\vstart, \vend, #PUD_SHIFT, #PTRS_PER_PUD, \istart, \iend, \count + compute_indices \vstart, \vend, #PUD_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count + mov \sv, \rtbl populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp mov \tbl, \sv - mov \sv, \rtbl #endif #if SWAPPER_PGTABLE_LEVELS > 2 - compute_indices \vstart, \vend, #SWAPPER_TABLE_SHIFT, #PTRS_PER_PMD, \istart, \iend, \count + compute_indices \vstart, \vend, #SWAPPER_TABLE_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count + mov \sv, \rtbl populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp mov \tbl, \sv #endif - compute_indices \vstart, \vend, #SWAPPER_BLOCK_SHIFT, #PTRS_PER_PTE, \istart, \iend, \count - bic \count, \phys, #SWAPPER_BLOCK_SIZE - 1 - populate_entries \tbl, \count, \istart, \iend, \flags, #SWAPPER_BLOCK_SIZE, \tmp + compute_indices \vstart, \vend, #SWAPPER_BLOCK_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count + bic \rtbl, \phys, #SWAPPER_BLOCK_SIZE - 1 + populate_entries \tbl, \rtbl, \istart, \iend, \flags, #SWAPPER_BLOCK_SIZE, \tmp .endm /* @@ -300,12 +290,12 @@ SYM_FUNC_START_LOCAL(__create_page_tables) * range in that case, and configure an additional translation level * if needed. */ - mov x4, #PTRS_PER_PGD idmap_get_t0sz x5 cmp x5, TCR_T0SZ(VA_BITS_MIN) // default T0SZ small enough? b.ge 1f // .. then skip VA range extension #if (VA_BITS < 48) +#define IDMAP_PGD_ORDER (VA_BITS - PGDIR_SHIFT) #define EXTRA_SHIFT (PGDIR_SHIFT + PAGE_SHIFT - 3) #define EXTRA_PTRS (1 << (PHYS_MASK_SHIFT - EXTRA_SHIFT)) @@ -323,16 +313,16 @@ SYM_FUNC_START_LOCAL(__create_page_tables) mov x2, EXTRA_PTRS create_table_entry x0, x3, EXTRA_SHIFT, x2, x5, x6 #else +#define IDMAP_PGD_ORDER (PHYS_MASK_SHIFT - PGDIR_SHIFT) /* * If VA_BITS == 48, we don't have to configure an additional * translation level, but the top-level table has more entries. 
*/ - mov x4, #1 << (PHYS_MASK_SHIFT - PGDIR_SHIFT) #endif 1: adr_l x6, __idmap_text_end // __pa(__idmap_text_end) - map_memory x0, x1, x3, x6, x7, x3, x4, x10, x11, x12, x13, x14 + map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14 /* * Map the kernel image (starting with PHYS_OFFSET). @@ -340,13 +330,12 @@ SYM_FUNC_START_LOCAL(__create_page_tables) adrp x0, init_pg_dir mov_q x5, KIMAGE_VADDR // compile time __va(_text) add x5, x5, x23 // add KASLR displacement - mov x4, PTRS_PER_PGD adrp x6, _end // runtime __pa(_end) adrp x3, _text // runtime __pa(_text) sub x6, x6, x3 // _end - _text add x6, x6, x5 // runtime __va(_end) - map_memory x0, x1, x5, x6, x7, x3, x4, x10, x11, x12, x13, x14 + map_memory x0, x1, x5, x6, x7, x3, (VA_BITS - PGDIR_SHIFT), x10, x11, x12, x13, x14 /* * Since the page tables have been populated with non-cacheable

From patchwork Mon Apr 11 09:48:00 2022
From: Ard Biesheuvel
Subject: [PATCH v3 06/30] arm64: head: switch to map_memory macro for the extended ID map
Date: Mon, 11 Apr 2022 11:48:00 +0200
Message-Id: <20220411094824.4176877-7-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>

In a future patch, we will start using an ID map that covers the entire image, rather than a single page. This means that we need to deal with the pathological case of an extended ID map where the kernel image does not fit neatly inside a single entry at the root level, which means we will need to create additional table entries and map additional pages for page tables. The existing map_memory macro already takes care of most of that, so let's just extend it to deal with this case as well.

While at it, drop the conditional branch on the value of T0SZ: we don't set the variable anymore in the entry code, and so we can just let the map_memory macro deal with the case where the output address exceeds VA_BITS.

Signed-off-by: Ard Biesheuvel
---
arch/arm64/kernel/head.S | 76 ++++++++++----------
1 file changed, 37 insertions(+), 39 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 9fdde2f9cc0f..eb54c0289c8a 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -122,29 +122,6 @@ SYM_CODE_START_LOCAL(preserve_boot_args) b dcache_inval_poc // tail call SYM_CODE_END(preserve_boot_args) -/* - * Macro to create a table entry to the next page.
- * - * tbl: page table address - * virt: virtual address - * shift: #imm page table shift - * ptrs: #imm pointers per table page - * - * Preserves: virt - * Corrupts: ptrs, tmp1, tmp2 - * Returns: tbl -> next level table page address - */ - .macro create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2 - add \tmp1, \tbl, #PAGE_SIZE - phys_to_pte \tmp2, \tmp1 - orr \tmp2, \tmp2, #PMD_TYPE_TABLE // address of next table and entry type - lsr \tmp1, \virt, #\shift - sub \ptrs, \ptrs, #1 - and \tmp1, \tmp1, \ptrs // table index - str \tmp2, [\tbl, \tmp1, lsl #3] - add \tbl, \tbl, #PAGE_SIZE // next level table page - .endm - /* * Macro to populate page table entries, these entries can be pointers to the next level * or last level entries pointing to physical memory. @@ -209,15 +186,27 @@ SYM_CODE_END(preserve_boot_args) * phys: physical address corresponding to vstart - physical memory is contiguous * order: #imm 2log(number of entries in PGD table) * + * If extra_shift is set, an extra level will be populated if the end address does + * not fit in 'extra_shift' bits. This assumes vend is in the TTBR0 range. 
+ * * Temporaries: istart, iend, tmp, count, sv - these need to be different registers * Preserves: vstart, flags * Corrupts: tbl, rtbl, vend, istart, iend, tmp, count, sv */ - .macro map_memory, tbl, rtbl, vstart, vend, flags, phys, order, istart, iend, tmp, count, sv + .macro map_memory, tbl, rtbl, vstart, vend, flags, phys, order, istart, iend, tmp, count, sv, extra_shift sub \vend, \vend, #1 add \rtbl, \tbl, #PAGE_SIZE mov \count, #0 + .ifnb \extra_shift + tst \vend, #~((1 << (\extra_shift)) - 1) + b.eq .L_\@ + compute_indices \vstart, \vend, #\extra_shift, #(PAGE_SHIFT - 3), \istart, \iend, \count + mov \sv, \rtbl + populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp + mov \tbl, \sv + .endif +.L_\@: compute_indices \vstart, \vend, #PGDIR_SHIFT, #\order, \istart, \iend, \count mov \sv, \rtbl populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp @@ -284,20 +273,32 @@ SYM_FUNC_START_LOCAL(__create_page_tables) adrp x3, __idmap_text_start // __pa(__idmap_text_start) /* - * VA_BITS may be too small to allow for an ID mapping to be created - * that covers system RAM if that is located sufficiently high in the - * physical address space. So for the ID map, use an extended virtual - * range in that case, and configure an additional translation level - * if needed. + * The ID map carries a 1:1 mapping of the physical address range + * covered by the loaded image, which could be anywhere in DRAM. This + * means that the required size of the VA (== PA) space is decided at + * boot time, and could be more than the configured size of the VA + * space for ordinary kernel and user space mappings. + * + * There are three cases to consider here: + * - 39 <= VA_BITS < 48, and the ID map needs up to 48 VA bits to cover + * the placement of the image. In this case, we configure one extra + * level of translation on the fly for the ID map only. (This case + * also covers 42-bit VA/52-bit PA on 64k pages). 
+ * + * - VA_BITS == 48, and the ID map needs more than 48 VA bits. This can + * only happen when using 64k pages, in which case we need to extend + * the root level table rather than add a level. Note that we can + * treat this case as 'always extended' as long as we take care not + * to program an unsupported T0SZ value into the TCR register. + * + * - Combinations that would require two additional levels of + * translation are not supported, e.g., VA_BITS==36 on 16k pages, or + * VA_BITS==39/4k pages with 5-level paging, where the input address + * requires more than 47 or 48 bits, respectively. */ - idmap_get_t0sz x5 - cmp x5, TCR_T0SZ(VA_BITS_MIN) // default T0SZ small enough? - b.ge 1f // .. then skip VA range extension - #if (VA_BITS < 48) #define IDMAP_PGD_ORDER (VA_BITS - PGDIR_SHIFT) #define EXTRA_SHIFT (PGDIR_SHIFT + PAGE_SHIFT - 3) -#define EXTRA_PTRS (1 << (PHYS_MASK_SHIFT - EXTRA_SHIFT)) /* * If VA_BITS < 48, we have to configure an additional table level. @@ -309,20 +310,17 @@ SYM_FUNC_START_LOCAL(__create_page_tables) #if VA_BITS != EXTRA_SHIFT #error "Mismatch between VA_BITS and page size/number of translation levels" #endif - - mov x2, EXTRA_PTRS - create_table_entry x0, x3, EXTRA_SHIFT, x2, x5, x6 #else #define IDMAP_PGD_ORDER (PHYS_MASK_SHIFT - PGDIR_SHIFT) +#define EXTRA_SHIFT /* * If VA_BITS == 48, we don't have to configure an additional * translation level, but the top-level table has more entries. */ #endif -1: adr_l x6, __idmap_text_end // __pa(__idmap_text_end) - map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14 + map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT /* * Map the kernel image (starting with PHYS_OFFSET). 
From patchwork Mon Apr 11 09:48:01 2022
From: Ard Biesheuvel
Subject: [PATCH v3 07/30] arm64: head: split off idmap creation code
Date: Mon, 11 Apr 2022 11:48:01 +0200
Message-Id: <20220411094824.4176877-8-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>

Split off the creation of the ID map page tables, so that we can avoid running it again unnecessarily when KASLR is in effect (which only randomizes the virtual placement). This will permit us to drop some explicit cache maintenance to the PoC which was necessary because the cache invalidation being performed on some global variables might otherwise clobber unrelated variables that happen to share a cacheline.
Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/head.S | 101 ++++++++++---------- 1 file changed, 52 insertions(+), 49 deletions(-) diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index eb54c0289c8a..1cbc52097bf9 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -84,7 +84,7 @@ * Register Scope Purpose * x21 primary_entry() .. start_kernel() FDT pointer passed at boot in x0 * x23 primary_entry() .. start_kernel() physical misalignment/KASLR offset - * x28 __create_page_tables() callee preserved temp register + * x28 clear_page_tables() callee preserved temp register * x19/x20 __primary_switch() callee preserved temp registers * x24 __primary_switch() .. relocate_kernel() current RELR displacement */ @@ -94,7 +94,10 @@ SYM_CODE_START(primary_entry) adrp x23, __PHYS_OFFSET and x23, x23, MIN_KIMG_ALIGN - 1 // KASLR offset, defaults to 0 bl set_cpu_boot_mode_flag - bl __create_page_tables + bl clear_page_tables + bl create_idmap + bl create_kernel_mapping + /* * The following calls CPU setup code, see arch/arm64/mm/proc.S for * details. @@ -122,6 +125,35 @@ SYM_CODE_START_LOCAL(preserve_boot_args) b dcache_inval_poc // tail call SYM_CODE_END(preserve_boot_args) +SYM_FUNC_START_LOCAL(clear_page_tables) + mov x28, lr + + /* + * Invalidate the init page tables to avoid potential dirty cache lines + * being evicted. Other page tables are allocated in rodata as part of + * the kernel image, and thus are clean to the PoC per the boot + * protocol. + */ + adrp x0, init_pg_dir + adrp x1, init_pg_end + bl dcache_inval_poc + + /* + * Clear the init page tables. 
+ */ + adrp x0, init_pg_dir + adrp x1, init_pg_end + sub x1, x1, x0 +1: stp xzr, xzr, [x0], #16 + stp xzr, xzr, [x0], #16 + stp xzr, xzr, [x0], #16 + stp xzr, xzr, [x0], #16 + subs x1, x1, #64 + b.ne 1b + + ret x28 +SYM_FUNC_END(clear_page_tables) + /* * Macro to populate page table entries, these entries can be pointers to the next level * or last level entries pointing to physical memory. @@ -231,44 +263,8 @@ SYM_CODE_END(preserve_boot_args) populate_entries \tbl, \rtbl, \istart, \iend, \flags, #SWAPPER_BLOCK_SIZE, \tmp .endm -/* - * Setup the initial page tables. We only setup the barest amount which is - * required to get the kernel running. The following sections are required: - * - identity mapping to enable the MMU (low address, TTBR0) - * - first few MB of the kernel linear mapping to jump to once the MMU has - * been enabled - */ -SYM_FUNC_START_LOCAL(__create_page_tables) - mov x28, lr - /* - * Invalidate the init page tables to avoid potential dirty cache lines - * being evicted. Other page tables are allocated in rodata as part of - * the kernel image, and thus are clean to the PoC per the boot - * protocol. - */ - adrp x0, init_pg_dir - adrp x1, init_pg_end - bl dcache_inval_poc - - /* - * Clear the init page tables. - */ - adrp x0, init_pg_dir - adrp x1, init_pg_end - sub x1, x1, x0 -1: stp xzr, xzr, [x0], #16 - stp xzr, xzr, [x0], #16 - stp xzr, xzr, [x0], #16 - stp xzr, xzr, [x0], #16 - subs x1, x1, #64 - b.ne 1b - - mov x7, SWAPPER_MM_MMUFLAGS - - /* - * Create the identity mapping. - */ +SYM_FUNC_START_LOCAL(create_idmap) adrp x0, idmap_pg_dir adrp x3, __idmap_text_start // __pa(__idmap_text_start) @@ -319,12 +315,23 @@ SYM_FUNC_START_LOCAL(__create_page_tables) */ #endif adr_l x6, __idmap_text_end // __pa(__idmap_text_end) + mov x7, SWAPPER_MM_MMUFLAGS map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT /* - * Map the kernel image (starting with PHYS_OFFSET). 
+ * Since the page tables have been populated with non-cacheable + * accesses (MMU disabled), invalidate those tables again to + * remove any speculatively loaded cache lines. */ + dmb sy + + adrp x0, idmap_pg_dir + adrp x1, idmap_pg_end + b dcache_inval_poc // tail call +SYM_FUNC_END(create_idmap) + +SYM_FUNC_START_LOCAL(create_kernel_mapping) adrp x0, init_pg_dir mov_q x5, KIMAGE_VADDR // compile time __va(_text) add x5, x5, x23 // add KASLR displacement @@ -332,6 +339,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables) adrp x3, _text // runtime __pa(_text) sub x6, x6, x3 // _end - _text add x6, x6, x5 // runtime __va(_end) + mov x7, SWAPPER_MM_MMUFLAGS map_memory x0, x1, x5, x6, x7, x3, (VA_BITS - PGDIR_SHIFT), x10, x11, x12, x13, x14 @@ -342,16 +350,10 @@ SYM_FUNC_START_LOCAL(__create_page_tables) */ dmb sy - adrp x0, idmap_pg_dir - adrp x1, idmap_pg_end - bl dcache_inval_poc - adrp x0, init_pg_dir adrp x1, init_pg_end - bl dcache_inval_poc - - ret x28 -SYM_FUNC_END(__create_page_tables) + b dcache_inval_poc // tail call +SYM_FUNC_END(create_kernel_mapping) /* * Initialize CPU registers with task-specific and cpu-specific context. 
@@ -836,7 +838,8 @@ SYM_FUNC_START_LOCAL(__primary_switch) pre_disable_mmu_workaround msr sctlr_el1, x20 // disable the MMU isb - bl __create_page_tables // recreate kernel mapping + bl clear_page_tables + bl create_kernel_mapping // Recreate kernel mapping tlbi vmalle1 // Remove any stale TLB entries dsb nsh

From patchwork Mon Apr 11 09:48:02 2022
From: Ard Biesheuvel
Subject: [PATCH v3 08/30] arm64: kernel: drop unnecessary PoC cache clean+invalidate
Date: Mon, 11 Apr 2022 11:48:02 +0200
Message-Id: <20220411094824.4176877-9-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>

Some early boot code runs before the virtual placement of the kernel is finalized, and we used to go back to the very start and recreate the ID map along with the page tables describing the virtual kernel mapping, and this
involved setting some global variables with the caches off. In order to ensure that global state created by the KASLR code is not corrupted by the cache invalidation that occurs in that case, we needed to clean those global variables to the PoC explicitly. This is no longer needed now that the ID map is created only once (and the associated global variable updates are no longer repeated). So drop the cache maintenance that is no longer necessary. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/kaslr.c | 11 ----------- 1 file changed, 11 deletions(-) diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c index 418b2bba1521..d5542666182f 100644 --- a/arch/arm64/kernel/kaslr.c +++ b/arch/arm64/kernel/kaslr.c @@ -13,7 +13,6 @@ #include #include -#include #include #include #include @@ -72,9 +71,6 @@ u64 __init kaslr_early_init(void) * we end up running with module randomization disabled. */ module_alloc_base = (u64)_etext - MODULES_VSIZE; - dcache_clean_inval_poc((unsigned long)&module_alloc_base, - (unsigned long)&module_alloc_base + - sizeof(module_alloc_base)); /* * Try to map the FDT early. 
If this fails, we simply bail, @@ -174,13 +170,6 @@ u64 __init kaslr_early_init(void) module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21; module_alloc_base &= PAGE_MASK; - dcache_clean_inval_poc((unsigned long)&module_alloc_base, - (unsigned long)&module_alloc_base + - sizeof(module_alloc_base)); - dcache_clean_inval_poc((unsigned long)&memstart_offset_seed, - (unsigned long)&memstart_offset_seed + - sizeof(memstart_offset_seed)); - return offset; }

From patchwork Mon Apr 11 09:48:03 2022
From: Ard Biesheuvel
Subject: [PATCH v3 09/30] arm64: head: pass ID map root table address to __enable_mmu()
Date: Mon, 11 Apr 2022 11:48:03 +0200
Message-Id: <20220411094824.4176877-10-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>

We will be adding an initial ID map that covers
the entire kernel image, so we will pass the actual ID map root table to use to __enable_mmu(), rather than hard code it. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/head.S | 14 ++++++++------ arch/arm64/kernel/sleep.S | 1 + 2 files changed, 9 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 1cbc52097bf9..70c462bbd6bf 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -595,6 +595,7 @@ SYM_FUNC_START_LOCAL(secondary_startup) bl __cpu_secondary_check52bitva bl __cpu_setup // initialise processor adrp x1, swapper_pg_dir + adrp x2, idmap_pg_dir bl __enable_mmu ldr x8, =__secondary_switched br x8 @@ -648,6 +649,7 @@ SYM_FUNC_END(__secondary_too_slow) * * x0 = SCTLR_EL1 value for turning on the MMU. * x1 = TTBR1_EL1 value + * x2 = ID map root table address * * Returns to the caller via x30/lr. This requires the caller to be covered * by the .idmap.text section. @@ -656,14 +658,13 @@ SYM_FUNC_END(__secondary_too_slow) * If it isn't, park the CPU */ SYM_FUNC_START(__enable_mmu) - mrs x2, ID_AA64MMFR0_EL1 - ubfx x2, x2, #ID_AA64MMFR0_TGRAN_SHIFT, 4 - cmp x2, #ID_AA64MMFR0_TGRAN_SUPPORTED_MIN + mrs x3, ID_AA64MMFR0_EL1 + ubfx x3, x3, #ID_AA64MMFR0_TGRAN_SHIFT, 4 + cmp x3, #ID_AA64MMFR0_TGRAN_SUPPORTED_MIN b.lt __no_granule_support - cmp x2, #ID_AA64MMFR0_TGRAN_SUPPORTED_MAX + cmp x3, #ID_AA64MMFR0_TGRAN_SUPPORTED_MAX b.gt __no_granule_support - update_early_cpu_boot_status 0, x2, x3 - adrp x2, idmap_pg_dir + update_early_cpu_boot_status 0, x3, x4 phys_to_ttbr x1, x1 phys_to_ttbr x2, x2 msr ttbr0_el1, x2 // load TTBR0 @@ -819,6 +820,7 @@ SYM_FUNC_START_LOCAL(__primary_switch) #endif adrp x1, init_pg_dir + adrp x2, idmap_pg_dir bl __enable_mmu #ifdef CONFIG_RELOCATABLE #ifdef CONFIG_RELR diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S index 4ea9392f86e0..e36b09d942f7 100644 --- a/arch/arm64/kernel/sleep.S +++ b/arch/arm64/kernel/sleep.S @@ -104,6 +104,7 @@ 
SYM_CODE_START(cpu_resume) bl __cpu_setup /* enable the MMU early - so we can access sleep_save_stash by va */ adrp x1, swapper_pg_dir + adrp x2, idmap_pg_dir bl __enable_mmu ldr x8, =_cpu_resume br x8

From patchwork Mon Apr 11 09:48:04 2022
From: Ard Biesheuvel
Subject: [PATCH v3 10/30] arm64: mm: provide idmap pointer to cpu_replace_ttbr1()
Date: Mon, 11 Apr 2022 11:48:04 +0200
Message-Id: <20220411094824.4176877-11-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>

In preparation for changing the way we initialize the permanent ID map, update cpu_replace_ttbr1() so we can use it with the initial ID map as well.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/mmu_context.h | 13 +++++++++----
 arch/arm64/kernel/cpufeature.c       |  2 +-
 arch/arm64/kernel/suspend.c          |  2 +-
 arch/arm64/mm/kasan_init.c           |  4 ++--
 arch/arm64/mm/mmu.c                  |  2 +-
 5 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 7b387c3b312a..c7ccd82db1d2 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -105,13 +105,18 @@ static inline void cpu_uninstall_idmap(void)
 	cpu_switch_mm(mm->pgd, mm);
 }
 
-static inline void cpu_install_idmap(void)
+static inline void __cpu_install_idmap(pgd_t *idmap)
 {
 	cpu_set_reserved_ttbr0();
 	local_flush_tlb_all();
 	cpu_set_idmap_tcr_t0sz();
 
-	cpu_switch_mm(lm_alias(idmap_pg_dir), &init_mm);
+	cpu_switch_mm(lm_alias(idmap), &init_mm);
+}
+
+static inline void cpu_install_idmap(void)
+{
+	__cpu_install_idmap(idmap_pg_dir);
 }
 
 /*
@@ -142,7 +147,7 @@ static inline void cpu_install_ttbr0(phys_addr_t ttbr0, unsigned long t0sz)
  * Atomically replaces the active TTBR1_EL1 PGD with a new VA-compatible PGD,
  * avoiding the possibility of conflicting TLB entries being allocated.
  */
-static inline void __nocfi cpu_replace_ttbr1(pgd_t *pgdp)
+static inline void __nocfi cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap)
 {
 	typedef void (ttbr_replace_func)(phys_addr_t);
 	extern ttbr_replace_func idmap_cpu_replace_ttbr1;
@@ -165,7 +170,7 @@ static inline void __nocfi cpu_replace_ttbr1(pgd_t *pgdp)
 
 	replace_phys = (void *)__pa_symbol(function_nocfi(idmap_cpu_replace_ttbr1));
 
-	cpu_install_idmap();
+	__cpu_install_idmap(idmap);
 	replace_phys(ttbr1);
 	cpu_uninstall_idmap();
 }
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d72c4b4d389c..1661766f50f3 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3108,7 +3108,7 @@ subsys_initcall_sync(init_32bit_el0_mask);
 
 static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
 {
-	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
+	cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir);
 }
 
 /*
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index 19ee7c33769d..40bf1551d1ad 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -52,7 +52,7 @@ void notrace __cpu_suspend_exit(void)
 
 	/* Restore CnP bit in TTBR1_EL1 */
 	if (system_supports_cnp())
-		cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
+		cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir);
 
 	/*
 	 * PSTATE was not saved over suspend/resume, re-enable any detected
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index c12cd700598f..e969e68de005 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -236,7 +236,7 @@ static void __init kasan_init_shadow(void)
 	 */
 	memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(tmp_pg_dir));
 	dsb(ishst);
-	cpu_replace_ttbr1(lm_alias(tmp_pg_dir));
+	cpu_replace_ttbr1(lm_alias(tmp_pg_dir), idmap_pg_dir);
 
 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
 
@@ -280,7 +280,7 @@ static void __init kasan_init_shadow(void)
 				PAGE_KERNEL_RO));
 
 	memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
-	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
+	cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir);
 }
 
 static void __init kasan_init_depth(void)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 0618ece00b7e..de171114a979 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -789,7 +789,7 @@ void __init paging_init(void)
 
 	pgd_clear_fixmap();
 
-	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
+	cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir);
 
 	init_mm.pgd = swapper_pg_dir;
 
 	memblock_phys_free(__pa_symbol(init_pg_dir),

From patchwork Mon Apr 11 09:48:05 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 561274
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 11/30] arm64: head: add helper function to remap regions in early page tables
Date: Mon, 11 Apr 2022 11:48:05 +0200
Message-Id: <20220411094824.4176877-12-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>
The asm macros used to create the initial ID map and kernel mappings don't support randomly remapping parts of the address space after it has been populated. What we can do, however, given that all block or page mappings are created at the final level, is take a subset of the mapped range and update its attributes or output address. This will permit us to make parts of these page tables read-only, or remap a part of it to cover the device tree.

So add a helper that encapsulates this.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S | 31 ++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 70c462bbd6bf..6fc8f7f88a1a 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -263,6 +263,37 @@ SYM_FUNC_END(clear_page_tables)
 	populate_entries \tbl, \rtbl, \istart, \iend, \flags, #SWAPPER_BLOCK_SIZE, \tmp
 .endm
 
+/*
+ * Remap a subregion created with the map_memory macro with modified attributes
+ * or output address. The entire remapped region must have been covered in the
+ * invocation of map_memory.
+ * x0: last level table address (returned in first argument to map_memory)
+ * x1: start VA of the existing mapping
+ * x2: start VA of the region to update
+ * x3: end VA of the region to update (inclusive)
+ * x4: start PA associated with the region to update
+ * x5: attributes to set on the updated region
+ * x6: order of the last level mappings
+ */
+SYM_FUNC_START_LOCAL(remap_region)
+	// Get the index offset for the start of the last level table
+	lsr	x1, x1, x6
+	bfc	x1, #0, #PAGE_SHIFT - 3
+
+	// Derive the start and end indexes into the last level table
+	// associated with the provided region
+	lsr	x2, x2, x6
+	lsr	x3, x3, x6
+	sub	x2, x2, x1
+	sub	x3, x3, x1
+
+	mov	x1, #1
+	lsl	x6, x1, x6		// block size at this level
+
+	populate_entries x0, x4, x2, x3, x5, x6, x7
+	ret
+SYM_FUNC_END(remap_region)
 
 SYM_FUNC_START_LOCAL(create_idmap)
 	adrp	x0, idmap_pg_dir

From patchwork Mon Apr 11 09:48:06 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 559786
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 12/30] arm64: head: cover entire kernel image in initial ID map
Date: Mon, 11 Apr 2022 11:48:06 +0200
Message-Id: <20220411094824.4176877-13-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>
As a first step towards avoiding the need to create, tear down and recreate the kernel virtual mapping with MMU and caches disabled, start by expanding the ID map so it covers the page tables as well as all executable code. This will allow us to populate the page tables with the MMU and caches on, and call KASLR init code before setting up the virtual mapping.

Since this ID map is only needed at boot, create it as a temporary set of page tables, and populate the permanent ID map after enabling the MMU and caches. While at it, switch to read-only attributes where possible, as writable permissions are only needed for the initial kernel page tables.

Note that on 4k granule configurations, the permanent ID map will now be reduced to a single page rather than a 2M block mapping.
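The INIT_DIR_SIZE/EARLY_PAGES() arithmetic that the new INIT_IDMAP_DIR_SIZE builds on can be sanity-checked in plain C. This is an illustrative model only: it hard-codes 4k-granule shifts (512 entries per level, 48-bit VAs) and PMD-level block mappings, rather than using the real kernel macros:

```c
#include <assert.h>

/*
 * Pages of page tables needed to map [start, end): one PGD page, plus
 * one PUD table per 512 GiB span touched, plus one PMD table per 1 GiB
 * span touched. Blocks are mapped at the PMD level, so no PTE pages.
 */
static unsigned long early_pages(unsigned long start, unsigned long end)
{
	unsigned long pages = 1;	/* the PGD itself */
	int shift;

	/* one next-level table per distinct index at each upper level */
	for (shift = 39; shift >= 30; shift -= 9)
		pages += ((end - 1) >> shift) - (start >> shift) + 1;

	return pages;
}
```

Mapping a 2 MiB image that does not cross a 1 GiB boundary needs three table pages (PGD, PUD, PMD); the two extra pages reserved when VA_BITS < 48 leave room for the case where the ID map has to be extended to reach a load address beyond the configured VA range.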
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/kernel-pgtable.h | 16 ++++++----
 arch/arm64/kernel/head.S                | 31 +++++++++++++-------
 arch/arm64/kernel/vmlinux.lds.S         |  7 +++--
 arch/arm64/mm/mmu.c                     | 23 ++++++++++++++-
 4 files changed, 59 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 96dc0f7da258..5395e5a04f35 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -35,10 +35,8 @@
  */
 #if ARM64_KERNEL_USES_PMD_MAPS
 #define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS - 1)
-#define IDMAP_PGTABLE_LEVELS	(ARM64_HW_PGTABLE_LEVELS(PHYS_MASK_SHIFT) - 1)
 #else
 #define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS)
-#define IDMAP_PGTABLE_LEVELS	(ARM64_HW_PGTABLE_LEVELS(PHYS_MASK_SHIFT))
 #endif
 
@@ -87,7 +85,13 @@
 			+ EARLY_PUDS((vstart), (vend))	/* each PUD needs a next level page table */	\
 			+ EARLY_PMDS((vstart), (vend)))	/* each PMD needs a next level page table */
 #define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end))
-#define IDMAP_DIR_SIZE		(IDMAP_PGTABLE_LEVELS * PAGE_SIZE)
+
+/* the initial ID map may need two extra pages if it needs to be extended */
+#if VA_BITS < 48
+#define INIT_IDMAP_DIR_SIZE	(INIT_DIR_SIZE + (2 * PAGE_SIZE))
+#else
+#define INIT_IDMAP_DIR_SIZE	INIT_DIR_SIZE
+#endif
 
 /* Initial memory map size */
 #if ARM64_KERNEL_USES_PMD_MAPS
@@ -107,9 +111,11 @@
 #define SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
 
 #if ARM64_KERNEL_USES_PMD_MAPS
-#define SWAPPER_MM_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
+#define SWAPPER_RW_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
+#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PMD_SECT_RDONLY)
 #else
-#define SWAPPER_MM_MMUFLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
+#define SWAPPER_RW_MMUFLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
+#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PTE_RDONLY)
 #endif
 
 /*
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 6fc8f7f88a1a..4ef12bcdfe6a 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -87,6 +87,7 @@
  *  x28        clear_page_tables()                      callee preserved temp register
  *  x19/x20    __primary_switch()                       callee preserved temp registers
  *  x24        __primary_switch() .. relocate_kernel()  current RELR displacement
+ *  x28        create_idmap()                           callee preserved temp register
  */
 SYM_CODE_START(primary_entry)
 	bl	preserve_boot_args
@@ -296,9 +297,7 @@ SYM_FUNC_START_LOCAL(remap_region)
 SYM_FUNC_END(remap_region)
 
 SYM_FUNC_START_LOCAL(create_idmap)
-	adrp	x0, idmap_pg_dir
-	adrp	x3, __idmap_text_start		// __pa(__idmap_text_start)
-
+	mov	x28, lr
 	/*
 	 * The ID map carries a 1:1 mapping of the physical address range
 	 * covered by the loaded image, which could be anywhere in DRAM. This
@@ -345,11 +344,22 @@ SYM_FUNC_START_LOCAL(create_idmap)
 	 * translation level, but the top-level table has more entries.
 	 */
 #endif
-	adr_l	x6, __idmap_text_end		// __pa(__idmap_text_end)
-	mov	x7, SWAPPER_MM_MMUFLAGS
+	adrp	x0, init_idmap_pg_dir
+	adrp	x3, _text
+	adrp	x6, _end
+	mov	x7, SWAPPER_RX_MMUFLAGS
 
 	map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT
 
+	/* Remap the kernel page tables r/w in the ID map */
+	adrp	x1, _text
+	adrp	x2, init_pg_dir
+	adr_l	x3, init_pg_end - 1
+	bic	x4, x2, #SWAPPER_BLOCK_SIZE - 1
+	mov	x5, SWAPPER_RW_MMUFLAGS
+	mov	x6, #SWAPPER_BLOCK_SHIFT
+	bl	remap_region
+
 	/*
 	 * Since the page tables have been populated with non-cacheable
 	 * accesses (MMU disabled), invalidate those tables again to
@@ -357,9 +367,10 @@ SYM_FUNC_START_LOCAL(create_idmap)
 	 */
 	dmb	sy
 
-	adrp	x0, idmap_pg_dir
-	adrp	x1, idmap_pg_end
-	b	dcache_inval_poc		// tail call
+	adrp	x0, init_idmap_pg_dir
+	adrp	x1, init_idmap_pg_end
+	bl	dcache_inval_poc
+	ret	x28
 SYM_FUNC_END(create_idmap)
 
 SYM_FUNC_START_LOCAL(create_kernel_mapping)
@@ -370,7 +381,7 @@ SYM_FUNC_START_LOCAL(create_kernel_mapping)
 	adrp	x3, _text			// runtime __pa(_text)
 	sub	x6, x6, x3			// _end - _text
 	add	x6, x6, x5			// runtime __va(_end)
-	mov	x7, SWAPPER_MM_MMUFLAGS
+	mov	x7, SWAPPER_RW_MMUFLAGS
 
 	map_memory x0, x1, x5, x6, x7, x3, (VA_BITS - PGDIR_SHIFT), x10, x11, x12, x13, x14
 
@@ -851,7 +862,7 @@ SYM_FUNC_START_LOCAL(__primary_switch)
 #endif
 	adrp	x1, init_pg_dir
-	adrp	x2, idmap_pg_dir
+	adrp	x2, init_idmap_pg_dir
 	bl	__enable_mmu
 #ifdef CONFIG_RELOCATABLE
 #ifdef CONFIG_RELR
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index edaf0faf766f..7030b5a57d23 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -195,8 +195,7 @@ SECTIONS
 	HYPERVISOR_DATA_SECTIONS
 
 	idmap_pg_dir = .;
-	. += IDMAP_DIR_SIZE;
-	idmap_pg_end = .;
+	. += PAGE_SIZE;
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	tramp_pg_dir = .;
@@ -232,6 +231,10 @@ SECTIONS
 	__inittext_end = .;
 	__initdata_begin = .;
 
+	init_idmap_pg_dir = .;
+	. += INIT_IDMAP_DIR_SIZE;
+	init_idmap_pg_end = .;
+
 	.init.data : {
 		INIT_DATA
 		INIT_SETUP(16)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index de171114a979..07219afe2723 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -766,9 +766,28 @@ static void __init map_kernel(pgd_t *pgdp)
 	kasan_copy_shadow(pgdp);
 }
 
+static void __init create_idmap(void)
+{
+	u64 start = __pa_symbol(__idmap_text_start);
+	u64 size = __pa_symbol(__idmap_text_end) - start;
+	pgd_t *pgd = idmap_pg_dir;
+	u64 pgd_phys;
+
+	/* check if we need an additional level of translation */
+	if (VA_BITS < 48 && idmap_t0sz < TCR_T0SZ(VA_BITS_MIN)) {
+		pgd_phys = early_pgtable_alloc(PAGE_SHIFT);
+		set_pgd(&idmap_pg_dir[start >> VA_BITS],
+			__pgd(pgd_phys | P4D_TYPE_TABLE));
+		pgd = __va(pgd_phys);
+	}
+	__create_pgd_mapping(pgd, start, start, size, PAGE_KERNEL_ROX,
+			     early_pgtable_alloc, 0);
+}
+
 void __init paging_init(void)
 {
 	pgd_t *pgdp = pgd_set_fixmap(__pa_symbol(swapper_pg_dir));
+	extern pgd_t init_idmap_pg_dir[];
 
 #if VA_BITS > 48
 	if (cpuid_feature_extract_unsigned_field(
@@ -789,13 +808,15 @@ void __init paging_init(void)
 
 	pgd_clear_fixmap();
 
-	cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir);
+	cpu_replace_ttbr1(lm_alias(swapper_pg_dir), init_idmap_pg_dir);
 
 	init_mm.pgd = swapper_pg_dir;
 
 	memblock_phys_free(__pa_symbol(init_pg_dir),
 			   __pa_symbol(init_pg_end) - __pa_symbol(init_pg_dir));
 
 	memblock_allow_resize();
+
+	create_idmap();
 }
 
 /*

From patchwork Mon Apr 11 09:48:07 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 561278
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 13/30] arm64: head: use relative references to the RELA and RELR tables
Date: Mon, 11 Apr 2022 11:48:07 +0200
Message-Id: <20220411094824.4176877-14-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>

Formerly, we had to access the RELA and RELR tables via the kernel mapping that was being relocated, and so deriving the start and end addresses
using ADRP/ADD references was not possible, as the relocation code runs from the ID map.

Now that we map the entire kernel image via the ID map, we can simplify this, and just load the entries via the ID map as well.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S        | 13 ++++---------
 arch/arm64/kernel/vmlinux.lds.S | 12 ++++--------
 2 files changed, 8 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 4ef12bcdfe6a..2c491cac4871 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -755,13 +755,10 @@ SYM_FUNC_START_LOCAL(__relocate_kernel)
 	 * Iterate over each entry in the relocation table, and apply the
 	 * relocations in place.
 	 */
-	ldr	w9, =__rela_offset		// offset to reloc table
-	ldr	w10, =__rela_size		// size of reloc table
-
+	adr_l	x9, __rela_start
+	adr_l	x10, __rela_end
 	mov_q	x11, KIMAGE_VADDR		// default virtual offset
 	add	x11, x11, x23			// actual virtual offset
-	add	x9, x9, x11			// __va(.rela)
-	add	x10, x9, x10			// __va(.rela) + sizeof(.rela)
 
 0:	cmp	x9, x10
 	b.hs	1f
@@ -811,10 +808,8 @@ SYM_FUNC_START_LOCAL(__relocate_kernel)
 	 * __relocate_kernel is called twice with non-zero displacements (i.e.
 	 * if there is both a physical misalignment and a KASLR displacement).
 	 */
-	ldr	w9, =__relr_offset		// offset to reloc table
-	ldr	w10, =__relr_size		// size of reloc table
-	add	x9, x9, x11			// __va(.relr)
-	add	x10, x9, x10			// __va(.relr) + sizeof(.relr)
+	adr_l	x9, __relr_start
+	adr_l	x10, __relr_end
 
 	sub	x15, x23, x24			// delta from previous offset
 	cbz	x15, 7f				// nothing to do if unchanged
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 7030b5a57d23..21ca72e7ad22 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -253,21 +253,17 @@ SECTIONS
 	HYPERVISOR_RELOC_SECTION
 
 	.rela.dyn : ALIGN(8) {
+		__rela_start = .;
 		*(.rela .rela*)
+		__rela_end = .;
 	}
 
-	__rela_offset	= ABSOLUTE(ADDR(.rela.dyn) - KIMAGE_VADDR);
-	__rela_size	= SIZEOF(.rela.dyn);
-
-#ifdef CONFIG_RELR
 	.relr.dyn : ALIGN(8) {
+		__relr_start = .;
 		*(.relr.dyn)
+		__relr_end = .;
 	}
 
-	__relr_offset	= ABSOLUTE(ADDR(.relr.dyn) - KIMAGE_VADDR);
-	__relr_size	= SIZEOF(.relr.dyn);
-#endif
-
 	. = ALIGN(SEGMENT_ALIGN);
 	__initdata_end = .;
 	__init_end = .;

From patchwork Mon Apr 11 09:48:08 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 559785
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 14/30] arm64: head: create a temporary FDT mapping in the initial ID map
Date: Mon, 11 Apr 2022 11:48:08 +0200
Message-Id: <20220411094824.4176877-15-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>
We need to access the DT very early to get at the command line and the KASLR seed, which currently means we rely on some hacks to call into the kernel before really calling into the kernel, which is undesirable.

So instead, let's create a mapping for the FDT in the initial ID map, which is feasible now that it has been extended to cover more than a single page or block, and can be updated in place to remap other output addresses.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/kernel-pgtable.h |  6 ++++--
 arch/arm64/kernel/head.S                | 14 +++++++++++++-
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 5395e5a04f35..02e59fa8f293 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -8,6 +8,7 @@
 #ifndef __ASM_KERNEL_PGTABLE_H
 #define __ASM_KERNEL_PGTABLE_H
 
+#include <asm/boot.h>
 #include <asm/pgtable-hwdef.h>
 #include <asm/sparsemem.h>
 
@@ -88,10 +89,11 @@
 
 /* the initial ID map may need two extra pages if it needs to be extended */
 #if VA_BITS < 48
-#define INIT_IDMAP_DIR_SIZE	(INIT_DIR_SIZE + (2 * PAGE_SIZE))
+#define INIT_IDMAP_DIR_SIZE	((INIT_IDMAP_DIR_PAGES + 2) * PAGE_SIZE)
 #else
-#define INIT_IDMAP_DIR_SIZE	INIT_DIR_SIZE
+#define INIT_IDMAP_DIR_SIZE	(INIT_IDMAP_DIR_PAGES * PAGE_SIZE)
 #endif
+#define INIT_IDMAP_DIR_PAGES	EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE)
 
 /* Initial memory map size */
 #if ARM64_KERNEL_USES_PMD_MAPS
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 2c491cac4871..bec3805c941c 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -82,6 +82,7 @@
  * primary lowlevel boot path:
  *
  *  Register   Scope                      Purpose
+ *  x19        create_idmap() .. start_kernel()         ID map VA of the DT blob
  *  x21        primary_entry() .. start_kernel()        FDT pointer passed at boot in x0
  *  x23        primary_entry() .. start_kernel()        physical misalignment/KASLR offset
  *  x28        clear_page_tables()                      callee preserved temp register
@@ -346,7 +347,7 @@ SYM_FUNC_START_LOCAL(create_idmap)
 #endif
 	adrp	x0, init_idmap_pg_dir
 	adrp	x3, _text
-	adrp	x6, _end
+	adrp	x6, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE
 	mov	x7, SWAPPER_RX_MMUFLAGS
 
 	map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT
@@ -360,6 +361,17 @@ SYM_FUNC_START_LOCAL(create_idmap)
 	mov	x6, #SWAPPER_BLOCK_SHIFT
 	bl	remap_region
 
+	/* Remap the FDT read-only after the kernel image */
+	adrp	x1, _text
+	adrp	x19, _end + SWAPPER_BLOCK_SIZE
+	bic	x2, x19, #SWAPPER_BLOCK_SIZE - 1
+	bfi	x19, x21, #0, #SWAPPER_BLOCK_SHIFT	// remapped FDT address
+	add	x3, x2, #MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE
+	bic	x4, x21, #SWAPPER_BLOCK_SIZE - 1
+	mov	x5, SWAPPER_RX_MMUFLAGS
+	mov	x6, #SWAPPER_BLOCK_SHIFT
+	bl	remap_region
+
 	/*
 	 * Since the page tables have been populated with non-cacheable
 	 * accesses (MMU disabled), invalidate those tables again to

From patchwork Mon Apr 11 09:48:09 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 561277
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 15/30] arm64: idreg-override: use early FDT mapping in ID map
Date: Mon, 11 Apr 2022 11:48:09 +0200
Message-Id: <20220411094824.4176877-16-ardb@kernel.org>
Instead of calling into the kernel to map the FDT into the kernel page
tables before even calling start_kernel(), let's switch to the initial,
temporary mapping of the device tree that has been added to the ID map.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S           |  3 +--
 arch/arm64/kernel/idreg-override.c | 17 ++++++-----------
 2 files changed, 7 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index bec3805c941c..eae147fabbee 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -468,8 +468,7 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 	bl	kasan_early_init
 #endif
-	mov	x0, x21			// pass FDT address in x0
-	bl	early_fdt_map		// Try mapping the FDT early
+	mov	x0, x19			// pass FDT address in x0
 	bl	init_feature_override	// Parse cpu feature overrides
 #ifdef CONFIG_RANDOMIZE_BASE
 	tst	x23, ~(MIN_KIMG_ALIGN - 1)	// already running randomized?
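The C-side change in idreg-override.c threads the FDT pointer in from the caller instead of fetching it through a global getter. The shape of that refactor can be sketched outside the kernel with a toy stand-in (struct fake_fdt and its single field are invented for illustration; the real code locates /chosen/bootargs in the blob with libfdt's fdt_path_offset()/fdt_getprop()):

```c
#include <stddef.h>
#include <string.h>

/* Toy stand-in for the flattened device tree; the real code walks the
 * blob with libfdt rather than reading a struct field. */
struct fake_fdt {
	const char *bootargs;	/* NULL when /chosen has no bootargs */
};

/* The parser now receives the blob it should use; previously it called
 * a global getter (get_early_fdt_ptr()) whose mapping had to be set up
 * first. Passing the pointer makes the dependency explicit. */
const char *get_bootargs_cmdline(const struct fake_fdt *fdt)
{
	const char *prop;

	if (!fdt)
		return NULL;
	prop = fdt->bootargs;
	if (!prop)
		return NULL;
	/* Mirror the kernel's check: an empty bootargs counts as absent. */
	return strlen(prop) ? prop : NULL;
}
```

The same pattern then propagates outward: parse_cmdline() and init_feature_override() each gain a `const void *fdt` parameter, and the assembly caller supplies the ID-map address of the blob in x0.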
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c index 8a2ceb591686..f92836e196e5 100644 --- a/arch/arm64/kernel/idreg-override.c +++ b/arch/arm64/kernel/idreg-override.c @@ -201,16 +201,11 @@ static __init void __parse_cmdline(const char *cmdline, bool parse_aliases) } while (1); } -static __init const u8 *get_bootargs_cmdline(void) +static __init const u8 *get_bootargs_cmdline(const void *fdt) { const u8 *prop; - void *fdt; int node; - fdt = get_early_fdt_ptr(); - if (!fdt) - return NULL; - node = fdt_path_offset(fdt, "/chosen"); if (node < 0) return NULL; @@ -222,9 +217,9 @@ static __init const u8 *get_bootargs_cmdline(void) return strlen(prop) ? prop : NULL; } -static __init void parse_cmdline(void) +static __init void parse_cmdline(const void *fdt) { - const u8 *prop = get_bootargs_cmdline(); + const u8 *prop = get_bootargs_cmdline(fdt); if (IS_ENABLED(CONFIG_CMDLINE_FORCE) || !prop) __parse_cmdline(CONFIG_CMDLINE, true); @@ -234,9 +229,9 @@ static __init void parse_cmdline(void) } /* Keep checkers quiet */ -void init_feature_override(void); +void init_feature_override(const void *fdt); -asmlinkage void __init init_feature_override(void) +asmlinkage void __init init_feature_override(const void *fdt) { int i; @@ -247,7 +242,7 @@ asmlinkage void __init init_feature_override(void) } } - parse_cmdline(); + parse_cmdline(fdt); for (i = 0; i < ARRAY_SIZE(regs); i++) { if (regs[i]->override) From patchwork Mon Apr 11 09:48:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 559784 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6A113C4332F for ; Mon, 11 Apr 2022 09:50:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand 
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 16/30] arm64: head: factor out TTBR1 assignment into a macro
Date: Mon, 11 Apr 2022 11:48:10 +0200
Message-Id: <20220411094824.4176877-17-ardb@kernel.org>
Create a macro load_ttbr1 to avoid having to repeat the same instruction
sequence 3 times in a subsequent patch. No functional change intended.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index eae147fabbee..e52429f9a135 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -697,6 +697,13 @@ SYM_FUNC_END(__secondary_too_slow)
 	dc	ivac, \tmp1	// Invalidate potentially stale cache line
 	.endm

+	.macro load_ttbr1, reg, tmp
+	phys_to_ttbr \reg, \reg
+	offset_ttbr1 \reg, \tmp
+	msr	ttbr1_el1, \reg
+	isb
+	.endm
+
 /*
 * Enable the MMU.
* @@ -718,12 +725,9 @@ SYM_FUNC_START(__enable_mmu) cmp x3, #ID_AA64MMFR0_TGRAN_SUPPORTED_MAX b.gt __no_granule_support update_early_cpu_boot_status 0, x3, x4 - phys_to_ttbr x1, x1 phys_to_ttbr x2, x2 msr ttbr0_el1, x2 // load TTBR0 - offset_ttbr1 x1, x3 - msr ttbr1_el1, x1 // load TTBR1 - isb + load_ttbr1 x1, x3 set_sctlr_el1 x0 From patchwork Mon Apr 11 09:48:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 561276 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 615C6C433F5 for ; Mon, 11 Apr 2022 09:50:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344832AbiDKJwR (ORCPT ); Mon, 11 Apr 2022 05:52:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39006 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344707AbiDKJwC (ORCPT ); Mon, 11 Apr 2022 05:52:02 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A7F41419AA for ; Mon, 11 Apr 2022 02:49:24 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 0F9016115F for ; Mon, 11 Apr 2022 09:49:24 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7D042C385A4; Mon, 11 Apr 2022 09:49:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1649670563; bh=NkID3D8eKoGtatSQ1Pem1naBXL+jY6mndM5eYFvzQoI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 17/30] arm64: head: populate kernel page tables with MMU and caches on
Date: Mon, 11 Apr 2022 11:48:11 +0200
Message-Id: <20220411094824.4176877-18-ardb@kernel.org>

Now that we can access the entire kernel image via the ID map, we can
execute the page table population code with the MMU and caches enabled.
The only thing we need to ensure is that translations via TTBR1 remain disabled while we are updating the page tables the second time around, in case KASLR wants them to be randomized. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/head.S | 61 +++++--------------- 1 file changed, 15 insertions(+), 46 deletions(-) diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index e52429f9a135..f9f4af64d1fc 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -85,8 +85,6 @@ * x19 create_idmap() .. __ start_kernel() ID map VA of the DT blob * x21 primary_entry() .. start_kernel() FDT pointer passed at boot in x0 * x23 primary_entry() .. start_kernel() physical misalignment/KASLR offset - * x28 clear_page_tables() callee preserved temp register - * x19/x20 __primary_switch() callee preserved temp registers * x24 __primary_switch() .. relocate_kernel() current RELR displacement * x28 create_idmap() callee preserved temp register */ @@ -96,9 +94,7 @@ SYM_CODE_START(primary_entry) adrp x23, __PHYS_OFFSET and x23, x23, MIN_KIMG_ALIGN - 1 // KASLR offset, defaults to 0 bl set_cpu_boot_mode_flag - bl clear_page_tables bl create_idmap - bl create_kernel_mapping /* * The following calls CPU setup code, see arch/arm64/mm/proc.S for @@ -128,32 +124,14 @@ SYM_CODE_START_LOCAL(preserve_boot_args) SYM_CODE_END(preserve_boot_args) SYM_FUNC_START_LOCAL(clear_page_tables) - mov x28, lr - - /* - * Invalidate the init page tables to avoid potential dirty cache lines - * being evicted. Other page tables are allocated in rodata as part of - * the kernel image, and thus are clean to the PoC per the boot - * protocol. - */ - adrp x0, init_pg_dir - adrp x1, init_pg_end - bl dcache_inval_poc - /* * Clear the init page tables. 
*/ adrp x0, init_pg_dir adrp x1, init_pg_end - sub x1, x1, x0 -1: stp xzr, xzr, [x0], #16 - stp xzr, xzr, [x0], #16 - stp xzr, xzr, [x0], #16 - stp xzr, xzr, [x0], #16 - subs x1, x1, #64 - b.ne 1b - - ret x28 + sub x2, x1, x0 + mov x1, xzr + b __pi_memset // tail call SYM_FUNC_END(clear_page_tables) /* @@ -397,16 +375,7 @@ SYM_FUNC_START_LOCAL(create_kernel_mapping) map_memory x0, x1, x5, x6, x7, x3, (VA_BITS - PGDIR_SHIFT), x10, x11, x12, x13, x14 - /* - * Since the page tables have been populated with non-cacheable - * accesses (MMU disabled), invalidate those tables again to - * remove any speculatively loaded cache lines. - */ - dmb sy - - adrp x0, init_pg_dir - adrp x1, init_pg_end - b dcache_inval_poc // tail call + ret SYM_FUNC_END(create_kernel_mapping) /* @@ -866,14 +835,15 @@ SYM_FUNC_END(__relocate_kernel) #endif SYM_FUNC_START_LOCAL(__primary_switch) -#ifdef CONFIG_RANDOMIZE_BASE - mov x19, x0 // preserve new SCTLR_EL1 value - mrs x20, sctlr_el1 // preserve old SCTLR_EL1 value -#endif - - adrp x1, init_pg_dir + adrp x1, reserved_pg_dir adrp x2, init_idmap_pg_dir bl __enable_mmu + + bl clear_page_tables + bl create_kernel_mapping + + adrp x1, init_pg_dir + load_ttbr1 x1, x2 #ifdef CONFIG_RELOCATABLE #ifdef CONFIG_RELR mov x24, #0 // no RELR displacement yet @@ -889,9 +859,8 @@ SYM_FUNC_START_LOCAL(__primary_switch) * to take into account by discarding the current kernel mapping and * creating a new one. 
*/ - pre_disable_mmu_workaround - msr sctlr_el1, x20 // disable the MMU - isb + adrp x1, reserved_pg_dir // Disable translations via TTBR1 + load_ttbr1 x1, x2 bl clear_page_tables bl create_kernel_mapping // Recreate kernel mapping @@ -899,8 +868,8 @@ SYM_FUNC_START_LOCAL(__primary_switch) dsb nsh isb - set_sctlr_el1 x19 // re-enable the MMU - + adrp x1, init_pg_dir // Re-enable translations via TTBR1 + load_ttbr1 x1, x2 bl __relocate_kernel #endif #endif From patchwork Mon Apr 11 09:48:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 559783 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 96AF6C433EF for ; Mon, 11 Apr 2022 09:50:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344707AbiDKJwT (ORCPT ); Mon, 11 Apr 2022 05:52:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39148 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344706AbiDKJwD (ORCPT ); Mon, 11 Apr 2022 05:52:03 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 98D25CD1 for ; Mon, 11 Apr 2022 02:49:28 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 2FA36B811AB for ; Mon, 11 Apr 2022 09:49:27 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DC0B4C385A5; Mon, 11 Apr 2022 09:49:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1649670565; 
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 18/30] arm64: head: record CPU boot mode after enabling the MMU
Date: Mon, 11 Apr 2022 11:48:12 +0200
Message-Id: <20220411094824.4176877-19-ardb@kernel.org>

In order to avoid having to touch memory with the
MMU and caches disabled, and therefore having to invalidate it from the caches explicitly, just defer storing the value until after the MMU has been turned on, unless we are giving up with an error. While at it, move the associated variable definitions into C code. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/head.S | 46 +++++--------------- arch/arm64/mm/mmu.c | 8 ++++ 2 files changed, 19 insertions(+), 35 deletions(-) diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index f9f4af64d1fc..7744d5af3fa6 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -81,8 +81,9 @@ * The following callee saved general purpose registers are used on the * primary lowlevel boot path: * - * Register Scope Purpose + * Register Scope Purpose * x19 create_idmap() .. __ start_kernel() ID map VA of the DT blob + * x20 primary_entry() .. __primary_switch() CPU boot mode * x21 primary_entry() .. start_kernel() FDT pointer passed at boot in x0 * x23 primary_entry() .. start_kernel() physical misalignment/KASLR offset * x24 __primary_switch() .. relocate_kernel() current RELR displacement @@ -91,9 +92,9 @@ SYM_CODE_START(primary_entry) bl preserve_boot_args bl init_kernel_el // w0=cpu_boot_mode + mov x20, x0 adrp x23, __PHYS_OFFSET and x23, x23, MIN_KIMG_ALIGN - 1 // KASLR offset, defaults to 0 - bl set_cpu_boot_mode_flag bl create_idmap /* @@ -426,6 +427,9 @@ SYM_FUNC_START_LOCAL(__primary_switched) sub x4, x4, x0 // the kernel virtual and str_l x4, kimage_voffset, x5 // physical mappings + mov x0, x20 + bl set_cpu_boot_mode_flag + // Clear BSS adr_l x0, __bss_start mov x1, xzr @@ -548,46 +552,16 @@ SYM_FUNC_START_LOCAL(set_cpu_boot_mode_flag) b.ne 1f add x1, x1, #4 1: str w0, [x1] // Save CPU boot mode - dmb sy - dc ivac, x1 // Invalidate potentially stale cache line ret SYM_FUNC_END(set_cpu_boot_mode_flag) -/* - * These values are written with the MMU off, but read with the MMU on. 
- * Writers will invalidate the corresponding address, discarding up to a - * 'Cache Writeback Granule' (CWG) worth of data. The linker script ensures - * sufficient alignment that the CWG doesn't overlap another section. - */ - .pushsection ".mmuoff.data.write", "aw" -/* - * We need to find out the CPU boot mode long after boot, so we need to - * store it in a writable variable. - * - * This is not in .bss, because we set it sufficiently early that the boot-time - * zeroing of .bss would clobber it. - */ -SYM_DATA_START(__boot_cpu_mode) - .long BOOT_CPU_MODE_EL2 - .long BOOT_CPU_MODE_EL1 -SYM_DATA_END(__boot_cpu_mode) -/* - * The booting CPU updates the failed status @__early_cpu_boot_status, - * with MMU turned off. - */ -SYM_DATA_START(__early_cpu_boot_status) - .quad 0 -SYM_DATA_END(__early_cpu_boot_status) - - .popsection - /* * This provides a "holding pen" for platforms to hold all secondary * cores are held until we're ready for them to initialise. */ SYM_FUNC_START(secondary_holding_pen) bl init_kernel_el // w0=cpu_boot_mode - bl set_cpu_boot_mode_flag + mov x20, x0 mrs x0, mpidr_el1 mov_q x1, MPIDR_HWID_BITMASK and x0, x0, x1 @@ -605,7 +579,7 @@ SYM_FUNC_END(secondary_holding_pen) */ SYM_FUNC_START(secondary_entry) bl init_kernel_el // w0=cpu_boot_mode - bl set_cpu_boot_mode_flag + mov x20, x0 b secondary_startup SYM_FUNC_END(secondary_entry) @@ -624,6 +598,9 @@ SYM_FUNC_START_LOCAL(secondary_startup) SYM_FUNC_END(secondary_startup) SYM_FUNC_START_LOCAL(__secondary_switched) + mov x0, x20 + bl set_cpu_boot_mode_flag + str_l xzr, __early_cpu_boot_status, x3 adr_l x5, vectors msr vbar_el1, x5 isb @@ -693,7 +670,6 @@ SYM_FUNC_START(__enable_mmu) b.lt __no_granule_support cmp x3, #ID_AA64MMFR0_TGRAN_SUPPORTED_MAX b.gt __no_granule_support - update_early_cpu_boot_status 0, x3, x4 phys_to_ttbr x2, x2 msr ttbr0_el1, x2 // load TTBR0 load_ttbr1 x1, x3 diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 07219afe2723..e7145f0281be 100644 --- 
a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -56,6 +56,14 @@ EXPORT_SYMBOL(kimage_vaddr); u64 kimage_voffset __ro_after_init; EXPORT_SYMBOL(kimage_voffset); +u32 __boot_cpu_mode[] = { BOOT_CPU_MODE_EL2, BOOT_CPU_MODE_EL1 }; + +/* + * The booting CPU updates the failed status @__early_cpu_boot_status, + * with MMU turned off. + */ +long __section(".mmuoff.data.write") __early_cpu_boot_status; + /* * Empty_zero_page is a special page that is used for zero-initialized data * and COW. From patchwork Mon Apr 11 09:48:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 559782 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 29192C433F5 for ; Mon, 11 Apr 2022 09:50:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344771AbiDKJwX (ORCPT ); Mon, 11 Apr 2022 05:52:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39070 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344776AbiDKJwE (ORCPT ); Mon, 11 Apr 2022 05:52:04 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 95D3FEA7 for ; Mon, 11 Apr 2022 02:49:29 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id C49FA611B3 for ; Mon, 11 Apr 2022 09:49:28 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 46926C385AF; Mon, 11 Apr 2022 09:49:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1649670568; 
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 19/30] arm64: kaslr: deal with init called with VA randomization enabled
Date: Mon, 11 Apr 2022 11:48:13 +0200
Message-Id: <20220411094824.4176877-20-ardb@kernel.org>

We will be entering kaslr_init() fully
randomized, and so any addresses taken by this code already take the randomization into account. This means that taking the address of _end or _etext and adding offset to it produces the wrong value, given that _end and _etext references will have been fixed up already, and therefore already incorporate offset. So instead of referring to these symbols directly, use their offsets relative to _text, which should produce values that depend on the size and layout of the Image only. Then, add KIMAGE_VADDR to obtain the unrandomized values. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/kaslr.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c index d5542666182f..3b12715642ce 100644 --- a/arch/arm64/kernel/kaslr.c +++ b/arch/arm64/kernel/kaslr.c @@ -141,6 +141,8 @@ u64 __init kaslr_early_init(void) return offset % SZ_2G; if (IS_ENABLED(CONFIG_RANDOMIZE_MODULE_REGION_FULL)) { + u64 end = (u64)_end - (u64)_text + KIMAGE_VADDR; + /* * Randomize the module region over a 2 GB window covering the * kernel. This reduces the risk of modules leaking information @@ -150,9 +152,11 @@ u64 __init kaslr_early_init(void) * resolved normally.) */ module_range = SZ_2G - (u64)(_end - _stext); - module_alloc_base = max((u64)_end + offset - SZ_2G, + module_alloc_base = max(end + offset - SZ_2G, (u64)MODULES_VADDR); } else { + u64 end = (u64)_etext - (u64)_text + KIMAGE_VADDR; + /* * Randomize the module region by setting module_alloc_base to * a PAGE_SIZE multiple in the range [_etext - MODULES_VSIZE, @@ -163,7 +167,7 @@ u64 __init kaslr_early_init(void) * when ARM64_MODULE_PLTS is enabled. 
 		 */
 		module_range = MODULES_VSIZE - (u64)(_etext - _stext);
-		module_alloc_base = (u64)_etext + offset - MODULES_VSIZE;
+		module_alloc_base = end + offset - MODULES_VSIZE;
 	}
 
 	/* use the lower 21 bits to randomize the base of the module region */

From patchwork Mon Apr 11 09:48:14 2022
From: Ard Biesheuvel
Subject: [PATCH v3 20/30] arm64: head: relocate kernel only a single time if KASLR is enabled
Date: Mon, 11 Apr 2022 11:48:14 +0200
Message-Id: <20220411094824.4176877-21-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>

Currently, when KASLR is in effect, we set up the kernel virtual address space twice: the first time, the KASLR seed is looked up in the device tree, and the kernel virtual mapping is torn down and recreated again, after which the relocations are applied a second time.
The latter step means that statically initialized global pointer variables will be reset to their initial values, and to ensure that BSS variables are not set to values based on the initial translation, they are cleared again as well.

All of this is needed because we need the command line (taken from the DT) to tell us whether or not to randomize the virtual address space before entering the kernel proper. However, this code has expanded little by little, and now creates global state unrelated to the virtual randomization of the kernel before the mapping is torn down and set up again, and the BSS cleared for a second time. This has created some issues in the past, and it would be better to avoid this little dance if possible.

So instead, let's use the temporary mapping of the device tree, and execute the bare minimum of code to decide whether or not KASLR should be enabled, and what the seed is. Only then do we create the virtual kernel mapping, clear BSS, etc., and proceed as normal. This avoids the issues around inconsistent global state due to BSS being cleared twice, and is generally more maintainable, as it permits us to defer all the remaining DT parsing and KASLR initialization to a later time.

This means the relocation fixup code runs only a single time as well, allowing us to simplify the RELR handling code too, which is not idempotent and was therefore required to keep track of the offset that was applied the first time around.

Note that this means we have to clone a pair of FDT library objects, so that we can control how they are built - we need the stack protector and other instrumentation disabled so that the code can tolerate being called this early.

Note that only the kernel page tables and the temporary stack are mapped read-write at this point, which ensures that the early code does not modify any global state inadvertently.
Signed-off-by: Ard Biesheuvel --- arch/arm64/include/asm/memory.h | 2 + arch/arm64/kernel/Makefile | 2 +- arch/arm64/kernel/head.S | 68 +++-------- arch/arm64/kernel/image-vars.h | 4 + arch/arm64/kernel/kaslr.c | 76 ++---------- arch/arm64/kernel/pi/Makefile | 33 +++++ arch/arm64/kernel/pi/kaslr_early.c | 128 ++++++++++++++++++++ arch/arm64/kernel/setup.c | 12 +- 8 files changed, 203 insertions(+), 122 deletions(-) diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index c751cd9b94f8..c17635f1538f 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -195,6 +195,8 @@ static inline unsigned long kaslr_offset(void) return kimage_vaddr - KIMAGE_VADDR; } +void kaslr_init(void *fdt); + /* * Allow all memory at the discovery stage. We will clip it later. */ diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index 986837d7ec82..45f7a0e2d35e 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -59,7 +59,7 @@ obj-$(CONFIG_ACPI) += acpi.o obj-$(CONFIG_ACPI_NUMA) += acpi_numa.o obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL) += acpi_parking_protocol.o obj-$(CONFIG_PARAVIRT) += paravirt.o -obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o +obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o pi/ obj-$(CONFIG_HIBERNATION) += hibernate.o hibernate-asm.o obj-$(CONFIG_ELF_CORE) += elfcore.o obj-$(CONFIG_KEXEC_CORE) += machine_kexec.o relocate_kernel.o \ diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 7744d5af3fa6..87498e414725 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -85,16 +85,13 @@ * x19 create_idmap() .. __ start_kernel() ID map VA of the DT blob * x20 primary_entry() .. __primary_switch() CPU boot mode * x21 primary_entry() .. start_kernel() FDT pointer passed at boot in x0 - * x23 primary_entry() .. start_kernel() physical misalignment/KASLR offset - * x24 __primary_switch() .. relocate_kernel() current RELR displacement + * x23 __primary_switch() .. 
relocate_kernel() physical misalignment/KASLR offset * x28 create_idmap() callee preserved temp register */ SYM_CODE_START(primary_entry) bl preserve_boot_args bl init_kernel_el // w0=cpu_boot_mode mov x20, x0 - adrp x23, __PHYS_OFFSET - and x23, x23, MIN_KIMG_ALIGN - 1 // KASLR offset, defaults to 0 bl create_idmap /* @@ -443,16 +440,6 @@ SYM_FUNC_START_LOCAL(__primary_switched) #endif mov x0, x19 // pass FDT address in x0 bl init_feature_override // Parse cpu feature overrides -#ifdef CONFIG_RANDOMIZE_BASE - tst x23, ~(MIN_KIMG_ALIGN - 1) // already running randomized? - b.ne 0f - bl kaslr_early_init // parse FDT for KASLR options - cbz x0, 0f // KASLR disabled? just proceed - orr x23, x23, x0 // record KASLR offset - ldp x29, x30, [sp], #16 // we must enable KASLR, return - ret // to __primary_switch() -0: -#endif bl switch_to_vhe // Prefer VHE if possible ldp x29, x30, [sp], #16 bl start_kernel @@ -761,27 +748,17 @@ SYM_FUNC_START_LOCAL(__relocate_kernel) * entry in x9, the address being relocated by the current address or * bitmap entry in x13 and the address being relocated by the current * bit in x14. - * - * Because addends are stored in place in the binary, RELR relocations - * cannot be applied idempotently. We use x24 to keep track of the - * currently applied displacement so that we can correctly relocate if - * __relocate_kernel is called twice with non-zero displacements (i.e. - * if there is both a physical misalignment and a KASLR displacement). 
*/ adr_l x9, __relr_start adr_l x10, __relr_end - sub x15, x23, x24 // delta from previous offset - cbz x15, 7f // nothing to do if unchanged - mov x24, x23 // save new offset - 2: cmp x9, x10 b.hs 7f ldr x11, [x9], #8 tbnz x11, #0, 3f // branch to handle bitmaps add x13, x11, x23 ldr x12, [x13] // relocate address entry - add x12, x12, x15 + add x12, x12, x23 str x12, [x13], #8 // adjust to start of bitmap b 2b @@ -790,7 +767,7 @@ SYM_FUNC_START_LOCAL(__relocate_kernel) cbz x11, 6f tbz x11, #0, 5f // skip bit if not set ldr x12, [x14] // relocate bit - add x12, x12, x15 + add x12, x12, x23 str x12, [x14] 5: add x14, x14, #8 // move to next bit's address @@ -814,40 +791,25 @@ SYM_FUNC_START_LOCAL(__primary_switch) adrp x1, reserved_pg_dir adrp x2, init_idmap_pg_dir bl __enable_mmu - +#ifdef CONFIG_RELOCATABLE + adrp x23, __PHYS_OFFSET + and x23, x23, MIN_KIMG_ALIGN - 1 +#ifdef CONFIG_RANDOMIZE_BASE + mov x0, x19 + adrp x1, init_pg_end + mov sp, x1 + mov x29, xzr + bl __pi_kaslr_early_init + orr x23, x23, x0 // record KASLR offset +#endif +#endif bl clear_page_tables bl create_kernel_mapping adrp x1, init_pg_dir load_ttbr1 x1, x2 #ifdef CONFIG_RELOCATABLE -#ifdef CONFIG_RELR - mov x24, #0 // no RELR displacement yet -#endif bl __relocate_kernel -#ifdef CONFIG_RANDOMIZE_BASE - ldr x8, =__primary_switched - adrp x0, __PHYS_OFFSET - blr x8 - - /* - * If we return here, we have a KASLR displacement in x23 which we need - * to take into account by discarding the current kernel mapping and - * creating a new one. 
- */ - adrp x1, reserved_pg_dir // Disable translations via TTBR1 - load_ttbr1 x1, x2 - bl clear_page_tables - bl create_kernel_mapping // Recreate kernel mapping - - tlbi vmalle1 // Remove any stale TLB entries - dsb nsh - isb - - adrp x1, init_pg_dir // Re-enable translations via TTBR1 - load_ttbr1 x1, x2 - bl __relocate_kernel -#endif #endif ldr x8, =__primary_switched adrp x0, __PHYS_OFFSET diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 241c86b67d01..0c381a405bf0 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -41,6 +41,10 @@ __efistub_dcache_clean_poc = __pi_dcache_clean_poc; __efistub___memcpy = __pi_memcpy; __efistub___memmove = __pi_memmove; __efistub___memset = __pi_memset; + +__pi___memcpy = __pi_memcpy; +__pi___memmove = __pi_memmove; +__pi___memset = __pi_memset; #endif __efistub__text = _text; diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c index 3b12715642ce..16dafd66be6d 100644 --- a/arch/arm64/kernel/kaslr.c +++ b/arch/arm64/kernel/kaslr.c @@ -24,7 +24,6 @@ enum kaslr_status { KASLR_ENABLED, KASLR_DISABLED_CMDLINE, KASLR_DISABLED_NO_SEED, - KASLR_DISABLED_FDT_REMAP, }; static enum kaslr_status __initdata kaslr_status; @@ -52,18 +51,9 @@ static __init u64 get_kaslr_seed(void *fdt) struct arm64_ftr_override kaslr_feature_override __initdata; -/* - * This routine will be executed with the kernel mapped at its default virtual - * address, and if it returns successfully, the kernel will be remapped, and - * start_kernel() will be executed from a randomized virtual offset. The - * relocation will result in all absolute references (e.g., static variables - * containing function pointers) to be reinitialized, and zero-initialized - * .bss variables will be reset to 0. 
- */ -u64 __init kaslr_early_init(void) +void __init kaslr_init(void *fdt) { - void *fdt; - u64 seed, offset, mask, module_range; + u64 seed, module_range; unsigned long raw; /* @@ -72,17 +62,6 @@ u64 __init kaslr_early_init(void) */ module_alloc_base = (u64)_etext - MODULES_VSIZE; - /* - * Try to map the FDT early. If this fails, we simply bail, - * and proceed with KASLR disabled. We will make another - * attempt at mapping the FDT in setup_machine() - */ - fdt = get_early_fdt_ptr(); - if (!fdt) { - kaslr_status = KASLR_DISABLED_FDT_REMAP; - return 0; - } - /* * Retrieve (and wipe) the seed from the FDT */ @@ -94,7 +73,7 @@ u64 __init kaslr_early_init(void) */ if (kaslr_feature_override.val & kaslr_feature_override.mask & 0xf) { kaslr_status = KASLR_DISABLED_CMDLINE; - return 0; + return; } /* @@ -105,44 +84,15 @@ u64 __init kaslr_early_init(void) if (arch_get_random_seed_long_early(&raw)) seed ^= raw; - if (!seed) { + if (!seed || !kaslr_offset()) { kaslr_status = KASLR_DISABLED_NO_SEED; - return 0; + return; } - /* - * OK, so we are proceeding with KASLR enabled. Calculate a suitable - * kernel image offset from the seed. Let's place the kernel in the - * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of - * the lower and upper quarters to avoid colliding with other - * allocations. 
- * Even if we could randomize at page granularity for 16k and 64k pages, - * let's always round to 2 MB so we don't interfere with the ability to - * map using contiguous PTEs - */ - mask = ((1UL << (VA_BITS_MIN - 2)) - 1) & ~(SZ_2M - 1); - offset = BIT(VA_BITS_MIN - 3) + (seed & mask); - /* use the top 16 bits to randomize the linear region */ memstart_offset_seed = seed >> 48; - if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) && - (IS_ENABLED(CONFIG_KASAN_GENERIC) || - IS_ENABLED(CONFIG_KASAN_SW_TAGS))) - /* - * KASAN without KASAN_VMALLOC does not expect the module region - * to intersect the vmalloc region, since shadow memory is - * allocated for each module at load time, whereas the vmalloc - * region is shadowed by KASAN zero pages. So keep modules - * out of the vmalloc region if KASAN is enabled without - * KASAN_VMALLOC, and put the kernel well within 4 GB of the - * module region. - */ - return offset % SZ_2G; - if (IS_ENABLED(CONFIG_RANDOMIZE_MODULE_REGION_FULL)) { - u64 end = (u64)_end - (u64)_text + KIMAGE_VADDR; - /* * Randomize the module region over a 2 GB window covering the * kernel. This reduces the risk of modules leaking information @@ -152,11 +102,8 @@ u64 __init kaslr_early_init(void) * resolved normally.) */ module_range = SZ_2G - (u64)(_end - _stext); - module_alloc_base = max(end + offset - SZ_2G, - (u64)MODULES_VADDR); + module_alloc_base = max((u64)_end - SZ_2G, (u64)MODULES_VADDR); } else { - u64 end = (u64)_etext - (u64)_text + KIMAGE_VADDR; - /* * Randomize the module region by setting module_alloc_base to * a PAGE_SIZE multiple in the range [_etext - MODULES_VSIZE, @@ -167,17 +114,15 @@ u64 __init kaslr_early_init(void) * when ARM64_MODULE_PLTS is enabled. 
*/ module_range = MODULES_VSIZE - (u64)(_etext - _stext); - module_alloc_base = end + offset - MODULES_VSIZE; + module_alloc_base = (u64)_etext - MODULES_VSIZE; } /* use the lower 21 bits to randomize the base of the module region */ module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21; module_alloc_base &= PAGE_MASK; - - return offset; } -static int __init kaslr_init(void) +static int __init kaslr_report_status(void) { switch (kaslr_status) { case KASLR_ENABLED: @@ -189,11 +134,8 @@ static int __init kaslr_init(void) case KASLR_DISABLED_NO_SEED: pr_warn("KASLR disabled due to lack of seed\n"); break; - case KASLR_DISABLED_FDT_REMAP: - pr_warn("KASLR disabled due to FDT remapping failure\n"); - break; } return 0; } -core_initcall(kaslr_init) +core_initcall(kaslr_report_status) diff --git a/arch/arm64/kernel/pi/Makefile b/arch/arm64/kernel/pi/Makefile new file mode 100644 index 000000000000..839291430cb3 --- /dev/null +++ b/arch/arm64/kernel/pi/Makefile @@ -0,0 +1,33 @@ +# SPDX-License-Identifier: GPL-2.0 +# Copyright 2022 Google LLC + +KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) -fpie \ + -Os -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN) \ + $(call cc-option,-mbranch-protection=none) \ + -I$(srctree)/scripts/dtc/libfdt -fno-stack-protector \ + -include $(srctree)/include/linux/hidden.h \ + -D__DISABLE_EXPORTS -ffreestanding -D__NO_FORTIFY \ + $(call cc-option,-fno-addrsig) + +# remove SCS flags from all objects in this directory +KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS)) +# disable LTO +KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_LTO), $(KBUILD_CFLAGS)) + +GCOV_PROFILE := n +KASAN_SANITIZE := n +KCSAN_SANITIZE := n +UBSAN_SANITIZE := n +KCOV_INSTRUMENT := n + +$(obj)/%.pi.o: OBJCOPYFLAGS := --prefix-symbols=__pi_ \ + --remove-section=.note.gnu.property \ + --prefix-alloc-sections=.init +$(obj)/%.pi.o: $(obj)/%.o FORCE + $(call if_changed,objcopy) + +$(obj)/lib-%.o: $(srctree)/lib/%.c FORCE + $(call 
if_changed_rule,cc_o_c) + +obj-y := kaslr_early.pi.o lib-fdt.pi.o lib-fdt_ro.pi.o +extra-y := $(patsubst %.pi.o,%.o,$(obj-y)) diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c new file mode 100644 index 000000000000..ef2f1f2fe690 --- /dev/null +++ b/arch/arm64/kernel/pi/kaslr_early.c @@ -0,0 +1,128 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright 2022 Google LLC +// Author: Ard Biesheuvel + +// NOTE: code in this file runs *very* early, and is not permitted to use +// global variables or anything that relies on absolute addressing. + +#include +#include +#include +#include +#include +#include + +#include +#include + +/* taken from lib/string.c */ +static char *__strstr(const char *s1, const char *s2) +{ + size_t l1, l2; + + l2 = strlen(s2); + if (!l2) + return (char *)s1; + l1 = strlen(s1); + while (l1 >= l2) { + l1--; + if (!memcmp(s1, s2, l2)) + return (char *)s1; + s1++; + } + return NULL; +} +static bool cmdline_contains_nokaslr(const u8 *cmdline) +{ + const u8 *str; + + str = __strstr(cmdline, "nokaslr"); + return str == cmdline || (str > cmdline && *(str - 1) == ' '); +} + +static bool is_kaslr_disabled_cmdline(const void *fdt) +{ + if (!IS_ENABLED(CONFIG_CMDLINE_FORCE)) { + int node; + const u8 *prop; + + node = fdt_path_offset(fdt, "/chosen"); + if (node < 0) + goto out; + + prop = fdt_getprop(fdt, node, "bootargs", NULL); + if (!prop) + goto out; + + if (cmdline_contains_nokaslr(prop)) + return true; + + if (IS_ENABLED(CONFIG_CMDLINE_EXTEND)) + goto out; + + return false; + } +out: + return cmdline_contains_nokaslr(CONFIG_CMDLINE); +} + +static u64 get_kaslr_seed(const void *fdt) +{ + int node, len; + const fdt64_t *prop; + u64 ret; + + node = fdt_path_offset(fdt, "/chosen"); + if (node < 0) + return 0; + + prop = fdt_getprop(fdt, node, "kaslr-seed", &len); + if (!prop || len != sizeof(u64)) + return 0; + + ret = fdt64_to_cpu(*prop); + return ret; +} + +asmlinkage u64 kaslr_early_init(const void *fdt) +{ + u64 
seed, mask, offset; + + if (is_kaslr_disabled_cmdline(fdt)) + return 0; + + seed = get_kaslr_seed(fdt); + if (!seed && (!__early_cpu_has_rndr() || + !__arm64_rndr((unsigned long *)&seed))) + return 0; + + /* + * OK, so we are proceeding with KASLR enabled. Calculate a suitable + * kernel image offset from the seed. Let's place the kernel in the + * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of + * the lower and upper quarters to avoid colliding with other + * allocations. + * Even if we could randomize at page granularity for 16k and 64k pages, + * let's always round to 2 MB so we don't interfere with the ability to + * map using contiguous PTEs + */ + mask = ((1UL << (VA_BITS_MIN - 2)) - 1) & ~(SZ_2M - 1); + offset = BIT(VA_BITS_MIN - 3) + (seed & mask); + + if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) && + (IS_ENABLED(CONFIG_KASAN_GENERIC) || + IS_ENABLED(CONFIG_KASAN_SW_TAGS))) + /* + * KASAN without KASAN_VMALLOC does not expect the module region + * to intersect the vmalloc region, since shadow memory is + * allocated for each module at load time, whereas the vmalloc + * region is shadowed by KASAN zero pages. So keep modules + * out of the vmalloc region if KASAN is enabled without + * KASAN_VMALLOC, and put the kernel well within 4 GB of the + * module region. + */ + return offset % SZ_2G; + + return offset; +} + diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c index 3505789cf4bd..de546c8d543b 100644 --- a/arch/arm64/kernel/setup.c +++ b/arch/arm64/kernel/setup.c @@ -184,9 +184,19 @@ static void __init setup_machine_fdt(phys_addr_t dt_phys) void *dt_virt = fixmap_remap_fdt(dt_phys, &size, PAGE_KERNEL); const char *name; - if (dt_virt) + if (dt_virt) { memblock_reserve(dt_phys, size); + /* + * kaslr_init() will modify the DT, by wiping the KASLR seed + * before returning it. So we must call it before remapping it + * r/o [below] and before calling early_init_dt_scan(), which + * takes a CRC and verifies it later. 
+		 */
+		if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+			kaslr_init(dt_virt);
+	}
+
 	if (!dt_virt || !early_init_dt_scan(dt_virt)) {
 		pr_crit("\n"
 			"Error: invalid device tree blob at physical address %pa (virtual address 0x%px)\n"

From patchwork Mon Apr 11 09:48:15 2022
From: Ard Biesheuvel
Subject: [PATCH v3 21/30] arm64: head: remap the kernel text/inittext region read-only
Date: Mon, 11 Apr 2022 11:48:15 +0200
Message-Id: <20220411094824.4176877-22-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>

In order to be able to run with WXN from boot (which could potentially be under a hypervisor regime that mandates this), update the temporary kernel page tables with read-only attributes for the text regions before attempting to execute from them.
This is rather straight-forward for 16k and 64k granule configurations, as the split between executable and writable regions is guaranteed to be aligned to the granule used for the early kernel page tables. For 4k, it involves installing a single table entry and populating it accordingly. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/head.S | 71 +++++++++++++++++++- arch/arm64/kernel/vmlinux.lds.S | 2 +- 2 files changed, 69 insertions(+), 4 deletions(-) diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 87498e414725..54886c4b6347 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -86,7 +86,7 @@ * x20 primary_entry() .. __primary_switch() CPU boot mode * x21 primary_entry() .. start_kernel() FDT pointer passed at boot in x0 * x23 __primary_switch() .. relocate_kernel() physical misalignment/KASLR offset - * x28 create_idmap() callee preserved temp register + * x28 create_idmap(), remap_kernel_text() callee preserved temp register */ SYM_CODE_START(primary_entry) bl preserve_boot_args @@ -372,10 +372,62 @@ SYM_FUNC_START_LOCAL(create_kernel_mapping) mov x7, SWAPPER_RW_MMUFLAGS map_memory x0, x1, x5, x6, x7, x3, (VA_BITS - PGDIR_SHIFT), x10, x11, x12, x13, x14 - ret SYM_FUNC_END(create_kernel_mapping) +SYM_FUNC_START_LOCAL(remap_kernel_text) + mov x28, lr + + ldr x1, =_text + mov x2, x1 + ldr x3, =__initdata_begin - 1 + adrp x4, _text + bic x4, x4, #SWAPPER_BLOCK_SIZE - 1 + mov x5, SWAPPER_RX_MMUFLAGS + mov x6, #SWAPPER_BLOCK_SHIFT + bl remap_region + +#if SWAPPER_BLOCK_SHIFT > PAGE_SHIFT + /* + * If the boundary between inittext and initdata happens to be aligned + * sufficiently, we are done here. Otherwise, we have to replace its block + * entry with a table entry, and populate the lower level table accordingly. 
+ */ + ldr x3, =__initdata_begin + tst x3, #SWAPPER_BLOCK_SIZE - 1 + b.eq 0f + + /* First, create a table mapping to replace the block mapping */ + ldr x1, =_text + bic x2, x3, #SWAPPER_BLOCK_SIZE - 1 + adrp x4, init_pg_end - PAGE_SIZE + mov x5, #PMD_TYPE_TABLE + mov x6, #SWAPPER_BLOCK_SHIFT + bl remap_region + + /* Apply executable permissions to the first subregion */ + adrp x0, init_pg_end - PAGE_SIZE + ldr x3, =__initdata_begin - 1 + bic x1, x3, #SWAPPER_BLOCK_SIZE - 1 + mov x2, x1 + adrp x4, __initdata_begin + bic x4, x4, #SWAPPER_BLOCK_SIZE - 1 + mov x5, SWAPPER_RX_MMUFLAGS | PTE_TYPE_PAGE + mov x6, #PAGE_SHIFT + bl remap_region + + /* Apply writable permissions to the second subregion */ + ldr x2, =__initdata_begin + bic x1, x2, #SWAPPER_BLOCK_SIZE - 1 + orr x3, x1, #SWAPPER_BLOCK_SIZE - 1 + adrp x4, __initdata_begin + mov x5, SWAPPER_RW_MMUFLAGS | PTE_TYPE_PAGE + mov x6, #PAGE_SHIFT + bl remap_region +#endif +0: ret x28 +SYM_FUNC_END(remap_kernel_text) + /* * Initialize CPU registers with task-specific and cpu-specific context. * @@ -805,12 +857,25 @@ SYM_FUNC_START_LOCAL(__primary_switch) #endif bl clear_page_tables bl create_kernel_mapping +#ifdef CONFIG_RELOCATABLE + mov x29, x0 // preserve returned page table pointer adrp x1, init_pg_dir load_ttbr1 x1, x2 -#ifdef CONFIG_RELOCATABLE bl __relocate_kernel + adrp x1, reserved_pg_dir + load_ttbr1 x1, x2 + + tlbi vmalle1 + dsb nsh + isb + + mov x0, x29 // pass page table pointer to remap_kernel_text #endif + bl remap_kernel_text + adrp x1, init_pg_dir + load_ttbr1 x1, x2 + ldr x8, =__primary_switched adrp x0, __PHYS_OFFSET br x8 diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S index 21ca72e7ad22..cb4821c411f4 100644 --- a/arch/arm64/kernel/vmlinux.lds.S +++ b/arch/arm64/kernel/vmlinux.lds.S @@ -298,7 +298,7 @@ SECTIONS . = ALIGN(PAGE_SIZE); init_pg_dir = .; - . += INIT_DIR_SIZE; + . += INIT_DIR_SIZE + PAGE_SIZE; init_pg_end = .; . 
= ALIGN(SEGMENT_ALIGN);

From patchwork Mon Apr 11 09:48:16 2022
From: Ard Biesheuvel
Subject: [PATCH v3 22/30] arm64: setup: drop early FDT pointer helpers
Date: Mon, 11 Apr 2022 11:48:16 +0200
Message-Id: <20220411094824.4176877-23-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>

We no longer need to call into the kernel to map the FDT before entering the kernel proper, so let's drop the helpers we added for this.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/setup.h |  3 ---
 arch/arm64/kernel/setup.c      | 15 ---------------
 2 files changed, 18 deletions(-)

diff --git a/arch/arm64/include/asm/setup.h b/arch/arm64/include/asm/setup.h
index 6437df661700..5f147a418281 100644
--- a/arch/arm64/include/asm/setup.h
+++ b/arch/arm64/include/asm/setup.h
@@ -5,9 +5,6 @@

 #include

-void *get_early_fdt_ptr(void);
-void early_fdt_map(u64 dt_phys);
-
 /*
  * These two variables are used in the head.S file.
  */
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index de546c8d543b..2ca8d4a509e5 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -163,21 +163,6 @@ static void __init smp_build_mpidr_hash(void)
 		pr_warn("Large number of MPIDR hash buckets detected\n");
 }

-static void *early_fdt_ptr __initdata;
-
-void __init *get_early_fdt_ptr(void)
-{
-	return early_fdt_ptr;
-}
-
-asmlinkage void __init early_fdt_map(u64 dt_phys)
-{
-	int fdt_size;
-
-	early_fixmap_init();
-	early_fdt_ptr = fixmap_remap_fdt(dt_phys, &fdt_size, PAGE_KERNEL);
-}
-
 static void __init setup_machine_fdt(phys_addr_t dt_phys)
 {
 	int size;

From patchwork Mon Apr 11 09:48:17 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 23/30] arm64: mm: move ro_after_init section into the data segment
Date: Mon, 11 Apr 2022 11:48:17 +0200
Message-Id: <20220411094824.4176877-24-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>
References: <20220411094824.4176877-1-ardb@kernel.org>

Currently, the ro_after_init section sits right in the middle of the
text/rodata/inittext segment, making it difficult to map any of those
non-writable during early boot. So instead, move it to the start of
.data, and update the init sequences so that the section is remapped
read-only once startup completes.

Note that this moves the entire HYP data section into .data as well -
this likely needs to remain as a single block for now, but could perhaps
be split into a .rodata and a .data..ro_after_init section later.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/vmlinux.lds.S | 41 ++++++++++++--------
 arch/arm64/mm/mmu.c             | 29 ++++++++------
 2 files changed, 42 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index cb4821c411f4..5b465295335a 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -59,6 +59,7 @@
 #define RO_EXCEPTION_TABLE_ALIGN	4
 #define RUNTIME_DISCARD_EXIT
+#define RO_AFTER_INIT_DATA

 #include
 #include
@@ -192,22 +193,6 @@ SECTIONS
 	/* everything from this point to __init_begin will be marked RO NX */
 	RO_DATA(PAGE_SIZE)

-	HYPERVISOR_DATA_SECTIONS
-
-	idmap_pg_dir = .;
-	. += PAGE_SIZE;
-
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-	tramp_pg_dir = .;
-	. += PAGE_SIZE;
-#endif
-
-	reserved_pg_dir = .;
-	. += PAGE_SIZE;
-
-	swapper_pg_dir = .;
-	. += PAGE_SIZE;
-
 	. = ALIGN(SEGMENT_ALIGN);
 	__init_begin = .;
 	__inittext_begin = .;
@@ -270,6 +255,30 @@ SECTIONS
 	_data = .;
 	_sdata = .;
+
+	__start_ro_after_init = .;
+	idmap_pg_dir = .;
+	. += PAGE_SIZE;
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	tramp_pg_dir = .;
+	. += PAGE_SIZE;
+#endif
+	reserved_pg_dir = .;
+	. += PAGE_SIZE;
+
+	swapper_pg_dir = .;
+	. += PAGE_SIZE;
+
+	HYPERVISOR_DATA_SECTIONS
+
+	.data.ro_after_init : {
+		*(.data..ro_after_init)
+		JUMP_TABLE_DATA
+		. = ALIGN(SEGMENT_ALIGN);
+		__end_ro_after_init = .;
+	}
+
 	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)

 	/*
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index e7145f0281be..ef1f01da387d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -488,11 +488,17 @@ static void __init __map_memblock(pgd_t *pgdp, phys_addr_t start,
 void __init mark_linear_text_alias_ro(void)
 {
 	/*
-	 * Remove the write permissions from the linear alias of .text/.rodata
+	 * Remove the write permissions from the linear alias of .text/.rodata/ro_after_init
 	 */
 	update_mapping_prot(__pa_symbol(_stext), (unsigned long)lm_alias(_stext),
 			    (unsigned long)__init_begin - (unsigned long)_stext,
 			    PAGE_KERNEL_RO);
+
+	update_mapping_prot(__pa_symbol(__start_ro_after_init),
+			    (unsigned long)lm_alias(__start_ro_after_init),
+			    (unsigned long)__end_ro_after_init -
+			    (unsigned long)__start_ro_after_init,
+			    PAGE_KERNEL_RO);
 }

 static bool crash_mem_map __initdata;
@@ -601,12 +607,10 @@ void mark_rodata_ro(void)
 {
 	unsigned long section_size;

-	/*
-	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
-	 * to cover NOTES and EXCEPTION_TABLE.
-	 */
-	section_size = (unsigned long)__init_begin - (unsigned long)__start_rodata;
-	update_mapping_prot(__pa_symbol(__start_rodata), (unsigned long)__start_rodata,
+	section_size = (unsigned long)__end_ro_after_init -
+		       (unsigned long)__start_ro_after_init;
+	update_mapping_prot(__pa_symbol(__start_ro_after_init),
+			    (unsigned long)__start_ro_after_init,
 			    section_size, PAGE_KERNEL_RO);

 	debug_checkwx();
@@ -730,18 +734,19 @@ static void __init map_kernel(pgd_t *pgdp)
 		text_prot = __pgprot_modify(text_prot, PTE_GP, PTE_GP);

 	/*
-	 * Only rodata will be remapped with different permissions later on,
-	 * all other segments are allowed to use contiguous mappings.
+	 * Only data will be partially remapped with different permissions
+	 * later on, all other segments are allowed to use contiguous mappings.
 	 */
 	map_kernel_segment(pgdp, _stext, _etext, text_prot, &vmlinux_text, 0,
 			   VM_NO_GUARD);
-	map_kernel_segment(pgdp, __start_rodata, __inittext_begin, PAGE_KERNEL,
-			   &vmlinux_rodata, NO_CONT_MAPPINGS, VM_NO_GUARD);
+	map_kernel_segment(pgdp, __start_rodata, __inittext_begin, PAGE_KERNEL_RO,
+			   &vmlinux_rodata, 0, VM_NO_GUARD);
 	map_kernel_segment(pgdp, __inittext_begin, __inittext_end, text_prot,
 			   &vmlinux_inittext, 0, VM_NO_GUARD);
 	map_kernel_segment(pgdp, __initdata_begin, __initdata_end, PAGE_KERNEL,
 			   &vmlinux_initdata, 0, VM_NO_GUARD);
-	map_kernel_segment(pgdp, _data, _end, PAGE_KERNEL, &vmlinux_data, 0, 0);
+	map_kernel_segment(pgdp, _data, _end, PAGE_KERNEL, &vmlinux_data,
+			   NO_CONT_MAPPINGS | NO_BLOCK_MAPPINGS, 0);

 	if (!READ_ONCE(pgd_val(*pgd_offset_pgd(pgdp, FIXADDR_START)))) {
 		/*

From patchwork Mon Apr 11 09:48:18 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 24/30] arm64: mm: add support for WXN memory translation attribute
Date: Mon, 11 Apr 2022 11:48:18 +0200
Message-Id: <20220411094824.4176877-25-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>
References: <20220411094824.4176877-1-ardb@kernel.org>

The AArch64 virtual memory system supports the WXN attribute, which can
be set to make all writable mappings implicitly no-exec. This attribute
applies to both EL0 and EL1 if enabled at EL1, making it problematic in
the general case, as user space may rely on mmap() or mprotect() to
return executable writable memory when asked for it.

However, in specific cases where user space is known not to rely on
this, WXN can now be enabled, ensuring that inadvertent mistakes in
managing memory permissions do not result in real vulnerabilities.

If enabled at compile time, the feature can still be disabled at boot,
by passing arm64.nowxn on the kernel command line.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/Kconfig                   | 11 +++++++
 arch/arm64/include/asm/mmu_context.h | 31 +++++++++++++++++++-
 arch/arm64/kernel/head.S             | 28 +++++++++++++++++-
 arch/arm64/kernel/idreg-override.c   | 16 ++++++++++
 arch/arm64/mm/proc.S                 |  6 ++++
 5 files changed, 90 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 57c4c995965f..c3f94c94d535 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1411,6 +1411,17 @@ config RODATA_FULL_DEFAULT_ENABLED
 	  This requires the linear region to be mapped down to pages,
 	  which may adversely affect performance in some cases.

+config ARM64_WXN
+	bool "Enable WXN attribute so all writable mappings are non-exec"
+	help
+	  Set the WXN bit in the SCTLR system register so that all writable
+	  mappings are treated as if the PXN/UXN bit is set as well.
+	  If this is set to Y, it can still be disabled at runtime by
+	  passing 'arm64.nowxn' on the kernel command line.
+
+	  This should only be set if no software needs to be supported that
+	  relies on being able to execute from writable mappings.
+
 config ARM64_SW_TTBR0_PAN
 	bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
 	help
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index c7ccd82db1d2..01cb78e153c1 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -19,13 +19,42 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include

 extern bool rodata_full;

+static inline int arch_dup_mmap(struct mm_struct *oldmm,
+				struct mm_struct *mm)
+{
+	return 0;
+}
+
+static inline void arch_exit_mmap(struct mm_struct *mm)
+{
+}
+
+static inline void arch_unmap(struct mm_struct *mm,
+			      unsigned long start, unsigned long end)
+{
+}
+
+static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
+					     bool write, bool execute, bool foreign)
+{
+	if (IS_ENABLED(CONFIG_ARM64_WXN) && execute &&
+	    (vma->vm_flags & (VM_WRITE | VM_EXEC)) == (VM_WRITE | VM_EXEC)) {
+		extern struct arm64_ftr_override sctlr_override;
+		pr_warn_ratelimited(
+			"process %s (%d) attempted to execute from writable memory\n",
+			current->comm, current->pid);
+		/* disallow unless the nowxn override is set */
+		return sctlr_override.val & sctlr_override.mask & 0xf;
+	}
+	return true;
+}
+
 static inline void contextidr_thread_switch(struct task_struct *next)
 {
 	if (!IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR))
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 54886c4b6347..cba9a5e8abb8 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -494,6 +494,12 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 	bl	init_feature_override		// Parse cpu feature overrides
 	bl	switch_to_vhe			// Prefer VHE if possible
 	ldp	x29, x30, [sp], #16
+#ifdef CONFIG_ARM64_WXN
+	ldr_l	x1, sctlr_override + FTR_OVR_VAL_OFFSET
+	tbz	x1, #0, 0f
+	blr	lr
+0:
+#endif
 	bl	start_kernel
 	ASM_BUG()
 SYM_FUNC_END(__primary_switched)
@@ -878,5 +884,25 @@ SYM_FUNC_START_LOCAL(__primary_switch)
 	ldr	x8, =__primary_switched
 	adrp	x0, __PHYS_OFFSET
-	br	x8
+	blr	x8
+#ifdef CONFIG_ARM64_WXN
+	/*
+	 * If we return here, we need to disable WXN before we proceed. This
+	 * requires the MMU to be disabled, so it needs to occur while running
+	 * from the ID map.
+	 */
+	mrs	x0, sctlr_el1
+	bic	x1, x0, #SCTLR_ELx_M
+	msr	sctlr_el1, x1
+	isb
+
+	tlbi	vmalle1
+	dsb	nsh
+	isb
+
+	bic	x0, x0, #SCTLR_ELx_WXN
+	msr	sctlr_el1, x0
+	isb
+	ret
+#endif
 SYM_FUNC_END(__primary_switch)
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index f92836e196e5..85d8fa47d196 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -94,12 +94,27 @@ static const struct ftr_set_desc kaslr __initconst = {
 	},
 };

+#ifdef CONFIG_ARM64_WXN
+asmlinkage struct arm64_ftr_override sctlr_override __ro_after_init;
+static const struct ftr_set_desc sctlr __initconst = {
+	.name		= "sctlr",
+	.override	= &sctlr_override,
+	.fields		= {
+		{ "nowxn", 0 },
+		{}
+	},
+};
+#endif
+
 static const struct ftr_set_desc * const regs[] __initconst = {
 	&mmfr1,
 	&pfr1,
 	&isar1,
 	&isar2,
 	&kaslr,
+#ifdef CONFIG_ARM64_WXN
+	&sctlr,
+#endif
 };

 static const struct {
@@ -115,6 +130,7 @@ static const struct {
 	  "id_aa64isar2.gpa3=0 id_aa64isar2.apa3=0"	},
 	{ "arm64.nomte",	"id_aa64pfr1.mte=0" },
 	{ "nokaslr",		"kaslr.disabled=1" },
+	{ "arm64.nowxn",	"sctlr.nowxn=1" },
 };

 static int __init find_field(const char *cmdline,
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index e802badf9ac0..abc3696bd601 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -495,6 +495,12 @@ SYM_FUNC_START(__cpu_setup)
 	 * Prepare SCTLR
 	 */
 	mov_q	x0, INIT_SCTLR_EL1_MMU_ON
+#ifdef CONFIG_ARM64_WXN
+	ldr_l	x1, sctlr_override + FTR_OVR_VAL_OFFSET
+	tst	x1, #0x1			// WXN disabled on command line?
+	orr	x1, x0, #SCTLR_ELx_WXN
+	csel	x0, x0, x1, ne
+#endif
 	ret					// return to head.S

 	.unreq	mair

From patchwork Mon Apr 11 09:48:19 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 25/30] arm64: head: record the MMU state at primary entry
Date: Mon, 11 Apr 2022 11:48:19 +0200
Message-Id: <20220411094824.4176877-26-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>
References: <20220411094824.4176877-1-ardb@kernel.org>

Prepare for being able to deal with primary entry with the MMU and
caches enabled, by recording whether or not we entered with the MMU on
in register x22.

While at it, add pre_disable_mmu_workaround macro invocations to
init_kernel_el, as its manipulation of SCTLR_ELx may come down to
disabling the MMU after subsequent patches.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index cba9a5e8abb8..1ff474701e99 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -85,10 +85,12 @@
  * x19 create_idmap() .. start_kernel()         ID map VA of the DT blob
  * x20 primary_entry() .. __primary_switch()    CPU boot mode
  * x21 primary_entry() .. start_kernel()        FDT pointer passed at boot in x0
+ * x22 primary_entry() .. start_kernel()        whether we entered with the MMU on
  * x23 __primary_switch() .. relocate_kernel()  physical misalignment/KASLR offset
  * x28 create_idmap(), remap_kernel_text()      callee preserved temp register
  */
 SYM_CODE_START(primary_entry)
+	bl	record_mmu_state
 	bl	preserve_boot_args
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
@@ -104,6 +106,17 @@ SYM_CODE_START(primary_entry)
 	b	__primary_switch
 SYM_CODE_END(primary_entry)

+SYM_CODE_START_LOCAL(record_mmu_state)
+	mrs	x22, CurrentEL
+	cmp	x22, #CurrentEL_EL2
+	mrs	x22, sctlr_el1
+	b.ne	0f
+	mrs	x22, sctlr_el2
+0:	tst	x22, #SCTLR_ELx_M
+	cset	w22, ne
+	ret
+SYM_CODE_END(record_mmu_state)
+
 /*
  * Preserve the arguments passed by the bootloader in x0 .. x3
  */
@@ -528,6 +541,7 @@ SYM_FUNC_START(init_kernel_el)

 SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
 	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
+	pre_disable_mmu_workaround
 	msr	sctlr_el1, x0
 	isb
 	mov_q	x0, INIT_PSTATE_EL1
@@ -559,6 +573,7 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)

 	/* Switching to VHE requires a sane SCTLR_EL1 as a start */
 	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
+	pre_disable_mmu_workaround
 	msr_s	SYS_SCTLR_EL12, x0

 	/*
@@ -574,6 +589,7 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)

 1:	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
+	pre_disable_mmu_workaround
 	msr	sctlr_el1, x0
 	msr	elr_el2, lr

From patchwork Mon Apr 11 09:48:20 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 26/30] arm64: head: avoid cache invalidation when entering with the MMU on
Date: Mon, 11 Apr 2022 11:48:20 +0200
Message-Id: <20220411094824.4176877-27-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>
References: <20220411094824.4176877-1-ardb@kernel.org>

If we enter with the MMU on, there is no need for explicit cache
invalidation for stores to memory, as they will be coherent with the
caches.

Let's take advantage of this, and create the ID map with the MMU still
enabled if that is how we entered, and avoid any cache invalidation
calls in that case.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 1ff474701e99..4a05f4480207 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -92,9 +92,9 @@ SYM_CODE_START(primary_entry)
 	bl	record_mmu_state
 	bl	preserve_boot_args
+	bl	create_idmap
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
-	bl	create_idmap

 	/*
 	 * The following calls CPU setup code, see arch/arm64/mm/proc.S for
@@ -127,11 +127,13 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 	stp	x21, x1, [x0]			// x0 .. x3 at kernel entry
 	stp	x2, x3, [x0, #16]

+	cbnz	x22, 0f				// skip cache invalidation if MMU is on
 	dmb	sy				// needed before dc ivac with
 						// MMU off

 	add	x1, x0, #0x20			// 4 x 8 bytes
 	b	dcache_inval_poc		// tail call
+0:	ret
 SYM_CODE_END(preserve_boot_args)

 SYM_FUNC_START_LOCAL(clear_page_tables)
@@ -366,12 +368,13 @@ SYM_FUNC_START_LOCAL(create_idmap)
 	 * accesses (MMU disabled), invalidate those tables again to
 	 * remove any speculatively loaded cache lines.
 	 */
+	cbnz	x22, 0f				// skip cache invalidation if MMU is on
 	dmb	sy

 	adrp	x0, init_idmap_pg_dir
 	adrp	x1, init_idmap_pg_end
 	bl	dcache_inval_poc
-	ret	x28
+0:	ret	x28
 SYM_FUNC_END(create_idmap)

 SYM_FUNC_START_LOCAL(create_kernel_mapping)

From patchwork Mon Apr 11 09:48:21 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 27/30] arm64: head: clean the ID map page to the PoC
Date: Mon, 11 Apr 2022 11:48:21 +0200
Message-Id: <20220411094824.4176877-28-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>
References: <20220411094824.4176877-1-ardb@kernel.org>

If we enter with the MMU and caches enabled, the caller may not have
performed any cache maintenance. So clean the ID mapped page to the PoC,
and invalidate the I-cache so we can safely execute from it after
disabling the MMU and caches.
Note that this means primary_entry() itself needs to be moved into the ID map as well, as we will return from init_kernel_el() with the MMU and caches off. Signed-off-by: Ard Biesheuvel --- arch/arm64/kernel/head.S | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 4a05f4480207..0987d59ae333 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -75,7 +75,7 @@ __EFI_PE_HEADER - __INIT + .section ".idmap.text","awx" /* * The following callee saved general purpose registers are used on the @@ -93,6 +93,18 @@ SYM_CODE_START(primary_entry) bl record_mmu_state bl preserve_boot_args bl create_idmap + + /* + * If we entered with the MMU and caches on, clean the ID mapped part + * of the primary boot code to the PoC and invalidate it from the + * I-cache so we can safely turn them off. + */ + cbz x22, 0f + adrp x0, __idmap_text_start + adr_l x1, __idmap_text_end + bl dcache_clean_poc + ic ialluis +0: bl init_kernel_el // w0=cpu_boot_mode mov x20, x0 @@ -106,6 +118,7 @@ SYM_CODE_START(primary_entry) b __primary_switch SYM_CODE_END(primary_entry) + __INIT SYM_CODE_START_LOCAL(record_mmu_state) mrs x22, CurrentEL cmp x22, #CurrentEL_EL2 From patchwork Mon Apr 11 09:48:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 561269 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4AF45C4167B for ; Mon, 11 Apr 2022 09:51:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344716AbiDKJyE (ORCPT ); Mon, 11 Apr 2022 05:54:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38358 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by 
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 28/30] efi: libstub: pass image handle to handle_kernel_image()
Date: Mon, 11 Apr 2022 11:48:22 +0200
Message-Id: <20220411094824.4176877-29-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>
References: <20220411094824.4176877-1-ardb@kernel.org>
List-ID: X-Mailing-List: linux-efi@vger.kernel.org
In a future patch, arm64's implementation of handle_kernel_image() will
omit randomizing the placement of the kernel if the load address was
chosen randomly by the loader. In order to do this, it needs to locate a
protocol on the image handle, so pass it to handle_kernel_image().
Signed-off-by: Ard Biesheuvel
---
 drivers/firmware/efi/libstub/arm32-stub.c | 3 ++-
 drivers/firmware/efi/libstub/arm64-stub.c | 3 ++-
 drivers/firmware/efi/libstub/efi-stub.c   | 2 +-
 drivers/firmware/efi/libstub/efistub.h    | 3 ++-
 drivers/firmware/efi/libstub/riscv-stub.c | 3 ++-
 5 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/firmware/efi/libstub/arm32-stub.c b/drivers/firmware/efi/libstub/arm32-stub.c
index 4b5b2403b3a0..0131e3aaa605 100644
--- a/drivers/firmware/efi/libstub/arm32-stub.c
+++ b/drivers/firmware/efi/libstub/arm32-stub.c
@@ -117,7 +117,8 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 				 unsigned long *image_size,
 				 unsigned long *reserve_addr,
 				 unsigned long *reserve_size,
-				 efi_loaded_image_t *image)
+				 efi_loaded_image_t *image,
+				 efi_handle_t image_handle)
 {
 	const int slack = TEXT_OFFSET - 5 * PAGE_SIZE;
 	int alloc_size = MAX_UNCOMP_KERNEL_SIZE + EFI_PHYS_ALIGN;
diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
index 9cc556013d08..00c91a3807ea 100644
--- a/drivers/firmware/efi/libstub/arm64-stub.c
+++ b/drivers/firmware/efi/libstub/arm64-stub.c
@@ -83,7 +83,8 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 				 unsigned long *image_size,
 				 unsigned long *reserve_addr,
 				 unsigned long *reserve_size,
-				 efi_loaded_image_t *image)
+				 efi_loaded_image_t *image,
+				 efi_handle_t image_handle)
 {
 	efi_status_t status;
 	unsigned long kernel_size, kernel_memsize = 0;
diff --git a/drivers/firmware/efi/libstub/efi-stub.c b/drivers/firmware/efi/libstub/efi-stub.c
index da93864d7abc..f515394cce6e 100644
--- a/drivers/firmware/efi/libstub/efi-stub.c
+++ b/drivers/firmware/efi/libstub/efi-stub.c
@@ -198,7 +198,7 @@ efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
 
 	status = handle_kernel_image(&image_addr, &image_size, &reserve_addr,
 				     &reserve_size,
-				     image);
+				     image, handle);
 	if (status != EFI_SUCCESS) {
 		efi_err("Failed to relocate kernel\n");
 		goto fail_free_screeninfo;
diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
index edb77b0621ea..c4f4f078087d 100644
--- a/drivers/firmware/efi/libstub/efistub.h
+++ b/drivers/firmware/efi/libstub/efistub.h
@@ -865,7 +865,8 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 				 unsigned long *image_size,
 				 unsigned long *reserve_addr,
 				 unsigned long *reserve_size,
-				 efi_loaded_image_t *image);
+				 efi_loaded_image_t *image,
+				 efi_handle_t image_handle);
 
 asmlinkage void __noreturn efi_enter_kernel(unsigned long entrypoint,
 					    unsigned long fdt_addr,
diff --git a/drivers/firmware/efi/libstub/riscv-stub.c b/drivers/firmware/efi/libstub/riscv-stub.c
index 9c460843442f..eec043873354 100644
--- a/drivers/firmware/efi/libstub/riscv-stub.c
+++ b/drivers/firmware/efi/libstub/riscv-stub.c
@@ -80,7 +80,8 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 				 unsigned long *image_size,
 				 unsigned long *reserve_addr,
 				 unsigned long *reserve_size,
-				 efi_loaded_image_t *image)
+				 efi_loaded_image_t *image,
+				 efi_handle_t image_handle)
 {
 	unsigned long kernel_size = 0;
 	unsigned long preferred_addr;

From patchwork Mon Apr 11 09:48:23 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 29/30] efi/arm64: libstub: run image in place if randomized by the loader
Date: Mon, 11 Apr 2022 11:48:23 +0200
Message-Id: <20220411094824.4176877-30-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>
References: <20220411094824.4176877-1-ardb@kernel.org>
List-ID: X-Mailing-List: linux-efi@vger.kernel.org
If the loader has already placed the EFI kernel image randomly in
physical memory, and indicates having done so by installing the 'fixed
placement' protocol onto the image handle, don't bother randomizing the
placement again in the EFI stub.

Signed-off-by: Ard Biesheuvel
---
 drivers/firmware/efi/libstub/arm64-stub.c | 12 +++++++++---
 include/linux/efi.h                       | 11 +++++++++++
 2 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
index 00c91a3807ea..577173ee1f83 100644
--- a/drivers/firmware/efi/libstub/arm64-stub.c
+++ b/drivers/firmware/efi/libstub/arm64-stub.c
@@ -101,7 +101,15 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 	u64 min_kimg_align = efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
-		if (!efi_nokaslr) {
+		efi_guid_t li_fixed_proto = LINUX_EFI_LOADED_IMAGE_FIXED_GUID;
+		void *p;
+
+		if (efi_nokaslr) {
+			efi_info("KASLR disabled on kernel command line\n");
+		} else if (efi_bs_call(handle_protocol, image_handle,
+				       &li_fixed_proto, &p) == EFI_SUCCESS) {
+			efi_info("Image placement fixed by loader\n");
+		} else {
 			status = efi_get_random_bytes(sizeof(phys_seed),
 						      (u8 *)&phys_seed);
 			if (status == EFI_NOT_FOUND) {
@@ -112,8 +120,6 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 						status);
 				efi_nokaslr = true;
 			}
-		} else {
-			efi_info("KASLR disabled on kernel command line\n");
 		}
 	}
 
diff --git a/include/linux/efi.h b/include/linux/efi.h
index ccd4d3f91c98..d7567006e151 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -406,6 +406,17 @@ void efi_native_runtime_setup(void);
 #define LINUX_EFI_INITRD_MEDIA_GUID		EFI_GUID(0x5568e427, 0x68fc, 0x4f3d,  0xac, 0x74, 0xca, 0x55, 0x52, 0x31, 0xcc, 0x68)
 #define LINUX_EFI_MOK_VARIABLE_TABLE_GUID	EFI_GUID(0xc451ed2b, 0x9694, 0x45d3,  0xba, 0xba, 0xed, 0x9f, 0x89, 0x88, 0xa3, 0x89)
 
+/*
+ * This GUID may be installed onto the kernel image's handle as a NULL protocol
+ * to signal to the stub that the placement of the image should be respected,
+ * and moving the image in physical memory is undesirable. To ensure
+ * compatibility with 64k pages kernels with virtually mapped stacks, and to
+ * avoid defeating physical randomization, this protocol should only be
+ * installed if the image was placed at a randomized 128k aligned address in
+ * memory.
+ */
+#define LINUX_EFI_LOADED_IMAGE_FIXED_GUID	EFI_GUID(0xf5a37b6d, 0x3344, 0x42a5,  0xb6, 0xbb, 0x97, 0x86, 0x48, 0xc1, 0x89, 0x0a)
+
 /* OEM GUIDs */
 #define DELLEMC_EFI_RCI2_TABLE_GUID		EFI_GUID(0x2d9f28a2, 0xa886, 0x456a,  0x97, 0xa8, 0xf1, 0x1e, 0xf2, 0x4f, 0xf4, 0x55)
 #define AMD_SEV_MEM_ENCRYPT_GUID		EFI_GUID(0x0cf29b71, 0x9e51, 0x433a,  0xa3, 0xb7, 0x81, 0xf3, 0xab, 0x16, 0xb8, 0x75)

From patchwork Mon Apr 11 09:48:24 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown
Subject: [PATCH v3 30/30] arm64: efi/libstub: enter with the MMU on if executing in place
Date: Mon, 11 Apr 2022 11:48:24 +0200
Message-Id: <20220411094824.4176877-31-ardb@kernel.org>
In-Reply-To: <20220411094824.4176877-1-ardb@kernel.org>
References: <20220411094824.4176877-1-ardb@kernel.org>
List-ID: X-Mailing-List: linux-efi@vger.kernel.org

If the kernel image has not been moved from the place where it was
loaded by the firmware, just call the kernel entrypoint directly, and
keep the MMU and caches enabled. This removes the need for any cache
invalidation in the entry path.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/efi-entry.S | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
index 61a87fa1c305..0da0b373cf32 100644
--- a/arch/arm64/kernel/efi-entry.S
+++ b/arch/arm64/kernel/efi-entry.S
@@ -23,6 +23,10 @@ SYM_CODE_START(efi_enter_kernel)
 	add	x19, x0, x2		// relocated Image entrypoint
 	mov	x20, x1			// DTB address
 
+	adrp	x3, _text		// just call the entrypoint
+	cmp	x0, x3			// directly if the image was
+	b.eq	2f			// not moved around in memory
+
 	/*
	 * Clean the copied Image to the PoC, and ensure it is not shadowed by
	 * stale icache entries from before relocation.