From patchwork Sat Nov 9 12:07:55 2024
X-Patchwork-Submitter: Michael Tokarev
X-Patchwork-Id: 842099
From: Michael Tokarev <mjt@tls.msk.ru>
To: qemu-devel@nongnu.org
Cc: qemu-stable@nongnu.org, Peter Maydell, Thomas Huth,
    Richard Henderson, Michael Tokarev
Subject: [Stable-9.0.4 53/57] target/arm: Add new MMU indexes for AArch32 Secure PL1&0
Date: Sat, 9 Nov 2024 15:07:55 +0300
Message-Id: <20241109120801.3295120-53-mjt@tls.msk.ru>
X-Mailer: git-send-email 2.39.5

From: Peter Maydell <peter.maydell@linaro.org>

Our current usage of MMU indexes when EL3 is AArch32 is confused.
Architecturally, when EL3 is AArch32, all Secure code runs under the
Secure PL1&0 translation regime:
 * code at EL3, which might be Mon, or SVC, or any of the other
   privileged modes (PL1)
 * code at EL0 (Secure PL0)

This is different from when EL3 is AArch64, in which case EL3 is its
own translation regime, and EL1 and EL0 (whether AArch32 or AArch64)
have their own regime.

We claimed to be mapping Secure PL1 to our ARMMMUIdx_EL3, but didn't
do anything special about Secure PL0, which meant it used the same
ARMMMUIdx_EL10_0 that NonSecure PL0 does. This resulted in a bug
where arm_sctlr() incorrectly picked the NonSecure SCTLR as the
controlling register when in Secure PL0, which meant we were
spuriously generating alignment faults because we were looking at the
wrong SCTLR control bits.
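To make that failure mode concrete, here is a minimal standalone
sketch of the pre-patch index and SCTLR-bank selection (a toy model
with simplified names, not the actual QEMU code):

    #include <assert.h>
    #include <stdbool.h>

    /* Toy model of the pre-patch EL0 mmu_idx selection -- not QEMU code. */
    enum mmu_idx { E10_0, E20_0, E3 };

    /* Secure PL0 was not distinguished: it got E10_0, like NonSecure PL0. */
    static enum mmu_idx old_idx_for_el0(bool secure, bool el3_is_aarch32)
    {
        (void)secure;
        (void)el3_is_aarch32;
        return E10_0;
    }

    /* arm_sctlr()-style bank choice: E20_0 -> SCTLR_EL2, else SCTLR_EL1. */
    static int old_sctlr_bank_for_el0(enum mmu_idx idx)
    {
        return idx == E20_0 ? 2 : 1;
    }

    int main(void)
    {
        /*
         * Secure PL0 lands on SCTLR_EL1 -- the NonSecure bank --
         * instead of the Secure (EL3) one, which is where the
         * spurious alignment faults came from.
         */
        assert(old_sctlr_bank_for_el0(old_idx_for_el0(true, true)) == 1);
        return 0;
    }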
The use of ARMMMUIdx_EL3 for Secure PL1 also resulted in the bug that
we wouldn't honour the PAN bit for Secure PL1, because there's no
equivalent _PAN mmu index for it.

Fix this by adding two new MMU indexes:
 * ARMMMUIdx_E30_0 is for Secure PL0
 * ARMMMUIdx_E30_3_PAN is for Secure PL1 when PAN is enabled
The existing ARMMMUIdx_E3 is used to mean "Secure PL1 without PAN"
(and would be named ARMMMUIdx_E30_3 in an AArch32-centric scheme).

These extra two indexes bring us up to the maximum of 16 that the
core code can currently support.

This commit:
 * adds the new MMU index handling to the various places where we
   deal in MMU index values
 * adds assertions that we aren't AArch32 EL3 in a couple of places
   that currently use the E10 indexes, to document why they don't
   also need to handle the E30 indexes
 * documents in a comment why regime_has_2_ranges() doesn't need
   updating

Notes for backporting: this commit depends on the preceding revert of
4c2c04746932; that revert and this commit should probably be
backported to everywhere that we originally backported 4c2c04746932.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2588
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Thomas Huth
Reviewed-by: Richard Henderson
Message-id: 20241101142845.1712482-3-peter.maydell@linaro.org
(cherry picked from commit efbe180ad2ed75d4cc64dfc6fb46a015eef713d1)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
---
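The post-patch mapping for AArch32 EL3 can be summarised with a
hypothetical helper (illustrative only; the real selection logic is
the arm_mmu_idx_el() hunk in the diff below, and it assumes the
patched target/arm/cpu.h for the ARMMMUIdx names):

    /* Hypothetical summary helper, not part of the patch. */
    static ARMMMUIdx aarch32_secure_mmu_idx(int el, bool pan_enabled)
    {
        if (el == 0) {
            return ARMMMUIdx_E30_0;              /* Secure PL0 */
        }
        /* Mon, SVC and the other privileged modes are all Secure PL1 */
        return pan_enabled ? ARMMMUIdx_E30_3_PAN : ARMMMUIdx_E3;
    }

A TLB operation covering the whole AArch32 Secure PL1&0 regime would
correspondingly need all three index bits, i.e.
ARMMMUIdxBit_E3 | ARMMMUIdxBit_E30_0 | ARMMMUIdxBit_E30_3_PAN.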
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 7c721f22bd..7688ceaf8f 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2699,8 +2699,7 @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
  *  + NonSecure PL1 & 0 stage 1
  *  + NonSecure PL1 & 0 stage 2
  *  + NonSecure PL2
- *  + Secure PL0
- *  + Secure PL1
+ *  + Secure PL1 & 0
  * (reminder: for 32 bit EL3, Secure PL1 is *EL3*, not EL1.)
  *
  * For QEMU, an mmu_idx is not quite the same as a translation regime because:
@@ -2735,19 +2734,21 @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
  *
  * This gives us the following list of cases:
  *
- * EL0 EL1&0 stage 1+2 (aka NS PL0)
- * EL1 EL1&0 stage 1+2 (aka NS PL1)
- * EL1 EL1&0 stage 1+2 +PAN
+ * EL0 EL1&0 stage 1+2 (aka NS PL0 PL1&0 stage 1+2)
+ * EL1 EL1&0 stage 1+2 (aka NS PL1 PL1&0 stage 1+2)
+ * EL1 EL1&0 stage 1+2 +PAN (aka NS PL1 PL1&0 stage 1+2 +PAN)
  * EL0 EL2&0
  * EL2 EL2&0
  * EL2 EL2&0 +PAN
  * EL2 (aka NS PL2)
- * EL3 (aka S PL1)
+ * EL3 (aka AArch32 S PL1 PL1&0)
+ * AArch32 S PL0 PL1&0 (we call this EL30_0)
+ * AArch32 S PL1 PL1&0 +PAN (we call this EL30_3_PAN)
  * Stage2 Secure
  * Stage2 NonSecure
  * plus one TLB per Physical address space: S, NS, Realm, Root
  *
- * for a total of 14 different mmu_idx.
+ * for a total of 16 different mmu_idx.
  *
  * R profile CPUs have an MPU, but can use the same set of MMU indexes
  * as A profile. They only need to distinguish EL0 and EL1 (and
@@ -2811,6 +2812,8 @@ typedef enum ARMMMUIdx {
     ARMMMUIdx_E20_2_PAN = 5 | ARM_MMU_IDX_A,
     ARMMMUIdx_E2 = 6 | ARM_MMU_IDX_A,
     ARMMMUIdx_E3 = 7 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E30_0 = 8 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E30_3_PAN = 9 | ARM_MMU_IDX_A,
 
     /*
      * Used for second stage of an S12 page table walk, or for descriptor
@@ -2818,14 +2821,14 @@ typedef enum ARMMMUIdx {
      * are in use simultaneously for SecureEL2: the security state for
      * the S2 ptw is selected by the NS bit from the S1 ptw.
      */
-    ARMMMUIdx_Stage2_S = 8 | ARM_MMU_IDX_A,
-    ARMMMUIdx_Stage2 = 9 | ARM_MMU_IDX_A,
+    ARMMMUIdx_Stage2_S = 10 | ARM_MMU_IDX_A,
+    ARMMMUIdx_Stage2 = 11 | ARM_MMU_IDX_A,
 
     /* TLBs with 1-1 mapping to the physical address spaces. */
-    ARMMMUIdx_Phys_S = 10 | ARM_MMU_IDX_A,
-    ARMMMUIdx_Phys_NS = 11 | ARM_MMU_IDX_A,
-    ARMMMUIdx_Phys_Root = 12 | ARM_MMU_IDX_A,
-    ARMMMUIdx_Phys_Realm = 13 | ARM_MMU_IDX_A,
+    ARMMMUIdx_Phys_S = 12 | ARM_MMU_IDX_A,
+    ARMMMUIdx_Phys_NS = 13 | ARM_MMU_IDX_A,
+    ARMMMUIdx_Phys_Root = 14 | ARM_MMU_IDX_A,
+    ARMMMUIdx_Phys_Realm = 15 | ARM_MMU_IDX_A,
 
     /*
      * These are not allocated TLBs and are used only for AT system
@@ -2864,6 +2867,8 @@ typedef enum ARMMMUIdxBit {
     TO_CORE_BIT(E20_2),
     TO_CORE_BIT(E20_2_PAN),
     TO_CORE_BIT(E3),
+    TO_CORE_BIT(E30_0),
+    TO_CORE_BIT(E30_3_PAN),
 
     TO_CORE_BIT(Stage2),
     TO_CORE_BIT(Stage2_S),
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 42044ae14b..a0b63a1bce 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -444,6 +444,9 @@ static int alle1_tlbmask(CPUARMState *env)
      * Note that the 'ALL' scope must invalidate both stage 1 and
      * stage 2 translations, whereas most other scopes only invalidate
      * stage 1 translations.
+     *
+     * For AArch32 this is only used for TLBIALLNSNH and VTTBR
+     * writes, so only needs to apply to NS PL1&0, not S PL1&0.
      */
     return (ARMMMUIdxBit_E10_1 |
             ARMMMUIdxBit_E10_1_PAN |
@@ -3762,7 +3765,11 @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         /* stage 1 current state PL1: ATS1CPR, ATS1CPW, ATS1CPRP, ATS1CPWP */
         switch (el) {
         case 3:
-            mmu_idx = ARMMMUIdx_E3;
+            if (ri->crm == 9 && arm_pan_enabled(env)) {
+                mmu_idx = ARMMMUIdx_E30_3_PAN;
+            } else {
+                mmu_idx = ARMMMUIdx_E3;
+            }
             break;
         case 2:
             g_assert(ss != ARMSS_Secure);  /* ARMv8.4-SecEL2 is 64-bit only */
@@ -3782,7 +3789,7 @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         /* stage 1 current state PL0: ATS1CUR, ATS1CUW */
         switch (el) {
         case 3:
-            mmu_idx = ARMMMUIdx_E10_0;
+            mmu_idx = ARMMMUIdx_E30_0;
             break;
         case 2:
             g_assert(ss != ARMSS_Secure);  /* ARMv8.4-SecEL2 is 64-bit only */
@@ -4892,11 +4899,14 @@ static int vae1_tlbmask(CPUARMState *env)
     uint64_t hcr = arm_hcr_el2_eff(env);
     uint16_t mask;
 
+    assert(arm_feature(env, ARM_FEATURE_AARCH64));
+
     if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
         mask = ARMMMUIdxBit_E20_2 |
                ARMMMUIdxBit_E20_2_PAN |
                ARMMMUIdxBit_E20_0;
     } else {
+        /* This is AArch64 only, so we don't need to touch the EL30_x TLBs */
         mask = ARMMMUIdxBit_E10_1 |
                ARMMMUIdxBit_E10_1_PAN |
                ARMMMUIdxBit_E10_0;
@@ -4935,6 +4945,8 @@ static int vae1_tlbbits(CPUARMState *env, uint64_t addr)
     uint64_t hcr = arm_hcr_el2_eff(env);
     ARMMMUIdx mmu_idx;
 
+    assert(arm_feature(env, ARM_FEATURE_AARCH64));
+
     /* Only the regime of the mmu_idx below is significant. */
     if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
         mmu_idx = ARMMMUIdx_E20_0;
@@ -11768,10 +11780,20 @@ void arm_cpu_do_interrupt(CPUState *cs)
 
 uint64_t arm_sctlr(CPUARMState *env, int el)
 {
-    /* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */
+    /* Only EL0 needs to be adjusted for EL1&0 or EL2&0 or EL3&0 */
     if (el == 0) {
         ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0);
-        el = mmu_idx == ARMMMUIdx_E20_0 ? 2 : 1;
+        switch (mmu_idx) {
+        case ARMMMUIdx_E20_0:
+            el = 2;
+            break;
+        case ARMMMUIdx_E30_0:
+            el = 3;
+            break;
+        default:
+            el = 1;
+            break;
+        }
     }
     return env->cp15.sctlr_el[el];
 }
@@ -12439,6 +12461,7 @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
     switch (mmu_idx) {
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E20_0:
+    case ARMMMUIdx_E30_0:
         return 0;
     case ARMMMUIdx_E10_1:
     case ARMMMUIdx_E10_1_PAN:
@@ -12448,6 +12471,7 @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
     case ARMMMUIdx_E20_2_PAN:
         return 2;
     case ARMMMUIdx_E3:
+    case ARMMMUIdx_E30_3_PAN:
         return 3;
     default:
         g_assert_not_reached();
@@ -12476,6 +12500,9 @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
         hcr = arm_hcr_el2_eff(env);
         if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
             idx = ARMMMUIdx_E20_0;
+        } else if (arm_is_secure_below_el3(env) &&
+                   !arm_el_is_aa64(env, 3)) {
+            idx = ARMMMUIdx_E30_0;
         } else {
             idx = ARMMMUIdx_E10_0;
         }
@@ -12500,6 +12527,9 @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
         }
         break;
     case 3:
+        if (!arm_el_is_aa64(env, 3) && arm_pan_enabled(env)) {
+            return ARMMMUIdx_E30_3_PAN;
+        }
         return ARMMMUIdx_E3;
     default:
         g_assert_not_reached();
diff --git a/target/arm/internals.h b/target/arm/internals.h
index f078e5377e..7867778150 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -835,7 +835,16 @@ static inline void arm_call_el_change_hook(ARMCPU *cpu)
     }
 }
 
-/* Return true if this address translation regime has two ranges. */
+/*
+ * Return true if this address translation regime has two ranges.
+ * Note that this will not return the correct answer for AArch32
+ * Secure PL1&0 (i.e. mmu indexes E3, E30_0, E30_3_PAN), but it is
+ * never called from a context where EL3 can be AArch32. (The
+ * correct return value for ARMMMUIdx_E3 would be different for
+ * that case, so we can't just make the function return the
+ * correct value anyway; we would need an extra "bool e3_is_aarch32"
+ * argument which all the current callsites would pass as 'false'.)
+ */
 static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
 {
     switch (mmu_idx) {
@@ -860,6 +869,7 @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
     case ARMMMUIdx_Stage1_E1_PAN:
     case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_E20_2_PAN:
+    case ARMMMUIdx_E30_3_PAN:
         return true;
     default:
         return false;
@@ -883,10 +893,11 @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
     case ARMMMUIdx_E2:
         return 2;
     case ARMMMUIdx_E3:
+    case ARMMMUIdx_E30_0:
+    case ARMMMUIdx_E30_3_PAN:
         return 3;
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_Stage1_E0:
-        return arm_el_is_aa64(env, 3) || !arm_is_secure_below_el3(env) ? 1 : 3;
     case ARMMMUIdx_Stage1_E1:
     case ARMMMUIdx_Stage1_E1_PAN:
     case ARMMMUIdx_E10_1:
@@ -910,6 +921,7 @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
     switch (mmu_idx) {
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E20_0:
+    case ARMMMUIdx_E30_0:
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_MUser:
     case ARMMMUIdx_MSUser:
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index f2c9e5a422..a244db98ee 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -264,6 +264,8 @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
     case ARMMMUIdx_E20_2_PAN:
     case ARMMMUIdx_E2:
     case ARMMMUIdx_E3:
+    case ARMMMUIdx_E30_0:
+    case ARMMMUIdx_E30_3_PAN:
         break;
 
     case ARMMMUIdx_Phys_S:
@@ -3603,6 +3605,8 @@ bool get_phys_addr(CPUARMState *env, vaddr address,
         ss = ARMSS_Secure;
         break;
     case ARMMMUIdx_E3:
+    case ARMMMUIdx_E30_0:
+    case ARMMMUIdx_E30_3_PAN:
         if (arm_feature(env, ARM_FEATURE_AARCH64) &&
             cpu_isar_feature(aa64_rme, env_archcpu(env))) {
             ss = ARMSS_Root;
diff --git a/target/arm/tcg/op_helper.c b/target/arm/tcg/op_helper.c
index c199b69fbf..7d47afff6d 100644
--- a/target/arm/tcg/op_helper.c
+++ b/target/arm/tcg/op_helper.c
@@ -858,7 +858,19 @@ void HELPER(tidcp_el0)(CPUARMState *env, uint32_t syndrome)
 {
     /* See arm_sctlr(), but we also need the sctlr el. */
     ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0);
-    int target_el = mmu_idx == ARMMMUIdx_E20_0 ? 2 : 1;
+    int target_el;
+
+    switch (mmu_idx) {
+    case ARMMMUIdx_E20_0:
+        target_el = 2;
+        break;
+    case ARMMMUIdx_E30_0:
+        target_el = 3;
+        break;
+    default:
+        target_el = 1;
+        break;
+    }
 
     /*
      * The bit is not valid unless the target el is aa64, but since the
diff --git a/target/arm/tcg/translate.c b/target/arm/tcg/translate.c
index dc49a8d806..9b0decd4e3 100644
--- a/target/arm/tcg/translate.c
+++ b/target/arm/tcg/translate.c
@@ -229,6 +229,9 @@ static inline int get_a32_user_mem_index(DisasContext *s)
      */
     switch (s->mmu_idx) {
     case ARMMMUIdx_E3:
+    case ARMMMUIdx_E30_0:
+    case ARMMMUIdx_E30_3_PAN:
+        return arm_to_core_mmu_idx(ARMMMUIdx_E30_0);
     case ARMMMUIdx_E2:            /* this one is UNPREDICTABLE */
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
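
Note the final translate.c hunk above: AArch32 unprivileged loads and
stores (LDRT/STRT) executed from Secure PL1 now translate through the
new Secure PL0 index instead of falling through to the NonSecure one.

With the diff applied, all sixteen A-profile mmu_idx values are
assigned. A hypothetical compile-time check (not part of this patch,
written against the patched target/arm/cpu.h) would illustrate that
the index space is now exhausted:

    #include "qemu/osdep.h"
    #include "cpu.h"

    /* Hypothetical guard, not in the patch: 15 is the last A-profile slot. */
    QEMU_BUILD_BUG_ON(ARMMMUIdx_Phys_Realm != (15 | ARM_MMU_IDX_A));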