Message ID | 20221001162318.153420-26-richard.henderson@linaro.org |
---|---|
State | New |
Series | target/arm: Implement FEAT_HAFDBS |
On Sat, 1 Oct 2022 at 17:39, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> We had been marking this ARM_MMU_IDX_NOTLB, move it to a real tlb.
> Flush the tlb when invalidating stage 1+2 translations.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> @@ -2977,15 +2987,6 @@ typedef enum ARMMMUIdx {
>      ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
>      ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
>      ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
> -    /*
> -     * Not allocated a TLB: used only for second stage of an S12 page
> -     * table walk, or for descriptor loads during first stage of an S1
> -     * page table walk. Note that if we ever want to have a TLB for this
> -     * then various TLB flush insns which currently are no-ops or flush
> -     * only stage 1 MMU indexes will need to change to flush stage 2.
> -     */
> -    ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB,
> -    ARMMMUIdx_Stage2_S = 4 | ARM_MMU_IDX_NOTLB,
>
>      /*
>       * M-profile.
> @@ -3016,6 +3017,8 @@ typedef enum ARMMMUIdxBit {
>      TO_CORE_BIT(E20_2),
>      TO_CORE_BIT(E20_2_PAN),
>      TO_CORE_BIT(E3),
> +    TO_CORE_BIT(Stage2),
> +    TO_CORE_BIT(Stage2_S),
>
>      TO_CORE_BIT(MUser),
>      TO_CORE_BIT(MPriv),
> diff --git a/target/arm/helper.c b/target/arm/helper.c
> index 6fe85c6642..19a03eb200 100644
> --- a/target/arm/helper.c
> +++ b/target/arm/helper.c
> @@ -4319,7 +4319,9 @@ static int alle1_tlbmask(CPUARMState *env)
>       */
>      return (ARMMMUIdxBit_E10_1 |
>              ARMMMUIdxBit_E10_1_PAN |
> -            ARMMMUIdxBit_E10_0);
> +            ARMMMUIdxBit_E10_0 |
> +            ARMMMUIdxBit_Stage2 |
> +            ARMMMUIdxBit_Stage2_S);
>  }

This isn't sufficient. As the comment notes, you also need to change
all the TLBI ops for S2 invalidates which we currently implement as
ARM_CP_NOP so they now flush the stage 2 TLB. I think that searching
helper.c for 'IPAS2' probably finds you all of them.

alle1_tlbmask() is also only used for the aarch64 TLBI ops -- the
aarch32 ones are tlbiall_nsnh_write() and tlbiall_nsnh_is_write(),
I think, and those also now need to flush stage 2.

VMID writes also now need to flush the stage 2 TLB as well as the
combined s1&2 TLB -- see vttbr_write().

Side note, looks like we didn't update vttbr_write() to know about
the EL2&0 MMU indexes ?

thanks
-- PMM
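[Editor's note: a minimal sketch of the kind of conversion Peter describes for the IPAS2* ops, assuming each ARM_CP_NOP registration gains a write function that flushes the new stage 2 TLB indexes. The handler name, the address extraction, and the choice to flush both the NS and Secure stage 2 indexes are illustrative assumptions, not the actual follow-up patch.]

/*
 * Sketch only, in target/arm/helper.c: turn one of the TLBI IPAS2*IS
 * ops, currently registered as ARM_CP_NOP, into a real stage 2 flush.
 */
static void tlbi_aa64_ipas2e1is_write(CPUARMState *env,
                                      const ARMCPRegInfo *ri,
                                      uint64_t value)
{
    CPUState *cs = env_cpu(env);
    /* Xt carries the IPA >> 12; recover a page address as the VAE1 handlers do. */
    uint64_t pageaddr = sextract64(value << 12, 0, 56);

    /* Inner-shareable op: broadcast the flush to all CPUs, like the other *IS ops. */
    tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
                                             ARMMMUIdxBit_Stage2 |
                                             ARMMMUIdxBit_Stage2_S);
}

The matching ARMCPRegInfo entry would then drop ARM_CP_NOP and point .writefn at a handler like this; the non-IS variants would use tlb_flush_page_by_mmuidx() on the local CPU instead.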
On 10/6/22 08:46, Peter Maydell wrote:
> Side note, looks like we didn't update vttbr_write() to know about
> the EL2&0 MMU indexes ?

EL2&0 is a single-stage regime, unaffected by VTTBR.


r~
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index 98bd9e435e..283618f601 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -40,6 +40,6 @@
      bool guarded;
 #endif
 
-#define NB_MMU_MODES 10
+#define NB_MMU_MODES 12
 
 #endif
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 0effa85c56..732c0c00ac 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2900,8 +2900,9 @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
  *  EL2 (aka NS PL2)
  *  EL3 (aka S PL1)
  *  Physical (NS & S)
+ *  Stage2 (NS & S)
  *
- * for a total of 10 different mmu_idx.
+ * for a total of 12 different mmu_idx.
  *
  * R profile CPUs have an MPU, but can use the same set of MMU indexes
  * as A profile. They only need to distinguish EL0 and EL1 (and
@@ -2970,6 +2971,15 @@ typedef enum ARMMMUIdx {
     ARMMMUIdx_Phys_NS = 8 | ARM_MMU_IDX_A,
     ARMMMUIdx_Phys_S = 9 | ARM_MMU_IDX_A,
 
+    /*
+     * Used for second stage of an S12 page table walk, or for descriptor
+     * loads during first stage of an S1 page table walk. Note that both
+     * are in use simultaneously for SecureEL2: the security state for
+     * the S2 ptw is selected by the NS bit from the S1 ptw.
+     */
+    ARMMMUIdx_Stage2 = 10 | ARM_MMU_IDX_A,
+    ARMMMUIdx_Stage2_S = 11 | ARM_MMU_IDX_A,
+
     /*
      * These are not allocated TLBs and are used only for AT system
      * instructions or for the first stage of an S12 page table walk.
@@ -2977,15 +2987,6 @@ typedef enum ARMMMUIdx {
     ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
     ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
     ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
-    /*
-     * Not allocated a TLB: used only for second stage of an S12 page
-     * table walk, or for descriptor loads during first stage of an S1
-     * page table walk. Note that if we ever want to have a TLB for this
-     * then various TLB flush insns which currently are no-ops or flush
-     * only stage 1 MMU indexes will need to change to flush stage 2.
-     */
-    ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB,
-    ARMMMUIdx_Stage2_S = 4 | ARM_MMU_IDX_NOTLB,
 
     /*
      * M-profile.
@@ -3016,6 +3017,8 @@ typedef enum ARMMMUIdxBit {
     TO_CORE_BIT(E20_2),
     TO_CORE_BIT(E20_2_PAN),
     TO_CORE_BIT(E3),
+    TO_CORE_BIT(Stage2),
+    TO_CORE_BIT(Stage2_S),
 
     TO_CORE_BIT(MUser),
     TO_CORE_BIT(MPriv),
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 6fe85c6642..19a03eb200 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -4319,7 +4319,9 @@ static int alle1_tlbmask(CPUARMState *env)
      */
     return (ARMMMUIdxBit_E10_1 |
             ARMMMUIdxBit_E10_1_PAN |
-            ARMMMUIdxBit_E10_0);
+            ARMMMUIdxBit_E10_0 |
+            ARMMMUIdxBit_Stage2 |
+            ARMMMUIdxBit_Stage2_S);
 }
 
 static int e2_tlbmask(CPUARMState *env)
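[Editor's note: for Peter's VMID point, a hedged sketch of how vttbr_write() in target/arm/helper.c might pick up the new ARMMMUIdxBit_Stage2/Stage2_S core bits added above. The surrounding compare-and-write structure is a simplification of the existing function; only the extended mask is the point.]

/*
 * Sketch only: a change to VTTBR (in particular the VMID) already
 * invalidates the combined stage 1+2 TLB entries; with Stage2 now a
 * real TLB, those entries have to be dropped as well.
 */
static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                        uint64_t value)
{
    CPUState *cs = env_cpu(env);

    if (raw_read(env, ri) != value) {
        uint16_t mask = ARMMMUIdxBit_E10_1 |
                        ARMMMUIdxBit_E10_1_PAN |
                        ARMMMUIdxBit_E10_0 |
                        ARMMMUIdxBit_Stage2 |    /* new */
                        ARMMMUIdxBit_Stage2_S;   /* new */

        tlb_flush_by_mmuidx(cs, mask);
        raw_write(env, ri, value);
    }
}

The aarch32 tlbiall_nsnh_write()/tlbiall_nsnh_is_write() handlers that Peter mentions would grow the same two bits in their flush masks.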
We had been marking this ARM_MMU_IDX_NOTLB, move it to a real tlb.
Flush the tlb when invalidating stage 1+2 translations.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu-param.h |  2 +-
 target/arm/cpu.h       | 23 +++++++++++++----------
 target/arm/helper.c    |  4 +++-
 3 files changed, 17 insertions(+), 12 deletions(-)