Message ID: 20191011134744.2477-22-richard.henderson@linaro.org
State: New
Series: [v5,01/22] target/arm: Add MTE_ACTIVE to tb_flags
On Fri, 11 Oct 2019 at 14:50, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> The process by which one goes from an address space plus physical
> address to a host pointer is complex. It is easiest to reuse the
> mechanism already present within cputlb, and letting that cache
> the results.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  target/arm/cpu-param.h |  2 +-
>  target/arm/cpu.h       | 12 +++++++++---
>  target/arm/internals.h |  2 ++
>  target/arm/helper.c    | 25 +++++++++++++++++++++++--
>  4 files changed, 35 insertions(+), 6 deletions(-)
>
> diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
> index 6e6948e960..18ac562346 100644
> --- a/target/arm/cpu-param.h
> +++ b/target/arm/cpu-param.h
> @@ -29,6 +29,6 @@
>  # define TARGET_PAGE_BITS_MIN 10
>  #endif
>
> -#define NB_MMU_MODES 8
> +#define NB_MMU_MODES 9
>
>  #endif
> diff --git a/target/arm/cpu.h b/target/arm/cpu.h
> index faca43ea78..c3609ef9d5 100644
> --- a/target/arm/cpu.h
> +++ b/target/arm/cpu.h
> @@ -2854,8 +2854,8 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
>  #define ARM_MMU_IDX_M_NEGPRI 0x2
>  #define ARM_MMU_IDX_M_S 0x4
>
> -#define ARM_MMU_IDX_TYPE_MASK (~0x7)
> -#define ARM_MMU_IDX_COREIDX_MASK 0x7
> +#define ARM_MMU_IDX_TYPE_MASK (~0xf)
> +#define ARM_MMU_IDX_COREIDX_MASK 0xf
>
>  typedef enum ARMMMUIdx {
>      ARMMMUIdx_S12NSE0 = 0 | ARM_MMU_IDX_A,
> @@ -2865,6 +2865,9 @@ typedef enum ARMMMUIdx {
>      ARMMMUIdx_S1SE0 = 4 | ARM_MMU_IDX_A,
>      ARMMMUIdx_S1SE1 = 5 | ARM_MMU_IDX_A,
>      ARMMMUIdx_S2NS = 6 | ARM_MMU_IDX_A,
> +    ARMMMUIdx_TagNS = 7 | ARM_MMU_IDX_A,
> +    ARMMMUIdx_TagS = 8 | ARM_MMU_IDX_A,
> +
>      ARMMMUIdx_MUser = 0 | ARM_MMU_IDX_M,
>      ARMMMUIdx_MPriv = 1 | ARM_MMU_IDX_M,
>      ARMMMUIdx_MUserNegPri = 2 | ARM_MMU_IDX_M,
> @@ -2891,6 +2894,8 @@ typedef enum ARMMMUIdxBit {
>      ARMMMUIdxBit_S1SE0 = 1 << 4,
>      ARMMMUIdxBit_S1SE1 = 1 << 5,
>      ARMMMUIdxBit_S2NS = 1 << 6,
> +    ARMMMUIdxBit_TagNS = 1 << 7,
> +    ARMMMUIdxBit_TagS = 1 << 8,
>
>      ARMMMUIdxBit_MUser = 1 << 0,
>      ARMMMUIdxBit_MPriv = 1 << 1,
>      ARMMMUIdxBit_MUserNegPri = 1 << 2,
> @@ -3254,7 +3259,8 @@ enum {
>  /* Return the address space index to use for a memory access */
>  static inline int arm_asidx_from_attrs(CPUState *cs, MemTxAttrs attrs)
>  {
> -    return attrs.secure ? ARMASIdx_S : ARMASIdx_NS;
> +    return ((attrs.target_tlb_bit2 ? ARMASIdx_TagNS : ARMASIdx_NS)
> +            + attrs.secure);

If you want to do the "just add attrs.secure" trick, can we have a
build-time assert that ARMASIdx_S is ARMASIdx_NS + 1, and ditto for
TagNS/TagS, please? It seems like the kind of thing that will catch
us out later on.

>  }

>      if (env->cp15.hcr_el2 & HCR_TGE) {
> @@ -10662,7 +10671,9 @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
>                     target_ulong *page_size,
>                     ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
>  {
> -    if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
> +    switch (mmu_idx) {
> +    case ARMMMUIdx_S12NSE0:
> +    case ARMMMUIdx_S12NSE1:
>          /* Call ourselves recursively to do the stage 1 and then stage 2
>           * translations.
>           */
> @@ -10713,6 +10724,16 @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
>           */
>          mmu_idx = stage_1_mmu_idx(mmu_idx);
>      }
> +        break;
> +
> +    case ARMMMUIdx_TagS:
> +    case ARMMMUIdx_TagNS:
> +        /* Indicate tag memory to arm_asidx_from_attrs. */
> +        attrs->target_tlb_bit2 = true;
> +        break;

So here we fall through to the "handle a stage 1 lookup" code, which:
 * sets attrs->secure
 * sets attrs->user (always false, so we could have left it alone)
 * skips the FCSE handling (as we're v8)
 * skips the PMSA handling
 * hits the regime_translation_disabled() check, which fills in
   *phys_ptr, *prot and *page_size and returns

Maybe it would be clearer if this case here just did all of that:

    case ARMMMUIdx_TagS:
        attrs->secure = true;
        /* fall through */
    case ARMMMUIdx_TagNS:
        /* Indicate tag memory to arm_asidx_from_attrs. */
        attrs->target_tlb_bit2 = true;
        *phys_ptr = address;
        *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
        *page_size = TARGET_PAGE_SIZE;
        return 0;

Did I miss anything out? Or are we going to want more things which are
common between the stage 1 codepath and the "just a tag ram access"
case in future?

thanks
-- PMM
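The adjacency assumption Peter wants asserted can be restated as a standalone sketch. The literal enum values below are assumptions for illustration (the real enum lives in target/arm/cpu.h), and QEMU proper would use its QEMU_BUILD_BUG_ON() macro where plain C11 _Static_assert appears here:

```c
#include <assert.h>

/*
 * Illustrative mirror of the ARMASIdx layout the patch relies on;
 * these values are assumed, not copied from QEMU.
 */
typedef enum ARMASIdx {
    ARMASIdx_NS = 0,
    ARMASIdx_S = 1,
    ARMASIdx_TagNS = 2,
    ARMASIdx_TagS = 3,
} ARMASIdx;

/*
 * The build-time check requested above: "just add attrs.secure" is
 * only valid if each Secure index directly follows its Non-secure
 * partner.
 */
_Static_assert(ARMASIdx_S == ARMASIdx_NS + 1, "NS/S must be adjacent");
_Static_assert(ARMASIdx_TagS == ARMASIdx_TagNS + 1,
               "TagNS/TagS must be adjacent");

/* The arithmetic from the patch's arm_asidx_from_attrs(), reduced to ints. */
int asidx_from_attrs(int secure, int target_tlb_bit2)
{
    return (target_tlb_bit2 ? ARMASIdx_TagNS : ARMASIdx_NS) + secure;
}
```

With the asserts in place, any later reordering of the enum that breaks the addition would fail at compile time rather than silently picking the wrong address space.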
On 12/6/19 3:46 AM, Peter Maydell wrote:
>> +    case ARMMMUIdx_TagS:
>> +    case ARMMMUIdx_TagNS:
>> +        /* Indicate tag memory to arm_asidx_from_attrs. */
>> +        attrs->target_tlb_bit2 = true;
>> +        break;
>
> So here we fall through to the "handle a stage 1 lookup" code, which:
>  * sets attrs->secure
>  * sets attrs->user (always false, so we could have left it alone)
>  * skips the FCSE handling (as we're v8)
>  * skips the PMSA handling
>  * hits the regime_translation_disabled() check, which fills in
>    *phys_ptr, *prot and *page_size and returns

Exactly.

> Maybe it would be clearer if this case here just did all of that:
>
>     case ARMMMUIdx_TagS:
>         attrs->secure = true;
>         /* fall through */
>     case ARMMMUIdx_TagNS:
>         /* Indicate tag memory to arm_asidx_from_attrs. */
>         attrs->target_tlb_bit2 = true;
>         *phys_ptr = address;
>         *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
>         *page_size = TARGET_PAGE_SIZE;
>         return 0;
>
> Did I miss anything out?

I think that's about it. I thought about doing exactly this.

Also, this is a better location if I ever do something about the TODO
in the next patch, wherein I talk about mapping not from physical
address but from the normal ram's ramaddr_t, so as to cache that
mapping step as well.

r~
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index 6e6948e960..18ac562346 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -29,6 +29,6 @@
 # define TARGET_PAGE_BITS_MIN 10
 #endif

-#define NB_MMU_MODES 8
+#define NB_MMU_MODES 9

 #endif
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index faca43ea78..c3609ef9d5 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2854,8 +2854,8 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
 #define ARM_MMU_IDX_M_NEGPRI 0x2
 #define ARM_MMU_IDX_M_S 0x4

-#define ARM_MMU_IDX_TYPE_MASK (~0x7)
-#define ARM_MMU_IDX_COREIDX_MASK 0x7
+#define ARM_MMU_IDX_TYPE_MASK (~0xf)
+#define ARM_MMU_IDX_COREIDX_MASK 0xf

 typedef enum ARMMMUIdx {
     ARMMMUIdx_S12NSE0 = 0 | ARM_MMU_IDX_A,
@@ -2865,6 +2865,9 @@ typedef enum ARMMMUIdx {
     ARMMMUIdx_S1SE0 = 4 | ARM_MMU_IDX_A,
     ARMMMUIdx_S1SE1 = 5 | ARM_MMU_IDX_A,
     ARMMMUIdx_S2NS = 6 | ARM_MMU_IDX_A,
+    ARMMMUIdx_TagNS = 7 | ARM_MMU_IDX_A,
+    ARMMMUIdx_TagS = 8 | ARM_MMU_IDX_A,
+
     ARMMMUIdx_MUser = 0 | ARM_MMU_IDX_M,
     ARMMMUIdx_MPriv = 1 | ARM_MMU_IDX_M,
     ARMMMUIdx_MUserNegPri = 2 | ARM_MMU_IDX_M,
@@ -2891,6 +2894,8 @@ typedef enum ARMMMUIdxBit {
     ARMMMUIdxBit_S1SE0 = 1 << 4,
     ARMMMUIdxBit_S1SE1 = 1 << 5,
     ARMMMUIdxBit_S2NS = 1 << 6,
+    ARMMMUIdxBit_TagNS = 1 << 7,
+    ARMMMUIdxBit_TagS = 1 << 8,

     ARMMMUIdxBit_MUser = 1 << 0,
     ARMMMUIdxBit_MPriv = 1 << 1,
     ARMMMUIdxBit_MUserNegPri = 1 << 2,
@@ -3254,7 +3259,8 @@ enum {
 /* Return the address space index to use for a memory access */
 static inline int arm_asidx_from_attrs(CPUState *cs, MemTxAttrs attrs)
 {
-    return attrs.secure ? ARMASIdx_S : ARMASIdx_NS;
+    return ((attrs.target_tlb_bit2 ? ARMASIdx_TagNS : ARMASIdx_NS)
+            + attrs.secure);
 }

 /* Return the AddressSpace to use for a memory access
diff --git a/target/arm/internals.h b/target/arm/internals.h
index a434743b15..dfa395eb35 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -828,6 +828,7 @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
     case ARMMMUIdx_S1NSE1:
     case ARMMMUIdx_S1E2:
     case ARMMMUIdx_S2NS:
+    case ARMMMUIdx_TagNS:
     case ARMMMUIdx_MPrivNegPri:
     case ARMMMUIdx_MUserNegPri:
     case ARMMMUIdx_MPriv:
@@ -836,6 +837,7 @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
     case ARMMMUIdx_S1E3:
     case ARMMMUIdx_S1SE0:
     case ARMMMUIdx_S1SE1:
+    case ARMMMUIdx_TagS:
     case ARMMMUIdx_MSPrivNegPri:
     case ARMMMUIdx_MSUserNegPri:
     case ARMMMUIdx_MSPriv:
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 17981d7c48..3147469899 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -8658,9 +8658,18 @@ static inline bool regime_translation_disabled(CPUARMState *env,
         }
     }

-    if (mmu_idx == ARMMMUIdx_S2NS) {
+    switch (mmu_idx) {
+    case ARMMMUIdx_S2NS:
         /* HCR.DC means HCR.VM behaves as 1 */
         return (env->cp15.hcr_el2 & (HCR_DC | HCR_VM)) == 0;
+
+    case ARMMMUIdx_TagS:
+    case ARMMMUIdx_TagNS:
+        /* These indexes are qemu internal, and are physically mapped. */
+        return true;
+
+    default:
+        break;
     }

     if (env->cp15.hcr_el2 & HCR_TGE) {
@@ -10662,7 +10671,9 @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                    target_ulong *page_size,
                    ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
 {
-    if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
+    switch (mmu_idx) {
+    case ARMMMUIdx_S12NSE0:
+    case ARMMMUIdx_S12NSE1:
         /* Call ourselves recursively to do the stage 1 and then stage 2
          * translations.
          */
@@ -10713,6 +10724,16 @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
          */
         mmu_idx = stage_1_mmu_idx(mmu_idx);
     }
+        break;
+
+    case ARMMMUIdx_TagS:
+    case ARMMMUIdx_TagNS:
+        /* Indicate tag memory to arm_asidx_from_attrs. */
+        attrs->target_tlb_bit2 = true;
+        break;
+
+    default:
+        break;
     }

     /* The page table entries may downgrade secure to non-secure, but
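The net effect of the two helper.c hunks for the qemu-internal Tag* indexes can be distilled into a standalone sketch: once regime_translation_disabled() returns true for them, the lookup degenerates to an identity mapping with full permissions. The function name, prototype, and the PAGE_*/TARGET_PAGE_SIZE values below are simplified stand-ins, not QEMU's actual definitions:

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for QEMU's protection flags and page size. */
enum { PAGE_READ = 1, PAGE_WRITE = 2, PAGE_EXEC = 4 };
enum { TARGET_PAGE_SIZE = 4096 };

/*
 * Hypothetical distillation of what get_phys_addr() ends up doing for
 * the TagS/TagNS indexes: physical address equals input address, all
 * access permitted, one page at a time.  Returns false for "no fault",
 * matching get_phys_addr()'s boolean convention.
 */
bool tag_phys_addr(uint64_t address, uint64_t *phys_ptr,
                   int *prot, uint64_t *page_size)
{
    *phys_ptr = address;                          /* identity mapping */
    *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;   /* full permissions */
    *page_size = TARGET_PAGE_SIZE;
    return false;                                 /* never faults */
}
```

This is also exactly the behavior Peter's suggested consolidated case arm would make explicit at the top of get_phys_addr().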
The process by which one goes from an address space plus physical
address to a host pointer is complex. It is easiest to reuse the
mechanism already present within cputlb, and let that cache the
results.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu-param.h |  2 +-
 target/arm/cpu.h       | 12 +++++++++---
 target/arm/internals.h |  2 ++
 target/arm/helper.c    | 25 +++++++++++++++++++++++--
 4 files changed, 35 insertions(+), 6 deletions(-)

--
2.17.1