Message ID: 20221001162318.153420-24-richard.henderson@linaro.org
State:      New
Series:     target/arm: Implement FEAT_HAFDBS
On Sat, 1 Oct 2022 at 17:38, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Add a field to TARGET_PAGE_ENTRY_EXTRA to hold the guarded bit.
> In is_guarded_page, use probe_access_full instead of just guessing
> that the tlb entry is still present. Also handles the FIXME about
> executing from device memory.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  target/arm/cpu-param.h     |  8 ++++----
>  target/arm/cpu.h           | 13 -------------
>  target/arm/internals.h     |  1 +
>  target/arm/ptw.c           |  7 ++++---
>  target/arm/translate-a64.c | 22 ++++++++--------------
>  5 files changed, 17 insertions(+), 34 deletions(-)
>
> diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
> index 118ca0e5c0..689a9645dc 100644
> --- a/target/arm/cpu-param.h
> +++ b/target/arm/cpu-param.h
> @@ -32,12 +32,12 @@
>  # define TARGET_PAGE_BITS_MIN 10
>
>  /*
> - * Cache the attrs and sharability fields from the page table entry.
> + * Cache the attrs, sharability, and gp fields from the page table entry.
>   */
>  # define TARGET_PAGE_ENTRY_EXTRA \
> -    uint8_t pte_attrs; \
> -    uint8_t shareability;
> -
> +    uint8_t pte_attrs; \
> +    uint8_t shareability; \
> +    bool guarded;

I notice this now brings this very close to just having an ARMCacheAttrs
struct in it (in fact it's going to be one byte bigger than the
ARMCacheAttrs). But it's probably better to keep them separate, since we
care a lot more about keeping the TLB entry small, I suppose.

> diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
> index 5b67375f4e..22802d1d2f 100644
> --- a/target/arm/translate-a64.c
> +++ b/target/arm/translate-a64.c
> @@ -14601,22 +14601,16 @@ static bool is_guarded_page(CPUARMState *env, DisasContext *s)
>  #ifdef CONFIG_USER_ONLY
>      return page_get_flags(addr) & PAGE_BTI;
>  #else
> +    CPUTLBEntryFull *full;
> +    void *host;
>      int mmu_idx = arm_to_core_mmu_idx(s->mmu_idx);
> -    unsigned int index = tlb_index(env, mmu_idx, addr);
> -    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
> +    int flags;
>
> -    /*
> -     * We test this immediately after reading an insn, which means
> -     * that any normal page must be in the TLB. The only exception
> -     * would be for executing from flash or device memory, which
> -     * does not retain the TLB entry.
> -     *
> -     * FIXME: Assume false for those, for now. We could use
> -     * arm_cpu_get_phys_page_attrs_debug to re-read the page
> -     * table entry even for that case.
> -     */

I think we should keep at least some of this comment: the part
about the reason we can assert that probe_access_full() doesn't
return TLB_INVALID being that we tested immediately after the
insn read is still true, right?

> -    return (tlb_hit(entry->addr_code, addr) &&
> -            arm_tlb_bti_gp(&env_tlb(env)->d[mmu_idx].fulltlb[index].attrs));
> +    flags = probe_access_full(env, addr, MMU_INST_FETCH, mmu_idx,
> +                              false, &host, &full, 0);
> +    assert(!(flags & TLB_INVALID_MASK));
> +
> +    return full->guarded;
>  #endif
>  }

Otherwise
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM
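[For reference while reading the comparison above: the two layouts in
question, copied side by side from this patch's hunks. ARMCacheAttrs packs
its fields into 12 bits of bitfields, while the TARGET_PAGE_ENTRY_EXTRA
fields are three separate bytes. The juxtaposition is editorial; both
definitions appear verbatim in the diff below.]

/* internals.h: result of the page table walk; 12 bits of bitfields. */
typedef struct ARMCacheAttrs {
    unsigned int attrs:8;
    unsigned int shareability:2; /* SH field of the VMSAv8-64 PTEs */
    bool is_s2_format:1;
    bool guarded:1;              /* added by this patch */
} ARMCacheAttrs;

/* cpu-param.h: the per-TLB-entry extras; three separate bytes. */
# define TARGET_PAGE_ENTRY_EXTRA \
    uint8_t pte_attrs; \
    uint8_t shareability; \
    bool guarded;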
On 10/6/22 07:57, Peter Maydell wrote:
> On Sat, 1 Oct 2022 at 17:38, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> Add a field to TARGET_PAGE_ENTRY_EXTRA to hold the guarded bit.
>> In is_guarded_page, use probe_access_full instead of just guessing
>> that the tlb entry is still present. Also handles the FIXME about
>> executing from device memory.
>>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>>  target/arm/cpu-param.h     |  8 ++++----
>>  target/arm/cpu.h           | 13 -------------
>>  target/arm/internals.h     |  1 +
>>  target/arm/ptw.c           |  7 ++++---
>>  target/arm/translate-a64.c | 22 ++++++++--------------
>>  5 files changed, 17 insertions(+), 34 deletions(-)
>>
>> diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
>> index 118ca0e5c0..689a9645dc 100644
>> --- a/target/arm/cpu-param.h
>> +++ b/target/arm/cpu-param.h
>> @@ -32,12 +32,12 @@
>>  # define TARGET_PAGE_BITS_MIN 10
>>
>>  /*
>> - * Cache the attrs and sharability fields from the page table entry.
>> + * Cache the attrs, sharability, and gp fields from the page table entry.
>>   */
>>  # define TARGET_PAGE_ENTRY_EXTRA \
>> -    uint8_t pte_attrs; \
>> -    uint8_t shareability;
>> -
>> +    uint8_t pte_attrs; \
>> +    uint8_t shareability; \
>> +    bool guarded;
>
> I notice this now brings this very close to just having an ARMCacheAttrs
> struct in it (in fact it's going to be one byte bigger than the
> ARMCacheAttrs). But it's probably better to keep them separate, since we
> care a lot more about keeping the TLB entry small, I suppose.

I kept them as separate fields like this for simplicity. Since
CPUTLBEntryFull is 4- or 8-byte aligned (depending on the host), the
structure still has 1 or 5 bytes of padding after the addition of
this bool.

>> -    /*
>> -     * We test this immediately after reading an insn, which means
>> -     * that any normal page must be in the TLB. The only exception
>> -     * would be for executing from flash or device memory, which
>> -     * does not retain the TLB entry.
>> -     *
>> -     * FIXME: Assume false for those, for now. We could use
>> -     * arm_cpu_get_phys_page_attrs_debug to re-read the page
>> -     * table entry even for that case.
>> -     */
>
> I think we should keep at least some of this comment: the part
> about the reason we can assert that probe_access_full() doesn't
> return TLB_INVALID being that we tested immediately after the
> insn read is still true, right?

Yes.

r~
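[Richard's alignment point can be sanity-checked with a stand-alone
program. The struct below is a hypothetical stand-in for CPUTLBEntryFull,
not its real layout; only the sizeof behaviour is meant to illustrate the
argument that the new one-byte bool disappears into existing tail padding.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in: one 8-byte-aligned field plus the TARGET_PAGE_ENTRY_EXTRA
 * bytes. The real CPUTLBEntryFull has more fields than this. */
struct full_before {
    uint64_t xlat_section;   /* forces 8-byte alignment on LP64 hosts */
    uint8_t pte_attrs;
    uint8_t shareability;
};

struct full_after {
    uint64_t xlat_section;
    uint8_t pte_attrs;
    uint8_t shareability;
    bool guarded;            /* the new one-byte field */
};

int main(void)
{
    /* On a 64-bit host both print 16: 8 + 2 + 6 bytes of padding
     * before the change, 8 + 3 + 5 after, so sizeof is unchanged. */
    printf("before: %zu\n", sizeof(struct full_before));
    printf("after:  %zu\n", sizeof(struct full_after));
    return 0;
}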
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index 118ca0e5c0..689a9645dc 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -32,12 +32,12 @@
 # define TARGET_PAGE_BITS_MIN 10
 
 /*
- * Cache the attrs and sharability fields from the page table entry.
+ * Cache the attrs, sharability, and gp fields from the page table entry.
  */
 # define TARGET_PAGE_ENTRY_EXTRA \
-    uint8_t pte_attrs; \
-    uint8_t shareability;
-
+    uint8_t pte_attrs; \
+    uint8_t shareability; \
+    bool guarded;
 #endif
 
 #define NB_MMU_MODES 8
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 2694a93894..c8cad2ef7c 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3382,19 +3382,6 @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
 /* Shared between translate-sve.c and sve_helper.c. */
 extern const uint64_t pred_esz_masks[5];
 
-/* Helper for the macros below, validating the argument type. */
-static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
-{
-    return x;
-}
-
-/*
- * Lvalue macros for ARM TLB bits that we must cache in the TCG TLB.
- * Using these should be a bit more self-documenting than using the
- * generic target bits directly.
- */
-#define arm_tlb_bti_gp(x) (typecheck_memtxattrs(x)->target_tlb_bit0)
-
 /*
  * AArch64 usage of the PAGE_TARGET_* bits for linux-user.
  * Note that with the Linux kernel, PROT_MTE may not be cleared by mprotect
diff --git a/target/arm/internals.h b/target/arm/internals.h
index fd17aee459..a50189e2e4 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1067,6 +1067,7 @@ typedef struct ARMCacheAttrs {
     unsigned int attrs:8;
     unsigned int shareability:2; /* as in the SH field of the VMSAv8-64 PTEs */
     bool is_s2_format:1;
+    bool guarded:1;              /* guarded bit of the v8-64 PTE */
 } ARMCacheAttrs;
 
 /* Fields that are valid upon success. */
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index 1bc194ffa1..ccfef2caca 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -1319,9 +1319,10 @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
          */
         result->f.attrs.secure = false;
     }
-    /* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */
-    if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) {
-        arm_tlb_bti_gp(&result->f.attrs) = true;
+
+    /* When in aarch64 mode, and BTI is enabled, remember GP in the TLB. */
+    if (aarch64 && cpu_isar_feature(aa64_bti, cpu)) {
+        result->f.guarded = guarded;
     }
 
     if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 5b67375f4e..22802d1d2f 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -14601,22 +14601,16 @@ static bool is_guarded_page(CPUARMState *env, DisasContext *s)
 #ifdef CONFIG_USER_ONLY
     return page_get_flags(addr) & PAGE_BTI;
 #else
+    CPUTLBEntryFull *full;
+    void *host;
     int mmu_idx = arm_to_core_mmu_idx(s->mmu_idx);
-    unsigned int index = tlb_index(env, mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+    int flags;
 
-    /*
-     * We test this immediately after reading an insn, which means
-     * that any normal page must be in the TLB. The only exception
-     * would be for executing from flash or device memory, which
-     * does not retain the TLB entry.
-     *
-     * FIXME: Assume false for those, for now. We could use
-     * arm_cpu_get_phys_page_attrs_debug to re-read the page
-     * table entry even for that case.
-     */
-    return (tlb_hit(entry->addr_code, addr) &&
-            arm_tlb_bti_gp(&env_tlb(env)->d[mmu_idx].fulltlb[index].attrs));
+    flags = probe_access_full(env, addr, MMU_INST_FETCH, mmu_idx,
+                              false, &host, &full, 0);
+    assert(!(flags & TLB_INVALID_MASK));
+
+    return full->guarded;
 #endif
 }
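[Given the agreement in the review above to keep part of the removed
comment, a respin of is_guarded_page() would plausibly look like the
sketch below. The retained comment wording and the addr initialization
are illustrative, not necessarily what was committed.]

static bool is_guarded_page(CPUARMState *env, DisasContext *s)
{
    uint64_t addr = s->base.pc_first;
#ifdef CONFIG_USER_ONLY
    return page_get_flags(addr) & PAGE_BTI;
#else
    CPUTLBEntryFull *full;
    void *host;
    int mmu_idx = arm_to_core_mmu_idx(s->mmu_idx);
    int flags;

    /*
     * We test this immediately after reading an insn, which means that
     * the TLB entry must be present and valid, and thus this access
     * cannot fault -- hence the assert on TLB_INVALID_MASK below.
     */
    flags = probe_access_full(env, addr, MMU_INST_FETCH, mmu_idx,
                              false, &host, &full, 0);
    assert(!(flags & TLB_INVALID_MASK));

    return full->guarded;
#endif
}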
Add a field to TARGET_PAGE_ENTRY_EXTRA to hold the guarded bit.
In is_guarded_page, use probe_access_full instead of just guessing
that the tlb entry is still present. Also handles the FIXME about
executing from device memory.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu-param.h     |  8 ++++----
 target/arm/cpu.h           | 13 -------------
 target/arm/internals.h     |  1 +
 target/arm/ptw.c           |  7 ++++---
 target/arm/translate-a64.c | 22 ++++++++--------------
 5 files changed, 17 insertions(+), 34 deletions(-)