Message ID: 20200623193658.623279-39-richard.henderson@linaro.org
State: New
Series: target/arm: Implement ARMv8.5-MemTag, system mode
On Tue, 23 Jun 2020 at 20:37, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> There are a number of paths by which the TBI is still intact
> for user-only in the SVE helpers.
>
> Because we currently always set TBI for user-only, we do not
> need to pass down the actual TBI setting from above, and we
> can remove the top byte in the inner-most primitives, so that
> none are forgotten. Moreover, this keeps the "dirty" pointer
> around at the higher levels, where we need it for any MTE checking.
>
> Since the normal case, especially for user-only, goes through
> RAM, this clearing merely adds two insns per page lookup, which
> will be completely in the noise.

Can we have an assert() somewhere suitable that TBI is set?
That way if we ever do have an SVE-capable linux-user which
doesn't set TBI for some reason we'll get a useful reminder
that we need to fix something. Otherwise

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM
On 6/25/20 5:52 AM, Peter Maydell wrote:
> On Tue, 23 Jun 2020 at 20:37, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> There are a number of paths by which the TBI is still intact
>> for user-only in the SVE helpers.
>>
>> Because we currently always set TBI for user-only, we do not
>> need to pass down the actual TBI setting from above, and we
>> can remove the top byte in the inner-most primitives, so that
>> none are forgotten. Moreover, this keeps the "dirty" pointer
>> around at the higher levels, where we need it for any MTE checking.
>>
>> Since the normal case, especially for user-only, goes through
>> RAM, this clearing merely adds two insns per page lookup, which
>> will be completely in the noise.
>
> Can we have an assert() somewhere suitable that TBI is set?
> That way if we ever do have an SVE-capable linux-user which
> doesn't set TBI for some reason we'll get a useful reminder
> that we need to fix something.

At what level would you like such an assert?

At present we have, in arm_cpu_reset,

    /*
     * Enable TBI0 and TBI1.  While the real kernel only enables TBI0,
     * turning on both here will produce smaller code and otherwise
     * make no difference to the user-level emulation.
     */
    env->cp15.tcr_el[1].raw_tcr = (3ULL << 37);

r~
On Thu, 25 Jun 2020 at 17:54, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> On 6/25/20 5:52 AM, Peter Maydell wrote:
> > On Tue, 23 Jun 2020 at 20:37, Richard Henderson
> > <richard.henderson@linaro.org> wrote:
> >>
> >> There are a number of paths by which the TBI is still intact
> >> for user-only in the SVE helpers.
> >>
> >> Because we currently always set TBI for user-only, we do not
> >> need to pass down the actual TBI setting from above, and we
> >> can remove the top byte in the inner-most primitives, so that
> >> none are forgotten. Moreover, this keeps the "dirty" pointer
> >> around at the higher levels, where we need it for any MTE checking.
> >>
> >> Since the normal case, especially for user-only, goes through
> >> RAM, this clearing merely adds two insns per page lookup, which
> >> will be completely in the noise.
> >
> > Can we have an assert() somewhere suitable that TBI is set?
> > That way if we ever do have an SVE-capable linux-user which
> > doesn't set TBI for some reason we'll get a useful reminder
> > that we need to fix something.
>
> At what level would you like such an assert?

I don't particularly care, as long as we get notified if our
assumption that TBI is always set gets broken (ie we change
that bit of CPU reset code for some reason).

thanks
-- PMM
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index ad974c2cc5..382fa82bc8 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3966,14 +3966,16 @@ static void sve_##NAME##_host(void *vd, intptr_t reg_off, void *host)  \
 static void sve_##NAME##_tlb(CPUARMState *env, void *vd, intptr_t reg_off, \
                              target_ulong addr, uintptr_t ra)              \
 {                                                                          \
-    *(TYPEE *)(vd + H(reg_off)) = (TYPEM)TLB(env, addr, ra);               \
+    *(TYPEE *)(vd + H(reg_off)) =                                          \
+        (TYPEM)TLB(env, useronly_clean_ptr(addr), ra);                     \
 }

 #define DO_ST_TLB(NAME, H, TYPEE, TYPEM, TLB)                              \
 static void sve_##NAME##_tlb(CPUARMState *env, void *vd, intptr_t reg_off, \
                              target_ulong addr, uintptr_t ra)              \
 {                                                                          \
-    TLB(env, addr, (TYPEM)*(TYPEE *)(vd + H(reg_off)), ra);                \
+    TLB(env, useronly_clean_ptr(addr),                                     \
+        (TYPEM)*(TYPEE *)(vd + H(reg_off)), ra);                           \
 }

 #define DO_LD_PRIM_1(NAME, H, TE, TM)                                      \
@@ -4091,6 +4093,19 @@ static bool sve_probe_page(SVEHostPage *info, bool nofault,
     int flags;

     addr += mem_off;
+
+    /*
+     * User-only currently always issues with TBI. See the comment
+     * above useronly_clean_ptr. Usually we clean this top byte away
+     * during translation, but we can't do that for e.g. vector + imm
+     * addressing modes.
+     *
+     * We currently always enable TBI for user-only, and do not provide
+     * a way to turn it off. So clean the pointer unconditionally here,
+     * rather than look it up here, or pass it down from above.
+     */
+    addr = useronly_clean_ptr(addr);
+
     flags = probe_access_flags(env, addr, access_type, mmu_idx, nofault,
                                &info->host, retaddr);
     info->flags = flags;
There are a number of paths by which the TBI is still intact
for user-only in the SVE helpers.

Because we currently always set TBI for user-only, we do not
need to pass down the actual TBI setting from above, and we
can remove the top byte in the inner-most primitives, so that
none are forgotten. Moreover, this keeps the "dirty" pointer
around at the higher levels, where we need it for any MTE checking.

Since the normal case, especially for user-only, goes through
RAM, this clearing merely adds two insns per page lookup, which
will be completely in the noise.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/sve_helper.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

-- 
2.25.1