Message ID: 20210729004647.282017-18-richard.henderson@linaro.org
State:      New
Series:     Unaligned accesses for user-only
On Thu, 29 Jul 2021 at 02:09, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Use the newly exposed do_unaligned_access hook from atomic_mmu_lookup,
> which has access to complete alignment info from the TCGMemOpIdx arg.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  accel/tcg/user-exec.c | 23 ++++++++++++++++++++++-
>  1 file changed, 22 insertions(+), 1 deletion(-)
>
> diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
> index 90d1a2d327..dd77e90789 100644
> --- a/accel/tcg/user-exec.c
> +++ b/accel/tcg/user-exec.c
> @@ -852,6 +852,16 @@ int cpu_signal_handler(int host_signum, void *pinfo,
>
>  /* The softmmu versions of these helpers are in cputlb.c. */
>
> +static void cpu_unaligned_access(CPUState *cpu, vaddr addr,
> +                                 MMUAccessType access_type,
> +                                 int mmu_idx, uintptr_t ra)
> +{
> +    CPUClass *cc = CPU_GET_CLASS(cpu);
> +
> +    cc->tcg_ops->do_unaligned_access(cpu, addr, access_type, mmu_idx, ra);
> +    g_assert_not_reached();
> +}

The softmmu version doesn't g_assert_not_reached(), I think
perhaps with the intent that a CPU implementation could
in some cases return without raising an exception to
mean "continue with the unaligned access". We should decide
whether we want the API to permit that, or else consistently
have both softmmu and useronly versions be marked noreturn
and with an assert, and we should document whichever we choose.

-- PMM
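For context, the softmmu counterpart in cputlb.c that Peter is comparing against looks roughly like the following (a paraphrased sketch, not the verbatim source): it forwards to the per-target hook and then simply returns, which is what leaves the "continue with the unaligned access" reading open.

    static void cpu_unaligned_access(CPUState *cpu, vaddr addr,
                                     MMUAccessType access_type,
                                     int mmu_idx, uintptr_t retaddr)
    {
        CPUClass *cc = CPU_GET_CLASS(cpu);

        /* Forward to the per-target hook; there is no
         * g_assert_not_reached() here, so if the hook returns,
         * execution continues in the caller. */
        cc->tcg_ops->do_unaligned_access(cpu, addr, access_type,
                                         mmu_idx, retaddr);
    }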
On 7/29/21 5:02 PM, Peter Maydell wrote:
> On Thu, 29 Jul 2021 at 02:09, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> Use the newly exposed do_unaligned_access hook from atomic_mmu_lookup,
>> which has access to complete alignment info from the TCGMemOpIdx arg.
>>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>>  accel/tcg/user-exec.c | 23 ++++++++++++++++++++++-
>>  1 file changed, 22 insertions(+), 1 deletion(-)
>>
>> diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
>> index 90d1a2d327..dd77e90789 100644
>> --- a/accel/tcg/user-exec.c
>> +++ b/accel/tcg/user-exec.c
>> @@ -852,6 +852,16 @@ int cpu_signal_handler(int host_signum, void *pinfo,
>>
>>  /* The softmmu versions of these helpers are in cputlb.c. */
>>
>> +static void cpu_unaligned_access(CPUState *cpu, vaddr addr,
>> +                                 MMUAccessType access_type,
>> +                                 int mmu_idx, uintptr_t ra)
>> +{
>> +    CPUClass *cc = CPU_GET_CLASS(cpu);
>> +
>> +    cc->tcg_ops->do_unaligned_access(cpu, addr, access_type, mmu_idx, ra);
>> +    g_assert_not_reached();
>> +}
>
> The softmmu version doesn't g_assert_not_reached(), I think
> perhaps with the intent that a CPU implementation could
> in some cases return without raising an exception to
> mean "continue with the unaligned access". We should decide
> whether we want the API to permit that, or else consistently
> have both softmmu and useronly versions be marked noreturn
> and with an assert, and we should document whichever we choose.

Agreed.  I'd rather use noreturn, which exposed these bugs:

  - "target/xtensa: clean up unaligned access"
    https://lists.gnu.org/archive/html/qemu-devel/2021-05/msg05890.html

  - "target/nios2: Mark raise_exception() as noreturn"
    https://lists.gnu.org/archive/html/qemu-devel/2021-07/msg07001.html
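To illustrate the noreturn contract being discussed, here is a minimal self-contained sketch with hypothetical stand-in types (CPUStateSketch and TCGCPUOpsSketch are inventions for this example, not QEMU's declarations). Putting the attribute on the function-pointer field means every implementation must raise an exception or otherwise never return, and the compiler can then flag hook implementations that accidentally fall through, which is exactly the class of bug the two patches above fixed.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct CPUStateSketch CPUStateSketch;   /* stand-in for CPUState */

    /* noreturn on the function-pointer field: callers may treat
     * everything after the call as unreachable. */
    typedef struct TCGCPUOpsSketch {
        void (*do_unaligned_access)(CPUStateSketch *cpu, unsigned long addr)
            __attribute__((noreturn));
    } TCGCPUOpsSketch;

    static void __attribute__((noreturn))
    raise_sigbus(CPUStateSketch *cpu, unsigned long addr)
    {
        fprintf(stderr, "unaligned access at 0x%lx\n", addr);
        exit(1);    /* in QEMU this would instead unwind to the cpu loop */
    }

    int main(void)
    {
        TCGCPUOpsSketch ops = { .do_unaligned_access = raise_sigbus };
        ops.do_unaligned_access(NULL, 0x1003);
        /* unreachable: the attribute makes that explicit to the compiler */
    }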
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 90d1a2d327..dd77e90789 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -852,6 +852,16 @@ int cpu_signal_handler(int host_signum, void *pinfo,
 
 /* The softmmu versions of these helpers are in cputlb.c. */
 
+static void cpu_unaligned_access(CPUState *cpu, vaddr addr,
+                                 MMUAccessType access_type,
+                                 int mmu_idx, uintptr_t ra)
+{
+    CPUClass *cc = CPU_GET_CLASS(cpu);
+
+    cc->tcg_ops->do_unaligned_access(cpu, addr, access_type, mmu_idx, ra);
+    g_assert_not_reached();
+}
+
 uint32_t cpu_ldub_data(CPUArchState *env, abi_ptr ptr)
 {
     uint32_t ret;
@@ -1230,11 +1240,22 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
                                TCGMemOpIdx oi, int size, int prot,
                                uintptr_t retaddr)
 {
+    MemOp mop = get_memop(oi);
+    int a_bits = get_alignment_bits(mop);
+    void *ret;
+
+    /* Enforce guest required alignment. */
+    if (unlikely(addr & ((1 << a_bits) - 1))) {
+        MMUAccessType t = prot == PAGE_READ ? MMU_DATA_LOAD : MMU_DATA_STORE;
+        cpu_unaligned_access(env_cpu(env), addr, t, get_mmuidx(oi), retaddr);
+    }
+
     /* Enforce qemu required alignment. */
     if (unlikely(addr & (size - 1))) {
         cpu_loop_exit_atomic(env_cpu(env), retaddr);
     }
-    void *ret = g2h(env_cpu(env), addr);
+
+    ret = g2h(env_cpu(env), addr);
     set_helper_retaddr(retaddr);
     return ret;
 }
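The guest-alignment check added in the second hunk is a plain power-of-two mask test: get_alignment_bits() yields the required alignment as a bit count, so (1 << a_bits) - 1 masks exactly the low address bits that must be zero. A self-contained sketch of that arithmetic (is_misaligned is a hypothetical name, not a QEMU helper):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* a_bits = log2 of the required alignment, e.g. 3 for 8-byte. */
    static bool is_misaligned(uint64_t addr, int a_bits)
    {
        return addr & (((uint64_t)1 << a_bits) - 1);
    }

    int main(void)
    {
        assert(!is_misaligned(0x1000, 3));  /* 8-byte aligned: ok */
        assert(is_misaligned(0x1004, 3));   /* bit 2 set: unaligned trap */
        assert(!is_misaligned(0x1004, 2));  /* 4-byte alignment suffices */
        return 0;
    }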
Use the newly exposed do_unaligned_access hook from atomic_mmu_lookup,
which has access to complete alignment info from the TCGMemOpIdx arg.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/user-exec.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

-- 
2.25.1