Message ID | 20220914115217.117532-3-richard.henderson@linaro.org |
---|---|
State | New |
Series | target/arm: Do alignment check when translation disabled |
On Wed, 14 Sept 2022 at 13:47, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> If translation is disabled, the default memory type is Device,
> which requires alignment checking. Document, but defer, the
> more general case of per-page alignment checking.
>
> Reported-by: Idan Horowitz <idan.horowitz@gmail.com>
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1204
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  target/arm/helper.c | 38 ++++++++++++++++++++++++++++++++++++--
>  1 file changed, 36 insertions(+), 2 deletions(-)
>
> diff --git a/target/arm/helper.c b/target/arm/helper.c
> index d7bc467a2a..79609443aa 100644
> --- a/target/arm/helper.c
> +++ b/target/arm/helper.c
> @@ -10713,6 +10713,39 @@ ARMMMUIdx arm_mmu_idx(CPUARMState *env)
>      return arm_mmu_idx_el(env, arm_current_el(env));
>  }
>
> +/*
> + * Return true if memory alignment should be enforced.
> + */
> +static bool aprofile_require_alignment(CPUARMState *env, int el, uint64_t sctlr)
> +{
> +    /* Check the alignment enable bit. */
> +    if (sctlr & SCTLR_A) {
> +        return true;
> +    }
> +
> +    /*
> +     * If translation is disabled, then the default memory type
> +     * may be Device(-nGnRnE) instead of Normal, which requires that

"may be" ?

> +     * alignment be enforced.
> +     *
> +     * TODO: The more general case is translation enabled, with a per-page
> +     * check of the memory type as assigned via MAIR_ELx and the PTE.
> +     * We could arrange for a bit in MemTxAttrs to enforce alignment
> +     * via forced use of the softmmu slow path. Given that such pages
> +     * are intended for MMIO, where the slow path is required anyhow,
> +     * this should not result in extra overhead.
> +     */
> +    if (sctlr & SCTLR_M) {
> +        /* Translation enabled: memory type in PTE via MAIR_ELx. */
> +        return false;
> +    }
> +    if (el < 2 && (arm_hcr_el2_eff(env) & (HCR_DC | HCR_VM))) {
> +        /* Stage 2 translation enabled: memory type in PTE. */
> +        return false;
> +    }
> +    return true;

The SCTLR_EL1 docs say that if HCR_EL2.{DC,TGE} != {0,0} then we need to
treat SCTLR_EL1.M as if it is 0. DC is covered above, but do we need/want
to do anything special for TGE ? Maybe we just never get into this case
because TGE means regime_sctlr() is never SCTLR_EL1 ? I forget how it
works...

We also need to not do this for anything with ARM_FEATURE_PMSA :
with PMSA, if the MPU is disabled because SCTLR.M is 0 then the
default memory type depends on the address (it's defined by the
"default memory map", DDI0406C.d table B5-1) and isn't always Device.

We should also mention in the comment why we're doing this particular
special case even though we don't care to do full alignment checking
for Device memory accesses: because initial MMU-off code is a common
use-case where the guest will be working with RAM that's set up as
Device memory, and it's nice to be able to detect misaligned-access
bugs in it.

> +}
> +
>  static CPUARMTBFlags rebuild_hflags_common(CPUARMState *env, int fp_el,
>                                             ARMMMUIdx mmu_idx,
>                                             CPUARMTBFlags flags)
> @@ -10777,8 +10810,9 @@ static CPUARMTBFlags rebuild_hflags_a32(CPUARMState *env, int fp_el,
>  {
>      CPUARMTBFlags flags = {};
>      int el = arm_current_el(env);
> +    uint64_t sctlr = arm_sctlr(env, el);
>
> -    if (arm_sctlr(env, el) & SCTLR_A) {
> +    if (aprofile_require_alignment(env, el, sctlr)) {
>          DP_TBFLAG_ANY(flags, ALIGN_MEM, 1);
>      }
>
> @@ -10871,7 +10905,7 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
>
>      sctlr = regime_sctlr(env, stage1);
>
> -    if (sctlr & SCTLR_A) {
> +    if (aprofile_require_alignment(env, el, sctlr)) {
>          DP_TBFLAG_ANY(flags, ALIGN_MEM, 1);
>      }
>
> --
> 2.34.1

thanks
-- PMM
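[Editor's note: the sketch below shows one way the PMSA point from the review above could be folded into the posted helper. It is illustrative only, not the actual v2 patch; arm_feature(), ARM_FEATURE_PMSA, arm_hcr_el2_eff() and the SCTLR/HCR bit names are existing QEMU symbols, but this exact arrangement, and whether HCR_EL2.TGE needs any explicit handling here, remain open questions from the review.]

/*
 * Illustrative sketch only (not the v2 patch): the posted helper with
 * the PMSA concern from the review folded in.  Whether HCR_EL2.TGE needs
 * an explicit check here, or is already handled by the callers' choice
 * of translation regime, is left as the open question raised above.
 */
static bool aprofile_require_alignment(CPUARMState *env, int el, uint64_t sctlr)
{
    /* SCTLR_ELx.A: alignment checking explicitly enabled. */
    if (sctlr & SCTLR_A) {
        return true;
    }

    /*
     * PMSA: with the MPU disabled (SCTLR.M == 0) the default memory type
     * comes from the default memory map (DDI0406C.d table B5-1) and is
     * not always Device, so do not force alignment checks here.
     */
    if (arm_feature(env, ARM_FEATURE_PMSA)) {
        return false;
    }

    /* Stage 1 translation enabled: memory type comes from MAIR_ELx/PTE. */
    if (sctlr & SCTLR_M) {
        return false;
    }

    /* HCR_EL2.DC forces Normal, or stage 2 supplies the memory type. */
    if (el < 2 && (arm_hcr_el2_eff(env) & (HCR_DC | HCR_VM))) {
        return false;
    }

    /* MMU off: default memory type is Device-nGnRnE, so check alignment. */
    return true;
}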
On 9/22/22 08:31, Peter Maydell wrote:
> On Wed, 14 Sept 2022 at 13:47, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> If translation is disabled, the default memory type is Device,
>> which requires alignment checking. Document, but defer, the
>> more general case of per-page alignment checking.
>>
>> Reported-by: Idan Horowitz <idan.horowitz@gmail.com>
>> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1204
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>>  target/arm/helper.c | 38 ++++++++++++++++++++++++++++++++++++--
>>  1 file changed, 36 insertions(+), 2 deletions(-)
>>
>> diff --git a/target/arm/helper.c b/target/arm/helper.c
>> index d7bc467a2a..79609443aa 100644
>> --- a/target/arm/helper.c
>> +++ b/target/arm/helper.c
>> @@ -10713,6 +10713,39 @@ ARMMMUIdx arm_mmu_idx(CPUARMState *env)
>>      return arm_mmu_idx_el(env, arm_current_el(env));
>>  }
>>
>> +/*
>> + * Return true if memory alignment should be enforced.
>> + */
>> +static bool aprofile_require_alignment(CPUARMState *env, int el, uint64_t sctlr)
>> +{
>> +    /* Check the alignment enable bit. */
>> +    if (sctlr & SCTLR_A) {
>> +        return true;
>> +    }
>> +
>> +    /*
>> +     * If translation is disabled, then the default memory type
>> +     * may be Device(-nGnRnE) instead of Normal, which requires that
>
> "may be" ?

Indeed, weak wording: "is".

>
>> +     * alignment be enforced.
>> +     *
>> +     * TODO: The more general case is translation enabled, with a per-page
>> +     * check of the memory type as assigned via MAIR_ELx and the PTE.
>> +     * We could arrange for a bit in MemTxAttrs to enforce alignment
>> +     * via forced use of the softmmu slow path. Given that such pages
>> +     * are intended for MMIO, where the slow path is required anyhow,
>> +     * this should not result in extra overhead.

I have addressed this todo for v2. It turns out to be quite easy.

> The SCTLR_EL1 docs say that if HCR_EL2.{DC,TGE} != {0,0} then we need to
> treat SCTLR_EL1.M as if it is 0. DC is covered above, but do we need/want
> to do anything special for TGE ? Maybe we just never get into this case
> because TGE means regime_sctlr() is never SCTLR_EL1 ? I forget how it
> works...

It might be, I'll double-check.

> We also need to not do this for anything with ARM_FEATURE_PMSA :
> with PMSA, if the MPU is disabled because SCTLR.M is 0 then the
> default memory type depends on the address (it's defined by the
> "default memory map", DDI0406C.d table B5-1) and isn't always Device.

Ok, thanks for the pointer.

> We should also mention in the comment why we're doing this particular
> special case even though we don't care to do full alignment checking
> for Device memory accesses: because initial MMU-off code is a common
> use-case where the guest will be working with RAM that's set up as
> Device memory, and it's nice to be able to detect misaligned-access
> bugs in it.

Without the todo, I guess this goes away? I will have a comment about
the difference between whole-address space vs per-page alignment checking.


r~
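[Editor's note: the per-page handling Richard says he has done for v2 is not shown in this thread. Purely as a sketch of the idea described in the TODO, it could look something like the fragments below; the device_align attribute bit and the memattr_is_device() helper are invented placeholders, not real QEMU or MemTxAttrs members, and the actual v2 patch may take a different shape entirely.]

/*
 * Hypothetical sketch of the TODO above, not the real v2 change.
 * Idea: when the stage 1 walk decodes the MAIR attribute as Device,
 * record that in the transaction attributes so the softmmu slow path
 * (which Device/MMIO accesses take anyway) can enforce alignment.
 */

/* In the page table walk, after decoding the memory attributes: */
if (memattr_is_device(cacheattrs.attrs)) {  /* invented helper */
    attrs.device_align = 1;                 /* invented MemTxAttrs bit */
}

/* In the slow-path access handler, before performing the access: */
if (attrs.device_align && (addr & (size - 1))) {
    /* Raise the usual Arm alignment fault for this access. */
    arm_cpu_do_unaligned_access(cs, addr, access_type, mmu_idx, retaddr);
}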
diff --git a/target/arm/helper.c b/target/arm/helper.c
index d7bc467a2a..79609443aa 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -10713,6 +10713,39 @@ ARMMMUIdx arm_mmu_idx(CPUARMState *env)
     return arm_mmu_idx_el(env, arm_current_el(env));
 }

+/*
+ * Return true if memory alignment should be enforced.
+ */
+static bool aprofile_require_alignment(CPUARMState *env, int el, uint64_t sctlr)
+{
+    /* Check the alignment enable bit. */
+    if (sctlr & SCTLR_A) {
+        return true;
+    }
+
+    /*
+     * If translation is disabled, then the default memory type
+     * may be Device(-nGnRnE) instead of Normal, which requires that
+     * alignment be enforced.
+     *
+     * TODO: The more general case is translation enabled, with a per-page
+     * check of the memory type as assigned via MAIR_ELx and the PTE.
+     * We could arrange for a bit in MemTxAttrs to enforce alignment
+     * via forced use of the softmmu slow path. Given that such pages
+     * are intended for MMIO, where the slow path is required anyhow,
+     * this should not result in extra overhead.
+     */
+    if (sctlr & SCTLR_M) {
+        /* Translation enabled: memory type in PTE via MAIR_ELx. */
+        return false;
+    }
+    if (el < 2 && (arm_hcr_el2_eff(env) & (HCR_DC | HCR_VM))) {
+        /* Stage 2 translation enabled: memory type in PTE. */
+        return false;
+    }
+    return true;
+}
+
 static CPUARMTBFlags rebuild_hflags_common(CPUARMState *env, int fp_el,
                                            ARMMMUIdx mmu_idx,
                                            CPUARMTBFlags flags)
@@ -10777,8 +10810,9 @@ static CPUARMTBFlags rebuild_hflags_a32(CPUARMState *env, int fp_el,
 {
     CPUARMTBFlags flags = {};
     int el = arm_current_el(env);
+    uint64_t sctlr = arm_sctlr(env, el);

-    if (arm_sctlr(env, el) & SCTLR_A) {
+    if (aprofile_require_alignment(env, el, sctlr)) {
         DP_TBFLAG_ANY(flags, ALIGN_MEM, 1);
     }

@@ -10871,7 +10905,7 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,

     sctlr = regime_sctlr(env, stage1);

-    if (sctlr & SCTLR_A) {
+    if (aprofile_require_alignment(env, el, sctlr)) {
         DP_TBFLAG_ANY(flags, ALIGN_MEM, 1);
     }
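[Editor's note: to see concretely which configurations the new helper changes, here is a small standalone program, not part of the patch, that mirrors its decision logic. The bit positions and the hcr parameter are stand-ins for QEMU's real SCTLR/HCR definitions and arm_hcr_el2_eff(); only the logic is meant to match.]

/*
 * Standalone illustration of the decision made by aprofile_require_alignment()
 * in the patch above.  Bit values below are illustrative stand-ins.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SCTLR_A  (1u << 1)   /* illustrative stand-in definitions */
#define SCTLR_M  (1u << 0)
#define HCR_VM   (1u << 2)
#define HCR_DC   (1u << 3)

static bool require_alignment(int el, uint64_t sctlr, uint64_t hcr)
{
    if (sctlr & SCTLR_A) {
        return true;         /* explicit alignment checking enabled */
    }
    if (sctlr & SCTLR_M) {
        return false;        /* stage 1 on: memory type from MAIR/PTE */
    }
    if (el < 2 && (hcr & (HCR_DC | HCR_VM))) {
        return false;        /* stage 2 on or HCR.DC: type not the default */
    }
    return true;             /* MMU off: default type is Device-nGnRnE */
}

int main(void)
{
    struct { const char *desc; int el; uint64_t sctlr, hcr; } cases[] = {
        { "EL1, MMU off",              1, 0,       0      },
        { "EL1, MMU on",               1, SCTLR_M, 0      },
        { "EL1, MMU off, stage 2 on",  1, 0,       HCR_VM },
        { "EL1, MMU off, SCTLR.A set", 1, SCTLR_A, 0      },
    };
    for (size_t i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
        printf("%-28s -> %s\n", cases[i].desc,
               require_alignment(cases[i].el, cases[i].sctlr, cases[i].hcr)
               ? "alignment enforced" : "not enforced");
    }
    return 0;
}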
If translation is disabled, the default memory type is Device,
which requires alignment checking. Document, but defer, the
more general case of per-page alignment checking.

Reported-by: Idan Horowitz <idan.horowitz@gmail.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1204
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper.c | 38 ++++++++++++++++++++++++++++++++++++--
 1 file changed, 36 insertions(+), 2 deletions(-)