| Message ID | 20220620175235.60881-38-richard.henderson@linaro.org |
|---|---|
| State | Superseded |
| Series | target/arm: Scalable Matrix Extension |
On Mon, 20 Jun 2022 at 19:14, Richard Henderson <richard.henderson@linaro.org> wrote:
>
> We can handle both exception entry and exception return by
> hooking into aarch64_sve_change_el.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  target/arm/helper.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/target/arm/helper.c b/target/arm/helper.c
> index 26f4a4bc26..9c5b1a10eb 100644
> --- a/target/arm/helper.c
> +++ b/target/arm/helper.c
> @@ -11754,6 +11754,19 @@ void aarch64_sve_change_el(CPUARMState *env, int old_el,
>          return;
>      }
>
> +    old_a64 = old_el ? arm_el_is_aa64(env, old_el) : el0_a64;
> +    new_a64 = new_el ? arm_el_is_aa64(env, new_el) : el0_a64;
> +
> +    /*
> +     * Both AArch64.TakeException and AArch64.ExceptionReturn
> +     * invoke ResetSVEState when taking an exception from, or
> +     * returning to, AArch32 state when PSTATE.SM is enabled.
> +     */
> +    if (old_a64 != new_a64 && FIELD_EX64(env->svcr, SVCR, SM)) {
> +        arm_reset_sve_state(env);
> +        return;
> +    }

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 26f4a4bc26..9c5b1a10eb 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -11754,6 +11754,19 @@ void aarch64_sve_change_el(CPUARMState *env, int old_el,
         return;
     }
 
+    old_a64 = old_el ? arm_el_is_aa64(env, old_el) : el0_a64;
+    new_a64 = new_el ? arm_el_is_aa64(env, new_el) : el0_a64;
+
+    /*
+     * Both AArch64.TakeException and AArch64.ExceptionReturn
+     * invoke ResetSVEState when taking an exception from, or
+     * returning to, AArch32 state when PSTATE.SM is enabled.
+     */
+    if (old_a64 != new_a64 && FIELD_EX64(env->svcr, SVCR, SM)) {
+        arm_reset_sve_state(env);
+        return;
+    }
+
     /*
      * DDI0584A.d sec 3.2: "If SVE instructions are disabled or trapped
      * at ELx, or not available because the EL is in AArch32 state, then
@@ -11766,10 +11779,8 @@ void aarch64_sve_change_el(CPUARMState *env, int old_el,
      * we already have the correct register contents when encountering the
      * vq0->vq0 transition between EL0->EL1.
      */
-    old_a64 = old_el ? arm_el_is_aa64(env, old_el) : el0_a64;
     old_len = (old_a64 && !sve_exception_el(env, old_el)
                ? sve_vqm1_for_el(env, old_el) : 0);
-    new_a64 = new_el ? arm_el_is_aa64(env, new_el) : el0_a64;
     new_len = (new_a64 && !sve_exception_el(env, new_el)
                ? sve_vqm1_for_el(env, new_el) : 0);
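The new early-return path calls arm_reset_sve_state(), which is not shown in this hunk. As a rough guide to what the Arm ResetSVEState() pseudocode requires (and hence what that helper has to do), here is a minimal standalone C sketch; the SVEState struct and register sizes below are simplified stand-ins, not QEMU's CPUARMState:

```c
#include <stdint.h>
#include <string.h>

#define VQ_MAX 16   /* up to 2048-bit SVE vectors (16 quadwords) */

/* Simplified stand-in for the SVE portion of the CPU state. */
typedef struct {
    uint64_t zregs[32][VQ_MAX * 2];     /* Z0-Z31 */
    uint64_t pregs[17][VQ_MAX * 2 / 8]; /* P0-P15 plus FFR */
    uint32_t fpsr;
} SVEState;

/*
 * Model of ResetSVEState(): zero every Z and P register (FFR included)
 * and set FPSR to the value the pseudocode specifies, 0x0800009f.
 */
void reset_sve_state(SVEState *s)
{
    memset(s->zregs, 0, sizeof(s->zregs));
    memset(s->pregs, 0, sizeof(s->pregs));
    s->fpsr = 0x0800009f;
}
```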
We can handle both exception entry and exception return by
hooking into aarch64_sve_change_el.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)
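Because the same hook runs on both exception entry and exception return, the condition only has to notice that the register width changed across the EL transition while streaming SVE mode was active. A tiny self-contained sketch of that predicate, assuming the SME layout in which SVCR.SM is bit 0 (the bit FIELD_EX64(env->svcr, SVCR, SM) extracts); the helper name is hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed per the SME spec: SVCR.SM is bit 0. */
#define SVCR_SM (1ULL << 0)

/*
 * Mirror of the check added to aarch64_sve_change_el(): reset SVE state
 * only when the transition crosses between AArch32 and AArch64 while
 * PSTATE.SM is set.
 */
bool sme_needs_sve_reset(bool old_a64, bool new_a64, uint64_t svcr)
{
    return old_a64 != new_a64 && (svcr & SVCR_SM);
}
```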