| Message ID | 20230414160413.549801-4-peter.maydell@linaro.org |
|---|---|
| State | Superseded |
| Series | target/arm: Fix handling of VSTCR_EL2.SW and VTCR_EL2.NSW |
On 4/14/23 18:04, Peter Maydell wrote:
> In get_phys_addr_twostage() when we set up the stage 2 translation,
> we currently incorrectly set all of in_mmu_idx, in_ptw_idx and
> in_secure based on s2walk_secure.
>
> Here s2walk_secure is true if we should be doing this stage 2
> walk to physical memory. ipa_secure is true if the intermediate
> physical address we got from stage 1 is secure. The VSTCR_EL2.SW
> and VTCR_EL2.NSW bits allow the guest to configure secure EL2
> so that these two things are different, eg:
>  * a stage 2 walk for an NS IPA done to secure physical memory
>    (where the translation table base address and other parameters
>    for the walk come from the NS control registers VTTBR_EL2
>    and VTCR_EL2)
>  * a stage 2 walk for an S IPA done to non-secure physical memory
>    (where the parameters for the walk come from the S control
>    registers VSTTBR_EL2 and VSTCR_EL2)
>
> To tell get_phys_addr_lpae() to do this, we need to pass in an
> in_mmu_idx based on ipa_secure, and an in_ptw_idx based on
> s2walk_secure. The in_secure parameter follows in_mmu_idx, as usual.
>
> Note that this change relies on fixes in the previous two commits
> ("Don't allow stage 2 page table walks to downgrade to NS" and
> "Set ptw->out_secure correctly for stage 2 translations").
>
> Cc: qemu-stable@nongnu.org
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1600
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> ---
> This change also means that v8M, which also uses
> get_phys_addr_twostage(), is no longer using a ptw->in_mmu_idx
> calculated based on s2walk_secure, which was always a little
> strange given that v8M doesn't do any kind of s2 walk, even
> though it didn't produce incorrect behaviour.
> ---
>  target/arm/ptw.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~
On Fri, 14 Apr 2023 at 17:04, Peter Maydell <peter.maydell@linaro.org> wrote:
>
> In get_phys_addr_twostage() when we set up the stage 2 translation,
> we currently incorrectly set all of in_mmu_idx, in_ptw_idx and
> in_secure based on s2walk_secure.
>
> Here s2walk_secure is true if we should be doing this stage 2
> walk to physical memory. ipa_secure is true if the intermediate
> physical address we got from stage 1 is secure. The VSTCR_EL2.SW
> and VTCR_EL2.NSW bits allow the guest to configure secure EL2
> so that these two things are different, eg:
>  * a stage 2 walk for an NS IPA done to secure physical memory
>    (where the translation table base address and other parameters
>    for the walk come from the NS control registers VTTBR_EL2
>    and VTCR_EL2)
>  * a stage 2 walk for an S IPA done to non-secure physical memory
>    (where the parameters for the walk come from the S control
>    registers VSTTBR_EL2 and VSTCR_EL2)
>
> To tell get_phys_addr_lpae() to do this, we need to pass in an
> in_mmu_idx based on ipa_secure, and an in_ptw_idx based on
> s2walk_secure. The in_secure parameter follows in_mmu_idx, as usual.

Looking again at this patchset, I think these changes are right, but
we might still be missing one -- in get_phys_addr_with_struct(), when
we set up the ptw struct for the stage 1 walk, don't we also need to
look at NSW to correctly set ptw->in_ptw_idx? At the moment we do that
based only on is_secure. Otherwise the S2 page table walks we do for
the S1 page table walks won't honour NSW/SW correctly, I think.

(At the moment we sort of handle that in S1_ptw_translate(), but it
looks like only by saying "once we've done the s2 walk, coerce the
result into the right address space", so we won't actually do the s2
walk itself in the right address space.)

-- PMM
```diff
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index c1e124df495..2ee2b0b9241 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -2741,9 +2741,9 @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
     }
 
     is_el0 = ptw->in_mmu_idx == ARMMMUIdx_Stage1_E0;
-    ptw->in_mmu_idx = s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
+    ptw->in_mmu_idx = ipa_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
     ptw->in_ptw_idx = s2walk_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;
-    ptw->in_secure = s2walk_secure;
+    ptw->in_secure = ipa_secure;
 
     /*
      * S1 is done, now do S2 translation.
```
In get_phys_addr_twostage() when we set up the stage 2 translation,
we currently incorrectly set all of in_mmu_idx, in_ptw_idx and
in_secure based on s2walk_secure.

Here s2walk_secure is true if we should be doing this stage 2
walk to physical memory. ipa_secure is true if the intermediate
physical address we got from stage 1 is secure. The VSTCR_EL2.SW
and VTCR_EL2.NSW bits allow the guest to configure secure EL2
so that these two things are different, eg:
 * a stage 2 walk for an NS IPA done to secure physical memory
   (where the translation table base address and other parameters
   for the walk come from the NS control registers VTTBR_EL2
   and VTCR_EL2)
 * a stage 2 walk for an S IPA done to non-secure physical memory
   (where the parameters for the walk come from the S control
   registers VSTTBR_EL2 and VSTCR_EL2)

To tell get_phys_addr_lpae() to do this, we need to pass in an
in_mmu_idx based on ipa_secure, and an in_ptw_idx based on
s2walk_secure. The in_secure parameter follows in_mmu_idx, as usual.

Note that this change relies on fixes in the previous two commits
("Don't allow stage 2 page table walks to downgrade to NS" and
"Set ptw->out_secure correctly for stage 2 translations").

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1600
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
This change also means that v8M, which also uses
get_phys_addr_twostage(), is no longer using a ptw->in_mmu_idx
calculated based on s2walk_secure, which was always a little
strange given that v8M doesn't do any kind of s2 walk, even
though it didn't produce incorrect behaviour.
---
 target/arm/ptw.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)