Message ID: 20211208231154.392029-5-richard.henderson@linaro.org
State: New
Series: target/arm: Implement LVA, LPA, LPA2 features
Richard Henderson <richard.henderson@linaro.org> writes:

> This feature is relatively small, as it applies only to
> 64k pages and thus requires no additional changes to the
> table descriptor walking algorithm, only a change to the
> minimum TSZ (which is the inverse of the maximum virtual
> address space size).
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
On Wed, 8 Dec 2021 at 23:16, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> This feature is relatively small, as it applies only to
> 64k pages and thus requires no additional changes to the
> table descriptor walking algorithm, only a change to the
> minimum TSZ (which is the inverse of the maximum virtual
> address space size).
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

FEAT_LVA also expands the size of the VA field in
DBGBVR<n>_EL1. We currently hardcode the size of that
in hw_breakpoint_update() where we do:
    addr = sextract64(bvr, 0, 49) & ~3ULL;

This is also true of DBGWVR<n>_EL1, except that there
we seem to have chosen to take advantage of the spec
defining the high bits of the register as RESS (ie
sign-extended) and we always use all of the address bits
regardless. Maybe we could do something similar with DBGBVR.

(Similarly we use all the bits in the VBAR_ELx so that
code needs no changes.)

Otherwise looks good.

-- PMM
On 1/7/22 07:23, Peter Maydell wrote:
> On Wed, 8 Dec 2021 at 23:16, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> This feature is relatively small, as it applies only to
>> 64k pages and thus requires no additional changes to the
>> table descriptor walking algorithm, only a change to the
>> minimum TSZ (which is the inverse of the maximum virtual
>> address space size).
>>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>
> FEAT_LVA also expands the size of the VA field in
> DBGBVR<n>_EL1. We currently hardcode the size of that
> in hw_breakpoint_update() where we do:
>     addr = sextract64(bvr, 0, 49) & ~3ULL;
>
> This is also true of DBGWVR<n>_EL1, except that there
> we seem to have chosen to take advantage of the spec
> defining the high bits of the register as RESS (ie
> sign-extended) and we always use all of the address bits
> regardless. Maybe we could do something similar with DBGBVR.

We treat DBGBVR and DBGWVR similarly, with the exception that
DBGBVR is context dependent, so we must wait until we interpret
it together with DBGBCR.

However, I think the combination of IMPLEMENTATION DEFINED for
storing the value as written and CONSTRAINED UNPREDICTABLE for
comparing the RESS bits means that we're allowed to rely on
software to perform the appropriate extension, and store and
compare the entire register. I'll fix this in a separate patch.

r~
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index 7f38d33b8e..5f9c288b1a 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -11,7 +11,7 @@
 #ifdef TARGET_AARCH64
 # define TARGET_LONG_BITS 64
 # define TARGET_PHYS_ADDR_SPACE_BITS 48
-# define TARGET_VIRT_ADDR_SPACE_BITS 48
+# define TARGET_VIRT_ADDR_SPACE_BITS 52
 #else
 # define TARGET_LONG_BITS 32
 # define TARGET_PHYS_ADDR_SPACE_BITS 40
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index e33f37b70a..3149000004 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -4288,6 +4288,11 @@ static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
 }
 
+static inline bool isar_feature_aa64_lva(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, VARANGE) != 0;
+}
+
 static inline bool isar_feature_aa64_tts2uxn(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, XNX) != 0;
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index 15245a60a8..f44ee643ef 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -755,6 +755,7 @@ static void aarch64_max_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1);
     t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* TTCNP */
     t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* TTST */
+    t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
     cpu->isar.id_aa64mmfr2 = t;
 
     t = cpu->isar.id_aa64zfr0;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 568914bd42..6a59975028 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -11324,7 +11324,7 @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
     /* TODO: This code does not support shareability levels. */
     if (aarch64) {
-        int min_tsz = 16, max_tsz = 39; /* TODO: ARMv8.2-LVA */
+        int min_tsz = 16, max_tsz = 39;
         int parange;
 
         param = aa64_va_parameters(env, address, mmu_idx,
@@ -11334,6 +11334,12 @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
         if (cpu_isar_feature(aa64_st, env_archcpu(env))) {
             max_tsz = 48 - param.using64k;
         }
+        if (param.using64k) {
+            if (cpu_isar_feature(aa64_lva, env_archcpu(env))) {
+                min_tsz = 12;
+            }
+        }
+        /* TODO: FEAT_LPA2 */
 
         /*
          * If TxSZ is programmed to a value larger than the maximum,
This feature is relatively small, as it applies only to
64k pages and thus requires no additional changes to the
table descriptor walking algorithm, only a change to the
minimum TSZ (which is the inverse of the maximum virtual
address space size).

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu-param.h | 2 +-
 target/arm/cpu.h       | 5 +++++
 target/arm/cpu64.c     | 1 +
 target/arm/helper.c    | 8 +++++++-
 4 files changed, 14 insertions(+), 2 deletions(-)