From patchwork Fri Dec 22 16:21:22 2023
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 757718
From: Mark Brown <broonie@kernel.org>
Date: Fri, 22 Dec 2023 16:21:22 +0000
Subject: [PATCH RFC v2 14/22] KVM: arm64: Manage and handle SME traps
Message-Id: <20231222-kvm-arm64-sme-v2-14-da226cb180bb@kernel.org>
References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org>
In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org>
To: Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose,
    Catalin Marinas, Will Deacon, Paolo Bonzini, Jonathan Corbet,
    Shuah Khan
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
    Mark Brown <broonie@kernel.org>
X-Mailer: b4 0.13-dev-5c066

Now that we have support for managing SME state for KVM guests, add
handling for SME exceptions generated by guests. As with SVE these are
routed to the generic floating point exception handlers for both VHE
and nVHE, since the floating point state is handled as a uniform block.
Since we do not presently support SME for protected VMs, handle
exceptions from protected guests as UNDEF.

For nVHE and hVHE modes we currently do a lazy restore of the host EL2
setup for SVE; do the same for SME. Since it is likely that there will
be common situations where SVE and SME are both used in quick
succession by the host (e.g. saving the guest state), restore the
configuration for both at once in order to minimise the number of EL2
entries.
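A minimal standalone sketch of the trap choice just described, assuming
placeholder bit values and stand-in helpers rather than the kernel's real
CPACR_EL1 definitions and vcpu accessors: SVE and SME accesses are left
untrapped only while the guest owns the floating point state, so a guest's
first access after a switch traps to EL2 and is routed to the floating
point handlers.

/*
 * Illustrative model only: the bit values are placeholders, not the
 * architectural CPACR_EL1 encodings, and the helpers stand in for the
 * kernel's vcpu_has_sve()/vcpu_has_sme()/guest_owns_fp_regs().
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholder "enable" bits in the spirit of CPACR_EL1_ZEN_xEN / SMEN_xEN. */
#define ZEN_EL0EN   (1u << 0)
#define ZEN_EL1EN   (1u << 1)
#define SMEN_EL0EN  (1u << 2)
#define SMEN_EL1EN  (1u << 3)

struct vcpu_model {
        bool has_sve;
        bool has_sme;
        bool guest_owns_fp_regs;
};

/*
 * Model of the VHE trap setup: only leave SVE/SME accesses untrapped
 * when the guest owns the FP registers and actually has the feature;
 * otherwise the first access traps to EL2 so state can be loaded lazily.
 */
static uint32_t vhe_guest_enables(const struct vcpu_model *vcpu)
{
        uint32_t val = 0;

        if (vcpu->guest_owns_fp_regs) {
                if (vcpu->has_sve)
                        val |= ZEN_EL0EN | ZEN_EL1EN;
                if (vcpu->has_sme)
                        val |= SMEN_EL0EN | SMEN_EL1EN;
        }

        return val;
}

int main(void)
{
        struct vcpu_model vcpu = {
                .has_sve = true,
                .has_sme = true,
                .guest_owns_fp_regs = false,
        };

        /* No enable bits set: the guest's first SVE/SME use will trap. */
        printf("state not owned: %#x\n", vhe_guest_enables(&vcpu));

        /* Once the guest state is loaded the accesses run untrapped. */
        vcpu.guest_owns_fp_regs = true;
        printf("state owned:     %#x\n", vhe_guest_enables(&vcpu));

        return 0;
}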
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/kvm_emulate.h | 12 ++++----
 arch/arm64/kvm/handle_exit.c         | 11 +++++++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c   | 56 ++++++++++++++++++++++++++++++------
 arch/arm64/kvm/hyp/nvhe/switch.c     | 13 ++++-----
 arch/arm64/kvm/hyp/vhe/switch.c      |  3 ++
 5 files changed, 73 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 14d6ff2e2a39..756c2c28c592 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -584,16 +584,15 @@ static __always_inline u64 kvm_get_reset_cptr_el2(struct kvm_vcpu *vcpu)
 
         if (has_vhe()) {
                 val = (CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN |
-                       CPACR_EL1_ZEN_EL1EN);
-                if (cpus_have_final_cap(ARM64_SME))
-                        val |= CPACR_EL1_SMEN_EL1EN;
+                       CPACR_EL1_ZEN_EL1EN | CPACR_EL1_SMEN_EL1EN);
         } else if (has_hvhe()) {
                 val = (CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN);
 
                 if (!vcpu_has_sve(vcpu) ||
                     (vcpu->arch.fp_state != FP_STATE_GUEST_OWNED))
                         val |= CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN;
-                if (cpus_have_final_cap(ARM64_SME))
+                if (!vcpu_has_sme(vcpu) ||
+                    (vcpu->arch.fp_state != FP_STATE_GUEST_OWNED))
                         val |= CPACR_EL1_SMEN_EL1EN | CPACR_EL1_SMEN_EL0EN;
         } else {
                 val = CPTR_NVHE_EL2_RES1;
@@ -602,8 +601,9 @@ static __always_inline u64 kvm_get_reset_cptr_el2(struct kvm_vcpu *vcpu)
                 if (vcpu_has_sve(vcpu) &&
                     (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED))
                         val |= CPTR_EL2_TZ;
-                if (cpus_have_final_cap(ARM64_SME))
-                        val &= ~CPTR_EL2_TSM;
+                if (vcpu_has_sme(vcpu) &&
+                    (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED))
+                        val |= CPTR_EL2_TSM;
         }
 
         return val;
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 617ae6dea5d5..e5d8d8767872 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -206,6 +206,16 @@ static int handle_sve(struct kvm_vcpu *vcpu)
         return 1;
 }
 
+/*
+ * Guest access to SME registers should be routed to this handler only
+ * when the system doesn't support SME.
+ */
+static int handle_sme(struct kvm_vcpu *vcpu)
+{
+        kvm_inject_undefined(vcpu);
+        return 1;
+}
+
 /*
  * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
  * a NOP). If we get here, it is that we didn't fixup ptrauth on exit, and all
@@ -268,6 +278,7 @@ static exit_handle_fn arm_exit_handlers[] = {
         [ESR_ELx_EC_SVC64]      = handle_svc,
         [ESR_ELx_EC_SYS64]      = kvm_handle_sys_reg,
         [ESR_ELx_EC_SVE]        = handle_sve,
+        [ESR_ELx_EC_SME]        = handle_sme,
         [ESR_ELx_EC_ERET]       = kvm_handle_eret,
         [ESR_ELx_EC_IABT_LOW]   = kvm_handle_guest_abort,
         [ESR_ELx_EC_DABT_LOW]   = kvm_handle_guest_abort,
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 56808df6a078..b2da4800b673 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -411,6 +411,52 @@ static void handle_host_smc(struct kvm_cpu_context *host_ctxt)
         kvm_skip_host_instr();
 }
 
+static void handle_host_vec(void)
+{
+        u64 old_smcr, new_smcr;
+        u64 mask = 0;
+
+        /*
+         * Handle lazy restore of the EL2 configuration for host SVE
+         * and SME usage. It is likely that when a host supports both
+         * SVE and SME it will use both in quick succession (eg,
+         * saving guest state) so we restore both when either traps.
+         */
+        if (has_hvhe()) {
+                if (cpus_have_final_cap(ARM64_SVE))
+                        mask |= CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN;
+                if (cpus_have_final_cap(ARM64_SME))
+                        mask |= CPACR_EL1_SMEN_EL1EN | CPACR_EL1_SMEN_EL0EN;
+
+                sysreg_clear_set(cpacr_el1, 0, mask);
+        } else {
+                if (cpus_have_final_cap(ARM64_SVE))
+                        mask |= CPTR_EL2_TZ;
+                if (cpus_have_final_cap(ARM64_SME))
+                        mask |= CPTR_EL2_TSM;
+
+                sysreg_clear_set(cptr_el2, mask, 0);
+        }
+
+        isb();
+
+        if (cpus_have_final_cap(ARM64_SVE))
+                sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+
+        if (cpus_have_final_cap(ARM64_SME)) {
+                old_smcr = read_sysreg_s(SYS_SMCR_EL2);
+                new_smcr = SMCR_ELx_LEN_MASK;
+
+                if (cpus_have_final_cap(ARM64_SME_FA64))
+                        new_smcr |= SMCR_ELx_FA64_MASK;
+                if (cpus_have_final_cap(ARM64_SME2))
+                        new_smcr |= SMCR_ELx_EZT0_MASK;
+
+                if (old_smcr != new_smcr)
+                        write_sysreg_s(new_smcr, SYS_SMCR_EL2);
+        }
+}
+
 void handle_trap(struct kvm_cpu_context *host_ctxt)
 {
         u64 esr = read_sysreg_el2(SYS_ESR);
@@ -423,14 +469,8 @@ void handle_trap(struct kvm_cpu_context *host_ctxt)
                 handle_host_smc(host_ctxt);
                 break;
         case ESR_ELx_EC_SVE:
-                /* Handle lazy restore of the host VL */
-                if (has_hvhe())
-                        sysreg_clear_set(cpacr_el1, 0, (CPACR_EL1_ZEN_EL1EN |
-                                                        CPACR_EL1_ZEN_EL0EN));
-                else
-                        sysreg_clear_set(cptr_el2, CPTR_EL2_TZ, 0);
-                isb();
-                sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+        case ESR_ELx_EC_SME:
+                handle_host_vec();
                 break;
         case ESR_ELx_EC_IABT_LOW:
         case ESR_ELx_EC_DABT_LOW:
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index c50f8459e4fc..b022728edb2f 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -46,19 +46,14 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
         val = vcpu->arch.cptr_el2;
         val |= CPTR_EL2_TAM;    /* Same bit irrespective of E2H */
         val |= has_hvhe() ? CPACR_EL1_TTA : CPTR_EL2_TTA;
-        if (cpus_have_final_cap(ARM64_SME)) {
-                if (has_hvhe())
-                        val &= ~(CPACR_EL1_SMEN_EL1EN | CPACR_EL1_SMEN_EL0EN);
-                else
-                        val |= CPTR_EL2_TSM;
-        }
 
         if (!guest_owns_fp_regs(vcpu)) {
                 if (has_hvhe())
                         val &= ~(CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN |
-                                 CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN);
+                                 CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN |
+                                 CPACR_EL1_SMEN_EL0EN | CPACR_EL1_SMEN_EL1EN);
                 else
-                        val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
+                        val |= CPTR_EL2_TFP | CPTR_EL2_TZ | CPTR_EL2_TSM;
 
                 __activate_traps_fpsimd32(vcpu);
         }
@@ -186,6 +181,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
         [0 ... ESR_ELx_EC_MAX]          = NULL,
         [ESR_ELx_EC_CP15_32]            = kvm_hyp_handle_cp15_32,
         [ESR_ELx_EC_SYS64]              = kvm_hyp_handle_sysreg,
+        [ESR_ELx_EC_SME]                = kvm_hyp_handle_fpsimd,
         [ESR_ELx_EC_SVE]                = kvm_hyp_handle_fpsimd,
         [ESR_ELx_EC_FP_ASIMD]           = kvm_hyp_handle_fpsimd,
         [ESR_ELx_EC_IABT_LOW]           = kvm_hyp_handle_iabt_low,
@@ -198,6 +194,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
 static const exit_handler_fn pvm_exit_handlers[] = {
         [0 ... ESR_ELx_EC_MAX]          = NULL,
         [ESR_ELx_EC_SYS64]              = kvm_handle_pvm_sys64,
+        [ESR_ELx_EC_SME]                = kvm_handle_pvm_restricted,
         [ESR_ELx_EC_SVE]                = kvm_handle_pvm_restricted,
         [ESR_ELx_EC_FP_ASIMD]           = kvm_hyp_handle_fpsimd,
         [ESR_ELx_EC_IABT_LOW]           = kvm_hyp_handle_iabt_low,
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 1581df6aec87..0b1a9733f3e0 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -78,6 +78,8 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
         if (guest_owns_fp_regs(vcpu)) {
                 if (vcpu_has_sve(vcpu))
                         val |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
+                if (vcpu_has_sme(vcpu))
+                        val |= CPACR_EL1_SMEN_EL0EN | CPACR_EL1_SMEN_EL1EN;
         } else {
                 val &= ~(CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN);
                 __activate_traps_fpsimd32(vcpu);
@@ -177,6 +179,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
         [0 ... ESR_ELx_EC_MAX]          = NULL,
         [ESR_ELx_EC_CP15_32]            = kvm_hyp_handle_cp15_32,
         [ESR_ELx_EC_SYS64]              = kvm_hyp_handle_sysreg,
+        [ESR_ELx_EC_SME]                = kvm_hyp_handle_fpsimd,
         [ESR_ELx_EC_SVE]                = kvm_hyp_handle_fpsimd,
         [ESR_ELx_EC_FP_ASIMD]           = kvm_hyp_handle_fpsimd,
         [ESR_ELx_EC_IABT_LOW]           = kvm_hyp_handle_iabt_low,
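A minimal standalone model of the combined lazy-restore idea in
handle_host_vec() above, assuming stand-in state variables and placeholder
values rather than the kernel's sysreg interface or the architectural
register encodings: whichever of SVE or SME the host traps on first, both
are re-enabled and both vector length configurations restored, with the
SMCR value only rewritten when it would actually change.

/*
 * Illustrative model only: the state variables and "restore" helpers
 * stand in for the kernel's EL2 system register accesses.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool host_sve_enabled;
static bool host_sme_enabled;
static uint64_t zcr_el2;
static uint64_t smcr_el2;

/* Stand-in for restoring the host's maximum SVE vector length at EL2. */
static void restore_zcr_el2(void)
{
        zcr_el2 = 0xf;                  /* placeholder "maximum VL" value */
}

/* Stand-in for SMCR_EL2: only written when the value actually changes. */
static void restore_smcr_el2(void)
{
        uint64_t new_smcr = 0xf;        /* placeholder "maximum SVL" value */

        if (smcr_el2 != new_smcr)
                smcr_el2 = new_smcr;
}

/*
 * Whichever of SVE or SME the host trapped on, re-enable and restore
 * both: the host is likely to use the other shortly afterwards (e.g.
 * while saving guest state), and doing both now avoids a second trap
 * to EL2.
 */
static void handle_host_vec_model(void)
{
        host_sve_enabled = true;
        host_sme_enabled = true;
        restore_zcr_el2();
        restore_smcr_el2();
}

int main(void)
{
        /* A host SVE access traps while both SVE and SME are disabled... */
        handle_host_vec_model();

        /* ...and a later SME access no longer needs to trap at all. */
        printf("SVE %d, SME %d, ZCR %#llx, SMCR %#llx\n",
               host_sve_enabled, host_sme_enabled,
               (unsigned long long)zcr_el2, (unsigned long long)smcr_el2);

        return 0;
}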