From patchwork Fri Dec 20 16:46:27 2024
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 852623
From: Mark Brown
Date: Fri, 20 Dec 2024 16:46:27 +0000
Subject: [PATCH RFC v3 02/27] arm64/fpsimd: Decide to save ZT0 and streaming mode FFR at bind time
Message-Id: <20241220-kvm-arm64-sme-v3-2-05b018c1ffeb@kernel.org>
References: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org>
In-Reply-To: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Catalin Marinas, Suzuki K Poulose,
 Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan
Cc: Dave Martin, Fuad Tabba, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown

Some parts of the SME state are optional, enabled by additional features
on top of the base FEAT_SME and controlled with enable bits in SMCR_ELx.
We unconditionally enable these for the host but for KVM we will allow
the feature set exposed to guests to be restricted by the VMM. These are
the FFR register (FEAT_SME_FA64) and ZT0 (FEAT_SME2).

We defer saving of guest floating point state for non-protected guests
to the host kernel. We also want to avoid having to reconfigure the
guest floating point state if nothing used the floating point state
while running the host. If the guest was running with the optional
features disabled then traps will be enabled for them so the host kernel
will need to skip accessing that state when saving state for the guest.

Support this by moving the decision about saving this state to the point
where we bind floating point state to the CPU, adding a new variable to
the cpu_fp_state which uses the enable bits in SMCR_ELx to flag which
features are enabled.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/fpsimd.h |  1 +
 arch/arm64/kernel/fpsimd.c      | 10 ++++++++--
 arch/arm64/kvm/fpsimd.c         |  1 +
 3 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 95355892d47b3ec1c77a3ab19ccad0d7f9a8d621..144cc805bfea112341b89c9c6028cf4b2a201c6c 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -88,6 +88,7 @@ struct cpu_fp_state {
 	void *sme_state;
 	u64 *svcr;
 	u64 *fpmr;
+	u64 sme_features;
 	unsigned int sve_vl;
 	unsigned int sme_vl;
 	enum fp_type *fp_type;
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 92c085288ed2cbc4f51f49546c6abbde6ba891a3..7c66ed6e43c34d1b5e1cc00595c12244d13d3d0d 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -478,12 +478,12 @@ static void fpsimd_save_user_state(void)
 
 		if (*svcr & SVCR_ZA_MASK)
 			sme_save_state(last->sme_state,
-				       system_supports_sme2());
+				       last->sme_features & SMCR_ELx_EZT0);
 
 		/* If we are in streaming mode override regular SVE. */
 		if (*svcr & SVCR_SM_MASK) {
 			save_sve_regs = true;
-			save_ffr = system_supports_fa64();
+			save_ffr = last->sme_features & SMCR_ELx_FA64;
 			vl = last->sme_vl;
 		}
 	}
@@ -1722,6 +1722,12 @@ static void fpsimd_bind_task_to_cpu(void)
 	last->to_save = FP_STATE_CURRENT;
 	current->thread.fpsimd_cpu = smp_processor_id();
 
+	last->sme_features = 0;
+	if (system_supports_fa64())
+		last->sme_features |= SMCR_ELx_FA64;
+	if (system_supports_sme2())
+		last->sme_features |= SMCR_ELx_EZT0;
+
 	/*
	 * Toggle SVE and SME trapping for userspace if needed, these
	 * are serialsied by ret_to_user().
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index ea5484ce1f3ba3121b6938bda15f7a8057d49051..09b65abaf9db60cc57dbc554ad2108a80c2dc46b 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -138,6 +138,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) fp_state.svcr = &__vcpu_sys_reg(vcpu, SVCR); fp_state.fpmr = &__vcpu_sys_reg(vcpu, FPMR); fp_state.fp_type = &vcpu->arch.fp_type; + fp_state.sme_features = 0; if (vcpu_has_sve(vcpu)) fp_state.to_save = FP_STATE_SVE; From patchwork Fri Dec 20 16:46:29 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 852622 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A42A021A931; Fri, 20 Dec 2024 16:51:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713470; cv=none; b=cf/72tAfg15eal5yyEQyB4zEz9DpzN0GRgjaCRNL0r0OvyiduUPHqfBBYSAHOzXQQu34sPcpl2dqHmrM1mLezITxgBwTy2OUkpm6mY8wlID4D/lbAENk61q7qs9Z+ZJeusqcZoNgLwS89wExN629k5H8agj8ewPUnzYLfEO2H2Q= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713470; c=relaxed/simple; bh=fk7img7pT6wgpuZQLS9waoWrhY76ao9cwX81iGcxSKQ=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=Zh1oLFdICW7Ciwwl5f6f/4yBdLrxJ4zAN1AzwGGTYH3NmJwawcEasu8fF168NS4LE6xOoDu6nB91HM2nKGgl59QNv9yFNfI4FZL9+jVCtAfXDoZWHLV6mUbX7kNBueXkuJn8SsTOnhkMwT+3/XRUFZHyDSc6DjEc3nnjLggULBI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=r40FRzzA; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="r40FRzzA" Received: by smtp.kernel.org (Postfix) with ESMTPSA id DE162C4CEDD; Fri, 20 Dec 2024 16:51:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734713470; bh=fk7img7pT6wgpuZQLS9waoWrhY76ao9cwX81iGcxSKQ=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=r40FRzzAD3sThPfoOUv5MyiG47YJJ0vUQJfugJx/VJ2FdIH4Jq1EQjAqh5xLDpERG pRPOMBM0moFgk2Gk8JtejusZ97o1LMZON6Jz5p+Rs1HExLVpMmPpllE+R7lKLmspFp YL+9ifTPscFe0nulX2K2q9kBmzbsdsydoAyevBAGYd8T4MHQC+I5y4ndI3DxxKO8tP IhJCXqOmgFehQaX0nZQ7itPUxCjACzMYXxIPg4s+42L6byPdHRM4RXXGa8Jr3C0czi AVunGWujtHucm8uUfArMp9QaAyonFMc2Uc31N+IQbyI3biz4o+k1ntq7W8YKMHHe1Y ptJz6lfM1rerQ== From: Mark Brown Date: Fri, 20 Dec 2024 16:46:29 +0000 Subject: [PATCH RFC v3 04/27] arm64/fpsimd: Determine maximum virtualisable SME vector length Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20241220-kvm-arm64-sme-v3-4-05b018c1ffeb@kernel.org> References: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> In-Reply-To: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> To: Marc Zyngier , Oliver Upton , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: Dave Martin , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, 
linux-kselftest@vger.kernel.org, Mark Brown

As with SVE, we can only virtualise SME vector lengths that are supported
by all CPUs in the system, so implement checks similar to those for SVE.
Unlike SVE there are no specific vector lengths that are architecturally
required to be supported, so the handling is subtly different: on a
system where no vector length is usable on every CPU we report a maximum
virtualisable vector length of -1.

Signed-off-by: Mark Brown
---
 arch/arm64/kernel/fpsimd.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index a6f9a102fadb0547b4988cb5b0c239ca90a262a0..d976708d84854846fe38a35a19c60ff36f44030a 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1276,7 +1276,8 @@ void cpu_enable_sme(const struct arm64_cpu_capabilities *__always_unused p)
 void __init sme_setup(void)
 {
 	struct vl_info *info = &vl_info[ARM64_VEC_SME];
-	int min_bit, max_bit;
+	DECLARE_BITMAP(tmp_map, SVE_VQ_MAX);
+	int min_bit, max_bit, b;
 
 	if (!system_supports_sme())
 		return;
@@ -1306,12 +1307,32 @@ void __init sme_setup(void)
 	 */
 	set_sme_default_vl(find_supported_vector_length(ARM64_VEC_SME, 32));
 
+	bitmap_andnot(tmp_map, info->vq_partial_map, info->vq_map,
+		      SVE_VQ_MAX);
+
+	b = find_last_bit(tmp_map, SVE_VQ_MAX);
+	if (b >= SVE_VQ_MAX)
+		/* All VLs virtualisable */
+		info->max_virtualisable_vl = SVE_VQ_MAX;
+	else if (b == SVE_VQ_MAX - 1)
+		/* No virtualisable VLs */
+		info->max_virtualisable_vl = -1;
+	else
+		info->max_virtualisable_vl = sve_vl_from_vq(__bit_to_vq(b + 1));
+
+	if (info->max_virtualisable_vl > info->max_vl)
+		info->max_virtualisable_vl = info->max_vl;
+
 	pr_info("SME: minimum available vector length %u bytes per vector\n",
 		info->min_vl);
 	pr_info("SME: maximum available vector length %u bytes per vector\n",
 		info->max_vl);
 	pr_info("SME: default vector length %u bytes per vector\n",
 		get_sme_default_vl());
+
+	/* KVM decides whether to support mismatched systems.
Just warn here: */ + if (info->max_virtualisable_vl < info->max_vl) + pr_warn("SME: unvirtualisable vector lengths present\n"); } void sme_suspend_exit(void) From patchwork Fri Dec 20 16:46:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 852621 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 466FB21A45A; Fri, 20 Dec 2024 16:51:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713480; cv=none; b=ZcAXmgcyeVEnS5zaLldlp+AAXZeDfqFpflSkf3iaqRN6KyTxJmyWQJDaCMIO+ehsLGuKpH0JfC3CzPiEPIZL72/zodIH8+WaBl5FIKhx+8bRvY9vnq+6NRcVrAWnlaGVbk3eqRsNkmOIRhkVaO4YJ6n57bJcc9Gk84hJNx7hy7E= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713480; c=relaxed/simple; bh=FQHYwXdmL3GR1X0fB9NNfgXhfHPcJEGCJwnB7whsybM=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=iI6IwBOyxyKljMCItKDv8R0jDP77G9DrQJzwbeePDaGruqaRDurGfnG+rlkHhl0vAd/FyozorhgmKiwbv7d3hTavHQvO/16+C44tuitVkzWMXSaR58UhZB+9IZiYW2VyONxsmpFAgulXoIuOJbIkeec+Za9lS9g/wQjA6f/tNRM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=sPgjhNy0; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="sPgjhNy0" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 82FB2C4CEDC; Fri, 20 Dec 2024 16:51:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734713479; bh=FQHYwXdmL3GR1X0fB9NNfgXhfHPcJEGCJwnB7whsybM=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=sPgjhNy0pQKxJmsHGssak/MuCZK0aWt8tI8thh7Ae1mYzo8bd/TG5eVWNdGdoCbat uhsduMcY4oz+ozc2revjhXx5Ldwd1tUs1mbdHzzqvAy//oLujy57t/PM1i9A3p0Nub mdeL6ThJaawK+c185FioTm68KJk0O1YgsDofyKeKjeHv1yFjsK5vsdDBVK/1OJCiZM ldjMcZeZNFIo7QxH02lr5TZehIbMzW4naWTGh4+nUmdwOv9LbldTXEl5+lhS3pke75 AlZHD8K0aqKZ45t2sii65DiV0vD8fj+BJuKdfndpTf5Cf+1tp8appwhRUbYMix1aeT qMpcvUEnLKZQg== From: Mark Brown Date: Fri, 20 Dec 2024 16:46:31 +0000 Subject: [PATCH RFC v3 06/27] KVM: arm64: Pull ctxt_has_ helpers to start of sysreg-sr.h Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20241220-kvm-arm64-sme-v3-6-05b018c1ffeb@kernel.org> References: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> In-Reply-To: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> To: Marc Zyngier , Oliver Upton , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: Dave Martin , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=openpgp-sha256; l=2305; i=broonie@kernel.org; h=from:subject:message-id; bh=FQHYwXdmL3GR1X0fB9NNfgXhfHPcJEGCJwnB7whsybM=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBnZaBZgmK/htQnfZrdQRb9ftVqYA9k0EzKH9YaTlHT 
OJFL6e+JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZ2WgWQAKCRAk1otyXVSH0CzFB/ 9j6wPn0JHK8/QjGznInjFWbpmWtsO145mBHDy9gv2tvca9YZMjsAUZtnDXUvpcSrkpBx/jXpG6ytgq 7XDa+pa2H5NQ1gj/Rq39700Ks+VG2FD4l8mxJW+k67UAJrVWHrcFPmbruxwgKOFEDZEOmns7L+v6u3 qhfFaSKWmiAwdmNqo8rrfoUvLUsBj88wdGyI/UzD0SiEVx/HvDtIpqX9wyORNCGkOhumngUqYW3Sgw wANy0gt4totBOrUpWdnf8xnhpP2NjDOaMMU/HPlMODt07U/NElcodWuT/SQ5lL1MwpMxjcUuATF8wK +JaOWIYQjAWxxaIDZFs42qEFNYffZT X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Rather than add earlier prototypes of specific ctxt_has_ helpers let's just pull all their definitions to the top of sysreg-sr.h so they're all available to all the individual save/restore functions. Signed-off-by: Mark Brown --- arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 32 ++++++++++++++---------------- 1 file changed, 15 insertions(+), 17 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h index a651c43ad679fcc5a13ab7a619e252d96fd46281..8c234d53acb2753c59aa37d7a66f856f2eb87882 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -16,23 +16,6 @@ #include #include -static inline bool ctxt_has_s1poe(struct kvm_cpu_context *ctxt); - -static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt) -{ - ctxt_sys_reg(ctxt, MDSCR_EL1) = read_sysreg(mdscr_el1); - - // POR_EL0 can affect uaccess, so must be saved/restored early. - if (ctxt_has_s1poe(ctxt)) - ctxt_sys_reg(ctxt, POR_EL0) = read_sysreg_s(SYS_POR_EL0); -} - -static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) -{ - ctxt_sys_reg(ctxt, TPIDR_EL0) = read_sysreg(tpidr_el0); - ctxt_sys_reg(ctxt, TPIDRRO_EL0) = read_sysreg(tpidrro_el0); -} - static inline struct kvm_vcpu *ctxt_to_vcpu(struct kvm_cpu_context *ctxt) { struct kvm_vcpu *vcpu = ctxt->__hyp_running_vcpu; @@ -83,6 +66,21 @@ static inline bool ctxt_has_s1poe(struct kvm_cpu_context *ctxt) return kvm_has_s1poe(kern_hyp_va(vcpu->kvm)); } +static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt) +{ + ctxt_sys_reg(ctxt, MDSCR_EL1) = read_sysreg(mdscr_el1); + + // POR_EL0 can affect uaccess, so must be saved/restored early. 
+	if (ctxt_has_s1poe(ctxt))
+		ctxt_sys_reg(ctxt, POR_EL0) = read_sysreg_s(SYS_POR_EL0);
+}
+
+static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt)
+{
+	ctxt_sys_reg(ctxt, TPIDR_EL0) = read_sysreg(tpidr_el0);
+	ctxt_sys_reg(ctxt, TPIDRRO_EL0) = read_sysreg(tpidrro_el0);
+}
+
 static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
 {
 	ctxt_sys_reg(ctxt, SCTLR_EL1) = read_sysreg_el1(SYS_SCTLR);

From patchwork Fri Dec 20 16:46:33 2024
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 852620
From: Mark Brown
Date: Fri, 20 Dec 2024 16:46:33 +0000
Subject: [PATCH RFC v3 08/27] KVM: arm64: Move SVE state access macros after feature test macros
Message-Id: <20241220-kvm-arm64-sme-v3-8-05b018c1ffeb@kernel.org>
References: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org>
In-Reply-To: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Catalin Marinas, Suzuki K Poulose,
 Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan
Cc: Dave Martin, Fuad Tabba, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown
X-Mailer: b4 0.15-dev-1b0d6
X-Developer-Signature: v=1; a=openpgp-sha256; l=2613; i=broonie@kernel.org; h=from:subject:message-id; bh=/i3TwaKgCBLSWDUFzgZJ5E4rHb3UDDDbQ/9/hT2MAGs=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBnZaBaSCMFrVKugleuCqcjESXruDC/s6CaD4yZ5F6Z p7wReKWJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZ2WgWgAKCRAk1otyXVSH0LqlB/ 9idSTtrsfAftjwtJ51rl1MDfrczjOESiyC+PCFAePnmVi3KdagMZPCJQvdE7wrKQuchGOWvo9/CLq+ cN7EaMnLRHfgp8Z/hUnReRjkCeiHX6BBOj0iNBGSRirbpxgDO95XXMQAGYnQLEeGAXP1431tdMRmzf LAzBVbJtNVeUBBEHCRD0Yn5Wb8AKl7fWQYwAJHJikESsLd6fPuwCVKe+oekBg/R1YaUxiTNfVApgOM s4PZRY/HFMAqM77VIzYl/XflL9D6jQAwR2yKjwLDgt7OK5tGxJk41JJH0hl36jcAdH41yrYuMic6lU kq+tPZijmIyttMxtCnye3BE46zZ1AN X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB In preparation for SME support move the macros used to access SVE state after the feature test macros, we will need to test for SME subfeatures to determine the size of the SME state. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 46 +++++++++++++++++++-------------------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index fca81ede6140c0ee7d03cb6ca8f5eead45b87033..97b617606221e8c11fd2b55d9636848d8453209f 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -934,29 +934,6 @@ struct kvm_vcpu_arch { #define IN_WFI __vcpu_single_flag(sflags, BIT(7)) -/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ -#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ - sve_ffr_offset((vcpu)->arch.sve_max_vl)) - -#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl) - -#define vcpu_sve_zcr_elx(vcpu) \ - (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1) - -#define vcpu_sve_state_size(vcpu) ({ \ - size_t __size_ret; \ - unsigned int __vcpu_vq; \ - \ - if (WARN_ON(!sve_vl_valid((vcpu)->arch.sve_max_vl))) { \ - __size_ret = 0; \ - } else { \ - __vcpu_vq = vcpu_sve_max_vq(vcpu); \ - __size_ret = SVE_SIG_REGS_SIZE(__vcpu_vq); \ - } \ - \ - __size_ret; \ -}) - #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \ KVM_GUESTDBG_USE_SW_BP | \ KVM_GUESTDBG_USE_HW | \ @@ -992,6 +969,29 @@ struct kvm_vcpu_arch { #define vcpu_gp_regs(v) (&(v)->arch.ctxt.regs) +/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ +#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ + sve_ffr_offset((vcpu)->arch.sve_max_vl)) + +#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl) + +#define vcpu_sve_zcr_elx(vcpu) \ + (unlikely(is_hyp_ctxt(vcpu)) ? 
ZCR_EL2 : ZCR_EL1) + +#define vcpu_sve_state_size(vcpu) ({ \ + size_t __size_ret; \ + unsigned int __vcpu_vq; \ + \ + if (WARN_ON(!sve_vl_valid((vcpu)->arch.sve_max_vl))) { \ + __size_ret = 0; \ + } else { \ + __vcpu_vq = vcpu_sve_max_vq(vcpu); \ + __size_ret = SVE_SIG_REGS_SIZE(__vcpu_vq); \ + } \ + \ + __size_ret; \ +}) + /* * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the * memory backed version of a register, and not the one most recently From patchwork Fri Dec 20 16:46:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 852619 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F29C5221462; Fri, 20 Dec 2024 16:51:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713496; cv=none; b=EFixN4Lm8NRd0jjcJaUR0drfhzE9s1B6WnJatyqT3uaIBLKNCKykgNOZrxxafSwvf1r8SjqrYjnjIyhyczSnR8OVfzMFVNWzC3MJwuWxIDGwnEALXamlFJEaoOAJw8XnFFuMyu2cnL+U/CrsR1Z/HEMJsl7PhoFmECK2LPeXmHI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713496; c=relaxed/simple; bh=3aaZhNirOZ+7l9BeX+c3sgo5KaaCuR6ys35EPFSwf5I=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=dsfZJZMpnRHdYTOt7r+KL9mnQbseDw2esYMXbXOHuYLsMo/xFmyjpFwi13IJJDePhcARLu/vjXTXWhdTgsG8LmbDmPzTo3omWrqjIBMJyoAe0Q+wTJbcleRTd9GXh9KSe/8xCKFva2Wjht5uJ2M18Rvm/bxCwWrbbpaqhmgDTNo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=ivQ4iWm1; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="ivQ4iWm1" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3109EC4CEDC; Fri, 20 Dec 2024 16:51:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734713495; bh=3aaZhNirOZ+7l9BeX+c3sgo5KaaCuR6ys35EPFSwf5I=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=ivQ4iWm1QqcGDvdvFW2D7CizC4bu4+NLYAkUu1z6hEuoDkKIRPNgFdoOjGrPLZPx0 8iunglxHnzAf/Kw+EyWDi999khVY2SmWxFb8jn6V/WCUXOzthp0RDO0TjfSwCpBPx1 L/2qboNeQKxSTgBDgs9xsKF1MgOLis83ORHb6s1Osr0wy0iI+gkFhZnXnAZc6IGlMA 9CLcet2iXGVta8LAt5sse2AR6W67K1pGEafmWc71jWluWW6OPtkr4S3gseNAQI9cHz XN4+4r8upRTpK463qnJEv/IUwvzk/VqN6OvUDrNayOJuUcyn+AFmXPH8xOIB/OsExE o9Js31he67Nqg== From: Mark Brown Date: Fri, 20 Dec 2024 16:46:35 +0000 Subject: [PATCH RFC v3 10/27] KVM: arm64: Rename SVE finalization constants to be more general Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20241220-kvm-arm64-sme-v3-10-05b018c1ffeb@kernel.org> References: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> In-Reply-To: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> To: Marc Zyngier , Oliver Upton , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: Dave Martin , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 
0.15-dev-1b0d6 X-Developer-Signature: v=1; a=openpgp-sha256; l=7533; i=broonie@kernel.org; h=from:subject:message-id; bh=3aaZhNirOZ+7l9BeX+c3sgo5KaaCuR6ys35EPFSwf5I=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBnZaBcHuq5Lk45QXE8KUQ/QhEs1N1N14WzcOZ4wGiW krf/jliJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZ2WgXAAKCRAk1otyXVSH0IF6B/ 4t9q0gL7iHDyrcIJDUPG9jcN0euzIJJrpuoBsjuqccr3yuWuHS+PskLa+15vbpMf/dMpx6VTNMpISo Hw6QNQq5HECzh66lBgfetGl9X28k6KB+mid8G6x/X5XrkYPLBL8fKzWu5kowdwGINS1WOiwmkybCIe ShCjwWUEz9BNx3ertYmduRBlhprlCpEsQ4HkTbtlJzXd+mnkjgSh3qpwND2fqFoOWOaLs8UO+62vIA /gHCB3DrMJtiqB1jvIvtFW1dmg4hlPEe0UHsNRglkTmz8yrW3HNQLVfv+aLRFFmtuNvldsTEX09MVN GVJBvuszAqq3rHyaa7GqsuPLCjBtkE X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Due to the overlap between SVE and SME vector length configuration created by streaming mode SVE we will finalize both at once. Rename the existing finalization to use _VEC (vector) for the naming to avoid confusion. Since this includes the userspace API we create an alias KVM_ARM_VCPU_VEC for the existing KVM_ARM_VCPU_SVE capability, existing code which does not enable SME will be unaffected and any SME only code will not need to use SVE constants. No functional change. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 6 ++++-- arch/arm64/include/uapi/asm/kvm.h | 6 ++++++ arch/arm64/kvm/guest.c | 10 +++++----- arch/arm64/kvm/hyp/nvhe/pkvm.c | 2 +- arch/arm64/kvm/reset.c | 20 ++++++++++---------- 5 files changed, 26 insertions(+), 18 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 97b617606221e8c11fd2b55d9636848d8453209f..f64ad573573cf000c4644f12f9e072a2fdfc3824 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -873,7 +873,7 @@ struct kvm_vcpu_arch { /* KVM_ARM_VCPU_INIT completed */ #define VCPU_INITIALIZED __vcpu_single_flag(cflags, BIT(0)) /* SVE config completed */ -#define VCPU_SVE_FINALIZED __vcpu_single_flag(cflags, BIT(1)) +#define VCPU_VEC_FINALIZED __vcpu_single_flag(cflags, BIT(1)) /* Exception pending */ #define PENDING_EXCEPTION __vcpu_single_flag(iflags, BIT(0)) @@ -948,6 +948,8 @@ struct kvm_vcpu_arch { #define vcpu_has_sve(vcpu) kvm_has_sve((vcpu)->kvm) #endif +#define vcpu_has_vec(vcpu) vcpu_has_sve(vcpu) + #ifdef CONFIG_ARM64_PTR_AUTH #define vcpu_has_ptrauth(vcpu) \ ((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) || \ @@ -1414,7 +1416,7 @@ struct kvm *kvm_arch_alloc_vm(void); int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature); bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu); -#define kvm_arm_vcpu_sve_finalized(vcpu) vcpu_get_flag(vcpu, VCPU_SVE_FINALIZED) +#define kvm_arm_vcpu_vec_finalized(vcpu) vcpu_get_flag(vcpu, VCPU_VEC_FINALIZED) #define kvm_has_mte(kvm) \ (system_supports_mte() && \ diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index 66736ff04011e0fa9fcfb74154d5613bf4ee89f7..9d80d22af9d4e00204f5096fb7c8c2ee8c3646c1 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -109,6 +109,12 @@ struct kvm_regs { #define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */ #define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */ +/* + * An alias for _SVE since we finalize VL configuration for both SVE and SME + * simultaneously. 
+ */ +#define KVM_ARM_VCPU_VEC KVM_ARM_VCPU_SVE + struct kvm_vcpu_init { __u32 target; __u32 features[7]; diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 12dad841f2a51276eee4d4da7400c1b2a5732ff8..62ff51d6e4584acc71205f5d4b1d2f3d2e2d2f88 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -342,7 +342,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (!vcpu_has_sve(vcpu)) return -ENOENT; - if (kvm_arm_vcpu_sve_finalized(vcpu)) + if (kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; /* too late! */ if (WARN_ON(vcpu->arch.sve_state)) @@ -497,7 +497,7 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (ret) return ret; - if (!kvm_arm_vcpu_sve_finalized(vcpu)) + if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset, @@ -523,7 +523,7 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (ret) return ret; - if (!kvm_arm_vcpu_sve_finalized(vcpu)) + if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr, @@ -657,7 +657,7 @@ static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu) return 0; /* Policed by KVM_GET_REG_LIST: */ - WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu)); + WARN_ON(!kvm_arm_vcpu_vec_finalized(vcpu)); return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */) + 1; /* KVM_REG_ARM64_SVE_VLS */ @@ -675,7 +675,7 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu, return 0; /* Policed by KVM_GET_REG_LIST: */ - WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu)); + WARN_ON(!kvm_arm_vcpu_vec_finalized(vcpu)); /* * Enumerate this first, so that userspace can save/restore in diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 446a9114b0d3ee4323a9cd8d618d36035e85e4d0..0a4e1f5105592b23a0505bf7680c66e76b5c2a65 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -314,7 +314,7 @@ static void pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_vcpu * struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu; if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) - vcpu_clear_flag(vcpu, VCPU_SVE_FINALIZED); + vcpu_clear_flag(vcpu, VCPU_VEC_FINALIZED); } static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu, diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index 803e11b0dc8f5eb74b07b0ad745b0c4f666713d5..ce726b1d4e8e90cfd4459a6cb9c67b8805424e22 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -92,7 +92,7 @@ static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) * Finalize vcpu's maximum SVE vector length, allocating * vcpu->arch.sve_state as necessary. 
*/ -static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) +static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu) { void *buf; unsigned int vl; @@ -122,21 +122,21 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) } vcpu->arch.sve_state = buf; - vcpu_set_flag(vcpu, VCPU_SVE_FINALIZED); + vcpu_set_flag(vcpu, VCPU_VEC_FINALIZED); return 0; } int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature) { switch (feature) { - case KVM_ARM_VCPU_SVE: - if (!vcpu_has_sve(vcpu)) + case KVM_ARM_VCPU_VEC: + if (!vcpu_has_vec(vcpu)) return -EINVAL; - if (kvm_arm_vcpu_sve_finalized(vcpu)) + if (kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; - return kvm_vcpu_finalize_sve(vcpu); + return kvm_vcpu_finalize_vec(vcpu); } return -EINVAL; @@ -144,7 +144,7 @@ int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature) bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu) { - if (vcpu_has_sve(vcpu) && !kvm_arm_vcpu_sve_finalized(vcpu)) + if (vcpu_has_vec(vcpu) && !kvm_arm_vcpu_vec_finalized(vcpu)) return false; return true; @@ -161,7 +161,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu) kfree(vcpu->arch.ccsidr); } -static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu) +static void kvm_vcpu_reset_vec(struct kvm_vcpu *vcpu) { if (vcpu_has_sve(vcpu)) memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu)); @@ -204,11 +204,11 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu) if (loaded) kvm_arch_vcpu_put(vcpu); - if (!kvm_arm_vcpu_sve_finalized(vcpu)) { + if (!kvm_arm_vcpu_vec_finalized(vcpu)) { if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) kvm_vcpu_enable_sve(vcpu); } else { - kvm_vcpu_reset_sve(vcpu); + kvm_vcpu_reset_vec(vcpu); } if (vcpu_el1_is_32bit(vcpu)) From patchwork Fri Dec 20 16:46:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 852618 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 71F82225A32; Fri, 20 Dec 2024 16:51:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713503; cv=none; b=U3ij5z2KOVIY+Hsxk25VtCdcSICfd7WWdkkgugWwA2QjZf4ZeUKmQ9ZDP88PppNzqJiXjVfWAtErNzKUzj5A6LsoOdirQkFbCteoQNdwh9twiVN2Zzq2Q3l0LqSHKg6pYzRViavKCiEpJxUWxtKELBP+LMI8IoOisCQ6foCOfvI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713503; c=relaxed/simple; bh=h8tZV5l27wMi7QZ5NW8MxQzjInn8bIQ6N+gjfbDmOZc=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=LhgjSqPbtN4LQ9DvXNCU7NorhBSNVExb1k9rtWnv2zRi+vUbbt3GZivdJrXeme0N6MLKhUj+lJHDem98h50bcYXPaWcbdGkP9cUTJdBj+fSGOcOrxC5l748wU52k6jkOFrJHh8BZwX2zpnVwiJhbKB3/mdxNYCsH2K8UMcCkg74= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=dkAByGPQ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="dkAByGPQ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id A56C0C4CECD; Fri, 20 Dec 2024 16:51:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734713503; bh=h8tZV5l27wMi7QZ5NW8MxQzjInn8bIQ6N+gjfbDmOZc=; 
From: Mark Brown
Date: Fri, 20 Dec 2024 16:46:37 +0000
Subject: [PATCH RFC v3 12/27] KVM: arm64: Define internal features for SME
Message-Id: <20241220-kvm-arm64-sme-v3-12-05b018c1ffeb@kernel.org>
References: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org>
In-Reply-To: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Catalin Marinas, Suzuki K Poulose,
 Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan
Cc: Dave Martin, Fuad Tabba, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown

In order to simplify interdependencies in the rest of the series, define
the feature detection for SME and its subfeatures. Due to the need for
vector length configuration we define a flag for SME like the one for
SVE. We also have two subfeatures which add architectural state, FA64
and SME2, which are configured via the normal ID register scheme.
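
As an illustrative aside (not part of this patch): the helpers added
below are enough for later users to translate the guest's feature set
into the SMCR_ELx enable bits that the host fpsimd code already
consumes. A minimal sketch, with a hypothetical function name, might
look like:

  /*
   * Sketch only: map the guest's SME feature set onto the SMCR_ELx
   * enable bits used elsewhere in this series. Hypothetical helper,
   * not code added by this patch.
   */
  static u64 example_guest_sme_features(struct kvm_vcpu *vcpu)
  {
  	u64 features = 0;

  	if (!vcpu_has_sme(vcpu))
  		return 0;

  	if (kvm_has_fa64(vcpu->kvm))
  		features |= SMCR_ELx_FA64;	/* FEAT_SME_FA64: streaming mode FFR */
  	if (kvm_has_sme2(vcpu->kvm))
  		features |= SMCR_ELx_EZT0;	/* FEAT_SME2: ZT0 */

  	return features;
  }
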
Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 23 +++++++++++++++++++++-- arch/arm64/kvm/sys_regs.c | 2 +- 2 files changed, 22 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index f64ad573573cf000c4644f12f9e072a2fdfc3824..022214e57e74404e8d590a5820a9e77160869b1b 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -39,7 +39,7 @@ #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS -#define KVM_VCPU_MAX_FEATURES 7 +#define KVM_VCPU_MAX_FEATURES 9 #define KVM_VCPU_VALID_FEATURES (BIT(KVM_VCPU_MAX_FEATURES) - 1) #define KVM_REQ_SLEEP \ @@ -339,6 +339,8 @@ struct kvm_arch { #define KVM_ARCH_FLAG_FGU_INITIALIZED 8 /* SVE exposed to guest */ #define KVM_ARCH_FLAG_GUEST_HAS_SVE 9 + /* SME exposed to guest */ +#define KVM_ARCH_FLAG_GUEST_HAS_SME 10 unsigned long flags; /* VM-wide vCPU feature set */ @@ -948,7 +950,16 @@ struct kvm_vcpu_arch { #define vcpu_has_sve(vcpu) kvm_has_sve((vcpu)->kvm) #endif -#define vcpu_has_vec(vcpu) vcpu_has_sve(vcpu) +#define kvm_has_sme(kvm) \ + test_bit(KVM_ARCH_FLAG_GUEST_HAS_SME, &(kvm)->arch.flags) + +#ifdef __KVM_NVHE_HYPERVISOR__ +#define vcpu_has_sme(vcpu) kvm_has_sme(kern_hyp_va((vcpu)->kvm)) +#else +#define vcpu_has_sme(vcpu) kvm_has_sme((vcpu)->kvm) +#endif + +#define vcpu_has_vec(vcpu) (vcpu_has_sve(vcpu) || vcpu_has_sme(vcpu)) #ifdef CONFIG_ARM64_PTR_AUTH #define vcpu_has_ptrauth(vcpu) \ @@ -1542,4 +1553,12 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val); #define kvm_has_s1poe(k) \ (kvm_has_feat((k), ID_AA64MMFR3_EL1, S1POE, IMP)) +#define kvm_has_fa64(k) \ + (system_supports_sme() && \ + kvm_has_feat((k), ID_AA64SMFR0_EL1, FA64, IMP)) + +#define kvm_has_sme2(k) \ + (system_supports_sme() && \ + kvm_has_feat((k), ID_AA64PFR1_EL1, SME, SME2)) + #endif /* __ARM64_KVM_HOST_H__ */ diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 83c6b4a07ef56cf0ed9c8751ec80686f45dca6b2..1b16716a6d53525fbe694cc8d5d009d72b6ce416 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1727,7 +1727,7 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu, static unsigned int sme_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) { - if (kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SME, IMP)) + if (vcpu_has_sme(vcpu)) return 0; return REG_HIDDEN; From patchwork Fri Dec 20 16:46:39 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 852617 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9813821C17E; Fri, 20 Dec 2024 16:51:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713510; cv=none; b=jach4/h9d/TACsqnPtE2LL58VkxgfHU3NFaP7QZQJb7BPJnqh2UX6TbbiixUsbBQ0912U+Mb6a7q/BcP2Fi+LjIJNNIqF6D72GYj0n7qtDF6NlzJ1pWyeAItWwpLL9pM00ZT+a18FrPbjvSpDvPWhQSAR19ngFU/5aLhkiqOSmw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713510; c=relaxed/simple; bh=YX6ec2Y9jRqUBq+hF4J3MaIT2MldDbLjE2bFPa/5Uc4=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; 
From: Mark Brown
Date: Fri, 20 Dec 2024 16:46:39 +0000
Subject: [PATCH RFC v3 14/27] KVM: arm64: Store vector lengths in an array
Message-Id: <20241220-kvm-arm64-sme-v3-14-05b018c1ffeb@kernel.org>
References: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org>
In-Reply-To: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Catalin Marinas, Suzuki K Poulose,
 Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan
Cc: Dave Martin, Fuad Tabba, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown

SME adds a second vector length, configured in a very similar way to the
SVE vector length. In order to facilitate future code sharing for SME,
refactor our storage of vector lengths to use an array like the host
does. We do not yet take much advantage of this, so the intermediate
code is not as clean as it might be.

No functional change.
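
As an illustrative aside (not part of this patch): once the vector
lengths live in an array, per-type accesses reduce to indexing with the
host's existing vector type constants (ARM64_VEC_SVE now, ARM64_VEC_SME
later). A hypothetical accessor sketching the pattern, assuming the
host's enum vec_type:

  /* Sketch only: hypothetical helper, not added by this patch. */
  static inline unsigned int example_vcpu_max_vl(struct kvm_vcpu *vcpu,
  						 enum vec_type type)
  {
  	/* Only ARM64_VEC_SVE is populated here; SME gets its own slot later. */
  	return vcpu->arch.max_vl[type];
  }
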
Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 17 +++++++++++------ arch/arm64/include/asm/kvm_hyp.h | 2 +- arch/arm64/include/asm/kvm_pkvm.h | 2 +- arch/arm64/kvm/fpsimd.c | 2 +- arch/arm64/kvm/guest.c | 6 +++--- arch/arm64/kvm/hyp/include/hyp/switch.h | 4 ++-- arch/arm64/kvm/hyp/nvhe/hyp-main.c | 11 ++++++----- arch/arm64/kvm/hyp/nvhe/pkvm.c | 2 +- arch/arm64/kvm/reset.c | 22 +++++++++++----------- 9 files changed, 37 insertions(+), 31 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 022214e57e74404e8d590a5820a9e77160869b1b..63e1410146f76fd584374765c04b3ba14090afdc 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -74,8 +74,10 @@ enum kvm_mode kvm_get_mode(void); static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; }; #endif -extern unsigned int __ro_after_init kvm_sve_max_vl; -extern unsigned int __ro_after_init kvm_host_sve_max_vl; +extern unsigned int __ro_after_init kvm_max_vl[ARM64_VEC_MAX]; +extern unsigned int __ro_after_init kvm_host_max_vl[ARM64_VEC_MAX]; +DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); + int __init kvm_arm_init_sve(void); u32 __attribute_const__ kvm_target_cpu(void); @@ -709,7 +711,7 @@ struct kvm_vcpu_arch { */ void *sve_state; enum fp_type fp_type; - unsigned int sve_max_vl; + unsigned int max_vl[ARM64_VEC_MAX]; /* Stage 2 paging state used by the hardware on next switch */ struct kvm_s2_mmu *hw_mmu; @@ -984,9 +986,12 @@ struct kvm_vcpu_arch { /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ - sve_ffr_offset((vcpu)->arch.sve_max_vl)) + sve_ffr_offset((vcpu)->arch.max_vl[ARM64_VEC_SVE])) + +#define vcpu_vec_max_vq(vcpu, type) sve_vq_from_vl((vcpu)->arch.max_vl[type]) + +#define vcpu_sve_max_vq(vcpu) vcpu_vec_max_vq(vcpu, ARM64_VEC_SVE) -#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl) #define vcpu_sve_zcr_elx(vcpu) \ (unlikely(is_hyp_ctxt(vcpu)) ? 
ZCR_EL2 : ZCR_EL1) @@ -995,7 +1000,7 @@ struct kvm_vcpu_arch { size_t __size_ret; \ unsigned int __vcpu_vq; \ \ - if (WARN_ON(!sve_vl_valid((vcpu)->arch.sve_max_vl))) { \ + if (WARN_ON(!sve_vl_valid((vcpu)->arch.max_vl[ARM64_VEC_SVE]))) { \ __size_ret = 0; \ } else { \ __vcpu_vq = vcpu_sve_max_vq(vcpu); \ diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index c838309e4ec47e395d78127a8ee6bad8390c4411..21943cb98542750a1b626a8de6bbc095d7770ccf 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -143,6 +143,6 @@ extern u64 kvm_nvhe_sym(id_aa64smfr0_el1_sys_val); extern unsigned long kvm_nvhe_sym(__icache_flags); extern unsigned int kvm_nvhe_sym(kvm_arm_vmid_bits); -extern unsigned int kvm_nvhe_sym(kvm_host_sve_max_vl); +extern unsigned int kvm_nvhe_sym(kvm_host_max_vl[ARM64_VEC_MAX]); #endif /* __ARM64_KVM_HYP_H__ */ diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h index 400f7cef1e81b29925ed00593d8198f8d2700025..e6021a2418529064dcd31b4a5301e4d6f6ac8acd 100644 --- a/arch/arm64/include/asm/kvm_pkvm.h +++ b/arch/arm64/include/asm/kvm_pkvm.h @@ -159,7 +159,7 @@ static inline size_t pkvm_host_sve_state_size(void) return 0; return size_add(sizeof(struct cpu_sve_state), - SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl))); + SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]))); } #endif /* __ARM64_KVM_PKVM_H__ */ diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 3c2e0b96877ac5b4f3b9d8dfa38975f11b74b60d..51c844e25dfa460ecab5bb0dfc50c7680318aa20 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -133,7 +133,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) */ fp_state.st = &vcpu->arch.ctxt.fp_regs; fp_state.sve_state = vcpu->arch.sve_state; - fp_state.sve_vl = vcpu->arch.sve_max_vl; + fp_state.sve_vl = vcpu->arch.max_vl[ARM64_VEC_SVE]; fp_state.sme_state = NULL; fp_state.svcr = &__vcpu_sys_reg(vcpu, SVCR); fp_state.fpmr = &__vcpu_sys_reg(vcpu, FPMR); diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index cde733417f25b5af4f5e996f91c2b962a4d361fd..5fda5dbc0c3c0ce3a20a732a68421376e54f23ca 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -318,7 +318,7 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (!vcpu_has_sve(vcpu)) return -ENOENT; - if (WARN_ON(!sve_vl_valid(vcpu->arch.sve_max_vl))) + if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[ARM64_VEC_SVE]))) return -EINVAL; memset(vqs, 0, sizeof(vqs)); @@ -356,7 +356,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (vq_present(vqs, vq)) max_vq = vq; - if (max_vq > sve_vq_from_vl(kvm_sve_max_vl)) + if (max_vq > sve_vq_from_vl(kvm_max_vl[ARM64_VEC_SVE])) return -EINVAL; /* @@ -375,7 +375,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return -EINVAL; /* vcpu->arch.sve_state will be alloc'd by kvm_vcpu_finalize_sve() */ - vcpu->arch.sve_max_vl = sve_vl_from_vq(max_vq); + vcpu->arch.max_vl[ARM64_VEC_SVE] = sve_vl_from_vq(max_vq); return 0; } diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 247dfadcdb22e1ef96f92a9d86e66c9eefb44600..09a9a237d6dd22d4bb941714363675abdab1baa7 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -370,8 +370,8 @@ static inline void __hyp_sve_save_host(void) struct cpu_sve_state *sve_state = *host_data_ptr(sve_state); sve_state->zcr_el1 = 
read_sysreg_el1(SYS_ZCR); - write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2); - __sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl), + write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZCR_EL2); + __sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_max_vl[ARM64_VEC_SVE]), &sve_state->fpsr, true); } diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 6aa0b13d86e581a36ed529bcd932498045d2d6df..7468d8516ecaa1370861e51ad4f65adbc01a5d97 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -33,7 +33,7 @@ static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu) */ sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr, true); - write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2); + write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZCR_EL2); } static void __hyp_sve_restore_host(void) @@ -49,8 +49,8 @@ static void __hyp_sve_restore_host(void) * that was discovered, if we wish to use larger VLs this will * need to be revisited. */ - write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2); - __sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl), + write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZCR_EL2); + __sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_max_vl[ARM64_VEC_SVE]), &sve_state->fpsr, true); write_sysreg_el1(sve_state->zcr_el1, SYS_ZCR); @@ -101,7 +101,8 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu) hyp_vcpu->vcpu.arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state); /* Limit guest vector length to the maximum supported by the host. */ - hyp_vcpu->vcpu.arch.sve_max_vl = min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl); + hyp_vcpu->vcpu.arch.max_vl[ARM64_VEC_SVE] = min(host_vcpu->arch.max_vl[ARM64_VEC_SVE], + kvm_host_max_vl[ARM64_VEC_SVE]); hyp_vcpu->vcpu.arch.hw_mmu = host_vcpu->arch.hw_mmu; @@ -483,7 +484,7 @@ void handle_trap(struct kvm_cpu_context *host_ctxt) case ESR_ELx_EC_SVE: cpacr_clear_set(0, CPACR_ELx_ZEN); isb(); - sve_cond_update_zcr_vq(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, + sve_cond_update_zcr_vq(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZCR_EL2); break; case ESR_ELx_EC_IABT_LOW: diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 0a4e1f5105592b23a0505bf7680c66e76b5c2a65..fea01612ac47a8a2f42edb9f17490edbaa89d04c 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -20,7 +20,7 @@ unsigned long __icache_flags; /* Used by kvm_get_vttbr(). 
*/ unsigned int kvm_arm_vmid_bits; -unsigned int kvm_host_sve_max_vl; +unsigned int kvm_host_max_vl[ARM64_VEC_MAX]; static void pkvm_vcpu_reset_hcr(struct kvm_vcpu *vcpu) { diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index ce726b1d4e8e90cfd4459a6cb9c67b8805424e22..3cb91dc6dc3dc5cc484900dbd9f4cdfedb3e2b4a 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -32,7 +32,7 @@ /* Maximum phys_shift supported for any VM on this host */ static u32 __ro_after_init kvm_ipa_limit; -unsigned int __ro_after_init kvm_host_sve_max_vl; +unsigned int __ro_after_init kvm_host_max_vl[ARM64_VEC_MAX]; /* * ARMv8 Reset Values @@ -46,14 +46,14 @@ unsigned int __ro_after_init kvm_host_sve_max_vl; #define VCPU_RESET_PSTATE_SVC (PSR_AA32_MODE_SVC | PSR_AA32_A_BIT | \ PSR_AA32_I_BIT | PSR_AA32_F_BIT) -unsigned int __ro_after_init kvm_sve_max_vl; +unsigned int __ro_after_init kvm_max_vl[ARM64_VEC_MAX]; int __init kvm_arm_init_sve(void) { if (system_supports_sve()) { - kvm_sve_max_vl = sve_max_virtualisable_vl(); - kvm_host_sve_max_vl = sve_max_vl(); - kvm_nvhe_sym(kvm_host_sve_max_vl) = kvm_host_sve_max_vl; + kvm_max_vl[ARM64_VEC_SVE] = sve_max_virtualisable_vl(); + kvm_host_max_vl[ARM64_VEC_SVE] = sve_max_vl(); + kvm_nvhe_sym(kvm_host_max_vl[ARM64_VEC_SVE]) = kvm_host_max_vl[ARM64_VEC_SVE]; /* * The get_sve_reg()/set_sve_reg() ioctl interface will need @@ -61,16 +61,16 @@ int __init kvm_arm_init_sve(void) * order to support vector lengths greater than * VL_ARCH_MAX: */ - if (WARN_ON(kvm_sve_max_vl > VL_ARCH_MAX)) - kvm_sve_max_vl = VL_ARCH_MAX; + if (WARN_ON(kvm_max_vl[ARM64_VEC_SVE] > VL_ARCH_MAX)) + kvm_max_vl[ARM64_VEC_SVE] = VL_ARCH_MAX; /* * Don't even try to make use of vector lengths that * aren't available on all CPUs, for now: */ - if (kvm_sve_max_vl < sve_max_vl()) + if (kvm_max_vl[ARM64_VEC_SVE] < sve_max_vl()) pr_warn("KVM: SVE vector length for guests limited to %u bytes\n", - kvm_sve_max_vl); + kvm_max_vl[ARM64_VEC_SVE]); } return 0; @@ -78,7 +78,7 @@ int __init kvm_arm_init_sve(void) static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) { - vcpu->arch.sve_max_vl = kvm_sve_max_vl; + vcpu->arch.max_vl[ARM64_VEC_SVE] = kvm_max_vl[ARM64_VEC_SVE]; /* * Userspace can still customize the vector lengths by writing @@ -99,7 +99,7 @@ static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu) size_t reg_sz; int ret; - vl = vcpu->arch.sve_max_vl; + vl = vcpu->arch.max_vl[ARM64_VEC_SVE]; /* * Responsibility for these properties is shared between From patchwork Fri Dec 20 16:46:41 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 852616 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1CD3521C198; Fri, 20 Dec 2024 16:51:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713518; cv=none; b=gPnVgDaBiSj1xw2TORU/iEhtVZDLC/C9DKuJIsWnZz5+rT16fB2syIv0dimp4it+lDxV0p0VQ5lCUuUido/4UKix/391XQzxx8E9Bk4YDABzeb2kwuM8yk/mDwp9rLK6NBT2fH8eO6Bq1qxH6t7fZiB34F4gUYhz6+VraU0aYd4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713518; c=relaxed/simple; bh=ziNSU1l2k6s22Zl2zTPsgpLBrD3+gSVFEY6O/oZT2Fs=; 
h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=ThdwLFwYcXApBsR2cbVvzr66DOj8MS7691Fkjemd0c9XSo8TAmkiEwEzuK79fDRfQAATv6NnQ7oeT20t835/ZbyBhIpj8WE0cC0VX87M9uFa0UqBTOcPIimRnwO4Zy9yhqya1H9jDainDtXa8k4egJaqDKcv962y+3/xy5xEHVQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=lJV06mgz; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="lJV06mgz" Received: by smtp.kernel.org (Postfix) with ESMTPSA id A497AC4CED7; Fri, 20 Dec 2024 16:51:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734713518; bh=ziNSU1l2k6s22Zl2zTPsgpLBrD3+gSVFEY6O/oZT2Fs=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=lJV06mgzy5StSd1+UxdmtGNogKoICEOQbIwQrX4sftFKHzD/Xz4Fk3+HQFPmhaDSv YBYHGSHLmMRXDZZz3zncLv5JzR20izZubLUVEqHjOoonKGZqeop0jSrRp5OjB5XIxc P47WuASqoGzviaf15wBCQXq0n7BRquj5092/6C9Ii5YKSh1c3ZNXCZToNgkE923MJy RMgLsyuT9M8diUPzAVwypVx7fpr6VI6nocKdV/C7tuV4OoXzn4K3ekMxyeXFeToAv7 JRs7/+GVPggrZUJTvAntVz559nMYJjExfWTnAY9p7grNrliu5LMQtPxbt/wLf/0/bP rjln9eSHIcHuA== From: Mark Brown Date: Fri, 20 Dec 2024 16:46:41 +0000 Subject: [PATCH RFC v3 16/27] KVM: arm64: Add definitions for SME control register Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20241220-kvm-arm64-sme-v3-16-05b018c1ffeb@kernel.org> References: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> In-Reply-To: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> To: Marc Zyngier , Oliver Upton , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: Dave Martin , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=openpgp-sha256; l=5625; i=broonie@kernel.org; h=from:subject:message-id; bh=ziNSU1l2k6s22Zl2zTPsgpLBrD3+gSVFEY6O/oZT2Fs=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBnZaBgO7cl0oEi7XeyJsA/z/OYmqthjqYTof8sIJnf bwSelFCJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZ2WgYAAKCRAk1otyXVSH0AtZB/ 0YJLUHxjPX6X2u7w+Y+Q0Dv142hnpW7NfvtMvucCl3io6nDs1dHsgnAB+6A5heum5WaQT5R2ffckUT jBbpf6Y4e4/BowX2p8DR4bDZJDJjFLCJDc1LEBfRgENZsr5B7rf5Jfh1GKKxTNp+zt1roTmuf1O3t7 QxudXhCPz58qeFMKT0WIRSeBQDB3cn37bQ6JI7lf4LW+E0xYAeKMceB8X2lvNlP2RaMJKuPLNvbWVR L+LXVHfGmFG5BZI7Yv28w3WBDH9MaFW/On/Xj9el+cRFQqtTwSoG+1OnCSUYVlzMoFe0RbG4nUyrVj XQzXQmxaX1dm4v/aKWILW5ifJjw0u4 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME is configured by the system registers SMCR_EL1 and SMCR_EL2, add definitions and userspace access for them. They will be context switched together with the rest of SME state. In systems with SME priority support there are additional registers SMPRI_EL1 and SMPRIMAP_EL2 managing the priorities however we do not currently have any support for SME priorities and mask that support out from guests. The intention is to revist this once we have physical implementations and can properly evaluate the practical impacts that they have. 
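As a rough illustration of the vector length handling these registers need (see access_smcr_el2() in the diff below), the following is a minimal user-space sketch of the SMCR_ELx.LEN clamping, assuming the architectural encoding where LEN holds the vector length in 128-bit quadwords minus one in bits [3:0]. The helper name and standalone form are illustrative only; the real handler uses the kernel's SYS_FIELD_GET()/SYS_FIELD_PREP() accessors.

#include <stdint.h>

#define SMCR_LEN_MASK 0xfULL	/* assumption: SMCR_ELx.LEN lives in bits [3:0] */

/* Clamp a guest-written SMCR value against the vCPU's maximum streaming VL (in quadwords). */
static uint64_t smcr_clamp_len(uint64_t written, unsigned int guest_max_vq)
{
	unsigned int vq = (written & SMCR_LEN_MASK) + 1;	/* requested VL, in quadwords */

	if (vq > guest_max_vq)
		vq = guest_max_vq;

	/* Only LEN is written back in this model; other control bits are not preserved. */
	return (uint64_t)(vq - 1) & SMCR_LEN_MASK;
}
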
Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 6 ++++++ arch/arm64/include/asm/vncr_mapping.h | 1 + arch/arm64/kvm/sys_regs.c | 37 ++++++++++++++++++++++++++++++++++- 3 files changed, 43 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 02f620d95f7dd2cb2b29cc25e78e7ef404cfad4c..8d6342dde02fd99cfd7d2bedeccf0581ad3504ee 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -484,6 +484,7 @@ enum vcpu_sysreg { CPTR_EL2, /* Architectural Feature Trap Register (EL2) */ HACR_EL2, /* Hypervisor Auxiliary Control Register */ ZCR_EL2, /* SVE Control Register (EL2) */ + SMCR_EL2, /* SME Control Register (EL2) */ TTBR0_EL2, /* Translation Table Base Register 0 (EL2) */ TTBR1_EL2, /* Translation Table Base Register 1 (EL2) */ TCR_EL2, /* Translation Control Register (EL2) */ @@ -521,6 +522,7 @@ enum vcpu_sysreg { VNCR(ACTLR_EL1),/* Auxiliary Control Register */ VNCR(CPACR_EL1),/* Coprocessor Access Control */ VNCR(ZCR_EL1), /* SVE Control */ + VNCR(SMCR_EL1), /* SME Control */ VNCR(TTBR0_EL1),/* Translation Table Base Register 0 */ VNCR(TTBR1_EL1),/* Translation Table Base Register 1 */ VNCR(TCR_EL1), /* Translation Control Register */ @@ -998,7 +1000,11 @@ struct kvm_vcpu_arch { #define vcpu_vec_max_vq(vcpu, type) sve_vq_from_vl((vcpu)->arch.max_vl[type]) #define vcpu_sve_max_vq(vcpu) vcpu_vec_max_vq(vcpu, ARM64_VEC_SVE) +#define vcpu_sme_max_vq(vcpu) vcpu_vec_max_vq(vcpu, ARM64_VEC_SME) +#define vcpu_max_vl(vcpu) max((vcpu)->arch.max_vl[ARM64_VEC_SVE], \ + (vcpu)->arch.max_vl[ARM64_VEC_SME]) +#define vcpu_max_vq(vcpu) sve_vq_from_vl(vcpu_max_vl(vcpu)) #define vcpu_sve_zcr_elx(vcpu) \ (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1) diff --git a/arch/arm64/include/asm/vncr_mapping.h b/arch/arm64/include/asm/vncr_mapping.h index 4f9bbd4d6c2671753124599475e5138bf6b9c749..74fc7400efbc7de6b8dd81a485f1e9d545baf7a9 100644 --- a/arch/arm64/include/asm/vncr_mapping.h +++ b/arch/arm64/include/asm/vncr_mapping.h @@ -42,6 +42,7 @@ #define VNCR_HDFGWTR_EL2 0x1D8 #define VNCR_ZCR_EL1 0x1E0 #define VNCR_HAFGRTR_EL2 0x1E8 +#define VNCR_SMCR_EL1 0x1F0 #define VNCR_TTBR0_EL1 0x200 #define VNCR_TTBR1_EL1 0x210 #define VNCR_FAR_EL1 0x220 diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 1b16716a6d53525fbe694cc8d5d009d72b6ce416..a9429d9d63b54b5b4d4fe365aa6af4d84a256539 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -142,6 +142,7 @@ static bool get_el2_to_el1_mapping(unsigned int reg, MAPPED_EL2_SYSREG(ELR_EL2, ELR_EL1, NULL ); MAPPED_EL2_SYSREG(SPSR_EL2, SPSR_EL1, NULL ); MAPPED_EL2_SYSREG(ZCR_EL2, ZCR_EL1, NULL ); + MAPPED_EL2_SYSREG(SMCR_EL2, SMCR_EL1, NULL ); MAPPED_EL2_SYSREG(CONTEXTIDR_EL2, CONTEXTIDR_EL1, NULL ); default: return false; @@ -2405,6 +2406,37 @@ static bool access_zcr_el2(struct kvm_vcpu *vcpu, return true; } +static unsigned int sme_el2_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + return __el2_visibility(vcpu, rd, sme_visibility); +} + +static bool access_smcr_el2(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + unsigned int vq; + u64 smcr; + + if (guest_hyp_sve_traps_enabled(vcpu)) { + kvm_inject_nested_sve_trap(vcpu); + return true; + } + + if (!p->is_write) { + p->regval = vcpu_read_sys_reg(vcpu, SMCR_EL2); + return true; + } + + smcr = p->regval; + vq = SYS_FIELD_GET(SMCR_ELx, LEN, smcr) + 1; + vq = min(vq, vcpu_sme_max_vq(vcpu)); + 
vcpu_write_sys_reg(vcpu, SYS_FIELD_PREP(SMCR_ELx, LEN, vq - 1), + SMCR_EL2); + return true; +} + static unsigned int s1poe_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) { @@ -2649,7 +2681,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_ZCR_EL1), NULL, reset_val, ZCR_EL1, 0, .visibility = sve_visibility }, { SYS_DESC(SYS_TRFCR_EL1), undef_access }, { SYS_DESC(SYS_SMPRI_EL1), undef_access }, - { SYS_DESC(SYS_SMCR_EL1), undef_access }, + { SYS_DESC(SYS_SMCR_EL1), NULL, reset_val, SMCR_EL1, 0, .visibility = sme_visibility }, { SYS_DESC(SYS_TTBR0_EL1), access_vm_reg, reset_unknown, TTBR0_EL1 }, { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 }, { SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 }, @@ -2995,6 +3027,9 @@ static const struct sys_reg_desc sys_reg_descs[] = { EL2_REG_VNCR(HCRX_EL2, reset_val, 0), + EL2_REG_FILTERED(SMCR_EL2, access_smcr_el2, reset_val, 0, + sme_el2_visibility), + EL2_REG(TTBR0_EL2, access_rw, reset_val, 0), EL2_REG(TTBR1_EL2, access_rw, reset_val, 0), EL2_REG(TCR_EL2, access_rw, reset_val, TCR_EL2_RES1), From patchwork Fri Dec 20 16:46:43 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 852615 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DD6D121C198; Fri, 20 Dec 2024 16:52:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713526; cv=none; b=GeSRnn4Rkl4pNtoNmx9xeVcOzIXZUQMY+coYgUPE1hNqww9r2ygdueX5ndQsXQBVmjmPpgdWbaHxidPzEQgXu7ugombvK6bR765NTnj5nA1P2f4udtwt5xx+b47bxXGnv1KZiLsSZjIcs7yd5sEi+hec7QmKiwmGNpw4ecT7wNw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713526; c=relaxed/simple; bh=ugWnPjaP6zeZk+xhqbQnNKY7IW5UR6NCSNSJraf8UWI=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=PgCoKGbnKnE9ZGSS2SrrKIo+3sV5J4MEa5x0R9ku8MXBUCFOIoOWPALd8uesHDZrtHNA1x5F7TybSTT4dIHZXF6OZ9vJuzW82djbdXefyMUzGxtxlIyVxV0U88yp6uU1yTUEEtmB3BtTYZldLw0mN73Nb98+ftJmHoFvY9vsjLQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=QqkvOVSB; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="QqkvOVSB" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2CB5EC4CECD; Fri, 20 Dec 2024 16:52:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734713525; bh=ugWnPjaP6zeZk+xhqbQnNKY7IW5UR6NCSNSJraf8UWI=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=QqkvOVSBClkTAoxPDQIOr6BOiamrjEBCoqgeF7rmEv8oue8TNeeOMeAGx7vHGSkui /oiM9Gl527gyYX24SsWvk/Ts4rytLCjqk8OepzPydPtl0COdSrzbav8OGHCPDeYmfU TN2a1aRPBrRIMPbvzhveKnzefTpk15rqA6qq61DiYlEE8DX1eSgImlCvUQjqFOTZU/ oo6fIhQ3HXrDPJoKzd2JJLy1EqFaNw5VR6bNN2R9ayQg6r2AJOrhuq3HVZC5ccmrUr thxHTbiF0i5kNXsWVm0eHgFAVU8f3LlXVBEXis20aDtvCIW+Q0xTplsdMZqdt2sBTZ Wqa1B84n4TZgw== From: Mark Brown Date: Fri, 20 Dec 2024 16:46:43 +0000 Subject: [PATCH RFC v3 18/27] KVM: arm64: Support SMIDR_EL1 for guests Precedence: bulk X-Mailing-List: 
linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20241220-kvm-arm64-sme-v3-18-05b018c1ffeb@kernel.org> References: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> In-Reply-To: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> To: Marc Zyngier , Oliver Upton , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: Dave Martin , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=openpgp-sha256; l=4343; i=broonie@kernel.org; h=from:subject:message-id; bh=ugWnPjaP6zeZk+xhqbQnNKY7IW5UR6NCSNSJraf8UWI=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBnZaBi306AU+25ymsIInB6kYrxfAyUEl8/NHvT1eLF XMFeG7CJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZ2WgYgAKCRAk1otyXVSH0BVOB/ 4qxP9v6MmfqF6r8myKyAk8gF2kl3QE96D+Za51PzoU1WnpI0pYByC6QnpHzdKbLeo/LBGvfvKcPYUJ ZzPhLV6RlKvCV0b7zuzT3ISPcdyOvaArKcyMNOMaPEVV1xalmWFM0u+efWWlp+Nb76XoCdonD+cFqh ZcnlV1Jpc5QnZBhf7Mn3t2qOZ7ckLA8xe5Sxh5TN/2hq0CV8qiufZ2AF32D8CEDU8ciXe5bdWipWXv EBedDKL4CaTlQGhum+UFkRLC0ZUj7Y6uYFOvRMacK+FTvkk5VH8MqUBO9evSNyeRQsSEmRHWy5uetL +IbVJho34Pi7AW3GFwVbg1DgjIXL6E X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME adds an identification register SMIDR_EL1 which provides a basic description of the SME implementation, describing the implementation in a manner similar to MIDR_EL1 for the PE as well as indicating support for priority management. Since we do not currently support SME priority control we mask out SMPS, indicating that priority management is not supported. We do the same for Affinity, indicating that there is no physical sharing, and unknown fields. As for MIDR_EL1 and REVIDR_EL1 we expose the implementer and revision information to guests with the raw value from the CPU we are running on, this may present issues for asymmetric systems or for migration as it does for the existing registers. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/kvm/sys_regs.c | 46 ++++++++++++++++++++++++++++++++++++--- 2 files changed, 44 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 063b75eb4f3bc4fb425d2abc8118a950bccc2317..a304b02efcadba5371edffe97e911bba0634ed62 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -472,6 +472,7 @@ enum vcpu_sysreg { /* FP/SIMD/SVE */ SVCR, FPMR, + SMIDR_EL1, /* Streaming Mode Identification Register */ /* 32bit specific registers. */ DACR32_EL2, /* Domain Access Control Register */ diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index b5a38fc7a4a9ed4fce053018eb6ff353ae5c0d09..416c855153ca532e4c6557d78599e9af0f913071 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -882,6 +882,39 @@ static u64 reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) return mpidr; } +static u64 reset_smidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +{ + u64 smidr = 0; + + if (!system_supports_sme()) + return smidr; + + smidr = read_sysreg_s(SYS_SMIDR_EL1); + + /* + * Mask out any priority or affinity information, or fields we + * don't know about. 
+ */ + smidr &= ~(SMIDR_EL1_SMPS_MASK | SMIDR_EL1_AFFINITY_MASK | + SMIDR_EL1_RES0); + + vcpu_write_sys_reg(vcpu, smidr, SMIDR_EL1); + + return smidr; +} + +static bool access_smidr(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + if (p->is_write) + return write_to_read_only(vcpu, p, r); + + p->regval = vcpu_read_sys_reg(vcpu, r->reg); + + return true; +} + static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { @@ -1576,7 +1609,9 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu, if (!kvm_has_mte(vcpu->kvm)) val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME); + if (!vcpu_has_sme(vcpu)) + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME); + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_RNDR_trap); val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_NMI); val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac); @@ -1676,6 +1711,10 @@ static unsigned int id_visibility(const struct kvm_vcpu *vcpu, if (!vcpu_has_sve(vcpu)) return REG_RAZ; break; + case SYS_ID_AA64SMFR0_EL1: + if (!vcpu_has_sme(vcpu)) + return REG_RAZ; + break; } return 0; @@ -2601,7 +2640,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { ID_WRITABLE(ID_AA64PFR2_EL1, ID_AA64PFR2_EL1_FPMR), ID_UNALLOCATED(4,3), ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0), - ID_HIDDEN(ID_AA64SMFR0_EL1), + ID_WRITABLE(ID_AA64SMFR0_EL1, ~ID_AA64SMFR0_EL1_RES0), ID_UNALLOCATED(4,6), ID_WRITABLE(ID_AA64FPFR0_EL1, ~ID_AA64FPFR0_EL1_RES0), @@ -2799,7 +2838,8 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_CLIDR_EL1), access_clidr, reset_clidr, CLIDR_EL1, .set_user = set_clidr, .val = ~CLIDR_EL1_RES0 }, { SYS_DESC(SYS_CCSIDR2_EL1), undef_access }, - { SYS_DESC(SYS_SMIDR_EL1), undef_access }, + { SYS_DESC(SYS_SMIDR_EL1), .access = access_smidr, .reset = reset_smidr, + .reg = SMIDR_EL1, .visibility = sme_visibility }, { SYS_DESC(SYS_CSSELR_EL1), access_csselr, reset_unknown, CSSELR_EL1 }, ID_FILTERED(CTR_EL0, ctr_el0, CTR_EL0_DIC_MASK | From patchwork Fri Dec 20 16:46:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 852614 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1D4F4229124; Fri, 20 Dec 2024 16:52:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713533; cv=none; b=i+IXQp+PrIzf5+LGaRNb0i6ifRY1B7XivmT3cOorjZ+8cphh9a8Md6hy89A8oGGp98jEp5ocWVgv9fI4+wJarPH07DipAO1WeHasYxiaSAxoKbfQaTHgpWxCJ3TB3AqEveoQPWSwFEXEQ8EYXI7Gg9JLKvrRwy/uSaqof36poTs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713533; c=relaxed/simple; bh=ENOuC1TJcLemryiEjkWj2obhMQPzOqxnrV0P2mql+Ks=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=kc3oX1L4VRcdJjTnwc7p+yxcmzpejXGkhcziA8kXk8ocilo5ey0tJV8KyJFOEH0sVy85cZLqi/8UKSdLUu/W2rAtgpQ673mlIXdi6c69nfhRqA3YbKMOA5Zad7J39pkSKf2UHZ2c75JHW2JdjKJJcTu37OmCucyTSKIs9d9pVL0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=ssi8EUnV; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: 
smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="ssi8EUnV" Received: by smtp.kernel.org (Postfix) with ESMTPSA id A3D48C4CED7; Fri, 20 Dec 2024 16:52:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734713533; bh=ENOuC1TJcLemryiEjkWj2obhMQPzOqxnrV0P2mql+Ks=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=ssi8EUnVVNwfhRrHMmu+SM0Q4NnT9gMhK3MqiAI8pA6M0bBsb959zketSgp2K4CN5 JlpNGd3mV8s19GLNEdJ0aEOxoR/MILHM6yw4kfFVMnCkgoIbKEgdUlkM4Ifwp6IdEm D3DW05Lwv9eapii3FYvUq84+RhtvBtmubKLBZoH1hYLZjI14TVC9BoNcfMYY2ng8Ae B6zIJL1JjJ1in59kolXbEgkojFnZZhH4Eih9/LGrLfA3CdV5pciy0hR0RX/7FBMASt YvWyq5ejgR4+RXyjYBys+gtLXfY8uW+wrm3etNfKMUzak/AT4qVMfvL7jaqXSxGGKD eA2epJRg+O6Hg== From: Mark Brown Date: Fri, 20 Dec 2024 16:46:45 +0000 Subject: [PATCH RFC v3 20/27] KVM: arm64: Provide assembly for SME state restore Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20241220-kvm-arm64-sme-v3-20-05b018c1ffeb@kernel.org> References: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> In-Reply-To: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> To: Marc Zyngier , Oliver Upton , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: Dave Martin , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=openpgp-sha256; l=1651; i=broonie@kernel.org; h=from:subject:message-id; bh=ENOuC1TJcLemryiEjkWj2obhMQPzOqxnrV0P2mql+Ks=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBnZaBk3E6dm3e3qW64VbwkZ8BsgUXtHAv6JoenG6H9 ivnP0ZWJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZ2WgZAAKCRAk1otyXVSH0BkOB/ 9pZK/XWcKBxIAbq8Kxz8iIvhPYuh/k36bbRLFCMarZ1Z5rPMkf8dntgyNKiUgIzPm6Uw+3qRbKIe5E Bm2brl/HNycaJBXjRtj5afSTrMoQDsvGkVtIs0I8duEJFnqCf43+XJ0apduhKJ6oaPiDBTJDnOcZTe ULQT9t7Hwlzm4FsjQuXTcdNoOnXcjEnlG8ng440HTikYzJLhmd6F5lBW/Ivx99Ke5XlM+MRrm3k9VZ ot6KxSdKkSj8wIFQr80TkHle48IdnsMkMSd2EBdOrTcrop+pK8XvZWc6CR5iq8aZcEi165we8JzTAp 5tBTTH/jNzQnZA5FYD5EIyS0Nhtz5x X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Provide a __sme_restore_state() for the hypervisor to allow it to restore ZA and ZT for guests. 
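As an illustration of how the new helper is intended to be used, a hypothetical hypervisor call site might look like the sketch below. This is not part of the patch; it assumes the vcpu_sme_state() accessor and the kvm_has_feat() SME2 check introduced elsewhere in this series, and that the vcpu's sme_state buffer holds ZA followed by ZT0 as the assembly expects.

/*
 * Sketch only: reload ZA (and ZT0 when the guest has SME2) from the
 * vcpu's SME state buffer using the helper added by this patch. The
 * surrounding function is hypothetical.
 */
static void __hyp_sme_restore_guest(struct kvm_vcpu *vcpu)
{
	__sme_restore_state(vcpu_sme_state(vcpu),
			    kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SME, SME2));
}
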
Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_hyp.h | 2 ++ arch/arm64/kvm/hyp/fpsimd.S | 16 ++++++++++++++++ 2 files changed, 18 insertions(+) diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 21943cb98542750a1b626a8de6bbc095d7770ccf..5a1f8e4be18624efa6b887f09c36f0e8ad318c40 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -113,6 +113,8 @@ void __fpsimd_save_state(struct user_fpsimd_state *fp_regs); void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs); void __sve_save_state(void *sve_pffr, u32 *fpsr, int save_ffr); void __sve_restore_state(void *sve_pffr, u32 *fpsr, int restore_ffr); +int __sve_get_vl(void); +void __sme_restore_state(void const *state, bool restore_zt); u64 __guest_enter(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S index e950875e31cee4df58d041519b7584356463c91b..9e4bce86ef2e632a6071480c06a0b7d69bf48f3d 100644 --- a/arch/arm64/kvm/hyp/fpsimd.S +++ b/arch/arm64/kvm/hyp/fpsimd.S @@ -31,3 +31,19 @@ SYM_FUNC_START(__sve_save_state) sve_save 0, x1, x2, 3 ret SYM_FUNC_END(__sve_save_state) + +SYM_FUNC_START(__sve_get_vl) + _sve_rdvl 0, 1 + ret +SYM_FUNC_END(__sve_get_vl) + +SYM_FUNC_START(__sme_restore_state) + _sme_rdsvl 2, 1 // x2 = VL/8 + sme_load_za 0, x2, 12 // Leaves x0 pointing to end of ZA + + cbz x1, 1f + _ldr_zt 0 + +1: + ret +SYM_FUNC_END(__sme_restore_state) From patchwork Fri Dec 20 16:46:47 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 852613 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 926EE229B02; Fri, 20 Dec 2024 16:52:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713540; cv=none; b=N+dmk0ys5WPr3if1/Ua9c8sedvXsDlAKPgmKY8bSqEjbzOOfDnjAP30elVijyyUT1ptXRq6haCy9DIsypLQnyrwpYsDmQAbAru+5e3p7sGyla6SuDowDilTgJSnb4uDRwrmDT2g75LiMZY3IH6e2wiy9eldkwjhBfyVgp28Ru3c= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713540; c=relaxed/simple; bh=VTrfqmU0w3nAHvC2QpKS0cpx16L+oXuNWw9Ueg9bmLQ=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=kIGRtHtLMJIqL6Qpz4WlOEG39GWuMi30qhWWhcfxqoIZJWuHX+Ig38M5puE7/8VzUkB9RwEBdHsnVpeOAGUqw3qXSSRVhOK3AN8w7cPBcbzuMjEUmke8fdbY8S4c6wMzNIgGcYeIEjCHaFIgqtJNAImB/PJkGv1nUgYbtye/2rA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=qxF4HzCY; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="qxF4HzCY" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2ABE1C4CEDE; Fri, 20 Dec 2024 16:52:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734713540; bh=VTrfqmU0w3nAHvC2QpKS0cpx16L+oXuNWw9Ueg9bmLQ=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=qxF4HzCYE6OBWCA89xa34iOlhk+jCjymHi4GHS0mzK1Ef6ZcHGhwiqUkxMo1FaaeD BQhWwLWOU3JtNHlcBSQmFqvQpkhC+HrdIQ2liyFBF6b5cPQa02wWXPn6uEjXIsAjNQ nasq4ngzZKw6Jw3Zv5RBP+Ya0SIFjxD2ejvAWoVxe1pL9CxiOiKUMvQfQcmtDxRgSC 
kdtNfOWIKXSYm5RjGImEofDpdWEj0/gvaGTD6b8i0cQqQEW1bPo6IApKZZ33rz1EHY 5wreWHm38C1z5asNv69JiN6tzKxp+wjVblmtLiiKucGLw+eBrf01cyQEFWnn9q8tij ZXWkrMvJ54JCA== From: Mark Brown Date: Fri, 20 Dec 2024 16:46:47 +0000 Subject: [PATCH RFC v3 22/27] KVM: arm64: Expose SME specific state to userspace Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20241220-kvm-arm64-sme-v3-22-05b018c1ffeb@kernel.org> References: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> In-Reply-To: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> To: Marc Zyngier , Oliver Upton , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: Dave Martin , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=openpgp-sha256; l=8295; i=broonie@kernel.org; h=from:subject:message-id; bh=VTrfqmU0w3nAHvC2QpKS0cpx16L+oXuNWw9Ueg9bmLQ=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBnZaBlHn387PCb2XL6zkz5FUtjqLQ07RLFXrxbc2nw N2vw43mJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZ2WgZQAKCRAk1otyXVSH0LBgB/ 0dCjYpNVQaZcv3rNvxGd3kWiiWjUY4Qt3BOWPGwzDnAfkqRoY2GPVc7f1q/5VP3lmHHZKapZE1KbLL 9t+hViZi17u2KbomWg3jmOJrBA37FMiS4CG05CSf9FzCXz3IVzW7NMoNzBRwaXTMS/xjlOOX39KbDo d0D5wvRXQBUdtaV9vYSeVZ0VSNqVA+QeoRiyAjr7YjR5FohZEtygaJEqjStpHIiiz4BR90vw4kVHKN NOAoivOKUrExRepDVvhnCY4TFxDseiwU+vUvNrOsJlTFXwROWgPO1kYD6OG79z+lYwWx1q2GsiPdNx CpQF6mBh1wsnHISZx7xYc5vYWIALnu X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME introduces two new registers, the ZA matrix register and the ZT0 LUT register. Both of these registers are only accessible when PSTATE.ZA is set and ZT0 is only present if SME2 is enabled for the guest. Provide support for configuring these from VMMs. The ZA matrix is a single SVL*SVL register which is available when PSTATE.ZA is set. We follow the pattern established by the architecture itself and expose this to userspace as a series of horizontal SVE vectors with the streaming mode vector length, using the format already established for the SVE vectors themselves. ZT0 is a single register with a refreshingly fixed size 512 bit register which is like ZA accessible only when PSTATE.ZA is set. Add support for it to the userspace API, as with ZA we allow the register to be read or written regardless of the state of PSTATE.ZA in order to simplify userspace usage. The value will be reset to 0 whenever PSTATE.ZA changes from 0 to 1, userspace can read stale values but these are not observable by the guest without manipulation of PSTATE.ZA by userspace. While there is currently only one ZT register the naming as ZT0 and the instruction encoding clearly leave room for future extensions adding more ZT registers. This encoding can readily support such an extension if one is introduced. 
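To make the new interface concrete, the following is an illustrative VMM-side sketch (not part of the patch) of writing a single horizontal ZA vector with KVM_SET_ONE_REG. It assumes UAPI headers containing the definitions added below, a finalized vcpu with SME enabled and PSTATE.ZA set as described above; vcpu_fd, row, data and svl_bytes are caller-supplied placeholders.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Write ZA.H[row]: the register is always 2048 bits, padded beyond the SVL bytes. */
static int write_za_row(int vcpu_fd, unsigned int row,
			const void *data, size_t svl_bytes)
{
	uint8_t buf[256] = { 0 };
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM64_SME_ZAHREG(row, 0),
		.addr = (uint64_t)(unsigned long)buf,
	};

	if (svl_bytes > sizeof(buf))
		return -1;

	memcpy(buf, data, svl_bytes);	/* only the first SVL bytes are significant */
	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}
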
Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 19 +++++++ arch/arm64/include/uapi/asm/kvm.h | 17 ++++++ arch/arm64/kvm/guest.c | 114 +++++++++++++++++++++++++++++++++++++- 3 files changed, 148 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 7393672fa0ee9c4ac13adb48a973f94929f767ea..3e064520a86f25fb7b1185b3aca342f593f04994 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1029,6 +1029,22 @@ struct kvm_vcpu_arch { #define vcpu_sme_state(vcpu) (kern_hyp_va((vcpu)->arch.sme_state)) +#define vcpu_sme_state_size(vcpu) ({ \ + size_t __size_ret; \ + unsigned int __vcpu_vq; \ + \ + if (WARN_ON(!sve_vl_valid((vcpu)->arch.max_vl[ARM64_VEC_SME]))) { \ + __size_ret = 0; \ + } else { \ + __vcpu_vq = vcpu_sme_max_vq(vcpu); \ + __size_ret = ZA_SIG_REGS_SIZE(__vcpu_vq); \ + if (system_supports_sme2()) \ + __size_ret += ZT_SIG_REG_SIZE; \ + } \ + \ + __size_ret; \ +}) + /* * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the * memory backed version of a register, and not the one most recently @@ -1588,4 +1604,7 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val); #define vcpu_in_streaming_mode(vcpu) \ (__vcpu_sys_reg(vcpu, SVCR) & SVCR_SM_MASK) +#define vcpu_za_enabled(vcpu) \ + (__vcpu_sys_reg(vcpu, SVCR) & SVCR_ZA_MASK) + #endif /* __ARM64_KVM_HOST_H__ */ diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index efb384cf9d503007f68aad9233ba949128c94b8b..5092f39138cbf17d9e89191de23ab2ee9f3fa77d 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -359,6 +359,23 @@ struct kvm_arm_counter_offset { /* SME registers */ #define KVM_REG_ARM64_SME (0x17 << KVM_REG_ARM_COPROC_SHIFT) +#define KVM_ARM64_SME_VQ_MIN __SVE_VQ_MIN +#define KVM_ARM64_SME_VQ_MAX __SVE_VQ_MAX + +/* ZA and ZTn occupy blocks at the following offsets within this range: */ +#define KVM_REG_ARM64_SME_ZA_BASE 0 +#define KVM_REG_ARM64_SME_ZT_BASE 0x600 + +#define KVM_ARM64_SME_MAX_ZAHREG (__SVE_VQ_BYTES * KVM_ARM64_SME_VQ_MAX) + +#define KVM_REG_ARM64_SME_ZAHREG(n, i) \ + (KVM_REG_ARM64 | KVM_REG_ARM64_SME | KVM_REG_ARM64_SME_ZA_BASE | \ + KVM_REG_SIZE_U2048 | \ + (((n) & (KVM_ARM64_SME_MAX_ZAHREG - 1)) << 5) | \ + ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1))) + +#define KVM_REG_ARM64_SME_ZTREG_SIZE (512 / 8) + /* Vector lengths pseudo-register: */ #define KVM_REG_ARM64_SME_VLS (KVM_REG_ARM64 | KVM_REG_ARM64_SME | \ KVM_REG_SIZE_U512 | 0xffff) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index cf468ac93c9e75d642d7293e020d04c4267ffff4..ad32f0f539be9acd5ff78412b369d4134b30559f 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -600,23 +600,133 @@ static int set_sme_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return set_vec_vls(ARM64_VEC_SME, vcpu, reg); } +/* + * Validate SVE register ID and get sanitised bounds for user/kernel SVE + * register copy + */ +static int sme_reg_to_region(struct vec_state_reg_region *region, + struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + /* reg ID ranges for ZA.H[n] registers */ + unsigned int vq = vcpu_sme_max_vq(vcpu) - 1; + const u64 za_h_max = vq * __SVE_VQ_BYTES; + const u64 zah_id_min = KVM_REG_ARM64_SME_ZAHREG(0, 0); + const u64 zah_id_max = KVM_REG_ARM64_SME_ZAHREG(za_h_max - 1, + SVE_NUM_SLICES - 1); + unsigned int reg_num; + + unsigned int reqoffset, reqlen; /* User-requested offset and length */ + unsigned int maxlen; /* Maximum permitted 
length */ + + size_t sme_state_size; + + reg_num = (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT; + + if (reg->id >= zah_id_min && reg->id <= zah_id_max) { + if (!vcpu_has_sme(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) + return -ENOENT; + + /* ZA is exposed as SVE vectors ZA.H[n] */ + reqoffset = ZA_SIG_ZAV_OFFSET(vq, reg_num) - + ZA_SIG_REGS_OFFSET; + reqlen = KVM_SVE_ZREG_SIZE; + maxlen = SVE_SIG_ZREG_SIZE(vq); + } else if (reg->id == KVM_REG_ARM64_SME_ZT_BASE) { + /* ZA is exposed as SVE vectors ZA.H[n] */ + if (!kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SME, SME2) || + (reg->id & SVE_REG_SLICE_MASK) > 0 || + reg_num > 0) + return -ENOENT; + + /* ZT0 is stored after ZA */ + reqlen = KVM_REG_ARM64_SME_ZTREG_SIZE; + maxlen = KVM_REG_ARM64_SME_ZTREG_SIZE; + } else { + return -EINVAL; + } + + sme_state_size = vcpu_sme_state_size(vcpu); + if (WARN_ON(!sme_state_size)) + return -EINVAL; + + region->koffset = array_index_nospec(reqoffset, sme_state_size); + region->klen = min(maxlen, reqlen); + region->upad = reqlen - region->klen; + + return 0; +} + +/* + * ZA is exposed as an array of horizontal vectors with the same + * format as SVE, mirroring the architecture's LDR ZA[Wv, offs], [Xn] + * instruction. + */ + static int get_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { + int ret; + struct vec_state_reg_region region; + char __user *uptr = (char __user *)reg->addr; + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ if (reg->id == KVM_REG_ARM64_SME_VLS) return get_sme_vls(vcpu, reg); - return -EINVAL; + /* Try to interpret reg ID as an architectural SME register... */ + ret = sme_reg_to_region(®ion, vcpu, reg); + if (ret) + return ret; + + if (!kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; + + /* + * None of the SME specific registers are accessible unless + * PSTATE.ZA is set. + */ + if (!vcpu_za_enabled(vcpu)) + return -EINVAL; + + if (copy_from_user(vcpu->arch.sme_state + region.koffset, uptr, + region.klen)) + return -EFAULT; + + return 0; } static int set_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { + int ret; + struct vec_state_reg_region region; + char __user *uptr = (char __user *)reg->addr; + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ if (reg->id == KVM_REG_ARM64_SME_VLS) return set_sme_vls(vcpu, reg); - return -EINVAL; + /* Try to interpret reg ID as an architectural SME register... */ + ret = sme_reg_to_region(®ion, vcpu, reg); + if (ret) + return ret; + + if (!kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; + + /* + * None of the SME specific registers are accessible unless + * PSTATE.ZA is set. 
+ */ + if (!vcpu_za_enabled(vcpu)) + return -EINVAL; + + if (copy_from_user(vcpu->arch.sme_state + region.koffset, uptr, + region.klen)) + return -EFAULT; + + return 0; } + int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) { return -EINVAL; From patchwork Fri Dec 20 16:46:49 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 852612 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5BC6C22A1EA; Fri, 20 Dec 2024 16:52:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713548; cv=none; b=LJkVXiGX/qCLaA4t5SGStOMGJ/LolVaGETq3wiDAYEEcNTTt7wiDhm9QJujkIcdzzR5k1kTiaLRsrE+Nn+9WpYZsX38/OIzxBw496IK7GQTbvXXRHIyj2Bdqy8MEAQSAd+ARHvP3PBX6Id/bXQmcigzbjClDv2s2sZmrH3t5o1M= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713548; c=relaxed/simple; bh=xfubkaGiJKFVfZEF1qDdQha6MtUZ0W2BsuPsXw245lc=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=J6OEPMtwudS2pcu30+AKgPhpyuEcM2E/5ovMJCTccOOzz0UatokszeaLIe1Z2b9qcNtH0WjXO2Zlhs2RVcEzZOkrZuas6uKW3V154MtjW4kQrhJ/Bg7ShJJHmpJj0uc9CI6En60Ov7OxbyZyxchD1+k7VMYVTElHO2dCn08obDA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=ebhH6sA4; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="ebhH6sA4" Received: by smtp.kernel.org (Postfix) with ESMTPSA id A8A04C4CEE2; Fri, 20 Dec 2024 16:52:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734713548; bh=xfubkaGiJKFVfZEF1qDdQha6MtUZ0W2BsuPsXw245lc=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=ebhH6sA4HrUQCgHSXZ1JpeaWxKMuuwEqhTpWV1q63LMujLJHAjp/QD/3Bv2+6cjec PtMBLMOFzwDA3Jhd3llNd20nfxvc76X3a2q2X3SBNt6Z+YNXXQXHViZ0ws04/GZbrn Nc8ehOYEDGMKi6Wpu0jlVJddfo3kxAXBdAdbjH/N553zqEc01I6WBECv25ZpmdbmV9 qn7E10EVpWSjjA4ZtRzUjGQUbcVu6BXbJZRlXKt/ovfJX/Vx10kn11Piv9Wb+PDehg U+8tnilWdXUo9aksu7KfBWlcs7WVWL60AL7KOEU/7blUEFa7BQdgs++CYGTGQ4hPbT E6bTXy8Nb9v6A== From: Mark Brown Date: Fri, 20 Dec 2024 16:46:49 +0000 Subject: [PATCH RFC v3 24/27] KVM: arm64: Handle SME exceptions Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20241220-kvm-arm64-sme-v3-24-05b018c1ffeb@kernel.org> References: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> In-Reply-To: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> To: Marc Zyngier , Oliver Upton , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: Dave Martin , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=openpgp-sha256; l=8144; i=broonie@kernel.org; h=from:subject:message-id; bh=xfubkaGiJKFVfZEF1qDdQha6MtUZ0W2BsuPsXw245lc=; 
b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBnZaBnh8VqpDxj8bGZCifT1jzEDQ3NgG6oWQFXihXN u/Bk0RqJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZ2WgZwAKCRAk1otyXVSH0IsEB/ 4xSMDne5g6rapiNEvGOPsW+plT92CVdursADh2x4yQ7vqI5Sm8FZWtedD6XdXhfqOK/6G60pmwzali Rs8AjjUhSu6SHh3+JAkyR2d71J+X+UapT7XFZ8mrSSqhXW8AyD8LIGmBBSWlDylQ7m3yhLZoBr1EK9 07pG9U/74XsUVbY/d5qK71kvRaUNOU9ft+22CtK67KqoM+kw9PyGLctvLqUWJN3TCn/X4n1jVY+CcI yXW6OlRqaSTRhFBAFqdlmU+ur9mFURwDa3U5G1QhgYNW7fKX6AHBX+t+hBmaaLooNfUbzWqoMeS374 3NwdStpFJazie5oRHwFLT+WQGRE7Hg X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB The access control for SME follows the same structure as for the base FP and SVE extensions, with control being via CPACR_ELx.SMEN and CPTR_EL2.TSM mirroring the equivalent FPSIMD and SVE controls in those registers.Add handling for these controls and exceptions mirroring the existing handling for FPSIMD and SVE. When the hardware is in streaming mode guest operations that are invalid in in streaming mode will generate SME exceptions. Since these exceptions may be routed to EL1 with no opportunity for the hypervisor to intercept them we already have code in kvm_arch_vcpu_load_fp() which ensures that we exit streaming mode before running the guest. This ensures that guests do not receive unexpected SME exceptions. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_emulate.h | 4 ++-- arch/arm64/kvm/handle_exit.c | 14 ++++++++++++++ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 6 ++++++ arch/arm64/kvm/hyp/nvhe/switch.c | 11 ++++++----- arch/arm64/kvm/hyp/vhe/switch.c | 21 ++++++++++++++++----- 5 files changed, 44 insertions(+), 12 deletions(-) diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index c7f3d14c1d69d9b3f7c1c22ad0919c278d2140c1..4c52945779a20604e18d96c78ff920abec9c4dfe 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -624,14 +624,14 @@ static __always_inline void __kvm_reset_cptr_el2(struct kvm *kvm) if (!kvm_has_sve(kvm) || !guest_owns_fp_regs()) val |= CPACR_ELx_ZEN; - if (cpus_have_final_cap(ARM64_SME)) + if (!kvm_has_sme(kvm) || !guest_owns_fp_regs()) val |= CPACR_ELx_SMEN; } else { val = CPTR_NVHE_EL2_RES1; if (kvm_has_sve(kvm) && guest_owns_fp_regs()) val |= CPTR_EL2_TZ; - if (!cpus_have_final_cap(ARM64_SME)) + if (kvm_has_sme(kvm) && guest_owns_fp_regs()) val |= CPTR_EL2_TSM; } diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index d7c2990e7c9ed671833d1011638adeb2c15efd06..48076d0e34038808a36caf2310e11519fd04dd82 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -224,6 +224,19 @@ static int handle_sve(struct kvm_vcpu *vcpu) return 1; } +/* + * Guest access to SME registers should be routed to this handler only + * when the system doesn't support SME. 
+ */ +static int handle_sme(struct kvm_vcpu *vcpu) +{ + if (guest_hyp_sme_traps_enabled(vcpu)) + return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu)); + + kvm_inject_undefined(vcpu); + return 1; +} + /* * Two possibilities to handle a trapping ptrauth instruction: * @@ -307,6 +320,7 @@ static exit_handle_fn arm_exit_handlers[] = { [ESR_ELx_EC_SVC64] = handle_svc, [ESR_ELx_EC_SYS64] = kvm_handle_sys_reg, [ESR_ELx_EC_SVE] = handle_sve, + [ESR_ELx_EC_SME] = handle_sme, [ESR_ELx_EC_ERET] = kvm_handle_eret, [ESR_ELx_EC_IABT_LOW] = kvm_handle_guest_abort, [ESR_ELx_EC_DABT_LOW] = kvm_handle_guest_abort, diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 7468d8516ecaa1370861e51ad4f65adbc01a5d97..481ecd757e0eba021dad6f3b268bb5235f803553 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -487,6 +487,12 @@ void handle_trap(struct kvm_cpu_context *host_ctxt) sve_cond_update_zcr_vq(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZCR_EL2); break; + case ESR_ELx_EC_SME: + cpacr_clear_set(0, CPACR_ELx_SMEN); + isb(); + sme_cond_update_smcr_vq(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SME]) - 1, + SYS_SMCR_EL2); + break; case ESR_ELx_EC_IABT_LOW: case ESR_ELx_EC_DABT_LOW: handle_host_mem_abort(host_ctxt); diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 0ebf84a9f9e2715793bcd08c494539be25b6870e..7d29585f1fa03ad6b0063a82dcfba4c5c0b1e4a5 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -46,15 +46,14 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu) val |= CPACR_ELx_FPEN; if (vcpu_has_sve(vcpu)) val |= CPACR_ELx_ZEN; + if (vcpu_has_sme(vcpu)) + val |= CPACR_ELx_SMEN; } } else { val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1; - /* - * Always trap SME since it's not supported in KVM. - * TSM is RES1 if SME isn't implemented. - */ - val |= CPTR_EL2_TSM; + if (!vcpu_has_sme(vcpu) || !guest_owns_fp_regs()) + val |= CPTR_EL2_TSM; if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs()) val |= CPTR_EL2_TZ; @@ -225,6 +224,7 @@ static const exit_handler_fn hyp_exit_handlers[] = { [ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32, [ESR_ELx_EC_SYS64] = kvm_hyp_handle_sysreg, [ESR_ELx_EC_SVE] = kvm_hyp_handle_fpsimd, + [ESR_ELx_EC_SME] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, @@ -236,6 +236,7 @@ static const exit_handler_fn pvm_exit_handlers[] = { [0 ... 
ESR_ELx_EC_MAX] = NULL, [ESR_ELx_EC_SYS64] = kvm_handle_pvm_sys64, [ESR_ELx_EC_SVE] = kvm_handle_pvm_restricted, + [ESR_ELx_EC_SME] = kvm_handle_pvm_restricted, [ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 80581b1c399595fd64d0ccada498edac322480a6..b2ce97d47b2715d8d7c7f4f365dc9b39f93b0673 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -83,6 +83,8 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu) val |= CPACR_ELx_FPEN; if (vcpu_has_sve(vcpu)) val |= CPACR_ELx_ZEN; + if (vcpu_has_sme(vcpu)) + val |= CPACR_ELx_SMEN; } else { __activate_traps_fpsimd32(vcpu); } @@ -126,6 +128,8 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu) val &= ~CPACR_ELx_FPEN; if (!(SYS_FIELD_GET(CPACR_ELx, ZEN, cptr) & BIT(0))) val &= ~CPACR_ELx_ZEN; + if (!(SYS_FIELD_GET(CPACR_ELx, SMEN, cptr) & BIT(0))) + val &= ~CPACR_ELx_SMEN; if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP)) val |= cptr & CPACR_ELx_E0POE; @@ -380,22 +384,28 @@ static bool kvm_hyp_handle_cpacr_el1(struct kvm_vcpu *vcpu, u64 *exit_code) return true; } -static bool kvm_hyp_handle_zcr_el2(struct kvm_vcpu *vcpu, u64 *exit_code) +static bool kvm_hyp_handle_vec_cr_el2(struct kvm_vcpu *vcpu, u64 *exit_code) { u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu)); if (!vcpu_has_nv(vcpu)) return false; - if (sysreg != SYS_ZCR_EL2) + switch (sysreg) { + case SYS_ZCR_EL2: + case SYS_SMCR_EL2: + break; + default: return false; + } if (guest_owns_fp_regs()) return false; /* - * ZCR_EL2 traps are handled in the slow path, with the expectation - * that the guest's FP context has already been loaded onto the CPU. + * ZCR_EL2 and SMCR_EL2 traps are handled in the slow path, + * with the expectation that the guest's FP context has + * already been loaded onto the CPU. * * Load the guest's FP context and unconditionally forward to the * slow path for handling (i.e. return false). @@ -412,7 +422,7 @@ static bool kvm_hyp_handle_sysreg_vhe(struct kvm_vcpu *vcpu, u64 *exit_code) if (kvm_hyp_handle_cpacr_el1(vcpu, exit_code)) return true; - if (kvm_hyp_handle_zcr_el2(vcpu, exit_code)) + if (kvm_hyp_handle_vec_cr_el2(vcpu, exit_code)) return true; return kvm_hyp_handle_sysreg(vcpu, exit_code); @@ -422,6 +432,7 @@ static const exit_handler_fn hyp_exit_handlers[] = { [0 ... 
ESR_ELx_EC_MAX] = NULL, [ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32, [ESR_ELx_EC_SYS64] = kvm_hyp_handle_sysreg_vhe, + [ESR_ELx_EC_SME] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_SVE] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, From patchwork Fri Dec 20 16:46:51 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 852611 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0E9EB22ACE4; Fri, 20 Dec 2024 16:52:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713556; cv=none; b=gOhnoFS87spH+jFQtnK5wI/wB68vj//3pb9cwE+Xa0KVHseiHIisOvpEoO1Fd0EKgz5qXwMcifUivSZTOdxFwlYLkMzRMGVS9M4A6oBIghq0f17RwAWMCVVWsCvkPt0RKwrUyAa+WIMexLmGGE+BcwW/J7fvB/wEiP+eVMrPXMc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734713556; c=relaxed/simple; bh=BMpX53mDkGtZeDpM3IZW9C72ZCWeUNb4UNWXTPztxR8=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=e0nFU9oFQc+Pzs6RHUGDa9AOLAnVbgtVn9+7qGeXpqr4rCozt+yhoqcb0sPpRL/3rL7OvZsVwswOgwDk+NAUteo5w1EqdnTHbIevzlRwzDtubZ1blnSZZwBKy4npy3hrNwGFT8gmIVI6MIfrftzCi87E5PlQkwSv9q/4QoOOL1A= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=qcTRhuQb; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="qcTRhuQb" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 28D7FC4CECD; Fri, 20 Dec 2024 16:52:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1734713555; bh=BMpX53mDkGtZeDpM3IZW9C72ZCWeUNb4UNWXTPztxR8=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=qcTRhuQbVKn4fohTb31NbiWO0D8Ehb1pTIFbnDGyigSlNolct15uwnKPDW30WN9g/ InDJmbxVv7nCvXjD3DVPbkwyQ9PRtje3M2q26cwrpyEYiFkdAL3E1neCI1t7okf9Tr BSavZ33MJoo9/AZwG3vJ+BLiGJz/THPM/fml2PWgCCONWWZkpR4S9e9TAivK6ho8Dw 6IC+mRZdSgCMsES2beqius/z1vXlB9wqzHOaowUjac44j/h8ExH7g1S1sTRFiwzSZI Y5fDpsttSsuN/t/es/H1me2muvwdzJKU6gWpXdgTR47Gbqe4MarxHsZaEuN1kM4Bqc sbX9SYiWzk5QQ== From: Mark Brown Date: Fri, 20 Dec 2024 16:46:51 +0000 Subject: [PATCH RFC v3 26/27] KVM: arm64: selftests: Add SME system registers to get-reg-list Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20241220-kvm-arm64-sme-v3-26-05b018c1ffeb@kernel.org> References: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> In-Reply-To: <20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.org> To: Marc Zyngier , Oliver Upton , Joey Gouly , Catalin Marinas , Suzuki K Poulose , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: Dave Martin , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=openpgp-sha256; l=1700; i=broonie@kernel.org; h=from:subject:message-id; 
bh=BMpX53mDkGtZeDpM3IZW9C72ZCWeUNb4UNWXTPztxR8=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBnZaBoEEPCkL2wDGSGuRsaO/ZJz1XX7TZ4bauI+Jah 6W6aufGJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZ2WgaAAKCRAk1otyXVSH0Nf/B/ 9CYyGIA/2XCRRw7P5W9LfDx3YHsUN+7rm9dYArODfp6zBm+X/B7PQtaGbBl6i/rCOqU/GOy5lM5/Kg BE+6jEgB3jtsFmK/vmMuF+xFfYh0FXCRRXK1scYdYQYYSYsuld9+hSuzzGsc0JZpJdUklIZpCAC+ez vXERWUzbLNJUOLAs9Vk4sGSVvzfl6VRGBSIQjA40ki/NIGBY2kLC0Mddl/6yWCCjKF1wVSqeckIcnJ TzgrBwUzP0ie5VKhvFF5ahucgFbB0fgju+RaDyyPyXXb32tQnPDQWj7n34wclFS5xc+RkwsE0NzVXq qLffnpbiwWYywwxNlTq3YrzCMKQUen X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB SME adds a number of new system registers, update get-reg-list to check for them based on the visibility of SME. Signed-off-by: Mark Brown --- tools/testing/selftests/kvm/aarch64/get-reg-list.c | 32 +++++++++++++++++++++- 1 file changed, 31 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/kvm/aarch64/get-reg-list.c b/tools/testing/selftests/kvm/aarch64/get-reg-list.c index d43fb3f49050ba3de950d19d56b45beefec9dbeb..3e9c19c4a0d658f349a7d476a90b877882815709 100644 --- a/tools/testing/selftests/kvm/aarch64/get-reg-list.c +++ b/tools/testing/selftests/kvm/aarch64/get-reg-list.c @@ -23,6 +23,18 @@ struct feature_id_reg { }; static struct feature_id_reg feat_id_regs[] = { + { + ARM64_SYS_REG(3, 0, 1, 2, 4), /* SMPRI_EL1 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, + { + ARM64_SYS_REG(3, 0, 1, 2, 6), /* SMCR_EL1 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, { ARM64_SYS_REG(3, 0, 2, 0, 3), /* TCR2_EL1 */ ARM64_SYS_REG(3, 0, 0, 7, 3), /* ID_AA64MMFR3_EL1 */ @@ -52,7 +64,25 @@ static struct feature_id_reg feat_id_regs[] = { ARM64_SYS_REG(3, 0, 0, 7, 3), /* ID_AA64MMFR3_EL1 */ 16, 1 - } + }, + { + ARM64_SYS_REG(3, 1, 0, 0, 6), /* SMIDR_EL1 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, + { + ARM64_SYS_REG(3, 3, 4, 2, 2), /* SVCR */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, + { + ARM64_SYS_REG(3, 3, 13, 0, 5), /* TPIDR2_EL0 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, }; bool filter_reg(__u64 reg)