From patchwork Fri Dec 10 18:41:16 2021
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 523798
From: Mark Brown
To: Catalin Marinas, Will Deacon, Shuah Khan, Shuah Khan
Cc: Alan Hayward, Luis Machado, Salil Akerkar, Basant Kumar Dwivedi,
 Szabolcs Nagy, linux-arm-kernel@lists.infradead.org,
 linux-kselftest@vger.kernel.org, Mark Brown
Subject: [PATCH v7 20/37] arm64/sme: Implement ZA context switching
Date: Fri, 10 Dec 2021 18:41:16 +0000
Message-Id: <20211210184133.320748-21-broonie@kernel.org>
In-Reply-To: <20211210184133.320748-1-broonie@kernel.org>
References: <20211210184133.320748-1-broonie@kernel.org>
X-Mailing-List: linux-kselftest@vger.kernel.org

Allocate space for storing ZA on first access to SME and use that to save
and restore ZA state when context switching.
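As a minimal illustrative sketch (not code from this patch, and using
hypothetical helper names): ZA is architecturally a vl x vl byte matrix,
where vl is the task's SME vector length in bytes, so the buffer allocated
on first access needs vl * vl bytes. Something along these lines, assuming
the za_state field and task_get_sme_vl() added by this series:

#include <linux/sched.h>
#include <linux/slab.h>

/*
 * Hedged sketch only: za_storage_size() and sme_alloc_za() are
 * illustrative names, not helpers added by this patch.
 */
static inline size_t za_storage_size(unsigned int vl)
{
	/* vl rows of vl bytes each */
	return (size_t)vl * vl;
}

static void sme_alloc_za(struct task_struct *task)
{
	unsigned int vl = task_get_sme_vl(task);

	/* Allocate the ZA backing store on first use, zeroed */
	if (!task->thread.za_state)
		task->thread.za_state = kzalloc(za_storage_size(vl),
						GFP_KERNEL);
}

The allocation itself is not part of this diff; the patch below only saves
into and restores from that buffer.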
We do this by using the vector form of the LDR and STR ZA instructions,
which do not require streaming mode and carry an implementation
recommendation that they avoid contention issues in shared SMCU
implementations. Since ZA is architecturally guaranteed to be zeroed when
it is enabled we do not need to explicitly zero ZA: either we will be
restoring from a saved copy, or we will be trapping on first use of SME,
in which case we know that ZA must currently be disabled.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/fpsimd.h       |  4 +++-
 arch/arm64/include/asm/fpsimdmacros.h | 22 ++++++++++++++++++++++
 arch/arm64/include/asm/processor.h    |  1 +
 arch/arm64/kernel/entry-fpsimd.S      | 22 ++++++++++++++++++++++
 arch/arm64/kernel/fpsimd.c            | 17 +++++++++++------
 arch/arm64/kvm/fpsimd.c               |  2 +-
 6 files changed, 60 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 2f7b64797736..de7947a71618 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -47,7 +47,7 @@ extern void fpsimd_update_current_state(struct user_fpsimd_state const *state);
 
 extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state,
 				     void *sve_state, unsigned int sve_vl,
-				     unsigned int sme_vl);
+				     void *za_state, unsigned int sme_vl);
 
 extern void fpsimd_flush_task_state(struct task_struct *target);
 extern void fpsimd_save_and_flush_cpu_state(void);
@@ -90,6 +90,8 @@ extern void sve_flush_live(bool flush_ffr, unsigned long vq_minus_1);
 extern unsigned int sve_get_vl(void);
 extern void sve_set_vq(unsigned long vq_minus_1);
 extern void sme_set_vq(unsigned long vq_minus_1);
+extern void sme_save_state(void *state, unsigned int vq_minus_1);
+extern void sme_load_state(void const *state, unsigned int vq_minus_1);
 
 struct arm64_cpu_capabilities;
 extern void sve_kernel_enable(const struct arm64_cpu_capabilities *__unused);
diff --git a/arch/arm64/include/asm/fpsimdmacros.h b/arch/arm64/include/asm/fpsimdmacros.h
index 57bcd9ec4cb9..113c37b1a8e9 100644
--- a/arch/arm64/include/asm/fpsimdmacros.h
+++ b/arch/arm64/include/asm/fpsimdmacros.h
@@ -309,3 +309,25 @@
 	ldr	w\nxtmp, [\xpfpsr, #4]
 	msr	fpcr, x\nxtmp
 .endm
+
+.macro sme_save_za nxbase, xvl, nw
+	mov	w\nw, #0
+
+423:
+	_sme_str_zav \nw, \nxbase
+	add	x\nxbase, x\nxbase, \xvl
+	add	x\nw, x\nw, #1
+	cmp	\xvl, x\nw
+	bne	423b
+.endm
+
+.macro sme_load_za nxbase, xvl, nw
+	mov	w\nw, #0
+
+423:
+	_sme_ldr_zav \nw, \nxbase
+	add	x\nxbase, x\nxbase, \xvl
+	add	x\nw, x\nw, #1
+	cmp	\xvl, x\nw
+	bne	423b
+.endm
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 6e2af9de153c..5a5c5edd76df 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -153,6 +153,7 @@ struct thread_struct {
 
 	unsigned int		fpsimd_cpu;
 	void			*sve_state;	/* SVE registers, if any */
+	void			*za_state;	/* ZA register, if any */
 	unsigned int		vl[ARM64_VEC_MAX];	/* vector length */
 	unsigned int		vl_onexec[ARM64_VEC_MAX]; /* vl after next exec */
 	unsigned long		fault_address;	/* fault info */
diff --git a/arch/arm64/kernel/entry-fpsimd.S b/arch/arm64/kernel/entry-fpsimd.S
index b06bf42e7856..6a224cfcfd50 100644
--- a/arch/arm64/kernel/entry-fpsimd.S
+++ b/arch/arm64/kernel/entry-fpsimd.S
@@ -94,4 +94,26 @@ SYM_FUNC_START(sme_set_vq)
 	ret
 SYM_FUNC_END(sme_set_vq)
 
+/*
+ * Save the SME state
+ *
+ * x0 - pointer to buffer for state
+ * x1 - Bytes per vector
+ */
+SYM_FUNC_START(sme_save_state)
+	sme_save_za 0, x1, 12
+	ret
+SYM_FUNC_END(sme_save_state)
+
+/*
+ * Load the SME state
+ *
+ * x0 - pointer to buffer for state
for state + * x1 - bytes per vector + */ +SYM_FUNC_START(sme_load_state) + sme_load_za 0, x1, 12 + ret +SYM_FUNC_END(sme_load_state) + #endif /* CONFIG_ARM64_SME */ diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 39a4dd0a6257..2582374b7e48 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -117,6 +117,7 @@ struct fpsimd_last_state_struct { struct user_fpsimd_state *st; void *sve_state; + void *za_state; unsigned int sve_vl; unsigned int sme_vl; }; @@ -382,11 +383,15 @@ static void task_fpsimd_load(void) if (system_supports_sme()) { unsigned long sme_vl = task_get_sme_vl(current); + /* Ensure VL is set up for restoring data */ if (test_thread_flag(TIF_SME)) sme_set_vq(sve_vq_from_vl(sme_vl) - 1); write_sysreg_s(current->thread.svcr, SYS_SVCR_EL0); + if (thread_za_enabled(¤t->thread)) + sme_load_state(current->thread.za_state, sme_vl); + if (thread_sm_enabled(¤t->thread)) { restore_sve_regs = true; restore_ffr = system_supports_fa64(); @@ -429,11 +434,8 @@ static void fpsimd_save(void) if (system_supports_sme()) { current->thread.svcr = read_sysreg_s(SYS_SVCR_EL0); - if (thread_za_enabled(¤t->thread)) { - /* ZA state managment is not implemented yet */ - force_signal_inject(SIGKILL, SI_KERNEL, 0, 0); - return; - } + if (thread_za_enabled(¤t->thread)) + sme_save_state(last->za_state, last->sme_vl); /* If we are in streaming mode override regular SVE. */ if (thread_sm_enabled(¤t->thread)) { @@ -1481,6 +1483,7 @@ static void fpsimd_bind_task_to_cpu(void) WARN_ON(!system_supports_fpsimd()); last->st = ¤t->thread.uw.fpsimd_state; last->sve_state = current->thread.sve_state; + last->za_state = current->thread.za_state; last->sve_vl = task_get_sve_vl(current); last->sme_vl = task_get_sme_vl(current); current->thread.fpsimd_cpu = smp_processor_id(); @@ -1497,7 +1500,8 @@ static void fpsimd_bind_task_to_cpu(void) } void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state, - unsigned int sve_vl, unsigned int sme_vl) + unsigned int sve_vl, void *za_state, + unsigned int sme_vl) { struct fpsimd_last_state_struct *last = this_cpu_ptr(&fpsimd_last_state); @@ -1507,6 +1511,7 @@ void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state, last->st = st; last->sve_state = sve_state; + last->za_state = za_state; last->sve_vl = sve_vl; last->sme_vl = sme_vl; } diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index d96871002081..007b2e8b9ae9 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -100,7 +100,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.fp_regs, vcpu->arch.sve_state, vcpu->arch.sve_max_vl, - 0); + NULL, 0); clear_thread_flag(TIF_FOREIGN_FPSTATE); update_thread_flag(TIF_SVE, vcpu_has_sve(vcpu));