From patchwork Fri May 19 14:57:29 2023
From: Sebastian Andrzej Siewior
To: Ard Biesheuvel
Cc: Pavel Pisa, linux-rt-users@vger.kernel.org, Pavel Hronek,
    Thomas Gleixner, Peter Zijlstra, Sebastian Andrzej Siewior
Subject: [PATCH 1/3] ARM: vfp: Provide vfp_lock() for VFP locking.
Date: Fri, 19 May 2023 16:57:29 +0200
Message-Id: <20230519145731.574867-2-bigeasy@linutronix.de>
In-Reply-To: <20230519145731.574867-1-bigeasy@linutronix.de>
References: <20230519145731.574867-1-bigeasy@linutronix.de>

kernel_neon_begin() uses local_bh_disable() to ensure exclusive access
to the VFP unit. This is broken on PREEMPT_RT because a BH-disabled
section remains preemptible there.

Introduce vfp_lock(), which uses local_bh_disable() on non-PREEMPT_RT
kernels and preempt_disable() on PREEMPT_RT. Since softirqs are always
processed in thread context on PREEMPT_RT, disabling preemption is
enough to ensure that the current context is not interrupted by
anything that uses the VFP.

Use it in kernel_neon_begin().
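A minimal caller-side sketch of the intended pattern, as if added to
vfpmodule.c for illustration (the wrapper function below is hypothetical;
vfp_lock()/vfp_unlock() are the helpers added by the diff that follows):

/* Hypothetical caller: the same bracket that kernel_neon_begin() /
 * kernel_neon_end() form after this patch. Everything between the two
 * calls may use the VFP/NEON register file without being interrupted by
 * another VFP user, on both !PREEMPT_RT and PREEMPT_RT.
 */
static void example_vfp_user(void)
{
	vfp_lock();
	/* ... touch VFP/NEON registers here ... */
	vfp_unlock();
}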
Signed-off-by: Sebastian Andrzej Siewior
---
 arch/arm/vfp/vfpmodule.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 349dcb944a937..57f9527d1e50e 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -54,6 +54,34 @@ static unsigned int __initdata VFP_arch;
  */
 union vfp_state *vfp_current_hw_state[NR_CPUS];
 
+/*
+ * Claim ownership of the VFP unit.
+ *
+ * The caller may change VFP registers until vfp_unlock() is called.
+ *
+ * local_bh_disable() is used to disable preemption and to disable VFP
+ * processing in softirq context. On PREEMPT_RT kernels local_bh_disable() is
+ * not sufficient because it only serializes soft interrupt related sections
+ * via a local lock, but stays preemptible. Disabling preemption is the right
+ * choice here as bottom half processing is always in thread context on RT
+ * kernels so it implicitly prevents bottom half processing as well.
+ */
+static void vfp_lock(void)
+{
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		local_bh_disable();
+	else
+		preempt_disable();
+}
+
+static void vfp_unlock(void)
+{
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		local_bh_enable();
+	else
+		preempt_enable();
+}
+
 /*
  * Is 'thread's most up to date state stored in this CPUs hardware?
  * Must be called from non-preemptible context.
@@ -738,7 +766,7 @@ void kernel_neon_begin(void)
 	unsigned int cpu;
 	u32 fpexc;
 
-	local_bh_disable();
+	vfp_lock();
 
 	/*
 	 * Kernel mode NEON is only allowed outside of hardirq context with
@@ -769,7 +797,7 @@ void kernel_neon_end(void)
 {
 	/* Disable the NEON/VFP unit. */
 	fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_EN);
-	local_bh_enable();
+	vfp_unlock();
 }
 EXPORT_SYMBOL(kernel_neon_end);
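Either way, a context that holds the new lock is not preemptible. A hedged
self-check sketch of that invariant, as if placed in vfpmodule.c (the function
is hypothetical; preemptible() and WARN_ON_ONCE() are the stock helpers):

/* Hypothetical sanity check, not part of the patch: after vfp_lock() the
 * current context must not be preemptible, regardless of whether the lock
 * maps to local_bh_disable() (!PREEMPT_RT) or preempt_disable() (PREEMPT_RT).
 */
static void example_vfp_lock_invariant(void)
{
	vfp_lock();
	WARN_ON_ONCE(preemptible());	/* holds on both configurations */
	vfp_unlock();
}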
From patchwork Fri May 19 14:57:30 2023
From: Sebastian Andrzej Siewior
To: Ard Biesheuvel
Cc: Pavel Pisa, linux-rt-users@vger.kernel.org, Pavel Hronek,
    Thomas Gleixner, Peter Zijlstra, Sebastian Andrzej Siewior
Subject: [PATCH 2/3] ARM: vfp: Use vfp_lock() in vfp_sync_hwstate().
Date: Fri, 19 May 2023 16:57:30 +0200
Message-Id: <20230519145731.574867-3-bigeasy@linutronix.de>
In-Reply-To: <20230519145731.574867-1-bigeasy@linutronix.de>
References: <20230519145731.574867-1-bigeasy@linutronix.de>

vfp_sync_hwstate() uses preempt_disable() followed by local_bh_disable()
to ensure that it won't get interrupted while checking the VFP state.
This does not work on PREEMPT_RT because there local_bh_disable()
serializes the section with a per-CPU lock that turns into a sleeping
lock, and a sleeping lock must not be acquired with preemption disabled.

Use vfp_lock() to synchronize the access.

Signed-off-by: Sebastian Andrzej Siewior
---
 arch/arm/vfp/vfpmodule.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 57f9527d1e50e..543dc7f5a27e3 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -542,11 +542,9 @@ static inline void vfp_pm_init(void) { }
  */
 void vfp_sync_hwstate(struct thread_info *thread)
 {
-	unsigned int cpu = get_cpu();
+	vfp_lock();
 
-	local_bh_disable();
-
-	if (vfp_state_in_hw(cpu, thread)) {
+	if (vfp_state_in_hw(raw_smp_processor_id(), thread)) {
 		u32 fpexc = fmrx(FPEXC);
 
 		/*
@@ -557,8 +555,7 @@ void vfp_sync_hwstate(struct thread_info *thread)
 		fmxr(FPEXC, fpexc);
 	}
 
-	local_bh_enable();
-	put_cpu();
+	vfp_unlock();
 }
 
 /* Ensure that the thread reloads the hardware VFP state on the next use.
  */
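The switch from get_cpu() to raw_smp_processor_id() is safe because vfp_lock()
already keeps the task on its CPU. A hedged sketch of that reasoning, as if
added to vfpmodule.c (the function name is illustrative; raw_smp_processor_id()
and pr_debug() are the stock helpers):

/* Illustrative only: vfp_lock() disables preemption on both configurations
 * (via local_bh_disable() on !PREEMPT_RT, via preempt_disable() on
 * PREEMPT_RT), so the task cannot migrate and the CPU number read below
 * stays valid until vfp_unlock().
 */
static void example_stable_cpu_under_vfp_lock(void)
{
	unsigned int cpu;

	vfp_lock();
	cpu = raw_smp_processor_id();	/* cannot change until vfp_unlock() */
	pr_debug("VFP state checked on CPU %u\n", cpu);
	vfp_unlock();
}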
From patchwork Fri May 19 14:57:31 2023
From: Sebastian Andrzej Siewior
To: Ard Biesheuvel
Cc: Pavel Pisa, linux-rt-users@vger.kernel.org, Pavel Hronek,
    Thomas Gleixner, Peter Zijlstra, Sebastian Andrzej Siewior
Subject: [PATCH 3/3] ARM: vfp: Use vfp_lock() in vfp_entry().
Date: Fri, 19 May 2023 16:57:31 +0200
Message-Id: <20230519145731.574867-4-bigeasy@linutronix.de>
In-Reply-To: <20230519145731.574867-1-bigeasy@linutronix.de>
References: <20230519145731.574867-1-bigeasy@linutronix.de>

vfp_entry() is invoked from the exception handler and is fully
preemptible. It uses local_bh_disable() to remain uninterrupted while
checking the VFP state. This does not work on PREEMPT_RT because
local_bh_disable() only serializes the section against softirq
processing while the context remains fully preemptible.

Use vfp_lock() for uninterrupted access.

VFP_bounce() is invoked from within vfp_entry() and may send a signal.
Sending a signal uses a spinlock_t, which becomes a sleeping lock on
PREEMPT_RT and must not be acquired within a preempt-disabled section.
Move the vfp_raise_sigfpe() calls outside of the preempt-disabled
section.
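The shape of that change, distilled into a hedged sketch (the function below
is hypothetical and heavily simplified; vfp_unlock() and vfp_raise_sigfpe()
are the functions the diff touches):

/* Hypothetical, simplified shape of the VFP_bounce() change below: the
 * si_code is collected while the VFP state is still protected, and the
 * SIGFPE is delivered only after vfp_unlock(), because sending a signal
 * takes a spinlock_t, which sleeps on PREEMPT_RT.
 */
static void example_deferred_sigfpe(struct pt_regs *regs, int si_code)
{
	/* entered with vfp_lock() held, taken earlier in vfp_entry() */

	vfp_unlock();		/* leave the preempt-disabled section first */
	if (si_code)
		vfp_raise_sigfpe(si_code, regs);	/* may sleep on RT, now legal */
}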
Signed-off-by: Sebastian Andrzej Siewior
---
 arch/arm/vfp/vfphw.S     |  7 ++-----
 arch/arm/vfp/vfpmodule.c | 30 ++++++++++++++++++++----------
 2 files changed, 22 insertions(+), 15 deletions(-)

diff --git a/arch/arm/vfp/vfphw.S b/arch/arm/vfp/vfphw.S
index a4610d0f32152..860512042bc21 100644
--- a/arch/arm/vfp/vfphw.S
+++ b/arch/arm/vfp/vfphw.S
@@ -180,10 +180,7 @@ ENTRY(vfp_support_entry)
 	@ always subtract 4 from the following
 	@ instruction address.
 
-local_bh_enable_and_ret:
-	adr	r0, .
-	mov	r1, #SOFTIRQ_DISABLE_OFFSET
-	b	__local_bh_enable_ip	@ tail call
+	b	vfp_exit		@ tail call
 
 look_for_VFP_exceptions:
 	@ Check for synchronous or asynchronous exception
@@ -206,7 +203,7 @@ ENTRY(vfp_support_entry)
 	@ not recognised by VFP
 
 	DBGSTR	"not VFP"
-	b	local_bh_enable_and_ret
+	b	vfp_exit		@ tail call
 
 process_exception:
 	DBGSTR	"bounce"
diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 543dc7f5a27e3..a2745d17e9c71 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -267,7 +267,7 @@ static void vfp_panic(char *reason, u32 inst)
 /*
  * Process bitmask of exception conditions.
  */
-static void vfp_raise_exceptions(u32 exceptions, u32 inst, u32 fpscr, struct pt_regs *regs)
+static int vfp_raise_exceptions(u32 exceptions, u32 inst, u32 fpscr)
 {
 	int si_code = 0;
 
@@ -275,8 +275,7 @@
 	if (exceptions == VFP_EXCEPTION_ERROR) {
 		vfp_panic("unhandled bounce", inst);
-		vfp_raise_sigfpe(FPE_FLTINV, regs);
-		return;
+		return FPE_FLTINV;
 	}
 
 	/*
@@ -304,8 +303,7 @@
 	RAISE(FPSCR_OFC, FPSCR_OFE, FPE_FLTOVF);
 	RAISE(FPSCR_IOC, FPSCR_IOE, FPE_FLTINV);
 
-	if (si_code)
-		vfp_raise_sigfpe(si_code, regs);
+	return si_code;
 }
 
 /*
@@ -350,6 +348,8 @@ static u32 vfp_emulate_instruction(u32 inst, u32 fpscr, struct pt_regs *regs)
 void VFP_bounce(u32 trigger, u32 fpexc, struct pt_regs *regs)
 {
 	u32 fpscr, orig_fpscr, fpsid, exceptions;
+	int si_code2 = 0;
+	int si_code = 0;
 
 	pr_debug("VFP: bounce: trigger %08x fpexc %08x\n", trigger, fpexc);
 
@@ -397,7 +397,7 @@ void VFP_bounce(u32 trigger, u32 fpexc, struct pt_regs *regs)
 		 * unallocated VFP instruction but with FPSCR.IXE set and not
 		 * on VFP subarch 1.
 		 */
-		vfp_raise_exceptions(VFP_EXCEPTION_ERROR, trigger, fpscr, regs);
+		si_code = vfp_raise_exceptions(VFP_EXCEPTION_ERROR, trigger, fpscr);
 		goto exit;
 	}
 
@@ -422,7 +422,7 @@ void VFP_bounce(u32 trigger, u32 fpexc, struct pt_regs *regs)
 	 */
 	exceptions = vfp_emulate_instruction(trigger, fpscr, regs);
 	if (exceptions)
-		vfp_raise_exceptions(exceptions, trigger, orig_fpscr, regs);
+		si_code2 = vfp_raise_exceptions(exceptions, trigger, orig_fpscr);
 
 	/*
 	 * If there isn't a second FP instruction, exit now. Note that
@@ -441,9 +441,14 @@ void VFP_bounce(u32 trigger, u32 fpexc, struct pt_regs *regs)
 emulate:
 	exceptions = vfp_emulate_instruction(trigger, orig_fpscr, regs);
 	if (exceptions)
-		vfp_raise_exceptions(exceptions, trigger, orig_fpscr, regs);
+		si_code = vfp_raise_exceptions(exceptions, trigger, orig_fpscr);
+
 exit:
-	local_bh_enable();
+	vfp_unlock();
+	if (si_code2)
+		vfp_raise_sigfpe(si_code2, regs);
+	if (si_code)
+		vfp_raise_sigfpe(si_code, regs);
 }
 
 static void vfp_enable(void *unused)
@@ -684,10 +689,15 @@ asmlinkage void vfp_entry(u32 trigger, struct thread_info *ti, u32 resume_pc,
 	if (unlikely(!have_vfp))
 		return;
 
-	local_bh_disable();
+	vfp_lock();
 	vfp_support_entry(trigger, ti, resume_pc, resume_return_address);
 }
 
+asmlinkage void vfp_exit(void)
+{
+	vfp_unlock();
+}
+
 #ifdef CONFIG_KERNEL_MODE_NEON
 static int vfp_kmode_exception(struct pt_regs *regs, unsigned int instr)
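Taken together, the series keeps the lock/unlock pairing balanced across the
C and assembly paths: vfp_entry() takes vfp_lock(), the assembly fast paths
tail-call vfp_exit() to drop it, and VFP_bounce() drops it itself before any
signal is raised. A hedged sketch of a kernel-mode NEON user sitting on top
of that (the function is hypothetical; kernel_neon_begin()/kernel_neon_end()
and may_use_simd() are the kernel APIs assumed here):

#include <asm/neon.h>
#include <asm/simd.h>

/* Hypothetical consumer of the series: kernel-mode NEON bracketed by
 * kernel_neon_begin()/kernel_neon_end(), which now map onto
 * vfp_lock()/vfp_unlock() (patch 1/3). On PREEMPT_RT the section only
 * disables preemption, so softirq threads are not blocked needlessly.
 */
static void example_neon_consumer(void)
{
	if (!may_use_simd())
		return;		/* e.g. hardirq context: fall back to scalar code */

	kernel_neon_begin();
	/* ... NEON/VFP register usage ... */
	kernel_neon_end();
}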