From patchwork Wed Nov 24 16:12:19 2021
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 517420
From: Sebastian Andrzej Siewior
To: Steven Rostedt
Cc: linux-kernel@vger.kernel.org, linux-rt-users, Thomas Gleixner,
    Carsten Emde, John Kacur, Daniel Wagner, Tom Zanussi,
    "Srivatsa S . Bhat", Clark Williams, Maarten Lankhorst,
    bigeasy@linutronix.de
Subject: [PATCH RT 1/3] irq_work: Allow irq_work_sync() to sleep if
 irq_work() no IRQ support.
Date: Wed, 24 Nov 2021 17:12:19 +0100
Message-Id: <20211124161221.2224005-2-bigeasy@linutronix.de>
In-Reply-To: <20211124161221.2224005-1-bigeasy@linutronix.de>
References: <20211123103755.12d4b776@gandalf.local.home>
 <20211124161221.2224005-1-bigeasy@linutronix.de>

irq_work() triggers an interrupt instantly if the architecture supports it.
Otherwise the work is processed on the next timer tick, so in the worst case
irq_work_sync() can spin for up to a jiffy.

irq_work_sync() is usually used in tear-down context, which is fully
preemptible. Based on review, irq_work_sync() is invoked from preemptible
context and there is only one waiter at a time. This qualifies it to use
rcuwait for synchronisation (a short sketch of the rcuwait pattern follows
below).

Let irq_work_sync() synchronize with rcuwait if the architecture processes
irq_work via the timer tick.
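
For readers unfamiliar with rcuwait, here is a minimal, self-contained sketch
of the single-waiter wait/wake pattern the change relies on. The my_ctx/my_*
names are invented for illustration and are not part of the patch.

/* Illustrative sketch only -- not part of the patch. */
#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/rcuwait.h>
#include <linux/sched.h>

struct my_ctx {
        atomic_t busy;
        struct rcuwait done;
};

static void my_ctx_init(struct my_ctx *ctx)
{
        atomic_set(&ctx->busy, 1);
        rcuwait_init(&ctx->done);
}

/* Completion side, e.g. the irq_work callback path. */
static void my_ctx_complete(struct my_ctx *ctx)
{
        atomic_set(&ctx->busy, 0);
        /* Wakes the one task sleeping in rcuwait_wait_event(), if any. */
        rcuwait_wake_up(&ctx->done);
}

/* Tear-down side: preemptible context, at most one waiter at a time. */
static void my_ctx_sync(struct my_ctx *ctx)
{
        might_sleep();
        rcuwait_wait_event(&ctx->done, !atomic_read(&ctx->busy),
                           TASK_UNINTERRUPTIBLE);
}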
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lkml.kernel.org/r/20211006111852.1514359-3-bigeasy@linutronix.de
---
 include/linux/irq_work.h | 10 +++++++++-
 kernel/irq_work.c        | 10 ++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
index f941f2d7d71ce..3c6d3a96bca0f 100644
--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -3,6 +3,7 @@
 #define _LINUX_IRQ_WORK_H

 #include
+#include <linux/rcuwait.h>

 /*
  * An entry can be in one of four states:
@@ -22,6 +23,7 @@ struct irq_work {
                };
        };
        void (*func)(struct irq_work *);
+       struct rcuwait irqwait;
 };

 static inline
@@ -29,13 +31,19 @@
 void init_irq_work(struct irq_work *work, void (*func)(struct irq_work *))
 {
        atomic_set(&work->flags, 0);
        work->func = func;
+       rcuwait_init(&work->irqwait);
 }

 #define DEFINE_IRQ_WORK(name, _f) struct irq_work name = {    \
        .flags = ATOMIC_INIT(0),                               \
-       .func = (_f)                                           \
+       .func = (_f),                                          \
+       .irqwait = __RCUWAIT_INITIALIZER(irqwait),             \
 }

+static inline bool irq_work_is_busy(struct irq_work *work)
+{
+       return atomic_read(&work->flags) & IRQ_WORK_BUSY;
+}
 bool irq_work_queue(struct irq_work *work);
 bool irq_work_queue_on(struct irq_work *work, int cpu);

diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 8183d30e1bb1c..8969aff790e21 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -165,6 +165,9 @@ void irq_work_single(void *arg)
         */
        flags &= ~IRQ_WORK_PENDING;
        (void)atomic_cmpxchg(&work->flags, flags, flags & ~IRQ_WORK_BUSY);
+
+       if (!arch_irq_work_has_interrupt())
+               rcuwait_wake_up(&work->irqwait);
 }

 static void irq_work_run_list(struct llist_head *list)
@@ -231,6 +234,13 @@ void irq_work_tick_soft(void)
 void irq_work_sync(struct irq_work *work)
 {
        lockdep_assert_irqs_enabled();
+       might_sleep();
+
+       if (!arch_irq_work_has_interrupt()) {
+               rcuwait_wait_event(&work->irqwait, !irq_work_is_busy(work),
+                                  TASK_UNINTERRUPTIBLE);
+               return;
+       }

        while (atomic_read(&work->flags) & IRQ_WORK_BUSY)
                cpu_relax();

From patchwork Wed Nov 24 16:12:20 2021
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 520089
From: Sebastian Andrzej Siewior
To: Steven Rostedt
Cc: linux-kernel@vger.kernel.org, linux-rt-users, Thomas Gleixner,
    Carsten Emde, John Kacur, Daniel Wagner, Tom Zanussi,
    "Srivatsa S . Bhat", Clark Williams, Maarten Lankhorst,
    bigeasy@linutronix.de
Subject: [PATCH RT 2/3] irq_work: Handle some irq_work in a per-CPU thread on PREEMPT_RT
Date: Wed, 24 Nov 2021 17:12:20 +0100
Message-Id: <20211124161221.2224005-3-bigeasy@linutronix.de>
In-Reply-To: <20211124161221.2224005-1-bigeasy@linutronix.de>
References: <20211123103755.12d4b776@gandalf.local.home>
 <20211124161221.2224005-1-bigeasy@linutronix.de>

The irq_work callback is invoked in hard IRQ context. By default all
callbacks are scheduled for invocation right away (if supported by the
architecture), except for the ones marked IRQ_WORK_LAZY, which are delayed
until the next timer tick.

Some of the callbacks acquire locks (spinlock_t, rwlock_t) which are
converted into sleeping locks on PREEMPT_RT and must not be acquired in hard
IRQ context. Converting those locks into variants that can be acquired in
this context would lead to other problems, such as increased latencies if
everything in the chain uses IRQ-off locks. Nor would it solve all the
issues: one callback has been observed to invoke kref_put(), whose release
callback invokes kfree(), which cannot be called from hardirq context.

Some callbacks are required to run in hardirq context even on PREEMPT_RT to
work properly. This includes, for instance, the NO_HZ callback, which needs
to observe the idle context. The callbacks that must run in hardirq context
have already been marked. Use this information to split the callbacks onto
two lists on PREEMPT_RT (a usage sketch follows below):

- lazy_list
  Work items which are not marked with IRQ_WORK_HARD_IRQ are added to this
  list. Callbacks on this list are invoked from a per-CPU thread. The
  handler here may acquire sleeping locks such as spinlock_t and invoke
  kfree().

- raised_list
  Work items which are marked with IRQ_WORK_HARD_IRQ are added to this
  list. They are invoked in hardirq context and must not acquire any
  sleeping locks.

The wake-up of the per-CPU thread occurs from the irq_work handler in
hardirq context. The thread runs with the lowest RT priority to ensure it
runs before any SCHED_OTHER tasks do.

[bigeasy: melt tglx's irq_work_tick_soft() which splits irq_work_tick()
 into a hard and soft variant. Collected fixes over time from Steven
 Rostedt and Mike Galbraith. Move to per-CPU threads instead of softirq
 as suggested by PeterZ.]
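
To illustrate how the two lists get populated on PREEMPT_RT, a minimal usage
sketch based on the initializers this patch adds (IRQ_WORK_INIT() and
IRQ_WORK_INIT_HARD()). The my_* names and the empty callbacks are made up for
the example and are not taken from the patch.

#include <linux/irq_work.h>

/* Default (no IRQ_WORK_HARD_IRQ): on PREEMPT_RT this item lands on
 * lazy_list and its callback runs from the per-CPU irq_work/%u thread,
 * so taking a spinlock_t or calling kfree() here is fine.
 */
static void my_stats_cb(struct irq_work *work)
{
}

/* IRQ_WORK_HARD_IRQ: stays on raised_list and still runs in hardirq
 * context even on PREEMPT_RT; it must not take sleeping locks.
 */
static void my_tick_cb(struct irq_work *work)
{
}

static struct irq_work my_stats_work = IRQ_WORK_INIT(my_stats_cb);
static struct irq_work my_tick_work = IRQ_WORK_INIT_HARD(my_tick_cb);

static void my_raise_events(void)
{
        irq_work_queue(&my_stats_work); /* per-CPU thread on PREEMPT_RT */
        irq_work_queue(&my_tick_work);  /* hardirq context, even on RT */
}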
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lkml.kernel.org/r/20211007092646.uhshe3ut2wkrcfzv@linutronix.de
---
 include/linux/irq_work.h |  16 +++--
 kernel/irq_work.c        | 127 +++++++++++++++++++++++++++++----------
 kernel/time/timer.c      |   2 -
 3 files changed, 104 insertions(+), 41 deletions(-)

diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
index 3c6d3a96bca0f..f551ba9c99d40 100644
--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -40,6 +40,16 @@ void init_irq_work(struct irq_work *work, void (*func)(struct irq_work *))
        .irqwait = __RCUWAIT_INITIALIZER(irqwait),             \
 }

+#define __IRQ_WORK_INIT(_func, _flags) (struct irq_work){     \
+       .flags = ATOMIC_INIT(_flags),                          \
+       .func = (_func),                                       \
+       .irqwait = __RCUWAIT_INITIALIZER(irqwait),             \
+}
+
+#define IRQ_WORK_INIT(_func) __IRQ_WORK_INIT(_func, 0)
+#define IRQ_WORK_INIT_LAZY(_func) __IRQ_WORK_INIT(_func, IRQ_WORK_LAZY)
+#define IRQ_WORK_INIT_HARD(_func) __IRQ_WORK_INIT(_func, IRQ_WORK_HARD_IRQ)
+
 static inline bool irq_work_is_busy(struct irq_work *work)
 {
        return atomic_read(&work->flags) & IRQ_WORK_BUSY;
@@ -63,10 +73,4 @@ static inline void irq_work_run(void) { }
 static inline void irq_work_single(void *arg) { }
 #endif

-#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT)
-void irq_work_tick_soft(void);
-#else
-static inline void irq_work_tick_soft(void) { }
-#endif
-
 #endif /* _LINUX_IRQ_WORK_H */

diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 8969aff790e21..03d09d779ee12 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -18,12 +18,37 @@
 #include
 #include
 #include
+#include <linux/smpboot.h>
 #include
 #include

 static DEFINE_PER_CPU(struct llist_head, raised_list);
 static DEFINE_PER_CPU(struct llist_head, lazy_list);
+static DEFINE_PER_CPU(struct task_struct *, irq_workd);
+
+static void wake_irq_workd(void)
+{
+       struct task_struct *tsk = __this_cpu_read(irq_workd);
+
+       if (!llist_empty(this_cpu_ptr(&lazy_list)) && tsk)
+               wake_up_process(tsk);
+}
+
+#ifdef CONFIG_SMP
+static void irq_work_wake(struct irq_work *entry)
+{
+       wake_irq_workd();
+}
+
+static DEFINE_PER_CPU(struct irq_work, irq_work_wakeup) =
+       IRQ_WORK_INIT_HARD(irq_work_wake);
+#endif
+
+static int irq_workd_should_run(unsigned int cpu)
+{
+       return !llist_empty(this_cpu_ptr(&lazy_list));
+}

 /*
  * Claim the entry so that no one else will poke at it.
@@ -54,20 +79,28 @@ void __weak arch_irq_work_raise(void)
 static void __irq_work_queue_local(struct irq_work *work)
 {
        struct llist_head *list;
-       bool lazy_work, realtime = IS_ENABLED(CONFIG_PREEMPT_RT);
+       bool rt_lazy_work = false;
+       bool lazy_work = false;
+       int work_flags;

-       lazy_work = atomic_read(&work->flags) & IRQ_WORK_LAZY;
+       work_flags = atomic_read(&work->flags);
+       if (work_flags & IRQ_WORK_LAZY)
+               lazy_work = true;
+       else if (IS_ENABLED(CONFIG_PREEMPT_RT) &&
+                !(work_flags & IRQ_WORK_HARD_IRQ))
+               rt_lazy_work = true;

-       /* If the work is "lazy", handle it from next tick if any */
-       if (lazy_work || (realtime && !(atomic_read(&work->flags) & IRQ_WORK_HARD_IRQ)))
+       if (lazy_work || rt_lazy_work)
                list = this_cpu_ptr(&lazy_list);
        else
                list = this_cpu_ptr(&raised_list);

-       if (llist_add(&work->llnode, list)) {
-               if (!lazy_work || tick_nohz_tick_stopped())
-                       arch_irq_work_raise();
-       }
+       if (!llist_add(&work->llnode, list))
+               return;
+
+       /* If the work is "lazy", handle it from next tick if any */
+       if (!lazy_work || tick_nohz_tick_stopped())
+               arch_irq_work_raise();
 }

 /* Enqueue the irq work @work on the current CPU */
@@ -110,15 +143,27 @@ bool irq_work_queue_on(struct irq_work *work, int cpu)
                /* Arch remote IPI send/receive backend aren't NMI safe */
                WARN_ON_ONCE(in_nmi());

-               if (IS_ENABLED(CONFIG_PREEMPT_RT) && !(atomic_read(&work->flags) & IRQ_WORK_HARD_IRQ)) {
-                       if (llist_add(&work->llnode, &per_cpu(lazy_list, cpu)))
-                               arch_send_call_function_single_ipi(cpu);
-               } else {
-                       __smp_call_single_queue(cpu, &work->llnode);
+               /*
+                * On PREEMPT_RT the items which are not marked as
+                * IRQ_WORK_HARD_IRQ are added to the lazy list and a HARD work
+                * item is used on the remote CPU to wake the thread.
+                */
+               if (IS_ENABLED(CONFIG_PREEMPT_RT) &&
+                   !(atomic_read(&work->flags) & IRQ_WORK_HARD_IRQ)) {
+
+                       if (!llist_add(&work->llnode, &per_cpu(lazy_list, cpu)))
+                               goto out;
+
+                       work = &per_cpu(irq_work_wakeup, cpu);
+                       if (!irq_work_claim(work))
+                               goto out;
                }
+
+               __smp_call_single_queue(cpu, &work->llnode);
        } else {
                __irq_work_queue_local(work);
        }
+out:
        preempt_enable();

        return true;
@@ -175,12 +220,13 @@ static void irq_work_run_list(struct llist_head *list)
        struct irq_work *work, *tmp;
        struct llist_node *llnode;

-#ifndef CONFIG_PREEMPT_RT
        /*
-        * nort: On RT IRQ-work may run in SOFTIRQ context.
+        * On PREEMPT_RT IRQ-work which is not marked as HARD will be processed
+        * in a per-CPU thread in preemptible context. Only the items which are
+        * marked as IRQ_WORK_HARD_IRQ will be processed in hardirq context.
         */
-       BUG_ON(!irqs_disabled());
-#endif
+       BUG_ON(!irqs_disabled() && !IS_ENABLED(CONFIG_PREEMPT_RT));
+
        if (llist_empty(list))
                return;

@@ -196,16 +242,10 @@ static void irq_work_run_list(struct llist_head *list)
 void irq_work_run(void)
 {
        irq_work_run_list(this_cpu_ptr(&raised_list));
-       if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
-               /*
-                * NOTE: we raise softirq via IPI for safety,
-                * and execute in irq_work_tick() to move the
-                * overhead from hard to soft irq context.
-                */
-               if (!llist_empty(this_cpu_ptr(&lazy_list)))
-                       raise_softirq(TIMER_SOFTIRQ);
-       } else
+       if (!IS_ENABLED(CONFIG_PREEMPT_RT))
                irq_work_run_list(this_cpu_ptr(&lazy_list));
+       else
+               wake_irq_workd();
 }
 EXPORT_SYMBOL_GPL(irq_work_run);

@@ -218,15 +258,10 @@ void irq_work_tick(void)
        if (!IS_ENABLED(CONFIG_PREEMPT_RT))
                irq_work_run_list(this_cpu_ptr(&lazy_list));
+       else
+               wake_irq_workd();
 }

-#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT)
-void irq_work_tick_soft(void)
-{
-       irq_work_run_list(this_cpu_ptr(&lazy_list));
-}
-#endif
-
 /*
  * Synchronize against the irq_work @entry, ensures the entry is not
  * currently in use.
@@ -246,3 +281,29 @@ void irq_work_sync(struct irq_work *work)
                cpu_relax();
 }
 EXPORT_SYMBOL_GPL(irq_work_sync);
+
+static void run_irq_workd(unsigned int cpu)
+{
+       irq_work_run_list(this_cpu_ptr(&lazy_list));
+}
+
+static void irq_workd_setup(unsigned int cpu)
+{
+       sched_set_fifo_low(current);
+}
+
+static struct smp_hotplug_thread irqwork_threads = {
+       .store = &irq_workd,
+       .setup = irq_workd_setup,
+       .thread_should_run = irq_workd_should_run,
+       .thread_fn = run_irq_workd,
+       .thread_comm = "irq_work/%u",
+};
+
+static __init int irq_work_init_threads(void)
+{
+       if (IS_ENABLED(CONFIG_PREEMPT_RT))
+               BUG_ON(smpboot_register_percpu_thread(&irqwork_threads));
+       return 0;
+}
+early_initcall(irq_work_init_threads);

diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index af3daf03c9177..cd67ee6d2634d 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1767,8 +1767,6 @@ static __latent_entropy void run_timer_softirq(struct softirq_action *h)
 {
        struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);

-       irq_work_tick_soft();
-
        __run_timers(base);
        if (IS_ENABLED(CONFIG_NO_HZ_COMMON))
                __run_timers(this_cpu_ptr(&timer_bases[BASE_DEF]));

From patchwork Wed Nov 24 16:12:21 2021
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 520090
From: Sebastian Andrzej Siewior
To: Steven Rostedt
Cc: linux-kernel@vger.kernel.org, linux-rt-users, Thomas Gleixner,
    Carsten Emde, John Kacur, Daniel Wagner, Tom Zanussi,
    "Srivatsa S . Bhat", Clark Williams, Maarten Lankhorst,
    bigeasy@linutronix.de
Subject: [PATCH RT 3/3] irq_work: Also rcuwait for !IRQ_WORK_HARD_IRQ on PREEMPT_RT
Date: Wed, 24 Nov 2021 17:12:21 +0100
Message-Id: <20211124161221.2224005-4-bigeasy@linutronix.de>
In-Reply-To: <20211124161221.2224005-1-bigeasy@linutronix.de>
References: <20211123103755.12d4b776@gandalf.local.home>
 <20211124161221.2224005-1-bigeasy@linutronix.de>

On PREEMPT_RT most irq_work items are processed as LAZY in preemptible
context rather than in hardirq context. Avoid spin-waiting for them, because
irq_work_sync() could run at a higher priority and thus prevent the irq-work
from ever completing.

Additionally wait with rcuwait for !IRQ_WORK_HARD_IRQ irq_work items on
PREEMPT_RT.

Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lkml.kernel.org/r/20211006111852.1514359-5-bigeasy@linutronix.de
---
 include/linux/irq_work.h | 5 +++++
 kernel/irq_work.c        | 6 ++++--
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
index f551ba9c99d40..2c0059340871d 100644
--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -55,6 +55,11 @@ static inline bool irq_work_is_busy(struct irq_work *work)
        return atomic_read(&work->flags) & IRQ_WORK_BUSY;
 }

+static inline bool irq_work_is_hard(struct irq_work *work)
+{
+       return atomic_read(&work->flags) & IRQ_WORK_HARD_IRQ;
+}
+
 bool irq_work_queue(struct irq_work *work);
 bool irq_work_queue_on(struct irq_work *work, int cpu);

diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 03d09d779ee12..cbec10c32eade 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -211,7 +211,8 @@ void irq_work_single(void *arg)
        flags &= ~IRQ_WORK_PENDING;
        (void)atomic_cmpxchg(&work->flags, flags, flags & ~IRQ_WORK_BUSY);

-       if (!arch_irq_work_has_interrupt())
+       if ((IS_ENABLED(CONFIG_PREEMPT_RT) && !irq_work_is_hard(work)) ||
+           !arch_irq_work_has_interrupt())
                rcuwait_wake_up(&work->irqwait);
 }

@@ -271,7 +272,8 @@ void irq_work_sync(struct irq_work *work)
        lockdep_assert_irqs_enabled();
        might_sleep();

-       if (!arch_irq_work_has_interrupt()) {
+       if ((IS_ENABLED(CONFIG_PREEMPT_RT) && !irq_work_is_hard(work)) ||
+           !arch_irq_work_has_interrupt()) {
                rcuwait_wait_event(&work->irqwait, !irq_work_is_busy(work),
                                   TASK_UNINTERRUPTIBLE);
                return;