From patchwork Tue Nov 10 15:38:54 2020
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 323432
Message-ID: <20201110154024.958923729@goodmis.org>
User-Agent: quilt/0.66
Date: Tue, 10 Nov 2020 10:38:54 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior, John Kacur,
    Daniel Wagner, Tom Zanussi, "Srivatsa S. Bhat", Mike Galbraith,
    stable-rt@vger.kernel.org
Subject: [PATCH RT 1/5] net: Properly annotate the try-lock for the seqlock
References: <20201110153853.463368981@goodmis.org>
X-Mailing-List: linux-rt-users@vger.kernel.org

5.4.74-rt42-rc2 stable review patch. If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior

In the patch ("net/Qdisc: use a seqlock instead seqcount") the seqcount was
replaced with a seqlock to allow the reader to boost the preempted writer.

try_write_seqlock() acquired the lock with a try-lock, but the seqcount
annotation reported to lockdep was the unconditional "lock" variant.
Open-code write_seqcount_t_begin() and use the try-lock annotation for
lockdep instead.
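For reference, the open-coded sequence from the sch_generic.h hunk below can
be read as a stand-alone helper. This is only an illustrative sketch, not part
of the patch; the helper name is hypothetical and it assumes the 5.4-rt
seqlock_t layout with ->lock and ->seqcount members:

/*
 * Hypothetical helper (illustration only, not in the patch): take the writer
 * side of a seqlock_t with a try-lock and annotate the embedded seqcount for
 * lockdep as a trylock acquisition rather than a plain "lock".
 */
static inline bool qdisc_try_write_seqlock(seqlock_t *sl)
{
        if (!spin_trylock(&sl->lock))
                return false;

        /* Open-coded write_seqcount_t_begin() ... */
        __raw_write_seqcount_begin(&sl->seqcount);
        /*
         * ... but with the third argument of seqcount_acquire() (trylock)
         * set to 1, so lockdep knows this acquisition could not block and
         * must not be treated as a potential deadlock.
         */
        seqcount_acquire(&sl->seqcount.dep_map, 0, 1, _RET_IP_);
        return true;
}

In the patch itself the same sequence is open-coded directly in
qdisc_run_begin(), since try_write_seqlock() is removed from seqlock.h.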
Reported-by: Mike Galbraith
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Tom Zanussi
---
 include/linux/seqlock.h   |  9 ---------
 include/net/sch_generic.h | 10 +++++++++-
 2 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index e5207897c33e..f390293974ea 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -489,15 +489,6 @@ static inline void write_seqlock(seqlock_t *sl)
         __raw_write_seqcount_begin(&sl->seqcount);
 }
 
-static inline int try_write_seqlock(seqlock_t *sl)
-{
-        if (spin_trylock(&sl->lock)) {
-                __raw_write_seqcount_begin(&sl->seqcount);
-                return 1;
-        }
-        return 0;
-}
-
 static inline void write_sequnlock(seqlock_t *sl)
 {
         __raw_write_seqcount_end(&sl->seqcount);
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index e6afb4b9cede..112d2dca8b08 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -168,8 +168,16 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
                 return false;
         }
 #ifdef CONFIG_PREEMPT_RT
-        if (try_write_seqlock(&qdisc->running))
+        if (spin_trylock(&qdisc->running.lock)) {
+                seqcount_t *s = &qdisc->running.seqcount;
+                /*
+                 * Variant of write_seqcount_t_begin() telling lockdep that a
+                 * trylock was attempted.
+                 */
+                __raw_write_seqcount_begin(s);
+                seqcount_acquire(&s->dep_map, 0, 1, _RET_IP_);
                 return true;
+        }
         return false;
 #else
         /* Variant of write_seqcount_begin() telling lockdep a trylock

From patchwork Tue Nov 10 15:38:56 2020
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 323431
Message-ID: <20201110154025.328193929@goodmis.org>
User-Agent: quilt/0.66
Date: Tue, 10 Nov 2020 10:38:56 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior, John Kacur,
    Daniel Wagner, Tom Zanussi, "Srivatsa S. Bhat",
    "Luis Claudio R. Goncalves", Oleg Nesterov, stable-rt@vger.kernel.org
Subject: [PATCH RT 3/5] ptrace: fix ptrace_unfreeze_traced() race with rt-lock
References: <20201110153853.463368981@goodmis.org>
X-Mailing-List: linux-rt-users@vger.kernel.org

5.4.74-rt42-rc2 stable review patch. If anyone has any objections, please let me know.

------------------

From: Oleg Nesterov

The patch "ptrace: fix ptrace vs tasklist_lock race" changed
ptrace_freeze_traced() to take task->saved_state into account, but
ptrace_unfreeze_traced() has the same problem and needs a similar fix: it
should check/update both ->state and ->saved_state.

Reported-by: Luis Claudio R. Goncalves
Fixes: "ptrace: fix ptrace vs tasklist_lock race"
Signed-off-by: Oleg Nesterov
Signed-off-by: Sebastian Andrzej Siewior
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt (VMware)
---
 kernel/ptrace.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/kernel/ptrace.c b/kernel/ptrace.c
index 3075006d720e..3f7156f06b6c 100644
--- a/kernel/ptrace.c
+++ b/kernel/ptrace.c
@@ -197,8 +197,8 @@ static bool ptrace_freeze_traced(struct task_struct *task)
 
 static void ptrace_unfreeze_traced(struct task_struct *task)
 {
-        if (task->state != __TASK_TRACED)
-                return;
+        unsigned long flags;
+        bool frozen = true;
 
         WARN_ON(!task->ptrace || task->parent != current);
 
@@ -207,12 +207,19 @@ static void ptrace_unfreeze_traced(struct task_struct *task)
          * Recheck state under the lock to close this race.
          */
         spin_lock_irq(&task->sighand->siglock);
-        if (task->state == __TASK_TRACED) {
-                if (__fatal_signal_pending(task))
-                        wake_up_state(task, __TASK_TRACED);
-                else
-                        task->state = TASK_TRACED;
-        }
+
+        raw_spin_lock_irqsave(&task->pi_lock, flags);
+        if (task->state == __TASK_TRACED)
+                task->state = TASK_TRACED;
+        else if (task->saved_state == __TASK_TRACED)
+                task->saved_state = TASK_TRACED;
+        else
+                frozen = false;
+        raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+
+        if (frozen && __fatal_signal_pending(task))
+                wake_up_state(task, __TASK_TRACED);
+
         spin_unlock_irq(&task->sighand->siglock);
 }
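For context on why both fields matter: under PREEMPT_RT, a task that blocks on
a converted "sleeping" spinlock has its original ->state parked in
->saved_state (both protected by ->pi_lock), so a tracee frozen in
__TASK_TRACED may carry that value in either field. A minimal sketch of the
resulting fixup pattern, factored into a hypothetical helper (illustration
only, not part of the patch; it assumes the RT tree's task_struct::saved_state
field):

/*
 * Hypothetical helper (not in the patch): move a task from state @from to
 * @to, checking both ->state and, on PREEMPT_RT, ->saved_state where the
 * original state is parked while the task sleeps on an rtmutex-based lock.
 * Both fields are serialized by ->pi_lock. Returns true if either matched.
 */
static bool task_state_fixup(struct task_struct *task, long from, long to)
{
        unsigned long flags;
        bool matched = true;

        raw_spin_lock_irqsave(&task->pi_lock, flags);
        if (task->state == from)
                task->state = to;
        else if (task->saved_state == from)
                task->saved_state = to;
        else
                matched = false;
        raw_spin_unlock_irqrestore(&task->pi_lock, flags);

        return matched;
}

The updated ptrace_unfreeze_traced() above is effectively this pattern with
from = __TASK_TRACED and to = TASK_TRACED, followed by the fatal-signal
wakeup when a match was found.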