From patchwork Tue Dec 18 17:13:53 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
X-Patchwork-Id: 154170
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
To: stable@vger.kernel.org
Cc: Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner,
    bigeasy@linutronix.de, Waiman Long, Linus Torvalds, boqun.feng@gmail.com,
    linux-arm-kernel@lists.infradead.org, paulmck@linux.vnet.ibm.com,
    Ingo Molnar
Subject: [PATCH STABLE v4.14 03/10] locking/qspinlock: Bound spinning on
 pending->locked transition in slowpath
Date: Tue, 18 Dec 2018 18:13:53 +0100
Message-Id: <20181218171400.22711-4-bigeasy@linutronix.de>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20181218171400.22711-1-bigeasy@linutronix.de>
References: <20181218171400.22711-1-bigeasy@linutronix.de>
MIME-Version: 1.0
Sender: stable-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Will Deacon <will.deacon@arm.com>

commit 6512276d97b160d90b53285bd06f7f201459a7e3 upstream.

If a locker taking the qspinlock slowpath reads a lock value indicating
that only the pending bit is set, then it will spin whilst the
concurrent pending->locked transition takes effect.

Unfortunately, there is no guarantee that such a transition will ever be
observed since concurrent lockers could continuously set pending and
hand over the lock amongst themselves, leading to starvation. Whilst
this would probably resolve in practice, it means that it is not
possible to prove liveness properties about the lock and means that
lock acquisition time is unbounded.

Rather than removing the pending->locked spinning from the slowpath
altogether (which has been shown to heavily penalise a 2-threaded
locking stress test on x86), this patch replaces the explicit spinning
with a call to atomic_cond_read_relaxed and allows the architecture to
provide a bound on the number of spins. For architectures that can
respond to changes in cacheline state in their smp_cond_load
implementation, it should be sufficient to use the default bound of 1.
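As an illustration only (not part of the kernel change: the helper name
pending_wait_bounded, the C11 atomics and the Q_PENDING_LOOPS constant
below are user-space stand-ins for the kernel primitives), a bounded
conditional spin of this shape looks like:

#include <stdatomic.h>

#define Q_PENDING_LOOPS	1	/* stand-in for the default _Q_PENDING_LOOPS bound */

/*
 * Re-read *lock until it leaves the pending-only state or the loop bound
 * is exhausted, then hand the last observed value back to the caller.
 * Because the wait is bounded, lockers that keep handing the lock over
 * amongst themselves cannot starve this thread indefinitely.
 */
static unsigned int pending_wait_bounded(_Atomic unsigned int *lock,
					 unsigned int pending_val)
{
	int cnt = Q_PENDING_LOOPS;
	unsigned int val;

	do {
		val = atomic_load_explicit(lock, memory_order_acquire);
	} while (val == pending_val && cnt--);

	return val;
}

With a bound of 1 the waiter re-reads the lock word at most twice in this
generic form before falling through to the rest of the slowpath.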
Suggested-by: Waiman Long
Signed-off-by: Will Deacon
Acked-by: Peter Zijlstra (Intel)
Acked-by: Waiman Long
Cc: Linus Torvalds
Cc: Thomas Gleixner
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-4-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/locking/qspinlock.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

-- 
2.20.1

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index d880296245c59..18161264227a3 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -76,6 +76,18 @@
 #define MAX_NODES	4
 #endif
 
+/*
+ * The pending bit spinning loop count.
+ * This heuristic is used to limit the number of lockword accesses
+ * made by atomic_cond_read_relaxed when waiting for the lock to
+ * transition out of the "== _Q_PENDING_VAL" state. We don't spin
+ * indefinitely because there's no guarantee that we'll make forward
+ * progress.
+ */
+#ifndef _Q_PENDING_LOOPS
+#define _Q_PENDING_LOOPS	1
+#endif
+
 /*
  * Per-CPU queue node structures; we can never have more than 4 nested
  * contexts: task, softirq, hardirq, nmi.
@@ -306,13 +318,15 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 		return;
 
 	/*
-	 * wait for in-progress pending->locked hand-overs
+	 * Wait for in-progress pending->locked hand-overs with a bounded
+	 * number of spins so that we guarantee forward progress.
 	 *
 	 * 0,1,0 -> 0,0,1
 	 */
 	if (val == _Q_PENDING_VAL) {
-		while ((val = atomic_read(&lock->val)) == _Q_PENDING_VAL)
-			cpu_relax();
+		int cnt = _Q_PENDING_LOOPS;
+		val = smp_cond_load_acquire(&lock->val.counter,
+					    (VAL != _Q_PENDING_VAL) || !cnt--);
 	}
 
 	/*
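For architectures whose smp_cond_load implementation already waits for the
lock cacheline to change, the default bound of 1 is enough. An architecture
relying on the plain polling fallback could, hypothetically, raise the bound
from its asm/qspinlock.h before the generic code picks the default; the
header below is a sketch for illustration only and is not part of this patch:

/* arch/<arch>/include/asm/qspinlock.h -- hypothetical override, illustration only */
#ifndef _ASM_ARCH_QSPINLOCK_H
#define _ASM_ARCH_QSPINLOCK_H

/*
 * Spin up to 512 times on the pending->locked hand-over before falling
 * through to the MCS queue; kernel/locking/qspinlock.c only supplies the
 * default of 1 when the architecture has not defined this macro already.
 */
#define _Q_PENDING_LOOPS	(1 << 9)

#include <asm-generic/qspinlock.h>

#endif /* _ASM_ARCH_QSPINLOCK_H */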