From patchwork Wed Apr 11 18:01:11 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 133159
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
	mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com,
	longman@redhat.com, Will Deacon
Subject: [PATCH v2 04/13] locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath
Date: Wed, 11 Apr 2018 19:01:11 +0100
Message-Id: <1523469680-17699-5-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1523469680-17699-1-git-send-email-will.deacon@arm.com>
References: <1523469680-17699-1-git-send-email-will.deacon@arm.com>

The qspinlock locking slowpath utilises a "pending" bit as a simple form of
an embedded test-and-set lock that can avoid the overhead of explicit queuing
in cases where the lock is held but uncontended. This bit is managed using a
cmpxchg loop which tries to transition the uncontended lock word from
(0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).

Unfortunately, the cmpxchg loop is unbounded and lockers can be starved
indefinitely if the lock word is seen to oscillate between unlocked (0,0,0)
and locked (0,0,1). This could happen if concurrent lockers are able to take
the lock in the cmpxchg loop without queuing and pass it around amongst
themselves.

This patch fixes the problem by unconditionally setting _Q_PENDING_VAL using
atomic_fetch_or, and then inspecting the old value to see whether we need to
spin on the current lock owner, or whether we now effectively hold the lock.
The tricky scenario is when concurrent lockers end up queuing on the lock and
the lock becomes available, causing us to see a lockword of (n,0,0). With
pending now set, simply queuing could lead to deadlock as the head of the
queue may not have observed the pending flag being cleared. Conversely, if
the head of the queue did observe pending being cleared, then it could
transition the lock from (n,0,0) -> (0,0,1) meaning that any attempt to
"undo" our setting of the pending bit could race with a concurrent locker
trying to set it.
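As an aside, the triples used above and below are (tail, pending, locked)
views of the 32-bit lock word. A tiny illustrative decoder, assuming the
common _Q_PENDING_BITS == 8 layout from qspinlock_types.h (not part of the
patch itself):

#include <stdint.h>
#include <stdio.h>

#define Q_LOCKED_MASK	0x000000ffu	/* l: owner holds the lock       */
#define Q_PENDING_MASK	0x0000ff00u	/* p: pending waiter is spinning */
					/* n: bits 16-31 encode the tail */

/* Print a lock value in the (n,p,l) form used in the changelog. */
static void print_triple(uint32_t val)
{
	printf("(%u,%u,%u)\n", val >> 16,
	       !!(val & Q_PENDING_MASK), !!(val & Q_LOCKED_MASK));
}

int main(void)
{
	print_triple(0x00000001u);	/* (0,0,1): locked, uncontended        */
	print_triple(0x00000101u);	/* (0,1,1): locked plus pending waiter */
	print_triple(0x00040000u);	/* (n,0,0): queued waiters, lock free  */
	return 0;
}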
We handle this race by preserving the pending bit when taking the lock after
reaching the head of the queue and leaving the tail entry intact if we saw
pending set, because we know that the tail is going to be updated shortly.

Cc: Peter Zijlstra
Cc: Ingo Molnar
Signed-off-by: Will Deacon
---
 kernel/locking/qspinlock.c | 102 ++++++++++++++++++++++++++-------------------
 1 file changed, 58 insertions(+), 44 deletions(-)

-- 
2.1.4

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 396701e8c62d..a8fc402b3f3a 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -162,6 +162,17 @@ struct __qspinlock {
 
 #if _Q_PENDING_BITS == 8
 /**
+ * clear_pending - clear the pending bit.
+ * @lock: Pointer to queued spinlock structure
+ *
+ * *,1,* -> *,0,*
+ */
+static __always_inline void clear_pending(struct qspinlock *lock)
+{
+	WRITE_ONCE(lock->pending, 0);
+}
+
+/**
  * clear_pending_set_locked - take ownership and clear the pending bit.
  * @lock: Pointer to queued spinlock structure
  *
@@ -201,6 +212,17 @@ static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
 #else /* _Q_PENDING_BITS == 8 */
 
 /**
+ * clear_pending - clear the pending bit.
+ * @lock: Pointer to queued spinlock structure
+ *
+ * *,1,* -> *,0,*
+ */
+static __always_inline void clear_pending(struct qspinlock *lock)
+{
+	atomic_andnot(_Q_PENDING_VAL, &lock->val);
+}
+
+/**
  * clear_pending_set_locked - take ownership and clear the pending bit.
  * @lock: Pointer to queued spinlock structure
  *
@@ -306,7 +328,7 @@ static __always_inline u32 __pv_wait_head_or_lock(struct qspinlock *lock,
 void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 {
 	struct mcs_spinlock *prev, *next, *node;
-	u32 new, old, tail;
+	u32 old, tail;
 	int idx;
 
 	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
@@ -330,58 +352,50 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	}
 
 	/*
+	 * If we observe any contention; queue.
+	 */
+	if (val & ~_Q_LOCKED_MASK)
+		goto queue;
+
+	/*
 	 * trylock || pending
 	 *
 	 * 0,0,0 -> 0,0,1 ; trylock
 	 * 0,0,1 -> 0,1,1 ; pending
 	 */
-	for (;;) {
+	val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
+	if (!(val & ~_Q_LOCKED_MASK)) {
 		/*
-		 * If we observe any contention; queue.
+		 * we're pending, wait for the owner to go away.
+		 *
+		 * *,1,1 -> *,1,0
+		 *
+		 * this wait loop must be a load-acquire such that we match the
+		 * store-release that clears the locked bit and create lock
+		 * sequentiality; this is because not all
+		 * clear_pending_set_locked() implementations imply full
+		 * barriers.
 		 */
-		if (val & ~_Q_LOCKED_MASK)
-			goto queue;
-
-		new = _Q_LOCKED_VAL;
-		if (val == new)
-			new |= _Q_PENDING_VAL;
+		if (val & _Q_LOCKED_MASK) {
+			smp_cond_load_acquire(&lock->val.counter,
+					      !(VAL & _Q_LOCKED_MASK));
+		}
 
 		/*
-		 * Acquire semantic is required here as the function may
-		 * return immediately if the lock was free.
+		 * take ownership and clear the pending bit.
+		 *
+		 * *,1,0 -> *,0,1
 		 */
-		old = atomic_cmpxchg_acquire(&lock->val, val, new);
-		if (old == val)
-			break;
-
-		val = old;
-	}
-
-	/*
-	 * we won the trylock
-	 */
-	if (new == _Q_LOCKED_VAL)
+		clear_pending_set_locked(lock);
 		return;
+	}
 
 	/*
-	 * we're pending, wait for the owner to go away.
-	 *
-	 * *,1,1 -> *,1,0
-	 *
-	 * this wait loop must be a load-acquire such that we match the
-	 * store-release that clears the locked bit and create lock
-	 * sequentiality; this is because not all clear_pending_set_locked()
-	 * implementations imply full barriers.
-	 */
-	smp_cond_load_acquire(&lock->val.counter, !(VAL & _Q_LOCKED_MASK));
-
-	/*
-	 * take ownership and clear the pending bit.
-	 *
-	 * *,1,0 -> *,0,1
+	 * If pending was clear but there are waiters in the queue, then
+	 * we need to undo our setting of pending before we queue ourselves.
 	 */
-	clear_pending_set_locked(lock);
-	return;
+	if (!(val & _Q_PENDING_MASK))
+		clear_pending(lock);
 
 	/*
 	 * End of pending bit optimistic spinning and beginning of MCS
@@ -485,15 +499,15 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * claim the lock:
 	 *
 	 * n,0,0 -> 0,0,1 : lock, uncontended
-	 * *,0,0 -> *,0,1 : lock, contended
+	 * *,*,0 -> *,*,1 : lock, contended
 	 *
-	 * If the queue head is the only one in the queue (lock value == tail),
-	 * clear the tail code and grab the lock. Otherwise, we only need
-	 * to grab the lock.
+	 * If the queue head is the only one in the queue (lock value == tail)
+	 * and nobody is pending, clear the tail code and grab the lock.
+	 * Otherwise, we only need to grab the lock.
 	 */
 	for (;;) {
 		/* In the PV case we might already have _Q_LOCKED_VAL set */
-		if ((val & _Q_TAIL_MASK) != tail) {
+		if ((val & _Q_TAIL_MASK) != tail || (val & _Q_PENDING_MASK)) {
 			set_locked(lock);
 			break;
 		}
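For anyone who wants to poke at the control flow outside the kernel, below is
a rough user-space model of the reworked pending logic. It is a sketch only:
C11 atomics and pthreads stand in for the kernel's atomic_*() helpers and
smp_cond_load_acquire(), the MCS queue is collapsed into a simple spin that
waits for both the locked and pending fields to clear, and every identifier
(lock_word, LOCKED_VAL, PENDING_VAL, queue_and_wait, ...) is invented for the
sketch rather than taken from the kernel sources.

/*
 * User-space sketch of the reworked pending-bit flow; illustrative only.
 * C11 atomics stand in for the kernel's atomic_*() helpers and the MCS
 * queue is reduced to a polite trylock spin.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define LOCKED_VAL	0x00000001u	/* (*,*,1): locked byte  */
#define LOCKED_MASK	0x000000ffu
#define PENDING_VAL	0x00000100u	/* (*,1,*): pending byte */

static _Atomic uint32_t lock_word;
static long counter;

/* Stand-in for MCS queuing: wait until neither locked nor pending is set. */
static void queue_and_wait(void)
{
	uint32_t val;

	for (;;) {
		val = atomic_load_explicit(&lock_word, memory_order_relaxed);
		if (!(val & (LOCKED_MASK | PENDING_VAL)) &&
		    atomic_compare_exchange_weak_explicit(&lock_word, &val,
				val | LOCKED_VAL, memory_order_acquire,
				memory_order_relaxed))
			return;
	}
}

static void lock_slowpath(uint32_t val)
{
	/* Any pending or tail bits already set? Go straight to the queue. */
	if (val & ~LOCKED_MASK) {
		queue_and_wait();
		return;
	}

	/*
	 * One bounded atomic instead of the old unbounded cmpxchg loop: set
	 * pending unconditionally, then inspect what was there beforehand.
	 */
	val = atomic_fetch_or_explicit(&lock_word, PENDING_VAL,
				       memory_order_acquire);
	if (!(val & ~LOCKED_MASK)) {
		/* We own the pending bit: wait for any owner to go away. */
		while (atomic_load_explicit(&lock_word,
					    memory_order_acquire) & LOCKED_MASK)
			;
		/* Then clear pending and set locked: *,1,0 -> *,0,1. */
		atomic_fetch_add_explicit(&lock_word, LOCKED_VAL - PENDING_VAL,
					  memory_order_relaxed);
		return;
	}

	/* Pending was already set or waiters are queued: undo if we set it. */
	if (!(val & PENDING_VAL))
		atomic_fetch_and_explicit(&lock_word, ~PENDING_VAL,
					  memory_order_relaxed);
	queue_and_wait();
}

static void lock(void)
{
	uint32_t val = 0;

	/* Fast path: 0,0,0 -> 0,0,1. */
	if (!atomic_compare_exchange_strong_explicit(&lock_word, &val,
			LOCKED_VAL, memory_order_acquire, memory_order_relaxed))
		lock_slowpath(val);
}

static void unlock(void)
{
	atomic_fetch_and_explicit(&lock_word, ~LOCKED_MASK,
				  memory_order_release);
}

static void *worker(void *arg)
{
	for (int i = 0; i < 100000; i++) {
		lock();
		counter++;
		unlock();
	}
	return arg;
}

int main(void)
{
	pthread_t t[4];

	for (int i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, worker, NULL);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	printf("counter = %ld (expect 400000)\n", counter);
	return 0;
}

Build with something like "cc -std=c11 -pthread". The point of interest is
that the slowpath performs exactly one fetch_or to set pending, so a locker
can no longer spin indefinitely in a cmpxchg loop while the lock is passed
around behind its back.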