From patchwork Thu Dec 13 14:09:17 2018
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 153643
From: Sebastian Andrzej Siewior
To: stable@vger.kernel.org
Cc: Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner,
    Linus Torvalds, Ingo Molnar, Sebastian Andrzej Siewior
Subject: [PATCH STABLE v4.9 02/10] locking/qspinlock: Ensure node is initialised before updating prev->next
Date: Thu, 13 Dec 2018 15:09:17 +0100
Message-Id: <20181213140925.6179-3-bigeasy@linutronix.de>
In-Reply-To: <20181213140925.6179-1-bigeasy@linutronix.de>
References: <20181213140925.6179-1-bigeasy@linutronix.de>
X-Mailer: git-send-email 2.20.0
X-Mailing-List: stable@vger.kernel.org

From: Will Deacon

commit 95bcade33a8af38755c9b0636e36a36ad3789fe6 upstream.
When a locker ends up queuing on the qspinlock locking slowpath, we
initialise the relevant MCS node and publish it indirectly by updating
the tail portion of the lock word using xchg_tail(). If we find that
there was a pre-existing locker in the queue, we subsequently update
their ->next field to point at our node so that we are notified when
it's our turn to take the lock. This can be roughly illustrated as
follows:

  /* Initialise the fields in node and encode a pointer to node in tail */
  tail = initialise_node(node);

  /*
   * Exchange tail into the lockword using an atomic read-modify-write
   * operation with release semantics
   */
  old = xchg_tail(lock, tail);

  /* If there was a pre-existing waiter ... */
  if (old & _Q_TAIL_MASK) {
          prev = decode_tail(old);
          smp_read_barrier_depends();

          /* ... then update their ->next field to point to node. */
          WRITE_ONCE(prev->next, node);
  }

The conditional update of prev->next therefore relies on the address
dependency from the result of xchg_tail() to ensure order against the
prior initialisation of node. However, since the release semantics of
the xchg_tail() operation apply only to the write portion of the RmW,
this ordering is not guaranteed and it is possible for the CPU to
return old before the writes to node have been published, consequently
allowing us to point prev->next at an uninitialised node.

This patch fixes the problem by making the update of prev->next a
RELEASE operation, which also removes the reliance on dependency
ordering.
Signed-off-by: Will Deacon
Acked-by: Peter Zijlstra (Intel)
Cc: Linus Torvalds
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/1518528177-19169-2-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/locking/qspinlock.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

-- 
2.20.0

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 8710fbe8d26c0..6fce84401dba1 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -532,14 +532,15 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 */
 	if (old & _Q_TAIL_MASK) {
 		prev = decode_tail(old);
-		/*
-		 * The above xchg_tail() is also a load of @lock which
-		 * generates, through decode_tail(), a pointer. The address
-		 * dependency matches the RELEASE of xchg_tail() such that
-		 * the subsequent access to @prev happens after.
-		 */
-		WRITE_ONCE(prev->next, node);
+
+		/*
+		 * We must ensure that the stores to @node are observed before
+		 * the write to prev->next. The address dependency from
+		 * xchg_tail is not sufficient to ensure this because the read
+		 * component of xchg_tail is unordered with respect to the
+		 * initialisation of @node.
+		 */
+		smp_store_release(&prev->next, node);

 		pv_wait_node(node, prev);
 		arch_mcs_spin_lock_contended(&node->locked);