From patchwork Tue Dec 18 17:13:59 2018
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 154167
From: Sebastian Andrzej Siewior
To: stable@vger.kernel.org
Cc: Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner,
 bigeasy@linutronix.de, Waiman Long, Linus Torvalds, boqun.feng@gmail.com,
 linux-arm-kernel@lists.infradead.org, paulmck@linux.vnet.ibm.com,
 Ingo Molnar
Subject: [PATCH STABLE v4.14 09/10] locking/qspinlock/x86: Increase _Q_PENDING_LOOPS upper bound
Date: Tue, 18 Dec 2018 18:13:59 +0100
Message-Id: <20181218171400.22711-10-bigeasy@linutronix.de>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20181218171400.22711-1-bigeasy@linutronix.de>
References: <20181218171400.22711-1-bigeasy@linutronix.de>

From: Will Deacon

commit b247be3fe89b6aba928bf80f4453d1c4ba8d2063 upstream.

On x86, atomic_cond_read_relaxed will busy-wait with a cpu_relax() loop,
so it is desirable to increase the number of times we spin on the
qspinlock lockword when it is found to be transitioning from pending to
locked.

According to Waiman Long:

| Ideally, the spinning times should be at least a few times the typical
| cacheline load time from memory which I think can be down to 100ns or
| so for each cacheline load with the newest systems or up to several
| hundreds ns for older systems.

which in his benchmarking corresponded to 512 iterations.

Suggested-by: Waiman Long
Signed-off-by: Will Deacon
Acked-by: Peter Zijlstra (Intel)
Acked-by: Waiman Long
Cc: Linus Torvalds
Cc: Thomas Gleixner
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-5-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sebastian Andrzej Siewior
---
 arch/x86/include/asm/qspinlock.h | 2 ++
 1 file changed, 2 insertions(+)

-- 
2.20.1

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index cf4cdf508ef42..2cb6624acaec6 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -6,6 +6,8 @@
 #include <asm-generic/qspinlock_types.h>
 #include <asm/paravirt.h>
 
+#define _Q_PENDING_LOOPS	(1 << 9)
+
 #define queued_spin_unlock queued_spin_unlock
 /**
  * queued_spin_unlock - release a queued spinlock
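
A note on where the new constant ends up being used: _Q_PENDING_LOOPS bounds
the relaxed re-read loop in queued_spin_lock_slowpath() (kernel/locking/qspinlock.c)
that waits for a pending->locked handover before the slowpath falls back to
queueing. The userspace C11 sketch below only approximates that bounded wait
under the same 1 << 9 bound; Q_PENDING_VAL, Q_PENDING_LOOPS and
wait_for_pending_to_locked() are names invented for the sketch (the kernel
packs the locked/pending/tail fields differently and spins via
atomic_cond_read_relaxed(), which on x86 busy-waits with cpu_relax()).

#include <stdatomic.h>
#include <stdio.h>

#define Q_PENDING_VAL	2U		/* illustrative encoding: "pending set, not yet locked" */
#define Q_PENDING_LOOPS	(1 << 9)	/* the bound this patch raises to 512 on x86 */

/*
 * Re-read the lock word with relaxed loads (a userspace stand-in for
 * atomic_cond_read_relaxed()) while it still reads as pending-but-not-locked.
 * Give up after Q_PENDING_LOOPS reads so a waiter behind a stalled handover
 * falls back to the queueing path instead of spinning forever.
 */
static unsigned int wait_for_pending_to_locked(_Atomic unsigned int *lockword)
{
	int cnt = Q_PENDING_LOOPS;
	unsigned int val;

	do {
		val = atomic_load_explicit(lockword, memory_order_relaxed);
	} while (val == Q_PENDING_VAL && --cnt);

	return val;
}

int main(void)
{
	/* Nothing ever completes the handover here, so the bound triggers. */
	_Atomic unsigned int lockword = Q_PENDING_VAL;
	unsigned int val = wait_for_pending_to_locked(&lockword);

	printf("bounded out after %d relaxed reads, lock word = %u\n",
	       Q_PENDING_LOOPS, val);
	return 0;
}

Bounding the spin at 512 reads keeps a waiter from spinning indefinitely
behind a stalled handover while still covering a few worst-case cacheline
load times, per the figures quoted in the commit message above.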