From patchwork Thu Apr  5 16:59:02 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 132871
Delivered-To: patch@linaro.org
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
	mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com,
	catalin.marinas@arm.com, Jason Low, Will Deacon
Subject: [PATCH 05/10] locking/mcs: Use smp_cond_load_acquire() in mcs spin loop
Date: Thu, 5 Apr 2018 17:59:02 +0100
Message-Id: <1522947547-24081-6-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
References: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

From: Jason Low

For qspinlocks on ARM64, we would like to use WFE instead of purely
spinning. Qspinlocks internally have lock contenders spin on an MCS
lock.

Update arch_mcs_spin_lock_contended() such that it uses the new
smp_cond_load_acquire() so that ARM64 can also override this spin
loop with its own implementation using WFE.

On x86, this can also be cheaper than spinning on smp_load_acquire().

Signed-off-by: Jason Low
Signed-off-by: Will Deacon
---
 kernel/locking/mcs_spinlock.h | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

-- 
2.1.4

diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
index f046b7ce9dd6..5e10153b4d3c 100644
--- a/kernel/locking/mcs_spinlock.h
+++ b/kernel/locking/mcs_spinlock.h
@@ -23,13 +23,15 @@ struct mcs_spinlock {
 
 #ifndef arch_mcs_spin_lock_contended
 /*
- * Using smp_load_acquire() provides a memory barrier that ensures
- * subsequent operations happen after the lock is acquired.
+ * Using smp_cond_load_acquire() provides the acquire semantics
+ * required so that subsequent operations happen after the
+ * lock is acquired. Additionally, some architectures such as
+ * ARM64 would like to do spin-waiting instead of purely
+ * spinning, and smp_cond_load_acquire() provides that behavior.
  */
 #define arch_mcs_spin_lock_contended(l)					\
 do {									\
-	while (!(smp_load_acquire(l)))					\
-		cpu_relax();						\
+	smp_cond_load_acquire(l, VAL);					\
 } while (0)
 #endif