From patchwork Thu Apr 5 16:59:03 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Will Deacon <will.deacon@arm.com>
X-Patchwork-Id: 132873
From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
	mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com,
	catalin.marinas@arm.com, Will Deacon <will.deacon@arm.com>
Subject: [PATCH 06/10] barriers: Introduce smp_cond_load_relaxed and atomic_cond_read_relaxed
Date: Thu, 5 Apr 2018 17:59:03 +0100
Message-Id: <1522947547-24081-7-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
References: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Whilst we currently provide smp_cond_load_acquire and
atomic_cond_read_acquire, there are cases where the ACQUIRE semantics are
not required because of a subsequent fence or release operation once the
conditional loop has exited.

This patch adds relaxed versions of the conditional spinning primitives
to avoid unnecessary barrier overhead on architectures such as arm64.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 include/asm-generic/barrier.h | 27 +++++++++++++++++++++------
 include/linux/atomic.h        |  2 ++
 2 files changed, 23 insertions(+), 6 deletions(-)

-- 
2.1.4
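For illustration, here is a minimal, hypothetical sketch of the pattern the
changelog has in mind: the waiter spins with no ordering guarantees and a
later fence supplies the ordering that an ACQUIRE on the spin would otherwise
provide. The flag/data variables, the smp_rmb() placement and the surrounding
functions are assumptions made for this example; they are not code from this
series.

```c
/*
 * Hypothetical kernel-context example (not part of this patch): classic
 * message passing where the receiver spins on a flag with relaxed
 * semantics and relies on a subsequent fence for ordering.
 */
#include <linux/compiler.h>	/* READ_ONCE(), WRITE_ONCE() */
#include <asm/barrier.h>	/* smp_cond_load_relaxed(), smp_rmb(), smp_wmb() */

static int flag;
static int data;

/* Writer: publish data, then set the flag, with a write barrier in between. */
static void publish(int value)
{
	WRITE_ONCE(data, value);
	smp_wmb();
	WRITE_ONCE(flag, 1);
}

/* Reader: spin until the flag is set; the spin itself needs no ACQUIRE. */
static int consume(void)
{
	/* Spin with plain READ_ONCE() semantics until flag becomes non-zero. */
	smp_cond_load_relaxed(&flag, VAL != 0);

	/*
	 * The read barrier below already orders the flag load against the
	 * data load, so smp_cond_load_acquire() above would only add a
	 * redundant barrier on architectures such as arm64.
	 */
	smp_rmb();

	return READ_ONCE(data);
}
```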
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index fe297b599b0a..305e03b19a26 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -221,18 +221,17 @@ do {									\
 #endif
 
 /**
- * smp_cond_load_acquire() - (Spin) wait for cond with ACQUIRE ordering
+ * smp_cond_load_relaxed() - (Spin) wait for cond with no ordering guarantees
  * @ptr: pointer to the variable to wait on
  * @cond: boolean expression to wait for
  *
- * Equivalent to using smp_load_acquire() on the condition variable but employs
- * the control dependency of the wait to reduce the barrier on many platforms.
+ * Equivalent to using READ_ONCE() on the condition variable.
  *
  * Due to C lacking lambda expressions we load the value of *ptr into a
  * pre-named variable @VAL to be used in @cond.
  */
-#ifndef smp_cond_load_acquire
-#define smp_cond_load_acquire(ptr, cond_expr) ({		\
+#ifndef smp_cond_load_relaxed
+#define smp_cond_load_relaxed(ptr, cond_expr) ({		\
 	typeof(ptr) __PTR = (ptr);				\
 	typeof(*ptr) VAL;					\
 	for (;;) {						\
@@ -241,10 +240,26 @@ do {									\
 			break;					\
 		cpu_relax();					\
 	}							\
-	smp_acquire__after_ctrl_dep();				\
 	VAL;							\
 })
 #endif
 
+/**
+ * smp_cond_load_acquire() - (Spin) wait for cond with ACQUIRE ordering
+ * @ptr: pointer to the variable to wait on
+ * @cond: boolean expression to wait for
+ *
+ * Equivalent to using smp_load_acquire() on the condition variable but employs
+ * the control dependency of the wait to reduce the barrier on many platforms.
+ */
+#ifndef smp_cond_load_acquire
+#define smp_cond_load_acquire(ptr, cond_expr) ({		\
+	typeof(*ptr) _val;					\
+	_val = smp_cond_load_relaxed(ptr, cond_expr);		\
+	smp_acquire__after_ctrl_dep();				\
+	_val;							\
+})
+#endif
+
 #endif /* !__ASSEMBLY__ */
 #endif /* __ASM_GENERIC_BARRIER_H */
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 8b276fd9a127..01ce3997cb42 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -654,6 +654,7 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 }
 #endif
 
+#define atomic_cond_read_relaxed(v, c)	smp_cond_load_relaxed(&(v)->counter, (c))
 #define atomic_cond_read_acquire(v, c)	smp_cond_load_acquire(&(v)->counter, (c))
 
 #ifdef CONFIG_GENERIC_ATOMIC64
@@ -1075,6 +1076,7 @@ static inline long long atomic64_fetch_andnot_release(long long i, atomic64_t *v)
 }
 #endif
 
+#define atomic64_cond_read_relaxed(v, c)	smp_cond_load_relaxed(&(v)->counter, (c))
 #define atomic64_cond_read_acquire(v, c)	smp_cond_load_acquire(&(v)->counter, (c))
 
 #include
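A short usage note on the new wrappers, as an illustrative sketch rather than
code from this series: after this patch, the acquire form is simply the
relaxed spin followed by smp_acquire__after_ctrl_dep(), so a caller can use
either the combined helper or the split form. The 'ready' variable and the
function names below are made up for the example.

```c
/* Hypothetical kernel-context example; 'ready' and the helpers are illustrative. */
#include <linux/atomic.h>	/* atomic_cond_read_relaxed()/_acquire() */
#include <asm/barrier.h>	/* smp_acquire__after_ctrl_dep() */

static atomic_t ready = ATOMIC_INIT(0);

/* Combined form: the helper appends the ACQUIRE for us. */
static void wait_for_ready_acquire(void)
{
	atomic_cond_read_acquire(&ready, VAL != 0);
}

/*
 * Split form: spin with no ordering, then upgrade the control dependency
 * to ACQUIRE.  With this patch the two functions are equivalent, because
 * smp_cond_load_acquire() is now built from smp_cond_load_relaxed()
 * followed by smp_acquire__after_ctrl_dep().
 */
static void wait_for_ready_split(void)
{
	atomic_cond_read_relaxed(&ready, VAL != 0);
	smp_acquire__after_ctrl_dep();
}
```

The split form is what lets later patches in a series like this one place the
ACQUIRE (or rely on a subsequent fence or release) only where it is actually
needed, instead of paying for it on every iteration of the wait loop.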