From patchwork Thu Jun 21 12:13:20 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 139538
From: Mark Rutland <mark.rutland@arm.com>
To: mingo@kernel.org, will.deacon@arm.com, peterz@infradead.org,
	linux-kernel@vger.kernel.org
Cc: Mark Rutland <mark.rutland@arm.com>, Boqun Feng
Subject: [PATCHv4 17/18] atomics/treewide: make conditional inc/dec ops optional
Date: Thu, 21 Jun 2018 13:13:20 +0100
Message-Id: <20180621121321.4761-18-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180621121321.4761-1-mark.rutland@arm.com>
References: <20180621121321.4761-1-mark.rutland@arm.com>
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

The conditional inc/dec ops differ for atomic_t and atomic64_t:

* atomic_inc_unless_negative() is optional for atomic_t, and doesn't
  exist for atomic64_t.

* atomic_dec_unless_positive() is optional for atomic_t, and doesn't
  exist for atomic64_t.

* atomic_dec_if_positive() is optional for atomic_t, and is mandatory
  for atomic64_t.

Let's make these consistently optional for both. At the same time, let's
clean up the existing fallbacks to use atomic_try_cmpxchg().

The instrumented atomics are updated accordingly.

There should be no functional change as a result of this patch.
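
As an illustrative sketch (not part of the diff below), the
atomic_try_cmpxchg() idiom used for the new fallbacks folds the
compare-and-retry into a single call: on failure, try_cmpxchg() writes
the value it observed back into the expected-value argument, so the
loop no longer needs to re-read and compare by hand:

	/* old style: compare the cmpxchg() return value manually */
	int c = atomic_read(v);
	for (;;) {
		int old = atomic_cmpxchg(v, c, c + 1);
		if (old == c)
			break;
		c = old;
	}

	/* new style: c is refreshed by a failed try_cmpxchg() */
	int c = atomic_read(v);
	do {
		/* (bail out here based on c, as the fallbacks do) */
	} while (!atomic_try_cmpxchg(v, &c, c + 1));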
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: Boqun Feng
---
 arch/alpha/include/asm/atomic.h           |  1 +
 arch/arc/include/asm/atomic.h             |  1 +
 arch/arm/include/asm/atomic.h             |  1 +
 arch/arm64/include/asm/atomic.h           |  2 +
 arch/ia64/include/asm/atomic.h            | 16 -----
 arch/parisc/include/asm/atomic.h          | 23 --------
 arch/powerpc/include/asm/atomic.h         |  1 +
 arch/s390/include/asm/atomic.h            | 17 ------
 arch/sparc/include/asm/atomic_64.h        |  1 +
 arch/x86/include/asm/atomic64_32.h        |  1 +
 arch/x86/include/asm/atomic64_64.h        | 18 ------
 include/asm-generic/atomic-instrumented.h |  3 +
 include/asm-generic/atomic64.h            |  1 +
 include/linux/atomic.h                    | 97 +++++++++++++++++++++++--------
 14 files changed, 85 insertions(+), 98 deletions(-)

-- 
2.11.0

diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index f6410cb68058..4a6a8f58c9c9 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -296,5 +296,6 @@ static inline long atomic64_dec_if_positive(atomic64_t *v)
 	smp_mb();
 	return old - 1;
 }
+#define atomic64_dec_if_positive atomic64_dec_if_positive
 
 #endif /* _ALPHA_ATOMIC_H */
diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 27b95a928c1e..8f64f3b79b8a 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -517,6 +517,7 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
 
 	return val;
 }
+#define atomic64_dec_if_positive atomic64_dec_if_positive
 
 /**
  * atomic64_fetch_add_unless - add unless the number is a given value
diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index 5a58d061d3d2..884c241424fe 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -474,6 +474,7 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
 
 	return result;
 }
+#define atomic64_dec_if_positive atomic64_dec_if_positive
 
 static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a,
 						  long long u)
diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h
index 078f785cd97f..9bca54dda75c 100644
--- a/arch/arm64/include/asm/atomic.h
+++ b/arch/arm64/include/asm/atomic.h
@@ -159,5 +159,7 @@
 
 #define atomic64_andnot			atomic64_andnot
 
+#define atomic64_dec_if_positive	atomic64_dec_if_positive
+
 #endif
 #endif
diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h
index 46a15a974bed..206530d0751b 100644
--- a/arch/ia64/include/asm/atomic.h
+++ b/arch/ia64/include/asm/atomic.h
@@ -215,22 +215,6 @@ ATOMIC64_FETCH_OP(xor, ^)
 	(cmpxchg(&((v)->counter), old, new))
 #define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
 
-static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
-{
-	long c, old, dec;
-	c = atomic64_read(v);
-	for (;;) {
-		dec = c - 1;
-		if (unlikely(dec < 0))
-			break;
-		old = atomic64_cmpxchg((v), c, dec);
-		if (likely(old == c))
-			break;
-		c = old;
-	}
-	return dec;
-}
-
 #define atomic_add(i,v)			(void)atomic_add_return((i), (v))
 #define atomic_sub(i,v)			(void)atomic_sub_return((i), (v))
diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
index 10bc490327c1..118953d41763 100644
--- a/arch/parisc/include/asm/atomic.h
+++ b/arch/parisc/include/asm/atomic.h
@@ -223,29 +223,6 @@ atomic64_read(const atomic64_t *v)
 	((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n)))
 #define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
 
-/*
- * atomic64_dec_if_positive - decrement by 1 if old value positive
- * @v: pointer of type atomic_t
- *
- * The function returns the old value of *v minus 1, even if
- * the atomic variable, v, was not decremented.
- */
-static inline long atomic64_dec_if_positive(atomic64_t *v)
-{
-	long c, old, dec;
-	c = atomic64_read(v);
-	for (;;) {
-		dec = c - 1;
-		if (unlikely(dec < 0))
-			break;
-		old = atomic64_cmpxchg((v), c, dec);
-		if (likely(old == c))
-			break;
-		c = old;
-	}
-	return dec;
-}
-
 #endif /* !CONFIG_64BIT */
diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
index ebaefdee4a57..a0156cb43d1f 100644
--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -488,6 +488,7 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
 
 	return t;
 }
+#define atomic64_dec_if_positive atomic64_dec_if_positive
 
 #define atomic64_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n)))
 #define atomic64_cmpxchg_relaxed(v, o, n) \
diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h
index 376e64af951f..fd20ab5d4cf7 100644
--- a/arch/s390/include/asm/atomic.h
+++ b/arch/s390/include/asm/atomic.h
@@ -145,23 +145,6 @@ ATOMIC64_OPS(xor)
 
 #undef ATOMIC64_OPS
 
-static inline long atomic64_dec_if_positive(atomic64_t *v)
-{
-	long c, old, dec;
-
-	c = atomic64_read(v);
-	for (;;) {
-		dec = c - 1;
-		if (unlikely(dec < 0))
-			break;
-		old = atomic64_cmpxchg((v), c, dec);
-		if (likely(old == c))
-			break;
-		c = old;
-	}
-	return dec;
-}
-
 #define atomic64_sub_return(_i, _v)	atomic64_add_return(-(long)(_i), _v)
 #define atomic64_fetch_sub(_i, _v)	atomic64_fetch_add(-(long)(_i), _v)
 #define atomic64_sub(_i, _v)		atomic64_add(-(long)(_i), _v)
diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index 304865c7cdbb..6963482c81d8 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -62,5 +62,6 @@ static inline int atomic_xchg(atomic_t *v, int new)
 #define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
 
 long atomic64_dec_if_positive(atomic64_t *v);
+#define atomic64_dec_if_positive atomic64_dec_if_positive
 
 #endif /* !(__ARCH_SPARC64_ATOMIC__) */
diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 472c7af0ed48..ef959f02d070 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -254,6 +254,7 @@ static inline int arch_atomic64_inc_not_zero(atomic64_t *v)
 	return r;
 }
 
+#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
 static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
 {
 	long long r;
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index 1b282272a801..849f1c566a11 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -191,24 +191,6 @@ static inline long arch_atomic64_xchg(atomic64_t *v, long new)
 	return xchg(&v->counter, new);
 }
 
-/*
- * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
- * @v: pointer of type atomic_t
- *
- * The function returns the old value of *v minus 1, even if
- * the atomic variable, v, was not decremented.
- */
-static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
-{
-	s64 dec, c = arch_atomic64_read(v);
-	do {
-		dec = c - 1;
-		if (unlikely(dec < 0))
-			break;
-	} while (!arch_atomic64_try_cmpxchg(v, &c, dec));
-	return dec;
-}
-
 static inline void arch_atomic64_and(long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "andq %1,%0"
diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
index 12f9634750d7..3c64e95d5ed0 100644
--- a/include/asm-generic/atomic-instrumented.h
+++ b/include/asm-generic/atomic-instrumented.h
@@ -243,11 +243,14 @@ static __always_inline bool atomic64_inc_not_zero(atomic64_t *v)
 }
 #endif
 
+#ifdef arch_atomic64_dec_if_positive
+#define atomic64_dec_if_positive atomic64_dec_if_positive
 static __always_inline s64 atomic64_dec_if_positive(atomic64_t *v)
 {
 	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_dec_if_positive(v);
 }
+#endif
 
 #ifdef arch_atomic_dec_and_test
 #define atomic_dec_and_test atomic_dec_and_test
diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h
index 242b79ae0b57..97b28b7f1f29 100644
--- a/include/asm-generic/atomic64.h
+++ b/include/asm-generic/atomic64.h
@@ -51,6 +51,7 @@ ATOMIC64_OPS(xor)
 #undef ATOMIC64_OP
 
 extern long long atomic64_dec_if_positive(atomic64_t *v);
+#define atomic64_dec_if_positive atomic64_dec_if_positive
 extern long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n);
 extern long long atomic64_xchg(atomic64_t *v, long long new);
 extern long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u);
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 24f345df7ba6..93fe5b4041e1 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -683,28 +683,30 @@ static inline int atomic_fetch_andnot_release(int i, atomic_t *v)
 #endif
 
 #ifndef atomic_inc_unless_negative
-static inline bool atomic_inc_unless_negative(atomic_t *p)
+static inline bool atomic_inc_unless_negative(atomic_t *v)
 {
-	int v, v1;
-	for (v = 0; v >= 0; v = v1) {
-		v1 = atomic_cmpxchg(p, v, v + 1);
-		if (likely(v1 == v))
-			return true;
-	}
-	return false;
+	int c = atomic_read(v);
+
+	do {
+		if (unlikely(c < 0))
+			return false;
+	} while (!atomic_try_cmpxchg(v, &c, c + 1));
+
+	return true;
 }
 #endif
 
 #ifndef atomic_dec_unless_positive
-static inline bool atomic_dec_unless_positive(atomic_t *p)
+static inline bool atomic_dec_unless_positive(atomic_t *v)
 {
-	int v, v1;
-	for (v = 0; v <= 0; v = v1) {
-		v1 = atomic_cmpxchg(p, v, v - 1);
-		if (likely(v1 == v))
-			return true;
-	}
-	return false;
+	int c = atomic_read(v);
+
+	do {
+		if (unlikely(c > 0))
+			return false;
+	} while (!atomic_try_cmpxchg(v, &c, c - 1));
+
+	return true;
 }
 #endif
 
@@ -718,17 +720,14 @@ static inline bool atomic_dec_unless_positive(atomic_t *p)
 #ifndef atomic_dec_if_positive
 static inline int atomic_dec_if_positive(atomic_t *v)
 {
-	int c, old, dec;
-	c = atomic_read(v);
-	for (;;) {
+	int dec, c = atomic_read(v);
+
+	do {
 		dec = c - 1;
 		if (unlikely(dec < 0))
 			break;
-		old = atomic_cmpxchg((v), c, dec);
-		if (likely(old == c))
-			break;
-		c = old;
-	}
+	} while (!atomic_try_cmpxchg(v, &c, dec));
+
 	return dec;
 }
 #endif
@@ -1290,6 +1289,56 @@ static inline long long atomic64_fetch_andnot_release(long long i, atomic64_t *v)
 }
 #endif
 
+#ifndef atomic64_inc_unless_negative
+static inline bool atomic64_inc_unless_negative(atomic64_t *v)
+{
+	long long c = atomic64_read(v);
+
+	do {
+		if (unlikely(c < 0))
+			return false;
+	} while (!atomic64_try_cmpxchg(v, &c, c + 1));
+
+	return true;
+}
+#endif
+
+#ifndef atomic64_dec_unless_positive
+static inline bool atomic64_dec_unless_positive(atomic64_t *v)
+{
+	long long c = atomic64_read(v);
+
+	do {
+		if (unlikely(c > 0))
+			return false;
+	} while (!atomic64_try_cmpxchg(v, &c, c - 1));
+
+	return true;
+}
+#endif
+
+/*
+ * atomic64_dec_if_positive - decrement by 1 if old value positive
+ * @v: pointer of type atomic64_t
+ *
+ * The function returns the old value of *v minus 1, even if
+ * the atomic64 variable, v, was not decremented.
+ */
+#ifndef atomic64_dec_if_positive
+static inline long long atomic64_dec_if_positive(atomic64_t *v)
+{
+	long long dec, c = atomic64_read(v);
+
+	do {
+		dec = c - 1;
+		if (unlikely(dec < 0))
+			break;
+	} while (!atomic64_try_cmpxchg(v, &c, dec));
+
+	return dec;
+}
+#endif
+
 #define atomic64_cond_read_relaxed(v, c)	smp_cond_load_relaxed(&(v)->counter, (c))
 #define atomic64_cond_read_acquire(v, c)	smp_cond_load_acquire(&(v)->counter, (c))
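
For reference, a minimal usage sketch of the generic fallback above;
the function and variable names here are hypothetical, invented purely
for illustration:

	/* Hypothetical example: take one slot from a counting resource. */
	static bool example_get_slot(atomic64_t *slots)
	{
		/*
		 * atomic64_dec_if_positive() returns the old value minus 1;
		 * a negative result means no decrement took place, i.e. no
		 * slot was available.
		 */
		return atomic64_dec_if_positive(slots) >= 0;
	}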