From patchwork Thu Jun 21 12:13:21 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 139537
From: Mark Rutland
To: mingo@kernel.org, will.deacon@arm.com, peterz@infradead.org,
	linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng
Subject: [PATCHv4 18/18] atomics/treewide: clean up andnot ifdeffery
Date: Thu, 21 Jun 2018 13:13:21 +0100
Message-Id: <20180621121321.4761-19-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180621121321.4761-1-mark.rutland@arm.com>
References: <20180621121321.4761-1-mark.rutland@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

The ifdeffery for atomic*_{fetch_,}andnot() is unlike that for all the
other atomics. If atomic*_andnot() is not defined, the corresponding
atomic*_fetch_andnot() is assumed not to be defined. Additionally, the
fallbacks for the various ordering cases are written much later in
atomic.h as static inlines.

This isn't problematic today, but gets in the way of scripting the
generation of atomics. To prepare for scripting, this patch:

* Switches to separate ifdefs for atomic*_andnot() and
  atomic*_fetch_andnot(), updating implementations as appropriate.

* Moves the fallbacks into the standard ifdefs, as macro expansions
  rather than static inlines.

* Removes trivial andnot implementations from architectures, where
  these are superseded by core code.

There should be no functional change as a result of this patch.
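As an aside, a minimal standalone userspace sketch (not kernel code) of
the fallback semantics described above, i.e. that andnot on a mask is
just and on the complemented mask. The fake_atomic_* names are
hypothetical stand-ins used purely for illustration; plain ints stand
in for atomic_t, and no atomicity or ordering is modelled.

	#include <assert.h>

	/*
	 * Hypothetical stand-in for atomic_fetch_and(): AND a mask into *v
	 * and return the old value.
	 */
	static int fake_atomic_fetch_and(int i, int *v)
	{
		int old = *v;
		*v &= i;
		return old;
	}

	/*
	 * Mirrors the shape of the fallback installed by this patch:
	 * fetch_andnot(i, v) expands to fetch_and(~i, v).
	 */
	#define fake_atomic_fetch_andnot(i, v)	fake_atomic_fetch_and(~(int)(i), (v))

	int main(void)
	{
		int v = 0xff;

		/* Clearing the low nibble: andnot(0x0f, v) == and(~0x0f, v). */
		int old = fake_atomic_fetch_andnot(0x0f, &v);

		assert(old == 0xff);
		assert(v == 0xf0);
		return 0;
	}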
Signed-off-by: Mark Rutland
Acked-by: Peter Zijlstra (Intel)
Reviewed-by: Will Deacon
Cc: Boqun Feng
---
 arch/arc/include/asm/atomic.h |  8 ++--
 arch/arm/include/asm/atomic.h |  2 +
 include/linux/atomic.h        | 96 ++++++++++++++-----------------------------
 3 files changed, 36 insertions(+), 70 deletions(-)

-- 
2.11.0

diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 8f64f3b79b8a..4e0072730241 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -187,7 +187,8 @@ static inline int atomic_fetch_##op(int i, atomic_t *v)	\
 ATOMIC_OPS(add, +=, add)
 ATOMIC_OPS(sub, -=, sub)
 
-#define atomic_andnot atomic_andnot
+#define atomic_andnot		atomic_andnot
+#define atomic_fetch_andnot	atomic_fetch_andnot
 
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op, asm_op)					\
@@ -296,8 +297,6 @@ ATOMIC_OPS(add, +=, CTOP_INST_AADD_DI_R2_R2_R3)
 	ATOMIC_FETCH_OP(op, c_op, asm_op)
 
 ATOMIC_OPS(and, &=, CTOP_INST_AAND_DI_R2_R2_R3)
-#define atomic_andnot(mask, v) atomic_and(~(mask), (v))
-#define atomic_fetch_andnot(mask, v) atomic_fetch_and(~(mask), (v))
 ATOMIC_OPS(or, |=, CTOP_INST_AOR_DI_R2_R2_R3)
 ATOMIC_OPS(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3)
 
@@ -430,7 +429,8 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v)	\
 	ATOMIC64_OP_RETURN(op, op1, op2)					\
 	ATOMIC64_FETCH_OP(op, op1, op2)
 
-#define atomic64_andnot atomic64_andnot
+#define atomic64_andnot		atomic64_andnot
+#define atomic64_fetch_andnot	atomic64_fetch_andnot
 
 ATOMIC64_OPS(add, add.f, adc)
 ATOMIC64_OPS(sub, sub.f, sbc)
diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index 884c241424fe..f74756641410 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -216,6 +216,8 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 	return ret;
 }
 
+#define atomic_fetch_andnot		atomic_fetch_andnot
+
 #endif /* __LINUX_ARM_ARCH__ */
 
 #define ATOMIC_OPS(op, c_op, asm_op)					\
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 93fe5b4041e1..8e04f1f69bd9 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -354,12 +354,22 @@
 #endif
 #endif /* atomic_fetch_and_relaxed */
 
-#ifdef atomic_andnot
-/* atomic_fetch_andnot_relaxed */
+#ifndef atomic_andnot
+#define atomic_andnot(i, v)		atomic_and(~(int)(i), (v))
+#endif
+
 #ifndef atomic_fetch_andnot_relaxed
-#define atomic_fetch_andnot_relaxed	atomic_fetch_andnot
-#define atomic_fetch_andnot_acquire	atomic_fetch_andnot
-#define atomic_fetch_andnot_release	atomic_fetch_andnot
+
+#ifndef atomic_fetch_andnot
+#define atomic_fetch_andnot(i, v)		atomic_fetch_and(~(int)(i), (v))
+#define atomic_fetch_andnot_relaxed(i, v)	atomic_fetch_and_relaxed(~(int)(i), (v))
+#define atomic_fetch_andnot_acquire(i, v)	atomic_fetch_and_acquire(~(int)(i), (v))
+#define atomic_fetch_andnot_release(i, v)	atomic_fetch_and_release(~(int)(i), (v))
+#else /* atomic_fetch_andnot */
+#define atomic_fetch_andnot_relaxed		atomic_fetch_andnot
+#define atomic_fetch_andnot_acquire		atomic_fetch_andnot
+#define atomic_fetch_andnot_release		atomic_fetch_andnot
+#endif /* atomic_fetch_andnot */
 
 #else /* atomic_fetch_andnot_relaxed */
 
@@ -378,7 +388,6 @@
 	__atomic_op_fence(atomic_fetch_andnot, __VA_ARGS__)
 #endif
 #endif /* atomic_fetch_andnot_relaxed */
-#endif /* atomic_andnot */
 
 /* atomic_fetch_xor_relaxed */
 #ifndef atomic_fetch_xor_relaxed
@@ -655,33 +664,6 @@ static inline bool atomic_add_negative(int i, atomic_t *v)
 }
 #endif
 
-#ifndef atomic_andnot
-static inline void atomic_andnot(int i, atomic_t *v)
-{
-	atomic_and(~i, v);
-}
-
-static inline int atomic_fetch_andnot(int i, atomic_t *v)
-{
-	return atomic_fetch_and(~i, v);
-}
-
-static inline int atomic_fetch_andnot_relaxed(int i, atomic_t *v)
-{
-	return atomic_fetch_and_relaxed(~i, v);
-}
-
-static inline int atomic_fetch_andnot_acquire(int i, atomic_t *v)
-{
-	return atomic_fetch_and_acquire(~i, v);
-}
-
-static inline int atomic_fetch_andnot_release(int i, atomic_t *v)
-{
-	return atomic_fetch_and_release(~i, v);
-}
-#endif
-
 #ifndef atomic_inc_unless_negative
 static inline bool atomic_inc_unless_negative(atomic_t *v)
 {
@@ -1029,12 +1011,22 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 #endif
 #endif /* atomic64_fetch_and_relaxed */
 
-#ifdef atomic64_andnot
-/* atomic64_fetch_andnot_relaxed */
+#ifndef atomic64_andnot
+#define atomic64_andnot(i, v)		atomic64_and(~(long long)(i), (v))
+#endif
+
 #ifndef atomic64_fetch_andnot_relaxed
-#define atomic64_fetch_andnot_relaxed	atomic64_fetch_andnot
-#define atomic64_fetch_andnot_acquire	atomic64_fetch_andnot
-#define atomic64_fetch_andnot_release	atomic64_fetch_andnot
+
+#ifndef atomic64_fetch_andnot
+#define atomic64_fetch_andnot(i, v)		atomic64_fetch_and(~(long long)(i), (v))
+#define atomic64_fetch_andnot_relaxed(i, v)	atomic64_fetch_and_relaxed(~(long long)(i), (v))
+#define atomic64_fetch_andnot_acquire(i, v)	atomic64_fetch_and_acquire(~(long long)(i), (v))
+#define atomic64_fetch_andnot_release(i, v)	atomic64_fetch_and_release(~(long long)(i), (v))
+#else /* atomic64_fetch_andnot */
+#define atomic64_fetch_andnot_relaxed		atomic64_fetch_andnot
+#define atomic64_fetch_andnot_acquire		atomic64_fetch_andnot
+#define atomic64_fetch_andnot_release		atomic64_fetch_andnot
+#endif /* atomic64_fetch_andnot */
 
 #else /* atomic64_fetch_andnot_relaxed */
 
@@ -1053,7 +1045,6 @@
 	__atomic_op_fence(atomic64_fetch_andnot, __VA_ARGS__)
 #endif
 #endif /* atomic64_fetch_andnot_relaxed */
-#endif /* atomic64_andnot */
 
 /* atomic64_fetch_xor_relaxed */
 #ifndef atomic64_fetch_xor_relaxed
@@ -1262,33 +1253,6 @@ static inline bool atomic64_add_negative(long long i, atomic64_t *v)
 }
 #endif
 
-#ifndef atomic64_andnot
-static inline void atomic64_andnot(long long i, atomic64_t *v)
-{
-	atomic64_and(~i, v);
-}
-
-static inline long long atomic64_fetch_andnot(long long i, atomic64_t *v)
-{
-	return atomic64_fetch_and(~i, v);
-}
-
-static inline long long atomic64_fetch_andnot_relaxed(long long i, atomic64_t *v)
-{
-	return atomic64_fetch_and_relaxed(~i, v);
-}
-
-static inline long long atomic64_fetch_andnot_acquire(long long i, atomic64_t *v)
-{
-	return atomic64_fetch_and_acquire(~i, v);
-}
-
-static inline long long atomic64_fetch_andnot_release(long long i, atomic64_t *v)
-{
-	return atomic64_fetch_and_release(~i, v);
-}
-#endif
-
 #ifndef atomic64_inc_unless_negative
 static inline bool atomic64_inc_unless_negative(atomic64_t *v)
 {