From patchwork Thu Jun 21 12:13:10 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 139530
From: Mark Rutland
To: mingo@kernel.org, will.deacon@arm.com, peterz@infradead.org,
	linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Arnd Bergmann, Richard Henderson,
	Ivan Kokshaysky, Matt Turner, Vineet Gupta, Russell King,
	Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Palmer Dabbelt, Albert Ou
Subject: [PATCHv4 07/18] atomics: prepare for atomic64_fetch_add_unless()
Date: Thu, 21 Jun 2018 13:13:10 +0100
Message-Id: <20180621121321.4761-8-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180621121321.4761-1-mark.rutland@arm.com>
References: <20180621121321.4761-1-mark.rutland@arm.com>

Currently all architectures must implement atomic_fetch_add_unless(),
with common code providing atomic_add_unless(). Architectures must also
implement atomic64_add_unless() directly, with no corresponding
atomic64_fetch_add_unless().

This divergence is unfortunate, and means that the APIs for atomic_t,
atomic64_t, and atomic_long_t differ.

In preparation for unifying things, with architectures providing
atomic64_fetch_add_unless(), this patch adds a generic
atomic64_add_unless() which will use atomic64_fetch_add_unless(). The
instrumented atomics are updated to take this case into account.

There should be no functional change as a result of this patch.
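To illustrate the semantics being unified here, below is a minimal
userspace sketch (not kernel code; the names, the a64_t stand-in type,
and the memory orderings are illustrative only) of a fetch_add_unless
operation and the boolean add_unless wrapper derived from it, written
with GCC's __atomic builtins:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Stand-in for atomic64_t. */
  typedef struct { int64_t counter; } a64_t;

  /* Add @a to @v unless @v is @u; return the value observed before. */
  static int64_t sketch_fetch_add_unless(a64_t *v, int64_t a, int64_t u)
  {
          int64_t c = __atomic_load_n(&v->counter, __ATOMIC_RELAXED);

          while (c != u) {
                  /* On failure, c is refreshed with the current value. */
                  if (__atomic_compare_exchange_n(&v->counter, &c, c + a,
                                                  false, __ATOMIC_SEQ_CST,
                                                  __ATOMIC_RELAXED))
                          break;
          }
          return c;
  }

  /* The generic wrapper this patch adds: true iff the add happened. */
  static bool sketch_add_unless(a64_t *v, int64_t a, int64_t u)
  {
          return sketch_fetch_add_unless(v, a, u) != u;
  }

  int main(void)
  {
          a64_t v = { .counter = 1 };

          printf("%d\n", sketch_add_unless(&v, 1, 0)); /* 1: 1 != 0, added */
          printf("%d\n", sketch_add_unless(&v, 1, 2)); /* 0: v already 2 */
          return 0;
  }

The key point is that add_unless() can always be derived from
fetch_add_unless(): the add was performed exactly when the value read
back differs from @u, which is what the generic definition below relies
on.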
Signed-off-by: Mark Rutland
Acked-by: Peter Zijlstra (Intel)
Reviewed-by: Will Deacon
Cc: Boqun Feng
Cc: Arnd Bergmann
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
Cc: Vineet Gupta
Cc: Russell King
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Palmer Dabbelt
Cc: Albert Ou
---
 include/asm-generic/atomic-instrumented.h |  9 +++++++++
 include/linux/atomic.h                    | 16 ++++++++++++++++
 2 files changed, 25 insertions(+)

--
2.11.0

diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
index 1f9b2a767d3c..444bf2f9d54d 100644
--- a/include/asm-generic/atomic-instrumented.h
+++ b/include/asm-generic/atomic-instrumented.h
@@ -93,11 +93,20 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 }
 #endif
 
+#ifdef arch_atomic64_fetch_add_unless
+#define atomic64_fetch_add_unless atomic64_fetch_add_unless
+static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	kasan_check_write(v, sizeof(*v));
+	return arch_atomic64_fetch_add_unless(v, a, u);
+}
+#else
 static __always_inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_add_unless(v, a, u);
 }
+#endif
 
 static __always_inline void atomic_inc(atomic_t *v)
 {
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index b89ba36cab94..3c03de648007 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -1043,6 +1043,22 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 #endif /* atomic64_try_cmpxchg */
 
 /**
+ * atomic64_add_unless - add unless the number is already a given value
+ * @v: pointer of type atomic64_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, if @v was not already @u.
+ * Returns true if the addition was done.
+ */
+#ifdef atomic64_fetch_add_unless
+static inline bool atomic64_add_unless(atomic64_t *v, long long a, long long u)
+{
+	return atomic64_fetch_add_unless(v, a, u) != u;
+}
+#endif
+
+/**
  * atomic64_inc_not_zero - increment unless the number is zero
  * @v: pointer of type atomic64_t
  *
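For context on how the generic atomic64_add_unless() gets used: the
atomic64_inc_not_zero() operation whose kernel-doc appears in the
trailing context above is, in the generic atomic.h fallbacks of this
era, just an add_unless with a = 1 and u = 0. A sketch of that shape
(illustrative, not necessarily the exact upstream definition):

  /* Increment @v unless it is zero; true iff the increment happened. */
  #ifndef atomic64_inc_not_zero
  #define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1, 0)
  #endif

So once atomic64_add_unless() has a generic definition in terms of
atomic64_fetch_add_unless(), derived operations like inc_not_zero come
along for free, which is the point of the atomic_t/atomic64_t API
unification the rest of this series carries out.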