From patchwork Tue May 29 15:43:35 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 137194
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Will Deacon, Arnd Bergmann,
    Richard Henderson, Ivan Kokshaysky, Matt Turner, Vineet Gupta,
    Russell King, Benjamin Herrenschmidt, Paul Mackerras,
    Michael Ellerman, Palmer Dabbelt, Albert Ou
Subject: [PATCHv2 05/16] atomics: prepare for atomic64_fetch_add_unless()
Date: Tue, 29 May 2018 16:43:35 +0100
Message-Id: <20180529154346.3168-6-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com>
References: <20180529154346.3168-1-mark.rutland@arm.com>

Currently, architectures must implement atomic_fetch_add_unless(), with
common code providing atomic_add_unless(). Architectures must also
implement atomic64_add_unless() directly, with no corresponding
atomic64_fetch_add_unless().

This divergence is unfortunate, and means that the APIs for atomic_t,
atomic64_t, and atomic_long_t differ.

In preparation for unifying things, with architectures providing
atomic64_fetch_add_unless(), this patch adds a generic
atomic64_add_unless() which will use atomic64_fetch_add_unless(). The
instrumented atomics are updated to take this case into account.

There should be no functional change as a result of this patch.
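
For context, the semantics being unified can be sketched in plain C:
fetch_add_unless() returns the value the variable held *before* any
addition, so add_unless() falls out as a single comparison against @u,
exactly as the generic wrapper in this patch does. The sketch below is
illustrative only, using a GCC builtin rather than the kernel's atomic
primitives, and the sketch_* names are made up for the example:

	#include <stdio.h>

	/* Sketch of fetch_add_unless() semantics (not kernel code). */
	static long long sketch_fetch_add_unless(long long *v, long long a,
						 long long u)
	{
		long long c = *v;

		/* Retry until we either observe u or successfully add a. */
		while (c != u) {
			long long old = __sync_val_compare_and_swap(v, c, c + a);
			if (old == c)
				break;	/* CAS succeeded; c is the pre-add value. */
			c = old;	/* Lost a race; retry with the fresh value. */
		}
		return c;		/* Value observed before any addition. */
	}

	/* add_unless() derives generically, mirroring the patch's wrapper. */
	static int sketch_add_unless(long long *v, long long a, long long u)
	{
		return sketch_fetch_add_unless(v, a, u) != u;
	}

	int main(void)
	{
		long long v = 5;

		printf("%d\n", sketch_add_unless(&v, 1, 0));	/* 1: added; v is now 6 */
		printf("%d\n", sketch_add_unless(&v, 1, 6));	/* 0: v was 6; no add */
		return 0;
	}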
Signed-off-by: Mark Rutland
Acked-by: Peter Zijlstra (Intel)
Cc: Boqun Feng
Cc: Will Deacon
Cc: Arnd Bergmann
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
Cc: Vineet Gupta
Cc: Russell King
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Palmer Dabbelt
Cc: Albert Ou
---
 include/asm-generic/atomic-instrumented.h |  9 +++++++++
 include/linux/atomic.h                    | 16 ++++++++++++++++
 2 files changed, 25 insertions(+)

-- 
2.11.0

diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
index 6e0818c182e2..e22d7e5f4ce7 100644
--- a/include/asm-generic/atomic-instrumented.h
+++ b/include/asm-generic/atomic-instrumented.h
@@ -93,11 +93,20 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 }
 #endif
 
+#ifdef arch_atomic64_fetch_add_unless
+#define atomic64_fetch_add_unless atomic64_fetch_add_unless
+static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	kasan_check_write(v, sizeof(*v));
+	return arch_atomic64_fetch_add_unless(v, a, u);
+}
+#else
 static __always_inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_add_unless(v, a, u);
 }
+#endif
 
 static __always_inline void atomic_inc(atomic_t *v)
 {
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 1105c0b37f27..8d93209052e1 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -1072,6 +1072,22 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 #endif /* atomic64_try_cmpxchg */
 
 /**
+ * atomic64_add_unless - add unless the number is already a given value
+ * @v: pointer of type atomic64_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, so long as @v was not already @u.
+ * Returns non-zero if @v was not @u, and zero otherwise.
+ */
+#ifdef atomic64_fetch_add_unless
+static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
+{
+	return atomic64_fetch_add_unless(v, a, u) != u;
+}
+#endif
+
+/**
  * atomic64_inc_not_zero - increment unless the number is zero
  * @v: pointer of type atomic64_t
 *
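
As an illustration of the opt-in mechanism: an architecture advertises
its own implementation by defining arch_atomic64_fetch_add_unless as a
macro, which is what the #ifdef in atomic-instrumented.h above keys
off. The sketch below is hypothetical, loosely modelled on the kernel's
generic try_cmpxchg-based fallback pattern rather than taken from this
series:

	/* Hypothetical arch header, sketching the opt-in only. */
	static __always_inline s64
	arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
	{
		s64 c = arch_atomic64_read(v);

		/* Keep retrying the add until we observe u. */
		do {
			if (c == u)
				break;
		} while (!arch_atomic64_try_cmpxchg(v, &c, c + a));

		return c;
	}
	/* Defining the macro selects the new instrumented wrapper above. */
	#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless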