From patchwork Tue Oct 3 18:25:29 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 114713
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: paulmck@linux.vnet.ibm.com, Will Deacon, Peter Zijlstra
Subject: [PATCH v2 4/4] locking: Remove dummy arch_{read,spin,write}_lock_flags implementations
Date: Tue, 3 Oct 2017 19:25:29 +0100
Message-Id: <1507055129-12300-4-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1507055129-12300-1-git-send-email-will.deacon@arm.com>
References: <1507055129-12300-1-git-send-email-will.deacon@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

The arch_{read,spin,write}_lock_flags macros are simply mapped to the
non-flags versions by the majority of architectures, so do this in core
code and remove the dummy implementations. Also remove the
implementation in spinlock_up.h, since all callers of
do_raw_spin_lock_flags call local_irq_save(flags) anyway.
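For reference, the override pattern the series moves to can be sketched as a
stand-alone C toy (illustration only, not kernel code: the "lock" here is a
plain int and the names merely mirror the kernel macros). An architecture
that really wants a flags-aware variant defines the function and then
defines a macro of the same name; core code supplies the dummy mapping only
when that macro is absent:

    #include <stdio.h>

    static inline void arch_spin_lock(int *lock)
    {
        *lock = 1;              /* stand-in for the real lock operation */
    }

    /* "arch side": opt in to a flags-aware implementation. */
    static inline void arch_spin_lock_flags(int *lock, unsigned long flags)
    {
        (void)flags;            /* a real arch might re-enable IRQs here */
        arch_spin_lock(lock);
    }
    #define arch_spin_lock_flags arch_spin_lock_flags

    /* "core side": fall back to the plain lock if the arch did not opt in. */
    #ifndef arch_spin_lock_flags
    #define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
    #endif

    int main(void)
    {
        int lock = 0;
        unsigned long flags = 0;

        arch_spin_lock_flags(&lock, flags);
        printf("locked = %d\n", lock);
        return 0;
    }

The same #ifndef guards appear below in include/linux/spinlock.h and
include/linux/rwlock.h; ia64, mn10300, parisc, powerpc and s390 keep their
real flags-aware implementations by defining the corresponding macro.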
Cc: Peter Zijlstra
Signed-off-by: Will Deacon
---
 arch/alpha/include/asm/spinlock.h        | 4 ----
 arch/arc/include/asm/spinlock.h          | 4 ----
 arch/arm/include/asm/spinlock.h          | 5 -----
 arch/arm64/include/asm/spinlock.h        | 5 -----
 arch/blackfin/include/asm/spinlock.h     | 6 ------
 arch/hexagon/include/asm/spinlock.h      | 5 -----
 arch/ia64/include/asm/spinlock.h         | 5 +++--
 arch/m32r/include/asm/spinlock.h         | 4 ----
 arch/metag/include/asm/spinlock.h        | 5 -----
 arch/metag/include/asm/spinlock_lnkget.h | 3 ---
 arch/mips/include/asm/spinlock.h         | 3 ---
 arch/mn10300/include/asm/spinlock.h      | 4 +---
 arch/parisc/include/asm/spinlock.h       | 4 +---
 arch/powerpc/include/asm/spinlock.h      | 4 +---
 arch/s390/include/asm/spinlock.h         | 4 +---
 arch/sh/include/asm/spinlock-cas.h       | 4 ----
 arch/sh/include/asm/spinlock-llsc.h      | 4 ----
 arch/sparc/include/asm/spinlock_32.h     | 4 ----
 arch/sparc/include/asm/spinlock_64.h     | 3 ---
 arch/tile/include/asm/spinlock_32.h      | 6 ------
 arch/tile/include/asm/spinlock_64.h      | 6 ------
 arch/x86/include/asm/spinlock.h          | 3 ---
 arch/xtensa/include/asm/spinlock.h       | 5 -----
 include/asm-generic/qspinlock.h          | 1 -
 include/linux/rwlock.h                   | 9 +++++++++
 include/linux/spinlock.h                 | 4 ++++
 include/linux/spinlock_up.h              | 8 --------
 27 files changed, 20 insertions(+), 102 deletions(-)

--
2.1.4

diff --git a/arch/alpha/include/asm/spinlock.h b/arch/alpha/include/asm/spinlock.h
index 7bff6316b8bb..3e2b4a05cb0f 100644
--- a/arch/alpha/include/asm/spinlock.h
+++ b/arch/alpha/include/asm/spinlock.h
@@ -13,7 +13,6 @@
  * We make no fairness assumptions. They have a cost.
  */

-#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
 #define arch_spin_is_locked(x) ((x)->lock != 0)

 static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
@@ -160,7 +159,4 @@ static inline void arch_write_unlock(arch_rwlock_t * lock)
         lock->lock = 0;
 }

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #endif /* _ALPHA_SPINLOCK_H */
diff --git a/arch/arc/include/asm/spinlock.h b/arch/arc/include/asm/spinlock.h
index f85bb585cdfc..2ba04a7db621 100644
--- a/arch/arc/include/asm/spinlock.h
+++ b/arch/arc/include/asm/spinlock.h
@@ -14,7 +14,6 @@
 #include

 #define arch_spin_is_locked(x) ((x)->slock != __ARCH_SPIN_LOCK_UNLOCKED__)
-#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)

 #ifdef CONFIG_ARC_HAS_LLSC

@@ -410,7 +409,4 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)

 #endif

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #endif /* __ASM_SPINLOCK_H */
diff --git a/arch/arm/include/asm/spinlock.h b/arch/arm/include/asm/spinlock.h
index d40a28fcbc62..daa87212c9a1 100644
--- a/arch/arm/include/asm/spinlock.h
+++ b/arch/arm/include/asm/spinlock.h
@@ -52,8 +52,6 @@ static inline void dsb_sev(void)
  * memory.
  */

-#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
-
 static inline void arch_spin_lock(arch_spinlock_t *lock)
 {
         unsigned long tmp;
@@ -270,7 +268,4 @@ static inline int arch_read_trylock(arch_rwlock_t *rw)
         }
 }

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #endif /* __ASM_SPINLOCK_H */
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index 1504f2b95c57..aa51a38e46e4 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -27,8 +27,6 @@
  * instructions.
  */

-#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
-
 static inline void arch_spin_lock(arch_spinlock_t *lock)
 {
         unsigned int tmp;
@@ -303,9 +301,6 @@ static inline int arch_read_trylock(arch_rwlock_t *rw)
 /* read_can_lock - would read_trylock() succeed? */
 #define arch_read_can_lock(x) ((x)->lock < 0x80000000)

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 /* See include/linux/spinlock.h */
 #define smp_mb__after_spinlock() smp_mb()

diff --git a/arch/blackfin/include/asm/spinlock.h b/arch/blackfin/include/asm/spinlock.h
index 3885d12d9939..839d1441af3a 100644
--- a/arch/blackfin/include/asm/spinlock.h
+++ b/arch/blackfin/include/asm/spinlock.h
@@ -36,8 +36,6 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
         __raw_spin_lock_asm(&lock->lock);
 }

-#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
-
 static inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
         return __raw_spin_trylock_asm(&lock->lock);
@@ -53,8 +51,6 @@ static inline void arch_read_lock(arch_rwlock_t *rw)
         __raw_read_lock_asm(&rw->lock);
 }

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-
 static inline int arch_read_trylock(arch_rwlock_t *rw)
 {
         return __raw_read_trylock_asm(&rw->lock);
@@ -70,8 +66,6 @@ static inline void arch_write_lock(arch_rwlock_t *rw)
         __raw_write_lock_asm(&rw->lock);
 }

-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 static inline int arch_write_trylock(arch_rwlock_t *rw)
 {
         return __raw_write_trylock_asm(&rw->lock);
diff --git a/arch/hexagon/include/asm/spinlock.h b/arch/hexagon/include/asm/spinlock.h
index 9f9414b9c303..48020863f53a 100644
--- a/arch/hexagon/include/asm/spinlock.h
+++ b/arch/hexagon/include/asm/spinlock.h
@@ -167,11 +167,6 @@ static inline unsigned int arch_spin_trylock(arch_spinlock_t *lock)
 /*
  * SMP spinlocks are intended to allow only a single CPU at the lock
  */
-#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
-
 #define arch_spin_is_locked(x) ((x)->lock != 0)

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #endif
diff --git a/arch/ia64/include/asm/spinlock.h b/arch/ia64/include/asm/spinlock.h
index ed1e6212e9de..35b31884863b 100644
--- a/arch/ia64/include/asm/spinlock.h
+++ b/arch/ia64/include/asm/spinlock.h
@@ -126,6 +126,7 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 {
         arch_spin_lock(lock);
 }
+#define arch_spin_lock_flags arch_spin_lock_flags

 #ifdef ASM_SUPPORTED

@@ -153,6 +154,7 @@ arch_read_lock_flags(arch_rwlock_t *lock, unsigned long flags)
         : "p6", "p7", "r2", "memory");
 }

+#define arch_read_lock_flags arch_read_lock_flags
 #define arch_read_lock(lock) arch_read_lock_flags(lock, 0)

 #else /* !ASM_SUPPORTED */
@@ -205,6 +207,7 @@ arch_write_lock_flags(arch_rwlock_t *lock, unsigned long flags)
         : "ar.ccv", "p6", "p7", "r2", "r29", "memory");
 }

+#define arch_write_lock_flags arch_write_lock_flags
 #define arch_write_lock(rw) arch_write_lock_flags(rw, 0)

 #define arch_write_trylock(rw) \
@@ -228,8 +231,6 @@ static inline void arch_write_unlock(arch_rwlock_t *x)

 #else /* !ASM_SUPPORTED */

-#define arch_write_lock_flags(l, flags) arch_write_lock(l)
-
 #define arch_write_lock(l) \
 ({ \
         __u64 ia64_val, ia64_set_val = ia64_dep_mi(-1, 0, 31, 1); \
diff --git a/arch/m32r/include/asm/spinlock.h b/arch/m32r/include/asm/spinlock.h
index 6809a9bbd169..882203db8723 100644
--- a/arch/m32r/include/asm/spinlock.h
+++ b/arch/m32r/include/asm/spinlock.h
@@ -28,7 +28,6 @@
  */

 #define arch_spin_is_locked(x) (*(volatile int *)(&(x)->slock) <= 0)
-#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)

 /**
  * arch_spin_trylock - Try spin lock and return a result
@@ -305,7 +304,4 @@ static inline int arch_write_trylock(arch_rwlock_t *lock)
         return 0;
 }

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #endif /* _ASM_M32R_SPINLOCK_H */
diff --git a/arch/metag/include/asm/spinlock.h b/arch/metag/include/asm/spinlock.h
index b5b4174cde5e..80e3e59172f2 100644
--- a/arch/metag/include/asm/spinlock.h
+++ b/arch/metag/include/asm/spinlock.h
@@ -15,9 +15,4 @@
  * locked.
  */

-#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
-
-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #endif /* __ASM_SPINLOCK_H */
diff --git a/arch/metag/include/asm/spinlock_lnkget.h b/arch/metag/include/asm/spinlock_lnkget.h
index d5c334ddfd62..5708ac0a9d09 100644
--- a/arch/metag/include/asm/spinlock_lnkget.h
+++ b/arch/metag/include/asm/spinlock_lnkget.h
@@ -209,7 +209,4 @@ static inline int arch_read_trylock(arch_rwlock_t *rw)
         return tmp;
 }

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #endif /* __ASM_SPINLOCK_LNKGET_H */
diff --git a/arch/mips/include/asm/spinlock.h b/arch/mips/include/asm/spinlock.h
index 4260d3f80d3a..ee81297d9117 100644
--- a/arch/mips/include/asm/spinlock.h
+++ b/arch/mips/include/asm/spinlock.h
@@ -13,7 +13,4 @@
 #include
 #include

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #endif /* _ASM_SPINLOCK_H */
diff --git a/arch/mn10300/include/asm/spinlock.h b/arch/mn10300/include/asm/spinlock.h
index 54f75dac8094..879cd0df53ba 100644
--- a/arch/mn10300/include/asm/spinlock.h
+++ b/arch/mn10300/include/asm/spinlock.h
@@ -84,6 +84,7 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *lock,
         : "d" (flags), "a"(&lock->slock), "i"(EPSW_IE | MN10300_CLI_LEVEL)
         : "memory", "cc");
 }
+#define arch_spin_lock_flags arch_spin_lock_flags

 #ifdef __KERNEL__

@@ -171,9 +172,6 @@ static inline int arch_write_trylock(arch_rwlock_t *lock)
         return 0;
 }

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #define _raw_spin_relax(lock) cpu_relax()
 #define _raw_read_relax(lock) cpu_relax()
 #define _raw_write_relax(lock) cpu_relax()
diff --git a/arch/parisc/include/asm/spinlock.h b/arch/parisc/include/asm/spinlock.h
index 136e1c9bb8a9..d66d7b1efc4e 100644
--- a/arch/parisc/include/asm/spinlock.h
+++ b/arch/parisc/include/asm/spinlock.h
@@ -31,6 +31,7 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *x,
                 cpu_relax();
         mb();
 }
+#define arch_spin_lock_flags arch_spin_lock_flags

 static inline void arch_spin_unlock(arch_spinlock_t *x)
 {
@@ -168,7 +169,4 @@ static __inline__ int arch_write_trylock(arch_rwlock_t *rw)
         return result;
 }

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #endif /* __ASM_SPINLOCK_H */
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index d83f4f755ad8..b9ebc3085fb7 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -161,6 +161,7 @@ void arch_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
                 local_irq_restore(flags_dis);
         }
 }
+#define arch_spin_lock_flags arch_spin_lock_flags

 static inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
@@ -299,9 +300,6 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
         rw->lock = 0;
 }

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #define arch_spin_relax(lock) __spin_yield(lock)
 #define arch_read_relax(lock) __rw_yield(lock)
 #define arch_write_relax(lock) __rw_yield(lock)
diff --git a/arch/s390/include/asm/spinlock.h b/arch/s390/include/asm/spinlock.h
index 4eca60cc81e4..9fa855f91e55 100644
--- a/arch/s390/include/asm/spinlock.h
+++ b/arch/s390/include/asm/spinlock.h
@@ -81,6 +81,7 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *lp,
         if (!arch_spin_trylock_once(lp))
                 arch_spin_lock_wait_flags(lp, flags);
 }
+#define arch_spin_lock_flags arch_spin_lock_flags

 static inline int arch_spin_trylock(arch_spinlock_t *lp)
 {
@@ -114,9 +115,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lp)
 extern int _raw_read_trylock_retry(arch_rwlock_t *lp);
 extern int _raw_write_trylock_retry(arch_rwlock_t *lp);

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 static inline int arch_read_trylock_once(arch_rwlock_t *rw)
 {
         int old = ACCESS_ONCE(rw->lock);
diff --git a/arch/sh/include/asm/spinlock-cas.h b/arch/sh/include/asm/spinlock-cas.h
index 295993c2598e..270ee4d3e25b 100644
--- a/arch/sh/include/asm/spinlock-cas.h
+++ b/arch/sh/include/asm/spinlock-cas.h
@@ -27,7 +27,6 @@ static inline unsigned __sl_cas(volatile unsigned *p, unsigned old, unsigned new
  */

 #define arch_spin_is_locked(x) ((x)->lock <= 0)
-#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)

 static inline void arch_spin_lock(arch_spinlock_t *lock)
 {
@@ -90,7 +89,4 @@ static inline int arch_write_trylock(arch_rwlock_t *rw)
         return __sl_cas(&rw->lock, RW_LOCK_BIAS, 0) == RW_LOCK_BIAS;
 }

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #endif /* __ASM_SH_SPINLOCK_CAS_H */
diff --git a/arch/sh/include/asm/spinlock-llsc.h b/arch/sh/include/asm/spinlock-llsc.h
index a6f9edd15317..715595de286a 100644
--- a/arch/sh/include/asm/spinlock-llsc.h
+++ b/arch/sh/include/asm/spinlock-llsc.h
@@ -19,7 +19,6 @@
  */

 #define arch_spin_is_locked(x) ((x)->lock <= 0)
-#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)

 /*
  * Simple spin lock operations. There are two variants, one clears IRQ's
@@ -197,7 +196,4 @@ static inline int arch_write_trylock(arch_rwlock_t *rw)
         return (oldval > (RW_LOCK_BIAS - 1));
 }

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #endif /* __ASM_SH_SPINLOCK_LLSC_H */
diff --git a/arch/sparc/include/asm/spinlock_32.h b/arch/sparc/include/asm/spinlock_32.h
index 9d9129efd5d6..12bf857b471e 100644
--- a/arch/sparc/include/asm/spinlock_32.h
+++ b/arch/sparc/include/asm/spinlock_32.h
@@ -182,10 +182,6 @@ static inline int __arch_read_trylock(arch_rwlock_t *rw)
         res; \
 })

-#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
-#define arch_read_lock_flags(rw, flags) arch_read_lock(rw)
-#define arch_write_lock_flags(rw, flags) arch_write_lock(rw)
-
 #endif /* !(__ASSEMBLY__) */

 #endif /* __SPARC_SPINLOCK_H */
diff --git a/arch/sparc/include/asm/spinlock_64.h b/arch/sparc/include/asm/spinlock_64.h
index 3b67705e1b74..99b6e1c4f630 100644
--- a/arch/sparc/include/asm/spinlock_64.h
+++ b/arch/sparc/include/asm/spinlock_64.h
@@ -13,9 +13,6 @@
 #include
 #include

-#define arch_read_lock_flags(p, f) arch_read_lock(p)
-#define arch_write_lock_flags(p, f) arch_write_lock(p)
-
 #endif /* !(__ASSEMBLY__) */

 #endif /* !(__SPARC64_SPINLOCK_H) */
diff --git a/arch/tile/include/asm/spinlock_32.h b/arch/tile/include/asm/spinlock_32.h
index 91d05f21cba9..fb5313d77315 100644
--- a/arch/tile/include/asm/spinlock_32.h
+++ b/arch/tile/include/asm/spinlock_32.h
@@ -51,9 +51,6 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lock)

 void arch_spin_lock(arch_spinlock_t *lock);

-/* We cannot take an interrupt after getting a ticket, so don't enable them. */
-#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
-
 int arch_spin_trylock(arch_spinlock_t *lock);

 static inline void arch_spin_unlock(arch_spinlock_t *lock)
@@ -109,7 +106,4 @@ void arch_read_unlock(arch_rwlock_t *rwlock);
  */
 void arch_write_unlock(arch_rwlock_t *rwlock);

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #endif /* _ASM_TILE_SPINLOCK_32_H */
diff --git a/arch/tile/include/asm/spinlock_64.h b/arch/tile/include/asm/spinlock_64.h
index c802f48badf4..5b616ef642a8 100644
--- a/arch/tile/include/asm/spinlock_64.h
+++ b/arch/tile/include/asm/spinlock_64.h
@@ -75,9 +75,6 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
 /* Try to get the lock, and return whether we succeeded. */
 int arch_spin_trylock(arch_spinlock_t *lock);

-/* We cannot take an interrupt after getting a ticket, so don't enable them. */
-#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
-
 /*
  * Read-write spinlocks, allowing multiple readers
  * but only one writer.
@@ -138,7 +135,4 @@ static inline int arch_write_trylock(arch_rwlock_t *rw)
         return 0;
 }

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #endif /* _ASM_TILE_SPINLOCK_64_H */
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index a558c187f20c..c6a6adf0a5c5 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -41,7 +41,4 @@
 #include

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #endif /* _ASM_X86_SPINLOCK_H */
diff --git a/arch/xtensa/include/asm/spinlock.h b/arch/xtensa/include/asm/spinlock.h
index d005af51e2e1..c6e1290dcbb7 100644
--- a/arch/xtensa/include/asm/spinlock.h
+++ b/arch/xtensa/include/asm/spinlock.h
@@ -33,8 +33,6 @@

 #define arch_spin_is_locked(x) ((x)->slock != 0)

-#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
-
 static inline void arch_spin_lock(arch_spinlock_t *lock)
 {
         unsigned long tmp;
@@ -198,7 +196,4 @@ static inline void arch_read_unlock(arch_rwlock_t *rw)
                 : "memory");
 }

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #endif /* _XTENSA_SPINLOCK_H */
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index 66260777d644..b37b4ad7eb94 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -121,6 +121,5 @@ static __always_inline bool virt_spin_lock(struct qspinlock *lock)
 #define arch_spin_lock(l) queued_spin_lock(l)
 #define arch_spin_trylock(l) queued_spin_trylock(l)
 #define arch_spin_unlock(l) queued_spin_unlock(l)
-#define arch_spin_lock_flags(l, f) queued_spin_lock(l)

 #endif /* __ASM_GENERIC_QSPINLOCK_H */
diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h
index 766c5ca5cbd1..3dcd617e65ae 100644
--- a/include/linux/rwlock.h
+++ b/include/linux/rwlock.h
@@ -38,6 +38,15 @@ do { \
 extern int do_raw_write_trylock(rwlock_t *lock);
 extern void do_raw_write_unlock(rwlock_t *lock) __releases(lock);
 #else
+
+#ifndef arch_read_lock_flags
+# define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
+#endif
+
+#ifndef arch_write_lock_flags
+# define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
+#endif
+
 # define do_raw_read_lock(rwlock) do {__acquire(lock); arch_read_lock(&(rwlock)->raw_lock); } while (0)
 # define do_raw_read_lock_flags(lock, flags) \
         do {__acquire(lock); arch_read_lock_flags(&(lock)->raw_lock, *(flags)); } while (0)
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 1e3e48041800..4e202b00dd66 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -165,6 +165,10 @@ static inline void do_raw_spin_lock(raw_spinlock_t *lock) __acquires(lock)
         arch_spin_lock(&lock->raw_lock);
 }

+#ifndef arch_spin_lock_flags
+#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
+#endif
+
 static inline void
 do_raw_spin_lock_flags(raw_spinlock_t *lock, unsigned long *flags) __acquires(lock)
 {
diff --git a/include/linux/spinlock_up.h b/include/linux/spinlock_up.h
index 901cf8f44388..0ac9112c1bbe 100644
--- a/include/linux/spinlock_up.h
+++ b/include/linux/spinlock_up.h
@@ -32,14 +32,6 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
         barrier();
 }

-static inline void
-arch_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
-{
-        local_irq_save(flags);
-        lock->slock = 0;
-        barrier();
-}
-
 static inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
         char oldval = lock->slock;