From patchwork Fri Feb 22 18:50:07 2019
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 159051
From: Will Deacon
To: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Will Deacon, "Paul E. McKenney",
    Benjamin Herrenschmidt, Michael Ellerman, Arnd Bergmann,
    Peter Zijlstra, Andrea Parri, Palmer Dabbelt, Daniel Lustig,
    David Howells, Alan Stern, Linus Torvalds, "Maciej W. Rozycki",
    Paul Burton, Ingo Molnar, Yoshinori Sato, Rich Felker, Tony Luck
Rozycki" , Paul Burton , Ingo Molnar , Yoshinori Sato , Rich Felker , Tony Luck Subject: [RFC PATCH 01/20] asm-generic/mmiowb: Add generic implementation of mmiowb() tracking Date: Fri, 22 Feb 2019 18:50:07 +0000 Message-Id: <20190222185026.10973-2-will.deacon@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20190222185026.10973-1-will.deacon@arm.com> References: <20190222185026.10973-1-will.deacon@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org In preparation for removing all explicit mmiowb() calls from driver code, implement a tracking system in asm-generic based on the PowerPC implementation. This allows architectures with a non-empty mmiowb() definition to automatically have the barrier inserted in spin_unlock() following a critical section containing an I/O write. Signed-off-by: Will Deacon --- include/asm-generic/mmiowb.h | 60 ++++++++++++++++++++++++++++++++++++++++++++ kernel/Kconfig.locks | 3 +++ kernel/locking/spinlock.c | 5 ++++ 3 files changed, 68 insertions(+) create mode 100644 include/asm-generic/mmiowb.h -- 2.11.0 diff --git a/include/asm-generic/mmiowb.h b/include/asm-generic/mmiowb.h new file mode 100644 index 000000000000..1cec8907806f --- /dev/null +++ b/include/asm-generic/mmiowb.h @@ -0,0 +1,60 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __ASM_GENERIC_MMIOWB_H +#define __ASM_GENERIC_MMIOWB_H + +/* + * Generic implementation of mmiowb() tracking for spinlocks. + * + * If your architecture doesn't ensure that writes to an I/O peripheral + * within two spinlocked sections on two different CPUs are seen by the + * peripheral in the order corresponding to the lock handover, then you + * need to follow these FIVE easy steps: + * + * 1. Implement mmiowb() in asm/mmiowb.h and then #include this file + * 2. Ensure your I/O write accessors call mmiowb_set_pending() + * 3. Select ARCH_HAS_MMIOWB + * 4. Untangle the resulting mess of header files + * 5. 
diff --git a/include/asm-generic/mmiowb.h b/include/asm-generic/mmiowb.h
new file mode 100644
index 000000000000..1cec8907806f
--- /dev/null
+++ b/include/asm-generic/mmiowb.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_GENERIC_MMIOWB_H
+#define __ASM_GENERIC_MMIOWB_H
+
+/*
+ * Generic implementation of mmiowb() tracking for spinlocks.
+ *
+ * If your architecture doesn't ensure that writes to an I/O peripheral
+ * within two spinlocked sections on two different CPUs are seen by the
+ * peripheral in the order corresponding to the lock handover, then you
+ * need to follow these FIVE easy steps:
+ *
+ *	1. Implement mmiowb() in asm/mmiowb.h and then #include this file
+ *	2. Ensure your I/O write accessors call mmiowb_set_pending()
+ *	3. Select ARCH_HAS_MMIOWB
+ *	4. Untangle the resulting mess of header files
+ *	5. Complain to your architects
+ */
+#if defined(CONFIG_ARCH_HAS_MMIOWB) && defined(CONFIG_SMP)
+
+#include <linux/compiler.h>
+#include <linux/percpu.h>
+#include <linux/types.h>
+
+struct mmiowb_state {
+	u16 nesting_count;
+	u16 mmiowb_pending;
+};
+DECLARE_PER_CPU(struct mmiowb_state, __mmiowb_state);
+
+#ifndef mmiowb_set_pending
+static inline void mmiowb_set_pending(void)
+{
+	__this_cpu_write(__mmiowb_state.mmiowb_pending, 1);
+}
+#endif
+
+#ifndef mmiowb_spin_lock
+static inline void mmiowb_spin_lock(void)
+{
+	if (__this_cpu_inc_return(__mmiowb_state.nesting_count) == 1)
+		__this_cpu_write(__mmiowb_state.mmiowb_pending, 0);
+}
+#endif
+
+#ifndef mmiowb_spin_unlock
+static inline void mmiowb_spin_unlock(void)
+{
+	if (__this_cpu_xchg(__mmiowb_state.mmiowb_pending, 0))
+		mmiowb();
+	__this_cpu_dec_return(__mmiowb_state.nesting_count);
+}
+#endif
+
+#else
+#define mmiowb_set_pending()	do { } while (0)
+#define mmiowb_spin_lock()	do { } while (0)
+#define mmiowb_spin_unlock()	do { } while (0)
+#endif	/* CONFIG_ARCH_HAS_MMIOWB && CONFIG_SMP */
+#endif	/* __ASM_GENERIC_MMIOWB_H */
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 84d882f3e299..04976ae41176 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -248,3 +248,6 @@ config ARCH_USE_QUEUED_RWLOCKS
 config QUEUED_RWLOCKS
 	def_bool y if ARCH_USE_QUEUED_RWLOCKS
 	depends on SMP
+
+config ARCH_HAS_MMIOWB
+	bool
diff --git a/kernel/locking/spinlock.c b/kernel/locking/spinlock.c
index 936f3d14dd6b..cbae365d7dd1 100644
--- a/kernel/locking/spinlock.c
+++ b/kernel/locking/spinlock.c
@@ -22,6 +22,11 @@
 #include <linux/debug_locks.h>
 #include <linux/export.h>
 
+#ifdef CONFIG_ARCH_HAS_MMIOWB
+DEFINE_PER_CPU(struct mmiowb_state, __mmiowb_state);
+EXPORT_PER_CPU_SYMBOL(__mmiowb_state);
+#endif
+
 /*
  * If lockdep is enabled then we use the non-preemption spin-ops
  * even on CONFIG_PREEMPT, because lockdep assumes that interrupts are
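For an architecture opting in, the five steps above amount to a small
amount of glue. The following is a sketch for a made-up arch "foo":
the barrier instruction shown is RISC-V's "fence o,w", used here purely
as an example of a suitable I/O-write-ordering fence, and the writel()
wrapper is deliberately simplified (a real accessor would also handle
endianness and the usual I/O ordering rules).

	/* Step 1: arch/foo/include/asm/mmiowb.h (hypothetical) */
	#ifndef __ASM_MMIOWB_H
	#define __ASM_MMIOWB_H

	/* Order prior MMIO writes before the subsequent lock-releasing store. */
	#define mmiowb()	__asm__ __volatile__("fence o,w" : : : "memory")

	#include <asm-generic/mmiowb.h>

	#endif /* __ASM_MMIOWB_H */

	/* Step 2: in arch/foo's asm/io.h, flag the pending barrier (sketch). */
	#define writel(v, c)					\
	({							\
		__raw_writel((u32)(v), (c));			\
		mmiowb_set_pending();				\
	})

	# Step 3: arch/foo/Kconfig
	config FOO
		select ARCH_HAS_MMIOWB

The per-CPU __mmiowb_state defined in kernel/locking/spinlock.c then
backs the nesting_count/mmiowb_pending bookkeeping that
mmiowb_spin_lock() and mmiowb_spin_unlock() operate on.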