From patchwork Fri Apr  5 13:59:19 2019
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 161848
From: Will Deacon <will.deacon@arm.com>
To: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Will Deacon, "Paul E. McKenney",
    Benjamin Herrenschmidt, Michael Ellerman, Arnd Bergmann,
    Peter Zijlstra, Andrea Parri, Palmer Dabbelt, Daniel Lustig,
    David Howells, Alan Stern, Linus Torvalds, "Maciej W. Rozycki",
    Paul Burton, Ingo Molnar, Yoshinori Sato, Rich Felker, Tony Luck,
    Mikulas Patocka, Akira Yokosawa, Luis Chamberlain, Nicholas Piggin
Subject: [PATCH v2 04/21] mmiowb: Hook up mmiowb helpers to spinlocks and
 generic I/O accessors
Date: Fri, 5 Apr 2019 14:59:19 +0100
Message-Id: <20190405135936.7266-5-will.deacon@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190405135936.7266-1-will.deacon@arm.com>
References: <20190405135936.7266-1-will.deacon@arm.com>

Removing explicit calls to mmiowb() from driver code means that we must
now call into the generic mmiowb_spin_{lock,unlock}() functions from the
core spinlock code. In order to elide barriers following critical
sections without any I/O writes, we also hook into the asm-generic I/O
routines.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 include/asm-generic/io.h        |  3 ++-
 include/linux/spinlock.h        | 11 ++++++++++-
 kernel/locking/spinlock_debug.c |  6 +++++-
 3 files changed, 17 insertions(+), 3 deletions(-)

-- 
2.11.0

diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
index 303871651f8a..bc490a746602 100644
--- a/include/asm-generic/io.h
+++ b/include/asm-generic/io.h
@@ -19,6 +19,7 @@
 #include <asm-generic/iomap.h>
 #endif
 
+#include <asm/mmiowb.h>
 #include <asm-generic/pci_iomap.h>
 
 #ifndef mmiowb
@@ -49,7 +50,7 @@
 
 /* serialize device access against a spin_unlock, usually handled there. */
 #ifndef __io_aw
-#define __io_aw()	barrier()
+#define __io_aw()	mmiowb_set_pending()
 #endif
 
 #ifndef __io_pbw
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index e089157dcf97..ed7c4d6b8235 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -57,6 +57,7 @@
 #include <linux/stringify.h>
 #include <linux/bottom_half.h>
 #include <asm/barrier.h>
+#include <asm/mmiowb.h>
 
 /*
@@ -178,6 +179,7 @@ static inline void do_raw_spin_lock(raw_spinlock_t *lock) __acquires(lock)
 {
 	__acquire(lock);
 	arch_spin_lock(&lock->raw_lock);
+	mmiowb_spin_lock();
 }
 
 #ifndef arch_spin_lock_flags
@@ -189,15 +191,22 @@ do_raw_spin_lock_flags(raw_spinlock_t *lock, unsigned long *flags) __acquires(lock)
 {
 	__acquire(lock);
 	arch_spin_lock_flags(&lock->raw_lock, *flags);
+	mmiowb_spin_lock();
 }
 
 static inline int do_raw_spin_trylock(raw_spinlock_t *lock)
 {
-	return arch_spin_trylock(&(lock)->raw_lock);
+	int ret = arch_spin_trylock(&(lock)->raw_lock);
+
+	if (ret)
+		mmiowb_spin_lock();
+
+	return ret;
 }
 
 static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
 {
+	mmiowb_spin_unlock();
 	arch_spin_unlock(&lock->raw_lock);
 	__release(lock);
 }
diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c
index 9aa0fccd5d43..399669f7eba8 100644
--- a/kernel/locking/spinlock_debug.c
+++ b/kernel/locking/spinlock_debug.c
@@ -111,6 +111,7 @@ void do_raw_spin_lock(raw_spinlock_t *lock)
 {
 	debug_spin_lock_before(lock);
 	arch_spin_lock(&lock->raw_lock);
+	mmiowb_spin_lock();
 	debug_spin_lock_after(lock);
 }
 
@@ -118,8 +119,10 @@ int do_raw_spin_trylock(raw_spinlock_t *lock)
 {
 	int ret = arch_spin_trylock(&lock->raw_lock);
 
-	if (ret)
+	if (ret) {
+		mmiowb_spin_lock();
 		debug_spin_lock_after(lock);
+	}
 #ifndef CONFIG_SMP
 	/*
 	 * Must not happen on UP:
@@ -131,6 +134,7 @@ int do_raw_spin_trylock(raw_spinlock_t *lock)
 
 void do_raw_spin_unlock(raw_spinlock_t *lock)
 {
+	mmiowb_spin_unlock();
 	debug_spin_unlock(lock);
 	arch_spin_unlock(&lock->raw_lock);
 }