From patchwork Fri Feb 22 18:50:17 2019
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 159061
From: Will Deacon <will.deacon@arm.com>
To: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Will Deacon, "Paul E. McKenney",
    Benjamin Herrenschmidt, Michael Ellerman, Arnd Bergmann,
    Peter Zijlstra, Andrea Parri, Palmer Dabbelt, Daniel Lustig,
    David Howells, Alan Stern, Linus Torvalds,
Rozycki" , Paul Burton , Ingo Molnar , Yoshinori Sato , Rich Felker , Tony Luck Subject: [RFC PATCH 11/20] ia64: Add unconditional mmiowb() to arch_spin_unlock() Date: Fri, 22 Feb 2019 18:50:17 +0000 Message-Id: <20190222185026.10973-12-will.deacon@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20190222185026.10973-1-will.deacon@arm.com> References: <20190222185026.10973-1-will.deacon@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org The mmiowb() macro is horribly difficult to use and drivers will continue to work most of the time if they omit a call when it is required. Rather than rely on driver authors getting this right, push mmiowb() into arch_spin_unlock() for ia64. If this is deemed to be a performance issue, a subsequent optimisation could make use of ARCH_HAS_MMIOWB to elide the barrier in cases where no I/O writes were performned inside the critical section. Signed-off-by: Will Deacon --- arch/ia64/include/asm/Kbuild | 1 - arch/ia64/include/asm/io.h | 4 ---- arch/ia64/include/asm/mmiowb.h | 12 ++++++++++++ arch/ia64/include/asm/spinlock.h | 2 ++ 4 files changed, 14 insertions(+), 5 deletions(-) create mode 100644 arch/ia64/include/asm/mmiowb.h -- 2.11.0 diff --git a/arch/ia64/include/asm/Kbuild b/arch/ia64/include/asm/Kbuild index 3273d7aedfa0..43e21fe3499c 100644 --- a/arch/ia64/include/asm/Kbuild +++ b/arch/ia64/include/asm/Kbuild @@ -4,7 +4,6 @@ generic-y += exec.h generic-y += irq_work.h generic-y += mcs_spinlock.h generic-y += mm-arch-hooks.h -generic-y += mmiowb.h generic-y += preempt.h generic-y += trace_clock.h generic-y += vtime.h diff --git a/arch/ia64/include/asm/io.h b/arch/ia64/include/asm/io.h index 1e6fef69bb01..7f2371ba04a4 100644 --- a/arch/ia64/include/asm/io.h +++ b/arch/ia64/include/asm/io.h @@ -119,8 +119,6 @@ extern int valid_mmap_phys_addr_range (unsigned long pfn, size_t count); * Ensure ordering of I/O space writes. This will make sure that writes * following the barrier will arrive after all previous writes. For most * ia64 platforms, this is a simple 'mf.a' instruction. - * - * See Documentation/driver-api/device-io.rst for more information. */ static inline void ___ia64_mmiowb(void) { @@ -296,7 +294,6 @@ __outsl (unsigned long port, const void *src, unsigned long count) #define __outb platform_outb #define __outw platform_outw #define __outl platform_outl -#define __mmiowb platform_mmiowb #define inb(p) __inb(p) #define inw(p) __inw(p) @@ -310,7 +307,6 @@ __outsl (unsigned long port, const void *src, unsigned long count) #define outsb(p,s,c) __outsb(p,s,c) #define outsw(p,s,c) __outsw(p,s,c) #define outsl(p,s,c) __outsl(p,s,c) -#define mmiowb() __mmiowb() /* * The address passed to these functions are ioremap()ped already. 
diff --git a/arch/ia64/include/asm/mmiowb.h b/arch/ia64/include/asm/mmiowb.h
new file mode 100644
index 000000000000..238d56172c6f
--- /dev/null
+++ b/arch/ia64/include/asm/mmiowb.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_IA64_MMIOWB_H
+#define _ASM_IA64_MMIOWB_H
+
+#include <asm/machvec.h>
+
+#define mmiowb()	platform_mmiowb()
+
+#include <asm-generic/mmiowb.h>
+
+#endif	/* _ASM_IA64_MMIOWB_H */
diff --git a/arch/ia64/include/asm/spinlock.h b/arch/ia64/include/asm/spinlock.h
index afd0b3121b4c..5f620e66384e 100644
--- a/arch/ia64/include/asm/spinlock.h
+++ b/arch/ia64/include/asm/spinlock.h
@@ -73,6 +73,8 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 {
 	unsigned short	*p = (unsigned short *)&lock->lock + 1, tmp;
 
+	/* This could be optimised with ARCH_HAS_MMIOWB */
+	mmiowb();
 	asm volatile ("ld2.bias %0=[%1]" : "=r"(tmp) : "r"(p));
 	WRITE_ONCE(*p, (tmp + 2) & ~1);
 }
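The "This could be optimised with ARCH_HAS_MMIOWB" comment refers to only
issuing the barrier when the critical section actually performed an MMIO
write. The following is a rough sketch of one way such tracking could look,
loosely modelled on the generic mmiowb framework that this series introduces
(see the asm-generic/mmiowb.h include above); the struct layout, helper names,
and explicit state parameter are illustrative, not the code this patch adds.

#include <linux/compiler.h>
#include <linux/types.h>

struct mmiowb_state {
	u16	nesting_count;		/* spinlock nesting depth on this CPU */
	u16	mmiowb_pending;		/* MMIO write seen since the last unlock? */
};

/* Would be called from the I/O write accessors (writel() and friends). */
static inline void mmiowb_set_pending(struct mmiowb_state *ms)
{
	ms->mmiowb_pending = ms->nesting_count;
}

/* Would be called from arch_spin_lock(). */
static inline void mmiowb_spin_lock(struct mmiowb_state *ms)
{
	ms->nesting_count++;
}

/* Would be called from arch_spin_unlock(): barrier only if an MMIO write happened. */
static inline void mmiowb_spin_unlock(struct mmiowb_state *ms)
{
	if (unlikely(ms->mmiowb_pending)) {
		ms->mmiowb_pending = 0;
		mmiowb();	/* on ia64, platform_mmiowb(), typically 'mf.a' */
	}
	ms->nesting_count--;
}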