From patchwork Fri Apr 5 13:59:16 2019
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 161845
From: Will Deacon
To: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Will Deacon, "Paul E. McKenney",
    Benjamin Herrenschmidt, Michael Ellerman, Arnd Bergmann, Peter Zijlstra,
    Andrea Parri, Palmer Dabbelt, Daniel Lustig, David Howells, Alan Stern,
    Linus Torvalds, "Maciej W. Rozycki",
Rozycki" , Paul Burton , Ingo Molnar , Yoshinori Sato , Rich Felker , Tony Luck , Mikulas Patocka , Akira Yokosawa , Luis Chamberlain , Nicholas Piggin Subject: [PATCH v2 01/21] docs/memory-barriers.txt: Rewrite "KERNEL I/O BARRIER EFFECTS" section Date: Fri, 5 Apr 2019 14:59:16 +0100 Message-Id: <20190405135936.7266-2-will.deacon@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20190405135936.7266-1-will.deacon@arm.com> References: <20190405135936.7266-1-will.deacon@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org The "KERNEL I/O BARRIER EFFECTS" section of memory-barriers.txt is vague, x86-centric, out-of-date, incomplete and demonstrably incorrect in places. This is largely because I/O ordering is a horrible can of worms, but also because the document has stagnated as our understanding has evolved. Attempt to address some of that, by rewriting the section based on recent(-ish) discussions with Arnd, BenH and others. Maybe one day we'll find a way to formalise this stuff, but for now let's at least try to make the English easier to understand. Cc: "Paul E. McKenney" Cc: Benjamin Herrenschmidt Cc: Michael Ellerman Cc: Arnd Bergmann Cc: Peter Zijlstra Cc: Andrea Parri Cc: Palmer Dabbelt Cc: Daniel Lustig Cc: David Howells Cc: Alan Stern Cc: Linus Torvalds Cc: "Maciej W. Rozycki" Cc: Mikulas Patocka Signed-off-by: Will Deacon --- Documentation/memory-barriers.txt | 115 +++++++++++++++++++++++--------------- 1 file changed, 70 insertions(+), 45 deletions(-) -- 2.11.0 diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt index 1c22b21ae922..5eb6f4c6a133 100644 --- a/Documentation/memory-barriers.txt +++ b/Documentation/memory-barriers.txt @@ -2599,72 +2599,97 @@ likely, then interrupt-disabling locks should be used to guarantee ordering. KERNEL I/O BARRIER EFFECTS ========================== -When accessing I/O memory, drivers should use the appropriate accessor -functions: +Interfacing with peripherals via I/O accesses is deeply architecture and device +specific. Therefore, drivers which are inherently non-portable may rely on +specific behaviours of their target systems in order to achieve synchronization +in the most lightweight manner possible. For drivers intending to be portable +between multiple architectures and bus implementations, the kernel offers a +series of accessor functions that provide various degrees of ordering +guarantees: - (*) inX(), outX(): + (*) readX(), writeX(): - These are intended to talk to I/O space rather than memory space, but - that's primarily a CPU-specific concept. The i386 and x86_64 processors - do indeed have special I/O space access cycles and instructions, but many - CPUs don't have such a concept. + The readX() and writeX() MMIO accessors take a pointer to the peripheral + being accessed as an __iomem * parameter. For pointers mapped with the + default I/O attributes (e.g. those returned by ioremap()), then the + ordering guarantees are as follows: - The PCI bus, amongst others, defines an I/O space concept which - on such - CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O - space. However, it may also be mapped as a virtual I/O space in the CPU's - memory map, particularly on those CPUs that don't support alternate I/O - spaces. + 1. All readX() and writeX() accesses to the same peripheral are ordered + with respect to each other. 
+        writes by the CPU to a particular device will arrive in program order.

-     Accesses to this space may be fully synchronous (as on i386), but
-     intermediary bridges (such as the PCI host bridge) may not fully honour
-     that.
+     2. A writeX() by the CPU to the peripheral will first wait for the
+        completion of all prior CPU writes to memory. For example, this ensures
+        that writes by the CPU to an outbound DMA buffer allocated by
+        dma_alloc_coherent() will be visible to a DMA engine when the CPU writes
+        to its MMIO control register to trigger the transfer.

-     They are guaranteed to be fully ordered with respect to each other.
+     3. A readX() by the CPU from the peripheral will complete before any
+        subsequent CPU reads from memory can begin. For example, this ensures
+        that reads by the CPU from an incoming DMA buffer allocated by
+        dma_alloc_coherent() will not see stale data after reading from the DMA
+        engine's MMIO status register to establish that the DMA transfer has
+        completed.

-     They are not guaranteed to be fully ordered with respect to other types of
-     memory and I/O operation.
+     4. A readX() by the CPU from the peripheral will complete before any
+        subsequent delay() loop can begin execution. For example, this ensures
+        that two MMIO register writes by the CPU to a peripheral will arrive at
+        least 1us apart if the first write is immediately read back with readX()
+        and udelay(1) is called prior to the second writeX().

- (*) readX(), writeX():
+     __iomem pointers obtained with non-default attributes (e.g. those returned
+     by ioremap_wc()) are unlikely to provide many of these guarantees.

-     Whether these are guaranteed to be fully ordered and uncombined with
-     respect to each other on the issuing CPU depends on the characteristics
-     defined for the memory window through which they're accessing. On later
-     i386 architecture machines, for example, this is controlled by way of the
-     MTRR registers.
+ (*) readX_relaxed(), writeX_relaxed():

-     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
-     provided they're not accessing a prefetchable device.
+     These are similar to readX() and writeX(), but provide weaker memory
+     ordering guarantees. Specifically, they do not guarantee ordering with
+     respect to normal memory accesses or delay() loops (i.e bullets 2-4 above)
+     but they are still guaranteed to be ordered with respect to other accesses
+     to the same peripheral when operating on __iomem pointers mapped with the
+     default I/O attributes.

-     However, intermediary hardware (such as a PCI bridge) may indulge in
-     deferral if it so wishes; to flush a store, a load from the same location
-     is preferred[*], but a load from the same device or from configuration
-     space should suffice for PCI.
+ (*) readsX(), writesX():

-     [*] NOTE! attempting to load from the same location as was written to may
-         cause a malfunction - consider the 16550 Rx/Tx serial registers for
-         example.
+     The readsX() and writesX() MMIO accessors are designed for accessing
+     register-based, memory-mapped FIFOs residing on peripherals that are not
+     capable of performing DMA. Consequently, they provide only the ordering
+     guarantees of readX_relaxed() and writeX_relaxed(), as documented above.

-     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
-     force stores to be ordered.
+ (*) inX(), outX():

-     Please refer to the PCI specification for more information on interactions
-     between PCI transactions.
+     The inX() and outX() accessors are intended to access legacy port-mapped
+     I/O peripherals, which may require special instructions on some
+     architectures (notably x86). The port number of the peripheral being
+     accessed is passed as an argument.

- (*) readX_relaxed(), writeX_relaxed()
+     Since many CPU architectures ultimately access these peripherals via an
+     internal virtual memory mapping, the portable ordering guarantees provided
+     by inX() and outX() are the same as those provided by readX() and writeX()
+     respectively when accessing a mapping with the default I/O attributes.

-     These are similar to readX() and writeX(), but provide weaker memory
-     ordering guarantees. Specifically, they do not guarantee ordering with
-     respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee
-     ordering with respect to LOCK or UNLOCK operations. If the latter is
-     required, an mmiowb() barrier can be used. Note that relaxed accesses to
-     the same peripheral are guaranteed to be ordered with respect to each
-     other.
+     Device drivers may expect outX() to emit a non-posted write transaction
+     that waits for a completion response from the I/O peripheral before
+     returning. This is not guaranteed by all architectures and is therefore
+     not part of the portable ordering semantics.
+
+ (*) insX(), outsX():
+
+     As above, the insX() and outsX() accessors provide the same ordering
+     guarantees as readsX() and writesX() respectively when accessing a mapping
+     with the default I/O attributes.

  (*) ioreadX(), iowriteX()

      These will perform appropriately for the type of access they're actually
      doing, be it inX()/outX() or readX()/writeX().

+All of these accessors assume that the underlying peripheral is little-endian,
+and will therefore perform byte-swapping operations on big-endian architectures.
+
+Composing I/O ordering barriers with SMP ordering barriers and LOCK/UNLOCK
+operations is a dangerous sport which may require the use of mmiowb(). See the
+subsection "Acquires vs I/O accesses" for more information.

 ========================================
 ASSUMED MINIMUM EXECUTION ORDERING MODEL
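
As a concrete illustration of guarantees 2 and 3 in the rewritten readX()/writeX()
text above, here is a minimal driver-style sketch. It is not part of the patch:
the "foo" device, its register offsets and both helpers are invented for the
example; the only real kernel interfaces assumed are readl()/writel() on an
ioremap()'d mapping and a buffer obtained from dma_alloc_coherent().

#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/dma-mapping.h>

#define FOO_REG_DMA_ADDR        0x00    /* hypothetical register offsets */
#define FOO_REG_CTRL            0x04
#define FOO_REG_STATUS          0x08
#define FOO_CTRL_START          0x1
#define FOO_STATUS_DONE         0x1

/*
 * Guarantee 2: the plain CPU store to the coherent DMA descriptor is visible
 * to the device before the subsequent writel()s reach its MMIO registers, so
 * no explicit wmb() is needed in between.
 */
static void foo_start_rx(void __iomem *regs, u32 *desc, dma_addr_t desc_dma)
{
        desc[0] = 0;                                    /* normal memory write */
        writel(lower_32_bits(desc_dma), regs + FOO_REG_DMA_ADDR);
        writel(FOO_CTRL_START, regs + FOO_REG_CTRL);    /* triggers the transfer */
}

/*
 * Guarantee 3: the readl() of the status register completes before the
 * subsequent read from the DMA descriptor begins, so a "done" status is never
 * paired with stale descriptor contents.
 */
static bool foo_rx_complete(void __iomem *regs, u32 *desc)
{
        if (!(readl(regs + FOO_REG_STATUS) & FOO_STATUS_DONE))
                return false;

        return desc[0] != 0;                            /* normal memory read */
}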
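
A similar hypothetical sketch of guarantee 4, where a read-back followed by
udelay() spaces out two MMIO writes; again, the "foo" reset register and the
1us requirement are invented, while readl(), writel() and udelay() are the
real interfaces being demonstrated.

#include <linux/io.h>
#include <linux/delay.h>

#define FOO_REG_RESET   0x0c    /* hypothetical register offset */

/*
 * The (invented) device needs at least 1us between the two reset writes.
 * Guarantee 4: udelay(1) cannot begin until the readl() completes, and the
 * readl() cannot complete until the first writel() has reached the device,
 * so the two writes arrive at least 1us apart at the peripheral.
 */
static void foo_pulse_reset(void __iomem *regs)
{
        writel(1, regs + FOO_REG_RESET);
        readl(regs + FOO_REG_RESET);    /* push the first write out to the device */
        udelay(1);
        writel(0, regs + FOO_REG_RESET);
}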