From patchwork Fri Nov 14 18:56:37 2014
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 40864
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org, iommu@lists.linux-foundation.org
Cc: jroedel@suse.de, arnd@arndb.de, Will Deacon <will.deacon@arm.com>,
	thierry.reding@gmail.com, laurent.pinchart@ideasonboard.com,
	Varun.Sethi@freescale.com, m.szyprowski@samsung.com,
	dwmw2@infradead.org, hdoyu@nvidia.com
Subject: [RFC PATCH v4 8/8] arm: dma-mapping: plumb our iommu mapping ops into arch_setup_dma_ops
Date: Fri, 14 Nov 2014 18:56:37 +0000
Message-Id: <1415991397-9618-9-git-send-email-will.deacon@arm.com>
In-Reply-To: <1415991397-9618-1-git-send-email-will.deacon@arm.com>
References: <1415991397-9618-1-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.1

This patch plumbs the existing ARM IOMMU DMA infrastructure (which isn't
actually called outside of a few drivers) into arch_setup_dma_ops, so that
we can use IOMMUs for DMA transfers in a more generic fashion.

Since this significantly complicates the arch_setup_dma_ops function, it is
moved out of line into dma-mapping.c.

If CONFIG_ARM_DMA_USE_IOMMU is not set, the iommu parameter is ignored and
the normal ops are used instead.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/dma-mapping.h | 12 +++---
 arch/arm/mm/dma-mapping.c          | 80 ++++++++++++++++++++++++++++++++++----
 2 files changed, 78 insertions(+), 14 deletions(-)

diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index f3c0d953f6a2..9410b7e548fc 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -121,14 +121,12 @@ static inline unsigned long dma_max_pfn(struct device *dev)
 }
 #define dma_max_pfn(dev) dma_max_pfn(dev)
 
-static inline void arch_setup_dma_ops(struct device *dev, u64 dma_base,
-				      u64 size, struct iommu_ops *iommu,
-				      bool coherent)
-{
-	if (coherent)
-		set_dma_ops(dev, &arm_coherent_dma_ops);
-}
 #define arch_setup_dma_ops arch_setup_dma_ops
+extern void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
+			       struct iommu_ops *iommu, bool coherent);
+
+#define arch_teardown_dma_ops arch_teardown_dma_ops
+extern void arch_teardown_dma_ops(struct device *dev);
 
 static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
 {
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index e8907117861e..f15eae5e0513 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1947,9 +1947,8 @@ EXPORT_SYMBOL_GPL(arm_iommu_release_mapping);
  *	arm_iommu_create_mapping)
  *
  * Attaches specified io address space mapping to the provided device,
- * this replaces the dma operations (dma_map_ops pointer) with the
- * IOMMU aware version. More than one client might be attached to
- * the same io address space mapping.
+ * More than one client might be attached to the same io address space
+ * mapping.
  */
 int arm_iommu_attach_device(struct device *dev,
 			    struct dma_iommu_mapping *mapping)
@@ -1962,7 +1961,6 @@ int arm_iommu_attach_device(struct device *dev,
 
 	kref_get(&mapping->kref);
 	dev->archdata.mapping = mapping;
-	set_dma_ops(dev, &iommu_ops);
 
 	pr_debug("Attached IOMMU controller to %s device.\n", dev_name(dev));
 	return 0;
@@ -1974,7 +1972,6 @@ EXPORT_SYMBOL_GPL(arm_iommu_attach_device);
  * @dev: valid struct device pointer
  *
  * Detaches the provided device from a previously attached map.
- * This voids the dma operations (dma_map_ops pointer)
  */
 void arm_iommu_detach_device(struct device *dev)
 {
@@ -1989,10 +1986,79 @@ void arm_iommu_detach_device(struct device *dev)
 	iommu_detach_device(mapping->domain, dev);
 	kref_put(&mapping->kref, release_iommu_mapping);
 	dev->archdata.mapping = NULL;
-	set_dma_ops(dev, NULL);
 
 	pr_debug("Detached IOMMU controller from %s device.\n", dev_name(dev));
 }
 EXPORT_SYMBOL_GPL(arm_iommu_detach_device);
 
-#endif
+static struct dma_map_ops *arm_get_iommu_dma_map_ops(bool coherent)
+{
+	return coherent ? &iommu_coherent_ops : &iommu_ops;
+}
+
+static bool arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
+				    struct iommu_ops *iommu)
+{
+	struct dma_iommu_mapping *mapping;
+
+	mapping = arm_iommu_create_mapping(dev->bus, dma_base, size);
+	if (IS_ERR(mapping)) {
+		pr_warn("Failed to create %llu-byte IOMMU mapping for device %s\n",
+			size, dev_name(dev));
+		return false;
+	}
+
+	if (arm_iommu_attach_device(dev, mapping)) {
+		pr_warn("Failed to attach device %s to IOMMU mapping\n",
+			dev_name(dev));
+		arm_iommu_release_mapping(mapping);
+		return false;
+	}
+
+	return true;
+}
+
+static void arm_teardown_iommu_dma_ops(struct device *dev)
+{
+	struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+
+	arm_iommu_detach_device(dev);
+	arm_iommu_release_mapping(mapping);
+}
+
+#else
+
+static bool arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
+				    struct iommu_ops *iommu)
+{
+	return false;
+}
+
+static void arm_teardown_iommu_dma_ops(struct device *dev) { }
+
+#define arm_get_iommu_dma_map_ops arm_get_dma_map_ops
+
+#endif	/* CONFIG_ARM_DMA_USE_IOMMU */
+
+static struct dma_map_ops *arm_get_dma_map_ops(bool coherent)
+{
+	return coherent ? &arm_coherent_dma_ops : &arm_dma_ops;
+}
+
+void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
+			struct iommu_ops *iommu, bool coherent)
+{
+	struct dma_map_ops *dma_ops;
+
+	if (arm_setup_iommu_dma_ops(dev, dma_base, size, iommu))
+		dma_ops = arm_get_iommu_dma_map_ops(coherent);
+	else
+		dma_ops = arm_get_dma_map_ops(coherent);
+
+	set_dma_ops(dev, dma_ops);
+}
+
+void arch_teardown_dma_ops(struct device *dev)
+{
+	arm_teardown_iommu_dma_ops(dev);
+}
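
For readers new to these entry points, here is a minimal usage sketch (not
part of the patch, purely illustrative): it assumes a bus/firmware probe path
along the lines of the of_dma_configure() plumbing earlier in this series
ends up calling the new hooks, and the example_* helpers, DMA window base and
size below are made up for illustration.

/*
 * Illustrative only: how a bus/firmware layer might drive the new hooks.
 * The helper names, the 1GiB window at bus address 0 and the non-coherent
 * assumption are examples, not part of this patch.
 */
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/iommu.h>
#include <linux/sizes.h>

static void example_setup_device_dma(struct device *dev,
				     struct iommu_ops *iommu)
{
	/*
	 * With CONFIG_ARM_DMA_USE_IOMMU=y and a usable IOMMU, this creates
	 * and attaches a per-device mapping and installs iommu_ops (or
	 * iommu_coherent_ops); otherwise it falls back to arm_dma_ops or
	 * arm_coherent_dma_ops.
	 */
	arch_setup_dma_ops(dev, 0, SZ_1G, iommu, false);
}

static void example_teardown_device_dma(struct device *dev)
{
	/* Detach from and release the IOMMU mapping, if one was set up. */
	arch_teardown_dma_ops(dev);
}

In the non-IOMMU configuration the same calls simply select the existing
arm_dma_ops/arm_coherent_dma_ops, so callers do not need to special-case
CONFIG_ARM_DMA_USE_IOMMU.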