From patchwork Wed Nov 19 11:41:50 2014
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 41149
Date: Wed, 19 Nov 2014 11:41:50 +0000
From: Will Deacon
To: Marek Szyprowski
Subject: Re: [RFC PATCH v4 0/8] Introduce automatic DMA configuration for IOMMU masters
Message-ID: <20141119114150.GD15985@arm.com>
References: <1415991397-9618-1-git-send-email-will.deacon@arm.com>
 <546C7D36.7030400@samsung.com>
In-Reply-To: <546C7D36.7030400@samsung.com>
Cc: jroedel@suse.de, arnd@arndb.de, iommu@lists.linux-foundation.org,
 thierry.reding@gmail.com, laurent.pinchart@ideasonboard.com,
 Varun.Sethi@freescale.com, dwmw2@infradead.org,
 linux-arm-kernel@lists.infradead.org, hdoyu@nvidia.com
Hi Marek,

On Wed, Nov 19, 2014 at 11:21:26AM +0000, Marek Szyprowski wrote:
> On 2014-11-14 19:56, Will Deacon wrote:
> > Hello everybody,
> >
> > Here is the fourth iteration of the RFC I've previously posted here:
> >
> >   RFCv1: http://lists.infradead.org/pipermail/linux-arm-kernel/2014-August/283023.html
> >   RFCv2: http://lists.infradead.org/pipermail/linux-arm-kernel/2014-September/283752.html
> >   RFCv3: http://lists.infradead.org/pipermail/linux-arm-kernel/2014-September/287031.html
> >
> > Changes since RFCv3 include:
> >
> >   - Drastic simplification of the data structures, so that we no longer
> >     pass around lists of domains. Instead, dma-mapping is expected to
> >     allocate the domain (Joerg talked about adding a get_default_domain
> >     operation to iommu_ops).
> >
> >   - iommu_ops is used to hold the per-instance IOMMU data
> >
> >   - Configuration of DMA segments added to of_dma_configure
> >
> > All feedback welcome.
>
> I've rebased my Exynos SYSMMU patches on top of this patchset and it
> works fine. You can find them in the "[PATCH v3 00/19] Exynos SYSMMU
> (IOMMU) integration with DT and DMA-mapping subsystem" thread.

I just saw that and it looks great, thanks! FWIW, I'll take the first 3
patches you have into my series in some shape or another.

> You can add to all your patches:
> Acked-by: Marek Szyprowski

Cheers.

> I'm also interested in adding the get_default_domain() callback, but I
> assume that this can be done once the basic patchset gets merged. Do you
> plan to work on it, or do you want me to implement it?
If Joerg isn't working on it already (I don't think he is), then please
do have a go if you have time. You'll probably want to avoid adding
devices with addressing restrictions (i.e. non-zero dma_pfn_offset,
weird DMA masks) to the default domain, otherwise you'll run into
issues initialising the iova allocator.

I had a go at getting ARM dma-mapping to use a hypothetical
get_default_domain function, so I've included the diff I ended up with
below, in case it's at all useful.

Will

--->8

diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index f3c0d953f6a2..5071553bf6b8 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -121,14 +121,9 @@ static inline unsigned long dma_max_pfn(struct device *dev)
 }
 #define dma_max_pfn(dev) dma_max_pfn(dev)
 
-static inline void arch_setup_dma_ops(struct device *dev, u64 dma_base,
-				      u64 size, struct iommu_ops *iommu,
-				      bool coherent)
-{
-	if (coherent)
-		set_dma_ops(dev, &arm_coherent_dma_ops);
-}
 #define arch_setup_dma_ops arch_setup_dma_ops
+extern void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
+			       struct iommu_ops *iommu, bool coherent);
 
 static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
 {
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index c245d903927f..da2c2667bbb1 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1849,7 +1849,8 @@ struct dma_map_ops iommu_coherent_ops = {
  * arm_iommu_attach_device function.
  */
 struct dma_iommu_mapping *
-arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t size)
+__arm_iommu_create_mapping(struct iommu_domain *domain, dma_addr_t base,
+			   size_t size)
 {
 	unsigned int bits = size >> PAGE_SHIFT;
 	unsigned int bitmap_size = BITS_TO_LONGS(bits) * sizeof(long);
@@ -1883,17 +1884,12 @@ arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t size)
 	mapping->extensions = extensions;
 	mapping->base = base;
 	mapping->bits = BITS_PER_BYTE * bitmap_size;
+	mapping->domain = domain;
 	spin_lock_init(&mapping->lock);
 
-	mapping->domain = iommu_domain_alloc(bus);
-	if (!mapping->domain)
-		goto err4;
-
 	kref_init(&mapping->kref);
 	return mapping;
-err4:
-	kfree(mapping->bitmaps[0]);
 err3:
 	kfree(mapping->bitmaps);
 err2:
@@ -1901,6 +1897,23 @@ err2:
 err:
 	return ERR_PTR(err);
 }
+
+struct dma_iommu_mapping *
+arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t size)
+{
+	struct dma_iommu_mapping *mapping;
+	struct iommu_domain *domain;
+
+	domain = iommu_domain_alloc(bus);
+	if (!domain)
+		return ERR_PTR(-ENOMEM);
+
+	mapping = __arm_iommu_create_mapping(domain, base, size);
+	if (IS_ERR(mapping))
+		iommu_domain_free(domain);
+
+	return mapping;
+}
 EXPORT_SYMBOL_GPL(arm_iommu_create_mapping);
 
 static void release_iommu_mapping(struct kref *kref)
@@ -1948,9 +1961,8 @@ EXPORT_SYMBOL_GPL(arm_iommu_release_mapping);
  *	arm_iommu_create_mapping)
  *
  * Attaches specified io address space mapping to the provided device,
- * this replaces the dma operations (dma_map_ops pointer) with the
- * IOMMU aware version. More than one client might be attached to
- * the same io address space mapping.
+ * More than one client might be attached to the same io address space
+ * mapping.
  */
 int arm_iommu_attach_device(struct device *dev,
 			    struct dma_iommu_mapping *mapping)
@@ -1963,7 +1975,6 @@ int arm_iommu_attach_device(struct device *dev,
 
 	kref_get(&mapping->kref);
 	dev->archdata.mapping = mapping;
-	set_dma_ops(dev, &iommu_ops);
 
 	pr_debug("Attached IOMMU controller to %s device.\n", dev_name(dev));
 	return 0;
@@ -1975,7 +1986,6 @@ EXPORT_SYMBOL_GPL(arm_iommu_attach_device);
  * @dev: valid struct device pointer
  *
  * Detaches the provided device from a previously attached map.
- * This voids the dma operations (dma_map_ops pointer)
  */
 void arm_iommu_detach_device(struct device *dev)
 {
@@ -1990,10 +2000,141 @@ void arm_iommu_detach_device(struct device *dev)
 	iommu_detach_device(mapping->domain, dev);
 	kref_put(&mapping->kref, release_iommu_mapping);
 	dev->archdata.mapping = NULL;
-	set_dma_ops(dev, NULL);
 
 	pr_debug("Detached IOMMU controller from %s device.\n", dev_name(dev));
 }
 EXPORT_SYMBOL_GPL(arm_iommu_detach_device);
 
-#endif
+static struct dma_map_ops *arm_get_iommu_dma_map_ops(bool coherent)
+{
+	return coherent ?
+		&iommu_coherent_ops : &iommu_ops;
+}
+
+struct dma_iommu_mapping_entry {
+	struct list_head	list;
+	struct dma_iommu_mapping *mapping;
+	struct iommu_domain	*domain;
+	u64			dma_base;
+	u64			size;
+	struct kref		kref;
+};
+
+static DEFINE_SPINLOCK(dma_iommu_mapping_lock);
+static LIST_HEAD(dma_iommu_mapping_table);
+
+static void __remove_iommu_mapping_entry(struct kref *kref)
+{
+	struct dma_iommu_mapping_entry *entry;
+
+	entry = container_of(kref, struct dma_iommu_mapping_entry, kref);
+	list_del(&entry->list);
+}
+
+static bool arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
+				    struct iommu_ops *iommu)
+{
+	struct iommu_domain *domain;
+	struct dma_iommu_mapping_entry *entry = NULL;
+
+	if (!iommu->get_default_domain)
+		return false;
+
+	domain = iommu->get_default_domain(dev);
+	if (!domain)
+		return false;
+
+	spin_lock(&dma_iommu_mapping_lock);
+
+	list_for_each_entry(entry, &dma_iommu_mapping_table, list) {
+		if (entry->domain == domain)
+			break;
+	}
+
+	/* Load entry->mapping after entry -- not strictly necessary for ARM */
+	smp_read_barrier_depends();
+
+	if (!entry) {
+		struct dma_iommu_mapping *mapping;
+
+		entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
+		if (!entry)
+			goto err_unlock;
+
+		entry->domain	= domain;
+		entry->dma_base	= dma_base;
+		entry->size	= size;
+		kref_init(&entry->kref);
+		list_add(&entry->list, &dma_iommu_mapping_table);
+		spin_unlock(&dma_iommu_mapping_lock);
+
+		mapping = __arm_iommu_create_mapping(domain, dma_base, size);
+		if (IS_ERR(mapping))
+			return false;
+
+		smp_wmb();
+		entry->mapping = mapping;
+	} else if (entry->mapping) {
+		if (entry->dma_base > dma_base || entry->size > size)
+			goto err_unlock;
+
+		kref_get(&entry->kref);
+		spin_unlock(&dma_iommu_mapping_lock);
+	} else {
+		/* Racing on the same IOMMU */
+		goto err_unlock;
+	}
+
+	if (arm_iommu_attach_device(dev, entry->mapping)) {
+		int entry_dead;
+
+		pr_warn("Failed to attach device %s to IOMMU mapping\n",
+			dev_name(dev));
+
+		spin_lock(&dma_iommu_mapping_lock);
+		entry_dead = kref_put(&entry->kref,
+				      __remove_iommu_mapping_entry);
+		spin_unlock(&dma_iommu_mapping_lock);
+
+		if (entry_dead) {
+			entry->mapping->domain = NULL;
+			arm_iommu_release_mapping(entry->mapping);
+		}
+
+		return false;
+	}
+
+	return true;
+
+err_unlock:
+	spin_unlock(&dma_iommu_mapping_lock);
+	return false;
+}
+
+#else
+
+static bool arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
+				    struct iommu_ops *iommu)
+{
+	return false;
+}
+
+#define arm_get_iommu_dma_map_ops arm_get_dma_map_ops
+
+#endif /* CONFIG_ARM_DMA_USE_IOMMU */
+
+static struct dma_map_ops *arm_get_dma_map_ops(bool coherent)
+{
+	return coherent ? &arm_coherent_dma_ops : &arm_dma_ops;
+}
+
+void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
+			struct iommu_ops *iommu, bool coherent)
+{
+	struct dma_map_ops *dma_ops;
+
+	if (arm_setup_iommu_dma_ops(dev, dma_base, size, iommu))
+		dma_ops = arm_get_iommu_dma_map_ops(coherent);
+	else
+		dma_ops = arm_get_dma_map_ops(coherent);
+
+	set_dma_ops(dev, dma_ops);
+}