From patchwork Tue Feb 25 12:42:20 2014
X-Patchwork-Submitter: Marek Szyprowski
X-Patchwork-Id: 25291
From: Marek Szyprowski <m.szyprowski@samsung.com>
To: linux-arm-kernel@lists.infradead.org, linaro-mm-sig@lists.linaro.org,
	iommu@lists.linux-foundation.org
Cc: Russell King, Andreas Herrmann, Joerg Roedel, Will Deacon,
	Andreas Herrmann
Date: Tue, 25 Feb 2014 13:42:20 +0100
Message-id: <1393332141-17781-2-git-send-email-m.szyprowski@samsung.com>
In-reply-to: <1393332141-17781-1-git-send-email-m.szyprowski@samsung.com>
References: <1393332141-17781-1-git-send-email-m.szyprowski@samsung.com>
X-Mailer: git-send-email 1.7.9.5
Subject: [Linaro-mm-sig] [PATCH 1/2] arm: dma-mapping: Add support to extend DMA IOMMU mappings
From: Andreas Herrmann

Instead of using just one bitmap to keep track of IO virtual addresses
(handed out for IOMMU use), introduce an array of bitmaps. This allows us
to extend existing mappings when we run out of iova space in the initial
mapping.

If there is not enough space in the mapping to service an IO virtual
address allocation request, __alloc_iova() tries to extend the mapping --
by allocating another bitmap -- and makes a second allocation attempt
using the freshly allocated bitmap.

This allows ARM IOMMU drivers to start with a decent initial size when a
dma_iommu_mapping is created and still avoid running out of IO virtual
addresses for the mapping.

Signed-off-by: Andreas Herrmann
[mszyprow: removed the extensions parameter to arm_iommu_create_mapping(),
 which will be modified in the next patch anyway, and dropped some debug
 messages about extending the bitmap]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
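Note (illustration only, not part of the patch): the standalone sketch below
mimics the allocation scheme this patch introduces -- one fixed-size bitmap
per chunk of the IOVA window, with another bitmap allocated on demand once
every existing one is full, up to an extension limit. All names here
(iova_pool, pool_alloc, BITS_PER_MAP, ...) are invented for the example and
do not exist in the kernel sources.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>

    #define BITS_PER_MAP 64                 /* pages tracked per bitmap */
    #define PAGE_SHIFT   12
    #define CHUNK_SIZE   ((uint64_t)BITS_PER_MAP << PAGE_SHIFT)

    struct iova_pool {
        uint64_t base;                      /* start of the IOVA window */
        unsigned int nr_bitmaps;            /* bitmaps currently allocated */
        unsigned int extensions;            /* how many extra bitmaps are allowed */
        uint8_t *bitmaps[16];               /* one byte per page, 0 = free */
    };

    /* Allocate 'count' contiguous pages; grow the pool when all bitmaps are full. */
    static uint64_t pool_alloc(struct iova_pool *p, unsigned int count)
    {
        for (unsigned int i = 0; ; i++) {
            if (i == p->nr_bitmaps) {
                /* Every existing bitmap is full: try to extend,
                 * mirroring extend_iommu_mapping() in the patch. */
                if (p->nr_bitmaps > p->extensions)
                    return UINT64_MAX;      /* pool exhausted */
                p->bitmaps[i] = calloc(BITS_PER_MAP, 1);
                if (!p->bitmaps[i])
                    return UINT64_MAX;
                p->nr_bitmaps++;
            }
            /* Naive first-fit search inside bitmap i. */
            for (unsigned int start = 0; start + count <= BITS_PER_MAP; start++) {
                unsigned int n;

                for (n = 0; n < count && !p->bitmaps[i][start + n]; n++)
                    ;
                if (n == count) {
                    memset(&p->bitmaps[i][start], 1, count);
                    return p->base + CHUNK_SIZE * i +
                           ((uint64_t)start << PAGE_SHIFT);
                }
            }
        }
    }

    int main(void)
    {
        struct iova_pool p = { .base = 0x80000000ULL, .extensions = 2 };

        /*
         * Each bitmap covers 64 pages and every request needs 40, so each
         * allocation lands in a fresh bitmap; after three bitmaps the
         * extension limit is hit and the fourth request fails.
         */
        for (int i = 0; i < 4; i++)
            printf("iova = %#llx\n",
                   (unsigned long long)pool_alloc(&p, 40));
        return 0;
    }

The deliberately over-simplified part is the first-fit search: the real code
uses bitmap_find_next_zero_area()/bitmap_set() on long-based bitmaps and
holds mapping->lock across the whole lookup, including the extension step.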
 arch/arm/include/asm/dma-iommu.h |   8 ++-
 arch/arm/mm/dma-mapping.c        | 123 ++++++++++++++++++++++++++++++++------
 2 files changed, 110 insertions(+), 21 deletions(-)

diff --git a/arch/arm/include/asm/dma-iommu.h b/arch/arm/include/asm/dma-iommu.h
index a8c56ac..686797c 100644
--- a/arch/arm/include/asm/dma-iommu.h
+++ b/arch/arm/include/asm/dma-iommu.h
@@ -13,8 +13,12 @@ struct dma_iommu_mapping {
         /* iommu specific data */
         struct iommu_domain     *domain;
 
-        void                    *bitmap;
-        size_t                  bits;
+        unsigned long           **bitmaps;     /* array of bitmaps */
+        unsigned int            nr_bitmaps;    /* nr of elements in array */
+        unsigned int            extensions;
+        size_t                  bitmap_size;   /* size of a single bitmap */
+        size_t                  bits;          /* per bitmap */
+        unsigned int            size;          /* per bitmap */
         unsigned int            order;
         dma_addr_t              base;
 
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index c9c6acdf..cc42bc2 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1066,6 +1066,8 @@ fs_initcall(dma_debug_do_init);
 
 /* IOMMU */
 
+static int extend_iommu_mapping(struct dma_iommu_mapping *mapping);
+
 static inline dma_addr_t __alloc_iova(struct dma_iommu_mapping *mapping,
                                       size_t size)
 {
@@ -1073,6 +1075,8 @@ static inline dma_addr_t __alloc_iova(struct dma_iommu_mapping *mapping,
         unsigned int align = 0;
         unsigned int count, start;
         unsigned long flags;
+        dma_addr_t iova;
+        int i;
 
         if (order > CONFIG_ARM_DMA_IOMMU_ALIGNMENT)
                 order = CONFIG_ARM_DMA_IOMMU_ALIGNMENT;
@@ -1084,30 +1088,78 @@ static inline dma_addr_t __alloc_iova(struct dma_iommu_mapping *mapping,
                 align = (1 << (order - mapping->order)) - 1;
 
         spin_lock_irqsave(&mapping->lock, flags);
-        start = bitmap_find_next_zero_area(mapping->bitmap, mapping->bits, 0,
-                                           count, align);
-        if (start > mapping->bits) {
-                spin_unlock_irqrestore(&mapping->lock, flags);
-                return DMA_ERROR_CODE;
+        for (i = 0; i < mapping->nr_bitmaps; i++) {
+                start = bitmap_find_next_zero_area(mapping->bitmaps[i],
+                                mapping->bits, 0, count, align);
+
+                if (start > mapping->bits)
+                        continue;
+
+                bitmap_set(mapping->bitmaps[i], start, count);
+                break;
         }
 
-        bitmap_set(mapping->bitmap, start, count);
+        /*
+         * No unused range found. Try to extend the existing mapping
+         * and perform a second attempt to reserve an IO virtual
+         * address range of size bytes.
+         */
+        if (i == mapping->nr_bitmaps) {
+                if (extend_iommu_mapping(mapping)) {
+                        spin_unlock_irqrestore(&mapping->lock, flags);
+                        return DMA_ERROR_CODE;
+                }
+
+                start = bitmap_find_next_zero_area(mapping->bitmaps[i],
+                                mapping->bits, 0, count, align);
+
+                if (start > mapping->bits) {
+                        spin_unlock_irqrestore(&mapping->lock, flags);
+                        return DMA_ERROR_CODE;
+                }
+
+                bitmap_set(mapping->bitmaps[i], start, count);
+        }
         spin_unlock_irqrestore(&mapping->lock, flags);
 
-        return mapping->base + (start << (mapping->order + PAGE_SHIFT));
+        iova = mapping->base + (mapping->size * i);
+        iova += start << (mapping->order + PAGE_SHIFT);
+
+        return iova;
 }
 
 static inline void __free_iova(struct dma_iommu_mapping *mapping,
                                dma_addr_t addr, size_t size)
 {
-        unsigned int start = (addr - mapping->base) >>
-                             (mapping->order + PAGE_SHIFT);
-        unsigned int count = ((size >> PAGE_SHIFT) +
-                              (1 << mapping->order) - 1) >> mapping->order;
+        unsigned int start, count;
         unsigned long flags;
+        dma_addr_t bitmap_base;
+        u32 bitmap_index;
+
+        if (!size)
+                return;
+
+        bitmap_index = (u32) (addr - mapping->base) / (u32) mapping->size;
+        BUG_ON(addr < mapping->base || bitmap_index > mapping->extensions);
+
+        bitmap_base = mapping->base + mapping->size * bitmap_index;
+
+        start = (addr - bitmap_base) >> (mapping->order + PAGE_SHIFT);
+
+        if (addr + size > bitmap_base + mapping->size) {
+                /*
+                 * The address range to be freed reaches into the iova
+                 * range of the next bitmap. This should not happen as
+                 * we don't allow this in __alloc_iova (at the
+                 * moment).
+                 */
+                BUG();
+        } else
+                count = ((size >> PAGE_SHIFT) +
+                        (1 << mapping->order) - 1) >> mapping->order;
 
         spin_lock_irqsave(&mapping->lock, flags);
-        bitmap_clear(mapping->bitmap, start, count);
+        bitmap_clear(mapping->bitmaps[bitmap_index], start, count);
         spin_unlock_irqrestore(&mapping->lock, flags);
 }
 
@@ -1887,8 +1939,8 @@ arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t size,
                          int order)
 {
         unsigned int count = size >> (PAGE_SHIFT + order);
-        unsigned int bitmap_size = BITS_TO_LONGS(count) * sizeof(long);
         struct dma_iommu_mapping *mapping;
+        int extensions = 0;
         int err = -ENOMEM;
 
         if (!count)
@@ -1898,23 +1950,35 @@ arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t size,
         if (!mapping)
                 goto err;
 
-        mapping->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-        if (!mapping->bitmap)
+        mapping->bitmap_size = BITS_TO_LONGS(count) * sizeof(long);
+        mapping->bitmaps = kzalloc((extensions + 1) * sizeof(unsigned long *),
+                                GFP_KERNEL);
+        if (!mapping->bitmaps)
                 goto err2;
 
+        mapping->bitmaps[0] = kzalloc(mapping->bitmap_size, GFP_KERNEL);
+        if (!mapping->bitmaps[0])
+                goto err3;
+
+        mapping->nr_bitmaps = 1;
+        mapping->extensions = extensions;
         mapping->base = base;
-        mapping->bits = BITS_PER_BYTE * bitmap_size;
+        mapping->size = size;
         mapping->order = order;
+        mapping->bits = BITS_PER_BYTE * mapping->bitmap_size;
+
         spin_lock_init(&mapping->lock);
 
         mapping->domain = iommu_domain_alloc(bus);
         if (!mapping->domain)
-                goto err3;
+                goto err4;
 
         kref_init(&mapping->kref);
         return mapping;
+err4:
+        kfree(mapping->bitmaps[0]);
 err3:
-        kfree(mapping->bitmap);
+        kfree(mapping->bitmaps);
 err2:
         kfree(mapping);
 err:
@@ -1924,14 +1988,35 @@ EXPORT_SYMBOL_GPL(arm_iommu_create_mapping);
 
 static void release_iommu_mapping(struct kref *kref)
 {
+        int i;
         struct dma_iommu_mapping *mapping =
                 container_of(kref, struct dma_iommu_mapping, kref);
 
         iommu_domain_free(mapping->domain);
-        kfree(mapping->bitmap);
+        for (i = 0; i < mapping->nr_bitmaps; i++)
+                kfree(mapping->bitmaps[i]);
+        kfree(mapping->bitmaps);
         kfree(mapping);
 }
 
+static int extend_iommu_mapping(struct dma_iommu_mapping *mapping)
+{
+        int next_bitmap;
+
+        if (mapping->nr_bitmaps > mapping->extensions)
+                return -EINVAL;
+
+        next_bitmap = mapping->nr_bitmaps;
+        mapping->bitmaps[next_bitmap] = kzalloc(mapping->bitmap_size,
+                                                GFP_ATOMIC);
+        if (!mapping->bitmaps[next_bitmap])
+                return -ENOMEM;
+
+        mapping->nr_bitmaps++;
+
+        return 0;
+}
+
 void arm_iommu_release_mapping(struct dma_iommu_mapping *mapping)
 {
         if (mapping)