From patchwork Mon Apr 4 08:07:01 2016
X-Patchwork-Submitter: Auger Eric
X-Patchwork-Id: 64956
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, eric.auger@linaro.org, robin.murphy@arm.com,
	alex.williamson@redhat.com, will.deacon@arm.com, joro@8bytes.org,
	tglx@linutronix.de, jason@lakedaemon.net, marc.zyngier@arm.com,
	christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: suravee.suthikulpanit@amd.com, patches@linaro.org,
	linux-kernel@vger.kernel.org, Manish.Jaggi@caviumnetworks.com,
	Bharat.Bhushan@freescale.com, pranav.sawargaonkar@gmail.com,
	p.fedin@samsung.com, iommu@lists.linux-foundation.org,
	Jean-Philippe.Brucker@arm.com, julien.grall@arm.com
Subject: [PATCH v6 6/7] dma-reserved-iommu: iommu_get/put_single_reserved
Date: Mon, 4 Apr 2016 08:07:01 +0000
Message-Id: <1459757222-2668-7-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1459757222-2668-1-git-send-email-eric.auger@linaro.org>
References: <1459757222-2668-1-git-send-email-eric.auger@linaro.org>

This patch introduces iommu_get/put_single_reserved (implemented below as
iommu_get_reserved_iova() and iommu_put_reserved_iova()).

iommu_get_single_reserved allocates a new reserved IOVA page and maps it
onto the physical page that contains a given physical address. The page
size used is the IOMMU page size; it is the responsibility of the system
integrator to make sure the IOMMU page size in use matches the granularity
of the MSI frame.

The function returns the IOVA that is mapped onto the provided physical
address, so the physical address passed as argument does not need to be
aligned. If a mapping already exists between the two pages, the IOVA
mapped to the PA is returned directly. Each time an IOVA is successfully
returned, a binding reference count is incremented.

iommu_put_single_reserved decrements that reference count and, when it
reaches zero, destroys the mapping and releases the IOVA.
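For illustration only (this snippet is not part of the patch; "dom" and
"doorbell_pa" are hypothetical placeholders for a caller-owned iommu_domain
with a reserved iova domain already allocated and for the physical address
of an MSI doorbell), a caller might use the API roughly as follows:

	dma_addr_t msi_iova;
	int ret;

	/*
	 * bind the 4-byte doorbell register; the returned IOVA already
	 * includes the offset of doorbell_pa within the IOMMU page
	 */
	ret = iommu_get_reserved_iova(dom, doorbell_pa, sizeof(u32),
				      IOMMU_WRITE, &msi_iova);
	if (ret)
		return ret;

	/* ... program the device/ITS with msi_iova ... */

	/* drop the binding reference when it is no longer needed */
	iommu_put_reserved_iova(dom, msi_iova);

Repeated gets on addresses within the same IOMMU page return the same IOVA
and only increment the reference count, so put must be called once per
successful get.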
Signed-off-by: Eric Auger
Signed-off-by: Ankit Jindal
Signed-off-by: Pranavkumar Sawargaonkar
Signed-off-by: Bharat Bhushan

---

v5 -> v6:
- revisit locking with spin_lock instead of mutex
- do not kref_get on 1st get
- add size parameter to the get function following Marc's request
- use the iova domain shift instead of using the smallest supported page size

v3 -> v4:
- formerly in "iommu: iommu_get/put_single_reserved" &
  "iommu/arm-smmu: implement iommu_get/put_single_reserved"
- attempted to address Marc's doubts about missing size/alignment at the
  VFIO level (user-space knows the IOMMU page size and the number of IOVA
  pages to provision)

v2 -> v3:
- remove static implementation of iommu_get_single_reserved &
  iommu_put_single_reserved when CONFIG_IOMMU_API is not set

v1 -> v2:
- previously a VFIO API, named vfio_alloc_map/unmap_free_reserved_iova
---
 drivers/iommu/dma-reserved-iommu.c | 146 +++++++++++++++++++++++++++++++++++++
 include/linux/dma-reserved-iommu.h |  28 +++++++
 2 files changed, 174 insertions(+)

--
1.9.1

diff --git a/drivers/iommu/dma-reserved-iommu.c b/drivers/iommu/dma-reserved-iommu.c
index f592118..3c759d9 100644
--- a/drivers/iommu/dma-reserved-iommu.c
+++ b/drivers/iommu/dma-reserved-iommu.c
@@ -136,3 +136,149 @@ void iommu_free_reserved_iova_domain(struct iommu_domain *domain)
 	spin_unlock_irqrestore(&domain->reserved_lock, flags);
 }
 EXPORT_SYMBOL_GPL(iommu_free_reserved_iova_domain);
+
+static void delete_reserved_binding(struct iommu_domain *domain,
+				    struct iommu_reserved_binding *b)
+{
+	struct iova_domain *iovad =
+		(struct iova_domain *)domain->reserved_iova_cookie;
+	unsigned long order = iova_shift(iovad);
+
+	iommu_unmap(domain, b->iova, b->size);
+	free_iova(iovad, b->iova >> order);
+	kfree(b);
+}
+
+int iommu_get_reserved_iova(struct iommu_domain *domain,
+			    phys_addr_t addr, size_t size, int prot,
+			    dma_addr_t *iova)
+{
+	struct iova_domain *iovad =
+		(struct iova_domain *)domain->reserved_iova_cookie;
+	unsigned long order = iova_shift(iovad);
+	unsigned long base_pfn, end_pfn, nb_iommu_pages;
+	size_t iommu_page_size = 1 << order, binding_size;
+	phys_addr_t aligned_base, offset;
+	struct iommu_reserved_binding *b, *newb;
+	unsigned long flags;
+	struct iova *p_iova;
+	bool unmap = false;
+	int ret;
+
+	base_pfn = addr >> order;
+	end_pfn = (addr + size - 1) >> order;
+	nb_iommu_pages = end_pfn - base_pfn + 1;
+	aligned_base = base_pfn << order;
+	offset = addr - aligned_base;
+	binding_size = nb_iommu_pages * iommu_page_size;
+
+	if (!iovad)
+		return -EINVAL;
+
+	spin_lock_irqsave(&domain->reserved_lock, flags);
+
+	b = find_reserved_binding(domain, aligned_base, binding_size);
+	if (b) {
+		*iova = b->iova + offset;
+		kref_get(&b->kref);
+		ret = 0;
+		goto unlock;
+	}
+
+	spin_unlock_irqrestore(&domain->reserved_lock, flags);
+
+	/*
+	 * no reserved IOVA was found for this PA, start allocating and
+	 * registering one while the spin-lock is not held.
+	 * iommu_map/unmap are not supposed to be atomic.
+	 */
+
+	p_iova = alloc_iova(iovad, nb_iommu_pages, iovad->dma_32bit_pfn, true);
+	if (!p_iova)
+		return -ENOMEM;
+
+	*iova = iova_dma_addr(iovad, p_iova);
+
+	newb = kzalloc(sizeof(*b), GFP_KERNEL);
+	if (!newb) {
+		free_iova(iovad, p_iova->pfn_lo);
+		return -ENOMEM;
+	}
+
+	ret = iommu_map(domain, *iova, aligned_base, binding_size, prot);
+	if (ret) {
+		kfree(newb);
+		free_iova(iovad, p_iova->pfn_lo);
+		return ret;
+	}
+
+	spin_lock_irqsave(&domain->reserved_lock, flags);
+
+	/* re-check the PA was not mapped behind our back while the lock was not held */
+	b = find_reserved_binding(domain, aligned_base, binding_size);
+	if (b) {
+		*iova = b->iova + offset;
+		kref_get(&b->kref);
+		ret = 0;
+		unmap = true;
+		goto unlock;
+	}
+
+	kref_init(&newb->kref);
+	newb->domain = domain;
+	newb->addr = aligned_base;
+	newb->iova = *iova;
+	newb->size = binding_size;
+
+	link_reserved_binding(domain, newb);
+
+	*iova += offset;
+	goto unlock;
+
+unlock:
+	spin_unlock_irqrestore(&domain->reserved_lock, flags);
+	if (unmap)
+		delete_reserved_binding(domain, newb);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_get_reserved_iova);
+
+void iommu_put_reserved_iova(struct iommu_domain *domain, dma_addr_t iova)
+{
+	struct iova_domain *iovad =
+		(struct iova_domain *)domain->reserved_iova_cookie;
+	unsigned long order;
+	phys_addr_t aligned_addr;
+	dma_addr_t aligned_iova, page_size, mask, offset;
+	struct iommu_reserved_binding *b;
+	unsigned long flags;
+	bool unmap = false;
+
+	order = iova_shift(iovad);
+	page_size = (uint64_t)1 << order;
+	mask = page_size - 1;
+
+	aligned_iova = iova & ~mask;
+	offset = iova - aligned_iova;
+
+	aligned_addr = iommu_iova_to_phys(domain, aligned_iova);
+
+	spin_lock_irqsave(&domain->reserved_lock, flags);
+	b = find_reserved_binding(domain, aligned_addr, page_size);
+	if (!b)
+		goto unlock;
+
+	if (atomic_sub_and_test(1, &b->kref.refcount)) {
+		unlink_reserved_binding(domain, b);
+		unmap = true;
+	}
+
+unlock:
+	spin_unlock_irqrestore(&domain->reserved_lock, flags);
+	if (unmap)
+		delete_reserved_binding(domain, b);
+}
+EXPORT_SYMBOL_GPL(iommu_put_reserved_iova);
+
+
+
diff --git a/include/linux/dma-reserved-iommu.h b/include/linux/dma-reserved-iommu.h
index 5bf863b..dedea56 100644
--- a/include/linux/dma-reserved-iommu.h
+++ b/include/linux/dma-reserved-iommu.h
@@ -40,6 +40,34 @@ int iommu_alloc_reserved_iova_domain(struct iommu_domain *domain,
  */
 void iommu_free_reserved_iova_domain(struct iommu_domain *domain);
 
+/**
+ * iommu_get_reserved_iova: allocate a contiguous set of iova pages and
+ * map them to the physical range defined by @addr and @size
+ *
+ * @domain: iommu domain handle
+ * @addr: physical address to bind
+ * @size: size of the binding
+ * @prot: mapping protection attribute
+ * @iova: returned iova
+ *
+ * Mapped physical pfns lie within [@addr >> order, (@addr + @size - 1) >> order],
+ * where order corresponds to the iova domain order.
+ * This mapping is reference counted as a whole and cannot be split.
+ */
+int iommu_get_reserved_iova(struct iommu_domain *domain,
+			    phys_addr_t addr, size_t size, int prot,
+			    dma_addr_t *iova);
+
+/**
+ * iommu_put_reserved_iova: decrement the ref count of a reserved mapping
+ *
+ * @domain: iommu domain handle
+ * @iova: reserved iova whose binding ref count is decremented
+ *
+ * If the binding ref count reaches zero, the reserved mapping is destroyed.
+ */
+void iommu_put_reserved_iova(struct iommu_domain *domain, dma_addr_t iova);
+
 #endif	/* CONFIG_IOMMU_DMA_RESERVED */
 #endif	/* __KERNEL__ */
 #endif	/* __DMA_RESERVED_IOMMU_H */