From patchwork Tue Jan 26 13:12:44 2016
X-Patchwork-Submitter: Auger Eric
X-Patchwork-Id: 60456
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, eric.auger@linaro.org, alex.williamson@redhat.com,
	will.deacon@arm.com, christoffer.dall@linaro.org, marc.zyngier@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org
Cc: Bharat.Bhushan@freescale.com, pranav.sawargaonkar@gmail.com,
	p.fedin@samsung.com, suravee.suthikulpanit@amd.com,
	linux-kernel@vger.kernel.org, patches@linaro.org,
	iommu@lists.linux-foundation.org
Subject: [PATCH 06/10] vfio: introduce vfio_group_alloc_map_/unmap_free_reserved_iova
Date: Tue, 26 Jan 2016 13:12:44 +0000
Message-Id: <1453813968-2024-7-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1453813968-2024-1-git-send-email-eric.auger@linaro.org>
References: <1453813968-2024-1-git-send-email-eric.auger@linaro.org>

This patch introduces vfio_group_alloc_map_/unmap_free_reserved_iova
and implements the corresponding vfio_iommu_type1 operations.

alloc_map allocates a new reserved IOVA page and maps it onto the
physical page that contains a given PA. It returns the IOVA that is
mapped onto the provided PA. If a mapping already exists between the
two pages, the IOVA corresponding to the PA is returned directly.

unmap_free drops a reference to the reserved binding; when the last
reference is released, the page is unmapped and the IOVA is freed.

Signed-off-by: Eric Auger
Signed-off-by: Ankit Jindal
Signed-off-by: Pranavkumar Sawargaonkar
Signed-off-by: Bharat Bhushan
---
 drivers/vfio/vfio.c             |  39 ++++++++++
 drivers/vfio/vfio_iommu_type1.c | 163 ++++++++++++++++++++++++++++++++++++++--
 include/linux/vfio.h            |  34 ++++++++-
 3 files changed, 228 insertions(+), 8 deletions(-)

-- 
1.9.1

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 82f25cc..3d9de00 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -268,6 +268,45 @@ void vfio_unregister_iommu_driver(const struct vfio_iommu_driver_ops *ops)
 }
 EXPORT_SYMBOL_GPL(vfio_unregister_iommu_driver);
 
+int vfio_group_alloc_map_reserved_iova(struct vfio_group *group,
+				       phys_addr_t addr, int prot,
+				       dma_addr_t *iova)
+{
+	struct vfio_container *container = group->container;
+	const struct vfio_iommu_driver_ops *ops = container->iommu_driver->ops;
+	int ret;
+
+	if (!ops->alloc_map_reserved_iova)
+		return -EINVAL;
+
+	down_read(&container->group_lock);
+	ret = ops->alloc_map_reserved_iova(container->iommu_data,
+					   group->iommu_group,
+					   addr, prot, iova);
+	up_read(&container->group_lock);
+	return ret;
+
+}
+EXPORT_SYMBOL_GPL(vfio_group_alloc_map_reserved_iova);
+
+int vfio_group_unmap_free_reserved_iova(struct vfio_group *group,
+					dma_addr_t iova)
+{
+	struct vfio_container *container = group->container;
+	const struct vfio_iommu_driver_ops *ops = container->iommu_driver->ops;
+	int ret;
+
+	if (!ops->unmap_free_reserved_iova)
+		return -EINVAL;
+
+	down_read(&container->group_lock);
+	ret = ops->unmap_free_reserved_iova(container->iommu_data,
+					    group->iommu_group, iova);
+	up_read(&container->group_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(vfio_group_unmap_free_reserved_iova);
+
 /**
  * Group minor allocation/free - both called with vfio.group_lock held
  */
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 33304c0..a79e2a8 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -156,6 +156,19 @@ static void vfio_unlink_reserved_binding(struct vfio_domain *d,
 	rb_erase(&old->node, &d->reserved_binding_list);
 }
 
+static void vfio_reserved_binding_release(struct kref *kref)
+{
+	struct vfio_reserved_binding *b =
+		container_of(kref, struct vfio_reserved_binding, kref);
+	struct vfio_domain *d = b->domain;
+	unsigned long order = __ffs(b->size);
+
+	iommu_unmap(d->domain, b->iova, b->size);
+	free_iova(d->reserved_iova_domain, b->iova >> order);
+	vfio_unlink_reserved_binding(d, b);
+	kfree(b);
+}
+
 /*
  * This code handles mapping and unmapping of user data buffers
  * into DMA'ble space using the IOMMU
@@ -1034,6 +1047,138 @@ done:
 	mutex_unlock(&iommu->lock);
 }
 
+static struct vfio_domain *vfio_find_iommu_domain(void *iommu_data,
+						  struct iommu_group *group)
+{
+	struct vfio_iommu *iommu = iommu_data;
+	struct vfio_group *g;
+	struct vfio_domain *d;
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		list_for_each_entry(g, &d->group_list, next) {
+			if (g->iommu_group == group)
+				return d;
+		}
+	}
+	return NULL;
+}
+
+static int vfio_iommu_type1_alloc_map_reserved_iova(void *iommu_data,
+						    struct iommu_group *group,
+						    phys_addr_t addr, int prot,
+						    dma_addr_t *iova)
+{
+	struct vfio_iommu *iommu = iommu_data;
+	struct vfio_domain *d;
+	uint64_t mask, iommu_page_size;
+	struct vfio_reserved_binding *b;
+	unsigned long order;
+	struct iova *p_iova;
+	phys_addr_t aligned_addr, offset;
+	int ret = 0;
+
+	order = __ffs(vfio_pgsize_bitmap(iommu));
+	iommu_page_size = (uint64_t)1 << order;
+	mask = iommu_page_size - 1;
+	aligned_addr = addr & ~mask;
+	offset = addr - aligned_addr;
+
+	mutex_lock(&iommu->lock);
+
+	d = vfio_find_iommu_domain(iommu_data, group);
+	if (!d) {
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	b = vfio_find_reserved_binding(d, aligned_addr, iommu_page_size);
+	if (b) {
+		ret = 0;
+		*iova = b->iova + offset;
+		kref_get(&b->kref);
+		goto unlock;
+	}
+
+	/* allocate a new reserved IOVA page and a new binding node */
+	p_iova = alloc_iova(d->reserved_iova_domain, 1,
+			    d->reserved_iova_domain->dma_32bit_pfn, true);
+	if (!p_iova) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+	*iova = p_iova->pfn_lo << order;
+
+	b = kzalloc(sizeof(*b), GFP_KERNEL);
+	if (!b) {
+		ret = -ENOMEM;
+		goto free_iova_unlock;
+	}
+
+	ret = iommu_map(d->domain, *iova, aligned_addr, iommu_page_size, prot);
+	if (ret)
+		goto free_binding_iova_unlock;
+
+	kref_init(&b->kref);
+	kref_get(&b->kref);
+	b->domain = d;
+	b->addr = aligned_addr;
+	b->iova = *iova;
+	b->size = iommu_page_size;
+	vfio_link_reserved_binding(d, b);
+	*iova += offset;
+
+	goto unlock;
+
+free_binding_iova_unlock:
+	kfree(b);
+free_iova_unlock:
+	free_iova(d->reserved_iova_domain, *iova >> order);
+unlock:
+	mutex_unlock(&iommu->lock);
+	return ret;
+}
+
+static int vfio_iommu_type1_unmap_free_reserved_iova(void *iommu_data,
+						     struct iommu_group *group,
+						     dma_addr_t iova)
+{
+	struct vfio_iommu *iommu = iommu_data;
+	struct vfio_reserved_binding *b;
+	struct vfio_domain *d;
+	phys_addr_t aligned_addr;
+	dma_addr_t aligned_iova, iommu_page_size, mask, offset;
+	unsigned long order;
+	int ret = 0;
+
+	order = __ffs(vfio_pgsize_bitmap(iommu));
+	iommu_page_size = (uint64_t)1 << order;
+	mask = iommu_page_size - 1;
+	aligned_iova = iova & ~mask;
+	offset = iova - aligned_iova;
+
+	mutex_lock(&iommu->lock);
+
+	d = vfio_find_iommu_domain(iommu_data, group);
+	if (!d) {
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	aligned_addr = iommu_iova_to_phys(d->domain, aligned_iova);
+
+	b = vfio_find_reserved_binding(d, aligned_addr, iommu_page_size);
+	if (!b) {
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	kref_put(&b->kref, vfio_reserved_binding_release);
+
+unlock:
+	mutex_unlock(&iommu->lock);
+	return ret;
+}
+
 static void *vfio_iommu_type1_open(unsigned long arg)
 {
 	struct vfio_iommu *iommu;
@@ -1180,13 +1325,17 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 }
 
 static const struct vfio_iommu_driver_ops vfio_iommu_driver_ops_type1 = {
-	.name		= "vfio-iommu-type1",
-	.owner		= THIS_MODULE,
-	.open		= vfio_iommu_type1_open,
-	.release	= vfio_iommu_type1_release,
-	.ioctl		= vfio_iommu_type1_ioctl,
-	.attach_group	= vfio_iommu_type1_attach_group,
-	.detach_group	= vfio_iommu_type1_detach_group,
+	.name			= "vfio-iommu-type1",
+	.owner			= THIS_MODULE,
+	.open			= vfio_iommu_type1_open,
+	.release		= vfio_iommu_type1_release,
+	.ioctl			= vfio_iommu_type1_ioctl,
+	.attach_group		= vfio_iommu_type1_attach_group,
+	.detach_group		= vfio_iommu_type1_detach_group,
+	.alloc_map_reserved_iova =
+			vfio_iommu_type1_alloc_map_reserved_iova,
+	.unmap_free_reserved_iova =
+			vfio_iommu_type1_unmap_free_reserved_iova,
 };
 
 static int __init vfio_iommu_type1_init(void)
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 610a86a..0020f81 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -75,7 +75,13 @@ struct vfio_iommu_driver_ops {
 					struct iommu_group *group);
 	void		(*detach_group)(void *iommu_data,
 					struct iommu_group *group);
-
+	int		(*alloc_map_reserved_iova)(void *iommu_data,
+					struct iommu_group *group,
+					phys_addr_t addr, int prot,
+					dma_addr_t *iova);
+	int		(*unmap_free_reserved_iova)(void *iommu_data,
+					struct iommu_group *group,
+					dma_addr_t iova);
 };
 
 extern int vfio_register_iommu_driver(const struct vfio_iommu_driver_ops *ops);
@@ -138,4 +144,30 @@ extern int vfio_virqfd_enable(void *opaque,
 			      void *data, struct virqfd **pvirqfd, int fd);
 extern void vfio_virqfd_disable(struct virqfd **pvirqfd);
 
+/**
+ * vfio_group_alloc_map_reserved_iova: allocates a new iova page and map
+ * it onto the aligned physical page that contains a given physical addr.
+ * page size is the domain iommu page size.
+ *
+ * @group: vfio group handle
+ * @addr: physical address to map
+ * @prot: protection attribute
+ * @iova: returned iova that is mapped onto addr
+ *
+ * returns 0 on success, < 0 on failure
+ */
+extern int vfio_group_alloc_map_reserved_iova(struct vfio_group *group,
+					      phys_addr_t addr, int prot,
+					      dma_addr_t *iova);
+/**
+ * vfio_group_unmap_free_reserved_iova: unmap and free the reserved iova page
+ *
+ * @group: vfio group handle
+ * @iova: base iova, must be aligned on the IOMMU page size
+ *
+ * returns 0 on success, < 0 on failure
+ */
+extern int vfio_group_unmap_free_reserved_iova(struct vfio_group *group,
+					       dma_addr_t iova);
+
 #endif /* VFIO_H */
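
For illustration, below is a minimal, hypothetical caller sketch of the new
interface. It is not part of the patch: the doorbell physical address, the
way the vfio_group handle is obtained, and the IOMMU_READ | IOMMU_WRITE
protection flags are placeholders for whatever a real consumer (e.g. MSI
doorbell mapping code) would use.

/* Hypothetical consumer sketch -- not part of this patch. */
#include <linux/iommu.h>
#include <linux/types.h>
#include <linux/vfio.h>

/*
 * Map the page containing a doorbell physical address into the group's
 * reserved IOVA domain and return the IOVA to program into the device.
 * The vfio_group handle is assumed to have been obtained elsewhere.
 */
static int example_map_doorbell(struct vfio_group *group,
				phys_addr_t doorbell_pa,
				dma_addr_t *doorbell_iova)
{
	/* allocates (or reuses) a reserved IOVA page mapped onto doorbell_pa */
	return vfio_group_alloc_map_reserved_iova(group, doorbell_pa,
						  IOMMU_READ | IOMMU_WRITE,
						  doorbell_iova);
}

/*
 * Teardown path: drop the reference taken by alloc_map. The type1 backend
 * aligns the IOVA down to its page boundary internally; the binding is
 * unmapped and its IOVA freed once the last reference is gone.
 */
static int example_unmap_doorbell(struct vfio_group *group,
				  dma_addr_t doorbell_iova)
{
	return vfio_group_unmap_free_reserved_iova(group, doorbell_iova);
}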