From patchwork Thu Oct 29 13:59:45 2015
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 55774
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, alex.williamson@redhat.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org, will.deacon@arm.com
Cc: suravee.suthikulpanit@amd.com, christoffer.dall@linaro.org,
	linux-kernel@vger.kernel.org, patches@linaro.org
Subject: [PATCH] vfio/type1: handle case where IOMMU does not support PAGE_SIZE size
Date: Thu, 29 Oct 2015 13:59:45 +0000
Message-Id: <1446127185-2096-1-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1

The current vfio_pgsize_bitmap() code hides the supported IOMMU page
sizes smaller than PAGE_SIZE. As a result, if the IOMMU does not
support the PAGE_SIZE page size, the alignment check on map/unmap is
done against larger page sizes, if any. That check can fail even
though the mapping could be done with pages smaller than PAGE_SIZE.

This patch modifies vfio_pgsize_bitmap() so that, if the IOMMU
supports page sizes smaller than PAGE_SIZE, we pretend PAGE_SIZE is
supported and hide the sub-PAGE_SIZE sizes. That way the user is able
to map/unmap buffers whose size and start address are aligned with
PAGE_SIZE.
The pinning code uses that granularity, while the IOMMU driver can use
the sub-PAGE_SIZE sizes to map the buffer.

Signed-off-by: Eric Auger
Signed-off-by: Alex Williamson
---

This was tested on AMD Seattle with a 64kB page host. The ARM MMU-401
currently exposes 4kB, 2MB and 1GB page support. With a 64kB page
host, the map/unmap check is done against 2MB. Some alignment checks
fail, so VFIO_IOMMU_MAP_DMA fails although we could map using the 4kB
IOMMU page size.

RFC -> PATCH v1:
- move all modifications into vfio_pgsize_bitmap() following Alex'
  suggestion to expose a fake PAGE_SIZE support
- restore WARN_ON's
---
 drivers/vfio/vfio_iommu_type1.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

-- 
1.9.1

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 57d8c37..cee504a 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -403,13 +403,26 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
 static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 {
 	struct vfio_domain *domain;
-	unsigned long bitmap = PAGE_MASK;
+	unsigned long bitmap = ULONG_MAX;
 
 	mutex_lock(&iommu->lock);
 	list_for_each_entry(domain, &iommu->domain_list, next)
 		bitmap &= domain->domain->ops->pgsize_bitmap;
 	mutex_unlock(&iommu->lock);
 
+	/*
+	 * In case the IOMMU supports page sizes smaller than PAGE_SIZE
+	 * we pretend PAGE_SIZE is supported and hide sub-PAGE_SIZE sizes.
+	 * That way the user will be able to map/unmap buffers whose size/
+	 * start address is aligned with PAGE_SIZE. Pinning code uses that
+	 * granularity while iommu driver can use the sub-PAGE_SIZE size
+	 * to map the buffer.
+	 */
+	if (bitmap & ~PAGE_MASK) {
+		bitmap &= PAGE_MASK;
+		bitmap |= PAGE_SIZE;
+	}
+
 	return bitmap;
 }