From patchwork Thu May 18 07:59:57 2017
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 100052
From: Zhen Lei <thunder.leizhen@huawei.com>
To: Joerg Roedel, iommu, Robin Murphy, David Woodhouse, Sudeep Dutt,
	Ashutosh Dixit, linux-kernel
Cc: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei
Subject: [PATCH v3 6/6] iommu/iova: fix iovad->dma_32bit_pfn as the last pfn of dma32
Date: Thu, 18 May 2017 15:59:57 +0800
Message-ID: <1495094397-9132-7-git-send-email-thunder.leizhen@huawei.com>
X-Mailer: git-send-email 1.9.5.msysgit.0
In-Reply-To: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>
References: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>

Fix iovad->dma_32bit_pfn to always be the last pfn of the dma32 range,
so that iovad->cached32_node and iovad->cached64_node exactly delimit
the dma32 and dma64 areas. This also lets us remove the pfn_32bit
parameter of init_iova_domain().
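To illustrate the new behaviour, below is a minimal stand-alone sketch of the
derivation that init_iova_domain() now performs internally. The DMA_BIT_MASK()
and ilog2() definitions are userspace approximations of the kernel macros,
added here only for illustration; they are not part of this patch.

#include <stdio.h>

/* Userspace stand-in for the kernel's DMA_BIT_MASK() */
#define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

/* Userspace stand-in for the kernel's ilog2() */
static unsigned int ilog2(unsigned long long v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

int main(void)
{
	unsigned long long granule = 4096;	/* typical 4 KiB IOVA granule */

	/*
	 * After this patch, init_iova_domain() always derives
	 * iovad->dma_32bit_pfn from the granule, instead of taking
	 * it as a parameter from each caller.
	 */
	unsigned long long dma_32bit_pfn = DMA_BIT_MASK(32) >> ilog2(granule);

	printf("dma_32bit_pfn = 0x%llx\n", dma_32bit_pfn);
	return 0;
}

For a 4 KiB granule this prints 0xfffff, the last pfn of the dma32 range,
which is the same value the removed DMA_32BIT_PFN macros
(IOVA_PFN(DMA_BIT_MASK(32))) used to supply in the amd and intel drivers.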
Signed-off-by: Zhen Lei
---
 drivers/iommu/amd_iommu.c        |  7 ++-----
 drivers/iommu/dma-iommu.c        | 21 ++++-----------------
 drivers/iommu/intel-iommu.c      | 11 +++--------
 drivers/iommu/iova.c             |  4 ++--
 drivers/misc/mic/scif/scif_rma.c |  3 +--
 include/linux/iova.h             |  2 +-
 6 files changed, 13 insertions(+), 35 deletions(-)

-- 
2.5.0

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 63cacf5..9aebfa6 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -61,7 +61,6 @@
 /* IO virtual address start page frame number */
 #define IOVA_START_PFN		(1)
 #define IOVA_PFN(addr)		((addr) >> PAGE_SHIFT)
-#define DMA_32BIT_PFN		IOVA_PFN(DMA_BIT_MASK(32))
 
 /* Reserved IOVA ranges */
 #define MSI_RANGE_START		(0xfee00000)
@@ -1776,8 +1775,7 @@ static struct dma_ops_domain *dma_ops_domain_alloc(void)
 	if (!dma_dom->domain.pt_root)
 		goto free_dma_dom;
 
-	init_iova_domain(&dma_dom->iovad, PAGE_SIZE,
-			 IOVA_START_PFN, DMA_32BIT_PFN);
+	init_iova_domain(&dma_dom->iovad, PAGE_SIZE, IOVA_START_PFN);
 
 	/* Initialize reserved ranges */
 	copy_reserved_iova(&reserved_iova_ranges, &dma_dom->iovad);
@@ -2747,8 +2745,7 @@ static int init_reserved_iova_ranges(void)
 	struct pci_dev *pdev = NULL;
 	struct iova *val;
 
-	init_iova_domain(&reserved_iova_ranges, PAGE_SIZE,
-			 IOVA_START_PFN, DMA_32BIT_PFN);
+	init_iova_domain(&reserved_iova_ranges, PAGE_SIZE, IOVA_START_PFN);
 
 	lockdep_set_class(&reserved_iova_ranges.iova_rbtree_lock,
 			  &reserved_rbtree_key);
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 8348f366..b3455d4 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -290,18 +290,7 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 		/* ...then finally give it a kicking to make sure it fits */
 		base_pfn = max_t(unsigned long, base_pfn,
 				domain->geometry.aperture_start >> order);
-		end_pfn = min_t(unsigned long, end_pfn,
-				domain->geometry.aperture_end >> order);
 	}
-	/*
-	 * PCI devices may have larger DMA masks, but still prefer allocating
-	 * within a 32-bit mask to avoid DAC addressing. Such limitations don't
-	 * apply to the typical platform device, so for those we may as well
-	 * leave the cache limit at the top of their range to save an rb_last()
-	 * traversal on every allocation.
-	 */
-	if (dev && dev_is_pci(dev))
-		end_pfn &= DMA_BIT_MASK(32) >> order;
 
 	/* start_pfn is always nonzero for an already-initialised domain */
 	if (iovad->start_pfn) {
@@ -310,19 +299,17 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 			pr_warn("Incompatible range for DMA domain\n");
 			return -EFAULT;
 		}
-		/*
-		 * If we have devices with different DMA masks, move the free
-		 * area cache limit down for the benefit of the smaller one.
-		 */
-		iovad->dma_32bit_pfn = min(end_pfn, iovad->dma_32bit_pfn);
 
 		return 0;
 	}
 
-	init_iova_domain(iovad, 1UL << order, base_pfn, end_pfn);
+	init_iova_domain(iovad, 1UL << order, base_pfn);
 	if (!dev)
 		return 0;
 
+	if (end_pfn < iovad->dma_32bit_pfn)
+		dev_dbg(dev, "ancient device or dma range missed some bits?");
+
 	return iova_reserve_iommu_regions(dev, domain);
 }
 EXPORT_SYMBOL(iommu_dma_init_domain);
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 90ab011..266b96b 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -82,8 +82,6 @@
 #define IOVA_START_PFN		(1)
 
 #define IOVA_PFN(addr)		((addr) >> PAGE_SHIFT)
-#define DMA_32BIT_PFN		IOVA_PFN(DMA_BIT_MASK(32))
-#define DMA_64BIT_PFN		IOVA_PFN(DMA_BIT_MASK(64))
 
 /* page table handling */
 #define LEVEL_STRIDE		(9)
@@ -1874,8 +1872,7 @@ static int dmar_init_reserved_ranges(void)
 	struct iova *iova;
 	int i;
 
-	init_iova_domain(&reserved_iova_list, VTD_PAGE_SIZE, IOVA_START_PFN,
-			DMA_32BIT_PFN);
+	init_iova_domain(&reserved_iova_list, VTD_PAGE_SIZE, IOVA_START_PFN);
 
 	lockdep_set_class(&reserved_iova_list.iova_rbtree_lock,
 		&reserved_rbtree_key);
@@ -1933,8 +1930,7 @@ static int domain_init(struct dmar_domain *domain, struct intel_iommu *iommu,
 	int adjust_width, agaw;
 	unsigned long sagaw;
 
-	init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN,
-			DMA_32BIT_PFN);
+	init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
 	domain_reserve_special_ranges(domain);
 
 	/* calculate AGAW */
@@ -4999,8 +4995,7 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
 {
 	int adjust_width;
 
-	init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN,
-			DMA_32BIT_PFN);
+	init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
 	domain_reserve_special_ranges(domain);
 
 	/* calculate AGAW */
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 338930b..c400f6b 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -50,7 +50,7 @@ insert_iova_boundary(struct iova_domain *iovad)
 
 void
 init_iova_domain(struct iova_domain *iovad, unsigned long granule,
-	unsigned long start_pfn, unsigned long pfn_32bit)
+	unsigned long start_pfn)
 {
 	/*
 	 * IOVA granularity will normally be equal to the smallest
@@ -63,7 +63,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 	iovad->rbroot = RB_ROOT;
 	iovad->granule = granule;
 	iovad->start_pfn = start_pfn;
-	iovad->dma_32bit_pfn = pfn_32bit;
+	iovad->dma_32bit_pfn = DMA_BIT_MASK(32) >> ilog2(granule);
 	init_iova_rcaches(iovad);
 
 	/*
diff --git a/drivers/misc/mic/scif/scif_rma.c b/drivers/misc/mic/scif/scif_rma.c
index 329727e..c824329 100644
--- a/drivers/misc/mic/scif/scif_rma.c
+++ b/drivers/misc/mic/scif/scif_rma.c
@@ -39,8 +39,7 @@ void scif_rma_ep_init(struct scif_endpt *ep)
 	struct scif_endpt_rma_info *rma = &ep->rma_info;
 
 	mutex_init(&rma->rma_lock);
-	init_iova_domain(&rma->iovad, PAGE_SIZE, SCIF_IOVA_START_PFN,
-			 SCIF_DMA_64BIT_PFN);
+	init_iova_domain(&rma->iovad, PAGE_SIZE, SCIF_IOVA_START_PFN);
 	spin_lock_init(&rma->tc_lock);
 	mutex_init(&rma->mmn_lock);
 	INIT_LIST_HEAD(&rma->reg_list);
diff --git a/include/linux/iova.h b/include/linux/iova.h
index 2d34112..c0fcd18 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -102,7 +102,7 @@ struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo,
 	unsigned long pfn_hi);
 void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to);
 void init_iova_domain(struct iova_domain *iovad, unsigned long granule,
-	unsigned long start_pfn, unsigned long pfn_32bit);
+	unsigned long start_pfn);
 struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn);
 void put_iova_domain(struct iova_domain *iovad);
 struct iova *split_and_remove_iova(struct iova_domain *iovad,
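
Usage note (illustration only, based on the amd_iommu.c hunk above): each
caller of init_iova_domain() simply drops its trailing boundary argument,
since the 32-bit boundary is now computed internally from the granule.

/* Before this patch: each caller supplied the cache-boundary pfn. */
init_iova_domain(&dma_dom->iovad, PAGE_SIZE, IOVA_START_PFN, DMA_32BIT_PFN);

/*
 * After this patch: only the granule and the start pfn remain;
 * iovad->dma_32bit_pfn is set to DMA_BIT_MASK(32) >> ilog2(granule).
 */
init_iova_domain(&dma_dom->iovad, PAGE_SIZE, IOVA_START_PFN);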