From patchwork Tue Sep 19 16:31:52 2017
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 113044
From: Robin Murphy
To: joro@8bytes.org
Cc: iommu@lists.linux-foundation.org, thunder.leizhen@huawei.com,
    nwatters@codeaurora.org, tomasz.nowicki@caviumnetworks.com,
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 1/6] iommu/iova: Optimise rbtree searching
Date: Tue, 19 Sep 2017 17:31:52 +0100
Message-Id: <9d6aff3f458a54d210eabdaa38ee6baa0a901a3a.1505829018.git.robin.murphy@arm.com>

From: Zhen Lei

Checking the IOVA bounds separately before deciding which direction to
continue the search (if necessary) results in redundantly comparing both
pfns twice each. GCC can already determine that the final comparison op
is redundant and optimise it down to 3 in total, but we can go one
further with a little tweak of the ordering (which makes the intent of
the code that much cleaner as a bonus).
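To illustrate the shape of the reordered walk outside the kernel, here is a
minimal userspace sketch (not part of the patch; range_find() and struct
range are hypothetical names) doing the same three-way test over a sorted
array of disjoint [lo, hi] ranges - at most two pfn comparisons per step,
with the in-range case falling out of the final else exactly as
private_find_iova() now does:

#include <stddef.h>

struct range {
	unsigned long lo, hi;	/* inclusive bounds; entries sorted and disjoint */
};

/* Hypothetical helper mirroring the reordered walk in private_find_iova() */
static const struct range *range_find(const struct range *r, size_t n,
				      unsigned long pfn)
{
	size_t lo = 0, hi = n;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (pfn < r[mid].lo)
			hi = mid;		/* descend left */
		else if (pfn > r[mid].hi)
			lo = mid + 1;		/* descend right */
		else
			return &r[mid];		/* pfn falls within this range */
	}
	return NULL;
}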
Signed-off-by: Zhen Lei
Tested-by: Ard Biesheuvel
Tested-by: Zhen Lei
Tested-by: Nate Watterson
[rm: rewrote commit message to clarify]
Signed-off-by: Robin Murphy
---
v4: No change

 drivers/iommu/iova.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

--
2.13.4.dirty

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 33edfa794ae9..f129ff4f5c89 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -342,15 +342,12 @@ private_find_iova(struct iova_domain *iovad, unsigned long pfn)
 	while (node) {
 		struct iova *iova = rb_entry(node, struct iova, node);

-		/* If pfn falls within iova's range, return iova */
-		if ((pfn >= iova->pfn_lo) && (pfn <= iova->pfn_hi)) {
-			return iova;
-		}
-
 		if (pfn < iova->pfn_lo)
 			node = node->rb_left;
-		else if (pfn > iova->pfn_lo)
+		else if (pfn > iova->pfn_hi)
 			node = node->rb_right;
+		else
+			return iova;	/* pfn falls within iova's range */
 	}

 	return NULL;

From patchwork Tue Sep 19 16:31:53 2017
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 113045
From: Robin Murphy
To: joro@8bytes.org
Cc: iommu@lists.linux-foundation.org, thunder.leizhen@huawei.com,
    nwatters@codeaurora.org, tomasz.nowicki@caviumnetworks.com,
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 2/6] iommu/iova: Optimise the padding calculation
Date: Tue, 19 Sep 2017 17:31:53 +0100
Message-Id: <728494c4a85091828347685fece707968f522cd1.1505829018.git.robin.murphy@arm.com>

From: Zhen Lei

The mask for calculating the padding size doesn't change, so there's no
need to recalculate it every loop iteration. Furthermore, once we've
done that, it becomes clear that we don't actually need to calculate a
padding size at all - by flipping the arithmetic around, we can just
combine the upper limit, size, and mask directly to check against the
lower limit.

For an arm64 build, this alone knocks 20% off the object code size of
the entire alloc_iova() function!
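To see why the explicit padding term folds away, consider the old and new
forms side by side. A small standalone sketch (userspace-only, hypothetical
helper names; align_mask here stands in for ~0UL << fls_long(size - 1), and
order_mask for __roundup_pow_of_two(size) - 1):

#include <assert.h>

/* Old form: pad the limit down via an explicit padding size */
static unsigned long start_with_pad(unsigned long limit_pfn, unsigned long size,
				    unsigned long order_mask)
{
	unsigned long pad_size = (limit_pfn - size) & order_mask;

	return limit_pfn - (size + pad_size);
}

/* New form: round (limit - size) down onto the alignment boundary directly */
static unsigned long start_with_mask(unsigned long limit_pfn, unsigned long size,
				     unsigned long align_mask)
{
	return (limit_pfn - size) & align_mask;
}

int main(void)
{
	unsigned long limit_pfn = 0x100000, size = 0x30;
	unsigned long order_mask = 0x3f;	/* __roundup_pow_of_two(0x30) - 1 */
	unsigned long align_mask = ~order_mask;	/* ~0UL << fls_long(0x30 - 1) */

	/* limit - (size + pad) == (limit - size) - ((limit - size) & mask)
	 *                      == (limit - size) & ~mask */
	assert(start_with_pad(limit_pfn, size, order_mask) ==
	       start_with_mask(limit_pfn, size, align_mask));
	return 0;
}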
Signed-off-by: Zhen Lei
Tested-by: Ard Biesheuvel
Tested-by: Zhen Lei
Tested-by: Nate Watterson
[rm: simplified more of the arithmetic, rewrote commit message]
Signed-off-by: Robin Murphy
---
v4:
 - Round align_mask up instead of down (oops!)
 - Remove redundant !curr check
 - Introduce new_pfn variable here to reduce churn in later patches

 drivers/iommu/iova.c | 42 +++++++++++++++---------------------------
 1 file changed, 15 insertions(+), 27 deletions(-)

--
2.13.4.dirty

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index f129ff4f5c89..20be9a8b3188 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -182,24 +182,17 @@ iova_insert_rbtree(struct rb_root *root, struct iova *iova,
 	rb_insert_color(&iova->node, root);
 }

-/*
- * Computes the padding size required, to make the start address
- * naturally aligned on the power-of-two order of its size
- */
-static unsigned int
-iova_get_pad_size(unsigned int size, unsigned int limit_pfn)
-{
-	return (limit_pfn - size) & (__roundup_pow_of_two(size) - 1);
-}
-
 static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 		unsigned long size, unsigned long limit_pfn,
 			struct iova *new, bool size_aligned)
 {
 	struct rb_node *prev, *curr = NULL;
 	unsigned long flags;
-	unsigned long saved_pfn;
-	unsigned int pad_size = 0;
+	unsigned long saved_pfn, new_pfn;
+	unsigned long align_mask = ~0UL;
+
+	if (size_aligned)
+		align_mask <<= fls_long(size - 1);

 	/* Walk the tree backwards */
 	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
@@ -209,31 +202,26 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 	while (curr) {
 		struct iova *curr_iova = rb_entry(curr, struct iova, node);

-		if (limit_pfn <= curr_iova->pfn_lo) {
+		if (limit_pfn <= curr_iova->pfn_lo)
 			goto move_left;
-		} else if (limit_pfn > curr_iova->pfn_hi) {
-			if (size_aligned)
-				pad_size = iova_get_pad_size(size, limit_pfn);
-			if ((curr_iova->pfn_hi + size + pad_size) < limit_pfn)
-				break;	/* found a free slot */
-		}
+
+		if (((limit_pfn - size) & align_mask) > curr_iova->pfn_hi)
+			break;	/* found a free slot */
+
 		limit_pfn = curr_iova->pfn_lo;
 move_left:
 		prev = curr;
 		curr = rb_prev(curr);
 	}

-	if (!curr) {
-		if (size_aligned)
-			pad_size = iova_get_pad_size(size, limit_pfn);
-		if ((iovad->start_pfn + size + pad_size) > limit_pfn) {
-			spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
-			return -ENOMEM;
-		}
+	new_pfn = (limit_pfn - size) & align_mask;
+	if (limit_pfn < size || new_pfn < iovad->start_pfn) {
+		spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+		return -ENOMEM;
 	}

 	/* pfn_lo will point to size aligned address if size_aligned is set */
-	new->pfn_lo = limit_pfn - (size + pad_size);
+	new->pfn_lo = new_pfn;
 	new->pfn_hi = new->pfn_lo + size - 1;

 	/* If we have 'prev', it's a valid place to start the insertion.
 	 */

From patchwork Tue Sep 19 16:31:57 2017
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 113046
From: Robin Murphy
To: joro@8bytes.org
Cc: iommu@lists.linux-foundation.org, thunder.leizhen@huawei.com,
    nwatters@codeaurora.org, tomasz.nowicki@caviumnetworks.com,
    linux-kernel@vger.kernel.org, Thierry Reding, Jonathan Hunter,
    David Airlie, Sudeep Dutt, Ashutosh Dixit
Subject: [PATCH v4 6/6] iommu/iova: Make dma_32bit_pfn implicit
Date: Tue, 19 Sep 2017 17:31:57 +0100
Message-Id: <7683c29df187f50a08299269a469380553336a4d.1505829018.git.robin.murphy@arm.com>

From: Zhen Lei

Now that the cached node optimisation can apply to all allocations, the
couple of users which were playing tricks with dma_32bit_pfn in order
to benefit from it can stop doing so. Conversely, there is also no need
for all the other users to explicitly calculate a 'real' 32-bit PFN,
when init_iova_domain() can happily do that itself from the page
granularity.
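As a sanity check on that last point (illustrative userspace code only, not
part of the patch): for a granule of 2^shift bytes, the value callers used to
pass as IOVA_PFN(DMA_BIT_MASK(32)) is, once the old "+ 1" in
init_iova_domain() is applied, exactly the 1UL << (32 - shift) that the
function can now derive for itself from iova_shift():

#include <assert.h>
#include <stdint.h>

int main(void)
{
	/* Try a few plausible IOVA granule shifts (4K..64K pages) */
	for (unsigned int shift = 12; shift <= 16; shift++) {
		uint64_t dma_32bit_mask = (1ULL << 32) - 1;	/* DMA_BIT_MASK(32) */
		uint64_t pfn_32bit = dma_32bit_mask >> shift;	/* old explicit argument */
		uint64_t implicit = 1ULL << (32 - shift);	/* derived from iova_shift() */

		assert(pfn_32bit + 1 == implicit);
	}
	return 0;
}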
CC: Thierry Reding
CC: Jonathan Hunter
CC: David Airlie
CC: Sudeep Dutt
CC: Ashutosh Dixit
Signed-off-by: Zhen Lei
Tested-by: Ard Biesheuvel
Tested-by: Zhen Lei
Tested-by: Nate Watterson
[rm: use iova_shift(), rewrote commit message]
Signed-off-by: Robin Murphy
---
v4: No change

 drivers/gpu/drm/tegra/drm.c      |  3 +--
 drivers/gpu/host1x/dev.c         |  3 +--
 drivers/iommu/amd_iommu.c        |  7 ++-----
 drivers/iommu/dma-iommu.c        | 18 +-----------------
 drivers/iommu/intel-iommu.c      | 11 +++--------
 drivers/iommu/iova.c             |  4 ++--
 drivers/misc/mic/scif/scif_rma.c |  3 +--
 include/linux/iova.h             |  5 ++---
 8 files changed, 13 insertions(+), 41 deletions(-)

--
2.13.4.dirty

diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
index 597d563d636a..b822e484b7e5 100644
--- a/drivers/gpu/drm/tegra/drm.c
+++ b/drivers/gpu/drm/tegra/drm.c
@@ -155,8 +155,7 @@ static int tegra_drm_load(struct drm_device *drm, unsigned long flags)

 	order = __ffs(tegra->domain->pgsize_bitmap);
 	init_iova_domain(&tegra->carveout.domain, 1UL << order,
-			 carveout_start >> order,
-			 carveout_end >> order);
+			 carveout_start >> order);

 	tegra->carveout.shift = iova_shift(&tegra->carveout.domain);
 	tegra->carveout.limit = carveout_end >> tegra->carveout.shift;
diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
index 7f22c5c37660..5267c62e8896 100644
--- a/drivers/gpu/host1x/dev.c
+++ b/drivers/gpu/host1x/dev.c
@@ -198,8 +198,7 @@ static int host1x_probe(struct platform_device *pdev)

 		order = __ffs(host->domain->pgsize_bitmap);
 		init_iova_domain(&host->iova, 1UL << order,
-				 geometry->aperture_start >> order,
-				 geometry->aperture_end >> order);
+				 geometry->aperture_start >> order);
 		host->iova_end = geometry->aperture_end;
 	}

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 51f8215877f5..647ab7691aee 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -63,7 +63,6 @@
 /* IO virtual address start page frame number */
 #define IOVA_START_PFN		(1)
 #define IOVA_PFN(addr)		((addr) >> PAGE_SHIFT)
-#define DMA_32BIT_PFN		IOVA_PFN(DMA_BIT_MASK(32))

 /* Reserved IOVA ranges */
 #define MSI_RANGE_START		(0xfee00000)
@@ -1788,8 +1787,7 @@ static struct dma_ops_domain *dma_ops_domain_alloc(void)
 	if (!dma_dom->domain.pt_root)
 		goto free_dma_dom;

-	init_iova_domain(&dma_dom->iovad, PAGE_SIZE,
-			 IOVA_START_PFN, DMA_32BIT_PFN);
+	init_iova_domain(&dma_dom->iovad, PAGE_SIZE, IOVA_START_PFN);

 	if (init_iova_flush_queue(&dma_dom->iovad, iova_domain_flush_tlb, NULL))
 		goto free_dma_dom;
@@ -2696,8 +2694,7 @@ static int init_reserved_iova_ranges(void)
 	struct pci_dev *pdev = NULL;
 	struct iova *val;

-	init_iova_domain(&reserved_iova_ranges, PAGE_SIZE,
-			 IOVA_START_PFN, DMA_32BIT_PFN);
+	init_iova_domain(&reserved_iova_ranges, PAGE_SIZE, IOVA_START_PFN);

 	lockdep_set_class(&reserved_iova_ranges.iova_rbtree_lock,
 			  &reserved_rbtree_key);
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 9d1cebe7f6cb..191be9c80a8a 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -292,18 +292,7 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 		/* ...then finally give it a kicking to make sure it fits */
 		base_pfn = max_t(unsigned long, base_pfn,
 				domain->geometry.aperture_start >> order);
-		end_pfn = min_t(unsigned long, end_pfn,
-				domain->geometry.aperture_end >> order);
 	}
-	/*
-	 * PCI devices may have larger DMA masks, but still prefer allocating
-	 * within a 32-bit mask to avoid DAC addressing. Such limitations don't
-	 * apply to the typical platform device, so for those we may as well
-	 * leave the cache limit at the top of their range to save an rb_last()
-	 * traversal on every allocation.
-	 */
-	if (dev && dev_is_pci(dev))
-		end_pfn &= DMA_BIT_MASK(32) >> order;

 	/* start_pfn is always nonzero for an already-initialised domain */
 	if (iovad->start_pfn) {
@@ -312,16 +301,11 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 			pr_warn("Incompatible range for DMA domain\n");
 			return -EFAULT;
 		}
-		/*
-		 * If we have devices with different DMA masks, move the free
-		 * area cache limit down for the benefit of the smaller one.
-		 */
-		iovad->dma_32bit_pfn = min(end_pfn + 1, iovad->dma_32bit_pfn);

 		return 0;
 	}

-	init_iova_domain(iovad, 1UL << order, base_pfn, end_pfn);
+	init_iova_domain(iovad, 1UL << order, base_pfn);

 	if (!dev)
 		return 0;
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 6784a05dd6b2..ebb48353dd39 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -82,8 +82,6 @@
 #define IOVA_START_PFN		(1)

 #define IOVA_PFN(addr)		((addr) >> PAGE_SHIFT)
-#define DMA_32BIT_PFN		IOVA_PFN(DMA_BIT_MASK(32))
-#define DMA_64BIT_PFN		IOVA_PFN(DMA_BIT_MASK(64))

 /* page table handling */
 #define LEVEL_STRIDE		(9)
@@ -1878,8 +1876,7 @@ static int dmar_init_reserved_ranges(void)
 	struct iova *iova;
 	int i;

-	init_iova_domain(&reserved_iova_list, VTD_PAGE_SIZE, IOVA_START_PFN,
-			DMA_32BIT_PFN);
+	init_iova_domain(&reserved_iova_list, VTD_PAGE_SIZE, IOVA_START_PFN);

 	lockdep_set_class(&reserved_iova_list.iova_rbtree_lock,
 		&reserved_rbtree_key);
@@ -1938,8 +1935,7 @@ static int domain_init(struct dmar_domain *domain, struct intel_iommu *iommu,
 	unsigned long sagaw;
 	int err;

-	init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN,
-			DMA_32BIT_PFN);
+	init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);

 	err = init_iova_flush_queue(&domain->iovad,
 				    iommu_flush_iova, iova_entry_free);
@@ -4897,8 +4893,7 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
 {
 	int adjust_width;

-	init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN,
-			DMA_32BIT_PFN);
+	init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
 	domain_reserve_special_ranges(domain);

 	/* calculate AGAW */
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index a125a5786dbf..dbe26067250e 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -40,7 +40,7 @@ static void fq_flush_timeout(unsigned long data);

 void
 init_iova_domain(struct iova_domain *iovad, unsigned long granule,
-	unsigned long start_pfn, unsigned long pfn_32bit)
+	unsigned long start_pfn)
 {
 	/*
 	 * IOVA granularity will normally be equal to the smallest
@@ -55,7 +55,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 	iovad->cached32_node = NULL;
 	iovad->granule = granule;
 	iovad->start_pfn = start_pfn;
-	iovad->dma_32bit_pfn = pfn_32bit + 1;
+	iovad->dma_32bit_pfn = 1UL << (32 - iova_shift(iovad));
 	iovad->flush_cb = NULL;
 	iovad->fq = NULL;
 	iovad->anchor.pfn_lo = iovad->anchor.pfn_hi = IOVA_ANCHOR;
diff --git a/drivers/misc/mic/scif/scif_rma.c b/drivers/misc/mic/scif/scif_rma.c
index 329727e00e97..c824329f7012 100644
--- a/drivers/misc/mic/scif/scif_rma.c
+++ b/drivers/misc/mic/scif/scif_rma.c
@@ -39,8 +39,7 @@ void scif_rma_ep_init(struct scif_endpt *ep)
 	struct scif_endpt_rma_info *rma = &ep->rma_info;

 	mutex_init(&rma->rma_lock);
-	init_iova_domain(&rma->iovad, PAGE_SIZE, SCIF_IOVA_START_PFN,
-			 SCIF_DMA_64BIT_PFN);
+	init_iova_domain(&rma->iovad, PAGE_SIZE, SCIF_IOVA_START_PFN);
 	spin_lock_init(&rma->tc_lock);
 	mutex_init(&rma->mmn_lock);
 	INIT_LIST_HEAD(&rma->reg_list);
diff --git a/include/linux/iova.h b/include/linux/iova.h
index 5eaedf77b152..c696ee81054e 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -155,7 +155,7 @@ struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo,
 	unsigned long pfn_hi);
 void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to);
 void init_iova_domain(struct iova_domain *iovad, unsigned long granule,
-	unsigned long start_pfn, unsigned long pfn_32bit);
+	unsigned long start_pfn);
 int init_iova_flush_queue(struct iova_domain *iovad,
 			  iova_flush_cb flush_cb, iova_entry_dtor entry_dtor);
 struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn);
@@ -231,8 +231,7 @@ static inline void copy_reserved_iova(struct iova_domain *from,

 static inline void init_iova_domain(struct iova_domain *iovad,
 				    unsigned long granule,
-				    unsigned long start_pfn,
-				    unsigned long pfn_32bit)
+				    unsigned long start_pfn)
 {
 }