From patchwork Wed Mar 22 06:27:43 2017
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 95699
From: Zhen Lei <thunder.leizhen@huawei.com>
To: Joerg Roedel, iommu, Robin Murphy, David Woodhouse, Sudeep Dutt,
	Ashutosh Dixit, linux-kernel
CC: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei
Subject: [PATCH 3/7] iommu/iova: insert start_pfn boundary of dma32
Date: Wed, 22 Mar 2017 14:27:43 +0800
Message-ID: <1490164067-12552-4-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1490164067-12552-1-git-send-email-thunder.leizhen@huawei.com>
References: <1490164067-12552-1-git-send-email-thunder.leizhen@huawei.com>

Reserve the first granule of memory (starting at start_pfn) as a boundary
iova, to make sure that iovad->cached32_node can never be NULL from now on.
Accordingly, in __cached_rbnode_delete_update(), change the assignment of
iovad->cached32_node from rb_next(&free->node) to rb_prev(&free->node).
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 drivers/iommu/iova.c | 63 ++++++++++++++++++++++++++++++----------------------
 1 file changed, 37 insertions(+), 26 deletions(-)

-- 
2.5.0

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 1c49969..b5a148e 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -32,6 +32,17 @@ static unsigned long iova_rcache_get(struct iova_domain *iovad,
 static void init_iova_rcaches(struct iova_domain *iovad);
 static void free_iova_rcaches(struct iova_domain *iovad);
 
+static void
+insert_iova_boundary(struct iova_domain *iovad)
+{
+	struct iova *iova;
+	unsigned long start_pfn_32bit = iovad->start_pfn;
+
+	iova = reserve_iova(iovad, start_pfn_32bit, start_pfn_32bit);
+	BUG_ON(!iova);
+	iovad->cached32_node = &iova->node;
+}
+
 void
 init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 	unsigned long start_pfn, unsigned long pfn_32bit)
@@ -45,27 +56,38 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 
 	spin_lock_init(&iovad->iova_rbtree_lock);
 	iovad->rbroot = RB_ROOT;
-	iovad->cached32_node = NULL;
 	iovad->granule = granule;
 	iovad->start_pfn = start_pfn;
 	iovad->dma_32bit_pfn = pfn_32bit;
 	init_iova_rcaches(iovad);
+
+	/*
+	 * Insert boundary nodes for dma32. So cached32_node can not be NULL in
+	 * future.
+	 */
+	insert_iova_boundary(iovad);
 }
 EXPORT_SYMBOL_GPL(init_iova_domain);
 
 static struct rb_node *
 __get_cached_rbnode(struct iova_domain *iovad, unsigned long *limit_pfn)
 {
-	if ((*limit_pfn > iovad->dma_32bit_pfn) ||
-		(iovad->cached32_node == NULL))
+	struct rb_node *cached_node;
+	struct rb_node *next_node;
+
+	if (*limit_pfn > iovad->dma_32bit_pfn)
 		return rb_last(&iovad->rbroot);
-	else {
-		struct rb_node *prev_node = rb_prev(iovad->cached32_node);
-		struct iova *curr_iova =
-			rb_entry(iovad->cached32_node, struct iova, node);
-		*limit_pfn = curr_iova->pfn_lo - 1;
-		return prev_node;
+	else
+		cached_node = iovad->cached32_node;
+
+	next_node = rb_next(cached_node);
+	if (next_node) {
+		struct iova *next_iova = rb_entry(next_node, struct iova, node);
+
+		*limit_pfn = min(*limit_pfn, next_iova->pfn_lo - 1);
 	}
+
+	return cached_node;
 }
 
 static void
@@ -83,20 +105,13 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
 	struct iova *cached_iova;
 	struct rb_node *curr;
 
-	if (!iovad->cached32_node)
-		return;
 	curr = iovad->cached32_node;
 	cached_iova = rb_entry(curr, struct iova, node);
 
 	if (free->pfn_lo >= cached_iova->pfn_lo) {
-		struct rb_node *node = rb_next(&free->node);
-		struct iova *iova = rb_entry(node, struct iova, node);
-
 		/* only cache if it's below 32bit pfn */
-		if (node && iova->pfn_lo < iovad->dma_32bit_pfn)
-			iovad->cached32_node = node;
-		else
-			iovad->cached32_node = NULL;
+		if (free->pfn_hi <= iovad->dma_32bit_pfn)
+			iovad->cached32_node = rb_prev(&free->node);
 	}
 }
 
@@ -114,7 +129,7 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 	unsigned long size, unsigned long limit_pfn,
 	struct iova *new, bool size_aligned)
 {
-	struct rb_node *prev, *curr = NULL;
+	struct rb_node *prev, *curr;
 	unsigned long flags;
 	unsigned long saved_pfn;
 	unsigned long pad_size = 0;
@@ -144,13 +159,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 		curr = rb_prev(curr);
 	}
 
-	if (!curr) {
-		if (size_aligned)
-			pad_size = iova_get_pad_size(size, limit_pfn);
-		if ((iovad->start_pfn + size + pad_size) > limit_pfn) {
-			spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
-			return -ENOMEM;
-		}
+	if (unlikely(!curr)) {
+		spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+		return -ENOMEM;
 	}
 
 	/* pfn_lo will point to size aligned address if size_aligned is set */
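
For readers following the reasoning rather than the diff itself, here is a
minimal user-space sketch of the boundary trick the patch relies on. It is
not kernel code: it replaces the rbtree with a sorted doubly linked list,
omits the dma_32bit_pfn check, and every name in it (struct range, struct
domain, insert_boundary, delete_update) is invented for illustration. The
point it demonstrates is the same one the commit message states: once a
permanent boundary entry is reserved at start_pfn, the cached pointer can
always step backwards to a valid node instead of becoming NULL.

	/*
	 * Not part of the patch: a user-space analogue of the boundary idea,
	 * using a sorted doubly linked list in place of the kernel rbtree.
	 */
	#include <assert.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct range {
		unsigned long pfn_lo, pfn_hi;
		struct range *prev, *next;
	};

	struct domain {
		struct range head;	/* circular sentinel head, sorted by pfn_lo */
		struct range *cached32;	/* never NULL once the boundary exists */
		unsigned long start_pfn;
	};

	/* Insert @r in ascending pfn_lo order (a linear scan is fine here). */
	static void insert_sorted(struct domain *d, struct range *r)
	{
		struct range *pos = d->head.next;

		while (pos != &d->head && pos->pfn_lo < r->pfn_lo)
			pos = pos->next;
		r->prev = pos->prev;
		r->next = pos;
		pos->prev->next = r;
		pos->prev = r;
	}

	/*
	 * Analogue of insert_iova_boundary(): permanently reserve the single
	 * pfn at start_pfn so the cached pointer always has a fallback node.
	 */
	static void insert_boundary(struct domain *d)
	{
		struct range *b = calloc(1, sizeof(*b));

		assert(b);
		b->pfn_lo = b->pfn_hi = d->start_pfn;
		insert_sorted(d, b);
		d->cached32 = b;	/* cached32 can no longer be NULL */
	}

	/* Analogue of __cached_rbnode_delete_update(): step back, never to NULL. */
	static void delete_update(struct domain *d, struct range *freed)
	{
		if (freed->pfn_lo >= d->cached32->pfn_lo)
			d->cached32 = freed->prev;	/* boundary stops the walk */
	}

	int main(void)
	{
		struct domain d = { .start_pfn = 1 };

		d.head.next = d.head.prev = &d.head;
		insert_boundary(&d);

		struct range r = { .pfn_lo = 100, .pfn_hi = 115 };
		insert_sorted(&d, &r);
		d.cached32 = &r;

		/* Freeing the cached range falls back to the boundary, not NULL. */
		delete_update(&d, &r);
		printf("cached32 now at pfn %lu\n", d.cached32->pfn_lo);
		return 0;
	}

Because the boundary entry is reserved for the lifetime of the domain, the
NULL checks that the patch deletes from __get_cached_rbnode() and
__cached_rbnode_delete_update() become unnecessary, which is exactly what
the diff above does.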