From patchwork Fri Mar 19 13:25:43 2021
X-Patchwork-Submitter: John Garry <john.garry@huawei.com>
X-Patchwork-Id: 404794
From: John Garry <john.garry@huawei.com>
To: , , , , , ,
Cc: , , , , John Garry <john.garry@huawei.com>
Subject: [PATCH 1/6] iommu: Move IOVA power-of-2 roundup into allocator
Date: Fri, 19 Mar 2021 21:25:43 +0800
Message-ID: <1616160348-29451-2-git-send-email-john.garry@huawei.com>
X-Mailer: git-send-email 2.8.1
In-Reply-To: <1616160348-29451-1-git-send-email-john.garry@huawei.com>
References: <1616160348-29451-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Move the IOVA size power-of-2 rcache roundup into the IOVA allocator.

This is to eventually make it possible to configure the upper limit of
the IOVA rcache range.

Signed-off-by: John Garry <john.garry@huawei.com>
---
 drivers/iommu/dma-iommu.c |  8 ------
 drivers/iommu/iova.c      | 51 ++++++++++++++++++++++++++-------------
 2 files changed, 34 insertions(+), 25 deletions(-)

-- 
2.26.2
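For reference, the behaviour this patch relocates can be modelled outside
the kernel. The sketch below is illustrative only: it assumes
IOVA_RANGE_CACHE_MAX_SIZE is 6 (its value in include/linux/iova.h at the
time), and demo_roundup_pow_of_two() stands in for the kernel's
roundup_pow_of_two(). After the patch, only a "fast" allocation pads a
cacheable request up to the next power of two:

/*
 * Userspace model of the roundup moved into the IOVA allocator.
 * IOVA_RANGE_CACHE_MAX_SIZE = 6 is an assumption matching the kernel
 * headers of this era.
 */
#include <stdio.h>

#define IOVA_RANGE_CACHE_MAX_SIZE 6	/* log of max cached IOVA size, in pages */

/* Demo stand-in for the kernel's roundup_pow_of_two(). */
static unsigned long demo_roundup_pow_of_two(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/* The check formerly in iommu_dma_alloc_iova(), now gated on 'fast'. */
static unsigned long cacheable_size(unsigned long size, int fast)
{
	if (fast && size < (1UL << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
		size = demo_roundup_pow_of_two(size);
	return size;
}

int main(void)
{
	/* A 5-page request on the fast path is padded to 8 pages ... */
	printf("fast, 5 pages  -> %lu\n", cacheable_size(5, 1));
	/* ... while the slow path keeps the exact size. */
	printf("slow, 5 pages  -> %lu\n", cacheable_size(5, 0));
	/* Sizes at or above the rcache limit (32 pages here) are never rounded. */
	printf("fast, 40 pages -> %lu\n", cacheable_size(40, 1));
	return 0;
}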
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index af765c813cc8..15b7270a5c2a 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -429,14 +429,6 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 	shift = iova_shift(iovad);
 	iova_len = size >> shift;
-	/*
-	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
-	 * will come back to bite us badly, so we have to waste a bit of space
-	 * rounding up anything cacheable to make sure that can't happen. The
-	 * order of the unadjusted size will still match upon freeing.
-	 */
-	if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
-		iova_len = roundup_pow_of_two(iova_len);
 
 	dma_limit = min_not_zero(dma_limit, dev->bus_dma_limit);
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index e6e2fa85271c..e62e9e30b30c 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -179,7 +179,7 @@ iova_insert_rbtree(struct rb_root *root, struct iova *iova,
 
 static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 		unsigned long size, unsigned long limit_pfn,
-			struct iova *new, bool size_aligned)
+			struct iova *new, bool size_aligned, bool fast)
 {
 	struct rb_node *curr, *prev;
 	struct iova *curr_iova;
@@ -188,6 +188,15 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 	unsigned long align_mask = ~0UL;
 	unsigned long high_pfn = limit_pfn, low_pfn = iovad->start_pfn;
 
+	/*
+	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
+	 * will come back to bite us badly, so we have to waste a bit of space
+	 * rounding up anything cacheable to make sure that can't happen. The
+	 * order of the unadjusted size will still match upon freeing.
+	 */
+	if (fast && size < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
+		size = roundup_pow_of_two(size);
+
 	if (size_aligned)
 		align_mask <<= fls_long(size - 1);
@@ -288,21 +297,10 @@ void iova_cache_put(void)
 }
 EXPORT_SYMBOL_GPL(iova_cache_put);
 
-/**
- * alloc_iova - allocates an iova
- * @iovad: - iova domain in question
- * @size: - size of page frames to allocate
- * @limit_pfn: - max limit address
- * @size_aligned: - set if size_aligned address range is required
- * This function allocates an iova in the range iovad->start_pfn to limit_pfn,
- * searching top-down from limit_pfn to iovad->start_pfn. If the size_aligned
- * flag is set then the allocated address iova->pfn_lo will be naturally
- * aligned on roundup_power_of_two(size).
- */
-struct iova *
-alloc_iova(struct iova_domain *iovad, unsigned long size,
+static struct iova *
+__alloc_iova(struct iova_domain *iovad, unsigned long size,
 	unsigned long limit_pfn,
-	bool size_aligned)
+	bool size_aligned, bool fast)
 {
 	struct iova *new_iova;
 	int ret;
@@ -312,7 +310,7 @@ alloc_iova(struct iova_domain *iovad, unsigned long size,
 		return NULL;
 
 	ret = __alloc_and_insert_iova_range(iovad, size, limit_pfn + 1,
-			new_iova, size_aligned);
+			new_iova, size_aligned, fast);
 
 	if (ret) {
 		free_iova_mem(new_iova);
@@ -321,6 +319,25 @@ alloc_iova(struct iova_domain *iovad, unsigned long size,
 
 	return new_iova;
 }
+
+/**
+ * alloc_iova - allocates an iova
+ * @iovad: - iova domain in question
+ * @size: - size of page frames to allocate
+ * @limit_pfn: - max limit address
+ * @size_aligned: - set if size_aligned address range is required
+ * This function allocates an iova in the range iovad->start_pfn to limit_pfn,
+ * searching top-down from limit_pfn to iovad->start_pfn. If the size_aligned
+ * flag is set then the allocated address iova->pfn_lo will be naturally
+ * aligned on roundup_power_of_two(size).
+ */
+struct iova *
+alloc_iova(struct iova_domain *iovad, unsigned long size,
+	unsigned long limit_pfn,
+	bool size_aligned)
+{
+	return __alloc_iova(iovad, size, limit_pfn, size_aligned, false);
+}
 EXPORT_SYMBOL_GPL(alloc_iova);
 
 static struct iova *
@@ -433,7 +450,7 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
 		return iova_pfn;
 
 retry:
-	new_iova = alloc_iova(iovad, size, limit_pfn, true);
+	new_iova = __alloc_iova(iovad, size, limit_pfn, true, true);
 	if (!new_iova) {
 		unsigned int cpu;
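The comment moved by this patch says "the order of the unadjusted size
will still match upon freeing". A small userspace sketch of why that
matters: the rcache indexes its buckets by order_base_2(size) on both
insert and lookup, so an un-rounded 5-page IOVA would share a bucket with
genuine 8-page allocations and could later be handed back 3 pages short.
order_base_2_demo() below is a stand-in for the kernel helper, not the
kernel implementation:

/*
 * Why cacheable allocations must be rounded up: both a 5-page and an
 * 8-page size map to the same rcache bucket.
 */
#include <stdio.h>

/* Demo equivalent of the kernel's order_base_2(): ceil(log2(n)). */
static unsigned int order_base_2_demo(unsigned long n)
{
	unsigned int order = 0;

	while ((1UL << order) < n)
		order++;
	return order;
}

int main(void)
{
	/* Both sizes land in bucket 3 ... */
	printf("bucket for 5 pages: %u\n", order_base_2_demo(5));
	printf("bucket for 8 pages: %u\n", order_base_2_demo(8));
	/*
	 * ... so a cached 5-page region could satisfy an 8-page lookup
	 * unless allocations are rounded up to 8 pages first.
	 */
	return 0;
}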