From patchwork Tue Apr 5 07:32:49 2022
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 556811
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, David Stevens,
 Robin Murphy, Christoph Hellwig, Joerg Roedel, Mario Limonciello
Subject: [PATCH 5.15 907/913] iommu/dma: Check CONFIG_SWIOTLB more broadly
Date: Tue, 5 Apr 2022 09:32:49 +0200
Message-Id: <20220405070407.005991513@linuxfoundation.org>
In-Reply-To: <20220405070339.801210740@linuxfoundation.org>
References: <20220405070339.801210740@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: David Stevens

commit 2e727bffbe93750a13d2414f3ce43de2f21600d2 upstream.

Introduce a new dev_use_swiotlb function to guard swiotlb code, instead
of overloading dev_is_untrusted. This allows CONFIG_SWIOTLB to be
checked more broadly, so the swiotlb related code can be removed more
aggressively.
Signed-off-by: David Stevens
Reviewed-by: Robin Murphy
Reviewed-by: Christoph Hellwig
Link: https://lore.kernel.org/r/20210929023300.335969-6-stevensd@google.com
Signed-off-by: Joerg Roedel
Cc: Mario Limonciello
Signed-off-by: Greg Kroah-Hartman
---
 drivers/iommu/dma-iommu.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -317,6 +317,11 @@ static bool dev_is_untrusted(struct devi
 	return dev_is_pci(dev) && to_pci_dev(dev)->untrusted;
 }
 
+static bool dev_use_swiotlb(struct device *dev)
+{
+	return IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev);
+}
+
 /* sysfs updates are serialised by the mutex of the group owning @domain */
 int iommu_dma_init_fq(struct iommu_domain *domain)
 {
@@ -731,7 +736,7 @@ static void iommu_dma_sync_single_for_cp
 {
 	phys_addr_t phys;
 
-	if (dev_is_dma_coherent(dev) && !dev_is_untrusted(dev))
+	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev))
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
@@ -747,7 +752,7 @@ static void iommu_dma_sync_single_for_de
 {
 	phys_addr_t phys;
 
-	if (dev_is_dma_coherent(dev) && !dev_is_untrusted(dev))
+	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev))
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
@@ -765,7 +770,7 @@ static void iommu_dma_sync_sg_for_cpu(st
 	struct scatterlist *sg;
 	int i;
 
-	if (dev_is_untrusted(dev))
+	if (dev_use_swiotlb(dev))
 		for_each_sg(sgl, sg, nelems, i)
 			iommu_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
 						      sg->length, dir);
@@ -781,7 +786,7 @@ static void iommu_dma_sync_sg_for_device
 	struct scatterlist *sg;
 	int i;
 
-	if (dev_is_untrusted(dev))
+	if (dev_use_swiotlb(dev))
 		for_each_sg(sgl, sg, nelems, i)
 			iommu_dma_sync_single_for_device(dev,
 							 sg_dma_address(sg),
@@ -808,8 +813,7 @@ static dma_addr_t iommu_dma_map_page(str
 	 * If both the physical buffer start address and size are
 	 * page aligned, we don't need to use a bounce page.
 	 */
-	if (IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev) &&
-	    iova_offset(iovad, phys | size)) {
+	if (dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) {
 		void *padding_start;
 		size_t padding_size;
 
@@ -995,7 +999,7 @@ static int iommu_dma_map_sg(struct devic
 		goto out;
 	}
 
-	if (dev_is_untrusted(dev))
+	if (dev_use_swiotlb(dev))
 		return iommu_dma_map_sg_swiotlb(dev, sg, nents, dir, attrs);
 
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
@@ -1073,7 +1077,7 @@ static void iommu_dma_unmap_sg(struct de
 	struct scatterlist *tmp;
 	int i;
 
-	if (dev_is_untrusted(dev)) {
+	if (dev_use_swiotlb(dev)) {
 		iommu_dma_unmap_sg_swiotlb(dev, sg, nents, dir, attrs);
 		return;
 	}
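
[Illustrative note, not part of the patch] A minimal, out-of-tree sketch of
the pattern the patch applies: fold the compile-time CONFIG_SWIOTLB check and
the runtime "untrusted device" check into one helper, so every call site asks
the same single question. The struct device, IS_ENABLED() and CONFIG_SWIOTLB
definitions below are stand-ins for user-space compilation, not the kernel
ones.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the Kconfig symbol and <linux/kconfig.h> macro. */
#define CONFIG_SWIOTLB 1
#define IS_ENABLED(option) (option)

/* Stand-in for the kernel's struct device / pci_dev->untrusted flag. */
struct device {
	bool untrusted;
};

static bool dev_is_untrusted(const struct device *dev)
{
	return dev->untrusted;
}

/*
 * The helper the patch introduces: the only place that needs to know
 * whether SWIOTLB support is compiled in at all.
 */
static bool dev_use_swiotlb(const struct device *dev)
{
	return IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev);
}

int main(void)
{
	struct device trusted = { .untrusted = false };
	struct device untrusted = { .untrusted = true };

	/* Call sites no longer repeat the CONFIG_SWIOTLB test themselves. */
	printf("trusted device bounces:   %d\n", dev_use_swiotlb(&trusted));
	printf("untrusted device bounces: %d\n", dev_use_swiotlb(&untrusted));
	return 0;
}

With CONFIG_SWIOTLB disabled, IS_ENABLED() evaluates to a constant zero, so
the compiler can drop the swiotlb-only branches at every call site; that is
what lets the swiotlb-related code be removed more aggressively.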