From patchwork Mon Jun 6 06:19:23 2022
X-Patchwork-Submitter: Nicolin Chen
X-Patchwork-Id: 579740
From: Nicolin Chen
Subject: [PATCH 1/5] iommu: Return -EMEDIUMTYPE for incompatible domain and device/group
Date: Sun, 5 Jun 2022 23:19:23 -0700
Message-ID: <20220606061927.26049-2-nicolinc@nvidia.com>
In-Reply-To: <20220606061927.26049-1-nicolinc@nvidia.com>
References: <20220606061927.26049-1-nicolinc@nvidia.com>
X-Mailing-List: linux-samsung-soc@vger.kernel.org

Cases like VFIO wish to attach a device to an existing domain that was
not allocated specifically for that device. This raises a condition
where the IOMMU driver can fail the domain attach because the domain and
device are incompatible with each other. This is a soft failure that can
be resolved by using a different domain.

Provide a dedicated errno from the IOMMU driver during attach to
indicate that the attach failed because of domain incompatibility.
EMEDIUMTYPE is chosen because it is never used within the iommu
subsystem today and evokes a sense that the 'medium' (i.e. the domain)
is incompatible. VFIO can use this to know that the attach is a soft
failure and that it should continue searching; any other error code is a
hard failure, which VFIO returns to userspace.

Update all drivers to return EMEDIUMTYPE in their failure paths that are
related to domain incompatibility. Add kdocs describing this behavior.
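A caller that wants to reuse existing domains can key off this errno. The sketch below is a minimal user-space model of that retry loop, not VFIO or kernel code; `fake_domain`, `fake_attach`, and `attach_first_compatible` are hypothetical stand-ins:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

#ifndef EMEDIUMTYPE
#define EMEDIUMTYPE 124	/* Linux errno value; defined here for portability */
#endif

/* Hypothetical stand-in for an IOMMU domain. */
struct fake_domain { int compatible; };

static int fake_attach(struct fake_domain *d)
{
	/* An incompatible domain is a soft failure: -EMEDIUMTYPE. */
	return d->compatible ? 0 : -EMEDIUMTYPE;
}

/*
 * VFIO-style caller: keep trying existing domains on -EMEDIUMTYPE,
 * stop on any other (hard) error. Returns the attached domain, or
 * NULL with *err set when no existing domain could be used.
 */
static struct fake_domain *attach_first_compatible(struct fake_domain *domains,
						   size_t n, int *err)
{
	for (size_t i = 0; i < n; i++) {
		int ret = fake_attach(&domains[i]);

		if (ret == 0)
			return &domains[i];	/* success */
		if (ret != -EMEDIUMTYPE) {
			*err = ret;		/* hard failure: stop */
			return NULL;
		}
		/* soft failure: try the next existing domain */
	}
	*err = -EMEDIUMTYPE;	/* caller would allocate a new domain here */
	return NULL;
}
```

On -EMEDIUMTYPE the loop keeps searching; on success or any other errno it stops immediately, mirroring the soft/hard split described above.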
Suggested-by: Jason Gunthorpe
Signed-off-by: Nicolin Chen
---
 drivers/iommu/amd/iommu.c                   |  2 +-
 drivers/iommu/apple-dart.c                  |  4 ++--
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c |  6 +++---
 drivers/iommu/arm/arm-smmu/qcom_iommu.c     |  2 +-
 drivers/iommu/intel/iommu.c                 |  4 ++--
 drivers/iommu/iommu.c                       | 22 +++++++++++++++++++++
 drivers/iommu/ipmmu-vmsa.c                  |  2 +-
 drivers/iommu/omap-iommu.c                  |  2 +-
 drivers/iommu/virtio-iommu.c                |  2 +-
 9 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 840831d5d2ad..ad499658a6b6 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1662,7 +1662,7 @@ static int attach_device(struct device *dev,
 	if (domain->flags & PD_IOMMUV2_MASK) {
 		struct iommu_domain *def_domain = iommu_get_dma_domain(dev);
 
-		ret = -EINVAL;
+		ret = -EMEDIUMTYPE;
 		if (def_domain->type != IOMMU_DOMAIN_IDENTITY)
 			goto out;
 
diff --git a/drivers/iommu/apple-dart.c b/drivers/iommu/apple-dart.c
index 8af0242a90d9..e58dc310afd7 100644
--- a/drivers/iommu/apple-dart.c
+++ b/drivers/iommu/apple-dart.c
@@ -495,10 +495,10 @@ static int apple_dart_attach_dev(struct iommu_domain *domain,
 
 	if (cfg->stream_maps[0].dart->force_bypass &&
 	    domain->type != IOMMU_DOMAIN_IDENTITY)
-		return -EINVAL;
+		return -EMEDIUMTYPE;
 	if (!cfg->stream_maps[0].dart->supports_bypass &&
 	    domain->type == IOMMU_DOMAIN_IDENTITY)
-		return -EINVAL;
+		return -EMEDIUMTYPE;
 
 	ret = apple_dart_finalize_domain(domain, cfg);
 	if (ret)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 88817a3376ef..6c393cd84925 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2424,20 +2424,20 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 			"cannot attach to SMMU %s (upstream of %s)\n",
 			dev_name(smmu_domain->smmu->dev),
 			dev_name(smmu->dev));
-		ret = -ENXIO;
+		ret = -EMEDIUMTYPE;
 		goto out_unlock;
 	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1 &&
 		   master->ssid_bits != smmu_domain->s1_cfg.s1cdmax) {
 		dev_err(dev,
 			"cannot attach to incompatible domain (%u SSID bits != %u)\n",
 			smmu_domain->s1_cfg.s1cdmax, master->ssid_bits);
-		ret = -EINVAL;
+		ret = -EMEDIUMTYPE;
 		goto out_unlock;
 	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1 &&
 		   smmu_domain->stall_enabled != master->stall_enabled) {
 		dev_err(dev, "cannot attach to stall-%s domain\n",
 			smmu_domain->stall_enabled ? "enabled" : "disabled");
-		ret = -EINVAL;
+		ret = -EMEDIUMTYPE;
 		goto out_unlock;
 	}
diff --git a/drivers/iommu/arm/arm-smmu/qcom_iommu.c b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
index 4c077c38fbd6..a8b63b855ffb 100644
--- a/drivers/iommu/arm/arm-smmu/qcom_iommu.c
+++ b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
@@ -386,7 +386,7 @@ static int qcom_iommu_attach_dev(struct iommu_domain *domain, struct device *dev
 			"attached to domain on IOMMU %s\n",
 			dev_name(qcom_domain->iommu->dev),
 			dev_name(qcom_iommu->dev));
-		return -EINVAL;
+		return -EMEDIUMTYPE;
 	}
 
 	return 0;
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 44016594831d..0813b119d680 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4323,7 +4323,7 @@ static int prepare_domain_attach_device(struct iommu_domain *domain,
 		return -ENODEV;
 
 	if (dmar_domain->force_snooping && !ecap_sc_support(iommu->ecap))
-		return -EOPNOTSUPP;
+		return -EMEDIUMTYPE;
 
 	/* check if this iommu agaw is sufficient for max mapped address */
 	addr_width = agaw_to_width(iommu->agaw);
@@ -4334,7 +4334,7 @@ static int prepare_domain_attach_device(struct iommu_domain *domain,
 		dev_err(dev, "%s: iommu width (%d) is not "
 			"sufficient for the mapped address (%llx)\n",
 			__func__, addr_width, dmar_domain->max_addr);
-		return -EFAULT;
+		return -EMEDIUMTYPE;
 	}
 	dmar_domain->gaw = addr_width;
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 847ad47a2dfd..19cf28d40ebe 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1972,6 +1972,17 @@ static int __iommu_attach_device(struct iommu_domain *domain,
 	return ret;
 }
 
+/**
+ * iommu_attach_device - Attach a device to an IOMMU domain
+ * @domain: IOMMU domain to attach
+ * @dev: Device that will be attached
+ *
+ * Returns 0 on success and error code on failure
+ *
+ * Specifically, -EMEDIUMTYPE is returned if the domain and the device are
+ * incompatible in some way. This indicates that a caller should try another
+ * existing IOMMU domain or allocate a new one.
+ */
 int iommu_attach_device(struct iommu_domain *domain, struct device *dev)
 {
 	struct iommu_group *group;
@@ -2098,6 +2109,17 @@ static int __iommu_attach_group(struct iommu_domain *domain,
 	return ret;
 }
 
+/**
+ * iommu_attach_group - Attach an IOMMU group to an IOMMU domain
+ * @domain: IOMMU domain to attach
+ * @group: IOMMU group that will be attached
+ *
+ * Returns 0 on success and error code on failure
+ *
+ * Specifically, -EMEDIUMTYPE is returned if the domain and the group are
+ * incompatible in some way. This indicates that a caller should try another
+ * existing IOMMU domain or allocate a new one.
+ */
 int iommu_attach_group(struct iommu_domain *domain, struct iommu_group *group)
 {
 	int ret;
diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 8fdb84b3642b..e491e410add5 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -630,7 +630,7 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
 		 */
 		dev_err(dev, "Can't attach IPMMU %s to domain on IPMMU %s\n",
 			dev_name(mmu->dev), dev_name(domain->mmu->dev));
-		ret = -EINVAL;
+		ret = -EMEDIUMTYPE;
 	} else
 		dev_info(dev, "Reusing IPMMU context %u\n",
 			 domain->context_id);
diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
index d9cf2820c02e..bbc6c4cd7aae 100644
--- a/drivers/iommu/omap-iommu.c
+++ b/drivers/iommu/omap-iommu.c
@@ -1472,7 +1472,7 @@ omap_iommu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	/* only a single client device can be attached to a domain */
 	if (omap_domain->dev) {
 		dev_err(dev, "iommu domain is already attached\n");
-		ret = -EBUSY;
+		ret = -EMEDIUMTYPE;
 		goto out;
 	}
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 25be4b822aa0..e3b812d8fa96 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -734,7 +734,7 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
 		ret = viommu_domain_finalise(vdev, domain);
 	} else if (vdomain->viommu != vdev->viommu) {
 		dev_err(dev, "cannot attach to foreign vIOMMU\n");
-		ret = -EXDEV;
+		ret = -EMEDIUMTYPE;
 	}
 	mutex_unlock(&vdomain->mutex);

From patchwork Mon Jun 6 06:19:24 2022
X-Patchwork-Submitter: Nicolin Chen
X-Patchwork-Id: 579214
From: Nicolin Chen
Subject: [PATCH 2/5] iommu: Ensure device has the same iommu_ops as the domain
Date: Sun, 5 Jun 2022 23:19:24 -0700
Message-ID: <20220606061927.26049-3-nicolinc@nvidia.com>
In-Reply-To: <20220606061927.26049-1-nicolinc@nvidia.com>
References: <20220606061927.26049-1-nicolinc@nvidia.com>
X-Mailing-List: linux-samsung-soc@vger.kernel.org

The core code should not call an iommu driver op with a struct device
parameter unless it knows that the dev_iommu_priv_get() for that struct
device was set up by the same driver. Otherwise, in a mixed-driver
system, the iommu_priv could be cast to the wrong type.

Store the iommu_ops pointer in the iommu_domain_ops and use it as a
check to validate that the struct device is correct before invoking any
domain op that accepts a struct device.

This allows removing the check of the domain op equality in VFIO.
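The pointer-identity check can be illustrated with a small stand-alone model. This is a hypothetical sketch, not kernel code: `fake_ops`, `fake_device`, and `fake_attach_device` stand in for `struct iommu_ops`, `struct device`, and `__iommu_attach_device`:

```c
#include <assert.h>
#include <stddef.h>

#define EMEDIUMTYPE 124	/* Linux errno value; defined here for illustration */

/* Stand-in for struct iommu_ops; identity, not contents, is what matters. */
struct fake_ops { int id; };

static const struct fake_ops driver_a_ops = { 1 };
static const struct fake_ops driver_b_ops = { 2 };

struct fake_device {
	const struct fake_ops *bus_ops;	/* ops of the driver that probed it */
	void *priv;			/* only meaningful to that driver */
};

struct fake_domain_ops { const struct fake_ops *iommu_ops; };
struct fake_domain { const struct fake_domain_ops *ops; };

/*
 * Refuse the attach before any driver op could mis-cast dev->priv:
 * the device must have been probed by the same driver that owns the
 * domain, which the two ops pointers being identical guarantees.
 */
static int fake_attach_device(struct fake_domain *domain,
			      struct fake_device *dev)
{
	if (dev->bus_ops != domain->ops->iommu_ops)
		return -EMEDIUMTYPE;
	return 0;	/* same driver: dereferencing dev->priv is safe */
}
```

Because every driver's `default_domain_ops` carries a back-pointer to its own ops, a single pointer comparison is enough; no per-driver type tag is needed.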
Co-developed-by: Jason Gunthorpe
Signed-off-by: Jason Gunthorpe
Signed-off-by: Nicolin Chen
---
 drivers/iommu/amd/iommu.c                   | 1 +
 drivers/iommu/apple-dart.c                  | 1 +
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 1 +
 drivers/iommu/arm/arm-smmu/arm-smmu.c       | 1 +
 drivers/iommu/arm/arm-smmu/qcom_iommu.c     | 1 +
 drivers/iommu/exynos-iommu.c                | 1 +
 drivers/iommu/fsl_pamu_domain.c             | 1 +
 drivers/iommu/intel/iommu.c                 | 1 +
 drivers/iommu/iommu.c                       | 4 ++++
 drivers/iommu/ipmmu-vmsa.c                  | 1 +
 drivers/iommu/msm_iommu.c                   | 1 +
 drivers/iommu/mtk_iommu.c                   | 1 +
 drivers/iommu/mtk_iommu_v1.c                | 1 +
 drivers/iommu/omap-iommu.c                  | 1 +
 drivers/iommu/rockchip-iommu.c              | 1 +
 drivers/iommu/s390-iommu.c                  | 1 +
 drivers/iommu/sprd-iommu.c                  | 1 +
 drivers/iommu/sun50i-iommu.c                | 1 +
 drivers/iommu/tegra-gart.c                  | 1 +
 drivers/iommu/tegra-smmu.c                  | 1 +
 drivers/iommu/virtio-iommu.c                | 1 +
 include/linux/iommu.h                       | 2 ++
 22 files changed, 26 insertions(+)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index ad499658a6b6..679f7a265013 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2285,6 +2285,7 @@ const struct iommu_ops amd_iommu_ops = {
 	.pgsize_bitmap	= AMD_IOMMU_PGSIZES,
 	.def_domain_type = amd_iommu_def_domain_type,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &amd_iommu_ops,
 		.attach_dev	= amd_iommu_attach_device,
 		.detach_dev	= amd_iommu_detach_device,
 		.map		= amd_iommu_map,
diff --git a/drivers/iommu/apple-dart.c b/drivers/iommu/apple-dart.c
index e58dc310afd7..3d36d9a12aa7 100644
--- a/drivers/iommu/apple-dart.c
+++ b/drivers/iommu/apple-dart.c
@@ -775,6 +775,7 @@ static const struct iommu_ops apple_dart_iommu_ops = {
 	.pgsize_bitmap = -1UL, /* Restricted during dart probe */
 	.owner = THIS_MODULE,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &apple_dart_iommu_ops,
 		.attach_dev	= apple_dart_attach_dev,
 		.detach_dev	= apple_dart_detach_dev,
 		.map_pages	= apple_dart_map_pages,
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 6c393cd84925..471ceb60427c 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2859,6 +2859,7 @@ static struct iommu_ops arm_smmu_ops = {
 	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
 	.owner			= THIS_MODULE,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops		= &arm_smmu_ops,
 		.attach_dev		= arm_smmu_attach_dev,
 		.map_pages		= arm_smmu_map_pages,
 		.unmap_pages		= arm_smmu_unmap_pages,
diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index 2ed3594f384e..52c2589a4deb 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -1597,6 +1597,7 @@ static struct iommu_ops arm_smmu_ops = {
 	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
 	.owner			= THIS_MODULE,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops		= &arm_smmu_ops,
 		.attach_dev		= arm_smmu_attach_dev,
 		.map_pages		= arm_smmu_map_pages,
 		.unmap_pages		= arm_smmu_unmap_pages,
diff --git a/drivers/iommu/arm/arm-smmu/qcom_iommu.c b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
index a8b63b855ffb..8806a621f81e 100644
--- a/drivers/iommu/arm/arm-smmu/qcom_iommu.c
+++ b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
@@ -596,6 +596,7 @@ static const struct iommu_ops qcom_iommu_ops = {
 	.of_xlate	= qcom_iommu_of_xlate,
 	.pgsize_bitmap	= SZ_4K | SZ_64K | SZ_1M | SZ_16M,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &qcom_iommu_ops,
 		.attach_dev	= qcom_iommu_attach_dev,
 		.detach_dev	= qcom_iommu_detach_dev,
 		.map		= qcom_iommu_map,
diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index 71f2018e23fe..fa93f94313e3 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -1315,6 +1315,7 @@ static const struct iommu_ops exynos_iommu_ops = {
 	.pgsize_bitmap = SECT_SIZE | LPAGE_SIZE | SPAGE_SIZE,
 	.of_xlate = exynos_iommu_of_xlate,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &exynos_iommu_ops,
 		.attach_dev	= exynos_iommu_attach_device,
 		.detach_dev	= exynos_iommu_detach_device,
 		.map		= exynos_iommu_map,
diff --git a/drivers/iommu/fsl_pamu_domain.c b/drivers/iommu/fsl_pamu_domain.c
index 94b4589dc67c..7bdce4168d2c 100644
--- a/drivers/iommu/fsl_pamu_domain.c
+++ b/drivers/iommu/fsl_pamu_domain.c
@@ -458,6 +458,7 @@ static const struct iommu_ops fsl_pamu_ops = {
 	.release_device	= fsl_pamu_release_device,
 	.device_group	= fsl_pamu_device_group,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &fsl_pamu_ops,
 		.attach_dev	= fsl_pamu_attach_device,
 		.detach_dev	= fsl_pamu_detach_device,
 		.iova_to_phys	= fsl_pamu_iova_to_phys,
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 0813b119d680..c6022484ca2d 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4925,6 +4925,7 @@ const struct iommu_ops intel_iommu_ops = {
 	.page_response		= intel_svm_page_response,
 #endif
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops		= &intel_iommu_ops,
 		.attach_dev		= intel_iommu_attach_device,
 		.detach_dev		= intel_iommu_detach_device,
 		.map_pages		= intel_iommu_map_pages,
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 19cf28d40ebe..8a1f437a51f2 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1963,6 +1963,10 @@ static int __iommu_attach_device(struct iommu_domain *domain,
 {
 	int ret;
 
+	/* Ensure the device was probed onto the same driver as the domain */
+	if (dev->bus->iommu_ops != domain->ops->iommu_ops)
+		return -EMEDIUMTYPE;
+
 	if (unlikely(domain->ops->attach_dev == NULL))
 		return -ENODEV;
 
diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index e491e410add5..767b93da5800 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -877,6 +877,7 @@ static const struct iommu_ops ipmmu_ops = {
 	.pgsize_bitmap = SZ_1G | SZ_2M | SZ_4K,
 	.of_xlate = ipmmu_of_xlate,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &ipmmu_ops,
 		.attach_dev	= ipmmu_attach_device,
 		.detach_dev	= ipmmu_detach_device,
 		.map		= ipmmu_map,
diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
index f09aedfdd462..29f6a6d5691e 100644
--- a/drivers/iommu/msm_iommu.c
+++ b/drivers/iommu/msm_iommu.c
@@ -682,6 +682,7 @@ static struct iommu_ops msm_iommu_ops = {
 	.pgsize_bitmap = MSM_IOMMU_PGSIZES,
 	.of_xlate = qcom_iommu_of_xlate,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &msm_iommu_ops,
 		.attach_dev	= msm_iommu_attach_dev,
 		.detach_dev	= msm_iommu_detach_dev,
 		.map		= msm_iommu_map,
diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index bb9dd92c9898..c5c45f65077d 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -937,6 +937,7 @@ static const struct iommu_ops mtk_iommu_ops = {
 	.pgsize_bitmap	= SZ_4K | SZ_64K | SZ_1M | SZ_16M,
 	.owner		= THIS_MODULE,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &mtk_iommu_ops,
 		.attach_dev	= mtk_iommu_attach_device,
 		.detach_dev	= mtk_iommu_detach_device,
 		.map		= mtk_iommu_map,
diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
index e1cb51b9866c..77c53580f730 100644
--- a/drivers/iommu/mtk_iommu_v1.c
+++ b/drivers/iommu/mtk_iommu_v1.c
@@ -594,6 +594,7 @@ static const struct iommu_ops mtk_iommu_v1_ops = {
 	.pgsize_bitmap = ~0UL << MT2701_IOMMU_PAGE_SHIFT,
 	.owner = THIS_MODULE,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &mtk_iommu_v1_ops,
 		.attach_dev	= mtk_iommu_v1_attach_device,
 		.detach_dev	= mtk_iommu_v1_detach_device,
 		.map		= mtk_iommu_v1_map,
diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
index bbc6c4cd7aae..a0bf85ccebcd 100644
--- a/drivers/iommu/omap-iommu.c
+++ b/drivers/iommu/omap-iommu.c
@@ -1739,6 +1739,7 @@ static const struct iommu_ops omap_iommu_ops = {
 	.device_group	= omap_iommu_device_group,
 	.pgsize_bitmap	= OMAP_IOMMU_PGSIZES,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &omap_iommu_ops,
 		.attach_dev	= omap_iommu_attach_dev,
 		.detach_dev	= omap_iommu_detach_dev,
 		.map		= omap_iommu_map,
diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
index ab57c4b8fade..5f5387e902e0 100644
--- a/drivers/iommu/rockchip-iommu.c
+++ b/drivers/iommu/rockchip-iommu.c
@@ -1193,6 +1193,7 @@ static const struct iommu_ops rk_iommu_ops = {
 	.pgsize_bitmap = RK_IOMMU_PGSIZE_BITMAP,
 	.of_xlate = rk_iommu_of_xlate,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &rk_iommu_ops,
 		.attach_dev	= rk_iommu_attach_device,
 		.detach_dev	= rk_iommu_detach_device,
 		.map		= rk_iommu_map,
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index c898bcbbce11..62e6d152b0a0 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -377,6 +377,7 @@ static const struct iommu_ops s390_iommu_ops = {
 	.device_group = generic_device_group,
 	.pgsize_bitmap = S390_IOMMU_PGSIZES,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &s390_iommu_ops,
 		.attach_dev	= s390_iommu_attach_device,
 		.detach_dev	= s390_iommu_detach_device,
 		.map		= s390_iommu_map,
diff --git a/drivers/iommu/sprd-iommu.c b/drivers/iommu/sprd-iommu.c
index bd409bab6286..6e8ca34d6a00 100644
--- a/drivers/iommu/sprd-iommu.c
+++ b/drivers/iommu/sprd-iommu.c
@@ -423,6 +423,7 @@ static const struct iommu_ops sprd_iommu_ops = {
 	.pgsize_bitmap	= ~0UL << SPRD_IOMMU_PAGE_SHIFT,
 	.owner		= THIS_MODULE,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &sprd_iommu_ops,
 		.attach_dev	= sprd_iommu_attach_device,
 		.detach_dev	= sprd_iommu_detach_device,
 		.map		= sprd_iommu_map,
diff --git a/drivers/iommu/sun50i-iommu.c b/drivers/iommu/sun50i-iommu.c
index c54ab477b8fd..560cff8e0f04 100644
--- a/drivers/iommu/sun50i-iommu.c
+++ b/drivers/iommu/sun50i-iommu.c
@@ -766,6 +766,7 @@ static const struct iommu_ops sun50i_iommu_ops = {
 	.probe_device	= sun50i_iommu_probe_device,
 	.release_device	= sun50i_iommu_release_device,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &sun50i_iommu_ops,
 		.attach_dev	= sun50i_iommu_attach_device,
 		.detach_dev	= sun50i_iommu_detach_device,
 		.flush_iotlb_all = sun50i_iommu_flush_iotlb_all,
diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index a6700a40a6f8..cd4553611cc9 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -278,6 +278,7 @@ static const struct iommu_ops gart_iommu_ops = {
 	.pgsize_bitmap	= GART_IOMMU_PGSIZES,
 	.of_xlate	= gart_iommu_of_xlate,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &gart_iommu_ops,
 		.attach_dev	= gart_iommu_attach_dev,
 		.detach_dev	= gart_iommu_detach_dev,
 		.map		= gart_iommu_map,
diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 2f2b12033618..67c101d1ad66 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -971,6 +971,7 @@ static const struct iommu_ops tegra_smmu_ops = {
 	.of_xlate = tegra_smmu_of_xlate,
 	.pgsize_bitmap = SZ_4K,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops	= &tegra_smmu_ops,
 		.attach_dev	= tegra_smmu_attach_dev,
 		.detach_dev	= tegra_smmu_detach_dev,
 		.map		= tegra_smmu_map,
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index e3b812d8fa96..703d87922786 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -1017,6 +1017,7 @@ static struct iommu_ops viommu_ops = {
 	.of_xlate		= viommu_of_xlate,
 	.owner			= THIS_MODULE,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.iommu_ops		= &viommu_ops,
 		.attach_dev		= viommu_attach_dev,
 		.map			= viommu_map,
 		.unmap			= viommu_unmap,
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 5e1afe169549..77deaf4fc7f8 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -261,6 +261,7 @@ struct iommu_ops
{ /** * struct iommu_domain_ops - domain specific operations + * @iommu_ops: Pointer to the ops associated with compatible devices * @attach_dev: attach an iommu domain to a device * @detach_dev: detach an iommu domain from a device * @map: map a physically contiguous memory region to an iommu domain @@ -281,6 +282,7 @@ struct iommu_ops { * @free: Release the domain after use. */ struct iommu_domain_ops { + const struct iommu_ops *iommu_ops; int (*attach_dev)(struct iommu_domain *domain, struct device *dev); void (*detach_dev)(struct iommu_domain *domain, struct device *dev); From patchwork Mon Jun 6 06:19:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicolin Chen X-Patchwork-Id: 579215 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 555CCCCA485 for ; Mon, 6 Jun 2022 06:20:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229953AbiFFGUR (ORCPT ); Mon, 6 Jun 2022 02:20:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57924 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229895AbiFFGUE (ORCPT ); Mon, 6 Jun 2022 02:20:04 -0400 Received: from NAM11-CO1-obe.outbound.protection.outlook.com (mail-co1nam11on2049.outbound.protection.outlook.com [40.107.220.49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BAABA21E12; Sun, 5 Jun 2022 23:20:01 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; 
From: Nicolin Chen
Subject: [PATCH 3/5] vfio/iommu_type1: Prefer to reuse domains vs match enforced cache coherency
Date: Sun, 5 Jun 2022 23:19:25 -0700
Message-ID: <20220606061927.26049-4-nicolinc@nvidia.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220606061927.26049-1-nicolinc@nvidia.com>
References: <20220606061927.26049-1-nicolinc@nvidia.com>
MIME-Version: 1.0
Precedence: bulk
X-Mailing-List: linux-samsung-soc@vger.kernel.org

From: Jason Gunthorpe

The KVM mechanism for controlling wbinvd is only triggered during
kvm_vfio_group_add(), meaning it is a one-shot test done once the devices
are set up. So there is no value in trying to push a device that could do
enforced cache coherency to a dedicated domain vs reusing an existing
domain, since KVM won't be able to take advantage of it. This just wastes
domain memory.

Simplify this code and eliminate the test. This removes the only logic
that required a dummy domain to be attached prior to searching for a
matching domain, and simplifies the next patches.

If someday we want to optimize this further, the better approach is to
update the Intel driver so that enforce_cache_coherency() can work on a
domain that already has IOPTEs, and then call enforce_cache_coherency()
after detaching a device from a domain to upgrade the whole domain to
enforced cache coherency mode.

Signed-off-by: Jason Gunthorpe
Signed-off-by: Nicolin Chen
---
 drivers/vfio/vfio_iommu_type1.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index c13b9290e357..f4e3b423a453 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2285,9 +2285,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	 * testing if they're on the same bus_type.
 	 */
 	list_for_each_entry(d, &iommu->domain_list, next) {
-		if (d->domain->ops == domain->domain->ops &&
-		    d->enforce_cache_coherency ==
-			    domain->enforce_cache_coherency) {
+		if (d->domain->ops == domain->domain->ops) {
 			iommu_detach_group(domain->domain, group->iommu_group);
 			if (!iommu_attach_group(d->domain,
 						group->iommu_group)) {

From patchwork Mon Jun 6 06:19:26 2022
X-Patchwork-Submitter: Nicolin Chen
X-Patchwork-Id: 579213
From: Nicolin Chen
Subject: [PATCH 4/5] vfio/iommu_type1: Clean up update_dirty_scope in detach_group()
Date: Sun, 5 Jun 2022 23:19:26 -0700
Message-ID: <20220606061927.26049-5-nicolinc@nvidia.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220606061927.26049-1-nicolinc@nvidia.com>
References: <20220606061927.26049-1-nicolinc@nvidia.com>
MIME-Version: 1.0
Precedence: bulk
X-Mailing-List: linux-samsung-soc@vger.kernel.org

All devices in emulated_iommu_groups have pinned_page_dirty_scope set, so
update_dirty_scope in the first list_for_each_entry is always false. Clean
it up, and move the "if update_dirty_scope" part from the detach_group_done
routine to the domain_list part. Rename the "detach_group_done" goto label
accordingly.

Suggested-by: Jason Gunthorpe
Signed-off-by: Nicolin Chen
---
 drivers/vfio/vfio_iommu_type1.c | 27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index f4e3b423a453..b45b1cc118ef 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2463,14 +2463,12 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_domain *domain;
 	struct vfio_iommu_group *group;
-	bool update_dirty_scope = false;
 	LIST_HEAD(iova_copy);
 
 	mutex_lock(&iommu->lock);
 	list_for_each_entry(group, &iommu->emulated_iommu_groups, next) {
 		if (group->iommu_group != iommu_group)
 			continue;
-		update_dirty_scope = !group->pinned_page_dirty_scope;
 		list_del(&group->next);
 		kfree(group);
 
@@ -2479,7 +2477,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			WARN_ON(iommu->notifier.head);
 			vfio_iommu_unmap_unpin_all(iommu);
 		}
-		goto detach_group_done;
+		goto out_unlock;
 	}
 
 	/*
@@ -2495,9 +2493,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			continue;
 
 		iommu_detach_group(domain->domain, group->iommu_group);
-		update_dirty_scope = !group->pinned_page_dirty_scope;
 		list_del(&group->next);
-		kfree(group);
 		/*
 		 * Group ownership provides privilege, if the group list is
 		 * empty, the domain goes away. If it's the last domain with
@@ -2519,7 +2515,17 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			kfree(domain);
 			vfio_iommu_aper_expand(iommu, &iova_copy);
 			vfio_update_pgsize_bitmap(iommu);
+			/*
+			 * Removal of a group without dirty tracking may allow
+			 * the iommu scope to be promoted.
+			 */
+			if (!group->pinned_page_dirty_scope) {
+				iommu->num_non_pinned_groups--;
+				if (iommu->dirty_page_tracking)
+					vfio_iommu_populate_bitmap_full(iommu);
+			}
 		}
+		kfree(group);
 		break;
 	}
 
@@ -2528,16 +2534,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	else
 		vfio_iommu_iova_free(&iova_copy);
 
-detach_group_done:
-	/*
-	 * Removal of a group without dirty tracking may allow the iommu scope
-	 * to be promoted.
-	 */
-	if (update_dirty_scope) {
-		iommu->num_non_pinned_groups--;
-		if (iommu->dirty_page_tracking)
-			vfio_iommu_populate_bitmap_full(iommu);
-	}
+out_unlock:
 	mutex_unlock(&iommu->lock);
 }

From patchwork Mon Jun 6 06:19:27 2022
X-Patchwork-Submitter: Nicolin Chen
X-Patchwork-Id: 579739
From: Nicolin Chen
Subject: [PATCH 5/5] vfio/iommu_type1: Simplify group attachment
Date: Sun, 5 Jun 2022 23:19:27 -0700
Message-ID: <20220606061927.26049-6-nicolinc@nvidia.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220606061927.26049-1-nicolinc@nvidia.com>
References: <20220606061927.26049-1-nicolinc@nvidia.com>
MIME-Version: 1.0
Precedence: bulk
X-Mailing-List: linux-samsung-soc@vger.kernel.org

Un-inline the domain-specific logic from the attach/detach_group ops into
two paired functions, vfio_iommu_alloc_attach_domain() and
vfio_iommu_detach_destroy_domain(), that strictly deal with creating and
destroying struct vfio_domains.

Add the logic to check for the -EMEDIUMTYPE return code of
iommu_attach_group() and avoid the extra domain allocations and
attach/detach sequences of the old code. This allows an actual attach
error, like -ENOMEM, to be detected properly, instead of treating all
attach errors as an incompatible domain.

Remove the duplicated domain->ops comparison that is taken care of in the
IOMMU core.
Co-developed-by: Jason Gunthorpe Signed-off-by: Jason Gunthorpe Signed-off-by: Nicolin Chen --- drivers/vfio/vfio_iommu_type1.c | 306 +++++++++++++++++--------------- 1 file changed, 161 insertions(+), 145 deletions(-) diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c index b45b1cc118ef..c6f937e1d71f 100644 --- a/drivers/vfio/vfio_iommu_type1.c +++ b/drivers/vfio/vfio_iommu_type1.c @@ -86,6 +86,7 @@ struct vfio_domain { struct list_head group_list; bool fgsp : 1; /* Fine-grained super pages */ bool enforce_cache_coherency : 1; + bool msi_cookie : 1; }; struct vfio_dma { @@ -2153,12 +2154,161 @@ static void vfio_iommu_iova_insert_copy(struct vfio_iommu *iommu, list_splice_tail(iova_copy, iova); } +static struct vfio_domain * +vfio_iommu_alloc_attach_domain(struct bus_type *bus, struct vfio_iommu *iommu, + struct vfio_iommu_group *group) +{ + struct iommu_domain *new_domain; + struct vfio_domain *domain; + int ret = 0; + + /* Try to match an existing compatible domain */ + list_for_each_entry (domain, &iommu->domain_list, next) { + ret = iommu_attach_group(domain->domain, group->iommu_group); + if (ret == -EMEDIUMTYPE) + continue; + if (ret) + return ERR_PTR(ret); + list_add(&group->next, &domain->group_list); + return domain; + } + + new_domain = iommu_domain_alloc(bus); + if (!new_domain) + return ERR_PTR(-EIO); + + if (iommu->nesting) { + ret = iommu_enable_nesting(new_domain); + if (ret) + goto out_free_iommu_domain; + } + + ret = iommu_attach_group(new_domain, group->iommu_group); + if (ret) + goto out_free_iommu_domain; + + domain = kzalloc(sizeof(*domain), GFP_KERNEL); + if (!domain) { + ret = -ENOMEM; + goto out_detach; + } + + domain->domain = new_domain; + vfio_test_domain_fgsp(domain); + + /* + * If the IOMMU can block non-coherent operations (ie PCIe TLPs with + * no-snoop set) then VFIO always turns this feature on because on Intel + * platforms it optimizes KVM to disable wbinvd emulation. 
+	 */
+	if (new_domain->ops->enforce_cache_coherency)
+		domain->enforce_cache_coherency =
+			new_domain->ops->enforce_cache_coherency(new_domain);
+
+	/* replay mappings on new domains */
+	ret = vfio_iommu_replay(iommu, domain);
+	if (ret)
+		goto out_free_domain;
+
+	/*
+	 * An iommu backed group can dirty memory directly and therefore
+	 * demotes the iommu scope until it declares itself dirty tracking
+	 * capable via the page pinning interface.
+	 */
+	iommu->num_non_pinned_groups++;
+
+	INIT_LIST_HEAD(&domain->group_list);
+	list_add(&group->next, &domain->group_list);
+	list_add(&domain->next, &iommu->domain_list);
+	vfio_update_pgsize_bitmap(iommu);
+	return domain;
+
+out_free_domain:
+	kfree(domain);
+out_detach:
+	iommu_detach_group(domain->domain, group->iommu_group);
+out_free_iommu_domain:
+	iommu_domain_free(new_domain);
+	return ERR_PTR(ret);
+}
+
+static void vfio_iommu_unmap_unpin_all(struct vfio_iommu *iommu)
+{
+	struct rb_node *node;
+
+	while ((node = rb_first(&iommu->dma_list)))
+		vfio_remove_dma(iommu, rb_entry(node, struct vfio_dma, node));
+}
+
+static void vfio_iommu_unmap_unpin_reaccount(struct vfio_iommu *iommu)
+{
+	struct rb_node *n, *p;
+
+	n = rb_first(&iommu->dma_list);
+	for (; n; n = rb_next(n)) {
+		struct vfio_dma *dma;
+		long locked = 0, unlocked = 0;
+
+		dma = rb_entry(n, struct vfio_dma, node);
+		unlocked += vfio_unmap_unpin(iommu, dma, false);
+		p = rb_first(&dma->pfn_list);
+		for (; p; p = rb_next(p)) {
+			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
+							 node);
+
+			if (!is_invalid_reserved_pfn(vpfn->pfn))
+				locked++;
+		}
+		vfio_lock_acct(dma, locked - unlocked, true);
+	}
+}
+
+static void vfio_iommu_detach_destroy_domain(struct vfio_domain *domain,
+					     struct vfio_iommu *iommu,
+					     struct vfio_iommu_group *group)
+{
+	iommu_detach_group(domain->domain, group->iommu_group);
+	list_del(&group->next);
+	if (!list_empty(&domain->group_list))
+		return;
+
+	/*
+	 * Group ownership provides privilege, if the group list is empty, the
+	 * domain goes away. If it's the last domain with iommu and external
+	 * domain doesn't exist, then all the mappings go away too. If it's the
+	 * last domain with iommu and external domain exist, update accounting
+	 */
+	if (list_is_singular(&iommu->domain_list)) {
+		if (list_empty(&iommu->emulated_iommu_groups)) {
+			WARN_ON(iommu->notifier.head);
+			vfio_iommu_unmap_unpin_all(iommu);
+		} else {
+			vfio_iommu_unmap_unpin_reaccount(iommu);
+		}
+	}
+	iommu_domain_free(domain->domain);
+	list_del(&domain->next);
+	kfree(domain);
+
+	/*
+	 * Removal of a group without dirty tracking may allow the iommu scope
+	 * to be promoted.
+	 */
+	if (!group->pinned_page_dirty_scope) {
+		iommu->num_non_pinned_groups--;
+		if (iommu->dirty_page_tracking)
+			vfio_iommu_populate_bitmap_full(iommu);
+	}
+
+	vfio_update_pgsize_bitmap(iommu);
+}
+
 static int vfio_iommu_type1_attach_group(void *iommu_data,
 					 struct iommu_group *iommu_group,
 					 enum vfio_group_type type)
 {
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_iommu_group *group;
-	struct vfio_domain *domain, *d;
+	struct vfio_domain *domain;
 	struct bus_type *bus = NULL;
 	bool resv_msi, msi_remap;
 	phys_addr_t resv_msi_base = 0;
@@ -2197,26 +2347,12 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	if (ret)
 		goto out_free_group;
 
-	ret = -ENOMEM;
-	domain = kzalloc(sizeof(*domain), GFP_KERNEL);
-	if (!domain)
+	domain = vfio_iommu_alloc_attach_domain(bus, iommu, group);
+	if (IS_ERR(domain)) {
+		ret = PTR_ERR(domain);
 		goto out_free_group;
-
-	ret = -EIO;
-	domain->domain = iommu_domain_alloc(bus);
-	if (!domain->domain)
-		goto out_free_domain;
-
-	if (iommu->nesting) {
-		ret = iommu_enable_nesting(domain->domain);
-		if (ret)
-			goto out_domain;
 	}
 
-	ret = iommu_attach_group(domain->domain, group->iommu_group);
-	if (ret)
-		goto out_domain;
-
 	/* Get aperture info */
 	geo = &domain->domain->geometry;
 	if (vfio_iommu_aper_conflict(iommu, geo->aperture_start,
@@ -2254,9 +2390,6 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	resv_msi = vfio_iommu_has_sw_msi(&group_resv_regions, &resv_msi_base);
 
-	INIT_LIST_HEAD(&domain->group_list);
-	list_add(&group->next, &domain->group_list);
-
 	msi_remap = irq_domain_check_msi_remap() ||
 			iommu_capable(bus, IOMMU_CAP_INTR_REMAP);
 
@@ -2267,117 +2400,32 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 		goto out_detach;
 	}
 
-	/*
-	 * If the IOMMU can block non-coherent operations (ie PCIe TLPs with
-	 * no-snoop set) then VFIO always turns this feature on because on Intel
-	 * platforms it optimizes KVM to disable wbinvd emulation.
-	 */
-	if (domain->domain->ops->enforce_cache_coherency)
-		domain->enforce_cache_coherency =
-			domain->domain->ops->enforce_cache_coherency(
-				domain->domain);
-
-	/*
-	 * Try to match an existing compatible domain. We don't want to
-	 * preclude an IOMMU driver supporting multiple bus_types and being
-	 * able to include different bus_types in the same IOMMU domain, so
-	 * we test whether the domains use the same iommu_ops rather than
-	 * testing if they're on the same bus_type.
-	 */
-	list_for_each_entry(d, &iommu->domain_list, next) {
-		if (d->domain->ops == domain->domain->ops) {
-			iommu_detach_group(domain->domain, group->iommu_group);
-			if (!iommu_attach_group(d->domain,
-						group->iommu_group)) {
-				list_add(&group->next, &d->group_list);
-				iommu_domain_free(domain->domain);
-				kfree(domain);
-				goto done;
-			}
-
-			ret = iommu_attach_group(domain->domain,
-						 group->iommu_group);
-			if (ret)
-				goto out_domain;
-		}
-	}
-
-	vfio_test_domain_fgsp(domain);
-
-	/* replay mappings on new domains */
-	ret = vfio_iommu_replay(iommu, domain);
-	if (ret)
-		goto out_detach;
-
-	if (resv_msi) {
+	if (resv_msi && !domain->msi_cookie) {
 		ret = iommu_get_msi_cookie(domain->domain, resv_msi_base);
 		if (ret && ret != -ENODEV)
 			goto out_detach;
+		domain->msi_cookie = true;
 	}
 
-	list_add(&domain->next, &iommu->domain_list);
-	vfio_update_pgsize_bitmap(iommu);
-done:
 	/* Delete the old one and insert new iova list */
 	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
 
-	/*
-	 * An iommu backed group can dirty memory directly and therefore
-	 * demotes the iommu scope until it declares itself dirty tracking
-	 * capable via the page pinning interface.
-	 */
-	iommu->num_non_pinned_groups++;
-
 	mutex_unlock(&iommu->lock);
 	vfio_iommu_resv_free(&group_resv_regions);
 
 	return 0;
 
 out_detach:
-	iommu_detach_group(domain->domain, group->iommu_group);
-out_domain:
-	iommu_domain_free(domain->domain);
-	vfio_iommu_iova_free(&iova_copy);
-	vfio_iommu_resv_free(&group_resv_regions);
-out_free_domain:
-	kfree(domain);
+	vfio_iommu_detach_destroy_domain(domain, iommu, group);
 out_free_group:
 	kfree(group);
 out_unlock:
 	mutex_unlock(&iommu->lock);
+	vfio_iommu_iova_free(&iova_copy);
+	vfio_iommu_resv_free(&group_resv_regions);
 	return ret;
 }
 
-static void vfio_iommu_unmap_unpin_all(struct vfio_iommu *iommu)
-{
-	struct rb_node *node;
-
-	while ((node = rb_first(&iommu->dma_list)))
-		vfio_remove_dma(iommu, rb_entry(node, struct vfio_dma, node));
-}
-
-static void vfio_iommu_unmap_unpin_reaccount(struct vfio_iommu *iommu)
-{
-	struct rb_node *n, *p;
-
-	n = rb_first(&iommu->dma_list);
-	for (; n; n = rb_next(n)) {
-		struct vfio_dma *dma;
-		long locked = 0, unlocked = 0;
-
-		dma = rb_entry(n, struct vfio_dma, node);
-		unlocked += vfio_unmap_unpin(iommu, dma, false);
-		p = rb_first(&dma->pfn_list);
-		for (; p; p = rb_next(p)) {
-			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
-							 node);
-
-			if (!is_invalid_reserved_pfn(vpfn->pfn))
-				locked++;
-		}
-		vfio_lock_acct(dma, locked - unlocked, true);
-	}
-}
-
 /*
  * Called when a domain is removed in detach. It is possible that
  * the removed domain decided the iova aperture window. Modify the
@@ -2491,44 +2539,12 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 		group = find_iommu_group(domain, iommu_group);
 		if (!group)
 			continue;
-
-		iommu_detach_group(domain->domain, group->iommu_group);
-		list_del(&group->next);
-		/*
-		 * Group ownership provides privilege, if the group list is
-		 * empty, the domain goes away. If it's the last domain with
-		 * iommu and external domain doesn't exist, then all the
-		 * mappings go away too. If it's the last domain with iommu and
-		 * external domain exist, update accounting
-		 */
-		if (list_empty(&domain->group_list)) {
-			if (list_is_singular(&iommu->domain_list)) {
-				if (list_empty(&iommu->emulated_iommu_groups)) {
-					WARN_ON(iommu->notifier.head);
-					vfio_iommu_unmap_unpin_all(iommu);
-				} else {
-					vfio_iommu_unmap_unpin_reaccount(iommu);
-				}
-			}
-			iommu_domain_free(domain->domain);
-			list_del(&domain->next);
-			kfree(domain);
-			vfio_iommu_aper_expand(iommu, &iova_copy);
-			vfio_update_pgsize_bitmap(iommu);
-			/*
-			 * Removal of a group without dirty tracking may allow
-			 * the iommu scope to be promoted.
-			 */
-			if (!group->pinned_page_dirty_scope) {
-				iommu->num_non_pinned_groups--;
-				if (iommu->dirty_page_tracking)
-					vfio_iommu_populate_bitmap_full(iommu);
-			}
-		}
+		vfio_iommu_detach_destroy_domain(domain, iommu, group);
 		kfree(group);
 		break;
 	}
 
+	vfio_iommu_aper_expand(iommu, &iova_copy);
 	if (!vfio_iommu_resv_refresh(iommu, &iova_copy))
 		vfio_iommu_iova_insert_copy(iommu, &iova_copy);
 	else