From patchwork Mon Apr 4 12:41:59 2022
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 558903
From: Shameer Kolothum
Subject: [PATCH v9 01/11] ACPI/IORT: Add temporary RMR node flag definitions
Date: Mon, 4 Apr 2022 13:41:59 +0100
Message-ID: <20220404124209.1086-2-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com>
References: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com>
List-ID: X-Mailing-List: linux-acpi@vger.kernel.org

IORT rev E.d introduces more details into the RMR node Flags field. Add
temporary definitions to describe and access this Flags field until the
ACPICA header is updated to support E.d. This patch can be reverted once
include/acpi/actbl2.h has all the relevant definitions.

Signed-off-by: Shameer Kolothum
---
Please find the ACPICA E.d related changes pull request here:
https://github.com/acpica/acpica/pull/765
This is now merged to acpica:master.
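For orientation, here is a minimal sketch (not part of the patch) of how the
temporary flag definitions below are meant to be consumed when turning an RMR
node's Flags value into IOMMU protection bits. It mirrors the logic added
later in this series (patch 06); the rmr_flags_to_prot() helper name is purely
illustrative.

/*
 * Illustrative only: map an IORT RMR node Flags value to IOMMU prot bits.
 * Assumes <linux/iommu.h> plus the ACPI_IORT_RMR_* definitions added below.
 */
static int rmr_flags_to_prot(u32 flags)
{
	int prot = IOMMU_READ | IOMMU_WRITE;

	if (flags & ACPI_IORT_RMR_ACCESS_PRIVILEGE)
		prot |= IOMMU_PRIV;

	/* Attribute values 0x00 - 0x03 describe Device memory */
	if (ACPI_IORT_RMR_ACCESS_ATTRIBUTES(flags) <= ACPI_IORT_RMR_ATTR_DEVICE_GRE)
		prot |= IOMMU_MMIO;
	else if (ACPI_IORT_RMR_ACCESS_ATTRIBUTES(flags) ==
		 ACPI_IORT_RMR_ATTR_NORMAL_IWB_OWB)
		prot |= IOMMU_CACHE;

	return prot;
}

Note that ACPI_IORT_RMR_REMAP_PERMITTED is not a protection bit; patch 06 uses
it to pick between the IOMMU_RESV_DIRECT and IOMMU_RESV_DIRECT_RELAXABLE
reserved-region types instead.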
--- drivers/acpi/arm64/iort.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index f2f8f05662de..fd06cf43ba31 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -25,6 +25,30 @@ #define IORT_IOMMU_TYPE ((1 << ACPI_IORT_NODE_SMMU) | \ (1 << ACPI_IORT_NODE_SMMU_V3)) +/* + * The following RMR related definitions are temporary and + * can be removed once ACPICA headers support IORT rev E.d + */ +#ifndef ACPI_IORT_RMR_REMAP_PERMITTED +#define ACPI_IORT_RMR_REMAP_PERMITTED (1) +#endif + +#ifndef ACPI_IORT_RMR_ACCESS_PRIVILEGE +#define ACPI_IORT_RMR_ACCESS_PRIVILEGE (1 << 1) +#endif + +#ifndef ACPI_IORT_RMR_ACCESS_ATTRIBUTES +#define ACPI_IORT_RMR_ACCESS_ATTRIBUTES(flags) (((flags) >> 2) & 0xFF) +#endif + +#ifndef ACPI_IORT_RMR_ATTR_DEVICE_GRE +#define ACPI_IORT_RMR_ATTR_DEVICE_GRE 0x03 +#endif + +#ifndef ACPI_IORT_RMR_ATTR_NORMAL_IWB_OWB +#define ACPI_IORT_RMR_ATTR_NORMAL_IWB_OWB 0x05 +#endif + struct iort_its_msi_chip { struct list_head list; struct fwnode_handle *fw_node; From patchwork Mon Apr 4 12:42:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 556048 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2AA21C433FE for ; Mon, 4 Apr 2022 12:43:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229683AbiDDMpK (ORCPT ); Mon, 4 Apr 2022 08:45:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52836 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233970AbiDDMpK (ORCPT ); Mon, 4 Apr 2022 08:45:10 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7A55A3C4B8 for ; Mon, 4 Apr 2022 05:43:14 -0700 (PDT) Received: from fraeml740-chm.china.huawei.com (unknown [172.18.147.226]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KX9PN5ZRBz686RQ; Mon, 4 Apr 2022 20:41:12 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml740-chm.china.huawei.com (10.206.15.221) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 14:43:12 +0200 Received: from A2006125610.china.huawei.com (10.47.93.34) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 13:43:04 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v9 02/11] iommu: Introduce a union to struct iommu_resv_region Date: Mon, 4 Apr 2022 13:42:00 +0100 Message-ID: <20220404124209.1086-3-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> References: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.47.93.34] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org A union is introduced to struct iommu_resv_region to hold any firmware 
specific data. This is in preparation to add support for IORT RMR reserve regions and the union now holds the RMR specific information. Signed-off-by: Shameer Kolothum --- include/linux/iommu.h | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 9208eca4b0d1..733f46b14ac8 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -127,6 +127,11 @@ enum iommu_resv_type { IOMMU_RESV_SW_MSI, }; +struct iommu_iort_rmr_data { + const u32 *sids; /* Stream IDs associated with IORT RMR entry */ + u32 num_sids; +}; + /** * struct iommu_resv_region - descriptor for a reserved memory region * @list: Linked list pointers @@ -134,6 +139,7 @@ enum iommu_resv_type { * @length: Length of the region in bytes * @prot: IOMMU Protection flags (READ/WRITE/...) * @type: Type of the reserved region + * @fw_data: Firmware-specific data */ struct iommu_resv_region { struct list_head list; @@ -141,6 +147,9 @@ struct iommu_resv_region { size_t length; int prot; enum iommu_resv_type type; + union { + struct iommu_iort_rmr_data rmr; + } fw_data; }; /** From patchwork Mon Apr 4 12:42:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 558902 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7BE66C433F5 for ; Mon, 4 Apr 2022 12:43:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347658AbiDDMp2 (ORCPT ); Mon, 4 Apr 2022 08:45:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53146 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347567AbiDDMpY (ORCPT ); Mon, 4 Apr 2022 08:45:24 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 08970E00 for ; Mon, 4 Apr 2022 05:43:27 -0700 (PDT) Received: from fraeml738-chm.china.huawei.com (unknown [172.18.147.226]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KX9Q02fN3z687ND; Mon, 4 Apr 2022 20:41:44 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml738-chm.china.huawei.com (10.206.15.219) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 14:43:25 +0200 Received: from A2006125610.china.huawei.com (10.47.93.34) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 13:43:17 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v9 03/11] ACPI/IORT: Make iort_iommu_msi_get_resv_regions() return void Date: Mon, 4 Apr 2022 13:42:01 +0100 Message-ID: <20220404124209.1086-4-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> References: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.47.93.34] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org At present iort_iommu_msi_get_resv_regions() 
returns the number of MSI reserved regions on success and there are no users for this. The reserved region list will get populated anyway for platforms that require the HW MSI region reservation. Hence, change the function to return void instead. Signed-off-by: Shameer Kolothum --- drivers/acpi/arm64/iort.c | 26 ++++++++++---------------- include/linux/acpi_iort.h | 6 +++--- 2 files changed, 13 insertions(+), 19 deletions(-) diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index fd06cf43ba31..c5ebb2be9a19 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -832,25 +832,23 @@ static struct acpi_iort_node *iort_get_msi_resv_iommu(struct device *dev) /** * iort_iommu_msi_get_resv_regions - Reserved region driver helper + * for HW MSI regions. * @dev: Device from iommu_get_resv_regions() * @head: Reserved region list from iommu_get_resv_regions() * - * Returns: Number of msi reserved regions on success (0 if platform - * doesn't require the reservation or no associated msi regions), - * appropriate error value otherwise. The ITS interrupt translation - * spaces (ITS_base + SZ_64K, SZ_64K) associated with the device - * are the msi reserved regions. + * The ITS interrupt translation spaces (ITS_base + SZ_64K, SZ_64K) + * associated with the device are the HW MSI reserved regions. */ -int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) { struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); struct acpi_iort_its_group *its; struct acpi_iort_node *iommu_node, *its_node = NULL; - int i, resv = 0; + int i; iommu_node = iort_get_msi_resv_iommu(dev); if (!iommu_node) - return 0; + return; /* * Current logic to reserve ITS regions relies on HW topologies @@ -870,7 +868,7 @@ int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) } if (!its_node) - return 0; + return; /* Move to ITS specific data */ its = (struct acpi_iort_its_group *)its_node->node_data; @@ -884,14 +882,10 @@ int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) region = iommu_alloc_resv_region(base + SZ_64K, SZ_64K, prot, IOMMU_RESV_MSI); - if (region) { + if (region) list_add_tail(®ion->list, head); - resv++; - } } } - - return (resv == its->its_count) ? 
resv : -ENODEV; } static inline bool iort_iommu_driver_enabled(u8 type) @@ -1058,8 +1052,8 @@ int iort_iommu_configure_id(struct device *dev, const u32 *id_in) } #else -int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) -{ return 0; } +void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +{ } int iort_iommu_configure_id(struct device *dev, const u32 *input_id) { return -ENODEV; } #endif diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h index f1f0842a2cb2..a8198b83753d 100644 --- a/include/linux/acpi_iort.h +++ b/include/linux/acpi_iort.h @@ -36,7 +36,7 @@ int iort_pmsi_get_dev_id(struct device *dev, u32 *dev_id); /* IOMMU interface */ int iort_dma_get_ranges(struct device *dev, u64 *size); int iort_iommu_configure_id(struct device *dev, const u32 *id_in); -int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head); +void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head); phys_addr_t acpi_iort_dma_get_max_cpu_address(void); #else static inline void acpi_iort_init(void) { } @@ -52,8 +52,8 @@ static inline int iort_dma_get_ranges(struct device *dev, u64 *size) static inline int iort_iommu_configure_id(struct device *dev, const u32 *id_in) { return -ENODEV; } static inline -int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) -{ return 0; } +void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +{ } static inline phys_addr_t acpi_iort_dma_get_max_cpu_address(void) { return PHYS_ADDR_MAX; } From patchwork Mon Apr 4 12:42:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 556047 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 839BFC433F5 for ; Mon, 4 Apr 2022 12:43:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347576AbiDDMpj (ORCPT ); Mon, 4 Apr 2022 08:45:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53674 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347754AbiDDMpd (ORCPT ); Mon, 4 Apr 2022 08:45:33 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 57753DFBE for ; Mon, 4 Apr 2022 05:43:37 -0700 (PDT) Received: from fraeml737-chm.china.huawei.com (unknown [172.18.147.207]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KX9Pq4TV3z685B2; Mon, 4 Apr 2022 20:41:35 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml737-chm.china.huawei.com (10.206.15.218) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 14:43:35 +0200 Received: from A2006125610.china.huawei.com (10.47.93.34) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 13:43:27 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v9 04/11] ACPI/IORT: Provide a generic helper to retrieve reserve regions Date: Mon, 4 Apr 2022 13:42:02 +0100 Message-ID: <20220404124209.1086-5-shameerali.kolothum.thodi@huawei.com> X-Mailer: 
git-send-email 2.12.0.windows.1 In-Reply-To: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> References: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.47.93.34] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org Currently IORT provides a helper to retrieve HW MSI reserve regions. Change this to a generic helper to retrieve any IORT related reserve regions. This will be useful when we add support for RMR nodes in subsequent patches. Signed-off-by: Shameer Kolothum --- drivers/acpi/arm64/iort.c | 23 +++++++++++++++-------- drivers/iommu/dma-iommu.c | 2 +- include/linux/acpi_iort.h | 4 ++-- 3 files changed, 18 insertions(+), 11 deletions(-) diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index c5ebb2be9a19..63acc3c5b275 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -830,16 +830,13 @@ static struct acpi_iort_node *iort_get_msi_resv_iommu(struct device *dev) return NULL; } -/** - * iort_iommu_msi_get_resv_regions - Reserved region driver helper - * for HW MSI regions. - * @dev: Device from iommu_get_resv_regions() - * @head: Reserved region list from iommu_get_resv_regions() - * +/* + * Retrieve platform specific HW MSI reserve regions. * The ITS interrupt translation spaces (ITS_base + SZ_64K, SZ_64K) * associated with the device are the HW MSI reserved regions. */ -void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +static void +iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) { struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); struct acpi_iort_its_group *its; @@ -888,6 +885,16 @@ void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) } } +/** + * iort_iommu_get_resv_regions - Generic helper to retrieve reserved regions. 
+ * @dev: Device from iommu_get_resv_regions() + * @head: Reserved region list from iommu_get_resv_regions() + */ +void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head) +{ + iort_iommu_msi_get_resv_regions(dev, head); +} + static inline bool iort_iommu_driver_enabled(u8 type) { switch (type) { @@ -1052,7 +1059,7 @@ int iort_iommu_configure_id(struct device *dev, const u32 *id_in) } #else -void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head) { } int iort_iommu_configure_id(struct device *dev, const u32 *input_id) { return -ENODEV; } diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 09f6e1c0f9c0..93d76b666888 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -384,7 +384,7 @@ void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list) { if (!is_of_node(dev_iommu_fwspec_get(dev)->iommu_fwnode)) - iort_iommu_msi_get_resv_regions(dev, list); + iort_iommu_get_resv_regions(dev, list); } EXPORT_SYMBOL(iommu_dma_get_resv_regions); diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h index a8198b83753d..e5d2de9caf7f 100644 --- a/include/linux/acpi_iort.h +++ b/include/linux/acpi_iort.h @@ -36,7 +36,7 @@ int iort_pmsi_get_dev_id(struct device *dev, u32 *dev_id); /* IOMMU interface */ int iort_dma_get_ranges(struct device *dev, u64 *size); int iort_iommu_configure_id(struct device *dev, const u32 *id_in); -void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head); +void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head); phys_addr_t acpi_iort_dma_get_max_cpu_address(void); #else static inline void acpi_iort_init(void) { } @@ -52,7 +52,7 @@ static inline int iort_dma_get_ranges(struct device *dev, u64 *size) static inline int iort_iommu_configure_id(struct device *dev, const u32 *id_in) { return -ENODEV; } static inline -void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head) { } static inline phys_addr_t acpi_iort_dma_get_max_cpu_address(void) From patchwork Mon Apr 4 12:42:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 558901 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1F418C433F5 for ; Mon, 4 Apr 2022 12:43:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233970AbiDDMpn (ORCPT ); Mon, 4 Apr 2022 08:45:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54242 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347542AbiDDMpn (ORCPT ); Mon, 4 Apr 2022 08:45:43 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8C5E51AF01 for ; Mon, 4 Apr 2022 05:43:47 -0700 (PDT) Received: from fraeml735-chm.china.huawei.com (unknown [172.18.147.201]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KX9QN2d1Yz686H8; Mon, 4 Apr 2022 20:42:04 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml735-chm.china.huawei.com (10.206.15.216) with Microsoft SMTP Server 
(version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 14:43:45 +0200 Received: from A2006125610.china.huawei.com (10.47.93.34) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 13:43:37 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v9 05/11] iommu/dma: Introduce a helper to remove reserved regions Date: Mon, 4 Apr 2022 13:42:03 +0100 Message-ID: <20220404124209.1086-6-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> References: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.47.93.34] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org Currently drivers use generic_iommu_put_resv_regions() to remove reserved regions. Introduce a dma-iommu specific reserve region removal helper(iommu_dma_put_resv_regions()). This will be useful when we introduce reserve regions with any firmware specific memory allocations(eg: IORT RMR) that have to be freed. Also update current users of iommu_dma_get_resv_regions() to use iommu_dma_put_resv_regions() for removal. Signed-off-by: Shameer Kolothum --- drivers/iommu/apple-dart.c | 2 +- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 2 +- drivers/iommu/arm/arm-smmu/arm-smmu.c | 2 +- drivers/iommu/dma-iommu.c | 6 ++++++ drivers/iommu/virtio-iommu.c | 2 +- include/linux/dma-iommu.h | 5 +++++ 6 files changed, 15 insertions(+), 4 deletions(-) diff --git a/drivers/iommu/apple-dart.c b/drivers/iommu/apple-dart.c index decafb07ad08..6c198a08e50f 100644 --- a/drivers/iommu/apple-dart.c +++ b/drivers/iommu/apple-dart.c @@ -771,7 +771,7 @@ static const struct iommu_ops apple_dart_iommu_ops = { .of_xlate = apple_dart_of_xlate, .def_domain_type = apple_dart_def_domain_type, .get_resv_regions = apple_dart_get_resv_regions, - .put_resv_regions = generic_iommu_put_resv_regions, + .put_resv_regions = iommu_dma_put_resv_regions, .pgsize_bitmap = -1UL, /* Restricted during dart probe */ .default_domain_ops = &(const struct iommu_domain_ops) { .attach_dev = apple_dart_attach_dev, diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 627a3ed5ee8f..efa38b4411f3 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2847,7 +2847,7 @@ static struct iommu_ops arm_smmu_ops = { .device_group = arm_smmu_device_group, .of_xlate = arm_smmu_of_xlate, .get_resv_regions = arm_smmu_get_resv_regions, - .put_resv_regions = generic_iommu_put_resv_regions, + .put_resv_regions = iommu_dma_put_resv_regions, .dev_has_feat = arm_smmu_dev_has_feature, .dev_feat_enabled = arm_smmu_dev_feature_enabled, .dev_enable_feat = arm_smmu_dev_enable_feature, diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c index 568cce590ccc..9a5b785d28fd 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c @@ -1589,7 +1589,7 @@ static struct iommu_ops arm_smmu_ops = { .device_group = arm_smmu_device_group, .of_xlate = arm_smmu_of_xlate, .get_resv_regions = arm_smmu_get_resv_regions, - .put_resv_regions = 
generic_iommu_put_resv_regions, + .put_resv_regions = iommu_dma_put_resv_regions, .def_domain_type = arm_smmu_def_domain_type, .pgsize_bitmap = -1UL, /* Restricted during device attach */ .owner = THIS_MODULE, diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 93d76b666888..44e3f3feaab6 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -389,6 +389,12 @@ void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list) } EXPORT_SYMBOL(iommu_dma_get_resv_regions); +void iommu_dma_put_resv_regions(struct device *dev, struct list_head *list) +{ + generic_iommu_put_resv_regions(dev, list); +} +EXPORT_SYMBOL(iommu_dma_put_resv_regions); + static int cookie_init_hw_msi_region(struct iommu_dma_cookie *cookie, phys_addr_t start, phys_addr_t end) { diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c index 25be4b822aa0..b8fea7576bbd 100644 --- a/drivers/iommu/virtio-iommu.c +++ b/drivers/iommu/virtio-iommu.c @@ -1013,7 +1013,7 @@ static struct iommu_ops viommu_ops = { .release_device = viommu_release_device, .device_group = viommu_device_group, .get_resv_regions = viommu_get_resv_regions, - .put_resv_regions = generic_iommu_put_resv_regions, + .put_resv_regions = iommu_dma_put_resv_regions, .of_xlate = viommu_of_xlate, .owner = THIS_MODULE, .default_domain_ops = &(const struct iommu_domain_ops) { diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h index 24607dc3c2ac..0628db1e3272 100644 --- a/include/linux/dma-iommu.h +++ b/include/linux/dma-iommu.h @@ -37,6 +37,7 @@ void iommu_dma_compose_msi_msg(struct msi_desc *desc, struct msi_msg *msg); void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list); +void iommu_dma_put_resv_regions(struct device *dev, struct list_head *list); void iommu_dma_free_cpu_cached_iovas(unsigned int cpu, struct iommu_domain *domain); @@ -89,5 +90,9 @@ static inline void iommu_dma_get_resv_regions(struct device *dev, struct list_he { } +static inline void iommu_dma_put_resv_regions(struct device *dev, struct list_head *list) +{ +} + #endif /* CONFIG_IOMMU_DMA */ #endif /* __DMA_IOMMU_H */ From patchwork Mon Apr 4 12:42:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 556046 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CC555C433F5 for ; Mon, 4 Apr 2022 12:44:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347542AbiDDMpz (ORCPT ); Mon, 4 Apr 2022 08:45:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54966 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347701AbiDDMpy (ORCPT ); Mon, 4 Apr 2022 08:45:54 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1204D30565 for ; Mon, 4 Apr 2022 05:43:58 -0700 (PDT) Received: from fraeml734-chm.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KX9QZ6ZTbz685ZZ; Mon, 4 Apr 2022 20:42:14 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml734-chm.china.huawei.com (10.206.15.215) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 
15.1.2375.24; Mon, 4 Apr 2022 14:43:56 +0200 Received: from A2006125610.china.huawei.com (10.47.93.34) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 13:43:48 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v9 06/11] ACPI/IORT: Add support to retrieve IORT RMR reserved regions Date: Mon, 4 Apr 2022 13:42:04 +0100 Message-ID: <20220404124209.1086-7-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> References: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.47.93.34] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org Parse through the IORT RMR nodes and populate the reserve region list corresponding to a given IOMMU and device(optional). Also, go through the ID mappings of the RMR node and retrieve all the SIDs associated with it. Now that we have this support, update iommu_dma_get/_put_resv_regions() paths to include the RMR reserve regions. Signed-off-by: Shameer Kolothum Reviewed-by: Lorenzo Pieralisi # for ACPI --- drivers/acpi/arm64/iort.c | 275 ++++++++++++++++++++++++++++++++++++++ drivers/iommu/dma-iommu.c | 3 + include/linux/acpi_iort.h | 4 + 3 files changed, 282 insertions(+) diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index 63acc3c5b275..1147387cfddb 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -812,6 +812,259 @@ void acpi_configure_pmsi_domain(struct device *dev) } #ifdef CONFIG_IOMMU_API +static void iort_rmr_desc_check_overlap(struct acpi_iort_rmr_desc *desc, u32 count) +{ + int i, j; + + for (i = 0; i < count; i++) { + u64 end, start = desc[i].base_address, length = desc[i].length; + + if (!length) { + pr_err(FW_BUG "RMR descriptor[0x%llx] with zero length, continue anyway\n", + start); + continue; + } + + end = start + length - 1; + + /* Check for address overlap */ + for (j = i + 1; j < count; j++) { + u64 e_start = desc[j].base_address; + u64 e_end = e_start + desc[j].length - 1; + + if (start <= e_end && end >= e_start) + pr_err(FW_BUG "RMR descriptor[0x%llx - 0x%llx] overlaps, continue anyway\n", + start, end); + } + } +} + +/* + * Please note, we will keep the already allocated RMR reserve + * regions in case of a memory allocation failure. 
+ */ +static void iort_get_rmrs(struct acpi_iort_node *node, + struct acpi_iort_node *smmu, + u32 *sids, u32 num_sids, + struct list_head *head) +{ + struct acpi_iort_rmr *rmr = (struct acpi_iort_rmr *)node->node_data; + struct acpi_iort_rmr_desc *rmr_desc; + int i; + + rmr_desc = ACPI_ADD_PTR(struct acpi_iort_rmr_desc, node, + rmr->rmr_offset); + + iort_rmr_desc_check_overlap(rmr_desc, rmr->rmr_count); + + for (i = 0; i < rmr->rmr_count; i++, rmr_desc++) { + struct iommu_resv_region *region; + enum iommu_resv_type type; + u32 *sids_copy; + int prot = IOMMU_READ | IOMMU_WRITE; + u64 addr = rmr_desc->base_address, size = rmr_desc->length; + + if (!IS_ALIGNED(addr, SZ_64K) || !IS_ALIGNED(size, SZ_64K)) { + /* PAGE align base addr and size */ + addr &= PAGE_MASK; + size = PAGE_ALIGN(size + offset_in_page(rmr_desc->base_address)); + + pr_err(FW_BUG "RMR descriptor[0x%llx - 0x%llx] not aligned to 64K, continue with [0x%llx - 0x%llx]\n", + rmr_desc->base_address, + rmr_desc->base_address + rmr_desc->length - 1, + addr, addr + size - 1); + } + + if (rmr->flags & ACPI_IORT_RMR_REMAP_PERMITTED) + type = IOMMU_RESV_DIRECT_RELAXABLE; + else + type = IOMMU_RESV_DIRECT; + + if (rmr->flags & ACPI_IORT_RMR_ACCESS_PRIVILEGE) + prot |= IOMMU_PRIV; + + /* Attributes 0x00 - 0x03 represents device memory */ + if (ACPI_IORT_RMR_ACCESS_ATTRIBUTES(rmr->flags) <= + ACPI_IORT_RMR_ATTR_DEVICE_GRE) + prot |= IOMMU_MMIO; + else if (ACPI_IORT_RMR_ACCESS_ATTRIBUTES(rmr->flags) == + ACPI_IORT_RMR_ATTR_NORMAL_IWB_OWB) + prot |= IOMMU_CACHE; + + /* Create a copy of SIDs array to associate with this resv region */ + sids_copy = kmemdup(sids, num_sids * sizeof(*sids), GFP_KERNEL); + if (!sids_copy) + return; + + region = iommu_alloc_resv_region(addr, size, prot, type); + if (!region) { + kfree(sids_copy); + return; + } + + region->fw_data.rmr.sids = sids_copy; + region->fw_data.rmr.num_sids = num_sids; + list_add_tail(®ion->list, head); + } +} + +static u32 *iort_rmr_alloc_sids(u32 *sids, u32 count, u32 id_start, + u32 new_count) +{ + u32 *new_sids; + u32 total_count = count + new_count; + int i; + + new_sids = krealloc_array(sids, count + new_count, + sizeof(*new_sids), GFP_KERNEL); + if (!new_sids) + return NULL; + + for (i = count; i < total_count; i++) + new_sids[i] = id_start++; + + return new_sids; +} + +static bool iort_rmr_has_dev(struct device *dev, u32 id_start, + u32 id_count) +{ + int i; + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + + /* + * Make sure the kernel has preserved the boot firmware PCIe + * configuration. This is required to ensure that the RMR PCIe + * StreamIDs are still valid (Refer: ARM DEN 0049E.d Section 3.1.1.5). 
+ */ + if (dev_is_pci(dev)) { + struct pci_dev *pdev = to_pci_dev(dev); + struct pci_host_bridge *host = pci_find_host_bridge(pdev->bus); + + if (!host->preserve_config) + return false; + } + + for (i = 0; i < fwspec->num_ids; i++) { + if (fwspec->ids[i] >= id_start && + fwspec->ids[i] <= id_start + id_count) + return true; + } + + return false; +} + +static void iort_node_get_rmr_info(struct acpi_iort_node *node, + struct acpi_iort_node *iommu, + struct device *dev, struct list_head *head) +{ + struct acpi_iort_node *smmu = NULL; + struct acpi_iort_rmr *rmr; + struct acpi_iort_id_mapping *map; + u32 *sids = NULL; + u32 num_sids = 0; + int i; + + if (!node->mapping_offset || !node->mapping_count) { + pr_err(FW_BUG "Invalid ID mapping, skipping RMR node %p\n", + node); + return; + } + + rmr = (struct acpi_iort_rmr *)node->node_data; + if (!rmr->rmr_offset || !rmr->rmr_count) + return; + + map = ACPI_ADD_PTR(struct acpi_iort_id_mapping, node, + node->mapping_offset); + + /* + * Go through the ID mappings and see if we have a match for SMMU + * and dev(if !NULL). If found, get the sids for the Node. + * Please note, id_count is equal to the number of IDs in the + * range minus one. + */ + for (i = 0; i < node->mapping_count; i++, map++) { + struct acpi_iort_node *parent; + + if (!map->id_count) + continue; + + parent = ACPI_ADD_PTR(struct acpi_iort_node, iort_table, + map->output_reference); + if (parent != iommu) + continue; + + /* If dev is valid, check RMR node corresponds to the dev SID */ + if (dev && !iort_rmr_has_dev(dev, map->output_base, + map->id_count)) + continue; + + /* Retrieve SIDs associated with the Node. */ + sids = iort_rmr_alloc_sids(sids, num_sids, map->output_base, + map->id_count + 1); + if (!sids) + return; + + num_sids += map->id_count + 1; + } + + if (!sids) + return; + + iort_get_rmrs(node, smmu, sids, num_sids, head); + kfree(sids); +} + +static void iort_find_rmrs(struct acpi_iort_node *iommu, struct device *dev, + struct list_head *head) +{ + struct acpi_table_iort *iort; + struct acpi_iort_node *iort_node, *iort_end; + int i; + + /* Only supports ARM DEN 0049E.d onwards */ + if (iort_table->revision < 5) + return; + + iort = (struct acpi_table_iort *)iort_table; + + iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort, + iort->node_offset); + iort_end = ACPI_ADD_PTR(struct acpi_iort_node, iort, + iort_table->length); + + for (i = 0; i < iort->node_count; i++) { + if (WARN_TAINT(iort_node >= iort_end, TAINT_FIRMWARE_WORKAROUND, + "IORT node pointer overflows, bad table!\n")) + return; + + if (iort_node->type == ACPI_IORT_NODE_RMR) + iort_node_get_rmr_info(iort_node, iommu, dev, head); + + iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort_node, + iort_node->length); + } +} + +/* + * Populate the RMR list associated with a given IOMMU and dev(if provided). + * If dev is NULL, the function populates all the RMRs associated with the + * given IOMMU. 
+ */ +static void +iort_iommu_rmr_get_resv_regions(struct fwnode_handle *iommu_fwnode, + struct device *dev, struct list_head *head) +{ + struct acpi_iort_node *iommu; + + iommu = iort_get_iort_node(iommu_fwnode); + if (!iommu) + return; + + iort_find_rmrs(iommu, dev, head); +} + static struct acpi_iort_node *iort_get_msi_resv_iommu(struct device *dev) { struct acpi_iort_node *iommu; @@ -892,7 +1145,27 @@ iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) */ void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head) { + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + iort_iommu_msi_get_resv_regions(dev, head); + iort_iommu_rmr_get_resv_regions(fwspec->iommu_fwnode, dev, head); +} + +/** + * iort_iommu_put_resv_regions - Free any IORT specific memory allocations + * associated with reserve regions. + * @dev: Associated device(Optional) + * @head: Resereved region list + */ +void iort_iommu_put_resv_regions(struct device *dev, struct list_head *head) +{ + struct iommu_resv_region *e, *tmp; + + /* RMR entries will have mem allocated for fw_data.rmr.sids. */ + list_for_each_entry_safe(e, tmp, head, list) { + if (e->fw_data.rmr.sids) + kfree(e->fw_data.rmr.sids); + } } static inline bool iort_iommu_driver_enabled(u8 type) @@ -1061,6 +1334,8 @@ int iort_iommu_configure_id(struct device *dev, const u32 *id_in) #else void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head) { } +void iort_iommu_put_resv_regions(struct device *dev, struct list_head *head) +{ } int iort_iommu_configure_id(struct device *dev, const u32 *input_id) { return -ENODEV; } #endif diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 44e3f3feaab6..5811233dc9fb 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -391,6 +391,9 @@ EXPORT_SYMBOL(iommu_dma_get_resv_regions); void iommu_dma_put_resv_regions(struct device *dev, struct list_head *list) { + if (!is_of_node(dev_iommu_fwspec_get(dev)->iommu_fwnode)) + iort_iommu_put_resv_regions(dev, list); + generic_iommu_put_resv_regions(dev, list); } EXPORT_SYMBOL(iommu_dma_put_resv_regions); diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h index e5d2de9caf7f..eb3c28853110 100644 --- a/include/linux/acpi_iort.h +++ b/include/linux/acpi_iort.h @@ -37,6 +37,7 @@ int iort_pmsi_get_dev_id(struct device *dev, u32 *dev_id); int iort_dma_get_ranges(struct device *dev, u64 *size); int iort_iommu_configure_id(struct device *dev, const u32 *id_in); void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head); +void iort_iommu_put_resv_regions(struct device *dev, struct list_head *head); phys_addr_t acpi_iort_dma_get_max_cpu_address(void); #else static inline void acpi_iort_init(void) { } @@ -54,6 +55,9 @@ static inline int iort_iommu_configure_id(struct device *dev, const u32 *id_in) static inline void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head) { } +static inline +void iort_iommu_put_resv_regions(struct device *dev, struct list_head *head) +{ } static inline phys_addr_t acpi_iort_dma_get_max_cpu_address(void) { return PHYS_ADDR_MAX; } From patchwork Mon Apr 4 12:42:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 558900 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
smtp.lore.kernel.org (Postfix) with ESMTP id 85357C433EF for ; Mon, 4 Apr 2022 12:44:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347634AbiDDMqE (ORCPT ); Mon, 4 Apr 2022 08:46:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55578 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347646AbiDDMqE (ORCPT ); Mon, 4 Apr 2022 08:46:04 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 05F6931923 for ; Mon, 4 Apr 2022 05:44:08 -0700 (PDT) Received: from fraeml714-chm.china.huawei.com (unknown [172.18.147.226]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KX9Qm669Sz685ZZ; Mon, 4 Apr 2022 20:42:24 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml714-chm.china.huawei.com (10.206.15.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 14:44:06 +0200 Received: from A2006125610.china.huawei.com (10.47.93.34) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 13:43:57 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v9 07/11] ACPI/IORT: Add a helper to retrieve RMR info directly Date: Mon, 4 Apr 2022 13:42:05 +0100 Message-ID: <20220404124209.1086-8-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> References: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.47.93.34] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org This will provide a way for SMMU drivers to retrieve StreamIDs associated with IORT RMR nodes and use that to set bypass settings for those IDs. Signed-off-by: Shameer Kolothum --- drivers/acpi/arm64/iort.c | 29 +++++++++++++++++++++++++++++ include/linux/acpi_iort.h | 8 ++++++++ 2 files changed, 37 insertions(+) diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index 1147387cfddb..fb2b0163c27d 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -1402,6 +1402,35 @@ int iort_dma_get_ranges(struct device *dev, u64 *size) return nc_dma_get_range(dev, size); } +/** + * iort_get_rmr_sids - Retrieve IORT RMR node reserved regions with + * associated StreamIDs information. + * @iommu_fwnode: fwnode associated with IOMMU + * @head: Resereved region list + */ +void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode, + struct list_head *head) +{ + iort_iommu_rmr_get_resv_regions(iommu_fwnode, NULL, head); +} +EXPORT_SYMBOL_GPL(iort_get_rmr_sids); + +/** + * iort_put_rmr_sids - Free all the memory allocated for RMR reserved regions. 
+ * @iommu_fwnode: fwnode associated with IOMMU + * @head: Resereved region list + */ +void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode, + struct list_head *head) +{ + struct iommu_resv_region *entry, *next; + + iort_iommu_put_resv_regions(NULL, head); + list_for_each_entry_safe(entry, next, head, list) + kfree(entry); +} +EXPORT_SYMBOL_GPL(iort_put_rmr_sids); + static void __init acpi_iort_register_irq(int hwirq, const char *name, int trigger, struct resource *res) diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h index eb3c28853110..774b8bc16573 100644 --- a/include/linux/acpi_iort.h +++ b/include/linux/acpi_iort.h @@ -33,6 +33,10 @@ struct irq_domain *iort_get_device_domain(struct device *dev, u32 id, enum irq_domain_bus_token bus_token); void acpi_configure_pmsi_domain(struct device *dev); int iort_pmsi_get_dev_id(struct device *dev, u32 *dev_id); +void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode, + struct list_head *head); +void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode, + struct list_head *head); /* IOMMU interface */ int iort_dma_get_ranges(struct device *dev, u64 *size); int iort_iommu_configure_id(struct device *dev, const u32 *id_in); @@ -47,6 +51,10 @@ static inline struct irq_domain *iort_get_device_domain( struct device *dev, u32 id, enum irq_domain_bus_token bus_token) { return NULL; } static inline void acpi_configure_pmsi_domain(struct device *dev) { } +static inline +void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode, struct list_head *head) { } +static inline +void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode, struct list_head *head) { } /* IOMMU interface */ static inline int iort_dma_get_ranges(struct device *dev, u64 *size) { return -ENODEV; } From patchwork Mon Apr 4 12:42:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 556045 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3CA3CC433EF for ; Mon, 4 Apr 2022 12:44:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347687AbiDDMqP (ORCPT ); Mon, 4 Apr 2022 08:46:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56180 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347646AbiDDMqO (ORCPT ); Mon, 4 Apr 2022 08:46:14 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1ED843C739 for ; Mon, 4 Apr 2022 05:44:19 -0700 (PDT) Received: from fraeml713-chm.china.huawei.com (unknown [172.18.147.201]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KX9Qz6r20z685ZZ; Mon, 4 Apr 2022 20:42:35 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml713-chm.china.huawei.com (10.206.15.32) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 14:44:17 +0200 Received: from A2006125610.china.huawei.com (10.47.93.34) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 13:44:08 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v9 08/11] iommu/arm-smmu-v3: 
Introduce strtab init helper Date: Mon, 4 Apr 2022 13:42:06 +0100 Message-ID: <20220404124209.1086-9-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> References: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.47.93.34] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org Introduce a helper to check the sid range and to init the l2 strtab entries(bypass). This will be useful when we have to initialize the l2 strtab with bypass for RMR SIDs. Signed-off-by: Shameer Kolothum --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 28 +++++++++++---------- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index efa38b4411f3..61558fdabbe3 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2537,6 +2537,19 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid) return sid < limit; } +static int arm_smmu_init_sid_strtab(struct arm_smmu_device *smmu, u32 sid) +{ + /* Check the SIDs are in range of the SMMU and our stream table */ + if (!arm_smmu_sid_in_range(smmu, sid)) + return -ERANGE; + + /* Ensure l2 strtab is initialised */ + if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) + return arm_smmu_init_l2_strtab(smmu, sid); + + return 0; +} + static int arm_smmu_insert_master(struct arm_smmu_device *smmu, struct arm_smmu_master *master) { @@ -2560,20 +2573,9 @@ static int arm_smmu_insert_master(struct arm_smmu_device *smmu, new_stream->id = sid; new_stream->master = master; - /* - * Check the SIDs are in range of the SMMU and our stream table - */ - if (!arm_smmu_sid_in_range(smmu, sid)) { - ret = -ERANGE; + ret = arm_smmu_init_sid_strtab(smmu, sid); + if (ret) break; - } - - /* Ensure l2 strtab is initialised */ - if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) { - ret = arm_smmu_init_l2_strtab(smmu, sid); - if (ret) - break; - } /* Insert into SID tree */ new_node = &(smmu->streams.rb_node); From patchwork Mon Apr 4 12:42:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 558899 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C448AC433F5 for ; Mon, 4 Apr 2022 12:44:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347646AbiDDMqZ (ORCPT ); Mon, 4 Apr 2022 08:46:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56912 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240113AbiDDMqZ (ORCPT ); Mon, 4 Apr 2022 08:46:25 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 12B363C73C for ; Mon, 4 Apr 2022 05:44:29 -0700 (PDT) Received: from fraeml715-chm.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KX9Qq2SdKz68652; Mon, 4 Apr 2022 20:42:27 +0800 (CST) Received: from lhreml710-chm.china.huawei.com 
(10.201.108.61) by fraeml715-chm.china.huawei.com (10.206.15.34) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 14:44:27 +0200 Received: from A2006125610.china.huawei.com (10.47.93.34) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 13:44:18 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v9 09/11] iommu/arm-smmu-v3: Refactor arm_smmu_init_bypass_stes() to force bypass Date: Mon, 4 Apr 2022 13:42:07 +0100 Message-ID: <20220404124209.1086-10-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> References: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.47.93.34] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org By default, disable_bypass flag is set and any dev without an iommu domain installs STE with CFG_ABORT during arm_smmu_init_bypass_stes(). Introduce a "force" flag and move the STE update logic to arm_smmu_init_bypass_stes() so that we can force it to install CFG_BYPASS STE for specific SIDs. This will be useful in a follow-up patch to install bypass for IORT RMR SIDs. Signed-off-by: Shameer Kolothum --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 17 +++++++++++++---- 1 file changed, 13 insertions(+), 4 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 61558fdabbe3..57f831c44155 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -1380,12 +1380,21 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd); } -static void arm_smmu_init_bypass_stes(__le64 *strtab, unsigned int nent) +static void arm_smmu_init_bypass_stes(__le64 *strtab, unsigned int nent, bool force) { unsigned int i; + u64 val = STRTAB_STE_0_V; + + if (disable_bypass && !force) + val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT); + else + val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_BYPASS); for (i = 0; i < nent; ++i) { - arm_smmu_write_strtab_ent(NULL, -1, strtab); + strtab[0] = cpu_to_le64(val); + strtab[1] = cpu_to_le64(FIELD_PREP(STRTAB_STE_1_SHCFG, + STRTAB_STE_1_SHCFG_INCOMING)); + strtab[2] = 0; strtab += STRTAB_STE_DWORDS; } } @@ -1413,7 +1422,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid) return -ENOMEM; } - arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT); + arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT, false); arm_smmu_write_strtab_l1_desc(strtab, desc); return 0; } @@ -3051,7 +3060,7 @@ static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu) reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, smmu->sid_bits); cfg->strtab_base_cfg = reg; - arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents); + arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents, false); return 0; } From patchwork Mon Apr 4 12:42:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 556044 Return-Path: X-Spam-Checker-Version: 
SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 269DBC433EF for ; Mon, 4 Apr 2022 12:44:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240113AbiDDMqf (ORCPT ); Mon, 4 Apr 2022 08:46:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57484 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347689AbiDDMqe (ORCPT ); Mon, 4 Apr 2022 08:46:34 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F365B3C739 for ; Mon, 4 Apr 2022 05:44:38 -0700 (PDT) Received: from fraeml710-chm.china.huawei.com (unknown [172.18.147.201]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KX9R1210Xz68652; Mon, 4 Apr 2022 20:42:37 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml710-chm.china.huawei.com (10.206.15.59) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 14:44:36 +0200 Received: from A2006125610.china.huawei.com (10.47.93.34) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 13:44:28 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v9 10/11] iommu/arm-smmu-v3: Get associated RMR info and install bypass STE Date: Mon, 4 Apr 2022 13:42:08 +0100 Message-ID: <20220404124209.1086-11-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> References: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.47.93.34] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org Check if there is any RMR info associated with the devices behind the SMMUv3 and if any, install bypass STEs for them. This is to keep any ongoing traffic associated with these devices alive when we enable/reset SMMUv3 during probe(). 
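For orientation before the diff, a condensed sketch of the retrieval pattern
this patch follows, distilled from the hunk below (the per-SID stream-table
programming is elided). The bypass entries have to be installed before
arm_smmu_device_reset() so that traffic to the RMR regions survives the reset.

	struct list_head rmr_list;
	struct iommu_resv_region *e;

	INIT_LIST_HEAD(&rmr_list);
	iort_get_rmr_sids(dev_fwnode(smmu->dev), &rmr_list);

	list_for_each_entry(e, &rmr_list, list) {
		const u32 *sids = e->fw_data.rmr.sids;
		u32 num_sids = e->fw_data.rmr.num_sids;
		int i;

		for (i = 0; i < num_sids; i++) {
			/* init the l2 strtab entry and write a bypass STE
			 * for sids[i] (see arm_smmu_init_sid_strtab() and
			 * arm_smmu_init_bypass_stes() in the diff below) */
		}
	}

	/* Frees both the per-region SID arrays and the regions themselves */
	iort_put_rmr_sids(dev_fwnode(smmu->dev), &rmr_list);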
Signed-off-by: Shameer Kolothum --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 33 +++++++++++++++++++++ 1 file changed, 33 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 57f831c44155..627a2b498e78 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -3754,6 +3754,36 @@ static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start, return devm_ioremap_resource(dev, &res); } +static void arm_smmu_rmr_install_bypass_ste(struct arm_smmu_device *smmu) +{ + struct list_head rmr_list; + struct iommu_resv_region *e; + + INIT_LIST_HEAD(&rmr_list); + iort_get_rmr_sids(dev_fwnode(smmu->dev), &rmr_list); + + list_for_each_entry(e, &rmr_list, list) { + __le64 *step; + const u32 *sids = e->fw_data.rmr.sids; + u32 num_sids = e->fw_data.rmr.num_sids; + int ret, i; + + for (i = 0; i < num_sids; i++) { + ret = arm_smmu_init_sid_strtab(smmu, sids[i]); + if (ret) { + dev_err(smmu->dev, "RMR SID(0x%x) bypass failed\n", + sids[i]); + continue; + } + + step = arm_smmu_get_step_for_sid(smmu, sids[i]); + arm_smmu_init_bypass_stes(step, 1, true); + } + } + + iort_put_rmr_sids(dev_fwnode(smmu->dev), &rmr_list); +} + static int arm_smmu_device_probe(struct platform_device *pdev) { int irq, ret; @@ -3835,6 +3865,9 @@ static int arm_smmu_device_probe(struct platform_device *pdev) /* Record our private device structure */ platform_set_drvdata(pdev, smmu); + /* Check for RMRs and install bypass STEs if any */ + arm_smmu_rmr_install_bypass_ste(smmu); + /* Reset the device */ ret = arm_smmu_device_reset(smmu, bypass); if (ret) From patchwork Mon Apr 4 12:42:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 558898 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B55F4C433F5 for ; Mon, 4 Apr 2022 12:44:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347696AbiDDMqp (ORCPT ); Mon, 4 Apr 2022 08:46:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58052 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347689AbiDDMqo (ORCPT ); Mon, 4 Apr 2022 08:46:44 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A5E103C739 for ; Mon, 4 Apr 2022 05:44:48 -0700 (PDT) Received: from fraeml712-chm.china.huawei.com (unknown [172.18.147.200]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KX9QC3msTz67wsC; Mon, 4 Apr 2022 20:41:55 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml712-chm.china.huawei.com (10.206.15.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 14:44:46 +0200 Received: from A2006125610.china.huawei.com (10.47.93.34) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 4 Apr 2022 13:44:38 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v9 11/11] iommu/arm-smmu: Get associated RMR info and install bypass SMR Date: Mon, 4 Apr 2022 13:42:09 +0100 
Message-ID: <20220404124209.1086-12-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> References: <20220404124209.1086-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.47.93.34] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org From: Jon Nettleton Check if there is any RMR info associated with the devices behind the SMMU and if any, install bypass SMRs for them. This is to keep any ongoing traffic associated with these devices alive when we enable/reset SMMU during probe(). Signed-off-by: Jon Nettleton Signed-off-by: Steven Price Signed-off-by: Shameer Kolothum --- drivers/iommu/arm/arm-smmu/arm-smmu.c | 52 +++++++++++++++++++++++++++ 1 file changed, 52 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c index 9a5b785d28fd..d1d0473b8b88 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c @@ -2068,6 +2068,54 @@ err_reset_platform_ops: __maybe_unused; return err; } +static void arm_smmu_rmr_install_bypass_smr(struct arm_smmu_device *smmu) +{ + struct list_head rmr_list; + struct iommu_resv_region *e; + int idx, cnt = 0; + u32 reg; + + INIT_LIST_HEAD(&rmr_list); + iort_get_rmr_sids(dev_fwnode(smmu->dev), &rmr_list); + + /* + * Rather than trying to look at existing mappings that + * are setup by the firmware and then invalidate the ones + * that do no have matching RMR entries, just disable the + * SMMU until it gets enabled again in the reset routine. + */ + reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sCR0); + reg |= ARM_SMMU_sCR0_CLIENTPD; + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sCR0, reg); + + list_for_each_entry(e, &rmr_list, list) { + const u32 *sids = e->fw_data.rmr.sids; + u32 num_sids = e->fw_data.rmr.num_sids; + int i; + + for (i = 0; i < num_sids; i++) { + idx = arm_smmu_find_sme(smmu, sids[i], ~0); + if (idx < 0) + continue; + + if (smmu->s2crs[idx].count == 0) { + smmu->smrs[idx].id = sids[i]; + smmu->smrs[idx].mask = 0; + smmu->smrs[idx].valid = true; + } + smmu->s2crs[idx].count++; + smmu->s2crs[idx].type = S2CR_TYPE_BYPASS; + smmu->s2crs[idx].privcfg = S2CR_PRIVCFG_DEFAULT; + + cnt++; + } + } + + dev_notice(smmu->dev, "\tpreserved %d boot mapping%s\n", cnt, + cnt == 1 ? "" : "s"); + iort_put_rmr_sids(dev_fwnode(smmu->dev), &rmr_list); +} + static int arm_smmu_device_probe(struct platform_device *pdev) { struct resource *res; @@ -2189,6 +2237,10 @@ static int arm_smmu_device_probe(struct platform_device *pdev) } platform_set_drvdata(pdev, smmu); + + /* Check for RMRs and install bypass SMRs if any */ + arm_smmu_rmr_install_bypass_smr(smmu); + arm_smmu_device_reset(smmu); arm_smmu_test_smr_masks(smmu);
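A closing note on the ordering this last patch relies on, as a condensed
sketch of the probe hunk above rather than new code: the RMR SMR/S2CR pairs
are programmed while the SMMU is held in client-port-disable, and only the
subsequent reset re-enables translation with those bypass entries live.

	platform_set_drvdata(pdev, smmu);

	/* Sets ARM_SMMU_sCR0_CLIENTPD to disable the SMMU, then programs a
	 * bypass SMR/S2CR pair for every RMR StreamID found in the IORT,
	 * instead of trying to reconcile whatever the firmware left mapped. */
	arm_smmu_rmr_install_bypass_smr(smmu);

	/* Re-enables the SMMU with the bypass entries already in place */
	arm_smmu_device_reset(smmu);
	arm_smmu_test_smr_masks(smmu);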