From patchwork Fri Apr 22 16:28:59 2022 X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 564953 From: Shameer Kolothum Subject: [PATCH v11 1/9] iommu: Introduce a callback to struct iommu_resv_region Date: Fri, 22 Apr 2022 17:28:59 +0100 Message-ID: <20220422162907.1276-2-shameerali.kolothum.thodi@huawei.com> In-Reply-To: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> References: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> X-Mailing-List: linux-acpi@vger.kernel.org A callback is introduced to struct iommu_resv_region to free memory allocations associated with the reserved region. This will be useful when we introduce support for IORT RMR based reserved regions. Signed-off-by: Shameer Kolothum Reviewed-by: Christoph Hellwig --- drivers/iommu/iommu.c | 16 +++++++++++----- include/linux/iommu.h | 2 ++ 2 files changed, 13 insertions(+), 5 deletions(-) diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index f2c45b85b9fc..ffcfa684e80c 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -2597,16 +2597,22 @@ void iommu_put_resv_regions(struct device *dev, struct list_head *list) * @list: reserved region list for device * * IOMMU drivers can use this to implement their .put_resv_regions() callback - for simple reservations. Memory allocated for each reserved region will be - freed. If an IOMMU driver allocates additional resources per region, it is - going to have to implement a custom callback. + for simple reservations. 
If a per region callback is provided that will be + * used to free all memory allocations associated with the reserved region or + * else just free up the memory for the regions. If an IOMMU driver allocates + * additional resources per region, it is going to have to implement a custom + * callback. */ void generic_iommu_put_resv_regions(struct device *dev, struct list_head *list) { struct iommu_resv_region *entry, *next; - list_for_each_entry_safe(entry, next, list, list) - kfree(entry); + list_for_each_entry_safe(entry, next, list, list) { + if (entry->free) + entry->free(dev, entry); + else + kfree(entry); + } } EXPORT_SYMBOL(generic_iommu_put_resv_regions); diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 9208eca4b0d1..68bcfb3a06d7 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -134,6 +134,7 @@ enum iommu_resv_type { * @length: Length of the region in bytes * @prot: IOMMU Protection flags (READ/WRITE/...) * @type: Type of the reserved region + * @free: Callback to free associated memory allocations */ struct iommu_resv_region { struct list_head list; @@ -141,6 +142,7 @@ struct iommu_resv_region { size_t length; int prot; enum iommu_resv_type type; + void (*free)(struct device *dev, struct iommu_resv_region *region); }; /** From patchwork Fri Apr 22 16:29:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 565352 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 11379C433FE for ; Fri, 22 Apr 2022 16:29:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1354418AbiDVQcs (ORCPT ); Fri, 22 Apr 2022 12:32:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34132 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1445398AbiDVQcq (ORCPT ); Fri, 22 Apr 2022 12:32:46 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 655505F246 for ; Fri, 22 Apr 2022 09:29:51 -0700 (PDT) Received: from fraeml734-chm.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KlKXT52RPz6H6gm; Sat, 23 Apr 2022 00:26:01 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml734-chm.china.huawei.com (10.206.15.215) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 22 Apr 2022 18:29:49 +0200 Received: from A2006125610.china.huawei.com (10.202.227.178) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 22 Apr 2022 17:29:41 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v11 2/9] ACPI/IORT: Make iort_iommu_msi_get_resv_regions() return void Date: Fri, 22 Apr 2022 17:29:00 +0100 Message-ID: <20220422162907.1276-3-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> References: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.202.227.178] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) 
To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org At present iort_iommu_msi_get_resv_regions() returns the number of MSI reserved regions on success and there are no users for this. The reserved region list will get populated anyway for platforms that require the HW MSI region reservation. Hence, change the function to return void instead. Reviewed-by: Christoph Hellwig Signed-off-by: Shameer Kolothum --- drivers/acpi/arm64/iort.c | 25 +++++++++---------------- include/linux/acpi_iort.h | 6 +++--- 2 files changed, 12 insertions(+), 19 deletions(-) diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index f2f8f05662de..213f61cae176 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -811,22 +811,19 @@ static struct acpi_iort_node *iort_get_msi_resv_iommu(struct device *dev) * @dev: Device from iommu_get_resv_regions() * @head: Reserved region list from iommu_get_resv_regions() * - * Returns: Number of msi reserved regions on success (0 if platform - * doesn't require the reservation or no associated msi regions), - * appropriate error value otherwise. The ITS interrupt translation - * spaces (ITS_base + SZ_64K, SZ_64K) associated with the device - * are the msi reserved regions. + * The ITS interrupt translation spaces (ITS_base + SZ_64K, SZ_64K) + * associated with the device are the HW MSI reserved regions. */ -int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) { struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); struct acpi_iort_its_group *its; struct acpi_iort_node *iommu_node, *its_node = NULL; - int i, resv = 0; + int i; iommu_node = iort_get_msi_resv_iommu(dev); if (!iommu_node) - return 0; + return; /* * Current logic to reserve ITS regions relies on HW topologies @@ -846,7 +843,7 @@ int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) } if (!its_node) - return 0; + return; /* Move to ITS specific data */ its = (struct acpi_iort_its_group *)its_node->node_data; @@ -860,14 +857,10 @@ int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) region = iommu_alloc_resv_region(base + SZ_64K, SZ_64K, prot, IOMMU_RESV_MSI); - if (region) { + if (region) list_add_tail(®ion->list, head); - resv++; - } } } - - return (resv == its->its_count) ? 
resv : -ENODEV; } static inline bool iort_iommu_driver_enabled(u8 type) @@ -1034,8 +1027,8 @@ int iort_iommu_configure_id(struct device *dev, const u32 *id_in) } #else -int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) -{ return 0; } +void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +{ } int iort_iommu_configure_id(struct device *dev, const u32 *input_id) { return -ENODEV; } #endif diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h index f1f0842a2cb2..a8198b83753d 100644 --- a/include/linux/acpi_iort.h +++ b/include/linux/acpi_iort.h @@ -36,7 +36,7 @@ int iort_pmsi_get_dev_id(struct device *dev, u32 *dev_id); /* IOMMU interface */ int iort_dma_get_ranges(struct device *dev, u64 *size); int iort_iommu_configure_id(struct device *dev, const u32 *id_in); -int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head); +void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head); phys_addr_t acpi_iort_dma_get_max_cpu_address(void); #else static inline void acpi_iort_init(void) { } @@ -52,8 +52,8 @@ static inline int iort_dma_get_ranges(struct device *dev, u64 *size) static inline int iort_iommu_configure_id(struct device *dev, const u32 *id_in) { return -ENODEV; } static inline -int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) -{ return 0; } +void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +{ } static inline phys_addr_t acpi_iort_dma_get_max_cpu_address(void) { return PHYS_ADDR_MAX; } From patchwork Fri Apr 22 16:29:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 564952 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BC63EC433EF for ; Fri, 22 Apr 2022 16:30:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1449691AbiDVQc6 (ORCPT ); Fri, 22 Apr 2022 12:32:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34390 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242741AbiDVQc6 (ORCPT ); Fri, 22 Apr 2022 12:32:58 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 613C6193FA for ; Fri, 22 Apr 2022 09:30:04 -0700 (PDT) Received: from fraeml713-chm.china.huawei.com (unknown [172.18.147.226]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KlKZ9626Vz687XH; Sat, 23 Apr 2022 00:27:29 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml713-chm.china.huawei.com (10.206.15.32) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 22 Apr 2022 18:30:02 +0200 Received: from A2006125610.china.huawei.com (10.202.227.178) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 22 Apr 2022 17:29:54 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v11 3/9] ACPI/IORT: Provide a generic helper to retrieve reserve regions Date: Fri, 22 Apr 2022 17:29:01 +0100 Message-ID: <20220422162907.1276-4-shameerali.kolothum.thodi@huawei.com> 
X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> References: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.202.227.178] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org Currently IORT provides a helper to retrieve HW MSI reserve regions. Change this to a generic helper to retrieve any IORT related reserve regions. This will be useful when we add support for RMR nodes in subsequent patches. [Lorenzo: For ACPI IORT] Reviewed-by: Lorenzo Pieralisi Reviewed-by: Christoph Hellwig Signed-off-by: Shameer Kolothum --- drivers/acpi/arm64/iort.c | 22 +++++++++++++++------- drivers/iommu/dma-iommu.c | 2 +- include/linux/acpi_iort.h | 4 ++-- 3 files changed, 18 insertions(+), 10 deletions(-) diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index 213f61cae176..cd5d1d7823cb 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -806,15 +806,13 @@ static struct acpi_iort_node *iort_get_msi_resv_iommu(struct device *dev) return NULL; } -/** - * iort_iommu_msi_get_resv_regions - Reserved region driver helper - * @dev: Device from iommu_get_resv_regions() - * @head: Reserved region list from iommu_get_resv_regions() - * +/* + * Retrieve platform specific HW MSI reserve regions. * The ITS interrupt translation spaces (ITS_base + SZ_64K, SZ_64K) * associated with the device are the HW MSI reserved regions. */ -void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +static void iort_iommu_msi_get_resv_regions(struct device *dev, + struct list_head *head) { struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); struct acpi_iort_its_group *its; @@ -863,6 +861,16 @@ void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) } } +/** + * iort_iommu_get_resv_regions - Generic helper to retrieve reserved regions. 
+ * @dev: Device from iommu_get_resv_regions() + * @head: Reserved region list from iommu_get_resv_regions() + */ +void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head) +{ + iort_iommu_msi_get_resv_regions(dev, head); +} + static inline bool iort_iommu_driver_enabled(u8 type) { switch (type) { @@ -1027,7 +1035,7 @@ int iort_iommu_configure_id(struct device *dev, const u32 *id_in) } #else -void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head) { } int iort_iommu_configure_id(struct device *dev, const u32 *input_id) { return -ENODEV; } diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 09f6e1c0f9c0..93d76b666888 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -384,7 +384,7 @@ void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list) { if (!is_of_node(dev_iommu_fwspec_get(dev)->iommu_fwnode)) - iort_iommu_msi_get_resv_regions(dev, list); + iort_iommu_get_resv_regions(dev, list); } EXPORT_SYMBOL(iommu_dma_get_resv_regions); diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h index a8198b83753d..e5d2de9caf7f 100644 --- a/include/linux/acpi_iort.h +++ b/include/linux/acpi_iort.h @@ -36,7 +36,7 @@ int iort_pmsi_get_dev_id(struct device *dev, u32 *dev_id); /* IOMMU interface */ int iort_dma_get_ranges(struct device *dev, u64 *size); int iort_iommu_configure_id(struct device *dev, const u32 *id_in); -void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head); +void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head); phys_addr_t acpi_iort_dma_get_max_cpu_address(void); #else static inline void acpi_iort_init(void) { } @@ -52,7 +52,7 @@ static inline int iort_dma_get_ranges(struct device *dev, u64 *size) static inline int iort_iommu_configure_id(struct device *dev, const u32 *id_in) { return -ENODEV; } static inline -void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head) { } static inline phys_addr_t acpi_iort_dma_get_max_cpu_address(void) From patchwork Fri Apr 22 16:29:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 565351 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 80994C433EF for ; Fri, 22 Apr 2022 16:30:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242741AbiDVQdH (ORCPT ); Fri, 22 Apr 2022 12:33:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34498 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1445398AbiDVQdH (ORCPT ); Fri, 22 Apr 2022 12:33:07 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 301D55D5F4 for ; Fri, 22 Apr 2022 09:30:13 -0700 (PDT) Received: from fraeml715-chm.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KlKZT4W9Zz67gR6; Sat, 23 Apr 2022 00:27:45 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml715-chm.china.huawei.com (10.206.15.34) with Microsoft SMTP 
Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 22 Apr 2022 18:30:11 +0200 Received: from A2006125610.china.huawei.com (10.202.227.178) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 22 Apr 2022 17:30:03 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v11 4/9] ACPI/IORT: Add support to retrieve IORT RMR reserved regions Date: Fri, 22 Apr 2022 17:29:02 +0100 Message-ID: <20220422162907.1276-5-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> References: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.202.227.178] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org Parse through the IORT RMR nodes and populate the reserve region list corresponding to a given IOMMU and device(optional). Also, go through the ID mappings of the RMR node and retrieve all the SIDs associated with it. Signed-off-by: Shameer Kolothum Reported-by: kernel test robot --- drivers/acpi/arm64/iort.c | 290 ++++++++++++++++++++++++++++++++++++++ include/linux/iommu.h | 8 ++ 2 files changed, 298 insertions(+) diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index cd5d1d7823cb..5be6e8ecca38 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -788,6 +788,293 @@ void acpi_configure_pmsi_domain(struct device *dev) } #ifdef CONFIG_IOMMU_API +static void iort_rmr_free(struct device *dev, + struct iommu_resv_region *region) +{ + struct iommu_iort_rmr_data *rmr_data; + + rmr_data = container_of(region, struct iommu_iort_rmr_data, rr); + kfree(rmr_data->sids); + kfree(rmr_data); +} + +struct iommu_iort_rmr_data *iort_rmr_alloc(struct acpi_iort_rmr_desc *rmr_desc, + int prot, enum iommu_resv_type type, + u32 *sids, u32 num_sids) +{ + struct iommu_iort_rmr_data *rmr_data; + struct iommu_resv_region *region; + u32 *sids_copy; + u64 addr = rmr_desc->base_address, size = rmr_desc->length; + + rmr_data = kmalloc(sizeof(*rmr_data), GFP_KERNEL); + if (!rmr_data) + return NULL; + + /* Create a copy of SIDs array to associate with this rmr_data */ + sids_copy = kmemdup(sids, num_sids * sizeof(*sids), GFP_KERNEL); + if (!sids_copy) { + kfree(rmr_data); + return NULL; + } + rmr_data->sids = sids_copy; + rmr_data->num_sids = num_sids; + + if (!IS_ALIGNED(addr, SZ_64K) || !IS_ALIGNED(size, SZ_64K)) { + /* PAGE align base addr and size */ + addr &= PAGE_MASK; + size = PAGE_ALIGN(size + offset_in_page(rmr_desc->base_address)); + + pr_err(FW_BUG "RMR descriptor[0x%llx - 0x%llx] not aligned to 64K, continue with [0x%llx - 0x%llx]\n", + rmr_desc->base_address, + rmr_desc->base_address + rmr_desc->length - 1, + addr, addr + size - 1); + } + + region = &rmr_data->rr; + INIT_LIST_HEAD(®ion->list); + region->start = addr; + region->length = size; + region->prot = prot; + region->type = type; + region->free = iort_rmr_free; + + return rmr_data; +} + +static void iort_rmr_desc_check_overlap(struct acpi_iort_rmr_desc *desc, + u32 count) +{ + int i, j; + + for (i = 0; i < count; i++) { + u64 end, start = desc[i].base_address, length = desc[i].length; + + if (!length) { + pr_err(FW_BUG "RMR descriptor[0x%llx] with zero 
length, continue anyway\n", + start); + continue; + } + + end = start + length - 1; + + /* Check for address overlap */ + for (j = i + 1; j < count; j++) { + u64 e_start = desc[j].base_address; + u64 e_end = e_start + desc[j].length - 1; + + if (start <= e_end && end >= e_start) + pr_err(FW_BUG "RMR descriptor[0x%llx - 0x%llx] overlaps, continue anyway\n", + start, end); + } + } +} + +/* + * Please note, we will keep the already allocated RMR reserve + * regions in case of a memory allocation failure. + */ +static void iort_get_rmrs(struct acpi_iort_node *node, + struct acpi_iort_node *smmu, + u32 *sids, u32 num_sids, + struct list_head *head) +{ + struct acpi_iort_rmr *rmr = (struct acpi_iort_rmr *)node->node_data; + struct acpi_iort_rmr_desc *rmr_desc; + int i; + + rmr_desc = ACPI_ADD_PTR(struct acpi_iort_rmr_desc, node, + rmr->rmr_offset); + + iort_rmr_desc_check_overlap(rmr_desc, rmr->rmr_count); + + for (i = 0; i < rmr->rmr_count; i++, rmr_desc++) { + struct iommu_iort_rmr_data *rmr_data; + enum iommu_resv_type type; + int prot = IOMMU_READ | IOMMU_WRITE; + + if (rmr->flags & ACPI_IORT_RMR_REMAP_PERMITTED) + type = IOMMU_RESV_DIRECT_RELAXABLE; + else + type = IOMMU_RESV_DIRECT; + + if (rmr->flags & ACPI_IORT_RMR_ACCESS_PRIVILEGE) + prot |= IOMMU_PRIV; + + /* Attributes 0x00 - 0x03 represents device memory */ + if (ACPI_IORT_RMR_ACCESS_ATTRIBUTES(rmr->flags) <= + ACPI_IORT_RMR_ATTR_DEVICE_GRE) + prot |= IOMMU_MMIO; + else if (ACPI_IORT_RMR_ACCESS_ATTRIBUTES(rmr->flags) == + ACPI_IORT_RMR_ATTR_NORMAL_IWB_OWB) + prot |= IOMMU_CACHE; + + rmr_data = iort_rmr_alloc(rmr_desc, prot, type, + sids, num_sids); + if (!rmr_data) + return; + + list_add_tail(&rmr_data->rr.list, head); + } +} + +static u32 *iort_rmr_alloc_sids(u32 *sids, u32 count, u32 id_start, + u32 new_count) +{ + u32 *new_sids; + u32 total_count = count + new_count; + int i; + + new_sids = krealloc_array(sids, count + new_count, + sizeof(*new_sids), GFP_KERNEL); + if (!new_sids) + return NULL; + + for (i = count; i < total_count; i++) + new_sids[i] = id_start++; + + return new_sids; +} + +static bool iort_rmr_has_dev(struct device *dev, u32 id_start, + u32 id_count) +{ + int i; + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + + /* + * Make sure the kernel has preserved the boot firmware PCIe + * configuration. This is required to ensure that the RMR PCIe + * StreamIDs are still valid (Refer: ARM DEN 0049E.d Section 3.1.1.5). + */ + if (dev_is_pci(dev)) { + struct pci_dev *pdev = to_pci_dev(dev); + struct pci_host_bridge *host = pci_find_host_bridge(pdev->bus); + + if (!host->preserve_config) + return false; + } + + for (i = 0; i < fwspec->num_ids; i++) { + if (fwspec->ids[i] >= id_start && + fwspec->ids[i] <= id_start + id_count) + return true; + } + + return false; +} + +static void iort_node_get_rmr_info(struct acpi_iort_node *node, + struct acpi_iort_node *iommu, + struct device *dev, struct list_head *head) +{ + struct acpi_iort_node *smmu = NULL; + struct acpi_iort_rmr *rmr; + struct acpi_iort_id_mapping *map; + u32 *sids = NULL; + u32 num_sids = 0; + int i; + + if (!node->mapping_offset || !node->mapping_count) { + pr_err(FW_BUG "Invalid ID mapping, skipping RMR node %p\n", + node); + return; + } + + rmr = (struct acpi_iort_rmr *)node->node_data; + if (!rmr->rmr_offset || !rmr->rmr_count) + return; + + map = ACPI_ADD_PTR(struct acpi_iort_id_mapping, node, + node->mapping_offset); + + /* + * Go through the ID mappings and see if we have a match for SMMU + * and dev(if !NULL). If found, get the sids for the Node. 
+ * Please note, id_count is equal to the number of IDs in the + * range minus one. + */ + for (i = 0; i < node->mapping_count; i++, map++) { + struct acpi_iort_node *parent; + + if (!map->id_count) + continue; + + parent = ACPI_ADD_PTR(struct acpi_iort_node, iort_table, + map->output_reference); + if (parent != iommu) + continue; + + /* If dev is valid, check RMR node corresponds to the dev SID */ + if (dev && !iort_rmr_has_dev(dev, map->output_base, + map->id_count)) + continue; + + /* Retrieve SIDs associated with the Node. */ + sids = iort_rmr_alloc_sids(sids, num_sids, map->output_base, + map->id_count + 1); + if (!sids) + return; + + num_sids += map->id_count + 1; + } + + if (!sids) + return; + + iort_get_rmrs(node, smmu, sids, num_sids, head); + kfree(sids); +} + +static void iort_find_rmrs(struct acpi_iort_node *iommu, struct device *dev, + struct list_head *head) +{ + struct acpi_table_iort *iort; + struct acpi_iort_node *iort_node, *iort_end; + int i; + + /* Only supports ARM DEN 0049E.d onwards */ + if (iort_table->revision < 5) + return; + + iort = (struct acpi_table_iort *)iort_table; + + iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort, + iort->node_offset); + iort_end = ACPI_ADD_PTR(struct acpi_iort_node, iort, + iort_table->length); + + for (i = 0; i < iort->node_count; i++) { + if (WARN_TAINT(iort_node >= iort_end, TAINT_FIRMWARE_WORKAROUND, + "IORT node pointer overflows, bad table!\n")) + return; + + if (iort_node->type == ACPI_IORT_NODE_RMR) + iort_node_get_rmr_info(iort_node, iommu, dev, head); + + iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort_node, + iort_node->length); + } +} + +/* + * Populate the RMR list associated with a given IOMMU and dev(if provided). + * If dev is NULL, the function populates all the RMRs associated with the + * given IOMMU. 
+ */ +static void iort_iommu_rmr_get_resv_regions(struct fwnode_handle *iommu_fwnode, + struct device *dev, + struct list_head *head) +{ + struct acpi_iort_node *iommu; + + iommu = iort_get_iort_node(iommu_fwnode); + if (!iommu) + return; + + iort_find_rmrs(iommu, dev, head); +} + static struct acpi_iort_node *iort_get_msi_resv_iommu(struct device *dev) { struct acpi_iort_node *iommu; @@ -868,7 +1155,10 @@ static void iort_iommu_msi_get_resv_regions(struct device *dev, */ void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head) { + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + iort_iommu_msi_get_resv_regions(dev, head); + iort_iommu_rmr_get_resv_regions(fwspec->iommu_fwnode, dev, head); } static inline bool iort_iommu_driver_enabled(u8 type) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 68bcfb3a06d7..0936d7980ce2 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -145,6 +145,14 @@ struct iommu_resv_region { void (*free)(struct device *dev, struct iommu_resv_region *region); }; +struct iommu_iort_rmr_data { + struct iommu_resv_region rr; + + /* Stream IDs associated with IORT RMR entry */ + const u32 *sids; + u32 num_sids; +}; + /** * enum iommu_dev_features - Per device IOMMU features * @IOMMU_DEV_FEAT_SVA: Shared Virtual Addresses From patchwork Fri Apr 22 16:29:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 564951 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 02BFBC433F5 for ; Fri, 22 Apr 2022 16:30:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1446936AbiDVQdQ (ORCPT ); Fri, 22 Apr 2022 12:33:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34692 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1445398AbiDVQdP (ORCPT ); Fri, 22 Apr 2022 12:33:15 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 115165DE70 for ; Fri, 22 Apr 2022 09:30:22 -0700 (PDT) Received: from fraeml710-chm.china.huawei.com (unknown [172.18.147.207]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KlKZW3Lz4z681D8; Sat, 23 Apr 2022 00:27:47 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml710-chm.china.huawei.com (10.206.15.59) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 22 Apr 2022 18:30:20 +0200 Received: from A2006125610.china.huawei.com (10.202.227.178) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 22 Apr 2022 17:30:12 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v11 5/9] ACPI/IORT: Add a helper to retrieve RMR info directly Date: Fri, 22 Apr 2022 17:29:03 +0100 Message-ID: <20220422162907.1276-6-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> References: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.202.227.178] X-ClientProxiedBy: 
dggems701-chm.china.huawei.com (10.3.19.178) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org This will provide a way for SMMU drivers to retrieve StreamIDs associated with IORT RMR nodes and use that to set bypass settings for those IDs. Signed-off-by: Shameer Kolothum --- drivers/acpi/arm64/iort.c | 28 ++++++++++++++++++++++++++++ include/linux/acpi_iort.h | 8 ++++++++ 2 files changed, 36 insertions(+) diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index 5be6e8ecca38..c0de17034dff 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -1393,6 +1393,34 @@ int iort_dma_get_ranges(struct device *dev, u64 *size) return nc_dma_get_range(dev, size); } +/** + * iort_get_rmr_sids - Retrieve IORT RMR node reserved regions with + * associated StreamIDs information. + * @iommu_fwnode: fwnode associated with IOMMU + * @head: Resereved region list + */ +void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode, + struct list_head *head) +{ + iort_iommu_rmr_get_resv_regions(iommu_fwnode, NULL, head); +} +EXPORT_SYMBOL_GPL(iort_get_rmr_sids); + +/** + * iort_put_rmr_sids - Free memory allocated for RMR reserved regions. + * @iommu_fwnode: fwnode associated with IOMMU + * @head: Resereved region list + */ +void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode, + struct list_head *head) +{ + struct iommu_resv_region *entry, *next; + + list_for_each_entry_safe(entry, next, head, list) + entry->free(NULL, entry); +} +EXPORT_SYMBOL_GPL(iort_put_rmr_sids); + static void __init acpi_iort_register_irq(int hwirq, const char *name, int trigger, struct resource *res) diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h index e5d2de9caf7f..b43be0987b19 100644 --- a/include/linux/acpi_iort.h +++ b/include/linux/acpi_iort.h @@ -33,6 +33,10 @@ struct irq_domain *iort_get_device_domain(struct device *dev, u32 id, enum irq_domain_bus_token bus_token); void acpi_configure_pmsi_domain(struct device *dev); int iort_pmsi_get_dev_id(struct device *dev, u32 *dev_id); +void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode, + struct list_head *head); +void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode, + struct list_head *head); /* IOMMU interface */ int iort_dma_get_ranges(struct device *dev, u64 *size); int iort_iommu_configure_id(struct device *dev, const u32 *id_in); @@ -46,6 +50,10 @@ static inline struct irq_domain *iort_get_device_domain( struct device *dev, u32 id, enum irq_domain_bus_token bus_token) { return NULL; } static inline void acpi_configure_pmsi_domain(struct device *dev) { } +static inline +void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode, struct list_head *head) { } +static inline +void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode, struct list_head *head) { } /* IOMMU interface */ static inline int iort_dma_get_ranges(struct device *dev, u64 *size) { return -ENODEV; } From patchwork Fri Apr 22 16:29:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 565350 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6057EC433F5 for ; Fri, 22 Apr 2022 16:30:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id 
S1445398AbiDVQdY (ORCPT ); Fri, 22 Apr 2022 12:33:24 -0400 From: Shameer Kolothum Subject: [PATCH v11 6/9] iommu/arm-smmu-v3: Introduce strtab init helper Date: Fri, 22 Apr 2022 17:29:04 +0100 Message-ID: <20220422162907.1276-7-shameerali.kolothum.thodi@huawei.com> In-Reply-To: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> References: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> X-Mailing-List: linux-acpi@vger.kernel.org Introduce a helper to check the sid range and to init the l2 strtab entries(bypass). This will be useful when we have to initialize the l2 strtab with bypass for RMR SIDs. 
Signed-off-by: Shameer Kolothum --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 28 +++++++++++---------- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 627a3ed5ee8f..df326d8f02c6 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2537,6 +2537,19 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid) return sid < limit; } +static int arm_smmu_init_sid_strtab(struct arm_smmu_device *smmu, u32 sid) +{ + /* Check the SIDs are in range of the SMMU and our stream table */ + if (!arm_smmu_sid_in_range(smmu, sid)) + return -ERANGE; + + /* Ensure l2 strtab is initialised */ + if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) + return arm_smmu_init_l2_strtab(smmu, sid); + + return 0; +} + static int arm_smmu_insert_master(struct arm_smmu_device *smmu, struct arm_smmu_master *master) { @@ -2560,20 +2573,9 @@ static int arm_smmu_insert_master(struct arm_smmu_device *smmu, new_stream->id = sid; new_stream->master = master; - /* - * Check the SIDs are in range of the SMMU and our stream table - */ - if (!arm_smmu_sid_in_range(smmu, sid)) { - ret = -ERANGE; + ret = arm_smmu_init_sid_strtab(smmu, sid); + if (ret) break; - } - - /* Ensure l2 strtab is initialised */ - if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) { - ret = arm_smmu_init_l2_strtab(smmu, sid); - if (ret) - break; - } /* Insert into SID tree */ new_node = &(smmu->streams.rb_node); From patchwork Fri Apr 22 16:29:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 564950 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9589EC433F5 for ; Fri, 22 Apr 2022 16:30:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1449706AbiDVQde (ORCPT ); Fri, 22 Apr 2022 12:33:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35132 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1449715AbiDVQdd (ORCPT ); Fri, 22 Apr 2022 12:33:33 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D33115EDF6 for ; Fri, 22 Apr 2022 09:30:39 -0700 (PDT) Received: from fraeml711-chm.china.huawei.com (unknown [172.18.147.226]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KlKYQ0cjzz67kFm; Sat, 23 Apr 2022 00:26:50 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml711-chm.china.huawei.com (10.206.15.60) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 22 Apr 2022 18:30:37 +0200 Received: from A2006125610.china.huawei.com (10.202.227.178) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 22 Apr 2022 17:30:30 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v11 7/9] iommu/arm-smmu-v3: Refactor arm_smmu_init_bypass_stes() to force bypass Date: Fri, 22 Apr 2022 17:29:05 +0100 Message-ID: <20220422162907.1276-8-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 
2.12.0.windows.1 In-Reply-To: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> References: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.202.227.178] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org By default, disable_bypass flag is set and any dev without an iommu domain installs STE with CFG_ABORT during arm_smmu_init_bypass_stes(). Introduce a "force" flag and move the STE update logic to arm_smmu_init_bypass_stes() so that we can force it to install CFG_BYPASS STE for specific SIDs. This will be useful in a follow-up patch to install bypass for IORT RMR SIDs. Signed-off-by: Shameer Kolothum --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 17 +++++++++++++---- 1 file changed, 13 insertions(+), 4 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index df326d8f02c6..a939d9e0f747 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -1380,12 +1380,21 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd); } -static void arm_smmu_init_bypass_stes(__le64 *strtab, unsigned int nent) +static void arm_smmu_init_bypass_stes(__le64 *strtab, unsigned int nent, bool force) { unsigned int i; + u64 val = STRTAB_STE_0_V; + + if (disable_bypass && !force) + val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT); + else + val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_BYPASS); for (i = 0; i < nent; ++i) { - arm_smmu_write_strtab_ent(NULL, -1, strtab); + strtab[0] = cpu_to_le64(val); + strtab[1] = cpu_to_le64(FIELD_PREP(STRTAB_STE_1_SHCFG, + STRTAB_STE_1_SHCFG_INCOMING)); + strtab[2] = 0; strtab += STRTAB_STE_DWORDS; } } @@ -1413,7 +1422,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid) return -ENOMEM; } - arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT); + arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT, false); arm_smmu_write_strtab_l1_desc(strtab, desc); return 0; } @@ -3051,7 +3060,7 @@ static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu) reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, smmu->sid_bits); cfg->strtab_base_cfg = reg; - arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents); + arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents, false); return 0; } From patchwork Fri Apr 22 16:29:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 565349 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4B225C433F5 for ; Fri, 22 Apr 2022 16:30:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1449715AbiDVQdq (ORCPT ); Fri, 22 Apr 2022 12:33:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35568 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1449743AbiDVQdo (ORCPT ); Fri, 22 Apr 2022 12:33:44 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 
4EBCB5F271 for ; Fri, 22 Apr 2022 09:30:49 -0700 (PDT) Received: from fraeml707-chm.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KlKb953Ttz6874s; Sat, 23 Apr 2022 00:28:21 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml707-chm.china.huawei.com (10.206.15.35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 22 Apr 2022 18:30:47 +0200 Received: from A2006125610.china.huawei.com (10.202.227.178) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 22 Apr 2022 17:30:39 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v11 8/9] iommu/arm-smmu-v3: Get associated RMR info and install bypass STE Date: Fri, 22 Apr 2022 17:29:06 +0100 Message-ID: <20220422162907.1276-9-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> References: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.202.227.178] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org Check if there is any RMR info associated with the devices behind the SMMUv3 and if any, install bypass STEs for them. This is to keep any ongoing traffic associated with these devices alive when we enable/reset SMMUv3 during probe(). Signed-off-by: Shameer Kolothum --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 33 +++++++++++++++++++++ 1 file changed, 33 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index a939d9e0f747..8a5dfd078e95 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -3754,6 +3754,36 @@ static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start, return devm_ioremap_resource(dev, &res); } +static void arm_smmu_rmr_install_bypass_ste(struct arm_smmu_device *smmu) +{ + struct list_head rmr_list; + struct iommu_resv_region *e; + + INIT_LIST_HEAD(&rmr_list); + iort_get_rmr_sids(dev_fwnode(smmu->dev), &rmr_list); + + list_for_each_entry(e, &rmr_list, list) { + __le64 *step; + struct iommu_iort_rmr_data *rmr; + int ret, i; + + rmr = container_of(e, struct iommu_iort_rmr_data, rr); + for (i = 0; i < rmr->num_sids; i++) { + ret = arm_smmu_init_sid_strtab(smmu, rmr->sids[i]); + if (ret) { + dev_err(smmu->dev, "RMR SID(0x%x) bypass failed\n", + rmr->sids[i]); + continue; + } + + step = arm_smmu_get_step_for_sid(smmu, rmr->sids[i]); + arm_smmu_init_bypass_stes(step, 1, true); + } + } + + iort_put_rmr_sids(dev_fwnode(smmu->dev), &rmr_list); +} + static int arm_smmu_device_probe(struct platform_device *pdev) { int irq, ret; @@ -3835,6 +3865,9 @@ static int arm_smmu_device_probe(struct platform_device *pdev) /* Record our private device structure */ platform_set_drvdata(pdev, smmu); + /* Check for RMRs and install bypass STEs if any */ + arm_smmu_rmr_install_bypass_ste(smmu); + /* Reset the device */ ret = arm_smmu_device_reset(smmu, bypass); if (ret) From patchwork Fri Apr 22 16:29:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali 
Kolothum Thodi X-Patchwork-Id: 564949 From: Shameer Kolothum Subject: [PATCH v11 9/9] iommu/arm-smmu: Get associated RMR info and install bypass SMR Date: Fri, 22 Apr 2022 17:29:07 +0100 Message-ID: <20220422162907.1276-10-shameerali.kolothum.thodi@huawei.com> In-Reply-To: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> References: <20220422162907.1276-1-shameerali.kolothum.thodi@huawei.com> X-Mailing-List: linux-acpi@vger.kernel.org From: Jon Nettleton Check if there is any RMR info associated with the devices behind the SMMU and if any, install bypass SMRs for them. This is to keep any ongoing traffic associated with these devices alive when we enable/reset SMMU during probe(). Signed-off-by: Jon Nettleton Signed-off-by: Steven Price Signed-off-by: Shameer Kolothum --- drivers/iommu/arm/arm-smmu/arm-smmu.c | 52 +++++++++++++++++++++ 1 file changed, 52 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c index 568cce590ccc..e02cc2d4fb4e 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c @@ -2068,6 +2068,54 @@ err_reset_platform_ops: __maybe_unused; return err; } +static void arm_smmu_rmr_install_bypass_smr(struct arm_smmu_device *smmu) +{ + struct list_head rmr_list; + struct iommu_resv_region *e; + int idx, cnt = 0; + u32 reg; + + INIT_LIST_HEAD(&rmr_list); + iort_get_rmr_sids(dev_fwnode(smmu->dev), &rmr_list); + + /* + * Rather than trying to look at existing mappings that + * are setup by the firmware and then invalidate the ones + * that do no have matching RMR entries, just disable the + * SMMU until it gets enabled again in the reset routine. 
+ */ + reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sCR0); + reg |= ARM_SMMU_sCR0_CLIENTPD; + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sCR0, reg); + + list_for_each_entry(e, &rmr_list, list) { + struct iommu_iort_rmr_data *rmr; + int i; + + rmr = container_of(e, struct iommu_iort_rmr_data, rr); + for (i = 0; i < rmr->num_sids; i++) { + idx = arm_smmu_find_sme(smmu, rmr->sids[i], ~0); + if (idx < 0) + continue; + + if (smmu->s2crs[idx].count == 0) { + smmu->smrs[idx].id = rmr->sids[i]; + smmu->smrs[idx].mask = 0; + smmu->smrs[idx].valid = true; + } + smmu->s2crs[idx].count++; + smmu->s2crs[idx].type = S2CR_TYPE_BYPASS; + smmu->s2crs[idx].privcfg = S2CR_PRIVCFG_DEFAULT; + + cnt++; + } + } + + dev_notice(smmu->dev, "\tpreserved %d boot mapping%s\n", cnt, + cnt == 1 ? "" : "s"); + iort_put_rmr_sids(dev_fwnode(smmu->dev), &rmr_list); +} + static int arm_smmu_device_probe(struct platform_device *pdev) { struct resource *res; @@ -2189,6 +2237,10 @@ static int arm_smmu_device_probe(struct platform_device *pdev) } platform_set_drvdata(pdev, smmu); + + /* Check for RMRs and install bypass SMRs if any */ + arm_smmu_rmr_install_bypass_smr(smmu); + arm_smmu_device_reset(smmu); arm_smmu_test_smr_masks(smmu);