From patchwork Mon Dec 21 19:36:42 2020
X-Patchwork-Submitter: Antoine Tenart
X-Patchwork-Id: 346578
From: Antoine Tenart
To: davem@davemloft.net, kuba@kernel.org, alexander.duyck@gmail.com
Cc: Antoine Tenart, netdev@vger.kernel.org, pabeni@redhat.com
Subject: [PATCH net v2 1/3] net: fix race conditions in xps by locking the maps and dev->num_tc
Date: Mon, 21 Dec 2020 20:36:42 +0100
Message-Id: <20201221193644.1296933-2-atenart@kernel.org>
In-Reply-To: <20201221193644.1296933-1-atenart@kernel.org>
References: <20201221193644.1296933-1-atenart@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

Two race conditions can be triggered in xps, resulting in various oopses and invalid memory accesses:

1. Calling netdev_set_num_tc while netif_set_xps_queue is running:

   - netdev_set_num_tc sets dev->num_tc.

   - netif_set_xps_queue uses dev->num_tc as one of the parameters to
     compute the size of new_dev_maps when allocating it. dev->num_tc
     is also used to access the map, and the compiler may generate code
     that retrieves this field multiple times in the function.

   If new_dev_maps is allocated using one value of dev->num_tc and
   dev->num_tc is then set to a higher value through netdev_set_num_tc,
   later accesses to new_dev_maps in netif_set_xps_queue can end up
   outside the allocated map, triggering an oops.

   One way of triggering this is to bring an interface up (with a
   driver that calls netdev_set_num_tc in its open path, such as bnx2x)
   while writing to xps_cpus or xps_rxqs from a concurrent thread. With
   the right timing an oops is triggered.

2. Calling netif_set_xps_queue while netdev_set_num_tc is running:

   2.1. netdev_set_num_tc starts by resetting the xps queues;
        dev->num_tc isn't updated yet.
   2.2. netif_set_xps_queue is called, setting up the maps with the
        *old* dev->num_tc.
   2.3. dev->num_tc is updated.
   2.4. Later accesses to the maps lead to out-of-bounds accesses and
        oopses.

   A similar issue can be found with netdev_reset_tc.

The fix can't simply be to tie the size of the maps to the maps themselves, as an invalid configuration could still be installed: the reset-then-set logic in both netdev_set_num_tc and netdev_reset_tc must be protected by a lock.

Both issues have the same fix: netif_set_xps_queue, netdev_set_num_tc and netdev_reset_tc should be mutually exclusive.
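To make race 1 easier to picture, here is a minimal userspace sketch of the pattern (editorial illustration only: struct fake_dev, set_num_tc() and set_xps() are made-up stand-ins for dev->num_tc, netdev_set_num_tc() and __netif_set_xps_queue(); this is not kernel code):

#include <stdlib.h>

struct fake_dev {
	int num_tc;		/* stands in for dev->num_tc */
};

/* Models netdev_set_num_tc(): updates num_tc with no lock held. */
static void set_num_tc(struct fake_dev *dev, int num_tc)
{
	dev->num_tc = num_tc;
}

/* Models __netif_set_xps_queue(): sizes the map from one read of
 * num_tc, then indexes it using later reads of the same field. */
static int *set_xps(struct fake_dev *dev, int nr_ids, int tc)
{
	int *new_map = calloc(nr_ids * dev->num_tc, sizeof(*new_map));
	int j;

	if (!new_map)
		return NULL;

	/* If set_num_tc() runs from another thread at this point and
	 * raises num_tc, the indexing below walks past the end of
	 * new_map -- the oops described above. */
	for (j = 0; j < nr_ids; j++)
		new_map[j * dev->num_tc + tc] = 1;

	return new_map;
}

Serializing set_num_tc() and set_xps() on a single mutex, as this patch does for the real functions with xps_map_mutex, removes the window between the allocation and the indexing.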
This patch fixes those races by:

- Reworking netif_set_xps_queue by moving the xps_map_mutex up so the
  access of dev->num_tc is done under the lock.

- Using xps_map_mutex in both netdev_set_num_tc and netdev_reset_tc
  for the reset and set logic:

  + As xps_map_mutex was taken in the reset path,
    netif_reset_xps_queues had to be reworked to offer an unlocked
    version (as well as netdev_unbind_all_sb_channels which calls it).

  + cpus_read_lock was taken in the reset path as well, and is always
    taken before xps_map_mutex. It had to be moved out of the unlocked
    version as well.

This is why the patch is a little bit longer, and moves netdev_unbind_sb_channel up in the file.

Fixes: 184c449f91fe ("net: Add support for XPS with QoS via traffic classes")
Signed-off-by: Antoine Tenart
---
 net/core/dev.c | 122 ++++++++++++++++++++++++++++++++-----------------
 1 file changed, 81 insertions(+), 41 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c index 8fa739259041..effdb7fee9df 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -2527,8 +2527,8 @@ static void clean_xps_maps(struct net_device *dev, const unsigned long *mask, } } -static void netif_reset_xps_queues(struct net_device *dev, u16 offset, - u16 count) +static void __netif_reset_xps_queues(struct net_device *dev, u16 offset, + u16 count) { const unsigned long *possible_mask = NULL; struct xps_dev_maps *dev_maps; @@ -2537,9 +2537,6 @@ static void netif_reset_xps_queues(struct net_device *dev, u16 offset, if (!static_key_false(&xps_needed)) return; - cpus_read_lock(); - mutex_lock(&xps_map_mutex); - if (static_key_false(&xps_rxqs_needed)) { dev_maps = xmap_dereference(dev->xps_rxqs_map); if (dev_maps) { @@ -2551,15 +2548,23 @@ static void netif_reset_xps_queues(struct net_device *dev, u16 offset, dev_maps = xmap_dereference(dev->xps_cpus_map); if (!dev_maps) - goto out_no_maps; + return; if (num_possible_cpus() > 1) possible_mask = cpumask_bits(cpu_possible_mask); nr_ids = nr_cpu_ids; clean_xps_maps(dev, possible_mask, dev_maps, nr_ids, offset, count, false); +} + +static void netif_reset_xps_queues(struct net_device *dev, u16 offset, + u16 count) +{ + cpus_read_lock(); + mutex_lock(&xps_map_mutex); + + __netif_reset_xps_queues(dev, offset, count); -out_no_maps: mutex_unlock(&xps_map_mutex); cpus_read_unlock(); } @@ -2615,27 +2620,32 @@ int __netif_set_xps_queue(struct net_device *dev, const unsigned long *mask, { const unsigned long *online_mask = NULL, *possible_mask = NULL; struct xps_dev_maps *dev_maps, *new_dev_maps = NULL; - int i, j, tci, numa_node_id = -2; + int i, j, tci, numa_node_id = -2, ret = 0; int maps_sz, num_tc = 1, tc = 0; struct xps_map *map, *new_map; bool active = false; unsigned int nr_ids; + mutex_lock(&xps_map_mutex); + if (dev->num_tc) { /* Do not allow XPS on subordinate device directly */ num_tc = dev->num_tc; - if (num_tc < 0) - return -EINVAL; + if (num_tc < 0) { + ret = -EINVAL; + goto unlock; + } /* If queue belongs to subordinate dev use its map */ dev = netdev_get_tx_queue(dev, index)->sb_dev ?
: dev; tc = netdev_txq_to_tc(dev, index); - if (tc < 0) - return -EINVAL; + if (tc < 0) { + ret = -EINVAL; + goto unlock; + } } - mutex_lock(&xps_map_mutex); if (is_rxqs_map) { maps_sz = XPS_RXQ_DEV_MAPS_SIZE(num_tc, dev->num_rx_queues); dev_maps = xmap_dereference(dev->xps_rxqs_map); @@ -2659,8 +2669,8 @@ int __netif_set_xps_queue(struct net_device *dev, const unsigned long *mask, if (!new_dev_maps) new_dev_maps = kzalloc(maps_sz, GFP_KERNEL); if (!new_dev_maps) { - mutex_unlock(&xps_map_mutex); - return -ENOMEM; + ret = -ENOMEM; + goto unlock; } tci = j * num_tc + tc; @@ -2765,7 +2775,7 @@ int __netif_set_xps_queue(struct net_device *dev, const unsigned long *mask, } if (!dev_maps) - goto out_no_maps; + goto unlock; /* removes tx-queue from unused CPUs/rx-queues */ for (j = -1; j = netif_attrmask_next(j, possible_mask, nr_ids), @@ -2783,10 +2793,10 @@ int __netif_set_xps_queue(struct net_device *dev, const unsigned long *mask, if (!active) reset_xps_maps(dev, dev_maps, is_rxqs_map); -out_no_maps: +unlock: mutex_unlock(&xps_map_mutex); - return 0; + return ret; error: /* remove any maps that we added */ for (j = -1; j = netif_attrmask_next(j, possible_mask, nr_ids), @@ -2822,28 +2832,68 @@ int netif_set_xps_queue(struct net_device *dev, const struct cpumask *mask, EXPORT_SYMBOL(netif_set_xps_queue); #endif -static void netdev_unbind_all_sb_channels(struct net_device *dev) + +static void __netdev_unbind_sb_channel(struct net_device *dev, + struct net_device *sb_dev) +{ + struct netdev_queue *txq = &dev->_tx[dev->num_tx_queues]; + +#ifdef CONFIG_XPS + __netif_reset_xps_queues(sb_dev, 0, dev->num_tx_queues); +#endif + + memset(sb_dev->tc_to_txq, 0, sizeof(sb_dev->tc_to_txq)); + memset(sb_dev->prio_tc_map, 0, sizeof(sb_dev->prio_tc_map)); + + while (txq-- != &dev->_tx[0]) { + if (txq->sb_dev == sb_dev) + txq->sb_dev = NULL; + } +} + +void netdev_unbind_sb_channel(struct net_device *dev, + struct net_device *sb_dev) +{ + cpus_read_lock(); + mutex_lock(&xps_map_mutex); + + __netdev_unbind_sb_channel(dev, sb_dev); + + mutex_unlock(&xps_map_mutex); + cpus_read_unlock(); +} +EXPORT_SYMBOL(netdev_unbind_sb_channel); + +static void __netdev_unbind_all_sb_channels(struct net_device *dev) { struct netdev_queue *txq = &dev->_tx[dev->num_tx_queues]; /* Unbind any subordinate channels */ while (txq-- != &dev->_tx[0]) { if (txq->sb_dev) - netdev_unbind_sb_channel(dev, txq->sb_dev); + __netdev_unbind_sb_channel(dev, txq->sb_dev); } } void netdev_reset_tc(struct net_device *dev) { #ifdef CONFIG_XPS - netif_reset_xps_queues_gt(dev, 0); + cpus_read_lock(); + mutex_lock(&xps_map_mutex); + + __netif_reset_xps_queues(dev, 0, dev->num_tx_queues); #endif - netdev_unbind_all_sb_channels(dev); + __netdev_unbind_all_sb_channels(dev); /* Reset TC configuration of device */ dev->num_tc = 0; memset(dev->tc_to_txq, 0, sizeof(dev->tc_to_txq)); memset(dev->prio_tc_map, 0, sizeof(dev->prio_tc_map)); + +#ifdef CONFIG_XPS + mutex_unlock(&xps_map_mutex); + cpus_read_unlock(); +#endif } EXPORT_SYMBOL(netdev_reset_tc); @@ -2867,32 +2917,22 @@ int netdev_set_num_tc(struct net_device *dev, u8 num_tc) return -EINVAL; #ifdef CONFIG_XPS - netif_reset_xps_queues_gt(dev, 0); + cpus_read_lock(); + mutex_lock(&xps_map_mutex); + + __netif_reset_xps_queues(dev, 0, dev->num_tx_queues); #endif - netdev_unbind_all_sb_channels(dev); + __netdev_unbind_all_sb_channels(dev); dev->num_tc = num_tc; - return 0; -} -EXPORT_SYMBOL(netdev_set_num_tc); - -void netdev_unbind_sb_channel(struct net_device *dev, - struct net_device *sb_dev) -{ - struct 
netdev_queue *txq = &dev->_tx[dev->num_tx_queues]; #ifdef CONFIG_XPS - netif_reset_xps_queues_gt(sb_dev, 0); + mutex_unlock(&xps_map_mutex); + cpus_read_unlock(); #endif - memset(sb_dev->tc_to_txq, 0, sizeof(sb_dev->tc_to_txq)); - memset(sb_dev->prio_tc_map, 0, sizeof(sb_dev->prio_tc_map)); - - while (txq-- != &dev->_tx[0]) { - if (txq->sb_dev == sb_dev) - txq->sb_dev = NULL; - } + return 0; } -EXPORT_SYMBOL(netdev_unbind_sb_channel); +EXPORT_SYMBOL(netdev_set_num_tc); int netdev_bind_sb_channel_queue(struct net_device *dev, struct net_device *sb_dev,

From patchwork Mon Dec 21 19:36:43 2020
X-Patchwork-Submitter: Antoine Tenart
X-Patchwork-Id: 346935
From: Antoine Tenart
To: davem@davemloft.net, kuba@kernel.org, alexander.duyck@gmail.com
Cc: Antoine Tenart, netdev@vger.kernel.org, pabeni@redhat.com
Subject: [PATCH net v2 2/3] net: move the xps cpus retrieval out of net-sysfs
Date: Mon, 21 Dec 2020 20:36:43 +0100
Message-Id: <20201221193644.1296933-3-atenart@kernel.org>
In-Reply-To: <20201221193644.1296933-1-atenart@kernel.org>
References: <20201221193644.1296933-1-atenart@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

Accesses to dev->xps_cpus_map (when using dev->num_tc) should be protected by the xps_map mutex, to avoid possible race conditions when dev->num_tc is updated while the map is accessed.

This patch moves the logic accessing dev->xps_cpus_map and dev->num_tc to net/core/dev.c, where the xps_map mutex is defined and used.
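The pattern being moved is, in rough outline, the sketch below (an editorial simplification of the netif_show_xps_queue hunk that follows: the name xps_show_sketch is made up, the subordinate-device/tc handling and error paths are elided, and the lock nesting is simplified):

static void xps_show_sketch(struct net_device *dev, unsigned long *mask,
			    u16 index)
{
	struct xps_dev_maps *dev_maps;
	int num_tc, tc = 0, cpu;

	mutex_lock(&xps_map_mutex);
	rcu_read_lock();

	/* dev->num_tc and the map are both read inside the same
	 * xps_map_mutex section, so a concurrent netdev_set_num_tc()
	 * cannot change the map geometry between the two reads. */
	num_tc = dev->num_tc ? dev->num_tc : 1;
	dev_maps = rcu_dereference(dev->xps_cpus_map);
	if (!dev_maps)
		goto out;

	for_each_possible_cpu(cpu) {
		struct xps_map *map;
		int i, tci = cpu * num_tc + tc;

		map = rcu_dereference(dev_maps->attr_map[tci]);
		if (!map)
			continue;
		for (i = 0; i < map->len; i++)
			if (map->queues[i] == index)
				set_bit(cpu, mask);
	}
out:
	rcu_read_unlock();
	mutex_unlock(&xps_map_mutex);
}

net-sysfs then only allocates the bitmap and prints it, as the xps_cpus_show() hunk below shows.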
Fixes: 184c449f91fe ("net: Add support for XPS with QoS via traffic classes") Signed-off-by: Antoine Tenart --- include/linux/netdevice.h | 8 ++++++ net/core/dev.c | 59 +++++++++++++++++++++++++++++++++++++++ net/core/net-sysfs.c | 54 ++++++++--------------------------- 3 files changed, 79 insertions(+), 42 deletions(-) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 259be67644e3..bfd6cfa3ea90 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -3671,6 +3671,8 @@ int netif_set_xps_queue(struct net_device *dev, const struct cpumask *mask, u16 index); int __netif_set_xps_queue(struct net_device *dev, const unsigned long *mask, u16 index, bool is_rxqs_map); +int netif_show_xps_queue(struct net_device *dev, unsigned long **mask, + u16 index); /** * netif_attr_test_mask - Test a CPU or Rx queue set in a mask @@ -3769,6 +3771,12 @@ static inline int __netif_set_xps_queue(struct net_device *dev, { return 0; } + +static inline int netif_show_xps_queue(struct net_device *dev, + unsigned long **mask, u16 index) +{ + return 0; +} #endif /** diff --git a/net/core/dev.c b/net/core/dev.c index effdb7fee9df..a0257da4160a 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -2831,6 +2831,65 @@ int netif_set_xps_queue(struct net_device *dev, const struct cpumask *mask, } EXPORT_SYMBOL(netif_set_xps_queue); +int netif_show_xps_queue(struct net_device *dev, unsigned long **mask, + u16 index) +{ + const unsigned long *possible_mask = NULL; + int j, num_tc = 1, tc = 0, ret = 0; + struct xps_dev_maps *dev_maps; + unsigned int nr_ids; + + rcu_read_lock(); + mutex_lock(&xps_map_mutex); + + if (dev->num_tc) { + num_tc = dev->num_tc; + if (num_tc < 0) { + ret = -EINVAL; + goto out_no_map; + } + + /* If queue belongs to subordinate dev use its map */ + dev = netdev_get_tx_queue(dev, index)->sb_dev ? 
: dev; + + tc = netdev_txq_to_tc(dev, index); + if (tc < 0) { + ret = -EINVAL; + goto out_no_map; + } + } + + dev_maps = rcu_dereference(dev->xps_cpus_map); + if (!dev_maps) + goto out_no_map; + nr_ids = nr_cpu_ids; + if (num_possible_cpus() > 1) + possible_mask = cpumask_bits(cpu_possible_mask); + + for (j = -1; j = netif_attrmask_next(j, possible_mask, nr_ids), + j < nr_ids;) { + int i, tci = j * num_tc + tc; + struct xps_map *map; + + map = rcu_dereference(dev_maps->attr_map[tci]); + if (!map) + continue; + + for (i = map->len; i--;) { + if (map->queues[i] == index) { + set_bit(j, *mask); + break; + } + } + } + +out_no_map: + mutex_unlock(&xps_map_mutex); + rcu_read_unlock(); + + return ret; +} +EXPORT_SYMBOL(netif_show_xps_queue); #endif static void __netdev_unbind_sb_channel(struct net_device *dev, diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c index 999b70c59761..29ee69b67972 100644 --- a/net/core/net-sysfs.c +++ b/net/core/net-sysfs.c @@ -1314,60 +1314,30 @@ static const struct attribute_group dql_group = { #endif /* CONFIG_BQL */ #ifdef CONFIG_XPS -static ssize_t xps_cpus_show(struct netdev_queue *queue, - char *buf) +static ssize_t xps_cpus_show(struct netdev_queue *queue, char *buf) { struct net_device *dev = queue->dev; - int cpu, len, num_tc = 1, tc = 0; - struct xps_dev_maps *dev_maps; - cpumask_var_t mask; - unsigned long index; + unsigned long *mask, index; + int len, ret; if (!netif_is_multiqueue(dev)) return -ENOENT; index = get_netdev_queue_index(queue); - if (dev->num_tc) { - /* Do not allow XPS on subordinate device directly */ - num_tc = dev->num_tc; - if (num_tc < 0) - return -EINVAL; - - /* If queue belongs to subordinate dev use its map */ - dev = netdev_get_tx_queue(dev, index)->sb_dev ? : dev; - - tc = netdev_txq_to_tc(dev, index); - if (tc < 0) - return -EINVAL; - } - - if (!zalloc_cpumask_var(&mask, GFP_KERNEL)) + mask = bitmap_zalloc(nr_cpu_ids, GFP_KERNEL); + if (!mask) return -ENOMEM; - rcu_read_lock(); - dev_maps = rcu_dereference(dev->xps_cpus_map); - if (dev_maps) { - for_each_possible_cpu(cpu) { - int i, tci = cpu * num_tc + tc; - struct xps_map *map; - - map = rcu_dereference(dev_maps->attr_map[tci]); - if (!map) - continue; - - for (i = map->len; i--;) { - if (map->queues[i] == index) { - cpumask_set_cpu(cpu, mask); - break; - } - } - } + ret = netif_show_xps_queue(dev, &mask, index); + if (ret) { + bitmap_free(mask); + return ret; } - rcu_read_unlock(); - len = snprintf(buf, PAGE_SIZE, "%*pb\n", cpumask_pr_args(mask)); - free_cpumask_var(mask); + len = bitmap_print_to_pagebuf(false, buf, mask, nr_cpu_ids); + bitmap_free(mask); + return len < PAGE_SIZE ? 
len : -EINVAL; }

From patchwork Mon Dec 21 19:36:44 2020
X-Patchwork-Submitter: Antoine Tenart
X-Patchwork-Id: 346577
From: Antoine Tenart
To: davem@davemloft.net, kuba@kernel.org, alexander.duyck@gmail.com
Cc: Antoine Tenart, netdev@vger.kernel.org, pabeni@redhat.com
Subject: [PATCH net v2 3/3] net: move the xps rxqs retrieval out of net-sysfs
Date: Mon, 21 Dec 2020 20:36:44 +0100
Message-Id: <20201221193644.1296933-4-atenart@kernel.org>
In-Reply-To: <20201221193644.1296933-1-atenart@kernel.org>
References: <20201221193644.1296933-1-atenart@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

Accesses to dev->xps_rxqs_map (when using dev->num_tc) should be protected by the xps_map mutex, to avoid possible race conditions when dev->num_tc is updated while the map is accessed. Make use of the now-available netif_show_xps_queue helper, which does just that.

This also helps keep xps_cpus_show and xps_rxqs_show in sync, as their logic is the same (just as it is in __netif_set_xps_queue, the function that allocates and sets the maps up).
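With this change both sysfs handlers reduce to the same shape, differing only in the bitmap width and the is_rxqs_map flag. A condensed sketch, for reference (the _sketch name and the simplified parameters are illustrative; the real hunks follow below):

static ssize_t xps_cpus_show_sketch(struct net_device *dev, u16 index,
				    char *buf)
{
	unsigned long *mask;
	int len, ret;

	/* One bit per possible CPU for xps_cpus. */
	mask = bitmap_zalloc(nr_cpu_ids, GFP_KERNEL);
	if (!mask)
		return -ENOMEM;

	ret = netif_show_xps_queue(dev, &mask, index, false);
	if (ret) {
		bitmap_free(mask);
		return ret;
	}

	len = bitmap_print_to_pagebuf(false, buf, mask, nr_cpu_ids);
	bitmap_free(mask);
	return len < PAGE_SIZE ? len : -EINVAL;
}

/* xps_rxqs_show differs only in the width and the flag:
 *	mask = bitmap_zalloc(dev->num_rx_queues, GFP_KERNEL);
 *	ret  = netif_show_xps_queue(dev, &mask, index, true);
 *	len  = bitmap_print_to_pagebuf(false, buf, mask, dev->num_rx_queues);
 */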
Fixes: 8af2c06ff4b1 ("net-sysfs: Add interface for Rx queue(s) map per Tx queue") Signed-off-by: Antoine Tenart --- include/linux/netdevice.h | 5 +++-- net/core/dev.c | 15 ++++++++++----- net/core/net-sysfs.c | 37 ++++++------------------------------- 3 files changed, 19 insertions(+), 38 deletions(-) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index bfd6cfa3ea90..5c3e16464c3f 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -3672,7 +3672,7 @@ int netif_set_xps_queue(struct net_device *dev, const struct cpumask *mask, int __netif_set_xps_queue(struct net_device *dev, const unsigned long *mask, u16 index, bool is_rxqs_map); int netif_show_xps_queue(struct net_device *dev, unsigned long **mask, - u16 index); + u16 index, bool is_rxqs_map); /** * netif_attr_test_mask - Test a CPU or Rx queue set in a mask @@ -3773,7 +3773,8 @@ static inline int __netif_set_xps_queue(struct net_device *dev, } static inline int netif_show_xps_queue(struct net_device *dev, - unsigned long **mask, u16 index) + unsigned long **mask, u16 index, + bool is_rxqs_map) { return 0; } diff --git a/net/core/dev.c b/net/core/dev.c index a0257da4160a..e5cc2939e4d9 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -2832,7 +2832,7 @@ int netif_set_xps_queue(struct net_device *dev, const struct cpumask *mask, EXPORT_SYMBOL(netif_set_xps_queue); int netif_show_xps_queue(struct net_device *dev, unsigned long **mask, - u16 index) + u16 index, bool is_rxqs_map) { const unsigned long *possible_mask = NULL; int j, num_tc = 1, tc = 0, ret = 0; @@ -2859,12 +2859,17 @@ int netif_show_xps_queue(struct net_device *dev, unsigned long **mask, } } - dev_maps = rcu_dereference(dev->xps_cpus_map); + if (is_rxqs_map) { + dev_maps = rcu_dereference(dev->xps_rxqs_map); + nr_ids = dev->num_rx_queues; + } else { + dev_maps = rcu_dereference(dev->xps_cpus_map); + nr_ids = nr_cpu_ids; + if (num_possible_cpus() > 1) + possible_mask = cpumask_bits(cpu_possible_mask); + } if (!dev_maps) goto out_no_map; - nr_ids = nr_cpu_ids; - if (num_possible_cpus() > 1) - possible_mask = cpumask_bits(cpu_possible_mask); for (j = -1; j = netif_attrmask_next(j, possible_mask, nr_ids), j < nr_ids;) { diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c index 29ee69b67972..4f58b38dfc7d 100644 --- a/net/core/net-sysfs.c +++ b/net/core/net-sysfs.c @@ -1329,7 +1329,7 @@ static ssize_t xps_cpus_show(struct netdev_queue *queue, char *buf) if (!mask) return -ENOMEM; - ret = netif_show_xps_queue(dev, &mask, index); + ret = netif_show_xps_queue(dev, &mask, index, false); if (ret) { bitmap_free(mask); return ret; @@ -1379,45 +1379,20 @@ static struct netdev_queue_attribute xps_cpus_attribute __ro_after_init static ssize_t xps_rxqs_show(struct netdev_queue *queue, char *buf) { struct net_device *dev = queue->dev; - struct xps_dev_maps *dev_maps; unsigned long *mask, index; - int j, len, num_tc = 1, tc = 0; + int len, ret; index = get_netdev_queue_index(queue); - if (dev->num_tc) { - num_tc = dev->num_tc; - tc = netdev_txq_to_tc(dev, index); - if (tc < 0) - return -EINVAL; - } mask = bitmap_zalloc(dev->num_rx_queues, GFP_KERNEL); if (!mask) return -ENOMEM; - rcu_read_lock(); - dev_maps = rcu_dereference(dev->xps_rxqs_map); - if (!dev_maps) - goto out_no_maps; - - for (j = -1; j = netif_attrmask_next(j, NULL, dev->num_rx_queues), - j < dev->num_rx_queues;) { - int i, tci = j * num_tc + tc; - struct xps_map *map; - - map = rcu_dereference(dev_maps->attr_map[tci]); - if (!map) - continue; - - for (i = map->len; i--;) { - if 
(map->queues[i] == index) { - set_bit(j, mask); - break; - } - } + ret = netif_show_xps_queue(dev, &mask, index, true); + if (ret) { + bitmap_free(mask); + return ret; } -out_no_maps: - rcu_read_unlock(); len = bitmap_print_to_pagebuf(false, buf, mask, dev->num_rx_queues); bitmap_free(mask);