[bpf] xsk: Clear pool even for inactive queues

Message ID 20210118160333.333439-1-maximmi@mellanox.com
State New

Commit Message

Maxim Mikityanskiy Jan. 18, 2021, 4:03 p.m. UTC
The number of queues can change by means other than ethtool. For
example, attaching an mqprio qdisc with num_tc > 1 creates multiple
sets of TX queues, which may then be destroyed when mqprio is
deleted. If an AF_XDP socket is created while mqprio is active,
dev->_tx[queue_id].pool will be filled, but real_num_tx_queues may
later decrease when mqprio is deleted, which means the pool won't be
NULLed, and a further increase of the number of TX queues may expose
a dangling pointer.

To avoid this misbehavior, this commit clears the pool for RX and TX
queues regardless of real_num_*_queues, while still respecting
num_*_queues to avoid out-of-bounds accesses.

Fixes: 1c1efc2af158 ("xsk: Create and free buffer pool independently from umem")
Fixes: a41b4f3c58dd ("xsk: simplify xdp_clear_umem_at_qid implementation")
Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
---
 net/xdp/xsk.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Björn Töpel Jan. 19, 2021, 2:08 p.m. UTC | #1
On 2021-01-18 17:03, Maxim Mikityanskiy wrote:
> The number of queues can change by means other than ethtool. For
> example, attaching an mqprio qdisc with num_tc > 1 creates multiple
> sets of TX queues, which may then be destroyed when mqprio is
> deleted. If an AF_XDP socket is created while mqprio is active,
> dev->_tx[queue_id].pool will be filled, but real_num_tx_queues may
> later decrease when mqprio is deleted, which means the pool won't be
> NULLed, and a further increase of the number of TX queues may expose
> a dangling pointer.
>
> To avoid this misbehavior, this commit clears the pool for RX and TX
> queues regardless of real_num_*_queues, while still respecting
> num_*_queues to avoid out-of-bounds accesses.
>
> Fixes: 1c1efc2af158 ("xsk: Create and free buffer pool independently from umem")
> Fixes: a41b4f3c58dd ("xsk: simplify xdp_clear_umem_at_qid implementation")
> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>

Thanks, Maxim!

Acked-by: Björn Töpel <bjorn.topel@intel.com>

> ---
>  net/xdp/xsk.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 8037b04a9edd..4a83117507f5 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -108,9 +108,9 @@ EXPORT_SYMBOL(xsk_get_pool_from_qid);
>
>  void xsk_clear_pool_at_qid(struct net_device *dev, u16 queue_id)
>  {
> -	if (queue_id < dev->real_num_rx_queues)
> +	if (queue_id < dev->num_rx_queues)
>  		dev->_rx[queue_id].pool = NULL;
> -	if (queue_id < dev->real_num_tx_queues)
> +	if (queue_id < dev->num_tx_queues)
>  		dev->_tx[queue_id].pool = NULL;
>  }
>
patchwork-bot+netdevbpf@kernel.org Jan. 19, 2021, 10 p.m. UTC | #2
Hello:

This patch was applied to bpf/bpf.git (refs/heads/master):

On Mon, 18 Jan 2021 18:03:33 +0200 you wrote:
> The number of queues can change by means other than ethtool. For
> example, attaching an mqprio qdisc with num_tc > 1 creates multiple
> sets of TX queues, which may then be destroyed when mqprio is
> deleted. If an AF_XDP socket is created while mqprio is active,
> dev->_tx[queue_id].pool will be filled, but real_num_tx_queues may
> later decrease when mqprio is deleted, which means the pool won't be
> NULLed, and a further increase of the number of TX queues may expose
> a dangling pointer.
>
> [...]


Here is the summary with links:
  - [bpf] xsk: Clear pool even for inactive queues
    https://git.kernel.org/bpf/bpf/c/b425e24a934e

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
Patch

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 8037b04a9edd..4a83117507f5 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -108,9 +108,9 @@ EXPORT_SYMBOL(xsk_get_pool_from_qid);
 
 void xsk_clear_pool_at_qid(struct net_device *dev, u16 queue_id)
 {
-	if (queue_id < dev->real_num_rx_queues)
+	if (queue_id < dev->num_rx_queues)
 		dev->_rx[queue_id].pool = NULL;
-	if (queue_id < dev->real_num_tx_queues)
+	if (queue_id < dev->num_tx_queues)
 		dev->_tx[queue_id].pool = NULL;
 }