[RFC,net-next,v1,0/2] Threaded NAPI configurability

Message ID 20210506172021.7327-1-yannick.vignon@oss.nxp.com

Message

Yannick Vignon May 6, 2021, 5:20 p.m. UTC
From: Yannick Vignon <yannick.vignon@nxp.com>

The purpose of these 2 patches is to be able to configure the scheduling
properties (e.g. affinity, priority...) of the NAPI threads more easily
at run-time, based on the hardware queues each thread is handling.
The main goal is really to expose which thread does what, as the current
naming doesn't exactly make that clear.

Posting this as an RFC in case people have different opinions on how to
do that.
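
As an illustration of the kind of run-time tuning we have in mind: once the
right kthread can be identified, its affinity and priority can be adjusted
with the usual scheduling syscalls. The sketch below is not part of these
patches, and the PID and CPU numbers are made up:

/*
 * Userspace sketch only: pin a NAPI kthread to CPU 2 and give it an RT FIFO
 * priority. Requires CAP_SYS_NICE; the PID would be looked up from the (now
 * meaningful) thread name, e.g. via /proc.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
	pid_t napi_pid = 1234;	/* made-up PID of a napi/eth0-* kthread */
	struct sched_param sp = { .sched_priority = 50 };
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(2, &mask);

	if (sched_setaffinity(napi_pid, sizeof(mask), &mask))
		perror("sched_setaffinity");
	if (sched_setscheduler(napi_pid, SCHED_FIFO, &sp))
		perror("sched_setscheduler");
	return 0;
}

The same can of course be done from the shell with taskset and chrt; the
point of these patches is only to make it obvious which PID handles which
queue.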

Yannick Vignon (2):
  net: add name field to napi struct
  net: stmmac: use specific name for each NAPI instance

 .../net/ethernet/stmicro/stmmac/stmmac_main.c | 21 ++++++----
 include/linux/netdevice.h                     | 42 ++++++++++++++++++-
 net/core/dev.c                                | 20 +++++++--
 3 files changed, 69 insertions(+), 14 deletions(-)
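
To give a rough idea of the driver side, the sketch below shows the intent;
the helper name is made up for illustration only (see patch 1 for the actual
interface) and the example_* types stand in for a driver's own structures:

/*
 * Illustration only: each NAPI instance is registered with a name derived
 * from the queue it services, so the resulting kthread is easy to identify
 * from userspace. "netif_napi_add_named" is a placeholder name.
 */
#include <linux/netdevice.h>

struct example_rx_queue {
	struct napi_struct napi;
	/* ... driver-specific ring state ... */
};

static int example_rx_poll(struct napi_struct *napi, int budget)
{
	int done = 0;

	/* ... process up to budget packets from this queue's ring ... */

	if (done < budget)
		napi_complete_done(napi, done);
	return done;
}

static void example_setup_rx_napi(struct net_device *dev,
				  struct example_rx_queue *rxq, u32 chan)
{
	char name[16];

	snprintf(name, sizeof(name), "rx-%u", chan);
	netif_napi_add_named(dev, &rxq->napi, example_rx_poll, name);
}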

Comments

Jakub Kicinski May 6, 2021, 10:18 p.m. UTC | #1
On Thu,  6 May 2021 19:20:19 +0200 Yannick Vignon wrote:
> The purpose of these 2 patches is to be able to configure the scheduling
> properties (e.g. affinity, priority...) of the NAPI threads more easily
> at run-time, based on the hardware queues each thread is handling.
> The main goal is really to expose which thread does what, as the current
> naming doesn't exactly make that clear.
> 
> Posting this as an RFC in case people have different opinions on how to
> do that.

WQ <-> CQ <-> irq <-> napi mapping needs an exhaustive netlink
interface. We've been saying this for a while. Neither hard coded
naming schemes nor one-off sysfs files are a great idea IMHO.
Yannick Vignon May 11, 2021, 4:44 p.m. UTC | #2
On 5/6/2021 7:35 PM, Eric Dumazet wrote:
> On Thu, May 6, 2021 at 7:20 PM Yannick Vignon
> <yannick.vignon@oss.nxp.com> wrote:
>>
>> From: Yannick Vignon <yannick.vignon@nxp.com>
>>
>> An interesting possibility offered by the new threaded NAPI code is to
>> fine-tune the affinities and priorities of different NAPI instances. In a
>> real-time networking context, this makes it possible to ensure packets
>> received in a high-priority queue are always processed, and with low
>> latency.
>>
>> However, the way the NAPI threads are named does not really expose which
>> one is responsible for a given queue. Assigning a more explicit name to
>> NAPI instances can make that determination much easier.
>>
>> Signed-off-by: Yannick Vignon <yannick.vignon@nxp.com>
>> -
>
> Having to change drivers seems a lot of work
>
> How about exposing thread id (and napi_id eventually) in
> /sys/class/net/eth0/queues/*/kthread_pid  ?
>

This seemed like a good idea, but after looking into how to actually 
implement it, I can't find a way to "map" rx queues to napi instances. 
Am I missing something?

In the end, I'm afraid that the NAPI<->RX queue mapping is only known 
within the drivers, so we'll have no choice but to modify them
to extract that information somehow.
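
For the record, the sysfs file you suggest would presumably look something
like the sketch below (in net/core/net-sysfs.c); the problem is precisely the
queue->napi pointer it relies on, which does not exist today:

/*
 * Hypothetical sketch of a per-rx-queue "kthread_pid" attribute. It assumes
 * struct netdev_rx_queue gains a pointer to the napi_struct servicing the
 * queue -- the queue<->NAPI mapping that currently only the driver knows.
 */
static ssize_t kthread_pid_show(struct netdev_rx_queue *queue, char *buf)
{
	struct napi_struct *napi = queue->napi;	/* field does not exist today */

	if (!napi || !napi->thread)
		return sprintf(buf, "-1\n");

	return sprintf(buf, "%d\n", task_pid_nr(napi->thread));
}

static struct rx_queue_attribute kthread_pid_attribute __ro_after_init =
	__ATTR_RO(kthread_pid);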
Yannick Vignon May 11, 2021, 4:46 p.m. UTC | #3
On 5/7/2021 12:18 AM, Jakub Kicinski wrote:
> On Thu,  6 May 2021 19:20:19 +0200 Yannick Vignon wrote:
>> The purpose of these 2 patches is to be able to configure the scheduling
>> properties (e.g. affinity, priority...) of the NAPI threads more easily
>> at run-time, based on the hardware queues each thread is handling.
>> The main goal is really to expose which thread does what, as the current
>> naming doesn't exactly make that clear.
>>
>> Posting this as an RFC in case people have different opinions on how to
>> do that.
>
> WQ <-> CQ <-> irq <-> napi mapping needs an exhaustive netlink
> interface. We've been saying this for a while. Neither hard coded
> naming schemes nor one-off sysfs files are a great idea IMHO.
>

Could you elaborate on the kind of netlink interface you are thinking about?
We already have standard ways of configuring process priorities and 
affinities, what we need is rather to expose which queue(s) each NAPI 
thread/instance is responsible for (and as I just said, I fear this will 
involve driver changes).
Now, one place where a netlink API could be of use is for statistics: we 
currently do not have any per-queue counters, and that would be useful 
when working on multi-queue setups.
Jakub Kicinski May 12, 2021, 1:07 a.m. UTC | #4
On Tue, 11 May 2021 18:46:16 +0200 Yannick Vignon wrote:
> On 5/7/2021 12:18 AM, Jakub Kicinski wrote:
> > On Thu,  6 May 2021 19:20:19 +0200 Yannick Vignon wrote:
> >> The purpose of these 2 patches is to be able to configure the scheduling
> >> properties (e.g. affinity, priority...) of the NAPI threads more easily
> >> at run-time, based on the hardware queues each thread is handling.
> >> The main goal is really to expose which thread does what, as the current
> >> naming doesn't exactly make that clear.
> >>
> >> Posting this as an RFC in case people have different opinions on how to
> >> do that.
> >
> > WQ <-> CQ <-> irq <-> napi mapping needs an exhaustive netlink
> > interface. We've been saying this for a while. Neither hard coded
> > naming schemes nor one-off sysfs files are a great idea IMHO.
>
> Could you elaborate on the kind of netlink interface you are thinking about?
> We already have standard ways of configuring process priorities and
> affinities, what we need is rather to expose which queue(s) each NAPI
> thread/instance is responsible for (and as I just said, I fear this will
> involve driver changes).

An interface to carry information about the queues, interrupts, and NAPI
instances, and the relationships between them.

As you noted in your reply to Eric, such an API would require driver
changes, but one driver using it is enough to add the API.
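
As a strawman, think of one netlink object per NAPI instance, carrying
attributes along these lines (names made up, not an actual proposal):

/*
 * Strawman attribute set for a per-NAPI dump tying together the device,
 * queues, interrupt and (when threaded) the servicing kthread.
 */
enum {
	NAPI_INFO_A_UNSPEC,
	NAPI_INFO_A_IFINDEX,	/* u32: owning netdevice */
	NAPI_INFO_A_NAPI_ID,	/* u32: napi_id */
	NAPI_INFO_A_IRQ,	/* u32: interrupt line, if any */
	NAPI_INFO_A_PID,	/* u32: kthread PID when running threaded */
	NAPI_INFO_A_RX_QUEUE,	/* u32: rx queue index, may appear more than once */
	NAPI_INFO_A_TX_QUEUE,	/* u32: tx queue index, may appear more than once */

	__NAPI_INFO_A_MAX,
};
#define NAPI_INFO_A_MAX (__NAPI_INFO_A_MAX - 1)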

> Now, one place where a netlink API could be of use is for statistics: we
> currently do not have any per-queue counters, and that would be useful
> when working on multi-queue setups.


Yup, such an API is exactly where we should add standard per-queue
statistics.