Message ID | 20210523102019.29440-3-qiangqing.zhang@nxp.com |
---|---|
State | Superseded |
Series | net: fec: fix TX bandwidth fluctuations |
Hi Andrew,

> -----Original Message-----
> From: Andrew Lunn <andrew@lunn.ch>
> Sent: May 23, 2021 22:39
> To: Joakim Zhang <qiangqing.zhang@nxp.com>
> Cc: davem@davemloft.net; kuba@kernel.org; frieder.schrempf@kontron.de; netdev@vger.kernel.org; linux-kernel@vger.kernel.org; dl-linux-imx <linux-imx@nxp.com>
> Subject: Re: [RFC net-next 2/2] net: fec: add ndo_select_queue to fix TX bandwidth fluctuations
>
> > @@ -76,6 +76,8 @@ static void fec_enet_itr_coal_init(struct net_device *ndev);
> >
> >  #define DRIVER_NAME	"fec"
> >
> > +static const u16 fec_enet_vlan_pri_to_queue[8] = {1, 1, 1, 1, 2, 2, 2, 2};
>
> I wonder if priority 0 should be sent to queue 0?

Yes, we can. But the original author's thinking seems to have been that VLAN-tagged packets only use queues 1 & 2, and queue 0 is reserved for VLAN-untagged packets.

Best Regards,
Joakim Zhang

> Andrew
Hi Andrew,

> -----Original Message-----
> From: Andrew Lunn <andrew@lunn.ch>
> Sent: May 23, 2021 22:46
> To: Joakim Zhang <qiangqing.zhang@nxp.com>
> Cc: davem@davemloft.net; kuba@kernel.org; frieder.schrempf@kontron.de; netdev@vger.kernel.org; linux-kernel@vger.kernel.org; dl-linux-imx <linux-imx@nxp.com>
> Subject: Re: [RFC net-next 2/2] net: fec: add ndo_select_queue to fix TX bandwidth fluctuations
>
> On Sun, May 23, 2021 at 06:20:19PM +0800, Joakim Zhang wrote:
> > From: Fugang Duan <fugang.duan@nxp.com>
> >
> > As we know that AVB is enabled by default, and the ENET IP design is
> > queue 0 for best effort, queue 1&2 for AVB Class A&B. Bandwidth of
> > queue 1&2 set in driver is 50%, TX bandwidth fluctuated when selecting
> > tx queues randomly with FEC_QUIRK_HAS_AVB quirk available.
>
> How is the driver currently scheduling between these queues? Given the
> 802.1q priorities, I think we want queue 2 with the highest priority for
> scheduling. Then queue 0 and lastly queue 1.

I think there is currently no scheduling between these queues in the driver.

Could you please point me to where I can find the mapping between priorities and queues? Do you prefer the mapping below?

static const u16 fec_enet_vlan_pri_to_queue[8] = {1, 1, 0, 0, 0, 2, 2, 2};

Best Regards,
Joakim Zhang

> Andrew
> > On Sun, May 23, 2021 at 06:20:19PM +0800, Joakim Zhang wrote:
> > > From: Fugang Duan <fugang.duan@nxp.com>
> > >
> > > As we know that AVB is enabled by default, and the ENET IP design is
> > > queue 0 for best effort, queue 1&2 for AVB Class A&B. Bandwidth of
> > > queue 1&2 set in driver is 50%, TX bandwidth fluctuated when selecting
> > > tx queues randomly with FEC_QUIRK_HAS_AVB quirk available.
> >
> > How is the driver currently scheduling between these queues? Given the
> > 802.1q priorities, I think we want queue 2 with the highest priority for
> > scheduling. Then queue 0 and lastly queue 1.
>
> I think there is currently no scheduling between these queues in the driver.

So queues 1 and 2 are limited to 50% of the total bandwidth, but are otherwise not prioritised over queue 0? That sounds odd.

> Could you please point me to where I can find the mapping between priorities and queues? Do you prefer the mapping below?
> static const u16 fec_enet_vlan_pri_to_queue[8] = {1, 1, 0, 0, 0, 2, 2, 2};

https://en.wikipedia.org/wiki/IEEE_P802.1p

I'm not sure I actually believe the hardware does not prioritise the queues. It seems to me it is more likely to take frames from queues 1 and 2 if they have not consumed their 50% share.

PCP value 0 is best effort. That should really be given the same priority as a packet without a VLAN header, which is why I suggested putting those packets into queue 0.

Also, if the hardware is performing prioritisation, will PCP value 1, background, when put into queue 1, end up as higher priority than best effort?

Andrew
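For reference, the 802.1p traffic classes behind the PCP values order as 1 < 0 < 2 < 3 < 4 < 5 < 6 < 7, with Background (PCP 1) below Best Effort (PCP 0). The summary below is only the standard 802.1p traffic-type list in table form; it is not part of the FEC driver or the posted patch.

```c
/* IEEE 802.1p traffic types by PCP value. Priority order, lowest to
 * highest: 1 (Background) < 0 (Best Effort) < 2 < 3 < 4 < 5 < 6 < 7.
 */
static const char * const pcp_traffic_type[8] = {
	[0] = "Best Effort (default, same as untagged)",
	[1] = "Background (lowest priority)",
	[2] = "Excellent Effort",
	[3] = "Critical Applications",
	[4] = "Video (< 100 ms latency)",
	[5] = "Voice (< 10 ms latency)",
	[6] = "Internetwork Control",
	[7] = "Network Control (highest priority)",
};
```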
Hi Andrew,

> -----Original Message-----
> From: Andrew Lunn <andrew@lunn.ch>
> Sent: May 25, 2021 21:58
> To: Joakim Zhang <qiangqing.zhang@nxp.com>
> Cc: davem@davemloft.net; kuba@kernel.org; frieder.schrempf@kontron.de; netdev@vger.kernel.org; linux-kernel@vger.kernel.org; dl-linux-imx <linux-imx@nxp.com>
> Subject: Re: [RFC net-next 2/2] net: fec: add ndo_select_queue to fix TX bandwidth fluctuations
>
> > > On Sun, May 23, 2021 at 06:20:19PM +0800, Joakim Zhang wrote:
> > > > From: Fugang Duan <fugang.duan@nxp.com>
> > > >
> > > > As we know that AVB is enabled by default, and the ENET IP design
> > > > is queue 0 for best effort, queue 1&2 for AVB Class A&B. Bandwidth
> > > > of queue 1&2 set in driver is 50%, TX bandwidth fluctuated when
> > > > selecting tx queues randomly with FEC_QUIRK_HAS_AVB quirk available.
> > >
> > > How is the driver currently scheduling between these queues? Given
> > > the 802.1q priorities, I think we want queue 2 with the highest
> > > priority for scheduling. Then queue 0 and lastly queue 1.
> >
> > I think there is currently no scheduling between these queues in the driver.
>
> So queues 1 and 2 are limited to 50% of the total bandwidth, but are otherwise
> not prioritised over queue 0? That sounds odd.

No. When AVB is enabled (configured as the credit-based scheme), queue 0 is best effort, queue 1 is limited to 50% bandwidth, and queue 2 is also limited to 50% bandwidth. I may have misunderstood you; what I mean is that there is no scheduler at the software level, the net core calls netdev_pick_tx() to select a queue. At the hardware level, scheduling is credit-based.

> > Could you please point me to where I can find the mapping between priorities and
> > queues? Do you prefer the mapping below?
> > static const u16 fec_enet_vlan_pri_to_queue[8] = {1, 1, 0, 0, 0, 2, 2, 2};
>
> https://en.wikipedia.org/wiki/IEEE_P802.1p

Err, this link isn't available on my side. It seems to be the 802.1Q spec.

> I'm not sure I actually believe the hardware does not prioritise the queues. It
> seems to me it is more likely to take frames from queues 1 and 2 if they have
> not consumed their 50% share.

The hardware transmits with the credit-based scheme.

> PCP value 0 is best effort. That should really be given the same priority as a
> packet without a VLAN header, which is why I suggested putting those packets
> into queue 0.

Makes sense.

> Also, if the hardware is performing prioritisation, will PCP value 1, background,
> when put into queue 1, end up as higher priority than best effort?

I don't think so; the credit-based scheduler schedules based on credit, not the PCP field.

Best Regards,
Joakim Zhang

> Andrew
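Putting together the point agreed above (untagged and PCP 0 frames belong on the best-effort queue 0, while queues 1 and 2 stay behind the credit-based shaper for the higher classes), a possible revision of the posted hunk could look like the sketch below. This is only an illustration of the direction discussed in this thread, not the final patch; the split of PCP 1-7 across queues 1 and 2 is an assumption, and fec_enet_get_raw_vlan_tci() is the helper from the RFC patch.

```c
/* Sketch only: untagged and PCP 0 frames go to the best-effort queue 0;
 * the split of the remaining PCP values across queues 1 and 2 is an
 * assumption for illustration, not a settled choice.
 */
static const u16 example_vlan_pri_to_queue[8] = {0, 0, 1, 1, 1, 2, 2, 2};

static u16 example_select_queue(struct net_device *ndev, struct sk_buff *skb,
				struct net_device *sb_dev)
{
	struct fec_enet_private *fep = netdev_priv(ndev);
	u16 vlan_tag;

	if (!(fep->quirks & FEC_QUIRK_HAS_AVB))
		return netdev_pick_tx(ndev, skb, NULL);

	/* Untagged frames yield a TCI of 0, i.e. PCP 0 -> queue 0. */
	vlan_tag = fec_enet_get_raw_vlan_tci(skb);

	return example_vlan_pri_to_queue[vlan_tag >> VLAN_PRIO_SHIFT];
}
```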
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index 053a0e547e4f..3cde85838189 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -76,6 +76,8 @@ static void fec_enet_itr_coal_init(struct net_device *ndev);
 
 #define DRIVER_NAME	"fec"
 
+static const u16 fec_enet_vlan_pri_to_queue[8] = {1, 1, 1, 1, 2, 2, 2, 2};
+
 /* Pause frame feild and FIFO threshold */
 #define FEC_ENET_FCE	(1 << 5)
 #define FEC_ENET_RSEM_V	0x84
@@ -3236,10 +3238,40 @@ static int fec_set_features(struct net_device *netdev,
 	return 0;
 }
 
+u16 fec_enet_get_raw_vlan_tci(struct sk_buff *skb)
+{
+	struct vlan_ethhdr *vhdr;
+	unsigned short vlan_TCI = 0;
+
+	if (skb->protocol == ntohs(ETH_P_ALL)) {
+		vhdr = (struct vlan_ethhdr *)(skb->data);
+		vlan_TCI = ntohs(vhdr->h_vlan_TCI);
+	}
+
+	return vlan_TCI;
+}
+
+u16 fec_enet_select_queue(struct net_device *ndev, struct sk_buff *skb,
+			  struct net_device *sb_dev)
+{
+	struct fec_enet_private *fep = netdev_priv(ndev);
+	u16 vlan_tag;
+
+	if (!(fep->quirks & FEC_QUIRK_HAS_AVB))
+		return netdev_pick_tx(ndev, skb, NULL);
+
+	vlan_tag = fec_enet_get_raw_vlan_tci(skb);
+	if (!vlan_tag)
+		return vlan_tag;
+
+	return fec_enet_vlan_pri_to_queue[vlan_tag >> 13];
+}
+
 static const struct net_device_ops fec_netdev_ops = {
 	.ndo_open		= fec_enet_open,
 	.ndo_stop		= fec_enet_close,
 	.ndo_start_xmit		= fec_enet_start_xmit,
+	.ndo_select_queue	= fec_enet_select_queue,
 	.ndo_set_rx_mode	= set_multicast_list,
 	.ndo_validate_addr	= eth_validate_addr,
 	.ndo_tx_timeout		= fec_timeout,
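The table index in the hunk above, vlan_tag >> 13, is the 3-bit PCP field at the top of the VLAN TCI (layout PCP:3 | DEI:1 | VID:12; the kernel expresses the same thing with VLAN_PRIO_MASK and VLAN_PRIO_SHIFT). A small stand-alone sketch of that extraction follows; the TCI value and the EXAMPLE_* macro names are made up purely for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* VLAN TCI layout: PCP (3 bits) | DEI (1 bit) | VID (12 bits). */
#define EXAMPLE_VLAN_PRIO_MASK	0xe000
#define EXAMPLE_VLAN_PRIO_SHIFT	13
#define EXAMPLE_VLAN_VID_MASK	0x0fff

int main(void)
{
	uint16_t tci = 0xa00a;	/* made-up TCI: PCP 5, DEI 0, VID 10 */
	unsigned pcp = (tci & EXAMPLE_VLAN_PRIO_MASK) >> EXAMPLE_VLAN_PRIO_SHIFT;
	unsigned vid = tci & EXAMPLE_VLAN_VID_MASK;

	printf("PCP=%u VID=%u\n", pcp, vid);	/* prints "PCP=5 VID=10" */
	return 0;
}
```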