
[net-next] net: dsa: felix: disable always guard band bit for TAS config

Message ID 20210419102530.20361-1-xiaoliang.yang_1@nxp.com
State New

Commit Message

Xiaoliang Yang April 19, 2021, 10:25 a.m. UTC
The ALWAYS_GUARD_BAND_SCH_Q bit in the TAS config register is described as
follows:
	0: Guard band is implemented for nonschedule queues to schedule
	   queues transition.
	1: Guard band is implemented for any queue to schedule queue
	   transition.

The driver previously set the guard band to be implemented for any queue
to schedule queue transition, which makes each GCL time slot reserve a
guard band time long enough to pass a max-SDU frame. Because the guard
band time cannot be set in tc-taprio yet, it takes about 12000ns to pass
a 1500B max SDU. This limits each GCL time interval to more than 12000ns.

This patch changes the guard band to be implemented only for nonschedule
queue to schedule queue transitions, so that there is no need to reserve
a guard band in each GCL entry. Users can manually add guard band time
for each schedule queue in their configuration if they want.

Signed-off-by: Xiaoliang Yang <xiaoliang.yang_1@nxp.com>
---
 drivers/net/dsa/ocelot/felix_vsc9959.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
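The "about 12000ns to pass 1500B max SDU" figure in the commit message can be sanity-checked with a quick calculation. This is an illustrative sketch; the exact per-frame overhead bytes (header, FCS, preamble, interframe gap) are an assumption, not taken from the driver:

```python
# Rough guard-band time for one max-SDU frame at 1 Gbps.
# Assumed overheads: 14B Ethernet header + 4B FCS + 8B preamble/SFD
# + 12B interframe gap; the exact accounting is hardware-specific.
LINK_SPEED_BPS = 1_000_000_000          # 1 Gbps
NS_PER_BYTE = 8e9 / LINK_SPEED_BPS      # 8 ns per byte at 1 Gbps

def guard_band_ns(max_sdu: int) -> float:
    overhead = 14 + 4 + 8 + 12          # header + FCS + preamble + IFG
    return (max_sdu + overhead) * NS_PER_BYTE

print(guard_band_ns(1500))              # ~12300 ns, matching "about 12000ns"
```

Any GCL interval shorter than this cannot hold even one full-size frame plus the implicit guard band, which is the restriction the patch removes.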

Comments

Xiaoliang Yang April 20, 2021, 3:06 a.m. UTC | #1
Hi Vladimir.

On Mon, Apr 19, 2021 at 20:38PM +0800, Vladimir Oltean wrote:
> What is a scheduled queue? When time-aware scheduling is enabled on the port, why are some queues scheduled and some not?


The felix vsc9959 device can set SCH_TRAFFIC_QUEUES field bits to define which queues are scheduled. Only the queues set there serve scheduled traffic. In this driver we set all 8 queues to be scheduled by default, so all traffic transitions are from schedule queues to schedule queues.

Thanks,
Xiaoliang Yang
Vladimir Oltean April 20, 2021, 8:26 a.m. UTC | #2
On Tue, Apr 20, 2021 at 03:06:40AM +0000, Xiaoliang Yang wrote:
> Hi Vladimir.
>
> On Mon, Apr 19, 2021 at 20:38PM +0800, Vladimir Oltean wrote:
> >
> >What is a scheduled queue? When time-aware scheduling is enabled on
> >the port, why are some queues scheduled and some not?
>
> The felix vsc9959 device can set SCH_TRAFFIC_QUEUES field bits to
> define which queue is scheduled. Only the set queues serves schedule
> traffic. In this driver we set all 8 queues to be scheduled in
> default, so all the traffic are schedule queues to schedule queue.


I understand this, what I don't really understand is the distinction
that the switch makes between 'scheduled' and 'non-scheduled' traffic.
What else does this distinction affect, apart from the guard bands added
implicitly here? The tc-taprio qdisc has no notion of 'scheduled'
queues, all queues are 'scheduled'. Do we ever need to set the scheduled
queues mask to something other than 0xff? If so, when and why?
Xiaoliang Yang April 20, 2021, 10:28 a.m. UTC | #3
Hi Vladimir,

On Tue, Apr 20, 2021 at 16:27:10AM +0800, Vladimir Oltean wrote:
>
> On Tue, Apr 20, 2021 at 03:06:40AM +0000, Xiaoliang Yang wrote:
>> Hi Vladimir.
>>
>> On Mon, Apr 19, 2021 at 20:38PM +0800, Vladimir Oltean wrote:
>> >
>> >What is a scheduled queue? When time-aware scheduling is enabled on
>> >the port, why are some queues scheduled and some not?
>>
>> The felix vsc9959 device can set SCH_TRAFFIC_QUEUES field bits to
>> define which queue is scheduled. Only the set queues serves schedule
>> traffic. In this driver we set all 8 queues to be scheduled in
>> default, so all the traffic are schedule queues to schedule queue.
>
> I understand this, what I don't really understand is the distinction that the switch makes between 'scheduled' and 'non-scheduled' traffic.
> What else does this distinction affect, apart from the guard bands added implicitly here? The tc-taprio qdisc has no notion of 'scheduled'
> queues, all queues are 'scheduled'. Do we ever need to set the scheduled queues mask to something other than 0xff? If so, when and why?


Yes, it seems to only affect the guard band. If we disable the always-guard-band bit, we can use SCH_TRAFFIC_QUEUES to determine which queues are non-scheduled. Only traffic from non-scheduled queues will reserve the guard band. But the tc-taprio qdisc cannot mark queues as scheduled or non-scheduled yet; adding this feature can be discussed in the future.

It is not reasonable to add a guard band to every queue's traffic by default, so I disable the always-guard-band bit for the TAS config.

Thanks,
Xiaoliang Yang
Vladimir Oltean April 20, 2021, 10:30 a.m. UTC | #4
On Tue, Apr 20, 2021 at 10:28:45AM +0000, Xiaoliang Yang wrote:
> Hi Vladimir,
>
> On Tue, Apr 20, 2021 at 16:27:10AM +0800, Vladimir Oltean wrote:
> >
> > On Tue, Apr 20, 2021 at 03:06:40AM +0000, Xiaoliang Yang wrote:
> >> Hi Vladimir.
> >>
> >> On Mon, Apr 19, 2021 at 20:38PM +0800, Vladimir Oltean wrote:
> >> >
> >> >What is a scheduled queue? When time-aware scheduling is enabled on
> >> >the port, why are some queues scheduled and some not?
> >>
> >> The felix vsc9959 device can set SCH_TRAFFIC_QUEUES field bits to
> >> define which queue is scheduled. Only the set queues serves schedule
> >> traffic. In this driver we set all 8 queues to be scheduled in
> >> default, so all the traffic are schedule queues to schedule queue.
> >
> > I understand this, what I don't really understand is the distinction
> > that the switch makes between 'scheduled' and 'non-scheduled'
> > traffic.  What else does this distinction affect, apart from the
> > guard bands added implicitly here? The tc-taprio qdisc has no notion
> > of 'scheduled' queues, all queues are 'scheduled'. Do we ever need
> > to set the scheduled queues mask to something other than 0xff? If
> > so, when and why?
>
> Yes, it seems only affect the guard band. If disabling always guard
> band bit, we can use SCH_TRAFFIC_QUEUES to determine which queue is
> non-scheduled queue. Only the non-scheduled queue traffic will reserve
> the guard band. But tc-taprio qdisc cannot set scheduled or
> non-scheduled queue now. Adding this feature can be discussed in
> future.
>
> It is not reasonable to add guardband in each queue traffic in
> default, so I disable the always guard band bit for TAS config.


Ok, if true, then it makes sense to disable ALWAYS_GUARD_BAND_SCH_Q.
Vladimir Oltean April 20, 2021, 10:33 a.m. UTC | #5
On Mon, Apr 19, 2021 at 06:25:30PM +0800, Xiaoliang Yang wrote:
> ALWAYS_GUARD_BAND_SCH_Q bit in TAS config register is descripted as
> this:
> 	0: Guard band is implemented for nonschedule queues to schedule
> 	   queues transition.
> 	1: Guard band is implemented for any queue to schedule queue
> 	   transition.
>
> The driver set guard band be implemented for any queue to schedule queue
> transition before, which will make each GCL time slot reserve a guard
> band time that can pass the max SDU frame. Because guard band time could
> not be set in tc-taprio now, it will use about 12000ns to pass 1500B max
> SDU. This limits each GCL time interval to be more than 12000ns.
>
> This patch change the guard band to be only implemented for nonschedule
> queues to schedule queues transition, so that there is no need to reserve
> guard band on each GCL. Users can manually add guard band time for each
> schedule queues in their configuration if they want.
>
> Signed-off-by: Xiaoliang Yang <xiaoliang.yang_1@nxp.com>
> ---

Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Vladimir Oltean April 20, 2021, 10:42 a.m. UTC | #6
On Tue, Apr 20, 2021 at 01:30:51PM +0300, Vladimir Oltean wrote:
> On Tue, Apr 20, 2021 at 10:28:45AM +0000, Xiaoliang Yang wrote:
> > Hi Vladimir,
> >
> > On Tue, Apr 20, 2021 at 16:27:10AM +0800, Vladimir Oltean wrote:
> > >
> > > On Tue, Apr 20, 2021 at 03:06:40AM +0000, Xiaoliang Yang wrote:
> > >> Hi Vladimir.
> > >>
> > >> On Mon, Apr 19, 2021 at 20:38PM +0800, Vladimir Oltean wrote:
> > >> >
> > >> >What is a scheduled queue? When time-aware scheduling is enabled on
> > >> >the port, why are some queues scheduled and some not?
> > >>
> > >> The felix vsc9959 device can set SCH_TRAFFIC_QUEUES field bits to
> > >> define which queue is scheduled. Only the set queues serves schedule
> > >> traffic. In this driver we set all 8 queues to be scheduled in
> > >> default, so all the traffic are schedule queues to schedule queue.
> > >
> > > I understand this, what I don't really understand is the distinction
> > > that the switch makes between 'scheduled' and 'non-scheduled'
> > > traffic.  What else does this distinction affect, apart from the
> > > guard bands added implicitly here? The tc-taprio qdisc has no notion
> > > of 'scheduled' queues, all queues are 'scheduled'. Do we ever need
> > > to set the scheduled queues mask to something other than 0xff? If
> > > so, when and why?
> >
> > Yes, it seems only affect the guard band. If disabling always guard
> > band bit, we can use SCH_TRAFFIC_QUEUES to determine which queue is
> > non-scheduled queue. Only the non-scheduled queue traffic will reserve
> > the guard band. But tc-taprio qdisc cannot set scheduled or
> > non-scheduled queue now. Adding this feature can be discussed in
> > future.
> >
> > It is not reasonable to add guardband in each queue traffic in
> > default, so I disable the always guard band bit for TAS config.
>
> Ok, if true, then it makes sense to disable ALWAYS_GUARD_BAND_SCH_Q.


One question though. I know that Felix overruns the time gate, i.e. when
the time interval has any value larger than 32 ns, the switch port is
happy to send any packet of any size, regardless of whether the duration
of transmission exceeds the gate size or not. In doing so, it violates
this requirement from IEEE 802.1Q-2018 clause 8.6.8.4 Enhancements for
scheduled traffic:

-----------------------------[ cut here ]-----------------------------
In addition to the other checks carried out by the transmission selection algorithm, a frame on a traffic class
queue is not available for transmission [as required for tests (a) and (b) in 8.6.8] if the transmission gate is in
the closed state or if there is insufficient time available to transmit the entirety of that frame before the next
gate-close event (3.97) associated with that queue. A per-traffic class counter, TransmissionOverrun
(12.29.1.1.2), is incremented if the implementation detects that a frame from a given queue is still being
transmitted by the MAC when the gate-close event for that queue occurs.

NOTE 1—It is assumed that the implementation has knowledge of the transmission overheads that are involved in
transmitting a frame on a given Port and can therefore determine how long the transmission of a frame will take.
However, there can be reasons why the frame size, and therefore the length of time needed for its transmission, is
unknown; for example, where cut-through is supported, or where frame preemption is supported and there is no way of
telling in advance how many times a given frame will be preempted before its transmission is complete. It is desirable
that the schedule for such traffic is designed to accommodate the intended pattern of transmission without overrunning
the next gate-close event for the traffic classes concerned.
-----------------------------[ cut here ]-----------------------------

Is this not the reason why the guard bands were added, to make the
scheduler stop sending any frame for 1 MAX_SDU in advance of the gate
close event, so that it doesn't overrun the gate?
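The clause quoted above boils down to a per-frame availability check that a compliant scheduler performs before starting a transmission. The sketch below illustrates that rule from the standard, not how the Felix hardware implements (or fails to implement) it; the nanosecond figures are illustrative:

```python
def frame_available(frame_tx_time_ns: float, gate_open: bool,
                    time_to_gate_close_ns: float) -> bool:
    """IEEE 802.1Q-2018 8.6.8.4: a frame is available for transmission
    only if the gate is open AND the entire frame can be transmitted
    before the next gate-close event for its queue."""
    return gate_open and frame_tx_time_ns <= time_to_gate_close_ns

# A 1500B frame (~12.3 us on the wire at 1 Gbps) does not fit in a
# 5 us remaining window, so a compliant scheduler must hold it back:
print(frame_available(12304, True, 5000))   # False
# ...while a small frame (~0.7 us) still fits:
print(frame_available(672, True, 5000))     # True
```

A hardware guard band achieves the same effect by simply refusing to start any frame within one max-SDU time of the gate-close event.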
patchwork-bot+netdevbpf@kernel.org April 20, 2021, 11:20 p.m. UTC | #7
Hello:

This patch was applied to netdev/net-next.git (refs/heads/master):

On Mon, 19 Apr 2021 18:25:30 +0800 you wrote:
> ALWAYS_GUARD_BAND_SCH_Q bit in TAS config register is descripted as
> this:
> 	0: Guard band is implemented for nonschedule queues to schedule
> 	   queues transition.
> 	1: Guard band is implemented for any queue to schedule queue
> 	   transition.
>
> [...]


Here is the summary with links:
  - [net-next] net: dsa: felix: disable always guard band bit for TAS config
    https://git.kernel.org/netdev/net-next/c/316bcffe4479

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
Xiaoliang Yang April 21, 2021, 2:51 a.m. UTC | #8
Hi Vladimir,

On Tue, Apr 20, 2021 at 01:30:51PM +0300, Vladimir Oltean wrote:
> One question though. I know that Felix overruns the time gate, i.e. when the time interval has any value larger than 32 ns,
> the switch port is happy to send any packet of any size, regardless of whether the duration of transmission exceeds the
> gate size or not. In doing so, it violates this requirement from IEEE 802.1Q-2018 clause 8.6.8.4 Enhancements for
> scheduled traffic:
>
> -----------------------------[ cut here ]-----------------------------
> In addition to the other checks carried out by the transmission selection algorithm, a frame on a traffic class queue is not
> available for transmission [as required for tests (a) and (b) in 8.6.8] if the transmission gate is in the closed state or if
> there is insufficient time available to transmit the entirety of that frame before the next gate-close event (3.97)
> associated with that queue. A per-traffic class counter, TransmissionOverrun (12.29.1.1.2), is incremented if the
> implementation detects that a frame from a given queue is still being transmitted by the MAC when the gate-close event
> for that queue occurs.
>
> NOTE 1—It is assumed that the implementation has knowledge of the transmission overheads that are involved in
> transmitting a frame on a given Port and can therefore determine how long the transmission of a frame will take.
> However, there can be reasons why the frame size, and therefore the length of time needed for its transmission, is
> unknown; for example, where cut-through is supported, or where frame preemption is supported and there is no way of
> telling in advance how many times a given frame will be preempted before its transmission is complete. It is desirable
> that the schedule for such traffic is designed to accommodate the intended pattern of transmission without overrunning
> the next gate-close event for the traffic classes concerned.
> -----------------------------[ cut here ]-----------------------------
>
> Is this not the reason why the guard bands were added, to make the scheduler stop sending any frame for 1 MAX_SDU in
> advance of the gate close event, so that it doesn't overrun the gate?


Yes, the guard band reserves one MAX_SDU worth of time to ensure there is no packet in transmission when the gate closes. But it is not necessary to reserve the guard band in every GCL entry. Users can manually add guard band time before any schedule queue in their configuration if they want. For example, if they want to protect one queue, they can add a guard-band GCL entry before that queue opens. As the note you quoted says: "It is desirable that the schedule for such traffic is designed to accommodate the intended pattern of transmission without overrunning the next gate-close event for the traffic classes concerned."
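A manually added guard band of the kind described here can be expressed in a tc-taprio schedule as an all-gates-closed entry placed just before the protected queue opens. This is a sketch, not from the thread: the interface name swp0, the handle, the priority map, and the interval values are all illustrative assumptions.

```shell
# Hypothetical 300 us cycle protecting queue 7: the 12 us "S 00" entry
# closes every gate before queue 7 opens, acting as a manual guard band
# so no frame started earlier can overrun into queue 7's window.
tc qdisc replace dev swp0 parent root handle 100 taprio \
    num_tc 8 \
    map 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 \
    queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \
    base-time 0 \
    sched-entry S 7f 188000 \
    sched-entry S 00 12000 \
    sched-entry S 80 100000 \
    flags 0x2
```

The 12000 ns guard entry corresponds to one 1500B max-SDU frame time at 1 Gbps, per the figure in the commit message.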

Thanks,
Xiaoliang Yang
Michael Walle May 4, 2021, 5:05 p.m. UTC | #9
Hi,

> ALWAYS_GUARD_BAND_SCH_Q bit in TAS config register is descripted as
> this:
> 	0: Guard band is implemented for nonschedule queues to schedule
> 	   queues transition.
> 	1: Guard band is implemented for any queue to schedule queue
> 	   transition.
>
> The driver set guard band be implemented for any queue to schedule queue
> transition before, which will make each GCL time slot reserve a guard
> band time that can pass the max SDU frame. Because guard band time could
> not be set in tc-taprio now, it will use about 12000ns to pass 1500B max
> SDU. This limits each GCL time interval to be more than 12000ns.
>
> This patch change the guard band to be only implemented for nonschedule
> queues to schedule queues transition, so that there is no need to reserve
> guard band on each GCL. Users can manually add guard band time for each
> schedule queues in their configuration if they want.



As explained in another mail in this thread, all queues are marked as
scheduled. So this is actually a no-op, correct? It doesn't matter if
it is set or not for now. Dunno why we even care for this bit then.

> Signed-off-by: Xiaoliang Yang <xiaoliang.yang_1@nxp.com>
> ---
>  drivers/net/dsa/ocelot/felix_vsc9959.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
> index 789fe08cae50..2473bebe48e6 100644
> --- a/drivers/net/dsa/ocelot/felix_vsc9959.c
> +++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
> @@ -1227,8 +1227,12 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
>  	if (taprio->num_entries > VSC9959_TAS_GCL_ENTRY_MAX)
>  		return -ERANGE;
>  
> -	ocelot_rmw(ocelot, QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(port) |
> -		   QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q,
> +	/* Set port num and disable ALWAYS_GUARD_BAND_SCH_Q, which means set
> +	 * guard band to be implemented for nonschedule queues to schedule
> +	 * queues transition.
> +	 */
> +	ocelot_rmw(ocelot,
> +		   QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(port),
>  		   QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM_M |
>  		   QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q,
>  		   QSYS_TAS_PARAM_CFG_CTRL);


Anyway, I don't think this is the correct place for this:
 (1) it isn't per port, but a global bit, yet here it is done per port.
 (2) rmw, I presume, is read-modify-write, and there is one bit, CONFIG_CHANGE,
     which is set by software and cleared by hardware. What happens if it
     is cleared right after we read it? Then it will be set again, no?

So if we really care about this bit, shouldn't this be moved to switch
initialization then?

-michael
Vladimir Oltean May 4, 2021, 6:18 p.m. UTC | #10
Hi Michael,

On Tue, May 04, 2021 at 07:05:14PM +0200, Michael Walle wrote:
> Hi,
>
> > ALWAYS_GUARD_BAND_SCH_Q bit in TAS config register is descripted as
> > this:
> > 	0: Guard band is implemented for nonschedule queues to schedule
> > 	   queues transition.
> > 	1: Guard band is implemented for any queue to schedule queue
> > 	   transition.
> >
> > The driver set guard band be implemented for any queue to schedule queue
> > transition before, which will make each GCL time slot reserve a guard
> > band time that can pass the max SDU frame. Because guard band time could
> > not be set in tc-taprio now, it will use about 12000ns to pass 1500B max
> > SDU. This limits each GCL time interval to be more than 12000ns.
> >
> > This patch change the guard band to be only implemented for nonschedule
> > queues to schedule queues transition, so that there is no need to reserve
> > guard band on each GCL. Users can manually add guard band time for each
> > schedule queues in their configuration if they want.
>
> As explained in another mail in this thread, all queues are marked as
> scheduled. So this is actually a no-op, correct? It doesn't matter if
> it set or not set for now. Dunno why we even care for this bit then.


It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the available throughput when set.
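To see the size of that throughput cost, consider how much of a cycle the implicit guard bands consume when every GCL entry reserves one. The numbers below are illustrative, using the ~12.3 us guard band for a 1500B max SDU at 1 Gbps from the commit message:

```python
GUARD_BAND_NS = 12304  # wire time of one 1500B max-SDU frame at 1 Gbps

def usable_fraction(intervals_ns):
    """Fraction of the cycle still usable for transmission when every
    GCL entry implicitly loses one guard band at its end."""
    cycle = sum(intervals_ns)
    usable = sum(max(0, t - GUARD_BAND_NS) for t in intervals_ns)
    return usable / cycle

# Two 100 us entries: roughly 12% of the cycle is lost to guard bands.
print(usable_fraction([100_000, 100_000]))  # ~0.877
```

With shorter intervals the loss grows quickly, and any interval at or below one guard band time transmits nothing at all, which is the "more than 12000ns" limit the patch lifts.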

> > Signed-off-by: Xiaoliang Yang <xiaoliang.yang_1@nxp.com>
> > ---
> >  drivers/net/dsa/ocelot/felix_vsc9959.c | 8 ++++++--
> >  1 file changed, 6 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
> > index 789fe08cae50..2473bebe48e6 100644
> > --- a/drivers/net/dsa/ocelot/felix_vsc9959.c
> > +++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
> > @@ -1227,8 +1227,12 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
> >  	if (taprio->num_entries > VSC9959_TAS_GCL_ENTRY_MAX)
> >  		return -ERANGE;
> >  
> > -	ocelot_rmw(ocelot, QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(port) |
> > -		   QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q,
> > +	/* Set port num and disable ALWAYS_GUARD_BAND_SCH_Q, which means set
> > +	 * guard band to be implemented for nonschedule queues to schedule
> > +	 * queues transition.
> > +	 */
> > +	ocelot_rmw(ocelot,
> > +		   QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(port),
> >  		   QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM_M |
> >  		   QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q,
> >  		   QSYS_TAS_PARAM_CFG_CTRL);
>
> Anyway, I don't think this the correct place for this:
>  (1) it isn't per port, but a global bit, but here its done per port.


I don't understand. According to the documentation, selecting the port
whose time-aware shaper you are configuring is done through
QSYS::TAS_PARAM_CFG_CTRL.PORT_NUM.

>  (2) rmw, I presume is read-modify-write. and there is one bit CONFIG_CHAGE
>      which is set by software and cleared by hardware. What happens if it
>      will be cleared right after we read it. Then it will be set again, no?
>
> So if we really care about this bit, shouldn't this be moved to switch
> initialization then?


May I know what drew your attention to this patch? Is there something wrong?
Michael Walle May 4, 2021, 6:38 p.m. UTC | #11
Hi Vladimir,

On 2021-05-04 20:18, Vladimir Oltean wrote:
> On Tue, May 04, 2021 at 07:05:14PM +0200, Michael Walle wrote:
>> Hi,
>>
>> > ALWAYS_GUARD_BAND_SCH_Q bit in TAS config register is descripted as
>> > this:
>> > 	0: Guard band is implemented for nonschedule queues to schedule
>> > 	   queues transition.
>> > 	1: Guard band is implemented for any queue to schedule queue
>> > 	   transition.
>> >
>> > The driver set guard band be implemented for any queue to schedule queue
>> > transition before, which will make each GCL time slot reserve a guard
>> > band time that can pass the max SDU frame. Because guard band time could
>> > not be set in tc-taprio now, it will use about 12000ns to pass 1500B max
>> > SDU. This limits each GCL time interval to be more than 12000ns.
>> >
>> > This patch change the guard band to be only implemented for nonschedule
>> > queues to schedule queues transition, so that there is no need to reserve
>> > guard band on each GCL. Users can manually add guard band time for each
>> > schedule queues in their configuration if they want.
>>
>> As explained in another mail in this thread, all queues are marked as
>> scheduled. So this is actually a no-op, correct? It doesn't matter if
>> it set or not set for now. Dunno why we even care for this bit then.
>
> It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the available
> throughput when set.


Ahh, I see now. All queues are "scheduled" but the guard band only applies
for "non-scheduled" -> "scheduled" transitions. So the guard band is never
applied, right? Is that really what we want?

>> > Signed-off-by: Xiaoliang Yang <xiaoliang.yang_1@nxp.com>
>> > ---
>> >  drivers/net/dsa/ocelot/felix_vsc9959.c | 8 ++++++--
>> >  1 file changed, 6 insertions(+), 2 deletions(-)
>> >
>> > diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
>> > index 789fe08cae50..2473bebe48e6 100644
>> > --- a/drivers/net/dsa/ocelot/felix_vsc9959.c
>> > +++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
>> > @@ -1227,8 +1227,12 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
>> >  	if (taprio->num_entries > VSC9959_TAS_GCL_ENTRY_MAX)
>> >  		return -ERANGE;
>> >
>> > -	ocelot_rmw(ocelot, QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(port) |
>> > -		   QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q,
>> > +	/* Set port num and disable ALWAYS_GUARD_BAND_SCH_Q, which means set
>> > +	 * guard band to be implemented for nonschedule queues to schedule
>> > +	 * queues transition.
>> > +	 */
>> > +	ocelot_rmw(ocelot,
>> > +		   QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(port),
>> >  		   QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM_M |
>> >  		   QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q,
>> >  		   QSYS_TAS_PARAM_CFG_CTRL);
>>
>> Anyway, I don't think this the correct place for this:
>>  (1) it isn't per port, but a global bit, but here its done per port.
>
> I don't understand. According to the documentation, selecting the port
> whose time-aware shaper you are configuring is done through
> QSYS::TAS_PARAM_CFG_CTRL.PORT_NUM.


According to the LS1028A RM:

   PORT_NUM
   Specifies the port number to which the TAS_PARAMS register configurations
   (CFG_REG_1 to CFG_REG_5, TIME_INTERVAL and GATE_STATE) need to be applied.

I guess this works together with CONFIG_CHANGE and applies the mentioned
registers in an atomic way (or at a given time). There is no mention of the
ALWAYS_GUARD_BAND_SCH_Q bit nor of the register TAS_PARAM_CFG_CTRL.

But the ALWAYS_GUARD_BAND_SCH_Q description mentions "Global configuration".
That, together with the fact that it can't be read back (unless I'm missing
something), led me to the conclusion that this bit is global for the whole
switch. I may be wrong.

But in any case, (2) is more severe IMHO.

>>  (2) rmw, I presume is read-modify-write. and there is one bit CONFIG_CHAGE
>>      which is set by software and cleared by hardware. What happens if it
>>      will be cleared right after we read it. Then it will be set again, no?
>>
>> So if we really care about this bit, shouldn't this be moved to switch
>> initialization then?

>
> May I know what drew your attention to this patch? Is there something
> wrong?


-michael
Vladimir Oltean May 4, 2021, 6:50 p.m. UTC | #12
On Tue, May 04, 2021 at 08:38:29PM +0200, Michael Walle wrote:
> Hi Vladimir,
>
> On 2021-05-04 20:18, Vladimir Oltean wrote:
> > On Tue, May 04, 2021 at 07:05:14PM +0200, Michael Walle wrote:
> > > Hi,
> > >
> > > > ALWAYS_GUARD_BAND_SCH_Q bit in TAS config register is descripted as
> > > > this:
> > > > 	0: Guard band is implemented for nonschedule queues to schedule
> > > > 	   queues transition.
> > > > 	1: Guard band is implemented for any queue to schedule queue
> > > > 	   transition.
> > > >
> > > > The driver set guard band be implemented for any queue to schedule queue
> > > > transition before, which will make each GCL time slot reserve a guard
> > > > band time that can pass the max SDU frame. Because guard band time could
> > > > not be set in tc-taprio now, it will use about 12000ns to pass 1500B max
> > > > SDU. This limits each GCL time interval to be more than 12000ns.
> > > >
> > > > This patch change the guard band to be only implemented for nonschedule
> > > > queues to schedule queues transition, so that there is no need to reserve
> > > > guard band on each GCL. Users can manually add guard band time for each
> > > > schedule queues in their configuration if they want.
> > >
> > > As explained in another mail in this thread, all queues are marked as
> > > scheduled. So this is actually a no-op, correct? It doesn't matter if
> > > it set or not set for now. Dunno why we even care for this bit then.
> >
> > It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the available
> > throughput when set.
>
> Ahh, I see now. All queues are "scheduled" but the guard band only applies
> for "non-scheduled" -> "scheduled" transitions. So the guard band is never
> applied, right? Is that really what we want?


Xiaoliang explained that yes, this is what we want. If the end user
wants a guard band they can explicitly add a "sched-entry 00" in the
tc-taprio config.

> > > > Signed-off-by: Xiaoliang Yang <xiaoliang.yang_1@nxp.com>
> > > > ---
> > > >  drivers/net/dsa/ocelot/felix_vsc9959.c | 8 ++++++--
> > > >  1 file changed, 6 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
> > > > index 789fe08cae50..2473bebe48e6 100644
> > > > --- a/drivers/net/dsa/ocelot/felix_vsc9959.c
> > > > +++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
> > > > @@ -1227,8 +1227,12 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
> > > >  	if (taprio->num_entries > VSC9959_TAS_GCL_ENTRY_MAX)
> > > >  		return -ERANGE;
> > > >
> > > > -	ocelot_rmw(ocelot, QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(port) |
> > > > -		   QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q,
> > > > +	/* Set port num and disable ALWAYS_GUARD_BAND_SCH_Q, which means set
> > > > +	 * guard band to be implemented for nonschedule queues to schedule
> > > > +	 * queues transition.
> > > > +	 */
> > > > +	ocelot_rmw(ocelot,
> > > > +		   QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(port),
> > > >  		   QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM_M |
> > > >  		   QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q,
> > > >  		   QSYS_TAS_PARAM_CFG_CTRL);
> > >
> > > Anyway, I don't think this the correct place for this:
> > >  (1) it isn't per port, but a global bit, but here its done per port.
> >
> > I don't understand. According to the documentation, selecting the port
> > whose time-aware shaper you are configuring is done through
> > QSYS::TAS_PARAM_CFG_CTRL.PORT_NUM.

> 

> According to the LS1028A RM:

> 

>   PORT_NUM

>   Specifies the port number to which the TAS_PARAMS register configurations

>   (CFG_REG_1 to CFG_REG_5, TIME_INTERVAL and GATE_STATE) need to be applied.

> 

> I guess this work together with CONFIG_CHANGE and applies the mentions

> registers

> in an atomic way (or at a given time). There is no mention of the

> ALWAYS_GUARD_BAND_SCH_Q bit nor the register TAS_PARAM_CFG_CTRL.

> 

> But the ALWAYS_GUARD_BAND_SCH_Q mention its "Global configuration". That

> together with the fact that it can't be read back (unless I'm missing

> something), led me to the conclusion that this bit is global for the whole

> switch. I may be wrong.


Sorry, I don't understand what you mean to say here.

> But in any case, (2) is more severe IMHO.
>
> > >  (2) rmw, I presume is read-modify-write. and there is one bit CONFIG_CHAGE
> > >      which is set by software and cleared by hardware. What happens if it
> > >      will be cleared right after we read it. Then it will be set again, no?
> > >
> > > So if we really care about this bit, shouldn't this be moved to switch
> > > initialization then?


Sorry, again, I don't understand. Let me copy here the procedure from
vsc9959_qos_port_tas_set():

	ocelot_rmw(ocelot, QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE,
		   QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE,
		   QSYS_TAS_PARAM_CFG_CTRL); <- set the CONFIG_CHANGE bit, keep everything else the same

	ret = readx_poll_timeout(vsc9959_tas_read_cfg_status, ocelot, val,
				 !(val & QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE),
				 10, 100000); <- spin until CONFIG_CHANGE clears

Should there have been a mutex at the beginning of vsc9959_qos_port_tas_set,
ensuring that two independent user space processes configuring the TAS
of two ports cannot access the global config registers concurrently?
Probably, although my guess is that currently, the global rtnetlink
mutex prevents this from happening in practice.

> > May I know what drew your attention to this patch? Is there something
> > wrong?


^ question still remains
Michael Walle May 4, 2021, 7:08 p.m. UTC | #13
On 2021-05-04 20:50, Vladimir Oltean wrote:
> On Tue, May 04, 2021 at 08:38:29PM +0200, Michael Walle wrote:
>> Hi Vladimir,
>>
>> On 2021-05-04 20:18, Vladimir Oltean wrote:
>> > On Tue, May 04, 2021 at 07:05:14PM +0200, Michael Walle wrote:
>> > > Hi,
>> > >
>> > > > ALWAYS_GUARD_BAND_SCH_Q bit in TAS config register is descripted as
>> > > > this:
>> > > > 	0: Guard band is implemented for nonschedule queues to schedule
>> > > > 	   queues transition.
>> > > > 	1: Guard band is implemented for any queue to schedule queue
>> > > > 	   transition.
>> > > >
>> > > > The driver set guard band be implemented for any queue to schedule queue
>> > > > transition before, which will make each GCL time slot reserve a guard
>> > > > band time that can pass the max SDU frame. Because guard band time could
>> > > > not be set in tc-taprio now, it will use about 12000ns to pass 1500B max
>> > > > SDU. This limits each GCL time interval to be more than 12000ns.
>> > > >
>> > > > This patch change the guard band to be only implemented for nonschedule
>> > > > queues to schedule queues transition, so that there is no need to reserve
>> > > > guard band on each GCL. Users can manually add guard band time for each
>> > > > schedule queues in their configuration if they want.
>> > >
>> > > As explained in another mail in this thread, all queues are marked as
>> > > scheduled. So this is actually a no-op, correct? It doesn't matter if
>> > > it set or not set for now. Dunno why we even care for this bit then.
>> >
>> > It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the available
>> > throughput when set.
>>
>> Ahh, I see now. All queues are "scheduled" but the guard band only applies
>> for "non-scheduled" -> "scheduled" transitions. So the guard band is never
>> applied, right? Is that really what we want?
>
> Xiaoliang explained that yes, this is what we want. If the end user
> wants a guard band they can explicitly add a "sched-entry 00" in the
> tc-taprio config.


You're disabling the guard band, then. I figured, but isn't that
surprising for the user? Who else implements taprio? Do they do it in the
same way? I mean this behavior is passed right to userspace and has a
direct impact on how it is configured. Of course a user can add it
manually, but I'm not sure that is what we want here. At least it needs
to be documented somewhere. Or maybe it should be a switchable option.

Consider the following:
sched-entry S 01 25000
sched-entry S fe 175000
basetime 0

This doesn't guarantee that queue 0 is available at the beginning of
the cycle; in the worst case it takes up to
<begin of cycle> + ~12.5us until the frame makes it through (given
gigabit and 1518b frames).

Btw. there are also other implementations which don't need a guard
band (because they are store-and-forward and count the remaining
bytes). So yes, using a guard band and scheduling is degrading the
performance.


>> > > > Signed-off-by: Xiaoliang Yang <xiaoliang.yang_1@nxp.com>
>> > > > ---
>> > > >  drivers/net/dsa/ocelot/felix_vsc9959.c | 8 ++++++--
>> > > >  1 file changed, 6 insertions(+), 2 deletions(-)
>> > > >
>> > > > diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
>> > > > index 789fe08cae50..2473bebe48e6 100644
>> > > > --- a/drivers/net/dsa/ocelot/felix_vsc9959.c
>> > > > +++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
>> > > > @@ -1227,8 +1227,12 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
>> > > >  	if (taprio->num_entries > VSC9959_TAS_GCL_ENTRY_MAX)
>> > > >  		return -ERANGE;
>> > > >
>> > > > -	ocelot_rmw(ocelot, QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(port) |
>> > > > -		   QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q,
>> > > > +	/* Set port num and disable ALWAYS_GUARD_BAND_SCH_Q, which means set
>> > > > +	 * guard band to be implemented for nonschedule queues to schedule
>> > > > +	 * queues transition.
>> > > > +	 */
>> > > > +	ocelot_rmw(ocelot,
>> > > > +		   QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(port),
>> > > >  		   QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM_M |
>> > > >  		   QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q,
>> > > >  		   QSYS_TAS_PARAM_CFG_CTRL);
>> > >
>> > > Anyway, I don't think this the correct place for this:
>> > >  (1) it isn't per port, but a global bit, but here its done per port.
>> >
>> > I don't understand. According to the documentation, selecting the port
>> > whose time-aware shaper you are configuring is done through
>> > QSYS::TAS_PARAM_CFG_CTRL.PORT_NUM.
>> 
>> According to the LS1028A RM:
>> 
>>   PORT_NUM
>>   Specifies the port number to which the TAS_PARAMS register configurations
>>   (CFG_REG_1 to CFG_REG_5, TIME_INTERVAL and GATE_STATE) need to be applied.
>> 
>> I guess this work together with CONFIG_CHANGE and applies the mentions registers
>> in an atomic way (or at a given time). There is no mention of the
>> ALWAYS_GUARD_BAND_SCH_Q bit nor the register TAS_PARAM_CFG_CTRL.
>> 
>> But the ALWAYS_GUARD_BAND_SCH_Q mention its "Global configuration". That
>> together with the fact that it can't be read back (unless I'm missing
>> something), led me to the conclusion that this bit is global for the whole
>> switch. I may be wrong.
> 
> Sorry, I don't understand what you mean to say here.

I doubt that ALWAYS_GUARD_BAND_SCH_Q is a per-port setting. But that is
only a guess. One would have to check with the IP vendor.

>> But in any case, (2) is more severe IMHO.
>> 
>> > >  (2) rmw, I presume is read-modify-write. and there is one bit CONFIG_CHAGE
>> > >      which is set by software and cleared by hardware. What happens if it
>> > > 	 will be cleared right after we read it. Then it will be set again, no?
>> > >
>> > > So if we really care about this bit, shouldn't this be moved to switch
>> > > initialization then?
> 
> Sorry, again, I don't understand. Let me copy here the procedure from
> vsc9959_qos_port_tas_set():
> 
> 	ocelot_rmw(ocelot, QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE,
> 		   QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE,
> 		   QSYS_TAS_PARAM_CFG_CTRL); <- set the CONFIG_CHANGE bit, keep
> everything else the same
> 
> 	ret = readx_poll_timeout(vsc9959_tas_read_cfg_status, ocelot, val,
> 				 !(val & QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE),
> 				 10, 100000); <- spin until CONFIG_CHANGE clears
> 
> Should there have been a mutex at the beginning of vsc9959_qos_port_tas_set,
> ensuring that two independent user space processes configuring the TAS
> of two ports cannot access the global config registers concurrently?
> Probably, although my guess is that currently, the global rtnetlink
> mutex prevents this from happening in practice.

Ah ok, I missed that.

> 
>> > May I know what drew your attention to this patch? Is there something
>> > wrong?

See private mail.

-michael
Vladimir Oltean May 4, 2021, 7:17 p.m. UTC | #14
On Tue, May 04, 2021 at 09:08:00PM +0200, Michael Walle wrote:
> > > > > As explained in another mail in this thread, all queues are marked as
> > > > > scheduled. So this is actually a no-op, correct? It doesn't matter if
> > > > > it set or not set for now. Dunno why we even care for this bit then.
> > > >
> > > > It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the available
> > > > throughput when set.
> > > 
> > > Ahh, I see now. All queues are "scheduled" but the guard band only applies
> > > for "non-scheduled" -> "scheduled" transitions. So the guard band is never
> > > applied, right? Is that really what we want?
> > 
> > Xiaoliang explained that yes, this is what we want. If the end user
> > wants a guard band they can explicitly add a "sched-entry 00" in the
> > tc-taprio config.
> 
> You're disabling the guard band, then. I figured, but isn't that
> suprising for the user? Who else implements taprio? Do they do it in the
> same way? I mean this behavior is passed right to the userspace and have
> a direct impact on how it is configured. Of course a user can add it
> manually, but I'm not sure that is what we want here. At least it needs
> to be documented somewhere. Or maybe it should be a switchable option.
> 
> Consider the following:
> sched-entry S 01 25000
> sched-entry S fe 175000
> basetime 0
> 
> Doesn't guarantee, that queue 0 is available at the beginning of
> the cycle, in the worst case it takes up to
> <begin of cycle> + ~12.5us until the frame makes it through (given
> gigabit and 1518b frames).
> 
> Btw. there are also other implementations which don't need a guard
> band (because they are store-and-forward and cound the remaining
> bytes). So yes, using a guard band and scheduling is degrading the
> performance.

What is surprising for the user, and I mentioned this already in another
thread on this patch, is that the Felix switch overruns the time gate (a
packet taking 2 us to transmit will start transmission even if there is
only 1 us left of its time slot, delaying the packets from the next time
slot by 1 us). I guess that this is why the ALWAYS_GUARD_BAND_SCH_Q bit
exists, as a way to avoid these overruns, but it is a bit of a poor tool
for that job. Anyway, right now we disable it and live with the overruns.

FWIW, the ENETC does not overrun the time gate, the SJA1105 does. You
can't really tell just by looking at the driver code, just by testing.
It's a bit of a crapshoot.

> > Sorry, I don't understand what you mean to say here.
> 
> I doubt that ALWAYS_GUARD_BAND_SCH_Q is a per-port setting. But that is
> only a guess. One would have to check with the IP vendor.

Probably not, but I'm not sure that this is relevant one way or another,
as the driver unconditionally clears it regardless of port (or unconditionally
set it, before Xiaoliang's patch).

> > > > May I know what drew your attention to this patch? Is there something
> > > > wrong?
> 
> See private mail.

Responded. I'm really curious if this change makes any difference at all
to your use case.
Michael Walle May 4, 2021, 8:23 p.m. UTC | #15
Am 2021-05-04 21:17, schrieb Vladimir Oltean:
> On Tue, May 04, 2021 at 09:08:00PM +0200, Michael Walle wrote:
>> > > > > As explained in another mail in this thread, all queues are marked as
>> > > > > scheduled. So this is actually a no-op, correct? It doesn't matter if
>> > > > > it set or not set for now. Dunno why we even care for this bit then.
>> > > >
>> > > > It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the available
>> > > > throughput when set.
>> > >
>> > > Ahh, I see now. All queues are "scheduled" but the guard band only applies
>> > > for "non-scheduled" -> "scheduled" transitions. So the guard band is never
>> > > applied, right? Is that really what we want?
>> >
>> > Xiaoliang explained that yes, this is what we want. If the end user
>> > wants a guard band they can explicitly add a "sched-entry 00" in the
>> > tc-taprio config.
>> 
>> You're disabling the guard band, then. I figured, but isn't that
>> suprising for the user? Who else implements taprio? Do they do it in the
>> same way? I mean this behavior is passed right to the userspace and have
>> a direct impact on how it is configured. Of course a user can add it
>> manually, but I'm not sure that is what we want here. At least it needs
>> to be documented somewhere. Or maybe it should be a switchable option.
>> 
>> Consider the following:
>> sched-entry S 01 25000
>> sched-entry S fe 175000
>> basetime 0
>> 
>> Doesn't guarantee, that queue 0 is available at the beginning of
>> the cycle, in the worst case it takes up to
>> <begin of cycle> + ~12.5us until the frame makes it through (given
>> gigabit and 1518b frames).
>> 
>> Btw. there are also other implementations which don't need a guard
>> band (because they are store-and-forward and cound the remaining
>> bytes). So yes, using a guard band and scheduling is degrading the
>> performance.
> 
> What is surprising for the user, and I mentioned this already in another
> thread on this patch, is that the Felix switch overruns the time gate (a
> packet taking 2 us to transmit will start transmission even if there is
> only 1 us left of its time slot, delaying the packets from the next time
> slot by 1 us). I guess that this is why the ALWAYS_GUARD_BAND_SCH_Q bit
> exists, as a way to avoid these overruns, but it is a bit of a poor tool
> for that job. Anyway, right now we disable it and live with the overruns.

We are talking about the same thing here. Why is that a poor tool?

> FWIW, the ENETC does not overrun the time gate, the SJA1105 does. You
> can't really tell just by looking at the driver code, just by testing.
> It's a bit of a crapshoot.

I was speaking of other switches, I see there is also a Hirschmann
switch (hellcreek) supported in Linux, for example.

Shouldn't the goal be to make the configuration of the taprio qdisc
independent of the switch? If on one you'll have to manually define the
guard band by inserting dead-time scheduler entries and on another this
is already handled by the hardware (like it would be with
ALWAYS_GUARD_BAND_SCH_Q, or if it doesn't need it at all), this goal
isn't met.

Also what do you expect if you use the following configuration:
sched-entry S 01 5000
sched-entry S fe <some large number>

Will queue 0 be able to send traffic? With this patch, it seems to me
that this isn't always the case anymore. If there is a large packet
just sent at the end of the second cycle, the first might even be
skipped completely.
Will a user of taprio (without knowledge of the underlying switch)
assume that they can send traffic up to ~600 bytes? I'd say yes.

-michael
Vladimir Oltean May 4, 2021, 9:33 p.m. UTC | #16
[ trimmed the CC list, as this is most likely spam for most people ]

On Tue, May 04, 2021 at 10:23:11PM +0200, Michael Walle wrote:
> Am 2021-05-04 21:17, schrieb Vladimir Oltean:
> > On Tue, May 04, 2021 at 09:08:00PM +0200, Michael Walle wrote:
> > > > > > > As explained in another mail in this thread, all queues are marked as
> > > > > > > scheduled. So this is actually a no-op, correct? It doesn't matter if
> > > > > > > it set or not set for now. Dunno why we even care for this bit then.
> > > > > >
> > > > > > It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the available
> > > > > > throughput when set.
> > > > >
> > > > > Ahh, I see now. All queues are "scheduled" but the guard band only applies
> > > > > for "non-scheduled" -> "scheduled" transitions. So the guard band is never
> > > > > applied, right? Is that really what we want?
> > > >
> > > > Xiaoliang explained that yes, this is what we want. If the end user
> > > > wants a guard band they can explicitly add a "sched-entry 00" in the
> > > > tc-taprio config.
> > > 
> > > You're disabling the guard band, then. I figured, but isn't that
> > > suprising for the user? Who else implements taprio? Do they do it in the
> > > same way? I mean this behavior is passed right to the userspace and have
> > > a direct impact on how it is configured. Of course a user can add it
> > > manually, but I'm not sure that is what we want here. At least it needs
> > > to be documented somewhere. Or maybe it should be a switchable option.
> > > 
> > > Consider the following:
> > > sched-entry S 01 25000
> > > sched-entry S fe 175000
> > > basetime 0
> > > 
> > > Doesn't guarantee, that queue 0 is available at the beginning of
> > > the cycle, in the worst case it takes up to
> > > <begin of cycle> + ~12.5us until the frame makes it through (given
> > > gigabit and 1518b frames).
> > > 
> > > Btw. there are also other implementations which don't need a guard
> > > band (because they are store-and-forward and cound the remaining
> > > bytes). So yes, using a guard band and scheduling is degrading the
> > > performance.
> > 
> > What is surprising for the user, and I mentioned this already in another
> > thread on this patch, is that the Felix switch overruns the time gate (a
> > packet taking 2 us to transmit will start transmission even if there is
> > only 1 us left of its time slot, delaying the packets from the next time
> > slot by 1 us). I guess that this is why the ALWAYS_GUARD_BAND_SCH_Q bit
> > exists, as a way to avoid these overruns, but it is a bit of a poor tool
> > for that job. Anyway, right now we disable it and live with the overruns.
> 
> We are talking about the same thing here. Why is that a poor tool?

It is a poor tool because it revolves around the idea of "scheduled
queues" and "non-scheduled queues".

Consider the following tc-taprio schedule:

	sched-entry S 81 2000 # TC 7 and 0 open, all others closed
	sched-entry S 82 2000 # TC 7 and 1 open, all others closed
	sched-entry S 84 2000 # TC 7 and 2 open, all others closed
	sched-entry S 88 2000 # TC 7 and 3 open, all others closed
	sched-entry S 90 2000 # TC 7 and 4 open, all others closed
	sched-entry S a0 2000 # TC 7 and 5 open, all others closed
	sched-entry S c0 2000 # TC 7 and 6 open, all others closed

Otherwise said, traffic class 7 should be able to send any time it
wishes.

With the ALWAYS_GUARD_BAND_SCH_Q bit, there will be hiccups in packet
transmission for TC 7. For example, at the end of every time slot,
the hardware will insert a guard band for TC 7 because there is a
scheduled-queue-to-scheduled-queue transition, and it has been told to
do that. But a packet with TC 7 should be transmitted at any time,
because that's what we told the port to do!

Alternatively, we could tell the switch that TC 7 is "scheduled", and
the others are "not scheduled". Then it would implement the guard band
at the end of TCs 0-6, but it wouldn't for packets sent in TC 7. But
when you look at the overall schedule I described above, it kind of looks
like TCs 0-6 are the ones that are "scheduled" and TC 7 looks like the
one which isn't "scheduled" but can send at any time it pleases.

Odd, just odd. It's clear that someone had something in mind, it's just
not clear what. I would actually appreciate if somebody from Microchip
could chime in and say "no, you're wrong", and then explain.

> > FWIW, the ENETC does not overrun the time gate, the SJA1105 does. You
> > can't really tell just by looking at the driver code, just by testing.
> > It's a bit of a crapshoot.
> 
> I was speaking of other switches, I see there is also a hirschmann
> switch (hellcreek) supported in linux, for example.
> 
> Shouldn't the goal to make the configuration of the taprio qdisc
> independent of the switch. If on one you'll have to manually define the
> guard band by inserting dead-time scheduler entries and on another this
> is already handled by the hardware (like it would be with
> ALWAYS_GUARD_BAND_SCH_Q or if it doesn't need it at all), this goal
> isn't met.
> 
> Also what do you expect if you use the following configuration:
> sched-entry S 01 5000
> sched-entry S fe <some large number>
> 
> Will queue 0 be able to send traffic? To me, with this patch, it seems
> to me that this isn't always the case anymore. If there is a large packet
> just sent at the end of the second cycle, the first might even be skipped
> completely.
> Will a user of the taprio (without knowledge of the underlying switch)
> assume that it can send traffic up to ~600 bytes? I'd say yes.

Yeah, I think that if a switch overruns a packet's reserved time gate,
then the above tc-taprio schedule is as good as not having any. I didn't
say that overruns are not a problem, I just said that the ALWAYS_blah_blah
bit isn't the silver-bullet solution you think it is.
Michael Walle May 6, 2021, 1:25 p.m. UTC | #17
Am 2021-05-04 23:33, schrieb Vladimir Oltean:
> [ trimmed the CC list, as this is most likely spam for most people ]
> 
> On Tue, May 04, 2021 at 10:23:11PM +0200, Michael Walle wrote:
>> Am 2021-05-04 21:17, schrieb Vladimir Oltean:
>> > On Tue, May 04, 2021 at 09:08:00PM +0200, Michael Walle wrote:
>> > > > > > > As explained in another mail in this thread, all queues are marked as
>> > > > > > > scheduled. So this is actually a no-op, correct? It doesn't matter if
>> > > > > > > it set or not set for now. Dunno why we even care for this bit then.
>> > > > > >
>> > > > > > It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the available
>> > > > > > throughput when set.
>> > > > >
>> > > > > Ahh, I see now. All queues are "scheduled" but the guard band only applies
>> > > > > for "non-scheduled" -> "scheduled" transitions. So the guard band is never
>> > > > > applied, right? Is that really what we want?
>> > > >
>> > > > Xiaoliang explained that yes, this is what we want. If the end user
>> > > > wants a guard band they can explicitly add a "sched-entry 00" in the
>> > > > tc-taprio config.
>> > >
>> > > You're disabling the guard band, then. I figured, but isn't that
>> > > suprising for the user? Who else implements taprio? Do they do it in the
>> > > same way? I mean this behavior is passed right to the userspace and have
>> > > a direct impact on how it is configured. Of course a user can add it
>> > > manually, but I'm not sure that is what we want here. At least it needs
>> > > to be documented somewhere. Or maybe it should be a switchable option.
>> > >
>> > > Consider the following:
>> > > sched-entry S 01 25000
>> > > sched-entry S fe 175000
>> > > basetime 0
>> > >
>> > > Doesn't guarantee, that queue 0 is available at the beginning of
>> > > the cycle, in the worst case it takes up to
>> > > <begin of cycle> + ~12.5us until the frame makes it through (given
>> > > gigabit and 1518b frames).
>> > >
>> > > Btw. there are also other implementations which don't need a guard
>> > > band (because they are store-and-forward and cound the remaining
>> > > bytes). So yes, using a guard band and scheduling is degrading the
>> > > performance.
>> >
>> > What is surprising for the user, and I mentioned this already in another
>> > thread on this patch, is that the Felix switch overruns the time gate (a
>> > packet taking 2 us to transmit will start transmission even if there is
>> > only 1 us left of its time slot, delaying the packets from the next time
>> > slot by 1 us). I guess that this is why the ALWAYS_GUARD_BAND_SCH_Q bit
>> > exists, as a way to avoid these overruns, but it is a bit of a poor tool
>> > for that job. Anyway, right now we disable it and live with the overruns.
>> 
>> We are talking about the same thing here. Why is that a poor tool?
> 
> It is a poor tool because it revolves around the idea of "scheduled
> queues" and "non-scheduled queues".
> 
> Consider the following tc-taprio schedule:
> 
> 	sched-entry S 81 2000 # TC 7 and 0 open, all others closed
> 	sched-entry S 82 2000 # TC 7 and 1 open, all others closed
> 	sched-entry S 84 2000 # TC 7 and 2 open, all others closed
> 	sched-entry S 88 2000 # TC 7 and 3 open, all others closed
> 	sched-entry S 90 2000 # TC 7 and 4 open, all others closed
> 	sched-entry S a0 2000 # TC 7 and 5 open, all others closed
> 	sched-entry S c0 2000 # TC 7 and 6 open, all others closed
> 
> Otherwise said, traffic class 7 should be able to send any time it wishes.

What is the use case behind that? TC7 (with the highest priority)
may always take precedence over the other TCs, so what is the point
of having a dedicated window for the others?

Anyway, I've tried it and there are no hiccups. I've measured
the delta between the start of successive packets and they are
always ~12370ns for a 1518b frame. TC7 is open all the time,
which makes sense. It only happens if you actually close the gate,
e.g. if you have a sched-entry where the TC7 bit is not set. In this
case, I can see a difference between ALWAYS_GUARD_BAND_SCH_Q set and not
set. If it is set, there is up to a ~12.5us delay added (of course
it depends on when the former frame was scheduled).

It seems that it also needs to be a 1->0 transition.

You've already mentioned that the switch violates the Qbv standard.
What makes you think so? IMHO before that patch, it wasn't violated.
Now it likely is (still have to confirm that). How can this
be reasonable?

If you have a look at the initial commit message, it is about
making it possible to have a smaller gate window, but that is not
possible because of the current guard band of ~12.5us. It seems
to be a shortcut for not having the MAXSDU (and thus the length
of the guard band) configurable. Yes, (static) guard bands will
have a performance impact, as also described in [1]. You are trading
the correctness of the TAS for performance. And it is the sole
purpose of Qbv to have a deterministic way (in terms of timing) of
sending the frames.

And telling the user, hey, we know we violate the Qbv standard,
please insert the guard bands yourself if you really need them, is
not a real solution. As already mentioned, (1) it is not documented
anywhere, (2) it can't be shared among other switches (unless they do
the same workaround) and (3) what am I supposed to do for TSN compliance
testing? Modifying the schedule that is about to be checked (and thus
given by the compliance suite)?

> With the ALWAYS_GUARD_BAND_SCH_Q bit, there will be hiccups in packet
> transmission for TC 7. For example, at the end of every time slot,
> the hardware will insert a guard band for TC 7 because there is a
> scheduled-queue-to-scheduled-queue transition, and it has been told to
> do that. But a packet with TC 7 should be transmitted at any time,
> because that's what we told the port to do!
> 
> Alternatively, we could tell the switch that TC 7 is "scheduled", and
> the others are "not scheduled". Then it would implement the guard band
> at the end of TCs 0-6, but it wouldn't for packets sent in TC 7. But
> when you look at the overall schedule I described above, it kinds looks
> like TCs 0-6 are the ones that are "scheduled" and TC 7 looks like the
> one which isn't "scheduled" but can send at any time it pleases.
> 
> Odd, just odd. It's clear that someone had something in mind, it's just
> not clear what. I would actually appreciate if somebody from Microchip
> could chime in and say "no, you're wrong", and then explain.

If I had to make a bet, the distinction between "scheduled" and
"non-scheduled" is there to give more control: for traffic classes you
trust and whose traffic you can engineer, you don't really need the
guard band, while for arbitrary traffic you can't really say anything
about, you do need it.

>> > FWIW, the ENETC does not overrun the time gate, the SJA1105 does. You
>> > can't really tell just by looking at the driver code, just by testing.
>> > It's a bit of a crapshoot.
>> 
>> I was speaking of other switches, I see there is also a hirschmann
>> switch (hellcreek) supported in linux, for example.
>> 
>> Shouldn't the goal to make the configuration of the taprio qdisc
>> independent of the switch. If on one you'll have to manually define the
>> guard band by inserting dead-time scheduler entries and on another this
>> is already handled by the hardware (like it would be with
>> ALWAYS_GUARD_BAND_SCH_Q or if it doesn't need it at all), this goal
>> isn't met.
>> 
>> Also what do you expect if you use the following configuration:
>> sched-entry S 01 5000
>> sched-entry S fe <some large number>
>> 
>> Will queue 0 be able to send traffic? To me, with this patch, it seems
>> to me that this isn't always the case anymore. If there is a large packet
>> just sent at the end of the second cycle, the first might even be skipped
>> completely.
>> Will a user of the taprio (without knowledge of the underlying switch)
>> assume that it can send traffic up to ~600 bytes? I'd say yes.
> 
> Yeah, I think that if a switch overruns a packet's reserved time gate,
> then the above tc-taprio schedule is as good as not having any. I didn't
> say that overruns are not a problem, I just said that the ALWAYS_blah_blah
> bit isn't as silver-bullet for a solution as you think.

See above.

-michael

[1] 
https://www.belden.com/hubfs/resources/knowledge/white-papers/tsn-time-sensitive-networking.pdf
Vladimir Oltean May 6, 2021, 1:50 p.m. UTC | #18
On Thu, May 06, 2021 at 03:25:07PM +0200, Michael Walle wrote:
> Am 2021-05-04 23:33, schrieb Vladimir Oltean:
> > [ trimmed the CC list, as this is most likely spam for most people ]
> > 
> > On Tue, May 04, 2021 at 10:23:11PM +0200, Michael Walle wrote:
> > > Am 2021-05-04 21:17, schrieb Vladimir Oltean:
> > > > On Tue, May 04, 2021 at 09:08:00PM +0200, Michael Walle wrote:
> > > > > > > > > As explained in another mail in this thread, all queues are marked as
> > > > > > > > > scheduled. So this is actually a no-op, correct? It doesn't matter if
> > > > > > > > > it set or not set for now. Dunno why we even care for this bit then.
> > > > > > > >
> > > > > > > > It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the available
> > > > > > > > throughput when set.
> > > > > > >
> > > > > > > Ahh, I see now. All queues are "scheduled" but the guard band only applies
> > > > > > > for "non-scheduled" -> "scheduled" transitions. So the guard band is never
> > > > > > > applied, right? Is that really what we want?
> > > > > >
> > > > > > Xiaoliang explained that yes, this is what we want. If the end user
> > > > > > wants a guard band they can explicitly add a "sched-entry 00" in the
> > > > > > tc-taprio config.
> > > > >
> > > > > You're disabling the guard band, then. I figured, but isn't that
> > > > > suprising for the user? Who else implements taprio? Do they do it in the
> > > > > same way? I mean this behavior is passed right to the userspace and have
> > > > > a direct impact on how it is configured. Of course a user can add it
> > > > > manually, but I'm not sure that is what we want here. At least it needs
> > > > > to be documented somewhere. Or maybe it should be a switchable option.
> > > > >
> > > > > Consider the following:
> > > > > sched-entry S 01 25000
> > > > > sched-entry S fe 175000
> > > > > basetime 0
> > > > >
> > > > > Doesn't guarantee, that queue 0 is available at the beginning of
> > > > > the cycle, in the worst case it takes up to
> > > > > <begin of cycle> + ~12.5us until the frame makes it through (given
> > > > > gigabit and 1518b frames).
> > > > >
> > > > > Btw. there are also other implementations which don't need a guard
> > > > > band (because they are store-and-forward and cound the remaining
> > > > > bytes). So yes, using a guard band and scheduling is degrading the
> > > > > performance.
> > > >
> > > > What is surprising for the user, and I mentioned this already in another
> > > > thread on this patch, is that the Felix switch overruns the time gate (a
> > > > packet taking 2 us to transmit will start transmission even if there is
> > > > only 1 us left of its time slot, delaying the packets from the next time
> > > > slot by 1 us). I guess that this is why the ALWAYS_GUARD_BAND_SCH_Q bit
> > > > exists, as a way to avoid these overruns, but it is a bit of a poor tool
> > > > for that job. Anyway, right now we disable it and live with the overruns.
> > > 
> > > We are talking about the same thing here. Why is that a poor tool?
> > 
> > It is a poor tool because it revolves around the idea of "scheduled
> > queues" and "non-scheduled queues".
> > 
> > Consider the following tc-taprio schedule:
> > 
> > 	sched-entry S 81 2000 # TC 7 and 0 open, all others closed
> > 	sched-entry S 82 2000 # TC 7 and 1 open, all others closed
> > 	sched-entry S 84 2000 # TC 7 and 2 open, all others closed
> > 	sched-entry S 88 2000 # TC 7 and 3 open, all others closed
> > 	sched-entry S 90 2000 # TC 7 and 4 open, all others closed
> > 	sched-entry S a0 2000 # TC 7 and 5 open, all others closed
> > 	sched-entry S c0 2000 # TC 7 and 6 open, all others closed
> > 
> > Otherwise said, traffic class 7 should be able to send any time it wishes.
> 
> What is the use case behind that? TC7 (with the highest priority)
> may always take precedence of the other TCs, thus what is the point
> of having a dedicated window for the others.

Worst case latency is obviously better for an intermittent stream (not
more than one packet in flight at a time) in TC7 than it is for any
stream in TC6-TC0. But intermittent streams in TC6-TC0 also have their
own worst case guarantees (assuming that 2000 ns is enough to fit one
TC 7 frame and one frame from the TC6-TC0 range).

> Anyway, I've tried it and there are no hiccups. I've measured

> the delta between the start of successive packets and they are

> always ~12370ns for a 1518b frame. TC7 is open all the time,

> which makes sense. It only happens if you actually close the gate,

> eg. you have a sched-entry where a TC7 bit is not set. In this case,

> I can see a difference between ALWAYS_GUARD_BAND_SCH_Q set and not

> set. If it is set, there is up to a ~12.5us delay added (of course

> it depends on when the former frame was scheduled).

> 

> It seems that also needs to be 1->0 transition.

> 

> You've already mentioned that the switch violates the Qbv standard.

> What makes you think so? IMHO before that patch, it wasn't violated.

> Now it likely is (still have to confirm that). How can this

> be reasonable?


Ah, ok, if you need an open->close transition in order for the auto
guard banding to kick in, then it makes more sense. I didn't actually
measure this, it was based just upon my reading of the user manual.

I won't oppose a revert, let's see what Xiaoliang has to object.

> If you have a look at the initial commit message, it is about

> making it possible to have a smaller gate window, but that is not

> possible because of the current guard band of ~12.5us. It seems

> to be a shortcut for not having the MAXSDU (and thus the length

> of the guard band) configurable. Yes (static) guard bands will

> have a performance impact, also described in [1]. You are trading

> the correctness of the TAS for performance. And it is the sole

> purpose of Qbv to have a deterministic way (in terms of timing) of

> sending the frames.


Ok, so instead of checking on a per-packet basis whether it's going to
fit at the end of its time slot or not, the guard band is just added for
the maximum SDU.
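For reference, the ~12.5 us figure that keeps coming up in this thread can be reproduced with a quick calculation (a sketch; the 20 bytes of preamble/SFD/inter-frame gap are the standard Ethernet per-frame wire overhead, not a figure taken from this thread):

```python
def wire_time_ns(frame_bytes: int, link_speed_bps: int = 1_000_000_000) -> float:
    """Transmission time of one frame, including the 7-byte preamble,
    1-byte SFD and 12-byte inter-frame gap that also occupy the wire."""
    overhead = 7 + 1 + 12
    return (frame_bytes + overhead) * 8 * 1e9 / link_speed_bps

# A 1518-byte frame at gigabit takes ~12304 ns on the wire, which is
# where the ~12.5 us static guard band for the max SDU comes from.
print(round(wire_time_ns(1518)))  # prints 12304
```

This also matches Michael's measured ~12370 ns delta between successive 1518-byte frames, modulo a few extra bytes (e.g. a VLAN tag).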

> And telling the user, hey, we know we violate the Qbv standard,

> please insert the guard bands yourself if you really need them is

> not a real solution. As already mentioned, (1) it is not documented

> anywhere, (2) can't be shared among other switches (unless they do

> the same workaround) and (3) what am I supposed to do for TSN compliance

> testing. Modifying the schedule that is about to be checked (and thus

> given by the compliance suite)?

>

> > With the ALWAYS_GUARD_BAND_SCH_Q bit, there will be hiccups in packet

> > transmission for TC 7. For example, at the end of every time slot,

> > the hardware will insert a guard band for TC 7 because there is a

> > scheduled-queue-to-scheduled-queue transition, and it has been told to

> > do that. But a packet with TC 7 should be transmitted at any time,

> > because that's what we told the port to do!

> > 

> > Alternatively, we could tell the switch that TC 7 is "scheduled", and

> > the others are "not scheduled". Then it would implement the guard band

> > at the end of TCs 0-6, but it wouldn't for packets sent in TC 7. But

> > when you look at the overall schedule I described above, it kind of looks

> > like TCs 0-6 are the ones that are "scheduled" and TC 7 looks like the

> > one which isn't "scheduled" but can send at any time it pleases.

> > 

> > Odd, just odd. It's clear that someone had something in mind, it's just

> > not clear what. I would actually appreciate if somebody from Microchip

> > could chime in and say "no, you're wrong", and then explain.

> 

> If I had to make a bet, the distinction between "scheduled" and

> "non-scheduled" is there to have more control for some traffic classes

> you trust and where you can engineer the traffic, so you don't really

> need the guard band and between arbitrary traffic where you can't really

> say anything about and thus need the guard band.


I still don't know if I understand properly. You mean that "scheduled"
traffic is traffic sent synchronized with the switch's schedule, and
which does not need guard banding at the end of its time slot because
the sender is playing nice?
Yes, but then, do you gain anything at all by disabling that guard band
and allowing the sender to overrun if they want to? I still don't see
why overruns are permitted by the switch in certain configurations.

> > > > FWIW, the ENETC does not overrun the time gate, the SJA1105 does. You

> > > > can't really tell just by looking at the driver code, just by testing.

> > > > It's a bit of a crapshoot.

> > > 

> > > I was speaking of other switches, I see there is also a hirschmann

> > > switch (hellcreek) supported in linux, for example.

> > > 

> > > Shouldn't the goal to make the configuration of the taprio qdisc

> > > independent of the switch. If on one you'll have to manually define

> > > the

> > > guard band by inserting dead-time scheduler entries and on another

> > > this

> > > is already handled by the hardware (like it would be with

> > > ALWAYS_GUARD_BAND_SCH_Q or if it doesn't need it at all), this goal

> > > isn't met.

> > > 

> > > Also what do you expect if you use the following configuration:

> > > sched-entry S 01 5000

> > > sched-entry S fe <some large number>

> > > 

> > > Will queue 0 be able to send traffic? To me, with this patch, it seems

> > > to me that this isn't always the case anymore. If there is a large

> > > packet

> > > just sent at the end of the second cycle, the first might even be

> > > skipped

> > > completely.

> > > Will a user of the taprio (without knowledge of the underlying switch)

> > > assume that it can send traffic up to ~600 bytes? I'd say yes.

> > 

> > Yeah, I think that if a switch overruns a packet's reserved time gate,

> > then the above tc-taprio schedule is as good as not having any. I didn't

> > say that overruns are not a problem, I just said that the

> > ALWAYS_blah_blah

> > bit isn't as silver-bullet for a solution as you think.

> 

> See above.

> 

> -michael

> 

> [1] https://www.belden.com/hubfs/resources/knowledge/white-papers/tsn-time-sensitive-networking.pdf
Michael Walle May 6, 2021, 2:20 p.m. UTC | #19
Am 2021-05-06 15:50, schrieb Vladimir Oltean:
> On Thu, May 06, 2021 at 03:25:07PM +0200, Michael Walle wrote:

>> Am 2021-05-04 23:33, schrieb Vladimir Oltean:

>> > [ trimmed the CC list, as this is most likely spam for most people ]

>> >

>> > On Tue, May 04, 2021 at 10:23:11PM +0200, Michael Walle wrote:

[..]

>> > With the ALWAYS_GUARD_BAND_SCH_Q bit, there will be hiccups in packet

>> > transmission for TC 7. For example, at the end of every time slot,

>> > the hardware will insert a guard band for TC 7 because there is a

>> > scheduled-queue-to-scheduled-queue transition, and it has been told to

>> > do that. But a packet with TC 7 should be transmitted at any time,

>> > because that's what we told the port to do!

>> >

>> > Alternatively, we could tell the switch that TC 7 is "scheduled", and

>> > the others are "not scheduled". Then it would implement the guard band

>> > at the end of TCs 0-6, but it wouldn't for packets sent in TC 7. But

>> > when you look at the overall schedule I described above, it kind of looks

>> > like TCs 0-6 are the ones that are "scheduled" and TC 7 looks like the

>> > one which isn't "scheduled" but can send at any time it pleases.

>> >

>> > Odd, just odd. It's clear that someone had something in mind, it's just

>> > not clear what. I would actually appreciate if somebody from Microchip

>> > could chime in and say "no, you're wrong", and then explain.

>> 

>> If I had to make a bet, the distinction between "scheduled" and

>> "non-scheduled" is there to have more control for some traffic classes

>> you trust and where you can engineer the traffic, so you don't really

>> need the guard band and between arbitrary traffic where you can't 

>> really

>> say anything about and thus need the guard band.

> 

> I still don't know if I understand properly. You mean that "scheduled"

> traffic is traffic sent synchronized with the switch's schedule, and

> which does not need guard banding at the end of its time slot because

> the sender is playing nice?


Yes exactly. If you have a low-level application, that might be a
reasonable optimization.

> Yes, but then, do you gain anything at all by disabling that guard band

> and allowing the sender to overrun if they want to? I still don't see

> why overruns are permitted by the switch in certain configurations.


First, I suspect that this shouldn't happen because, in your words,
the sender is playing nice and won't send outside its allocated time
frame.
Second, you don't waste the (possible) dead time on the guard band,
which matters especially for smaller windows.

Consider the following gate timings:

(1) gate open event
(2)-(3) guard band
(3) gate close event

       (1)                 (2)      (3)
        _________ ... ______________
______|                    |_______|________


And the following timings for a frame:

(A)           ___... ___
_____________|          |___________________

(B)                   ___... ___
_____________________|          |___________

(C)                          ____
____________________________|    |__________


(A) and (B) can be sent if there is a guard band, (C) can only
be sent if there is no guard band. But in the case of (C) you
have to trust the sender to stop sending before reaching (3).
For (B) you don't have to trust the sender because the end of the
frame cannot be later than (3).
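The per-frame admission check that store-and-forward implementations perform, and which the gate timings above illustrate, can be sketched as follows (hypothetical helper, not actual driver or hardware code; the 20-byte preamble/SFD/IFG overhead is assumed):

```python
def may_transmit(now_ns: int, gate_close_ns: int, frame_bytes: int,
                 link_speed_bps: int = 1_000_000_000) -> bool:
    """Start transmission only if the whole frame, plus 20 bytes of
    preamble/SFD/inter-frame gap, finishes no later than the
    gate-close event (3)."""
    tx_ns = (frame_bytes + 20) * 8 * 1e9 / link_speed_bps
    return now_ns + tx_ns <= gate_close_ns

# Case (B): a 1518-byte frame started 12304 ns before gate close just fits.
print(may_transmit(0, 12304, 1518))     # True
# Case (C): started too late, the frame would overrun the gate, so hold it.
print(may_transmit(1000, 12304, 1518))  # False
```

A static guard band approximates this check by always assuming the worst case (max SDU), which is exactly the throughput trade-off discussed above.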

-michael
Michael Walle May 6, 2021, 2:41 p.m. UTC | #20
Am 2021-05-06 15:50, schrieb Vladimir Oltean:
> On Thu, May 06, 2021 at 03:25:07PM +0200, Michael Walle wrote:

>> Am 2021-05-04 23:33, schrieb Vladimir Oltean:

>> > [ trimmed the CC list, as this is most likely spam for most people ]

>> >

>> > On Tue, May 04, 2021 at 10:23:11PM +0200, Michael Walle wrote:

>> > > Am 2021-05-04 21:17, schrieb Vladimir Oltean:

>> > > > On Tue, May 04, 2021 at 09:08:00PM +0200, Michael Walle wrote:

>> > > > > > > > > As explained in another mail in this thread, all queues are marked as

>> > > > > > > > > scheduled. So this is actually a no-op, correct? It doesn't matter if

>> > > > > > > > > it set or not set for now. Dunno why we even care for this bit then.

>> > > > > > > >

>> > > > > > > > It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the available

>> > > > > > > > throughput when set.

>> > > > > > >

>> > > > > > > Ahh, I see now. All queues are "scheduled" but the guard band only

>> > > > > > > applies

>> > > > > > > for "non-scheduled" -> "scheduled" transitions. So the guard band is

>> > > > > > > never

>> > > > > > > applied, right? Is that really what we want?

>> > > > > >

>> > > > > > Xiaoliang explained that yes, this is what we want. If the end user

>> > > > > > wants a guard band they can explicitly add a "sched-entry 00" in the

>> > > > > > tc-taprio config.

>> > > > >

>> > > > > You're disabling the guard band, then. I figured, but isn't that

>> > > > > surprising for the user? Who else implements taprio? Do they do it in

>> > > > > the

>> > > > > same way? I mean this behavior is passed right to the userspace and

>> > > > > have

>> > > > > a direct impact on how it is configured. Of course a user can add it

>> > > > > manually, but I'm not sure that is what we want here. At least it

>> > > > > needs

>> > > > > to be documented somewhere. Or maybe it should be a switchable option.

>> > > > >

>> > > > > Consider the following:

>> > > > > sched-entry S 01 25000

>> > > > > sched-entry S fe 175000

>> > > > > basetime 0

>> > > > >

>> > > > > Doesn't guarantee, that queue 0 is available at the beginning of

>> > > > > the cycle, in the worst case it takes up to

>> > > > > <begin of cycle> + ~12.5us until the frame makes it through (given

>> > > > > gigabit and 1518b frames).

>> > > > >

>> > > > > Btw. there are also other implementations which don't need a guard

>> > > > > band (because they are store-and-forward and count the remaining

>> > > > > bytes). So yes, using a guard band and scheduling is degrading the

>> > > > > performance.

>> > > >

>> > > > What is surprising for the user, and I mentioned this already in another

>> > > > thread on this patch, is that the Felix switch overruns the time gate (a

>> > > > packet taking 2 us to transmit will start transmission even if there is

>> > > > only 1 us left of its time slot, delaying the packets from the next time

>> > > > slot by 1 us). I guess that this is why the ALWAYS_GUARD_BAND_SCH_Q bit

>> > > > exists, as a way to avoid these overruns, but it is a bit of a poor tool

>> > > > for that job. Anyway, right now we disable it and live with the

>> > > > overruns.

>> > >

>> > > We are talking about the same thing here. Why is that a poor tool?

>> >

>> > It is a poor tool because it revolves around the idea of "scheduled

>> > queues" and "non-scheduled queues".

>> >

>> > Consider the following tc-taprio schedule:

>> >

>> > 	sched-entry S 81 2000 # TC 7 and 0 open, all others closed

>> > 	sched-entry S 82 2000 # TC 7 and 1 open, all others closed

>> > 	sched-entry S 84 2000 # TC 7 and 2 open, all others closed

>> > 	sched-entry S 88 2000 # TC 7 and 3 open, all others closed

>> > 	sched-entry S 90 2000 # TC 7 and 4 open, all others closed

>> > 	sched-entry S a0 2000 # TC 7 and 5 open, all others closed

>> > 	sched-entry S c0 2000 # TC 7 and 6 open, all others closed

>> >

>> > Otherwise said, traffic class 7 should be able to send any time it

>> > wishes.

>> 

>> What is the use case behind that? TC7 (with the highest priority)

>> may always take precedence over the other TCs, thus what is the point

>> of having a dedicated window for the others.

> 

> Worst case latency is obviously better for an intermittent stream (not

> more than one packet in flight at a time) in TC7 than it is for any

> stream in TC6-TC0. But intermittent streams in TC6-TC0 also have their

> own worst case guarantees (assuming that 2000 ns is enough to fit one

> TC 7 frame and one frame from the TC6-TC0 range).


Oh and I missed that, TC0-TC6 probably won't work because that gate is
too narrow (12.5us guard band) unless of course you set MAXSDU to a
smaller value. Which would IMHO be the correct thing to do here.

-michael
Vladimir Oltean May 6, 2021, 3:07 p.m. UTC | #21
On Thu, May 06, 2021 at 04:41:51PM +0200, Michael Walle wrote:
> Am 2021-05-06 15:50, schrieb Vladimir Oltean:

> > On Thu, May 06, 2021 at 03:25:07PM +0200, Michael Walle wrote:

> > > Am 2021-05-04 23:33, schrieb Vladimir Oltean:

> > > > [ trimmed the CC list, as this is most likely spam for most people ]

> > > >

> > > > On Tue, May 04, 2021 at 10:23:11PM +0200, Michael Walle wrote:

> > > > > Am 2021-05-04 21:17, schrieb Vladimir Oltean:

> > > > > > On Tue, May 04, 2021 at 09:08:00PM +0200, Michael Walle wrote:

> > > > > > > > > > > As explained in another mail in this thread, all queues are marked as

> > > > > > > > > > > scheduled. So this is actually a no-op, correct? It doesn't matter if

> > > > > > > > > > > it set or not set for now. Dunno why we even care for this bit then.

> > > > > > > > > >

> > > > > > > > > > It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the available

> > > > > > > > > > throughput when set.

> > > > > > > > >

> > > > > > > > > Ahh, I see now. All queues are "scheduled" but the guard band only

> > > > > > > > > applies

> > > > > > > > > for "non-scheduled" -> "scheduled" transitions. So the guard band is

> > > > > > > > > never

> > > > > > > > > applied, right? Is that really what we want?

> > > > > > > >

> > > > > > > > Xiaoliang explained that yes, this is what we want. If the end user

> > > > > > > > wants a guard band they can explicitly add a "sched-entry 00" in the

> > > > > > > > tc-taprio config.

> > > > > > >

> > > > > > > You're disabling the guard band, then. I figured, but isn't that

> > > > > > > surprising for the user? Who else implements taprio? Do they do it in

> > > > > > > the

> > > > > > > same way? I mean this behavior is passed right to the userspace and

> > > > > > > have

> > > > > > > a direct impact on how it is configured. Of course a user can add it

> > > > > > > manually, but I'm not sure that is what we want here. At least it

> > > > > > > needs

> > > > > > > to be documented somewhere. Or maybe it should be a switchable option.

> > > > > > >

> > > > > > > Consider the following:

> > > > > > > sched-entry S 01 25000

> > > > > > > sched-entry S fe 175000

> > > > > > > basetime 0

> > > > > > >

> > > > > > > Doesn't guarantee, that queue 0 is available at the beginning of

> > > > > > > the cycle, in the worst case it takes up to

> > > > > > > <begin of cycle> + ~12.5us until the frame makes it through (given

> > > > > > > gigabit and 1518b frames).

> > > > > > >

> > > > > > > Btw. there are also other implementations which don't need a guard

> > > > > > > band (because they are store-and-forward and count the remaining

> > > > > > > bytes). So yes, using a guard band and scheduling is degrading the

> > > > > > > performance.

> > > > > >

> > > > > > What is surprising for the user, and I mentioned this already in another

> > > > > > thread on this patch, is that the Felix switch overruns the time gate (a

> > > > > > packet taking 2 us to transmit will start transmission even if there is

> > > > > > only 1 us left of its time slot, delaying the packets from the next time

> > > > > > slot by 1 us). I guess that this is why the ALWAYS_GUARD_BAND_SCH_Q bit

> > > > > > exists, as a way to avoid these overruns, but it is a bit of a poor tool

> > > > > > for that job. Anyway, right now we disable it and live with the

> > > > > > overruns.

> > > > >

> > > > > We are talking about the same thing here. Why is that a poor tool?

> > > >

> > > > It is a poor tool because it revolves around the idea of "scheduled

> > > > queues" and "non-scheduled queues".

> > > >

> > > > Consider the following tc-taprio schedule:

> > > >

> > > > 	sched-entry S 81 2000 # TC 7 and 0 open, all others closed

> > > > 	sched-entry S 82 2000 # TC 7 and 1 open, all others closed

> > > > 	sched-entry S 84 2000 # TC 7 and 2 open, all others closed

> > > > 	sched-entry S 88 2000 # TC 7 and 3 open, all others closed

> > > > 	sched-entry S 90 2000 # TC 7 and 4 open, all others closed

> > > > 	sched-entry S a0 2000 # TC 7 and 5 open, all others closed

> > > > 	sched-entry S c0 2000 # TC 7 and 6 open, all others closed

> > > >

> > > > Otherwise said, traffic class 7 should be able to send any time it

> > > > wishes.

> > > 

> > > What is the use case behind that? TC7 (with the highest priority)

> > > may always take precedence over the other TCs, thus what is the point

> > > of having a dedicated window for the others.

> > 

> > Worst case latency is obviously better for an intermittent stream (not

> > more than one packet in flight at a time) in TC7 than it is for any

> > stream in TC6-TC0. But intermittent streams in TC6-TC0 also have their

> > own worst case guarantees (assuming that 2000 ns is enough to fit one

> > TC 7 frame and one frame from the TC6-TC0 range).

> 

> Oh and I missed that, TC0-TC6 probably won't work because that gate is

> too narrow (12.5us guard band) unless of course you set MAXSDU to a

> smaller value. Which would IMHO be the correct thing to do here.


I'm not sure that this is exactly true. I know that I tested once, and
the switch is happy to send a packet as long as the time window is 33 ns
or larger. I can't remember if the ALWAYS_GUARD_BAND_SCH_Q bit was set or
not, and I'm traveling right now, so I don't have an LS1028A board handy
to re-test.
Michael Walle May 6, 2021, 6:28 p.m. UTC | #22
Am 2021-05-06 17:07, schrieb Vladimir Oltean:
> On Thu, May 06, 2021 at 04:41:51PM +0200, Michael Walle wrote:

>> Am 2021-05-06 15:50, schrieb Vladimir Oltean:

>> > On Thu, May 06, 2021 at 03:25:07PM +0200, Michael Walle wrote:

>> > > Am 2021-05-04 23:33, schrieb Vladimir Oltean:

>> > > > [ trimmed the CC list, as this is most likely spam for most people ]

>> > > >

>> > > > On Tue, May 04, 2021 at 10:23:11PM +0200, Michael Walle wrote:

>> > > > > Am 2021-05-04 21:17, schrieb Vladimir Oltean:

>> > > > > > On Tue, May 04, 2021 at 09:08:00PM +0200, Michael Walle wrote:

>> > > > > > > > > > > As explained in another mail in this thread, all queues are marked as

>> > > > > > > > > > > scheduled. So this is actually a no-op, correct? It doesn't matter if

>> > > > > > > > > > > it set or not set for now. Dunno why we even care for this bit then.

>> > > > > > > > > >

>> > > > > > > > > > It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the available

>> > > > > > > > > > throughput when set.

>> > > > > > > > >

>> > > > > > > > > Ahh, I see now. All queues are "scheduled" but the guard band only

>> > > > > > > > > applies

>> > > > > > > > > for "non-scheduled" -> "scheduled" transitions. So the guard band is

>> > > > > > > > > never

>> > > > > > > > > applied, right? Is that really what we want?

>> > > > > > > >

>> > > > > > > > Xiaoliang explained that yes, this is what we want. If the end user

>> > > > > > > > wants a guard band they can explicitly add a "sched-entry 00" in the

>> > > > > > > > tc-taprio config.

>> > > > > > >

>> > > > > > > You're disabling the guard band, then. I figured, but isn't that

>> > > > > > > surprising for the user? Who else implements taprio? Do they do it in

>> > > > > > > the

>> > > > > > > same way? I mean this behavior is passed right to the userspace and

>> > > > > > > have

>> > > > > > > a direct impact on how it is configured. Of course a user can add it

>> > > > > > > manually, but I'm not sure that is what we want here. At least it

>> > > > > > > needs

>> > > > > > > to be documented somewhere. Or maybe it should be a switchable option.

>> > > > > > >

>> > > > > > > Consider the following:

>> > > > > > > sched-entry S 01 25000

>> > > > > > > sched-entry S fe 175000

>> > > > > > > basetime 0

>> > > > > > >

>> > > > > > > Doesn't guarantee, that queue 0 is available at the beginning of

>> > > > > > > the cycle, in the worst case it takes up to

>> > > > > > > <begin of cycle> + ~12.5us until the frame makes it through (given

>> > > > > > > gigabit and 1518b frames).

>> > > > > > >

>> > > > > > > Btw. there are also other implementations which don't need a guard

>> > > > > > > band (because they are store-and-forward and count the remaining

>> > > > > > > bytes). So yes, using a guard band and scheduling is degrading the

>> > > > > > > performance.

>> > > > > >

>> > > > > > What is surprising for the user, and I mentioned this already in another

>> > > > > > thread on this patch, is that the Felix switch overruns the time gate (a

>> > > > > > packet taking 2 us to transmit will start transmission even if there is

>> > > > > > only 1 us left of its time slot, delaying the packets from the next time

>> > > > > > slot by 1 us). I guess that this is why the ALWAYS_GUARD_BAND_SCH_Q bit

>> > > > > > exists, as a way to avoid these overruns, but it is a bit of a poor tool

>> > > > > > for that job. Anyway, right now we disable it and live with the

>> > > > > > overruns.

>> > > > >

>> > > > > We are talking about the same thing here. Why is that a poor tool?

>> > > >

>> > > > It is a poor tool because it revolves around the idea of "scheduled

>> > > > queues" and "non-scheduled queues".

>> > > >

>> > > > Consider the following tc-taprio schedule:

>> > > >

>> > > > 	sched-entry S 81 2000 # TC 7 and 0 open, all others closed

>> > > > 	sched-entry S 82 2000 # TC 7 and 1 open, all others closed

>> > > > 	sched-entry S 84 2000 # TC 7 and 2 open, all others closed

>> > > > 	sched-entry S 88 2000 # TC 7 and 3 open, all others closed

>> > > > 	sched-entry S 90 2000 # TC 7 and 4 open, all others closed

>> > > > 	sched-entry S a0 2000 # TC 7 and 5 open, all others closed

>> > > > 	sched-entry S c0 2000 # TC 7 and 6 open, all others closed

>> > > >

>> > > > Otherwise said, traffic class 7 should be able to send any time it

>> > > > wishes.

>> > >

>> > > What is the use case behind that? TC7 (with the highest priority)

>> > > may always take precedence over the other TCs, thus what is the point

>> > > of having a dedicated window for the others.

>> >

>> > Worst case latency is obviously better for an intermittent stream (not

>> > more than one packet in flight at a time) in TC7 than it is for any

>> > stream in TC6-TC0. But intermittent streams in TC6-TC0 also have their

>> > own worst case guarantees (assuming that 2000 ns is enough to fit one

>> > TC 7 frame and one frame from the TC6-TC0 range).

>> 

>> Oh and I missed that, TC0-TC6 probably won't work because that gate is

>> too narrow (12.5us guard band) unless of course you set MAXSDU to a

>> smaller value. Which would IMHO be the correct thing to do here.

> 

> I'm not sure that this is exactly true. I know that I tested once, and

> the switch is happy to send a packet as long as the time window is 33 ns

> or larger. I can't remember if the ALWAYS_GUARD_BAND_SCH_Q bit was set or

> not, and I'm traveling right now, so I don't have an LS1028A board handy

> to re-test.


I've just double checked with a 5.12 kernel (that is with
ALWAYS_GUARD_BAND_SCH_Q bit still set). TC7 works perfectly fine at
line rate. Frames in TC0-TC6 aren't forwarded.

Clearing that bit manually (ie. devmem 0x1fc20f698 32 0x0; btw the
schedule is on swp1, so this will also prove that bit is global ;))
will make TC0-6 work. But it will also happily forward 1518b frames.

Behavior as expected. Please feel free to confirm, though.

-michael
Xiaoliang Yang May 7, 2021, 7:16 a.m. UTC | #23
Hi Michael,

On 2021-05-06 21:25, Michael Walle <michael@walle.cc> wrote:
> Am 2021-05-04 23:33, schrieb Vladimir Oltean:

> > [ trimmed the CC list, as this is most likely spam for most people ]

> >

> > On Tue, May 04, 2021 at 10:23:11PM +0200, Michael Walle wrote:

> >> Am 2021-05-04 21:17, schrieb Vladimir Oltean:

> >> > On Tue, May 04, 2021 at 09:08:00PM +0200, Michael Walle wrote:

> >> > > > > > > As explained in another mail in this thread, all queues

> >> > > > > > > are marked as scheduled. So this is actually a no-op,

> >> > > > > > > correct? It doesn't matter if it set or not set for now. Dunno why

> we even care for this bit then.

> >> > > > > >

> >> > > > > > It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the

> >> > > > > > available throughput when set.

> >> > > > >

> >> > > > > Ahh, I see now. All queues are "scheduled" but the guard band

> >> > > > > only applies for "non-scheduled" -> "scheduled" transitions.

> >> > > > > So the guard band is never applied, right? Is that really

> >> > > > > what we want?

> >> > > >

> >> > > > Xiaoliang explained that yes, this is what we want. If the end

> >> > > > user wants a guard band they can explicitly add a "sched-entry

> >> > > > 00" in the tc-taprio config.

> >> > >

> >> > > You're disabling the guard band, then. I figured, but isn't that

> >> > > surprising for the user? Who else implements taprio? Do they do it

> >> > > in the same way? I mean this behavior is passed right to the

> >> > > userspace and have a direct impact on how it is configured. Of

> >> > > course a user can add it manually, but I'm not sure that is what

> >> > > we want here. At least it needs to be documented somewhere. Or

> >> > > maybe it should be a switchable option.

> >> > >

> >> > > Consider the following:

> >> > > sched-entry S 01 25000

> >> > > sched-entry S fe 175000

> >> > > basetime 0

> >> > >

> >> > > Doesn't guarantee, that queue 0 is available at the beginning of

> >> > > the cycle, in the worst case it takes up to <begin of cycle> +

> >> > > ~12.5us until the frame makes it through (given gigabit and 1518b

> >> > > frames).

> >> > >

> >> > > Btw. there are also other implementations which don't need a

> >> > > guard band (because they are store-and-forward and count the

> >> > > remaining bytes). So yes, using a guard band and scheduling is

> >> > > degrading the performance.

> >> >

> >> > What is surprising for the user, and I mentioned this already in

> >> > another thread on this patch, is that the Felix switch overruns the

> >> > time gate (a packet taking 2 us to transmit will start transmission

> >> > even if there is only 1 us left of its time slot, delaying the

> >> > packets from the next time slot by 1 us). I guess that this is why

> >> > the ALWAYS_GUARD_BAND_SCH_Q bit exists, as a way to avoid these

> >> > overruns, but it is a bit of a poor tool for that job. Anyway,

> >> > right now we disable it and live with the overruns.

> >>

> >> We are talking about the same thing here. Why is that a poor tool?

> >

> > It is a poor tool because it revolves around the idea of "scheduled

> > queues" and "non-scheduled queues".

> >

> > Consider the following tc-taprio schedule:

> >

> >       sched-entry S 81 2000 # TC 7 and 0 open, all others closed

> >       sched-entry S 82 2000 # TC 7 and 1 open, all others closed

> >       sched-entry S 84 2000 # TC 7 and 2 open, all others closed

> >       sched-entry S 88 2000 # TC 7 and 3 open, all others closed

> >       sched-entry S 90 2000 # TC 7 and 4 open, all others closed

> >       sched-entry S a0 2000 # TC 7 and 5 open, all others closed

> >       sched-entry S c0 2000 # TC 7 and 6 open, all others closed

> >

> > Otherwise said, traffic class 7 should be able to send any time it

> > wishes.

> 

> What is the use case behind that? TC7 (with the highest priority) may always

> take precedence over the other TCs, thus what is the point of having a dedicated

> window for the others.

> 

> Anyway, I've tried it and there are no hiccups. I've measured the delta

> between the start of successive packets and they are always ~12370ns for a

> 1518b frame. TC7 is open all the time, which makes sense. It only happens if

> you actually close the gate, eg. you have a sched-entry where a TC7 bit is not

> set. In this case, I can see a difference between ALWAYS_GUARD_BAND_SCH_Q

> set and not set. If it is set, there is up to a ~12.5us delay added (of course it

> depends on when the former frame was scheduled).

> 

> It seems that also needs to be 1->0 transition.

> 

> You've already mentioned that the switch violates the Qbv standard.

> What makes you think so? IMHO before that patch, it wasn't violated.

> Now it likely is (still have to confirm that). How can this be reasonable?

> 

> If you have a look at the initial commit message, it is about making it possible

> to have a smaller gate window, but that is not possible because of the current

> guard band of ~12.5us. It seems to be a shortcut for not having the MAXSDU

> (and thus the length of the guard band) configurable. Yes (static) guard bands

> will have a performance impact, also described in [1]. You are trading the

> correctness of the TAS for performance. And it is the sole purpose of Qbv to

> have a deterministic way (in terms of timing) of sending the frames.

> 

> And telling the user, hey, we know we violate the Qbv standard, please insert

> the guard bands yourself if you really need them is not a real solution. As

> already mentioned, (1) it is not documented anywhere, (2) can't be shared

> among other switches (unless they do the same workaround) and (3) what am

> I supposed to do for TSN compliance testing. Modifying the schedule that is

> about to be checked (and thus given by the compliance suite)?

>

I disabled the ALWAYS_GUARD_BAND_SCH_Q bit because otherwise each gate control list entry needs to reserve a guard band slot, which affects performance. The user does not need to set a guard band for every queue transition. For example, with "sched-entry S 01 2000 sched-entry S fe 98000", queue 0 carries protected traffic with smaller frames, so there is no need to reserve a guard band during queue 0's open time. The user can add a guard band before the protected traffic explicitly: "sched-entry S 00 25000 sched-entry S 01 2000 sched-entry S fe 73000"

I checked other devices such as ENETC: they can calculate how long a frame's transmission will take and decide whether to transmit it before the gate closes. The VSC9959 does not have this hardware function. If we implement guard bands on each queue, we have to reserve a 12.5us guard band in each GCL entry, even if the queue only sends small packets. This confuses customers.
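The ~12.5us figure quoted throughout this thread can be sanity-checked with quick arithmetic: it is roughly the wire time of one maximum-size frame at 1 Gbps (the exact overhead accounting below is my assumption):

```shell
# Wire time of a worst-case frame at 1 Gbps (1 ns per bit):
# 1518 B max frame + 8 B preamble/SFD + 12 B inter-frame gap.
frame_bytes=$((1518 + 8 + 12))   # 1538 bytes on the wire
guard_ns=$((frame_bytes * 8))    # 12304 ns, i.e. roughly 12.5 us
echo "${guard_ns} ns"
```

This matches the ~12370ns deltas Michael measured between successive 1518b frames.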

Actually, I'm not sure whether this violates the Qbv standard. I looked up the Qbv spec and found that it only introduces the guard band before a protected window (Annex Q (informative), Traffic scheduling). It allows the schedule to be designed to accommodate the intended pattern of transmission without overrunning the next gate-close event for the traffic classes concerned.

Thanks,
Xiaoliang
Michael Walle May 7, 2021, 7:35 a.m. UTC | #24
Hi Xiaoliang,

Am 2021-05-07 09:16, schrieb Xiaoliang Yang:
> On 2021-05-06 21:25, Michael Walle <michael@walle.cc> wrote:
>> Am 2021-05-04 23:33, schrieb Vladimir Oltean:
>> > [ trimmed the CC list, as this is most likely spam for most people ]
>> >
>> > On Tue, May 04, 2021 at 10:23:11PM +0200, Michael Walle wrote:
>> >> Am 2021-05-04 21:17, schrieb Vladimir Oltean:
>> >> > On Tue, May 04, 2021 at 09:08:00PM +0200, Michael Walle wrote:
>> >> > > > > > > As explained in another mail in this thread, all queues
>> >> > > > > > > are marked as scheduled. So this is actually a no-op,
>> >> > > > > > > correct? It doesn't matter if it is set or not set for now.
>> >> > > > > > > Dunno why we even care for this bit then.
>> >> > > > > >
>> >> > > > > > It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the
>> >> > > > > > available throughput when set.
>> >> > > > >
>> >> > > > > Ahh, I see now. All queues are "scheduled" but the guard band
>> >> > > > > only applies for "non-scheduled" -> "scheduled" transitions.
>> >> > > > > So the guard band is never applied, right? Is that really
>> >> > > > > what we want?
>> >> > > >
>> >> > > > Xiaoliang explained that yes, this is what we want. If the end
>> >> > > > user wants a guard band they can explicitly add a "sched-entry
>> >> > > > 00" in the tc-taprio config.
>> >> > >
>> >> > > You're disabling the guard band, then. I figured, but isn't that
>> >> > > surprising for the user? Who else implements taprio? Do they do it
>> >> > > in the same way? I mean this behavior is passed right to the
>> >> > > userspace and has a direct impact on how it is configured. Of
>> >> > > course a user can add it manually, but I'm not sure that is what
>> >> > > we want here. At least it needs to be documented somewhere. Or
>> >> > > maybe it should be a switchable option.
>> >> > >
>> >> > > Consider the following:
>> >> > > sched-entry S 01 25000
>> >> > > sched-entry S fe 175000
>> >> > > basetime 0
>> >> > >
>> >> > > Doesn't guarantee that queue 0 is available at the beginning of
>> >> > > the cycle; in the worst case it takes up to <begin of cycle> +
>> >> > > ~12.5us until the frame makes it through (given gigabit and 1518b
>> >> > > frames).
>> >> > >
>> >> > > Btw. there are also other implementations which don't need a
>> >> > > guard band (because they are store-and-forward and count the
>> >> > > remaining bytes). So yes, using a guard band and scheduling is
>> >> > > degrading the performance.
>> >> >
>> >> > What is surprising for the user, and I mentioned this already in
>> >> > another thread on this patch, is that the Felix switch overruns the
>> >> > time gate (a packet taking 2 us to transmit will start transmission
>> >> > even if there is only 1 us left of its time slot, delaying the
>> >> > packets from the next time slot by 1 us). I guess that this is why
>> >> > the ALWAYS_GUARD_BAND_SCH_Q bit exists, as a way to avoid these
>> >> > overruns, but it is a bit of a poor tool for that job. Anyway,
>> >> > right now we disable it and live with the overruns.
>> >>
>> >> We are talking about the same thing here. Why is that a poor tool?
>> >
>> > It is a poor tool because it revolves around the idea of "scheduled
>> > queues" and "non-scheduled queues".
>> >
>> > Consider the following tc-taprio schedule:
>> >
>> >       sched-entry S 81 2000 # TC 7 and 0 open, all others closed
>> >       sched-entry S 82 2000 # TC 7 and 1 open, all others closed
>> >       sched-entry S 84 2000 # TC 7 and 2 open, all others closed
>> >       sched-entry S 88 2000 # TC 7 and 3 open, all others closed
>> >       sched-entry S 90 2000 # TC 7 and 4 open, all others closed
>> >       sched-entry S a0 2000 # TC 7 and 5 open, all others closed
>> >       sched-entry S c0 2000 # TC 7 and 6 open, all others closed
>> >
>> > Otherwise said, traffic class 7 should be able to send any time it
>> > wishes.
>>
>> What is the use case behind that? TC7 (with the highest priority) may
>> always take precedence over the other TCs, thus what is the point of
>> having a dedicated window for the others.
>>
>> Anyway, I've tried it and there are no hiccups. I've measured the delta
>> between the start of successive packets and they are always ~12370ns
>> for a 1518b frame. TC7 is open all the time, which makes sense. It only
>> happens if you actually close the gate, e.g. you have a sched-entry
>> where the TC7 bit is not set. In this case, I can see a difference
>> between ALWAYS_GUARD_BAND_SCH_Q set and not set. If it is set, there is
>> up to a ~12.5us delay added (of course it depends on when the former
>> frame was scheduled).
>>
>> It seems that also needs to be a 1->0 transition.
>>
>> You've already mentioned that the switch violates the Qbv standard.
>> What makes you think so? IMHO before that patch, it wasn't violated.
>> Now it likely is (still have to confirm that). How can this be
>> reasonable?
>>
>> If you have a look at the initial commit message, it is about making it
>> possible to have a smaller gate window, but that is not possible because
>> of the current guard band of ~12.5us. It seems to be a shortcut for not
>> having the MAXSDU (and thus the length of the guard band) configurable.
>> Yes, (static) guard bands will have a performance impact, also described
>> in [1]. You are trading the correctness of the TAS for performance. And
>> it is the sole purpose of Qbv to have a deterministic way (in terms of
>> timing) of sending the frames.
>>
>> And telling the user, hey, we know we violate the Qbv standard, please
>> insert the guard bands yourself if you really need them, is not a real
>> solution. As already mentioned, (1) it is not documented anywhere, (2)
>> it can't be shared among other switches (unless they do the same
>> workaround) and (3) what am I supposed to do for TSN compliance testing?
>> Modifying the schedule that is about to be checked (and thus given by
>> the compliance suite)?
>>

> I disable the always guard band bit because each gate control list
> needs to reserve a guard band slot, which affects performance. The
> user does not need to set a guard band for each queue transmission.
> For example, "sched-entry S 01 2000 sched-entry S fe 98000". Queue 0
> is protected traffic and has smaller frames, so there is no need to
> reserve a guard band during the open time of queue 0. The user can add
> the following guard band before protected traffic: "sched-entry S 00
> 25000 sched-entry S 01 2000 sched-entry S fe 73000"


Again, you're passing the handling of the guard band to the user,
which is an implementation detail for this switch (unless there is
a new switch for it on the qdisc IMHO). And (1), (2) and (3) from
above are still valid.

Consider the entry
  sched-entry S 01 2000
  sched-entry S 02 20000

A user assumes that TC0 is open for 2us. But with your change it is
basically open for 2us + 12.5us. And even worse, it is not
deterministic. A frame in the subsequent queue (i.e. TC1) can be
scheduled anywhere between 0us and 12.5us after opening the gate,
depending on whether there is still a frame transmitting on TC0.

> I checked other devices such as ENETC and it can calculate how long
> the frame transmission will take and determine whether to transmit
> before the gate is closed. The VSC9959 device does not have this
> hardware function. If we implement guard bands on each queue, we need
> to reserve a 12.5us guard band for each GCL, even if it only needs to
> send a small packet. This confuses customers.


How about getting it right and working on how we can set the MAXSDU
per queue and thus making the guard band smaller?

> actually, I'm not sure if this will violate the Qbv standard. I looked
> up the Qbv standard spec, and found it only introduce the guard band
> before protected window (Annex Q (informative)Traffic scheduling). It
> allows to design the schedule to accommodate the intended pattern of
> transmission without overrunning the next gate-close event for the
> traffic classes concerned.


Vladimir already quoted "IEEE 802.1Q-2018 clause 8.6.8.4". I didn't
check it, though.

A static guard band is one of the options you have to fulfill that.
Granted, it is not that efficient, but it is how the switch handles
it.

-michael
Xiaoliang Yang May 7, 2021, 11:09 a.m. UTC | #25
Hi Michael,

On 2021-05-07 15:35, Michael Walle <michael@walle.cc> wrote:
> Am 2021-05-07 09:16, schrieb Xiaoliang Yang:

> > [ deeply quoted earlier thread trimmed; it repeats, verbatim, the messages quoted in full above ]
>

> Again, you're passing the handling of the guard band to the user, which is an
> implementation detail for this switch (unless there is a new switch for it on the
> qdisc IMHO). And (1), (2) and (3) from above are still valid.
>
> Consider the entry
>   sched-entry S 01 2000
>   sched-entry S 02 20000
>
> A user assumes that TC0 is open for 2us. But with your change it is basically
> open for 2us + 12.5us. And even worse, it is not deterministic. A frame in the
> subsequent queue (i.e. TC1) can be scheduled anywhere between 0us and
> 12.5us after opening the gate, depending on whether there is still a frame
> transmitting on TC0.
>

After my change, the user needs to add a GCL entry first: "sched-entry S 00 12500".
Before the change, your example would need to be set as "sched-entry S 01 14500 sched-entry S 02 20000": TC0 is open for 2us, and TC1 is only open for 20us - 12.5us. That also requires the user to add the guard band time by hand.
So even without this patch, the GCL entries still have to be set differently from other devices.
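To make the comparison concrete, the two equivalent schedules look like this (a sketch based on the entries discussed above; the exact split is mine):

```shell
# With this patch (guard band disabled), the guard band is an explicit,
# user-visible entry:
#   sched-entry S 00 12500   # all gates closed: explicit guard band
#   sched-entry S 01 2000    # TC0 open for the full 2us
#   sched-entry S 02 20000   # TC1 open
#
# Without the patch (guard band always applied), the guard band hides
# inside the preceding entries, so intervals must be padded by hand:
#   sched-entry S 01 14500   # TC0 open 2us + ~12.5us implicit guard band
#   sched-entry S 02 20000   # TC1 open, but only ~7.5us of it is usable
```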

> > I checked other devices such as ENETC and it can calculate how long
> > the frame transmission will take and determine whether to transmit
> > before the gate is closed. The VSC9959 device does not have this
> > hardware function. If we implement guard bands on each queue, we need
> > to reserve a 12.5us guard band for each GCL, even if it only needs to
> > send a small packet. This confuses customers.
>
> How about getting it right and working on how we can set the MAXSDU
> per queue and thus making the guard band smaller?
>

Of course, if we could set a maxSDU for each queue, then users could set the guard band of each queue. I added this patch to let users set the guard band by adding a GCL entry, which is another way to make the guard band configurable. But there is no way to set a per-queue maxSDU now. Do you have any ideas on how to set the MAXSDU per queue?
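As an aside, mainline taprio did eventually grow a per-traffic-class max-sdu knob, though long after this thread; the syntax below is therefore only an illustrative sketch of such an interface, not something available at the time of writing (interface name and mappings are also assumed):

```shell
# Hypothetical per-TC max SDU for taprio: limiting TC0 to small frames
# would let the hardware shrink TC0's guard band accordingly.
tc qdisc replace dev swp0 parent root taprio \
    num_tc 8 \
    map 0 1 2 3 4 5 6 7 0 0 0 0 0 0 0 0 \
    queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \
    base-time 0 \
    sched-entry S 01 2000 \
    sched-entry S fe 98000 \
    max-sdu 128 0 0 0 0 0 0 0 \
    flags 0x2
```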

> > actually, I'm not sure if this will violate the Qbv standard. I looked
> > up the Qbv standard spec, and found it only introduce the guard band
> > before protected window (Annex Q (informative)Traffic scheduling). It
> > allows to design the schedule to accommodate the intended pattern of
> > transmission without overrunning the next gate-close event for the
> > traffic classes concerned.
>
> Vladimir already quoted "IEEE 802.1Q-2018 clause 8.6.8.4". I didn't
> check it, though.
>
> A static guard band is one of the options you have to fulfill that.
> Granted, it is not that efficient, but it is how the switch handles
> it.
>

I'm still not sure whether a guard band is required for each queue. Maybe this patch needs to be reverted if it is. Then there would be a fixed, non-configurable guard band for each queue, and the user would need to factor the guard band into the gate open time every time they design a schedule. Maybe it's better to revert it and add a per-queue MAXSDU setting.

Thanks,
Xiaoliang
Vladimir Oltean May 7, 2021, 12:19 p.m. UTC | #26
On Fri, May 07, 2021 at 11:09:24AM +0000, Xiaoliang Yang wrote:
> Hi Michael,

> 

> On 2021-05-07 15:35, Michael Walle <michael@walle.cc> wrote:

> > [ deeply quoted earlier thread trimmed; it repeats, verbatim, the messages quoted in full above ]
> >

> After my change, user need to add a GCL at first: "sched-entry S 00 12500".
> Before the change, your example need to be set as "sched-entry S 01
> 14500 sched-entry S 02 20000", TC0 is open for 2us, and TC1 is only
> open for 20us-12.5us. This also need to add guard band time for user.
> So if we do not have this patch, GCL entry will also have a different
> set with other devices.
>
> > > I checked other devices such as ENETC and it can calculate how long
> > > the frame transmission will take and determine whether to transmit
> > > before the gate is closed. The VSC9959 device does not have this
> > > hardware function. If we implement guard bands on each queue, we need
> > > to reserve a 12.5us guard band for each GCL, even if it only needs to
> > > send a small packet. This confuses customers.
> >
> > How about getting it right and working on how we can set the MAXSDU per
> > queue and thus making the guard band smaller?
> >
> Of course, if we can set maxSDU for each queue, then users can set the
> guard band of each queue. I added this patch to set the guard band by
> adding GCL, which is another way to make the guard band configurable
> for users. But there is no way to set per queue maxSDU now. Do you
> have any ideas on how to set the MAXSDU per queue?
>
> > > actually, I'm not sure if this will violate the Qbv standard. I looked
> > > up the Qbv standard spec, and found it only introduce the guard band
> > > before protected window (Annex Q (informative)Traffic scheduling). It
> > > allows to design the schedule to accommodate the intended pattern of
> > > transmission without overrunning the next gate-close event for the
> > > traffic classes concerned.
> >
> > Vladimir already quoted "IEEE 802.1Q-2018 clause 8.6.8.4". I didn't
> > check it, though.
> >


That clause says:

In addition to the other checks carried out by the transmission
selection algorithm, a frame on a traffic class queue is not available
for transmission [as required for tests (a) and (b) in 8.6.8] if the
transmission gate is in the closed state or if there is insufficient
time available to transmit the entirety of that frame before the next
gate-close event (3.97) associated with that queue. A per-traffic class
counter, TransmissionOverrun (12.29.1.1.2), is incremented if the
implementation detects that a frame from a given queue is still being
transmitted by the MAC when the gate-close event for that queue occurs.

NOTE 1—It is assumed that the implementation has knowledge of the
transmission overheads that are involved in transmitting a frame on a
given Port and can therefore determine how long the transmission of a
frame will take. However, there can be reasons why the frame size, and
therefore the length of time needed for its transmission, is unknown;
for example, where cut-through is supported, or where frame preemption
is supported and there is no way of telling in advance how many times a
given frame will be preempted before its transmission is complete. It is
desirable that the schedule for such traffic is designed to accommodate
the intended pattern of transmission without overrunning the next
gate-close event for the traffic classes concerned.

NOTE 2— It is assumed that the implementation can determine the time at
which the next gate-close event will occur from the sequence of gate
events. In the normal case, this can be achieved by inspecting the
sequence of gate operations in OperControlList (8.6.9.4.19) and
associated variables. However, when a new configuration is about to be
installed, it would be necessary to inspect the contents of both the
OperControlList and the AdminControlList (8.6.9.4.2) and associated
variables in order to determine the time of the next gate-close event,
as the gating cycle for the new configuration may differ in size and
phasing from the old cycle.

A per-traffic class queue queueMaxSDU parameter defines the maximum
service data unit size for each queue; frames that exceed queueMaxSDU
are discarded [item b8) in 6.5.2]. The value of queueMaxSDU for each
queue is configurable by management (12.29); its default value is the
maximum SDU size supported by the MAC procedures employed on the LAN to
which the frame is to be relayed [item b3) in 6.5.2].

NOTE 3—The use of PFC is likely to interfere with a traffic schedule,
because PFC is transmitted by a higher layer entity (see Clause 36).
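For illustration only (this is not driver code), the availability check described in the first paragraph of the clause can be sketched as follows; the link speed and per-frame overhead constants are assumptions:

```shell
# A frame is "available for transmission" only if it can be transmitted
# in its entirety before the next gate-close event for its queue.
frame_fits() {
    local frame_bytes=$1 now_ns=$2 gate_close_ns=$3
    # 20 B preamble/SFD + inter-frame gap; 1 ns per bit at 1 Gbps.
    local tx_ns=$(( (frame_bytes + 20) * 8 ))
    [ $((now_ns + tx_ns)) -le "$gate_close_ns" ]
}

frame_fits 64 0 12000 && echo "64B frame: transmit"
frame_fits 1518 0 12000 || echo "1518B frame: held back (would overrun)"
```

Hardware like the ENETC performs this per-packet check itself; the VSC9959 does not, which is what the guard band compensates for.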

> > A static guard band is one of the options you have to fulfill that.
> > Granted, it is not that efficient, but it is how the switch handles it.
> >
> I'm still not sure if guard band is required for each queue. Maybe
> this patch need revert if it's required. Then there will be a fixed
> non-configurable guard band for each queue, and the user needs to
> calculate the guard band into gate opening time every time when
> designing a schedule. Maybe it's better to revert it and add MAXSDU
> per queue set.


I think we can universally agree on the fact that overruns at the end of
the time slot should be avoided: if there isn't enough time to send it
within your time slot, don't send it (unless you can count it, but none
of the devices I know count the overruns). Devices like the ENETC handle
this on a per-packet basis and shouldn't require any further
configuration. Devices like Felix need the per-queue max SDU from the
user - if that isn't specified in the netlink message they'll have to
default to the interface's MTU.
Devices like the SJA1105 are more interesting, they have no knob
whatsoever to avoid overruns. Additionally, there is a user-visible
restriction there that two gate events on different ports can never fire
at the same time. So, the option of automatically having the driver
shorten the GCL entries and add MTU-sized guard bands by itself kind of
falls flat on its face - it will make for a pretty unusable device,
considering that the user already has to calculate the lengths and
base-time offsets properly in order to avoid gates opening/closing at
the same time.
On the other hand, I do understand the need for uniform behavior (or at
least predictable, if we can't have uniformity), I'm just not sure what
is, realistically speaking, our best option. As outrageous as it sounds
to Michael, I think we shouldn't completely exclude the option that the
TSN compliance suite adapts itself to real-life hardware; that is by far
the option with the fewest limitations, all things considered.
Michael Walle May 7, 2021, 12:19 p.m. UTC | #27
Hi Xiaoliang,

Am 2021-05-07 13:09, schrieb Xiaoliang Yang:
> On 2021-05-07 15:35, Michael Walle <michael@walle.cc> wrote:

>> Am 2021-05-07 09:16, schrieb Xiaoliang Yang:

>> > On 2021-05-06 21:25, Michael Walle <michael@walle.cc> wrote:

>> >> Am 2021-05-04 23:33, schrieb Vladimir Oltean:

>> >> > [ trimmed the CC list, as this is most likely spam for most people

>> >> > ]

>> >> >

>> >> > On Tue, May 04, 2021 at 10:23:11PM +0200, Michael Walle wrote:

>> >> >> Am 2021-05-04 21:17, schrieb Vladimir Oltean:

>> >> >> > On Tue, May 04, 2021 at 09:08:00PM +0200, Michael Walle wrote:

>> >> >> > > > > > > As explained in another mail in this thread, all

>> >> >> > > > > > > queues are marked as scheduled. So this is actually a

>> >> >> > > > > > > no-op, correct? It doesn't matter if it set or not set

>> >> >> > > > > > > for now. Dunno why

>> >> we even care for this bit then.

>> >> >> > > > > >

>> >> >> > > > > > It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the

>> >> >> > > > > > available throughput when set.

>> >> >> > > > >

>> >> >> > > > > Ahh, I see now. All queues are "scheduled" but the guard

>> >> >> > > > > band only applies for "non-scheduled" -> "scheduled" transitions.

>> >> >> > > > > So the guard band is never applied, right? Is that really

>> >> >> > > > > what we want?

>> >> >> > > >

>> >> >> > > > Xiaoliang explained that yes, this is what we want. If the

>> >> >> > > > end user wants a guard band they can explicitly add a

>> >> >> > > > "sched-entry 00" in the tc-taprio config.

>> >> >> > >

>> >> >> > > You're disabling the guard band, then. I figured, but isn't

>> >> >> > > that surprising for the user? Who else implements taprio? Do

>> >> >> > > they do it in the same way? I mean this behavior is passed

>> >> >> > > right to the userspace and has a direct impact on how it is

>> >> >> > > configured. Of course a user can add it manually, but I'm not

>> >> >> > > sure that is what we want here. At least it needs to be

>> >> >> > > documented somewhere. Or maybe it should be a switchable option.

>> >> >> > >

>> >> >> > > Consider the following:

>> >> >> > > sched-entry S 01 25000

>> >> >> > > sched-entry S fe 175000

>> >> >> > > basetime 0

>> >> >> > >

>> >> >> > > Doesn't guarantee, that queue 0 is available at the beginning

>> >> >> > > of the cycle, in the worst case it takes up to <begin of

>> >> >> > > cycle> + ~12.5us until the frame makes it through (given

>> >> >> > > gigabit and 1518b frames).

>> >> >> > >

>> >> >> > > Btw. there are also other implementations which don't need a

>> >> >> > > guard band (because they are store-and-forward and count the

>> >> >> > > remaining bytes). So yes, using a guard band and scheduling is

>> >> >> > > degrading the performance.

>> >> >> >

>> >> >> > What is surprising for the user, and I mentioned this already in

>> >> >> > another thread on this patch, is that the Felix switch overruns

>> >> >> > the time gate (a packet taking 2 us to transmit will start

>> >> >> > transmission even if there is only 1 us left of its time slot,

>> >> >> > delaying the packets from the next time slot by 1 us). I guess

>> >> >> > that this is why the ALWAYS_GUARD_BAND_SCH_Q bit exists, as a

>> >> >> > way to avoid these overruns, but it is a bit of a poor tool for

>> >> >> > that job. Anyway, right now we disable it and live with the overruns.

>> >> >>

>> >> >> We are talking about the same thing here. Why is that a poor tool?

>> >> >

>> >> > It is a poor tool because it revolves around the idea of "scheduled

>> >> > queues" and "non-scheduled queues".

>> >> >

>> >> > Consider the following tc-taprio schedule:

>> >> >

>> >> >       sched-entry S 81 2000 # TC 7 and 0 open, all others closed

>> >> >       sched-entry S 82 2000 # TC 7 and 1 open, all others closed

>> >> >       sched-entry S 84 2000 # TC 7 and 2 open, all others closed

>> >> >       sched-entry S 88 2000 # TC 7 and 3 open, all others closed

>> >> >       sched-entry S 90 2000 # TC 7 and 4 open, all others closed

>> >> >       sched-entry S a0 2000 # TC 7 and 5 open, all others closed

>> >> >       sched-entry S c0 2000 # TC 7 and 6 open, all others closed

>> >> >

>> >> > Otherwise said, traffic class 7 should be able to send any time it

>> >> > wishes.

>> >>

>> >> What is the use case behind that? TC7 (with the highest priority) may

>> >> always take precedence of the other TCs, thus what is the point of

>> >> having a dedicated window for the others.

>> >>

>> >> Anyway, I've tried it and there are no hiccups. I've measured the

>> >> delta between the start of successive packets and they are always

>> >> ~12370ns for a 1518b frame. TC7 is open all the time, which makes

>> >> sense. It only happens if you actually close the gate, eg. you have a

>> >> sched-entry where a TC7 bit is not set. In this case, I can see a

>> >> difference between ALWAYS_GUARD_BAND_SCH_Q set and not set. If it is

>> >> set, there is up to a ~12.5us delay added (of course it depends on

>> >> when the former frame was scheduled).

>> >>

>> >> It seems that it also needs to be a 1->0 transition.

>> >>

>> >> You've already mentioned that the switch violates the Qbv standard.

>> >> What makes you think so? IMHO before that patch, it wasn't violated.

>> >> Now it likely is (still have to confirm that). How can this be

>> >> reasonable?

>> >>

>> >> If you have a look at the initial commit message, it is about making

>> >> it possible to have a smaller gate window, but that is not possible

>> >> because of the current guard band of ~12.5us. It seems to be a

>> >> shortcut for not having the MAXSDU (and thus the length of the guard

>> >> band) configurable. Yes (static) guard bands will have a performance

>> >> impact, also described in [1]. You are trading the correctness of the

>> >> TAS for performance. And it is the sole purpose of Qbv to have a

>> >> deterministic way (in terms of timing) of sending the frames.

>> >>

>> >> And telling the user, hey, we know we violate the Qbv standard,

>> >> please insert the guard bands yourself if you really need them is not

>> >> a real solution. As already mentioned, (1) it is not documented

>> >> anywhere, (2) can't be shared among other switches (unless they do

>> >> the same workaround) and (3) what am I supposed to do for TSN

>> >> compliance testing. Modifying the schedule that is about to be

>> >> checked (and thus given by the compliance suite)?

>> >>

>> > I disable the always guard band bit because each gate control list

>> > needs to reserve a guard band slot, which affects performance. The

>> > user does not need to set a guard band for each queue transmission.

>> > For example, "sched-entry S 01 2000 sched-entry S fe 98000". Queue 0

>> > is protected traffic and has smaller frames, so there is no need to

>> > reserve a guard band during the open time of queue 0. The user can add

>> > the following guard band before protected traffic: "sched-entry S 00

>> > 25000 sched-entry S 01 2000 sched-entry S fe 73000"

>> 

>> Again, you're passing the handling of the guard band to the user,
>> which is an implementation detail for this switch (unless there is a
>> new switch for it on the qdisc IMHO). And (1), (2) and (3) from above
>> are still valid.

>> 

>> Consider the entry

>>   sched-entry S 01 2000

>>   sched-entry S 02 20000

>> 

>> A user assumes that TC0 is open for 2us. But with your change it is
>> basically open for 2us + 12.5us. And even worse, it is not
>> deterministic. A frame in the subsequent queue (ie TC1) can be
>> scheduled anywhere between 0us and 12.5us after opening the gate,
>> depending on whether there is still a frame transmitting on TC0.

>> 

> After my change, the user needs to add a GCL entry first: "sched-entry
> S 00 12500".


Yes, and this is not something we want here, because it is the duty of
the hardware scheduler to ensure the gates aren't overrun.

> Before the change, your example needs to be set as "sched-entry S 01
> 14500 sched-entry S 02 20000"; TC0 is open for 2us, and TC1 is only
> open for 20us - 12.5us.


We have to differentiate two things here:
  (1) with ALWAYS_GUARD_BAND_SCH_Q bit set, this example doesn't even
      work unless you also set MAXSDU to a smaller value. But this is
      expected here.
  (2) with "sched-entry S 01 14500 sched-entry S 02 20000", it is not
      correct that TC0 is only open for 2us, it is open for _up to_
      14.5us. With the current MAXSDU setting, the _beginning_ of the
      transmission has to be within 0us .. 2us. That is, if you send
      a 1518b frame starting at (relative offset) 1us, it will end
      at 13.5us.

> This also needs the user to add guard band time.
> So if we do not have this patch, the GCL entries will also be set
> differently from other devices.


I don't think so, because that is the expected behavior.

As already mentioned above, but I'll repeat it here: this patch breaks
the assumption that if a window is open for only 2us, only frames < 2us
can be sent. With this patch, the switch happily forwards max-sized
frames, which take up to 12.5us (at gigabit) even if the gate is only
open for 2us. E.g. the IXIA compliance suite has a dedicated test for
it: "Test qbv.c.1.7 - Testing rejection of frame due to insufficient
window time".

But I'd guess this should be common sense: you cannot send a frame
bigger than the window.

>> > I checked other devices such as ENETC and it can calculate how long
>> > the frame transmission will take and determine whether to transmit
>> > before the gate is closed. The VSC9959 device does not have this
>> > hardware function. If we implement guard bands on each queue, we need
>> > to reserve a 12.5us guard band for each GCL, even if it only needs to
>> > send a small packet. This confuses customers.

>> 

>> How about getting it right and working on how we can set the MAXSDU
>> per queue and thus making the guard band smaller?

> Of course, if we can set maxSDU for each queue, then users can set the
> guard band of each queue. I added this patch to set the guard band by
> adding GCL, which is another way to make the guard band configurable
> for users. But there is no way to set per queue maxSDU now. Do you
> have any ideas on how to set the MAXSDU per queue?


I understand your problem, but this is not the way to go here; you can't
just change the schedule. Unless of course there were a switch for
the qdisc which enables/disables the guard band (and it is enabled by
default). I'd presume every solution will involve adding some parameters
to the taprio qdisc.

>> > Actually, I'm not sure if this will violate the Qbv standard. I looked
>> > up the Qbv standard spec, and found it only introduces the guard band
>> > before the protected window (Annex Q (informative) Traffic scheduling).
>> > It allows designing the schedule to accommodate the intended pattern of
>> > transmission without overrunning the next gate-close event for the
>> > traffic classes concerned.

>> 

>> Vladimir already quoted "IEEE 802.1Q-2018 clause 8.6.8.4". I didn't
>> check it, though.

>> 

>> A static guard band is one of the options you have to fulfill that.
>> Granted, it is not that efficient, but it is how the switch handles it.

>> 

> I'm still not sure if guard band is required for each queue. Maybe
> this patch needs to be reverted if it's required. Then there will be a
> fixed non-configurable guard band for each queue, and the user needs to
> calculate the guard band into gate opening time every time when
> designing a schedule. Maybe it's better to revert it and add a
> per-queue MAXSDU setting.


See above.

-michael
Michael Walle May 7, 2021, 12:43 p.m. UTC | #28
Am 2021-05-07 14:19, schrieb Vladimir Oltean:
> On Fri, May 07, 2021 at 11:09:24AM +0000, Xiaoliang Yang wrote:
> [...]
> On the other hand, I do understand the need for uniform behavior (or at
> least predictable, if we can't have uniformity), I'm just not sure what
> is, realistically speaking, our best option. As outrageous as it sounds
> to Michael, I think we shouldn't completely exclude the option that the
> TSN compliance suite adapts itself to real-life hardware; that is by far
> the option with the fewest limitations, all things considered.


Haha, I kinda hate compliance suites anyway, so no offence taken ;)

IMHO for the devices that support it, we should have that uniform
behavior. If the SJA1105 is inherently broken in this regard, then I
agree there is no need to make it behave the same in all circumstances.
Also the SJA1105 was one of the first devices which supported Qbv, no?
I'd presume newer devices will get better at the scheduling.

Btw. because the number of entries in the GCL is likely constrained (and
some customers always ask how many they can use), it should be taken into
consideration when the driver automatically adds some entries.

-michael
Michael Walle June 7, 2021, 11:26 a.m. UTC | #29
Hi Vladimir, Hi Xiaoliang,

Am 2021-05-07 14:19, schrieb Vladimir Oltean:
> Devices like Felix need the per-queue max SDU from the

> user - if that isn's specified in the netlink message they'll have to

> default to the interface's MTU.


Btw. just to let you and Xiaoliang know:

It appears that PORT_MAX_SDU isn't working as expected. It is used
as a fallback if QMAXSDU_CFG_n isn't set for the guard band
calculation. But it appears to be _not_ used for discarding any
frames. E.g. if you set PORT_MAX_SDU to 500 the port will still
happily send frames larger than 500 bytes. (Unless of course
you hit the guard band of 500 bytes). OTOH QMAXSDU_CFG_n works
as expected, it will discard oversized frames - and presumably will
set the guard band accordingly, I haven't tested this explicitly.

Thus, I wonder what sense PORT_MAX_SDU makes at all. If you set
the guard band to a smaller value than the MTU, you'll also need
to make sure there will be no larger frames scheduled on that port.

In any case, the workaround is to set QMAXSDU_CFG_n (for all
n=0..7) to the desired max_sdu value instead of using PORT_MAX_SDU.

It might also make sense to check with the IP supplier.

In case anyone wants to implement that for (upstream) Linux ;)

-michael
Xiaoliang Yang June 9, 2021, 8:06 a.m. UTC | #30
On 2021-06-07 19:26, Michael Walle wrote:
> 

> Hi Vladimir, Hi Xiaoliang,
> [...]
> It appears that PORT_MAX_SDU isn't working as expected. It is used as
> a fallback if QMAXSDU_CFG_n isn't set for the guard band calculation.
> But it appears to be _not_ used for discarding any frames.
> [...]
> In any case, the workaround is to set QMAXSDU_CFG_n (for all
> n=0..7) to the desired max_sdu value instead of using PORT_MAX_SDU.
>
> -michael


Yes, PORT_MAX_SDU is only used for guard band calculation. DEV_GMII:MAC_MAXLEN_CFG
limits the frame length accepted by the MAC. I am worried that QMAXSDU is not a universal
setting; it may only exist on Felix, so there is no suitable place to add this configuration.

Thanks,
xiaoliang
Michael Walle June 9, 2021, 8:41 a.m. UTC | #31
Am 2021-06-09 10:06, schrieb Xiaoliang Yang:
> On 2021-06-07 19:26, Michael Walle wrote:
> [...]
> Yes, PORT_MAX_SDU is only used for guard band calculation. DEV_GMII:
> MAC_MAXLEN_CFG limits the frame length accepted by the MAC.


But MAC_MAXLEN_CFG is for ingress handling while you want egress handling;
for example, think of two ingress ports sending to one egress port. The
limitation is on the egress side. Or two queues with different guard
bands/maxsdu settings.

> I am worried that QMAXSDU is not a universal setting; it may exist only
> on Felix, so there is no suitable place to add this configuration.


I can't follow you here. I'm talking about felix and its quirks, e.g. the
static guard band handling there, the reason why we need the maxsdu
setting in the first place.

Or do you think about how to communicate that setting from user space to
the kernel? In that case, I'd say we'll need some kind of parameter for
this kind of device, which doesn't have a dynamic guard band mechanism.
Felix won't be the only one.

-michael
diff mbox series

Patch

diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
index 789fe08cae50..2473bebe48e6 100644
--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
+++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
@@ -1227,8 +1227,12 @@  static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
 	if (taprio->num_entries > VSC9959_TAS_GCL_ENTRY_MAX)
 		return -ERANGE;
 
-	ocelot_rmw(ocelot, QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(port) |
-		   QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q,
+	/* Set the port number and clear ALWAYS_GUARD_BAND_SCH_Q, so that the
+	 * guard band is only implemented for transitions from non-scheduled
+	 * queues to scheduled queues.
+	 */
+	ocelot_rmw(ocelot,
+		   QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(port),
 		   QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM_M |
 		   QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q,
 		   QSYS_TAS_PARAM_CFG_CTRL);