[net-next,RFC,0/2] add elevated refcnt support for page pool

Message ID 1625044676-12441-1-git-send-email-linyunsheng@huawei.com

Yunsheng Lin June 30, 2021, 9:17 a.m. UTC
This patchset adds elevated refcnt support for page pool
and enables skb's page frag recycling based on page pool
in the hns3 driver.

Yunsheng Lin (2):
  page_pool: add page recycling support based on elevated refcnt
  net: hns3: support skb's frag page recycling based on page pool

 drivers/net/ethernet/hisilicon/hns3/hns3_enet.c    |  79 +++++++-
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.h    |   3 +
 drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c |   1 +
 drivers/net/ethernet/marvell/mvneta.c              |   6 +-
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c    |   2 +-
 include/linux/mm_types.h                           |   2 +-
 include/linux/skbuff.h                             |   4 +-
 include/net/page_pool.h                            |  30 ++-
 net/core/page_pool.c                               | 215 +++++++++++++++++----
 9 files changed, 285 insertions(+), 57 deletions(-)

Comments

Ilias Apalodimas July 2, 2021, 8:36 a.m. UTC | #1
Hi Yunsheng, 

On Wed, Jun 30, 2021 at 05:17:54PM +0800, Yunsheng Lin wrote:
> This patchset adds elevated refcnt support for page pool
> and enables skb's page frag recycling based on page pool
> in the hns3 driver.

Thanks for taking the time with this! I am a bit overloaded atm; give me a
few days and I'll go through the patches.

Cheers
/Ilias


Matteo Croce July 2, 2021, 1:39 p.m. UTC | #2
On Wed, 30 Jun 2021 17:17:54 +0800
Yunsheng Lin <linyunsheng@huawei.com> wrote:

> This patchset adds elevated refcnt support for page pool
> and enables skb's page frag recycling based on page pool
> in the hns3 driver.
>
> [...]

Interesting!
Unfortunately I'll not have access to my macchiatobin anytime soon, can
someone test the impact, if any, on mvpp2?

Regards,
-- 
per aspera ad upstream
Russell King (Oracle) July 6, 2021, 3:51 p.m. UTC | #3
On Fri, Jul 02, 2021 at 03:39:47PM +0200, Matteo Croce wrote:
> On Wed, 30 Jun 2021 17:17:54 +0800
> Yunsheng Lin <linyunsheng@huawei.com> wrote:
> > [...]
>
> Interesting!
> Unfortunately I'll not have access to my macchiatobin anytime soon, can
> someone test the impact, if any, on mvpp2?


I'll try to test. Please let me know what kind of testing you're
looking for (I haven't been following these patches, sorry.)

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!
Matteo Croce July 6, 2021, 11:19 p.m. UTC | #4
On Tue, Jul 6, 2021 at 5:51 PM Russell King (Oracle)
<linux@armlinux.org.uk> wrote:
> On Fri, Jul 02, 2021 at 03:39:47PM +0200, Matteo Croce wrote:
> > [...]
>
> I'll try to test. Please let me know what kind of testing you're
> looking for (I haven't been following these patches, sorry.)


A drop test or L2 routing will be enough.
BTW I should have the macchiatobin back on Friday.

Regards,
-- 
per aspera ad upstream
Marcin Wojtas July 7, 2021, 4:50 p.m. UTC | #5
Hi,


On Wed, 7 Jul 2021 at 01:20, Matteo Croce <mcroce@linux.microsoft.com> wrote:
> [...]
>
> A drop test or L2 routing will be enough.
> BTW I should have the macchiatobin back on Friday.

I have a 10G packet generator connected to the 10G ports of a CN913x-DB - I
will stress mvpp2 in L2 forwarding early next week (I'm mostly AFK
until Monday).

Best regards,
Marcin
Matteo Croce July 9, 2021, 4:15 a.m. UTC | #6
On Wed, Jul 7, 2021 at 6:50 PM Marcin Wojtas <mw@semihalf.com> wrote:
> [...]
>
> I have a 10G packet generator connected to the 10G ports of a CN913x-DB - I
> will stress mvpp2 in L2 forwarding early next week (I'm mostly AFK
> until Monday).

I managed to do a drop test on mvpp2. Maybe there is a slowdown, but
it's below the measurement uncertainty.

Perf top before:

Overhead  Shared O  Symbol
   8.48%  [kernel]  [k] page_pool_put_page
   2.57%  [kernel]  [k] page_pool_refill_alloc_cache
   1.58%  [kernel]  [k] page_pool_alloc_pages
   0.75%  [kernel]  [k] page_pool_return_skb_page

after:

Overhead  Shared O  Symbol
   8.34%  [kernel]  [k] page_pool_put_page
   4.52%  [kernel]  [k] page_pool_return_skb_page
   4.42%  [kernel]  [k] page_pool_sub_bias
   3.16%  [kernel]  [k] page_pool_alloc_pages
   2.43%  [kernel]  [k] page_pool_refill_alloc_cache

Regards,
-- 
per aspera ad upstream
Yunsheng Lin July 9, 2021, 6:40 a.m. UTC | #7
On 2021/7/9 12:15, Matteo Croce wrote:
> [...]
>
> I managed to do a drop test on mvpp2. Maybe there is a slowdown, but
> it's below the measurement uncertainty.
>
> Perf top before:
>
> Overhead  Shared O  Symbol
>    8.48%  [kernel]  [k] page_pool_put_page
>    2.57%  [kernel]  [k] page_pool_refill_alloc_cache
>    1.58%  [kernel]  [k] page_pool_alloc_pages
>    0.75%  [kernel]  [k] page_pool_return_skb_page
>
> after:
>
> Overhead  Shared O  Symbol
>    8.34%  [kernel]  [k] page_pool_put_page
>    4.52%  [kernel]  [k] page_pool_return_skb_page
>    4.42%  [kernel]  [k] page_pool_sub_bias
>    3.16%  [kernel]  [k] page_pool_alloc_pages
>    2.43%  [kernel]  [k] page_pool_refill_alloc_cache

Hi Matteo,
Thanks for the testing.
It seems you have adapted the mvpp2 driver to use the new frag
API for the page pool. There is one missing optimization for the XDP
case: the page is always returned to the pool->ring regardless of the
context of page_pool_put_page() in the elevated refcnt case.

Maybe adding back that optimization will close some of the above
performance gap if the drop is happening in softirq context.

Ilias Apalodimas July 9, 2021, 6:42 a.m. UTC | #8
On Fri, Jul 09, 2021 at 02:40:02PM +0800, Yunsheng Lin wrote:
> On 2021/7/9 12:15, Matteo Croce wrote:
> > [...]
>
> It seems you have adapted the mvpp2 driver to use the new frag
> API for the page pool. There is one missing optimization for the XDP
> case: the page is always returned to the pool->ring regardless of the
> context of page_pool_put_page() in the elevated refcnt case.
>
> Maybe adding back that optimization will close some of the above
> performance gap if the drop is happening in softirq context.


I think what Matteo did was a pure netstack test.  We'll need testing on
both XDP and normal network cases to be able to figure out the exact
impact.

Thanks
/Ilias