[0/9,v6] Introduce a bulk order-0 page allocator with two in-tree users

Message ID 20210325114228.27719-1-mgorman@techsingularity.net

Message

Mel Gorman March 25, 2021, 11:42 a.m. UTC
This series is based on top of Matthew Wilcox's series "Rationalise
__alloc_pages wrapper" and does not apply to 5.12-rc4. If Andrew's tree
is not the testing baseline then the following git tree will work.

git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git mm-bulk-rebase-v6r7

Changelog since v5
o Add micro-optimisations from Jesper
o Add array-based versions of the sunrpc and page_pool users
o Allocate 1 page if local zone watermarks are not met
o Fix statistics
o Call prep_new_page() as pages are allocated. Batching prep_new_page()
  with IRQs enabled limited how the API could be used (e.g. the list had
  to be empty) and added too much complexity.

Changelog since v4
o Drop users of the API
o Remove free_pages_bulk interface, no users
o Add array interface
o Allocate single page if watermark checks on local zones fail

Changelog since v3
o Rebase on top of Matthew's series consolidating the alloc_pages API
o Rename alloced to allocated
o Split out preparation patch for prepare_alloc_pages
o Defensive check for bulk allocation of <= 0 pages
o Call single page allocation path only if no pages were allocated
o Minor cosmetic cleanups
o Reorder patch dependencies by subsystem. As this is a cross-subsystem
  series, the mm patches have to be merged before the sunrpc and net
  users.

Changelog since v2
o Prep new pages with IRQs enabled
o Minor documentation update

Changelog since v1
o Parenthesise binary and boolean comparisons
o Add reviewed-bys
o Rebase to 5.12-rc2

This series introduces a bulk order-0 page allocator with sunrpc and
the network page pool being the first users. The implementation is not
efficient as semantics needed to be ironed out first. If no other semantic
changes are needed, it can be made more efficient.  Despite that, this
is a performance improvement for users that require multiple pages for an
operation without making multiple round-trips to the page allocator. Quoting
the last patch for the high-speed networking use-case:

            Kernel          XDP stats       CPU     pps           Delta
            Baseline        XDP-RX CPU      total   3,771,046       n/a
            List            XDP-RX CPU      total   3,940,242    +4.49%
            Array           XDP-RX CPU      total   4,249,224   +12.68%

Comments

Matthew Wilcox (Oracle) March 25, 2021, 12:50 p.m. UTC | #1
On Thu, Mar 25, 2021 at 11:42:19AM +0000, Mel Gorman wrote:
> This series introduces a bulk order-0 page allocator with sunrpc and
> the network page pool being the first users. The implementation is not
> efficient as semantics needed to be ironed out first. If no other semantic
> changes are needed, it can be made more efficient.  Despite that, this
> is a performance-related for users that require multiple pages for an
> operation without multiple round-trips to the page allocator. Quoting
> the last patch for the high-speed networking use-case
> 
>             Kernel          XDP stats       CPU     pps           Delta
>             Baseline        XDP-RX CPU      total   3,771,046       n/a
>             List            XDP-RX CPU      total   3,940,242    +4.49%
>             Array           XDP-RX CPU      total   4,249,224   +12.68%
> 
> From the SUNRPC traces of svc_alloc_arg()
> 
> 	Single page: 25.007 us per call over 532,571 calls
> 	Bulk list:    6.258 us per call over 517,034 calls
> 	Bulk array:   4.590 us per call over 517,442 calls
> 
> Both potential users in this series are corner cases (NFS and high-speed
> networks) so it is unlikely that most users will see any benefit in the
> short term. Other potential other users are batch allocations for page
> cache readahead, fault around and SLUB allocations when high-order pages
> are unavailable. It's unknown how much benefit would be seen by converting
> multiple page allocation calls to a single batch or what difference it may
> make to headline performance.

We have a third user, vmalloc(), with a 16% perf improvement.  I know the
email says 21% but that includes the 5% improvement from switching to
kvmalloc() to allocate area->pages.

https://lore.kernel.org/linux-mm/20210323133948.GA10046@pc638.lan/

I don't know how many _frequent_ vmalloc users we have that will benefit
from this, but it's probably more than will benefit from improvements
to 200Gbit networking performance.
Mel Gorman March 25, 2021, 1:25 p.m. UTC | #2
On Thu, Mar 25, 2021 at 12:50:01PM +0000, Matthew Wilcox wrote:
> On Thu, Mar 25, 2021 at 11:42:19AM +0000, Mel Gorman wrote:
> > This series introduces a bulk order-0 page allocator with sunrpc and
> > the network page pool being the first users. The implementation is not
> > efficient as semantics needed to be ironed out first. If no other semantic
> > changes are needed, it can be made more efficient.  Despite that, this
> > is a performance-related for users that require multiple pages for an
> > operation without multiple round-trips to the page allocator. Quoting
> > the last patch for the high-speed networking use-case
> > 
> >             Kernel          XDP stats       CPU     pps           Delta
> >             Baseline        XDP-RX CPU      total   3,771,046       n/a
> >             List            XDP-RX CPU      total   3,940,242    +4.49%
> >             Array           XDP-RX CPU      total   4,249,224   +12.68%
> > 
> > From the SUNRPC traces of svc_alloc_arg()
> > 
> > 	Single page: 25.007 us per call over 532,571 calls
> > 	Bulk list:    6.258 us per call over 517,034 calls
> > 	Bulk array:   4.590 us per call over 517,442 calls
> > 
> > Both potential users in this series are corner cases (NFS and high-speed
> > networks) so it is unlikely that most users will see any benefit in the
> > short term. Other potential other users are batch allocations for page
> > cache readahead, fault around and SLUB allocations when high-order pages
> > are unavailable. It's unknown how much benefit would be seen by converting
> > multiple page allocation calls to a single batch or what difference it may
> > make to headline performance.
> 
> We have a third user, vmalloc(), with a 16% perf improvement.  I know the
> email says 21% but that includes the 5% improvement from switching to
> kvmalloc() to allocate area->pages.
> 
> https://lore.kernel.org/linux-mm/20210323133948.GA10046@pc638.lan/
> 

That's fairly promising. Assuming the bulk allocator gets merged, it would
make sense to add vmalloc on top. Thanks for bringing it to my attention
because it's far more relevant than my imaginary potential use cases.

> I don't know how many _frequent_ vmalloc users we have that will benefit
> from this, but it's probably more than will benefit from improvements
> to 200Gbit networking performance.

I think it was 100Gbit being looked at but your point is still valid and
there is no harm in incrementally improving over time.
Alexander Lobakin March 25, 2021, 1:33 p.m. UTC | #3
From: Mel Gorman <mgorman@techsingularity.net>
Date: Thu, 25 Mar 2021 11:42:28 +0000

> From: Jesper Dangaard Brouer <brouer@redhat.com>
>
> There are cases where the page_pool need to refill with pages from the
> page allocator. Some workloads cause the page_pool to release pages
> instead of recycling these pages.
>
> For these workload it can improve performance to bulk alloc pages from
> the page-allocator to refill the alloc cache.
>
> For XDP-redirect workload with 100G mlx5 driver (that use page_pool)
> redirecting xdp_frame packets into a veth, that does XDP_PASS to create
> an SKB from the xdp_frame, which then cannot return the page to the
> page_pool.
>
> Performance results under GitHub xdp-project[1]:
>  [1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org
>
> Mel: The patch "net: page_pool: convert to use alloc_pages_bulk_array
> variant" was squashed with this patch. From the test page, the array
> variant was superior with one of the test results as follows.
>
> 	Kernel		XDP stats       CPU     pps           Delta
> 	Baseline	XDP-RX CPU      total   3,771,046       n/a
> 	List		XDP-RX CPU      total   3,940,242    +4.49%
> 	Array		XDP-RX CPU      total   4,249,224   +12.68%
>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

I've tested it a lot over the past two weeks and I'm very satisfied with
the results, especially the new array-based version.
I haven't had a chance to test this particular set yet, but still.

Reviewed-by: Alexander Lobakin <alobakin@pm.me>

Great work, thank you all guys!

> ---
>  include/net/page_pool.h |  2 +-
>  net/core/page_pool.c    | 82 ++++++++++++++++++++++++++++-------------
>  2 files changed, 57 insertions(+), 27 deletions(-)
>
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index b5b195305346..6d517a37c18b 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -65,7 +65,7 @@
>  #define PP_ALLOC_CACHE_REFILL	64
>  struct pp_alloc_cache {
>  	u32 count;
> -	void *cache[PP_ALLOC_CACHE_SIZE];
> +	struct page *cache[PP_ALLOC_CACHE_SIZE];
>  };
>
>  struct page_pool_params {
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 40e1b2beaa6c..9ec1aa9640ad 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -203,38 +203,17 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
>  	return true;
>  }
>
> -/* slow path */
> -noinline
> -static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> -						 gfp_t _gfp)
> +static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
> +						 gfp_t gfp)
>  {
> -	unsigned int pp_flags = pool->p.flags;
>  	struct page *page;
> -	gfp_t gfp = _gfp;
> -
> -	/* We could always set __GFP_COMP, and avoid this branch, as
> -	 * prep_new_page() can handle order-0 with __GFP_COMP.
> -	 */
> -	if (pool->p.order)
> -		gfp |= __GFP_COMP;
> -
> -	/* FUTURE development:
> -	 *
> -	 * Current slow-path essentially falls back to single page
> -	 * allocations, which doesn't improve performance.  This code
> -	 * need bulk allocation support from the page allocator code.
> -	 */
>
> -	/* Cache was empty, do real allocation */
> -#ifdef CONFIG_NUMA
> +	gfp |= __GFP_COMP;
>  	page = alloc_pages_node(pool->p.nid, gfp, pool->p.order);
> -#else
> -	page = alloc_pages(gfp, pool->p.order);
> -#endif
> -	if (!page)
> +	if (unlikely(!page))
>  		return NULL;
>
> -	if ((pp_flags & PP_FLAG_DMA_MAP) &&
> +	if ((pool->p.flags & PP_FLAG_DMA_MAP) &&
>  	    unlikely(!page_pool_dma_map(pool, page))) {
>  		put_page(page);
>  		return NULL;
> @@ -243,6 +222,57 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>  	/* Track how many pages are held 'in-flight' */
>  	pool->pages_state_hold_cnt++;
>  	trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
> +	return page;
> +}
> +
> +/* slow path */
> +noinline
> +static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> +						 gfp_t gfp)
> +{
> +	const int bulk = PP_ALLOC_CACHE_REFILL;
> +	unsigned int pp_flags = pool->p.flags;
> +	unsigned int pp_order = pool->p.order;
> +	struct page *page;
> +	int i, nr_pages;
> +
> +	/* Don't support bulk alloc for high-order pages */
> +	if (unlikely(pp_order))
> +		return __page_pool_alloc_page_order(pool, gfp);
> +
> +	/* Unnecessary as alloc cache is empty, but guarantees zero count */
> +	if (unlikely(pool->alloc.count > 0))
> +		return pool->alloc.cache[--pool->alloc.count];
> +
> +	/* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
> +	memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
> +
> +	nr_pages = alloc_pages_bulk_array(gfp, bulk, pool->alloc.cache);
> +	if (unlikely(!nr_pages))
> +		return NULL;
> +
> +	/* Pages have been filled into alloc.cache array, but count is zero and
> +	 * page element have not been (possibly) DMA mapped.
> +	 */
> +	for (i = 0; i < nr_pages; i++) {
> +		page = pool->alloc.cache[i];
> +		if ((pp_flags & PP_FLAG_DMA_MAP) &&
> +		    unlikely(!page_pool_dma_map(pool, page))) {
> +			put_page(page);
> +			continue;
> +		}
> +		pool->alloc.cache[pool->alloc.count++] = page;
> +		/* Track how many pages are held 'in-flight' */
> +		pool->pages_state_hold_cnt++;
> +		trace_page_pool_state_hold(pool, page,
> +					   pool->pages_state_hold_cnt);
> +	}
> +
> +	/* Return last page */
> +	if (likely(pool->alloc.count > 0))
> +		page = pool->alloc.cache[--pool->alloc.count];
> +	else
> +		page = NULL;
>
>  	/* When page just alloc'ed is should/must have refcnt 1. */
>  	return page;
> --
> 2.26.2

Al
Uladzislau Rezki (Sony) March 25, 2021, 2:06 p.m. UTC | #4
> On Thu, Mar 25, 2021 at 12:50:01PM +0000, Matthew Wilcox wrote:
> > On Thu, Mar 25, 2021 at 11:42:19AM +0000, Mel Gorman wrote:
> > > This series introduces a bulk order-0 page allocator with sunrpc and
> > > the network page pool being the first users. The implementation is not
> > > efficient as semantics needed to be ironed out first. If no other semantic
> > > changes are needed, it can be made more efficient.  Despite that, this
> > > is a performance-related for users that require multiple pages for an
> > > operation without multiple round-trips to the page allocator. Quoting
> > > the last patch for the high-speed networking use-case
> > > 
> > >             Kernel          XDP stats       CPU     pps           Delta
> > >             Baseline        XDP-RX CPU      total   3,771,046       n/a
> > >             List            XDP-RX CPU      total   3,940,242    +4.49%
> > >             Array           XDP-RX CPU      total   4,249,224   +12.68%
> > > 
> > > From the SUNRPC traces of svc_alloc_arg()
> > > 
> > > 	Single page: 25.007 us per call over 532,571 calls
> > > 	Bulk list:    6.258 us per call over 517,034 calls
> > > 	Bulk array:   4.590 us per call over 517,442 calls
> > > 
> > > Both potential users in this series are corner cases (NFS and high-speed
> > > networks) so it is unlikely that most users will see any benefit in the
> > > short term. Other potential other users are batch allocations for page
> > > cache readahead, fault around and SLUB allocations when high-order pages
> > > are unavailable. It's unknown how much benefit would be seen by converting
> > > multiple page allocation calls to a single batch or what difference it may
> > > make to headline performance.
> > 
> > We have a third user, vmalloc(), with a 16% perf improvement.  I know the
> > email says 21% but that includes the 5% improvement from switching to
> > kvmalloc() to allocate area->pages.
> > 
> > https://lore.kernel.org/linux-mm/20210323133948.GA10046@pc638.lan/
> > 
> 
> That's fairly promising. Assuming the bulk allocator gets merged, it would
> make sense to add vmalloc on top. That's for bringing it to my attention
> because it's far more relevant than my imaginary potential use cases.
> 
For vmalloc we should be able to allocate on a specific NUMA node;
at least the current vmalloc interface takes it into account. As far as
I can see, the current bulk interface allocates on the current node:

static inline unsigned long
alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
{
    return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
}

Or am I missing something?

--
Vlad Rezki
Uladzislau Rezki (Sony) March 25, 2021, 2:13 p.m. UTC | #5
On Thu, Mar 25, 2021 at 02:09:27PM +0000, Matthew Wilcox wrote:
> On Thu, Mar 25, 2021 at 03:06:57PM +0100, Uladzislau Rezki wrote:
> > For the vmalloc we should be able to allocating on a specific NUMA node,
> > at least the current interface takes it into account. As far as i see
> > the current interface allocate on a current node:
> > 
> > static inline unsigned long
> > alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
> > {
> >     return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
> > }
> > 
> > Or am i missing something?
> 
> You can call __alloc_pages_bulk() directly; there's no need to indirect
> through alloc_pages_bulk_array().
>
OK. It is accessible then.

--
Vlad Rezki
Mel Gorman March 25, 2021, 2:26 p.m. UTC | #6
On Thu, Mar 25, 2021 at 03:06:57PM +0100, Uladzislau Rezki wrote:
> > On Thu, Mar 25, 2021 at 12:50:01PM +0000, Matthew Wilcox wrote:
> > > On Thu, Mar 25, 2021 at 11:42:19AM +0000, Mel Gorman wrote:
> > > > This series introduces a bulk order-0 page allocator with sunrpc and
> > > > the network page pool being the first users. The implementation is not
> > > > efficient as semantics needed to be ironed out first. If no other semantic
> > > > changes are needed, it can be made more efficient.  Despite that, this
> > > > is a performance-related for users that require multiple pages for an
> > > > operation without multiple round-trips to the page allocator. Quoting
> > > > the last patch for the high-speed networking use-case
> > > > 
> > > >             Kernel          XDP stats       CPU     pps           Delta
> > > >             Baseline        XDP-RX CPU      total   3,771,046       n/a
> > > >             List            XDP-RX CPU      total   3,940,242    +4.49%
> > > >             Array           XDP-RX CPU      total   4,249,224   +12.68%
> > > > 
> > > > From the SUNRPC traces of svc_alloc_arg()
> > > > 
> > > > 	Single page: 25.007 us per call over 532,571 calls
> > > > 	Bulk list:    6.258 us per call over 517,034 calls
> > > > 	Bulk array:   4.590 us per call over 517,442 calls
> > > > 
> > > > Both potential users in this series are corner cases (NFS and high-speed
> > > > networks) so it is unlikely that most users will see any benefit in the
> > > > short term. Other potential other users are batch allocations for page
> > > > cache readahead, fault around and SLUB allocations when high-order pages
> > > > are unavailable. It's unknown how much benefit would be seen by converting
> > > > multiple page allocation calls to a single batch or what difference it may
> > > > make to headline performance.
> > > 
> > > We have a third user, vmalloc(), with a 16% perf improvement.  I know the
> > > email says 21% but that includes the 5% improvement from switching to
> > > kvmalloc() to allocate area->pages.
> > > 
> > > https://lore.kernel.org/linux-mm/20210323133948.GA10046@pc638.lan/
> > > 
> > 
> > That's fairly promising. Assuming the bulk allocator gets merged, it would
> > make sense to add vmalloc on top. That's for bringing it to my attention
> > because it's far more relevant than my imaginary potential use cases.
> > 
> For the vmalloc we should be able to allocating on a specific NUMA node,
> at least the current interface takes it into account. As far as i see
> the current interface allocate on a current node:
> 
> static inline unsigned long
> alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
> {
>     return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
> }
> 
> Or am i missing something?
> 

No, you're not missing anything. Options would be to add a helper similar
to alloc_pages_node() or to call __alloc_pages_bulk() directly, specifying
a node and using __GFP_THISNODE. prepare_alloc_pages() should pick the
correct zonelist containing only the required node.
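A node-aware bulk helper along those lines could look like the sketch
below. This is hypothetical kernel-style code, not part of this series
(the name alloc_pages_bulk_array_node is illustrative); it mirrors how
alloc_pages_node() wraps the core allocator and is not buildable outside
a kernel tree.

```c
/*
 * Hypothetical helper, not in this series: a node-aware variant of
 * alloc_pages_bulk_array() so a caller like vmalloc can target a
 * specific NUMA node instead of numa_mem_id().
 */
static inline unsigned long
alloc_pages_bulk_array_node(gfp_t gfp, int nid, unsigned long nr_pages,
			    struct page **page_array)
{
	if (nid == NUMA_NO_NODE)
		nid = numa_mem_id();

	/* Caller may also pass __GFP_THISNODE in gfp to forbid fallback
	 * to other nodes; prepare_alloc_pages() then builds a zonelist
	 * containing only the requested node.
	 */
	return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, NULL, page_array);
}
```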

> --
> Vlad Rezki
Vlastimil Babka April 12, 2021, 10:01 a.m. UTC | #7
On 3/25/21 12:42 PM, Mel Gorman wrote:
> Review feedback of the bulk allocator twice found problems with "alloced"
> being a counter for pages allocated. The naming was based on the API name
> "alloc" and was based on the idea that verbal communication about malloc
> tends to use the fake word "malloced" instead of the fake word mallocated.
> To be consistent, this preparation patch renames alloced to allocated
> in rmqueue_bulk so the bulk allocator and per-cpu allocator use similar
> names when the bulk allocator is introduced.
> 
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/page_alloc.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index dfa9af064f74..8a3e13277e22 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2908,7 +2908,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
>  			unsigned long count, struct list_head *list,
>  			int migratetype, unsigned int alloc_flags)
>  {
> -	int i, alloced = 0;
> +	int i, allocated = 0;
>  
>  	spin_lock(&zone->lock);
>  	for (i = 0; i < count; ++i) {
> @@ -2931,7 +2931,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
>  		 * pages are ordered properly.
>  		 */
>  		list_add_tail(&page->lru, list);
> -		alloced++;
> +		allocated++;
>  		if (is_migrate_cma(get_pcppage_migratetype(page)))
>  			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
>  					      -(1 << order));
> @@ -2940,12 +2940,12 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
>  	/*
>  	 * i pages were removed from the buddy list even if some leak due
>  	 * to check_pcp_refill failing so adjust NR_FREE_PAGES based
> -	 * on i. Do not confuse with 'alloced' which is the number of
> +	 * on i. Do not confuse with 'allocated' which is the number of
>  	 * pages added to the pcp list.
>  	 */
>  	__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
>  	spin_unlock(&zone->lock);
> -	return alloced;
> +	return allocated;
>  }
>  
>  #ifdef CONFIG_NUMA
>
Vlastimil Babka April 12, 2021, 10:36 a.m. UTC | #8
On 3/25/21 12:42 PM, Mel Gorman wrote:
> The proposed callers for the bulk allocator store pages from the bulk
> allocator in an array. This patch adds an array-based interface to the API
> to avoid multiple list iterations. The page list interface is preserved
> to avoid requiring all users of the bulk API to allocate and manage enough
> storage to store the pages.
> 
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Acked-by: Vlastimil Babka <vbabka@suse.cz>
Vlastimil Babka April 12, 2021, 10:59 a.m. UTC | #9
On 3/25/21 12:42 PM, Mel Gorman wrote:
> From: Jesper Dangaard Brouer <brouer@redhat.com>
> 
> When __alloc_pages_bulk() got introduced two callers of __rmqueue_pcplist
> exist and the compiler chooses to not inline this function.
> 
>  ./scripts/bloat-o-meter vmlinux-before vmlinux-inline__rmqueue_pcplist
> add/remove: 0/1 grow/shrink: 2/0 up/down: 164/-125 (39)
> Function                                     old     new   delta
> rmqueue                                     2197    2296     +99
> __alloc_pages_bulk                          1921    1986     +65
> __rmqueue_pcplist                            125       -    -125
> Total: Before=19374127, After=19374166, chg +0.00%
> 
> modprobe page_bench04_bulk loops=$((10**7))
> 
> Type:time_bulk_page_alloc_free_array
>  -  Per elem: 106 cycles(tsc) 29.595 ns (step:64)
>  - (measurement period time:0.295955434 sec time_interval:295955434)
>  - (invoke count:10000000 tsc_interval:1065447105)
> 
> Before:
>  - Per elem: 110 cycles(tsc) 30.633 ns (step:64)
> 
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/page_alloc.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1ec18121268b..d900e92884b2 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3415,7 +3415,8 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
>  }
>  
>  /* Remove page from the per-cpu list, caller must protect the list */
> -static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
> +static inline
> +struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
>  			unsigned int alloc_flags,
>  			struct per_cpu_pages *pcp,
>  			struct list_head *list)
>
>