From patchwork Wed Jan 27 20:11:12 2021
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 372184
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: David Rientjes, Yisen Zhuang, Salil Mehta, Jesse Brandeburg,
    Tony Nguyen, Saeed Mahameed, Leon Romanovsky, Andrew Morton,
    Jesper Dangaard Brouer, Ilias Apalodimas, Jonathan Lemon,
    Willem de Bruijn, Randy Dunlap, Pablo Neira Ayuso, Dexuan Cui,
    Jakub Sitnicki, Marco Elver, Paolo Abeni, Alexander Lobakin,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    intel-wired-lan@lists.osuosl.org, linux-rdma@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH v2 net-next 2/4] skbuff: constify skb_propagate_pfmemalloc() "page" argument
Date: Wed, 27 Jan 2021 20:11:12 +0000
Message-ID: <20210127201031.98544-3-alobakin@pm.me>
In-Reply-To: <20210127201031.98544-1-alobakin@pm.me>
References: <20210127201031.98544-1-alobakin@pm.me>
X-Mailing-List: netdev@vger.kernel.org

The function doesn't write anything to the page struct itself, so this
argument can be const.

Misc: align second argument to the brace while at it.
Signed-off-by: Alexander Lobakin
Acked-by: David Rientjes
---
 include/linux/skbuff.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 9313b5aaf45b..b027526da4f9 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -2943,8 +2943,8 @@ static inline struct page *dev_alloc_page(void)
  * @page: The page that was allocated from skb_alloc_page
  * @skb: The skb that may need pfmemalloc set
  */
-static inline void skb_propagate_pfmemalloc(struct page *page,
-					    struct sk_buff *skb)
+static inline void skb_propagate_pfmemalloc(const struct page *page,
+					    struct sk_buff *skb)
 {
 	if (page_is_pfmemalloc(page))
 		skb->pfmemalloc = true;
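[Illustration, not part of the patch] A minimal sketch of what the
constified signature permits: a caller that only inspects the page can
itself take a "const struct page *" and forward it without a cast. The
helper name here is hypothetical, assuming only <linux/skbuff.h>:

#include <linux/skbuff.h>

/* Hypothetical caller: the page is only read, never modified, so it
 * can be passed around as "const struct page *" end to end.
 */
static inline void example_mark_pfmemalloc(const struct page *page,
					   struct sk_buff *skb)
{
	/* Compiles without a cast now that skb_propagate_pfmemalloc()
	 * accepts a const page pointer; only skb is written to.
	 */
	skb_propagate_pfmemalloc(page, skb);
}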
From patchwork Wed Jan 27 20:11:31 2021
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 372183
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: David Rientjes, Yisen Zhuang, Salil Mehta, Jesse Brandeburg,
    Tony Nguyen, Saeed Mahameed, Leon Romanovsky, Andrew Morton,
    Jesper Dangaard Brouer, Ilias Apalodimas, Jonathan Lemon,
    Willem de Bruijn, Randy Dunlap, Pablo Neira Ayuso, Dexuan Cui,
    Jakub Sitnicki, Marco Elver, Paolo Abeni, Alexander Lobakin,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    intel-wired-lan@lists.osuosl.org, linux-rdma@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH v2 net-next 4/4] net: page_pool: simplify page recycling condition tests
Date: Wed, 27 Jan 2021 20:11:31 +0000
Message-ID: <20210127201031.98544-5-alobakin@pm.me>
In-Reply-To: <20210127201031.98544-1-alobakin@pm.me>
References: <20210127201031.98544-1-alobakin@pm.me>
X-Mailing-List: netdev@vger.kernel.org

pool_page_reusable() is a leftover from pre-NUMA-aware times. For now,
this function is just a redundant wrapper over page_is_pfmemalloc(),
so inline it into its sole call site.

Signed-off-by: Alexander Lobakin
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
Acked-by: David Rientjes
---
 net/core/page_pool.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index f3c690b8c8e3..ad8b0707af04 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -350,14 +350,6 @@ static bool page_pool_recycle_in_cache(struct page *page,
 	return true;
 }
 
-/* page is NOT reusable when:
- * 1) allocated when system is under some pressure. (page_is_pfmemalloc)
- */
-static bool pool_page_reusable(struct page_pool *pool, struct page *page)
-{
-	return !page_is_pfmemalloc(page);
-}
-
 /* If the page refcnt == 1, this will try to recycle the page.
  * if PP_FLAG_DMA_SYNC_DEV is set, we'll try to sync the DMA area for
  * the configured size min(dma_sync_size, pool->max_len).
@@ -373,9 +365,11 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
 	 * regular page allocator APIs.
 	 *
 	 * refcnt == 1 means page_pool owns page, and can recycle it.
+	 *
+	 * page is NOT reusable when allocated when system is under
+	 * some pressure. (page_is_pfmemalloc)
 	 */
-	if (likely(page_ref_count(page) == 1 &&
-		   pool_page_reusable(pool, page))) {
+	if (likely(page_ref_count(page) == 1 && !page_is_pfmemalloc(page))) {
 		/* Read barrier done in page_ref_count / READ_ONCE */
 		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
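[Illustration, not part of the patch] After the inlining, the recycle
test reduces to a single predicate. A minimal standalone sketch (the
helper is hypothetical, for illustration only), assuming <linux/mm.h>
for page_is_pfmemalloc() and <linux/page_ref.h> for page_ref_count():

#include <linux/mm.h>
#include <linux/page_ref.h>

/* A page may be recycled only when the pool holds the sole reference
 * (refcnt == 1) and the page was not served from pfmemalloc
 * (emergency) reserves while the system was under memory pressure.
 */
static inline bool example_page_recyclable(struct page *page)
{
	return page_ref_count(page) == 1 && !page_is_pfmemalloc(page);
}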