From patchwork Tue Feb  2 13:31:46 2021
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 375025
Date: Tue, 02 Feb 2021 13:31:46 +0000
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: John Hubbard, David Rientjes, Yisen Zhuang, Salil Mehta,
    Jesse Brandeburg, Tony Nguyen, Saeed Mahameed, Leon Romanovsky,
    Andrew Morton, Jesper Dangaard Brouer, Ilias Apalodimas,
    Jonathan Lemon, Willem de Bruijn, Randy Dunlap, Pablo Neira Ayuso,
    Dexuan Cui, Jakub Sitnicki, Marco Elver, Paolo Abeni,
    Alexander Lobakin, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
    linux-rdma@vger.kernel.org, linux-mm@kvack.org
Reply-To: Alexander Lobakin
Subject: [PATCH RESEND v3 net-next 5/5] net: page_pool: simplify page recycling condition tests
Message-ID: <20210202133030.5760-6-alobakin@pm.me>
In-Reply-To: <20210202133030.5760-1-alobakin@pm.me>
References: <20210202133030.5760-1-alobakin@pm.me>

pool_page_reusable() is a leftover from pre-NUMA-aware times. For now,
this function is just a redundant wrapper over page_is_pfmemalloc(),
so inline it into its sole call site.

Signed-off-by: Alexander Lobakin
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
Reviewed-by: Jesse Brandeburg
Acked-by: David Rientjes
---
 net/core/page_pool.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)
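For illustration only (these notes are not part of the commit): below is a
minimal, standalone userspace sketch of the recycling test as it reads after
this change. The struct page and the page_ref_count()/page_is_pfmemalloc()
helpers here are simplified stand-ins, not the kernel definitions, and
page_pool_can_recycle() is a hypothetical name used only for this sketch;
the real check is open-coded in __page_pool_put_page().

	/* Hypothetical userspace model of the simplified check; the real
	 * code lives in __page_pool_put_page() in net/core/page_pool.c.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	struct page {
		int refcount;		/* stand-in for the page refcount */
		bool pfmemalloc;	/* allocated from emergency reserves */
	};

	static int page_ref_count(const struct page *page)
	{
		return page->refcount;
	}

	static bool page_is_pfmemalloc(const struct page *page)
	{
		return page->pfmemalloc;
	}

	/* Recycle only pages the pool exclusively owns (refcnt == 1) and
	 * that were not allocated while the system was under pressure.
	 */
	static bool page_pool_can_recycle(const struct page *page)
	{
		return page_ref_count(page) == 1 && !page_is_pfmemalloc(page);
	}

	int main(void)
	{
		struct page owned = { .refcount = 1, .pfmemalloc = false };
		struct page reserve = { .refcount = 1, .pfmemalloc = true };

		printf("owned: %d, reserve: %d\n",
		       page_pool_can_recycle(&owned),
		       page_pool_can_recycle(&reserve));
		return 0;
	}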
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index f3c690b8c8e3..ad8b0707af04 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -350,14 +350,6 @@ static bool page_pool_recycle_in_cache(struct page *page,
 	return true;
 }
 
-/* page is NOT reusable when:
- * 1) allocated when system is under some pressure. (page_is_pfmemalloc)
- */
-static bool pool_page_reusable(struct page_pool *pool, struct page *page)
-{
-	return !page_is_pfmemalloc(page);
-}
-
 /* If the page refcnt == 1, this will try to recycle the page.
  * if PP_FLAG_DMA_SYNC_DEV is set, we'll try to sync the DMA area for
  * the configured size min(dma_sync_size, pool->max_len).
@@ -373,9 +365,11 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
 	 * regular page allocator APIs.
 	 *
 	 * refcnt == 1 means page_pool owns page, and can recycle it.
+	 *
+	 * page is NOT reusable when allocated when system is under
+	 * some pressure. (page_is_pfmemalloc)
 	 */
-	if (likely(page_ref_count(page) == 1 &&
-		   pool_page_reusable(pool, page))) {
+	if (likely(page_ref_count(page) == 1 && !page_is_pfmemalloc(page))) {
 		/* Read barrier done in page_ref_count / READ_ONCE */
 		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)