From patchwork Wed Dec 9 00:54:42 2020
X-Patchwork-Submitter: Wei Wang
X-Patchwork-Id: 341790
Date: Tue, 8 Dec 2020 16:54:42 -0800
Message-Id: <20201209005444.1949356-2-weiwan@google.com>
In-Reply-To: <20201209005444.1949356-1-weiwan@google.com>
References: <20201209005444.1949356-1-weiwan@google.com>
Subject: [PATCH net-next v4 1/3] net: extract napi poll functionality to __napi_poll()
From: Wei Wang
To: David Miller, netdev@vger.kernel.org, Jakub Kicinski
Cc: Paolo Abeni, Hannes Frederic Sowa, Eric Dumazet, Felix Fietkau, Hillf Danton
X-Mailing-List: netdev@vger.kernel.org

From: Felix Fietkau

This commit introduces a new function, __napi_poll(), which contains the
main logic of the existing napi_poll() function and will be called by
other functions in later commits.
The idea and implementation are by Felix Fietkau, originally proposed as
part of a patch to move napi work to work_queue context.
By itself, this commit is only a code restructure.

Signed-off-by: Felix Fietkau
Signed-off-by: Wei Wang
Reviewed-by: Eric Dumazet
---
 net/core/dev.c | 35 +++++++++++++++++++++++++----------
 1 file changed, 25 insertions(+), 10 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index ce8fea2e2788..8064af1dd03c 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6765,15 +6765,10 @@ void __netif_napi_del(struct napi_struct *napi)
 }
 EXPORT_SYMBOL(__netif_napi_del);
 
-static int napi_poll(struct napi_struct *n, struct list_head *repoll)
+static int __napi_poll(struct napi_struct *n, bool *repoll)
 {
-	void *have;
 	int work, weight;
 
-	list_del_init(&n->poll_list);
-
-	have = netpoll_poll_lock(n);
-
 	weight = n->weight;
 
 	/* This NAPI_STATE_SCHED test is for avoiding a race
@@ -6793,7 +6788,7 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
 			    n->poll, work, weight);
 
 	if (likely(work < weight))
-		goto out_unlock;
+		return work;
 
 	/* Drivers must not modify the NAPI state if they
 	 * consume the entire weight.  In such cases this code
@@ -6802,7 +6797,7 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
 	 */
 	if (unlikely(napi_disable_pending(n))) {
 		napi_complete(n);
-		goto out_unlock;
+		return work;
 	}
 
 	/* The NAPI context has more processing work, but busy-polling
@@ -6815,7 +6810,7 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
 			 */
 			napi_schedule(n);
 		}
-		goto out_unlock;
+		return work;
 	}
 
 	if (n->gro_bitmask) {
@@ -6833,9 +6828,29 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
 	if (unlikely(!list_empty(&n->poll_list))) {
 		pr_warn_once("%s: Budget exhausted after napi rescheduled\n",
 			     n->dev ? n->dev->name : "backlog");
-		goto out_unlock;
+		return work;
 	}
 
+	*repoll = true;
+
+	return work;
+}
+
+static int napi_poll(struct napi_struct *n, struct list_head *repoll)
+{
+	bool do_repoll = false;
+	void *have;
+	int work;
+
+	list_del_init(&n->poll_list);
+
+	have = netpoll_poll_lock(n);
+
+	work = __napi_poll(n, &do_repoll);
+
+	if (!do_repoll)
+		goto out_unlock;
+
 	list_add_tail(&n->poll_list, repoll);
+
 out_unlock:
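Note: the value of this split is that __napi_poll() can be driven from contexts
other than net_rx_action(). Below is a minimal sketch of such a caller; it is
illustrative only (not part of the patch), the function name is hypothetical,
and the locking mirrors what the napi_poll() wrapper above does.

/* Illustrative only: drive __napi_poll() outside net_rx_action(), polling
 * until the driver reports no more pending work. Locking mirrors napi_poll().
 */
static void example_poll_until_done(struct napi_struct *n)
{
	bool repoll;
	void *have;

	do {
		repoll = false;

		local_bh_disable();
		have = netpoll_poll_lock(n);
		__napi_poll(n, &repoll);	/* runs n->poll() and flushes GRO */
		netpoll_poll_unlock(have);
		local_bh_enable();

		if (repoll)
			cond_resched();		/* yield before polling again */
	} while (repoll);
}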
From patchwork Wed Dec 9 00:54:43 2020
X-Patchwork-Submitter: Wei Wang
X-Patchwork-Id: 340894
Date: Tue, 8 Dec 2020 16:54:43 -0800
Message-Id: <20201209005444.1949356-3-weiwan@google.com>
In-Reply-To: <20201209005444.1949356-1-weiwan@google.com>
References: <20201209005444.1949356-1-weiwan@google.com>
Subject: [PATCH net-next v4 2/3] net: implement threaded-able napi poll loop support
From: Wei Wang
To: David Miller, netdev@vger.kernel.org, Jakub Kicinski
Cc: Paolo Abeni, Hannes Frederic Sowa, Eric Dumazet, Felix Fietkau, Hillf Danton
X-Mailing-List: netdev@vger.kernel.org

This patch allows running each napi poll loop inside its own kernel
thread. Threaded mode can be enabled through the napi_set_threaded()
API and does not require a device up/down. The kthread is created on
demand when napi_set_threaded() is called, and is eventually shut down
in napi_disable().

Once threaded mode is enabled and the kthread is started,
napi_schedule() wakes up that thread instead of scheduling the softirq.

The threaded poll loop behaves much like net_rx_action(), but it does
not have to manipulate local irqs and uses an explicit scheduling point
based on netdev_budget.

Co-developed-by: Paolo Abeni
Signed-off-by: Paolo Abeni
Co-developed-by: Hannes Frederic Sowa
Signed-off-by: Hannes Frederic Sowa
Co-developed-by: Jakub Kicinski
Signed-off-by: Jakub Kicinski
Signed-off-by: Wei Wang
Reviewed-by: Eric Dumazet
---
 include/linux/netdevice.h |   5 ++
 net/core/dev.c            | 105 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 110 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 7bf167993c05..abd3b52b7da6 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -347,6 +347,7 @@ struct napi_struct {
 	struct list_head dev_list;
 	struct hlist_node napi_hash_node;
 	unsigned int napi_id;
+	struct task_struct *thread;
 };
 
 enum {
@@ -358,6 +359,7 @@ enum {
 	NAPI_STATE_NO_BUSY_POLL,	/* Do not add in napi_hash, no busy polling */
 	NAPI_STATE_IN_BUSY_POLL,	/* sk_busy_loop() owns this NAPI */
 	NAPI_STATE_PREFER_BUSY_POLL,	/* prefer busy-polling over softirq processing*/
+	NAPI_STATE_THREADED,		/* The poll is performed inside its own thread*/
 };
 
 enum {
@@ -369,6 +371,7 @@ enum {
 	NAPIF_STATE_NO_BUSY_POLL	= BIT(NAPI_STATE_NO_BUSY_POLL),
 	NAPIF_STATE_IN_BUSY_POLL	= BIT(NAPI_STATE_IN_BUSY_POLL),
 	NAPIF_STATE_PREFER_BUSY_POLL	= BIT(NAPI_STATE_PREFER_BUSY_POLL),
+	NAPIF_STATE_THREADED		= BIT(NAPI_STATE_THREADED),
 };
 
 enum gro_result {
@@ -495,6 +498,8 @@ static inline bool napi_complete(struct napi_struct *n)
 	return napi_complete_done(n, 0);
 }
 
+int napi_set_threaded(struct napi_struct *n, bool threaded);
+
 /**
  *	napi_disable - prevent NAPI from scheduling
  *	@n: NAPI context
diff --git a/net/core/dev.c b/net/core/dev.c
index 8064af1dd03c..cb6c4e2363a4 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -91,6 +91,7 @@
 #include
 #include
 #include
+#include <linux/kthread.h>
 #include
 #include
 #include
@@ -1475,6 +1476,36 @@ void netdev_notify_peers(struct net_device *dev)
 }
 EXPORT_SYMBOL(netdev_notify_peers);
 
+static int napi_threaded_poll(void *data);
+
+static int napi_kthread_create(struct napi_struct *n)
+{
+	int err = 0;
+
+	/* Create and wake up the kthread once to put it in
+	 * TASK_INTERRUPTIBLE mode to avoid the blocked task
+	 * warning and work with loadavg.
+	 */
+	n->thread = kthread_run(napi_threaded_poll, n, "napi/%s-%d",
+				n->dev->name, n->napi_id);
+	if (IS_ERR(n->thread)) {
+		err = PTR_ERR(n->thread);
+		pr_err("kthread_run failed with err %d\n", err);
+		n->thread = NULL;
+	}
+
+	return err;
+}
+
+static void napi_kthread_stop(struct napi_struct *n)
+{
+	if (!n->thread)
+		return;
+	kthread_stop(n->thread);
+	clear_bit(NAPI_STATE_THREADED, &n->state);
+	n->thread = NULL;
+}
+
 static int __dev_open(struct net_device *dev, struct netlink_ext_ack *extack)
 {
 	const struct net_device_ops *ops = dev->netdev_ops;
@@ -4234,6 +4265,11 @@ int gro_normal_batch __read_mostly = 8;
 static inline void ____napi_schedule(struct softnet_data *sd,
 				     struct napi_struct *napi)
 {
+	if (test_bit(NAPI_STATE_THREADED, &napi->state)) {
+		wake_up_process(napi->thread);
+		return;
+	}
+
 	list_add_tail(&napi->poll_list, &sd->poll_list);
 	__raise_softirq_irqoff(NET_RX_SOFTIRQ);
 }
@@ -6690,6 +6726,29 @@ static void init_gro_hash(struct napi_struct *napi)
 	napi->gro_bitmask = 0;
 }
 
+int napi_set_threaded(struct napi_struct *n, bool threaded)
+{
+	int err = 0;
+
+	ASSERT_RTNL();
+
+	if (threaded == !!test_bit(NAPI_STATE_THREADED, &n->state))
+		return 0;
+	if (threaded) {
+		if (!n->thread) {
+			err = napi_kthread_create(n);
+			if (err)
+				goto out;
+		}
+		set_bit(NAPI_STATE_THREADED, &n->state);
+	} else {
+		clear_bit(NAPI_STATE_THREADED, &n->state);
+	}
+
+out:
+	return err;
+}
+
 void netif_napi_add(struct net_device *dev, struct napi_struct *napi,
 		    int (*poll)(struct napi_struct *, int), int weight)
 {
@@ -6731,6 +6790,7 @@ void napi_disable(struct napi_struct *n)
 		msleep(1);
 
 	hrtimer_cancel(&n->timer);
 
+	napi_kthread_stop(n);
 	clear_bit(NAPI_STATE_PREFER_BUSY_POLL, &n->state);
 	clear_bit(NAPI_STATE_DISABLE, &n->state);
@@ -6859,6 +6919,51 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
 	return work;
 }
 
+static int napi_thread_wait(struct napi_struct *napi)
+{
+	set_current_state(TASK_INTERRUPTIBLE);
+
+	while (!kthread_should_stop() && !napi_disable_pending(napi)) {
+		if (test_bit(NAPI_STATE_SCHED, &napi->state)) {
+			WARN_ON(!list_empty(&napi->poll_list));
+			__set_current_state(TASK_RUNNING);
+			return 0;
+		}
+
+		schedule();
+		set_current_state(TASK_INTERRUPTIBLE);
+	}
+	__set_current_state(TASK_RUNNING);
+	return -1;
+}
+
+static int napi_threaded_poll(void *data)
+{
+	struct napi_struct *napi = data;
+	void *have;
+
+	while (!napi_thread_wait(napi)) {
+		for (;;) {
+			bool repoll = false;
+
+			local_bh_disable();
+
+			have = netpoll_poll_lock(napi);
+			__napi_poll(napi, &repoll);
+			netpoll_poll_unlock(have);
+
+			__kfree_skb_flush();
+			local_bh_enable();
+
+			if (!repoll)
+				break;
+
+			cond_resched();
+		}
+	}
+	return 0;
+}
+
 static __latent_entropy void net_rx_action(struct softirq_action *h)
 {
 	struct softnet_data *sd = this_cpu_ptr(&softnet_data);
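Usage note (illustrative, not part of the patch): a control path holding the
RTNL lock can flip every NAPI instance of a device into threaded mode with the
napi_set_threaded() helper added above. The function name below is
hypothetical; the next patch wires the same operation up to sysfs.

/* Illustrative only: enable threaded polling for all NAPI instances of a
 * device. napi_set_threaded() requires RTNL (ASSERT_RTNL() in the helper).
 */
static int example_enable_threaded_napi(struct net_device *dev)
{
	struct napi_struct *napi;
	int err = 0;

	rtnl_lock();
	list_for_each_entry(napi, &dev->napi_list, dev_list) {
		err = napi_set_threaded(napi, true);
		if (err)
			break;	/* caller decides how to unwind on failure */
	}
	rtnl_unlock();

	return err;
}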
From patchwork Wed Dec 9 00:54:44 2020
X-Patchwork-Submitter: Wei Wang
X-Patchwork-Id: 340893
Date: Tue, 8 Dec 2020 16:54:44 -0800
Message-Id: <20201209005444.1949356-4-weiwan@google.com>
In-Reply-To: <20201209005444.1949356-1-weiwan@google.com>
References: <20201209005444.1949356-1-weiwan@google.com>
Subject: [PATCH net-next v4 3/3] net: add sysfs attribute to control napi threaded mode
From: Wei Wang
To: David Miller, netdev@vger.kernel.org, Jakub Kicinski
Cc: Paolo Abeni, Hannes Frederic Sowa, Eric Dumazet, Felix Fietkau, Hillf Danton
X-Mailing-List: netdev@vger.kernel.org

This patch adds a new sysfs attribute to the network device class.
Said attribute provides a per-device control to enable/disable the
threaded mode for all the napi instances of the given network device.
Co-developed-by: Paolo Abeni
Signed-off-by: Paolo Abeni
Co-developed-by: Hannes Frederic Sowa
Signed-off-by: Hannes Frederic Sowa
Co-developed-by: Felix Fietkau
Signed-off-by: Felix Fietkau
Signed-off-by: Wei Wang
Reviewed-by: Eric Dumazet
---
 net/core/net-sysfs.c | 70 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index 94fff0700bdd..d14dc1da4c97 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -538,6 +538,75 @@ static ssize_t phys_switch_id_show(struct device *dev,
 }
 static DEVICE_ATTR_RO(phys_switch_id);
 
+static ssize_t threaded_show(struct device *dev,
+			     struct device_attribute *attr, char *buf)
+{
+	struct net_device *netdev = to_net_dev(dev);
+	struct napi_struct *n;
+	bool enabled;
+	int ret;
+
+	if (!rtnl_trylock())
+		return restart_syscall();
+
+	if (!dev_isalive(netdev)) {
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	if (list_empty(&netdev->napi_list)) {
+		ret = -EOPNOTSUPP;
+		goto unlock;
+	}
+
+	n = list_first_entry(&netdev->napi_list, struct napi_struct, dev_list);
+	enabled = !!test_bit(NAPI_STATE_THREADED, &n->state);
+
+	ret = sprintf(buf, fmt_dec, enabled);
+
+unlock:
+	rtnl_unlock();
+	return ret;
+}
+
+static void dev_disable_threaded_all(struct net_device *dev)
+{
+	struct napi_struct *napi;
+
+	list_for_each_entry(napi, &dev->napi_list, dev_list)
+		napi_set_threaded(napi, false);
+}
+
+static int modify_napi_threaded(struct net_device *dev, unsigned long val)
+{
+	struct napi_struct *napi;
+	int ret;
+
+	if (list_empty(&dev->napi_list))
+		return -EOPNOTSUPP;
+
+	list_for_each_entry(napi, &dev->napi_list, dev_list) {
+		ret = napi_set_threaded(napi, !!val);
+		if (ret) {
+			/* Error occurred on one of the napi,
+			 * reset threaded mode on all napi.
+			 */
+			dev_disable_threaded_all(dev);
+			break;
+		}
+	}
+
+	return ret;
+}
+
+static ssize_t threaded_store(struct device *dev,
+			      struct device_attribute *attr,
+			      const char *buf, size_t len)
+{
+	return netdev_store(dev, attr, buf, len, modify_napi_threaded);
+}
+static DEVICE_ATTR_RW(threaded);
+
 static struct attribute *net_class_attrs[] __ro_after_init = {
 	&dev_attr_netdev_group.attr,
 	&dev_attr_type.attr,
@@ -570,6 +639,7 @@ static struct attribute *net_class_attrs[] __ro_after_init = {
 	&dev_attr_proto_down.attr,
 	&dev_attr_carrier_up_count.attr,
 	&dev_attr_carrier_down_count.attr,
+	&dev_attr_threaded.attr,
 	NULL,
 };
 ATTRIBUTE_GROUPS(net_class);
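Usage note (illustrative, not part of the patch): once a device has registered
NAPI instances, threaded mode can be toggled from userspace through the new
"threaded" attribute. The interface name below is a placeholder; writing "0"
switches the device back to softirq polling.

/* Illustrative userspace snippet: enable threaded NAPI for a device via the
 * "threaded" sysfs attribute added above. "eth0" is a placeholder name.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/sys/class/net/eth0/threaded", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, "1", 1) != 1) {	/* write "0" to disable threaded mode */
		perror("write");
		close(fd);
		return 1;
	}
	close(fd);
	return 0;
}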