From patchwork Fri Jan 31 18:51:22 2014
X-Patchwork-Submitter: Zoran Markovic
X-Patchwork-Id: 23989
From: Zoran Markovic <zoran.markovic@linaro.org>
To: linux-kernel@vger.kernel.org
Cc: netdev@vger.kernel.org, Shaibal Dutta, "David S. Miller", Jiri Pirko,
	YOSHIFUJI Hideaki, Eric Dumazet, Julian Anastasov, Flavio Leitner,
	Neil Horman, Patrick McHardy, John Fastabend, Amerigo Wang,
	Joe Perches, Jason Wang, Antonio Quartulli, Simon Horman,
	Nikolay Aleksandrov, Zoran Markovic
Subject: [RFC PATCHv2] net: core: move core networking work to power efficient workqueue
Date: Fri, 31 Jan 2014 10:51:22 -0800
Message-Id: <1391194282-12265-1-git-send-email-zoran.markovic@linaro.org>
X-Mailer: git-send-email 1.7.9.5

From: Shaibal Dutta

This patch moves the following work to the power-efficient workqueue:
- transmit work of netpoll
- destination cache garbage collector work
- link watch event handler work

In general, the assignment of CPUs to pending work can be deferred to
the scheduler in order to extend idle residency time and improve power
efficiency. I would value the community's opinion on migrating this
work to the power-efficient workqueue, with an emphasis on the
migration of netpoll's transmit work. (A minimal usage sketch follows
the diff below.)

This functionality takes effect only when CONFIG_WQ_POWER_EFFICIENT is
selected; otherwise the power-efficient workqueue behaves like the
regular system workqueue.

Cc: "David S. Miller"
Cc: Jiri Pirko
Cc: YOSHIFUJI Hideaki
Cc: Eric Dumazet
Cc: Julian Anastasov
Cc: Flavio Leitner
Cc: Neil Horman
Cc: Patrick McHardy
Cc: John Fastabend
Cc: Amerigo Wang
Cc: Joe Perches
Cc: Jason Wang
Cc: Antonio Quartulli
Cc: Simon Horman
Cc: Nikolay Aleksandrov
Signed-off-by: Shaibal Dutta
[zoran.markovic@linaro.org: Rebased to latest kernel version. Edited
calls to mod_delayed_work to reference the power-efficient workqueue.
Added commit message. Fixed code alignment.]
Signed-off-by: Zoran Markovic
---
 net/core/dst.c        | 5 +++--
 net/core/link_watch.c | 5 +++--
 net/core/netpoll.c    | 6 ++++--
 3 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/net/core/dst.c b/net/core/dst.c
index ca4231e..57fba10 100644
--- a/net/core/dst.c
+++ b/net/core/dst.c
@@ -135,7 +135,8 @@ loop:
 		 */
 		if (expires > 4*HZ)
 			expires = round_jiffies_relative(expires);
-		schedule_delayed_work(&dst_gc_work, expires);
+		queue_delayed_work(system_power_efficient_wq,
+				   &dst_gc_work, expires);
 	}
 
 	spin_unlock_bh(&dst_garbage.lock);
@@ -223,7 +224,7 @@ void __dst_free(struct dst_entry *dst)
 		if (dst_garbage.timer_inc > DST_GC_INC) {
 			dst_garbage.timer_inc = DST_GC_INC;
 			dst_garbage.timer_expires = DST_GC_MIN;
-			mod_delayed_work(system_wq, &dst_gc_work,
+			mod_delayed_work(system_power_efficient_wq, &dst_gc_work,
 					 dst_garbage.timer_expires);
 		}
 	spin_unlock_bh(&dst_garbage.lock);
diff --git a/net/core/link_watch.c b/net/core/link_watch.c
index 9c3a839..6899935 100644
--- a/net/core/link_watch.c
+++ b/net/core/link_watch.c
@@ -135,9 +135,10 @@ static void linkwatch_schedule_work(int urgent)
 	 * override the existing timer.
 	 */
 	if (test_bit(LW_URGENT, &linkwatch_flags))
-		mod_delayed_work(system_wq, &linkwatch_work, 0);
+		mod_delayed_work(system_power_efficient_wq, &linkwatch_work, 0);
 	else
-		schedule_delayed_work(&linkwatch_work, delay);
+		queue_delayed_work(system_power_efficient_wq,
+				   &linkwatch_work, delay);
 }
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index c03f3de..6685938 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -101,7 +101,8 @@ static void queue_process(struct work_struct *work)
 			__netif_tx_unlock(txq);
 			local_irq_restore(flags);
 
-			schedule_delayed_work(&npinfo->tx_work, HZ/10);
+			queue_delayed_work(system_power_efficient_wq,
+					   &npinfo->tx_work, HZ/10);
 			return;
 		}
 		__netif_tx_unlock(txq);
@@ -423,7 +424,8 @@ void netpoll_send_skb_on_dev(struct netpoll *np, struct sk_buff *skb,
 
 	if (status != NETDEV_TX_OK) {
 		skb_queue_tail(&npinfo->txq, skb);
-		schedule_delayed_work(&npinfo->tx_work,0);
+		queue_delayed_work(system_power_efficient_wq,
+				   &npinfo->tx_work, 0);
 	}
 }
 EXPORT_SYMBOL(netpoll_send_skb_on_dev);
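
For reviewers who have not used the power-efficient workqueue before,
here is a minimal, self-contained sketch of the call-site change this
patch applies. It is illustrative only and not part of the patch; the
pe_wq_demo/demo_work names are made up. schedule_delayed_work() is
shorthand for queueing on system_wq, which runs the work on the CPU
that queued it; queueing on system_power_efficient_wq instead lets the
scheduler choose the CPU when power-efficient mode is enabled (via the
config option named above, or the workqueue.power_efficient boot
parameter), and degrades to ordinary system_wq behavior when it is not.

/* pe_wq_demo.c: illustrative only, not part of this patch. */
#include <linux/module.h>
#include <linux/smp.h>
#include <linux/workqueue.h>

static void demo_work_fn(struct work_struct *work)
{
	pr_info("demo work ran on CPU %d\n", raw_smp_processor_id());
}

static DECLARE_DELAYED_WORK(demo_work, demo_work_fn);

static int __init pe_wq_demo_init(void)
{
	/*
	 * Old pattern, as at the call sites this patch touches:
	 * schedule_delayed_work(&demo_work, HZ) is shorthand for
	 * queue_delayed_work(system_wq, &demo_work, HZ) and keeps the
	 * work on the queueing CPU.
	 *
	 * New pattern: with power-efficient mode enabled, this
	 * workqueue is unbound, so the scheduler may place the work on
	 * any suitable CPU; otherwise it behaves exactly like
	 * system_wq.
	 */
	queue_delayed_work(system_power_efficient_wq, &demo_work, HZ);
	return 0;
}

static void __exit pe_wq_demo_exit(void)
{
	cancel_delayed_work_sync(&demo_work);
}

module_init(pe_wq_demo_init);
module_exit(pe_wq_demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("power-efficient workqueue usage sketch");

Built as an out-of-tree module, the pr_info() line makes the placement
visible in dmesg: with power-efficient mode enabled on a mostly idle
system, the reported CPU tends to follow whichever core is already
awake rather than waking the queueing core.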