From patchwork Fri Jan 31 00:33:22 2014
X-Patchwork-Submitter: Zoran Markovic
X-Patchwork-Id: 23941
From: Zoran Markovic
To: linux-kernel@vger.kernel.org
Cc: netdev@vger.kernel.org, Shaibal Dutta, "David S. Miller", Jiri Pirko,
 YOSHIFUJI Hideaki, Eric Dumazet, Julian Anastasov, Flavio Leitner,
 Neil Horman, Patrick McHardy, John Fastabend, Amerigo Wang, Joe Perches,
 Jason Wang, Antonio Quartulli, Simon Horman, Nikolay Aleksandrov,
 Zoran Markovic
Subject: [RFC PATCH] net: core: move core networking work to power efficient workqueue
Date: Thu, 30 Jan 2014 16:33:22 -0800
Message-Id: <1391128402-10725-1-git-send-email-zoran.markovic@linaro.org>
X-Mailer: git-send-email 1.7.9.5

From: Shaibal Dutta

This patch moves the following work to the power efficient workqueue:
- Transmit work of netpoll
- Destination cache garbage collector work
- Link watch event handler work

In general, assignment of CPUs to pending work could be deferred to the
scheduler in order to extend idle residency time and improve power
efficiency. I would value the community's opinion on the migration of this
work to the power efficient workqueue, with an emphasis on the migration
of netpoll's transmit work.

This functionality is enabled when CONFIG_WQ_POWER_EFFICIENT is selected.

Cc: "David S. Miller"
Cc: Jiri Pirko
Cc: YOSHIFUJI Hideaki
Cc: Eric Dumazet
Cc: Julian Anastasov
Cc: Flavio Leitner
Cc: Neil Horman
Cc: Patrick McHardy
Cc: John Fastabend
Cc: Amerigo Wang
Cc: Joe Perches
Cc: Jason Wang
Cc: Antonio Quartulli
Cc: Simon Horman
Cc: Nikolay Aleksandrov
Signed-off-by: Shaibal Dutta
[zoran.markovic@linaro.org: Rebased to latest kernel version. Edited calls
to mod_delayed_work to reference power efficient workqueue. Added commit
message.]
Signed-off-by: Zoran Markovic
---
 net/core/dst.c        | 5 +++--
 net/core/link_watch.c | 5 +++--
 net/core/netpoll.c    | 6 ++++--
 3 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/net/core/dst.c b/net/core/dst.c
index ca4231e..cc28352 100644
--- a/net/core/dst.c
+++ b/net/core/dst.c
@@ -135,7 +135,8 @@ loop:
 		 */
 		if (expires > 4*HZ)
 			expires = round_jiffies_relative(expires);
-		schedule_delayed_work(&dst_gc_work, expires);
+		queue_delayed_work(system_power_efficient_wq,
+				   &dst_gc_work, expires);
 	}

 	spin_unlock_bh(&dst_garbage.lock);
@@ -223,7 +224,7 @@ void __dst_free(struct dst_entry *dst)
 	if (dst_garbage.timer_inc > DST_GC_INC) {
 		dst_garbage.timer_inc = DST_GC_INC;
 		dst_garbage.timer_expires = DST_GC_MIN;
-		mod_delayed_work(system_wq, &dst_gc_work,
+		mod_delayed_work(system_power_efficient_wq, &dst_gc_work,
 				 dst_garbage.timer_expires);
 	}
 	spin_unlock_bh(&dst_garbage.lock);
diff --git a/net/core/link_watch.c b/net/core/link_watch.c
index 9c3a839..0ae3994 100644
--- a/net/core/link_watch.c
+++ b/net/core/link_watch.c
@@ -135,9 +135,10 @@ static void linkwatch_schedule_work(int urgent)
 	 * override the existing timer.
 	 */
 	if (test_bit(LW_URGENT, &linkwatch_flags))
-		mod_delayed_work(system_wq, &linkwatch_work, 0);
+		mod_delayed_work(system_power_efficient_wq, &linkwatch_work, 0);
 	else
-		schedule_delayed_work(&linkwatch_work, delay);
+		queue_delayed_work(system_power_efficient_wq,
+				   &linkwatch_work, delay);
 }
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index c03f3de..2c8f839 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -101,7 +101,8 @@ static void queue_process(struct work_struct *work)
 			__netif_tx_unlock(txq);
 			local_irq_restore(flags);

-			schedule_delayed_work(&npinfo->tx_work, HZ/10);
+			queue_delayed_work(system_power_efficient_wq,
+					   &npinfo->tx_work, HZ/10);
 			return;
 		}
 		__netif_tx_unlock(txq);
@@ -423,7 +424,8 @@ void netpoll_send_skb_on_dev(struct netpoll *np, struct sk_buff *skb,

 	if (status != NETDEV_TX_OK) {
 		skb_queue_tail(&npinfo->txq, skb);
-		schedule_delayed_work(&npinfo->tx_work,0);
+		queue_delayed_work(system_power_efficient_wq,
+				   &npinfo->tx_work, 0);
 	}
 }
 EXPORT_SYMBOL(netpoll_send_skb_on_dev);
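The conversion pattern above generalizes to any kernel code that schedules
deferrable housekeeping work on the default system workqueue. A minimal
sketch of the pattern as a hypothetical module (`my_work`, `my_work_fn`,
and the 10-second re-arm interval are illustrative, not from this patch):

```c
#include <linux/module.h>
#include <linux/workqueue.h>

static void my_work_fn(struct work_struct *work);
static DECLARE_DELAYED_WORK(my_work, my_work_fn);

static void my_work_fn(struct work_struct *work)
{
	/* Periodic housekeeping; re-arm on the power efficient workqueue. */
	queue_delayed_work(system_power_efficient_wq, &my_work, 10 * HZ);
}

static int __init my_init(void)
{
	/*
	 * Before: schedule_delayed_work(&my_work, HZ), which queues on
	 * system_wq and runs the work on the submitting CPU. With
	 * CONFIG_WQ_POWER_EFFICIENT, system_power_efficient_wq is unbound,
	 * so the scheduler is free to place the work on an already-busy
	 * CPU and leave idle CPUs in their low-power state.
	 */
	queue_delayed_work(system_power_efficient_wq, &my_work, HZ);
	return 0;
}

static void __exit my_exit(void)
{
	cancel_delayed_work_sync(&my_work);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");
```

Where the original call was mod_delayed_work(system_wq, ...), the conversion
keeps mod_delayed_work and only swaps the first argument to
system_power_efficient_wq, as the patch does in net/core/dst.c and
net/core/link_watch.c. Without CONFIG_WQ_POWER_EFFICIENT,
system_power_efficient_wq falls back to normal per-CPU behavior, so the
change is a no-op on kernels that do not enable it.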