From patchwork Mon Jul 28 17:51:35 2014
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 34400
From: Vincent Guittot <vincent.guittot@linaro.org>
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
	preeti@linux.vnet.ibm.com, linux@arm.linux.org.uk,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 01/12] sched: fix imbalance flag reset
Date: Mon, 28 Jul 2014 19:51:35 +0200
Message-Id: <1406569906-9763-2-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1406569906-9763-1-git-send-email-vincent.guittot@linaro.org>
References: <1406569906-9763-1-git-send-email-vincent.guittot@linaro.org>
Cc: nicolas.pitre@linaro.org, riel@redhat.com, daniel.lezcano@linaro.org,
	Vincent Guittot <vincent.guittot@linaro.org>, efault@gmx.de,
	dietmar.eggemann@arm.com, linaro-kernel@lists.linaro.org,
	Morten.Rasmussen@arm.com

The imbalance flag can stay set even though there is no imbalance.

Let's assume that we have 3 tasks that run on a dual-core / dual-cluster system. We will have some idle load balances which are triggered during the tick. Unfortunately, the tick is also used to queue background work, so we can reach a situation where a short work item has been queued on a CPU which already runs a task. The load balance will detect this imbalance (2 tasks on 1 CPU and an idle CPU) and will try to pull the waiting task onto the idle CPU. The waiting task is a worker thread that is pinned to a CPU, so an imbalance due to a pinned task is detected and the imbalance flag is set. Then, we will not be able to clear the flag because we have at most 1 task on each CPU, but the imbalance flag will trigger useless active load balancing between the idle CPU and the busy CPU.

We need to reset the imbalance flag as soon as we have reached a balanced state. If all tasks are pinned, we don't consider that a balanced state and leave the imbalance flag set.

Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 923fe32..7eb9126 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6672,10 +6672,8 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 		if (sd_parent) {
 			int *group_imbalance = &sd_parent->groups->sgc->imbalance;
 
-			if ((env.flags & LBF_SOME_PINNED) && env.imbalance > 0) {
+			if ((env.flags & LBF_SOME_PINNED) && env.imbalance > 0)
 				*group_imbalance = 1;
-			} else if (*group_imbalance)
-				*group_imbalance = 0;
 		}
 
 		/* All tasks on this runqueue were pinned by CPU affinity */
@@ -6686,7 +6684,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 				env.loop_break = sched_nr_migrate_break;
 				goto redo;
 			}
-			goto out_balanced;
+			goto out_all_pinned;
 		}
 	}
 
@@ -6760,6 +6758,23 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 	goto out;
 
 out_balanced:
+	/*
+	 * We reach balance although we may have faced some affinity
+	 * constraints. Clear the imbalance flag if it was set.
+	 */
+	if (sd_parent) {
+		int *group_imbalance = &sd_parent->groups->sgc->imbalance;
+
+		if (*group_imbalance)
+			*group_imbalance = 0;
+	}
+
+out_all_pinned:
+	/*
+	 * We reach balance because all tasks are pinned at this level so
+	 * we can't migrate them. Let the imbalance flag set so parent level
+	 * can try to migrate them.
+	 */
 	schedstat_inc(sd, lb_balanced[idle]);
 
 	sd->nr_balance_failed = 0;
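
For readers who want to see the effect of the two exit paths in isolation, here is a minimal user-space sketch of the rule the patch establishes: a genuinely balanced exit clears the parent group's imbalance flag, while an all-pinned exit leaves it set so the parent domain can still try to migrate the pinned tasks. This is not the kernel implementation; struct lb_exit_state, balance_exit() and their fields are made-up names used only for illustration.

/*
 * Minimal, self-contained sketch of the out_balanced vs. out_all_pinned
 * behaviour introduced by the patch. Names are illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

struct lb_exit_state {
	bool all_tasks_pinned;	/* every candidate task was skipped (cf. LBF_ALL_PINNED) */
	int *group_imbalance;	/* parent group's imbalance flag */
};

static void balance_exit(struct lb_exit_state *st)
{
	if (st->all_tasks_pinned) {
		/*
		 * out_all_pinned: nothing could be migrated at this level,
		 * so leave the flag set and let the parent level retry.
		 */
		return;
	}

	/*
	 * out_balanced: a real balanced state was reached, possibly despite
	 * affinity constraints, so clear any stale imbalance flag.
	 */
	if (st->group_imbalance && *st->group_imbalance)
		*st->group_imbalance = 0;
}

int main(void)
{
	int flag = 1;
	struct lb_exit_state balanced = { false, &flag };
	struct lb_exit_state pinned = { true, &flag };

	balance_exit(&balanced);
	printf("out_balanced path:   flag=%d (cleared)\n", flag);

	flag = 1;
	balance_exit(&pinned);
	printf("out_all_pinned path: flag=%d (left set)\n", flag);
	return 0;
}

Built stand-alone, this prints flag=0 for the balanced exit and flag=1 for the all-pinned exit, which is the asymmetry the two new labels encode: only a genuinely balanced state may clear the parent group's imbalance flag.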