From patchwork Thu Oct 3 15:53:02 2019
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 175192
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Vincent Guittot,
    "Peter Zijlstra (Intel)", Linus Torvalds, Thomas Gleixner,
    Ingo Molnar, Sasha Levin
Subject: [PATCH 4.4 39/99] sched/fair: Fix imbalance due to CPU affinity
Date: Thu, 3 Oct 2019 17:53:02 +0200
Message-Id: <20191003154314.415611606@linuxfoundation.org>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20191003154252.297991283@linuxfoundation.org>
References: <20191003154252.297991283@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: linux-kernel@vger.kernel.org

From: Vincent Guittot

[ Upstream commit f6cad8df6b30a5d2bbbd2e698f74b4cafb9fb82b ]

load_balance() has a dedicated mechanism to detect when an imbalance
is due to CPU affinity and must be handled at the parent level. In this
case, the imbalance field of the parent's sched_group is set.

The description of sg_imbalanced() gives a typical example of two
groups of 4 CPUs each and 4 tasks each with a cpumask covering 1 CPU
of the first group and 3 CPUs of the second group. Something like:

    { 0 1 2 3 } { 4 5 6 7 }
            * * * *

But load_balance() fails to fix this use case on my octo-core system
made of 2 clusters of quad cores. While load_balance() is able to
detect that the imbalance is due to CPU affinity, it fails to fix it
because the imbalance field is cleared before the parent level gets a
chance to run. In fact, when the imbalance is detected, load_balance()
reruns without the CPU with pinned tasks. But there are no other
running tasks in the situation described above, and everything looks
balanced this time, so the imbalance field is immediately cleared.

The imbalance field should not be cleared if there is no other task to
move when the imbalance is detected.
Signed-off-by: Vincent Guittot
Signed-off-by: Peter Zijlstra (Intel)
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: https://lkml.kernel.org/r/1561996022-28829-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
---
 kernel/sched/fair.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--
2.20.1

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 19d735ab44db4..cd2fb8384fbe3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7313,9 +7313,10 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 out_balanced:
 	/*
 	 * We reach balance although we may have faced some affinity
-	 * constraints. Clear the imbalance flag if it was set.
+	 * constraints. Clear the imbalance flag only if other tasks got
+	 * a chance to move and fix the imbalance.
 	 */
-	if (sd_parent) {
+	if (sd_parent && !(env.flags & LBF_ALL_PINNED)) {
 		int *group_imbalance = &sd_parent->groups->sgc->imbalance;

 		if (*group_imbalance)