From patchwork Tue Oct 27 13:50:00 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 311918
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Wetp Zhang, Xunlei Pang,
    "Peter Zijlstra (Intel)", Jiang Biao, Vincent Guittot, Sasha Levin
Subject: [PATCH 5.4 062/408] sched/fair: Fix wrong cpu selecting from isolated domain
Date: Tue, 27 Oct 2020 14:50:00 +0100
Message-Id: <20201027135457.942357675@linuxfoundation.org>
X-Mailer: git-send-email 2.29.1
In-Reply-To: <20201027135455.027547757@linuxfoundation.org>
References: <20201027135455.027547757@linuxfoundation.org>
User-Agent: quilt/0.66
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Xunlei Pang

[ Upstream commit df3cb4ea1fb63ff326488efd671ba3c39034255e ]

In our production environment we have occasionally seen tasks with a full
cpumask (e.g. after being put into a cpuset or having their affinity set to
all CPUs) get migrated to our isolated CPUs. After some analysis, we found
that this happens because the current select_idle_smt() does not take the
sched_domain mask into account.

Steps to reproduce on my 31-CPU hyperthreaded machine:

1. Boot with the parameter "isolcpus=domain,2-31"
   (thread sibling lists: 0,16 and 1,17).
2. cgcreate -g cpu:test; cgexec -g cpu:test "test_threads"
3. Some of the threads get migrated to the isolated CPUs 16-17.
Fix it by checking the valid domain mask in select_idle_smt().

Fixes: 10e2f1acd010 ("sched/core: Rewrite and improve select_idle_siblings()")
Reported-by: Wetp Zhang
Signed-off-by: Xunlei Pang
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Jiang Biao
Reviewed-by: Vincent Guittot
Link: https://lkml.kernel.org/r/1600930127-76857-1-git-send-email-xlpang@linux.alibaba.com
Signed-off-by: Sasha Levin
---
 kernel/sched/fair.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b02a83ff40687..dddaf61378f62 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5936,7 +5936,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
 /*
  * Scan the local SMT mask for idle CPUs.
  */
-static int select_idle_smt(struct task_struct *p, int target)
+static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
 {
 	int cpu, si_cpu = -1;
 
@@ -5944,7 +5944,8 @@ static int select_idle_smt(struct task_struct *p, int target)
 		return -1;
 
 	for_each_cpu(cpu, cpu_smt_mask(target)) {
-		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+		if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
+		    !cpumask_test_cpu(cpu, sched_domain_span(sd)))
 			continue;
 		if (available_idle_cpu(cpu))
 			return cpu;
@@ -5962,7 +5963,7 @@ static inline int select_idle_core(struct task_struct *p, struct sched_domain *s
 	return -1;
 }
 
-static inline int select_idle_smt(struct task_struct *p, int target)
+static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
 {
 	return -1;
 }
 
@@ -6072,7 +6073,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
 
-	i = select_idle_smt(p, target);
+	i = select_idle_smt(p, sd, target);
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
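
For reference, a sketch of how select_idle_smt() reads in kernel/sched/fair.c
once the hunks above are applied to v5.4. The lines that fall outside the diff
context here (the sched_smt_present check and the si_cpu/sched_idle_cpu()
bookkeeping) are filled in from the surrounding v5.4 code as an assumption,
not taken from this patch:

/*
 * Scan the local SMT mask for idle CPUs.
 */
static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
{
	int cpu, si_cpu = -1;

	if (!static_branch_likely(&sched_smt_present))
		return -1;

	for_each_cpu(cpu, cpu_smt_mask(target)) {
		/*
		 * Skip CPUs the task may not run on and, new with this patch,
		 * CPUs outside the sched_domain span, so that isolated CPUs
		 * (isolcpus=domain,...) are never picked as the idle sibling.
		 */
		if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
		    !cpumask_test_cpu(cpu, sched_domain_span(sd)))
			continue;
		if (available_idle_cpu(cpu))
			return cpu;
		if (si_cpu == -1 && sched_idle_cpu(cpu))
			si_cpu = cpu;
	}

	return si_cpu;
}

The added sched_domain_span(sd) test mirrors what select_idle_core() and
select_idle_cpu() already do when walking the LLC domain, which is why only
select_idle_smt() needed to grow the sd argument.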