From patchwork Tue Apr 26 08:21:34 2022
X-Patchwork-Submitter: "gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>
X-Patchwork-Id: 566595
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>, stable@vger.kernel.org,
    kuyo chang <kuyo.chang@mediatek.com>, "Peter Zijlstra (Intel)",
    Vincent Guittot, Dietmar Eggemann, Sasha Levin
Subject: [PATCH 5.10 66/86] sched/pelt: Fix attach_entity_load_avg() corner case
Date: Tue, 26 Apr 2022 10:21:34 +0200
Message-Id: <20220426081743.113797427@linuxfoundation.org>
In-Reply-To: <20220426081741.202366502@linuxfoundation.org>
References: <20220426081741.202366502@linuxfoundation.org>
X-Mailing-List: stable@vger.kernel.org

From: kuyo chang <kuyo.chang@mediatek.com>

[ Upstream commit 40f5aa4c5eaebfeaca4566217cb9c468e28ed682 ]

The warning in cfs_rq_is_decayed() triggered:

  SCHED_WARN_ON(cfs_rq->avg.load_avg ||
                cfs_rq->avg.util_avg ||
                cfs_rq->avg.runnable_avg)

There exists a corner case in attach_entity_load_avg() which will
cause load_sum to be zero while load_avg will not be.

Consider se_weight is 88761, as per the sched_prio_to_weight[] table,
and further assume that get_pelt_divider() returns 47742; this gives
se->avg.load_avg = 1.

However, calculating load_sum:

  se->avg.load_sum = div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
  se->avg.load_sum = 1*47742/88761 = 0.

Then enqueue_load_avg() adds this to the cfs_rq totals:

  cfs_rq->avg.load_avg += se->avg.load_avg;
  cfs_rq->avg.load_sum += se_weight(se) * se->avg.load_sum;

Resulting in load_avg being 1 while load_sum is 0, which will trigger
the WARN.
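
To make the truncation concrete, here is a minimal standalone C sketch
(ordinary userspace code, not the kernel implementation) that plugs the
changelog's numbers into both the old computation and the fixed one shown
in the diff below; the variable names mirror the se->avg fields but are
otherwise local to this example:

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          uint64_t load_avg = 1;      /* se->avg.load_avg, per the changelog       */
          uint64_t divider  = 47742;  /* assumed get_pelt_divider() result         */
          uint64_t weight   = 88761;  /* sched_prio_to_weight[] entry for nice -20 */

          /* Old code: load_sum starts at divider, then is scaled by
           * load_avg/weight; the integer division 47742/88761 truncates to 0. */
          uint64_t old_sum = load_avg * divider / weight;

          /* Fixed code: divide the full product, clamping the result to 1
           * so load_sum can never be 0 while load_avg is nonzero. */
          uint64_t prod    = load_avg * divider;
          uint64_t new_sum = (weight < prod) ? prod / weight : 1;

          printf("old load_sum = %llu (while load_avg = %llu -> WARN)\n",
                 (unsigned long long)old_sum, (unsigned long long)load_avg);
          printf("new load_sum = %llu\n", (unsigned long long)new_sum);
          return 0;
  }

Compiled with any stock C compiler this prints old load_sum = 0 and new
load_sum = 1, reproducing exactly the load_avg = 1 / load_sum = 0 mismatch
that trips the SCHED_WARN_ON() above.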
Fixes: f207934fb79d ("sched/fair: Align PELT windows between cfs_rq and its se")
Signed-off-by: kuyo chang <kuyo.chang@mediatek.com>
[peterz: massage changelog]
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Vincent Guittot
Tested-by: Dietmar Eggemann
Link: https://lkml.kernel.org/r/20220414090229.342-1-kuyo.chang@mediatek.com
Signed-off-by: Sasha Levin
---
 kernel/sched/fair.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index acd9833b8ec2..1a306ef51bbe 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3748,11 +3748,11 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 
 	se->avg.runnable_sum = se->avg.runnable_avg * divider;
 
-	se->avg.load_sum = divider;
-	if (se_weight(se)) {
-		se->avg.load_sum =
-			div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
-	}
+	se->avg.load_sum = se->avg.load_avg * divider;
+	if (se_weight(se) < se->avg.load_sum)
+		se->avg.load_sum = div_u64(se->avg.load_sum, se_weight(se));
+	else
+		se->avg.load_sum = 1;
 
 	enqueue_load_avg(cfs_rq, se);
 	cfs_rq->avg.util_avg += se->avg.util_avg;
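
As a quick sanity check on the new branch, the following sketch lifts the
fixed computation into a pure function (div_u64() is modeled here as plain
64-bit division; everything else is local to the example, not kernel API)
and evaluates it for two representative weights. With a nonzero load_avg
the result is always at least 1, so cfs_rq_is_decayed() can no longer see
load_avg set while load_sum is zero via this path:

  #include <stdio.h>
  #include <stdint.h>

  /* Userspace stand-in for the kernel's div_u64(). */
  static uint64_t div_u64(uint64_t dividend, uint64_t divisor)
  {
          return dividend / divisor;
  }

  /* The fixed logic from the hunk above, as a pure function. */
  static uint64_t fixed_load_sum(uint64_t load_avg, uint64_t weight,
                                 uint64_t divider)
  {
          uint64_t load_sum = load_avg * divider;

          if (weight < load_sum)
                  return div_u64(load_sum, weight);
          return 1;  /* clamp: keep load_sum nonzero while load_avg is */
  }

  int main(void)
  {
          /* 88761 ~ nice -20, 1024 ~ nice 0; divider as in the changelog. */
          const uint64_t weights[] = { 88761, 1024 };
          const uint64_t divider = 47742;

          for (unsigned int i = 0; i < 2; i++)
                  printf("weight=%llu -> load_sum=%llu\n",
                         (unsigned long long)weights[i],
                         (unsigned long long)fixed_load_sum(1, weights[i],
                                                            divider));
          return 0;
  }

For weight 88761 this yields the clamped value 1 (the corner case from the
changelog); for weight 1024 it yields 46, the ordinary divided result.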