From patchwork Tue Dec 1 11:54:11 2020
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 335504
From: Yafang Shao
To: mgorman@suse.de, mingo@redhat.com, peterz@infradead.org,
    juri.lelli@redhat.com, vincent.guittot@linaro.org,
    dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
    bristot@redhat.com, qianjun.kernel@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org,
    Yafang Shao
Subject: [PATCH 1/6] sched: don't include stats.h in sched.h
Date: Tue, 1 Dec 2020 19:54:11 +0800
Message-Id: <20201201115416.26515-2-laoar.shao@gmail.com>
In-Reply-To: <20201201115416.26515-1-laoar.shao@gmail.com>
References: <20201201115416.26515-1-laoar.shao@gmail.com>
X-Mailing-List: linux-rt-users@vger.kernel.org

This patch prepares for the follow-up patches, which will define some
common helpers in stats.h. Those helpers require definitions from
sched.h, so stats.h can no longer be pulled in by sched.h itself;
instead, every source file that needs stats.h now includes it
explicitly.
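To see why the explicit includes work where the old nesting would not,
here is a minimal sketch (not part of the patch) of the resulting
include structure; the header guard added to sched.h below is what
keeps the mutual includes from recursing:

  /* kernel/sched/sched.h -- now self-guarded */
  #ifndef _KERNEL_SCHED_SCHED_H
  #define _KERNEL_SCHED_SCHED_H
  /* ... scheduler internal types and methods ... */
  #endif /* _KERNEL_SCHED_SCHED_H */

  /* kernel/sched/stats.h -- may now include sched.h for the
   * definitions its helpers need; the guard stops the recursion. */
  #ifdef CONFIG_SCHEDSTATS
  #include "sched.h"
  /* ... schedstats helpers ... */
  #endif

  /* kernel/sched/fair.c and the other source files include both: */
  #include "sched.h"
  #include "stats.h"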
Signed-off-by: Yafang Shao
---
 kernel/sched/core.c      | 1 +
 kernel/sched/deadline.c  | 1 +
 kernel/sched/debug.c     | 1 +
 kernel/sched/fair.c      | 1 +
 kernel/sched/idle.c      | 1 +
 kernel/sched/rt.c        | 2 +-
 kernel/sched/sched.h     | 6 +++++-
 kernel/sched/stats.c     | 1 +
 kernel/sched/stats.h     | 2 ++
 kernel/sched/stop_task.c | 1 +
 10 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d2003a7d5ab5..fd76628778f7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -11,6 +11,7 @@
 #undef CREATE_TRACE_POINTS
 
 #include "sched.h"
+#include "stats.h"
 
 #include
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index f232305dcefe..7a0124f81a4f 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -16,6 +16,7 @@
  * Fabio Checconi
  */
 #include "sched.h"
+#include "stats.h"
 #include "pelt.h"
 
 struct dl_bandwidth def_dl_bandwidth;
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 2357921580f9..9758aa1bba1e 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -7,6 +7,7 @@
  * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
  */
 #include "sched.h"
+#include "stats.h"
 
 static DEFINE_SPINLOCK(sched_debug_lock);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8917d2d715ef..8ff1daa3d9bb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -21,6 +21,7 @@
  * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra
  */
 #include "sched.h"
+#include "stats.h"
 
 /*
  * Targeted preemption latency for CPU-bound tasks:
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 24d0ee26377d..95c02cbca04a 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -7,6 +7,7 @@
  * tasks which are handled in sched/fair.c )
  */
 #include "sched.h"
+#include "stats.h"
 
 #include
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 49ec096a8aa1..af772ac0f32d 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -4,7 +4,7 @@
  * policies)
  */
 #include "sched.h"
-
+#include "stats.h"
 #include "pelt.h"
 
 int sched_rr_timeslice = RR_TIMESLICE;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index df80bfcea92e..871544bb9a38 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2,6 +2,9 @@
 /*
  * Scheduler internal types and methods:
  */
+#ifndef _KERNEL_SCHED_SCHED_H
+#define _KERNEL_SCHED_SCHED_H
+
 #include
 #include
@@ -1538,7 +1541,6 @@ extern void flush_smp_call_function_from_idle(void);
 static inline void flush_smp_call_function_from_idle(void) { }
 #endif
 
-#include "stats.h"
 #include "autogroup.h"
 
 #ifdef CONFIG_CGROUP_SCHED
@@ -2633,3 +2635,5 @@ static inline bool is_per_cpu_kthread(struct task_struct *p)
 
 void swake_up_all_locked(struct swait_queue_head *q);
 void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
+
+#endif /* _KERNEL_SCHED_SCHED_H */
diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
index 750fb3c67eed..844bd9dbfbf0 100644
--- a/kernel/sched/stats.c
+++ b/kernel/sched/stats.c
@@ -3,6 +3,7 @@
  * /proc/schedstat implementation
  */
 #include "sched.h"
+#include "stats.h"
 
 /*
  * Current schedstat API version.
diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
index 33d0daf83842..c23b653ffc53 100644
--- a/kernel/sched/stats.h
+++ b/kernel/sched/stats.h
@@ -2,6 +2,8 @@
 
 #ifdef CONFIG_SCHEDSTATS
 
+#include "sched.h"
+
 /*
  * Expects runqueue lock to be held for atomicity of update
  */
diff --git a/kernel/sched/stop_task.c b/kernel/sched/stop_task.c
index ceb5b6b12561..a5d289049388 100644
--- a/kernel/sched/stop_task.c
+++ b/kernel/sched/stop_task.c
@@ -8,6 +8,7 @@
  * See kernel/stop_machine.c
  */
 #include "sched.h"
+#include "stats.h"
 
 #ifdef CONFIG_SMP
 
 static int

From patchwork Tue Dec 1 11:54:14 2020
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 335503
From: Yafang Shao
To: mgorman@suse.de, mingo@redhat.com, peterz@infradead.org,
    juri.lelli@redhat.com, vincent.guittot@linaro.org,
    dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
    bristot@redhat.com, qianjun.kernel@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org,
    Yafang Shao
Subject: [PATCH 4/6] sched: make schedstats helpers independent of fair sched class
Date: Tue, 1 Dec 2020 19:54:14 +0800
Message-Id: <20201201115416.26515-5-laoar.shao@gmail.com>
In-Reply-To: <20201201115416.26515-1-laoar.shao@gmail.com>
References: <20201201115416.26515-1-laoar.shao@gmail.com>
X-Mailing-List: linux-rt-users@vger.kernel.org

The original prototype of the schedstats helpers is

  update_stats_wait_*(struct cfs_rq *cfs_rq, struct sched_entity *se)

The cfs_rq in these helpers is only used to get the rq_clock, and the
se is used to get the struct sched_statistics and the struct
task_struct. To make these helpers available to all sched classes, we
can pass the rq, the sched_statistics and the task_struct directly.
The new helpers are

  update_stats_wait_*(struct rq *rq, struct task_struct *p,
                      struct sched_statistics *stats)

which are independent of the fair sched class.

To avoid vmlinux growing too large and to avoid introducing overhead
when !schedstat_enabled(), new helpers that run only after the
schedstat_enabled() check are also introduced, as suggested by Mel.
These helpers live in kernel/sched/stats.c:

  __update_stats_wait_*(struct rq *rq, struct task_struct *p,
                        struct sched_statistics *stats)
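Because the __update_stats_*() variants need only an rq (for
rq_clock()), an optional task, and its statistics, any sched class can
reuse them. As a sketch, a caller outside the fair class might look
like this (the wrapper name is illustrative, not part of the patch):

  /* Hypothetical wrapper in some other sched class: the cheap
   * schedstat_enabled() static-key test keeps the disabled case free,
   * and the shared helper in stats.c does the actual accounting. */
  static inline void
  update_stats_wait_start_myclass(struct rq *rq, struct task_struct *p)
  {
          if (!schedstat_enabled())
                  return;

          __update_stats_wait_start(rq, p, &p->stats);
  }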
Cc: Mel Gorman
Signed-off-by: Yafang Shao
---
 kernel/sched/fair.c  | 140 +++++++------------------------------------
 kernel/sched/stats.c | 104 ++++++++++++++++++++++++++++++++
 kernel/sched/stats.h |  32 ++++++++++
 3 files changed, 157 insertions(+), 119 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 14d8df308d44..b869a83fac29 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -917,69 +917,44 @@ static void update_curr_fair(struct rq *rq)
 }
 
 static inline void
-update_stats_wait_start(struct cfs_rq *cfs_rq, struct sched_entity *se)
+update_stats_wait_start_fair(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
        struct sched_statistics *stats = NULL;
-       u64 wait_start, prev_wait_start;
+       struct task_struct *p = NULL;
 
        if (!schedstat_enabled())
                return;
 
-       __schedstat_from_sched_entity(se, &stats);
-
-       wait_start = rq_clock(rq_of(cfs_rq));
-       prev_wait_start = schedstat_val(stats->wait_start);
+       if (entity_is_task(se))
+               p = task_of(se);
 
-       if (entity_is_task(se) && task_on_rq_migrating(task_of(se)) &&
-           likely(wait_start > prev_wait_start))
-               wait_start -= prev_wait_start;
+       __schedstat_from_sched_entity(se, &stats);
 
-       __schedstat_set(stats->wait_start, wait_start);
+       __update_stats_wait_start(rq_of(cfs_rq), p, stats);
 }
 
 static inline void
-update_stats_wait_end(struct cfs_rq *cfs_rq, struct sched_entity *se)
+update_stats_wait_end_fair(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
        struct sched_statistics *stats = NULL;
        struct task_struct *p = NULL;
-       u64 delta;
 
        if (!schedstat_enabled())
                return;
 
-       __schedstat_from_sched_entity(se, &stats);
-
-       delta = rq_clock(rq_of(cfs_rq)) - schedstat_val(stats->wait_start);
-
-       if (entity_is_task(se)) {
+       if (entity_is_task(se))
                p = task_of(se);
 
-               if (task_on_rq_migrating(p)) {
-                       /*
-                        * Preserve migrating task's wait time so wait_start
-                        * time stamp can be adjusted to accumulate wait time
-                        * prior to migration.
-                        */
-                       __schedstat_set(stats->wait_start, delta);
-
-                       return;
-               }
-
-               trace_sched_stat_wait(p, delta);
-       }
+       __schedstat_from_sched_entity(se, &stats);
 
-       __schedstat_set(stats->wait_max,
-                       max(schedstat_val(stats->wait_max), delta));
-       __schedstat_inc(stats->wait_count);
-       __schedstat_add(stats->wait_sum, delta);
-       __schedstat_set(stats->wait_start, 0);
+       __update_stats_wait_end(rq_of(cfs_rq), p, stats);
 }
 
 static inline void
-update_stats_enqueue_sleeper(struct cfs_rq *cfs_rq, struct sched_entity *se)
+update_stats_enqueue_sleeper_fair(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
        struct sched_statistics *stats = NULL;
        struct task_struct *p = NULL;
-       u64 sleep_start, block_start;
 
        if (!schedstat_enabled())
                return;
@@ -989,67 +964,14 @@ update_stats_enqueue_sleeper(struct cfs_rq *cfs_rq, struct sched_entity *se)
 
        __schedstat_from_sched_entity(se, &stats);
 
-       sleep_start = schedstat_val(stats->sleep_start);
-       block_start = schedstat_val(stats->block_start);
-
-       if (sleep_start) {
-               u64 delta = rq_clock(rq_of(cfs_rq)) - sleep_start;
-
-               if ((s64)delta < 0)
-                       delta = 0;
-
-               if (unlikely(delta > schedstat_val(stats->sleep_max)))
-                       __schedstat_set(stats->sleep_max, delta);
-
-               __schedstat_set(stats->sleep_start, 0);
-               __schedstat_add(stats->sum_sleep_runtime, delta);
-
-               if (p) {
-                       account_scheduler_latency(p, delta >> 10, 1);
-                       trace_sched_stat_sleep(p, delta);
-               }
-       }
-       if (block_start) {
-               u64 delta = rq_clock(rq_of(cfs_rq)) - block_start;
-
-               if ((s64)delta < 0)
-                       delta = 0;
-
-               if (unlikely(delta > schedstat_val(stats->block_max)))
-                       __schedstat_set(stats->block_max, delta);
-
-               __schedstat_set(stats->block_start, 0);
-               __schedstat_add(stats->sum_sleep_runtime, delta);
-
-               if (p) {
-                       if (p->in_iowait) {
-                               __schedstat_add(stats->iowait_sum, delta);
-                               __schedstat_inc(stats->iowait_count);
-                               trace_sched_stat_iowait(p, delta);
-                       }
-
-                       trace_sched_stat_blocked(p, delta);
-
-                       /*
-                        * Blocking time is in units of nanosecs, so shift by
-                        * 20 to get a milliseconds-range estimation of the
-                        * amount of time that the task spent sleeping:
-                        */
-                       if (unlikely(prof_on == SLEEP_PROFILING)) {
-                               profile_hits(SLEEP_PROFILING,
-                                            (void *)get_wchan(p),
-                                            delta >> 20);
-                       }
-                       account_scheduler_latency(p, delta >> 10, 0);
-               }
-       }
+       __update_stats_enqueue_sleeper(rq_of(cfs_rq), p, stats);
 }
 
 /*
  * Task is being enqueued - update stats:
  */
 static inline void
-update_stats_enqueue(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+update_stats_enqueue_fair(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
        if (!schedstat_enabled())
                return;
@@ -1059,14 +981,14 @@ update_stats_enqueue(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
         * a dequeue/enqueue event is a NOP)
         */
        if (se != cfs_rq->curr)
-               update_stats_wait_start(cfs_rq, se);
+               update_stats_wait_start_fair(cfs_rq, se);
 
        if (flags & ENQUEUE_WAKEUP)
-               update_stats_enqueue_sleeper(cfs_rq, se);
+               update_stats_enqueue_sleeper_fair(cfs_rq, se);
 }
 
 static inline void
-update_stats_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+update_stats_dequeue_fair(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
        if (!schedstat_enabled())
@@ -1077,7 +999,7 @@ update_stats_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
         * waiting task:
         */
        if (se != cfs_rq->curr)
-               update_stats_wait_end(cfs_rq, se);
+               update_stats_wait_end_fair(cfs_rq, se);
 
        if ((flags & DEQUEUE_SLEEP) && entity_is_task(se)) {
                struct task_struct *tsk = task_of(se);
@@ -4186,26 +4108,6 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 
 static void check_enqueue_throttle(struct cfs_rq *cfs_rq);
 
-static inline void check_schedstat_required(void)
-{
-#ifdef CONFIG_SCHEDSTATS
-       if (schedstat_enabled())
-               return;
-
-       /* Force schedstat enabled if a dependent tracepoint is active */
-       if (trace_sched_stat_wait_enabled() ||
-           trace_sched_stat_sleep_enabled() ||
-           trace_sched_stat_iowait_enabled() ||
-           trace_sched_stat_blocked_enabled() ||
-           trace_sched_stat_runtime_enabled()) {
-               printk_deferred_once("Scheduler tracepoints stat_sleep, stat_iowait, "
-                            "stat_blocked and stat_runtime require the "
-                            "kernel parameter schedstats=enable or "
-                            "kernel.sched_schedstats=1\n");
-       }
-#endif
-}
-
 static inline bool cfs_bandwidth_used(void);
 
 /*
@@ -4279,7 +4181,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
                place_entity(cfs_rq, se, 0);
 
        check_schedstat_required();
-       update_stats_enqueue(cfs_rq, se, flags);
+       update_stats_enqueue_fair(cfs_rq, se, flags);
        check_spread(cfs_rq, se);
        if (!curr)
                __enqueue_entity(cfs_rq, se);
@@ -4363,7 +4265,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
        update_load_avg(cfs_rq, se, UPDATE_TG);
        se_update_runnable(se);
 
-       update_stats_dequeue(cfs_rq, se, flags);
+       update_stats_dequeue_fair(cfs_rq, se, flags);
 
        clear_buddies(cfs_rq, se);
@@ -4448,7 +4350,7 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
                 * a CPU. So account for the time it spent waiting on the
                 * runqueue.
                 */
-               update_stats_wait_end(cfs_rq, se);
+               update_stats_wait_end_fair(cfs_rq, se);
                __dequeue_entity(cfs_rq, se);
                update_load_avg(cfs_rq, se, UPDATE_TG);
        }
@@ -4550,7 +4452,7 @@ static void put_prev_entity(struct cfs_rq *cfs_rq, struct sched_entity *prev)
        check_spread(cfs_rq, prev);
 
        if (prev->on_rq) {
-               update_stats_wait_start(cfs_rq, prev);
+               update_stats_wait_start_fair(cfs_rq, prev);
                /* Put 'current' back into the tree. */
                __enqueue_entity(cfs_rq, prev);
                /* in !on_rq case, update occurred at dequeue */
diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
index 844bd9dbfbf0..1a9614c69669 100644
--- a/kernel/sched/stats.c
+++ b/kernel/sched/stats.c
@@ -5,6 +5,110 @@
 #include "sched.h"
 #include "stats.h"
 
+void __update_stats_wait_start(struct rq *rq, struct task_struct *p,
+                              struct sched_statistics *stats)
+{
+       u64 wait_start, prev_wait_start;
+
+       wait_start = rq_clock(rq);
+       prev_wait_start = schedstat_val(stats->wait_start);
+
+       if (p && likely(wait_start > prev_wait_start))
+               wait_start -= prev_wait_start;
+
+       __schedstat_set(stats->wait_start, wait_start);
+}
+
+void __update_stats_wait_end(struct rq *rq, struct task_struct *p,
+                            struct sched_statistics *stats)
+{
+       u64 delta;
+
+       delta = rq_clock(rq) - schedstat_val(stats->wait_start);
+
+       if (p) {
+               if (task_on_rq_migrating(p)) {
+                       /*
+                        * Preserve migrating task's wait time so wait_start
+                        * time stamp can be adjusted to accumulate wait time
+                        * prior to migration.
+                        */
+                       __schedstat_set(stats->wait_start, delta);
+
+                       return;
+               }
+
+               trace_sched_stat_wait(p, delta);
+       }
+
+       __schedstat_set(stats->wait_max,
+                       max(schedstat_val(stats->wait_max), delta));
+       __schedstat_inc(stats->wait_count);
+       __schedstat_add(stats->wait_sum, delta);
+       __schedstat_set(stats->wait_start, 0);
+}
+
+void __update_stats_enqueue_sleeper(struct rq *rq, struct task_struct *p,
+                                   struct sched_statistics *stats)
+{
+       u64 sleep_start, block_start;
+
+       sleep_start = schedstat_val(stats->sleep_start);
+       block_start = schedstat_val(stats->block_start);
+
+       if (sleep_start) {
+               u64 delta = rq_clock(rq) - sleep_start;
+
+               if ((s64)delta < 0)
+                       delta = 0;
+
+               if (unlikely(delta > schedstat_val(stats->sleep_max)))
+                       __schedstat_set(stats->sleep_max, delta);
+
+               __schedstat_set(stats->sleep_start, 0);
+               __schedstat_add(stats->sum_sleep_runtime, delta);
+
+               if (p) {
+                       account_scheduler_latency(p, delta >> 10, 1);
+                       trace_sched_stat_sleep(p, delta);
+               }
+       }
+       if (block_start) {
+               u64 delta = rq_clock(rq) - block_start;
+
+               if ((s64)delta < 0)
+                       delta = 0;
+
+               if (unlikely(delta > schedstat_val(stats->block_max)))
+                       __schedstat_set(stats->block_max, delta);
+
+               __schedstat_set(stats->block_start, 0);
+               __schedstat_add(stats->sum_sleep_runtime, delta);
+
+               if (p) {
+                       if (p->in_iowait) {
+                               __schedstat_add(stats->iowait_sum, delta);
+                               __schedstat_inc(stats->iowait_count);
+                               trace_sched_stat_iowait(p, delta);
+                       }
+
+                       trace_sched_stat_blocked(p, delta);
+
+                       /*
+                        * Blocking time is in units of nanosecs, so shift by
+                        * 20 to get a milliseconds-range estimation of the
+                        * amount of time that the task spent sleeping:
+                        */
+                       if (unlikely(prof_on == SLEEP_PROFILING)) {
+                               profile_hits(SLEEP_PROFILING,
+                                            (void *)get_wchan(p),
+                                            delta >> 20);
+                       }
+                       account_scheduler_latency(p, delta >> 10, 0);
+               }
+       }
+}
+
 /*
  * Current schedstat API version.
  *
diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
index 87242968712e..b8e3d4ee21e1 100644
--- a/kernel/sched/stats.h
+++ b/kernel/sched/stats.h
@@ -78,6 +78,33 @@ static inline int alloc_tg_schedstats(struct task_group *tg)
        return 1;
 }
 
+void __update_stats_wait_start(struct rq *rq, struct task_struct *p,
+                              struct sched_statistics *stats);
+
+void __update_stats_wait_end(struct rq *rq, struct task_struct *p,
+                            struct sched_statistics *stats);
+void __update_stats_enqueue_sleeper(struct rq *rq, struct task_struct *p,
+                                   struct sched_statistics *stats);
+
+static inline void
+check_schedstat_required(void)
+{
+       if (schedstat_enabled())
+               return;
+
+       /* Force schedstat enabled if a dependent tracepoint is active */
+       if (trace_sched_stat_wait_enabled() ||
+           trace_sched_stat_sleep_enabled() ||
+           trace_sched_stat_iowait_enabled() ||
+           trace_sched_stat_blocked_enabled() ||
+           trace_sched_stat_runtime_enabled()) {
+               printk_deferred_once("Scheduler tracepoints stat_sleep, stat_iowait, "
+                                    "stat_blocked and stat_runtime require the "
+                                    "kernel parameter schedstats=enable or "
+                                    "kernel.sched_schedstats=1\n");
+       }
+}
+
 #else /* !CONFIG_SCHEDSTATS: */
 static inline void rq_sched_info_arrive  (struct rq *rq, unsigned long long delta) { }
 static inline void rq_sched_info_dequeued(struct rq *rq, unsigned long long delta) { }
@@ -101,6 +128,11 @@ static inline int alloc_tg_schedstats(struct task_group *tg)
        return 1;
 }
 
+# define __update_stats_wait_start(rq, p, stats)      do { } while (0)
+# define __update_stats_wait_end(rq, p, stats)        do { } while (0)
+# define __update_stats_enqueue_sleeper(rq, p, stats) do { } while (0)
+# define check_schedstat_required()                   do { } while (0)
+
 #endif /* CONFIG_SCHEDSTATS */
 
 #ifdef CONFIG_PSI

From patchwork Tue Dec 1 11:54:16 2020
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 335502
From: Yafang Shao
To: mgorman@suse.de, mingo@redhat.com, peterz@infradead.org,
    juri.lelli@redhat.com, vincent.guittot@linaro.org,
    dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
    bristot@redhat.com, qianjun.kernel@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org,
    Yafang Shao
Subject: [PATCH 6/6] sched, rt: support schedstats for RT sched class
Date: Tue, 1 Dec 2020 19:54:16 +0800
Message-Id: <20201201115416.26515-7-laoar.shao@gmail.com>
In-Reply-To: <20201201115416.26515-1-laoar.shao@gmail.com>
References: <20201201115416.26515-1-laoar.shao@gmail.com>
X-Mailing-List: linux-rt-users@vger.kernel.org

We want to measure the latency of RT tasks in our production
environment with the schedstats facility, but currently schedstats is
only supported for the fair sched class. This patch enables it for the
RT sched class as well.

Now that struct sched_statistics and its helpers are independent of
the fair sched class, the schedstats facility can easily be used by
the RT sched class too. The schedstats usage in the RT sched class
mirrors that in the fair sched class, for example:

                  fair                         RT
  enqueue         update_stats_enqueue_fair    update_stats_enqueue_rt
  dequeue         update_stats_dequeue_fair    update_stats_dequeue_rt
  put_prev_task   update_stats_wait_start      update_stats_wait_start
  set_next_task   update_stats_wait_end        update_stats_wait_end

The user can get the schedstats information in the same way as for the
fair sched class:

                  fair                         RT
  task show       /proc/[pid]/sched            /proc/[pid]/sched
  group show      cpu.stat in cgroup           cpu.stat in cgroup
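As a usage sketch: the sysctl and boot parameter below are the ones
named by check_schedstat_required(), and the tracefs path assumes it
is mounted at /sys/kernel/tracing (the exact paths may differ on your
system):

  # Enable schedstats at runtime (or boot with schedstats=enable):
  $ sysctl -w kernel.sched_schedstats=1

  # Watch the wait-time tracepoint, which now fires for RT tasks too:
  $ echo 1 > /sys/kernel/tracing/events/sched/sched_stat_wait/enable
  $ cat /sys/kernel/tracing/trace_pipe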
The output of an RT task's schedstats is as follows:

$ cat /proc/10461/sched
...
stats.sum_sleep_runtime              : 37966.502936
stats.wait_start                     :     0.000000
stats.sleep_start                    :     0.000000
stats.block_start                    : 279182.986012
stats.sleep_max                      :     9.001121
stats.block_max                      :     9.292692
stats.exec_max                       :     0.090009
stats.slice_max                      :     0.000000
stats.wait_max                       :     0.005305
stats.wait_sum                       :     0.352352
stats.wait_count                     :        236173
stats.iowait_sum                     : 37875.625128
stats.iowait_count                   :        235933
stats.nr_migrations_cold             :             0
stats.nr_failed_migrations_affine    :             0
stats.nr_failed_migrations_running   :             0
stats.nr_failed_migrations_hot       :             0
stats.nr_forced_migrations           :             0
stats.nr_wakeups                     :        236172
stats.nr_wakeups_sync                :             0
stats.nr_wakeups_migrate             :             2
stats.nr_wakeups_local               :        235865
stats.nr_wakeups_remote              :           307
stats.nr_wakeups_affine              :             0
stats.nr_wakeups_affine_attempts     :             0
stats.nr_wakeups_passive             :             0
stats.nr_wakeups_idle                :             0
...

The sched:sched_stat_{wait, sleep, iowait, blocked} tracepoints can be
used to trace RT tasks as well. The output of these tracepoints for an
RT task is as follows:

- blocked & iowait
  kworker/48:1-442 [048] d... 539.830872: sched_stat_iowait: comm=stress pid=10461 delay=158242 [ns]
  kworker/48:1-442 [048] d... 539.830872: sched_stat_blocked: comm=stress pid=10461 delay=158242 [ns]

- wait
  stress-10460 [001] dN.. 813.965304: sched_stat_wait: comm=stress pid=10462 delay=99997536 [ns]
  stress-10462 [001] d.h. 813.966300: sched_stat_runtime: comm=stress pid=10462 runtime=993812 [ns] vruntime=0 [ns]
  [...]
  stress-10462 [001] d.h. 814.065300: sched_stat_runtime: comm=stress pid=10462 runtime=994484 [ns] vruntime=0 [ns]
  [ 100 sched_stat_runtime events in total for pid 10462 ]
  [ the wait delay of pid 10460 is the sum of the runtimes above ]
  stress-10462 [001] dN.. 814.065307: sched_stat_wait: comm=stress pid=10460 delay=100001089 [ns]
  stress-10460 [001] d.h. 814.066299: sched_stat_runtime: comm=stress pid=10460 runtime=991934 [ns] vruntime=0 [ns]

- sleep
  sleep-15582 [041] dN.. 1732.814348: sched_stat_sleep: comm=sleep.sh pid=15474 delay=1001223130 [ns]
  sleep-15584 [041] dN.. 1733.815908: sched_stat_sleep: comm=sleep.sh pid=15474 delay=1001238954 [ns]
  [ sleep.sh sleeps for 1 second at a time ]
Signed-off-by: Yafang Shao
---
 kernel/sched/rt.c | 134 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 133 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index da989653b0a2..f764c2b9070d 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1271,6 +1271,121 @@ static void __delist_rt_entity(struct sched_rt_entity *rt_se, struct rt_prio_array *array)
        rt_se->on_list = 0;
 }
 
+static inline void
+__schedstat_from_sched_rt_entity(struct sched_rt_entity *rt_se,
+                                struct sched_statistics **stats)
+{
+       struct task_struct *p;
+       struct task_group *tg;
+       struct rt_rq *rt_rq;
+       int cpu;
+
+       if (rt_entity_is_task(rt_se)) {
+               p = rt_task_of(rt_se);
+               *stats = &p->stats;
+       } else {
+               rt_rq = group_rt_rq(rt_se);
+               tg = rt_rq->tg;
+               cpu = cpu_of(rq_of_rt_rq(rt_rq));
+               *stats = tg->stats[cpu];
+       }
+}
+
+static inline void
+schedstat_from_sched_rt_entity(struct sched_rt_entity *rt_se,
+                              struct sched_statistics **stats)
+{
+       if (!schedstat_enabled())
+               return;
+
+       __schedstat_from_sched_rt_entity(rt_se, stats);
+}
+
+static inline void
+update_stats_wait_start_rt(struct rt_rq *rt_rq, struct sched_rt_entity *rt_se)
+{
+       struct sched_statistics *stats = NULL;
+       struct task_struct *p = NULL;
+
+       if (!schedstat_enabled())
+               return;
+
+       if (rt_entity_is_task(rt_se))
+               p = rt_task_of(rt_se);
+
+       __schedstat_from_sched_rt_entity(rt_se, &stats);
+
+       __update_stats_wait_start(rq_of_rt_rq(rt_rq), p, stats);
+}
+
+static inline void
+update_stats_enqueue_sleeper_rt(struct rt_rq *rt_rq, struct sched_rt_entity *rt_se)
+{
+       struct sched_statistics *stats = NULL;
+       struct task_struct *p = NULL;
+
+       if (!schedstat_enabled())
+               return;
+
+       if (rt_entity_is_task(rt_se))
+               p = rt_task_of(rt_se);
+
+       __schedstat_from_sched_rt_entity(rt_se, &stats);
+
+       __update_stats_enqueue_sleeper(rq_of_rt_rq(rt_rq), p, stats);
+}
+
+static inline void
+update_stats_enqueue_rt(struct rt_rq *rt_rq, struct sched_rt_entity *rt_se,
+                       int flags)
+{
+       if (!schedstat_enabled())
+               return;
+
+       if (flags & ENQUEUE_WAKEUP)
+               update_stats_enqueue_sleeper_rt(rt_rq, rt_se);
+}
+
+static inline void
+update_stats_wait_end_rt(struct rt_rq *rt_rq, struct sched_rt_entity *rt_se)
+{
+       struct sched_statistics *stats = NULL;
+       struct task_struct *p = NULL;
+
+       if (!schedstat_enabled())
+               return;
+
+       if (rt_entity_is_task(rt_se))
+               p = rt_task_of(rt_se);
+
+       __schedstat_from_sched_rt_entity(rt_se, &stats);
+
+       __update_stats_wait_end(rq_of_rt_rq(rt_rq), p, stats);
+}
+
+static inline void
+update_stats_dequeue_rt(struct rt_rq *rt_rq, struct sched_rt_entity *rt_se,
+                       int flags)
+{
+       struct task_struct *p = NULL;
+
+       if (!schedstat_enabled())
+               return;
+
+       if (rt_entity_is_task(rt_se))
+               p = rt_task_of(rt_se);
+
+       if ((flags & DEQUEUE_SLEEP) && p) {
+               if (p->state & TASK_INTERRUPTIBLE)
+                       __schedstat_set(p->stats.sleep_start,
+                                       rq_clock(rq_of_rt_rq(rt_rq)));
+
+               if (p->state & TASK_UNINTERRUPTIBLE)
+                       __schedstat_set(p->stats.block_start,
+                                       rq_clock(rq_of_rt_rq(rt_rq)));
+       }
+}
+
 static void __enqueue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)
 {
        struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
@@ -1344,6 +1459,8 @@ static void enqueue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)
 {
        struct rq *rq = rq_of_rt_se(rt_se);
 
+       update_stats_enqueue_rt(rt_rq_of_se(rt_se), rt_se, flags);
+
        dequeue_rt_stack(rt_se, flags);
        for_each_sched_rt_entity(rt_se)
                __enqueue_rt_entity(rt_se, flags);
@@ -1354,6 +1471,7 @@ static void dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)
 {
        struct rq *rq = rq_of_rt_se(rt_se);
 
+       update_stats_dequeue_rt(rt_rq_of_se(rt_se), rt_se, flags);
        dequeue_rt_stack(rt_se, flags);
 
        for_each_sched_rt_entity(rt_se) {
@@ -1376,6 +1494,9 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)
        if (flags & ENQUEUE_WAKEUP)
                rt_se->timeout = 0;
 
+       check_schedstat_required();
+       update_stats_wait_start_rt(rt_rq_of_se(rt_se), rt_se);
+
        enqueue_rt_entity(rt_se, flags);
 
        if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
@@ -1574,9 +1695,14 @@ static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int flags)
 #endif
 }
 
-static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool first)
+void set_next_task_rt(struct rq *rq, struct task_struct *p, bool first)
 {
+       struct sched_rt_entity *rt_se = &p->rt;
+       struct rt_rq *rt_rq = &rq->rt;
+
        p->se.exec_start = rq_clock_task(rq);
 
+       if (on_rt_rq(&p->rt))
+               update_stats_wait_end_rt(rt_rq, rt_se);
+
        /* The running task is never eligible for pushing */
        dequeue_pushable_task(rq, p);
@@ -1640,6 +1766,12 @@ static struct task_struct *pick_next_task_rt(struct rq *rq)
 
 static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
 {
+       struct sched_rt_entity *rt_se = &p->rt;
+       struct rt_rq *rt_rq = &rq->rt;
+
+       if (on_rt_rq(&p->rt))
+               update_stats_wait_start_rt(rt_rq, rt_se);
+
        update_curr_rt(rq);
 
        update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 1);