From patchwork Wed Jul 5 08:58:59 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 107045
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
	viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org,
	luca.abeni@santannapisa.it, claudio@evidence.eu.com,
	tommaso.cucinotta@santannapisa.it, bristot@redhat.com,
	mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com,
	andresoportus@google.com, morten.rasmussen@arm.com,
	dietmar.eggemann@arm.com, patrick.bellasi@arm.com,
	juri.lelli@arm.com, Ingo Molnar, "Rafael J. Wysocki"
Subject: [RFC PATCH v1 2/8] sched/deadline: move cpu frequency selection triggering points
Date: Wed, 5 Jul 2017 09:58:59 +0100
Message-Id: <20170705085905.6558-3-juri.lelli@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170705085905.6558-1-juri.lelli@arm.com>
References: <20170705085905.6558-1-juri.lelli@arm.com>

Since SCHED_DEADLINE doesn't track a utilization signal (but instead
reserves a fraction of CPU bandwidth for the tasks admitted to the
system), there is no point in evaluating frequency changes at each tick
event.

Move the frequency selection triggering points to where running_bw
changes.

Co-authored-by: Claudio Scordino
Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
---
Changes from RFCv0:

 - modify comment regarding periodic RT updates (Claudio)
---
 kernel/sched/deadline.c |  7 ++++---
 kernel/sched/sched.h    | 12 ++++++------
 2 files changed, 10 insertions(+), 9 deletions(-)

-- 
2.11.0

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index a84299f44b5d..6912f7f35f9b 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -85,6 +85,8 @@ void add_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 	dl_rq->running_bw += dl_bw;
 	SCHED_WARN_ON(dl_rq->running_bw < old); /* overflow */
 	SCHED_WARN_ON(dl_rq->running_bw > dl_rq->this_bw);
+	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_this_cpu(rq_of_dl_rq(dl_rq), SCHED_CPUFREQ_DL);
 }
 
 static inline
@@ -97,6 +99,8 @@ void sub_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 	SCHED_WARN_ON(dl_rq->running_bw > old); /* underflow */
 	if (dl_rq->running_bw > old)
 		dl_rq->running_bw = 0;
+	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_this_cpu(rq_of_dl_rq(dl_rq), SCHED_CPUFREQ_DL);
 }
 
 static inline
@@ -1135,9 +1139,6 @@ static void update_curr_dl(struct rq *rq)
 		return;
 	}
 
-	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
-	cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_DL);
-
 	schedstat_set(curr->se.statistics.exec_max,
 		      max(curr->se.statistics.exec_max, delta_exec));
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index eeef1a3086d1..d8798bb54ace 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2057,14 +2057,14 @@ DECLARE_PER_CPU(struct update_util_data *, cpufreq_update_util_data);
  * The way cpufreq is currently arranged requires it to evaluate the CPU
  * performance state (frequency/voltage) on a regular basis to prevent it from
  * being stuck in a completely inadequate performance level for too long.
- * That is not guaranteed to happen if the updates are only triggered from CFS,
- * though, because they may not be coming in if RT or deadline tasks are active
- * all the time (or there are RT and DL tasks only).
+ * That is not guaranteed to happen if the updates are only triggered from CFS
+ * and DL, though, because they may not be coming in if only RT tasks are
+ * active all the time (or there are RT tasks only).
  *
- * As a workaround for that issue, this function is called by the RT and DL
- * sched classes to trigger extra cpufreq updates to prevent it from stalling,
+ * As a workaround for that issue, this function is called periodically by the
+ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
  * but that really is a band-aid. Going forward it should be replaced with
- * solutions targeted more specifically at RT and DL tasks.
+ * solutions targeted more specifically at RT tasks.
  */
 static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
 {
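
The policy this change enables — asking cpufreq for a frequency proportional to the reserved DL bandwidth (running_bw) at the moments that bandwidth changes, rather than re-evaluating on every tick — can be sketched in user-space C. This is a minimal illustration, not kernel code: the fixed-point convention mirrors the kernel's BW_SHIFT/BW_UNIT bandwidth encoding, but map_bw_to_freq() is a hypothetical helper; in the kernel the actual frequency choice stays inside the governor, and this patch only moves where the governor gets kicked.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Fixed-point bandwidth unit, mirroring the kernel's BW_SHIFT/BW_UNIT
 * convention: a bandwidth of BW_UNIT means 100% of one CPU is reserved.
 */
#define BW_SHIFT 20
#define BW_UNIT  (1ULL << BW_SHIFT)

/*
 * Illustrative (non-kernel) mapping from the total reserved DL bandwidth
 * of a runqueue to a requested frequency in kHz:
 *
 *     freq = max_freq * running_bw / BW_UNIT
 *
 * clamped to [min_freq, max_freq]. Because running_bw only changes on
 * task arrival/departure (add_running_bw/sub_running_bw in the patch),
 * this mapping only needs to be re-evaluated at those points.
 */
static uint64_t map_bw_to_freq(uint64_t running_bw,
			       uint64_t min_freq, uint64_t max_freq)
{
	uint64_t freq = (max_freq * running_bw) >> BW_SHIFT;

	if (freq < min_freq)
		freq = min_freq;
	if (freq > max_freq)
		freq = max_freq;
	return freq;
}
```

For example, with a 400 MHz–2 GHz range, a runqueue whose admitted DL tasks reserve half a CPU (running_bw = BW_UNIT / 2) maps to 1 GHz, while an idle runqueue is clamped to the 400 MHz floor.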