From patchwork Mon Dec  4 10:23:19 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 120500
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
	viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org,
	luca.abeni@santannapisa.it, claudio@evidence.eu.com,
	tommaso.cucinotta@santannapisa.it, bristot@redhat.com,
	mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com,
	morten.rasmussen@arm.com, dietmar.eggemann@arm.com,
	patrick.bellasi@arm.com, alessio.balsini@arm.com,
	juri.lelli@redhat.com
Subject: [RFC PATCH v2 2/8] sched/deadline: move cpu frequency selection
 triggering points
Date: Mon,  4 Dec 2017 11:23:19 +0100
Message-Id: <20171204102325.5110-3-juri.lelli@redhat.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20171204102325.5110-1-juri.lelli@redhat.com>
References: <20171204102325.5110-1-juri.lelli@redhat.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

From: Juri Lelli

Since SCHED_DEADLINE doesn't track utilization signal (but reserves a
fraction of CPU bandwidth to tasks admitted to the system), there is no
point in evaluating frequency changes during each tick event. Move
frequency selection triggering points to where running_bw changes.

Co-authored-by: Claudio Scordino
Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
Reviewed-by: Viresh Kumar
---
 kernel/sched/deadline.c |  7 ++++---
 kernel/sched/sched.h    | 12 ++++++------
 2 files changed, 10 insertions(+), 9 deletions(-)

-- 
2.14.3

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 2473736c7616..7e4038bf9954 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -86,6 +86,8 @@ void add_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 	dl_rq->running_bw += dl_bw;
 	SCHED_WARN_ON(dl_rq->running_bw < old); /* overflow */
 	SCHED_WARN_ON(dl_rq->running_bw > dl_rq->this_bw);
+	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_util(rq_of_dl_rq(dl_rq), SCHED_CPUFREQ_DL);
 }
 
 static inline
@@ -98,6 +100,8 @@ void sub_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 	SCHED_WARN_ON(dl_rq->running_bw > old); /* underflow */
 	if (dl_rq->running_bw > old)
 		dl_rq->running_bw = 0;
+	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_util(rq_of_dl_rq(dl_rq), SCHED_CPUFREQ_DL);
 }
 
 static inline
@@ -1134,9 +1138,6 @@ static void update_curr_dl(struct rq *rq)
 		return;
 	}
 
-	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
-	cpufreq_update_util(rq, SCHED_CPUFREQ_DL);
-
 	schedstat_set(curr->se.statistics.exec_max,
 		      max(curr->se.statistics.exec_max, delta_exec));
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b19552a212de..a1730e39cbc6 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2096,14 +2096,14 @@ DECLARE_PER_CPU(struct update_util_data *, cpufreq_update_util_data);
  * The way cpufreq is currently arranged requires it to evaluate the CPU
  * performance state (frequency/voltage) on a regular basis to prevent it from
  * being stuck in a completely inadequate performance level for too long.
- * That is not guaranteed to happen if the updates are only triggered from CFS,
- * though, because they may not be coming in if RT or deadline tasks are active
- * all the time (or there are RT and DL tasks only).
+ * That is not guaranteed to happen if the updates are only triggered from CFS
+ * and DL, though, because they may not be coming in if only RT tasks are
+ * active all the time (or there are RT tasks only).
  *
- * As a workaround for that issue, this function is called by the RT and DL
- * sched classes to trigger extra cpufreq updates to prevent it from stalling,
+ * As a workaround for that issue, this function is called periodically by the
+ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
  * but that really is a band-aid. Going forward it should be replaced with
- * solutions targeted more specifically at RT and DL tasks.
+ * solutions targeted more specifically at RT tasks.
  */
 static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
 {