From patchwork Fri Aug 26 18:40:47 2016
X-Patchwork-Submitter: Steve Muckle
X-Patchwork-Id: 74828
From: Steve Muckle <smuckle@linaro.org>
To: Peter Zijlstra, Ingo Molnar, "Rafael J. Wysocki"
Wysocki" Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Vincent Guittot , Morten Rasmussen , Dietmar Eggemann , Juri Lelli , Patrick Bellasi , Steve Muckle Subject: [PATCH 1/2] sched: cpufreq: ignore SMT when determining max cpu capacity Date: Fri, 26 Aug 2016 11:40:47 -0700 Message-Id: <1472236848-17038-2-git-send-email-smuckle@linaro.org> X-Mailer: git-send-email 2.7.3 In-Reply-To: <1472236848-17038-1-git-send-email-smuckle@linaro.org> References: <1472236848-17038-1-git-send-email-smuckle@linaro.org> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org PELT does not consider SMT when scaling its utilization values via arch_scale_cpu_capacity(). The value in rq->cpu_capacity_orig does take SMT into consideration though and therefore may be smaller than the utilization reported by PELT. On an Intel i7-3630QM for example rq->cpu_capacity_orig is 589 but util_avg scales up to 1024. This means that a 50% utilized CPU will show up in schedutil as ~86% busy. Fix this by using the same CPU scaling value in schedutil as that which is used by PELT. Signed-off-by: Steve Muckle --- kernel/sched/cpufreq_schedutil.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) -- 2.7.3 diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c index 60d985f4dc47..cb8a77b1ef1b 100644 --- a/kernel/sched/cpufreq_schedutil.c +++ b/kernel/sched/cpufreq_schedutil.c @@ -147,7 +147,9 @@ static unsigned int get_next_freq(struct sugov_cpu *sg_cpu, unsigned long util, static void sugov_get_util(unsigned long *util, unsigned long *max) { struct rq *rq = this_rq(); - unsigned long cfs_max = rq->cpu_capacity_orig; + unsigned long cfs_max; + + cfs_max = arch_scale_cpu_capacity(NULL, smp_processor_id()); *util = min(rq->cfs.avg.util_avg, cfs_max); *max = cfs_max;