From patchwork Tue Feb 25 11:47:42 2014
X-Patchwork-Submitter: Dietmar Eggemann
X-Patchwork-Id: 25284
From: Dietmar Eggemann <dietmar.eggemann@arm.com>
To: Peter Zijlstra, Ben Segall
Cc: linux-kernel@vger.kernel.org, Dietmar Eggemann
Subject: [PATCH] sched: put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED
Date: Tue, 25 Feb 2014 11:47:42 +0000
Message-Id: <1393328862-19997-1-git-send-email-dietmar.eggemann@arm.com>
X-Mailer: git-send-email 1.7.9.5

The struct sched_avg of struct rq is only used in case group scheduling
is enabled, inside __update_tg_runnable_avg(), to update the per-cpu
representation of a task group. I.e. there is no need to maintain the
runnable avg of a rq in the !CONFIG_FAIR_GROUP_SCHED case.

This patch guards struct sched_avg of struct rq and
update_rq_runnable_avg() with CONFIG_FAIR_GROUP_SCHED. There is an
extra empty definition of update_rq_runnable_avg(), necessary for the
!CONFIG_FAIR_GROUP_SCHED && CONFIG_SMP case.

The function print_cfs_group_stats(), which prints out struct sched_avg
of struct rq, is already guarded with CONFIG_FAIR_GROUP_SCHED.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---

Hi,

I was just wondering what the overall policy is when it comes to
guarding specific functionality in the scheduler code. Do we want to
guard something like the fair group scheduling support completely?

The patch is against tip/master.
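(Illustration only, not part of the patch: the guard-plus-empty-stub
pattern described above, boiled down to a self-contained example. All
names in it, CONFIG_EXAMPLE_FEATURE, struct foo, update_foo_stats(),
are made up for the sketch; the real change to struct rq and
update_rq_runnable_avg() is in the diff below.)

#include <stdio.h>

#define CONFIG_EXAMPLE_FEATURE 1	/* comment out to mimic the !CONFIG case */

struct foo {
	int always_present;
#ifdef CONFIG_EXAMPLE_FEATURE
	int feature_stats;	/* only maintained when the feature is enabled */
#endif
};

#ifdef CONFIG_EXAMPLE_FEATURE
/* Real implementation: touches the guarded member. */
static inline void update_foo_stats(struct foo *f, int delta)
{
	f->feature_stats += delta;
}
#else
/* Empty stub so call sites compile unchanged when the feature is off. */
static inline void update_foo_stats(struct foo *f, int delta) { }
#endif

int main(void)
{
	struct foo f = { .always_present = 1 };

	update_foo_stats(&f, 5);	/* identical call in both configurations */
	printf("always_present = %d\n", f.always_present);
	return 0;
}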
 kernel/sched/fair.c  | 13 +++++++------
 kernel/sched/sched.h |  2 ++
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5f6ddbef80af..76c6513b6889 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2376,12 +2376,19 @@ static inline void __update_group_entity_contrib(struct sched_entity *se)
 		se->avg.load_avg_contrib >>= NICE_0_SHIFT;
 	}
 }
+
+static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
+{
+	__update_entity_runnable_avg(rq_clock_task(rq), &rq->avg, runnable);
+	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
+}
 #else /* CONFIG_FAIR_GROUP_SCHED */
 static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
						 int force_update) {}
 static inline void __update_tg_runnable_avg(struct sched_avg *sa,
						  struct cfs_rq *cfs_rq) {}
 static inline void __update_group_entity_contrib(struct sched_entity *se) {}
+static inline void update_rq_runnable_avg(struct rq *rq, int runnable) {}
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 
 static inline void __update_task_entity_contrib(struct sched_entity *se)
@@ -2480,12 +2487,6 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
 		__update_cfs_rq_tg_load_contrib(cfs_rq, force_update);
 }
 
-static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
-{
-	__update_entity_runnable_avg(rq_clock_task(rq), &rq->avg, runnable);
-	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
-}
-
 /* Add the load generated by se into cfs_rq's child load-average */
 static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
					  struct sched_entity *se,
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4be68da1fe00..63beab7512e7 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -630,7 +630,9 @@ struct rq {
 	struct llist_head wake_list;
 #endif
 
+#ifdef CONFIG_FAIR_GROUP_SCHED
 	struct sched_avg avg;
+#endif
 };
 
 static inline int cpu_of(struct rq *rq)