From patchwork Mon May 29 21:02:59 2017
X-Patchwork-Submitter: Nicolas Pitre
X-Patchwork-Id: 100687
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: Ingo Molnar, Peter Zijlstra
Cc: linux-kernel@vger.kernel.org
Subject: [PATCH 4/7] sched/deadline: make it configurable
Date: Mon, 29 May 2017 17:02:59 -0400
Message-Id: <20170529210302.26868-5-nicolas.pitre@linaro.org>
X-Mailer: git-send-email 2.9.4
In-Reply-To: <20170529210302.26868-1-nicolas.pitre@linaro.org>
References: <20170529210302.26868-1-nicolas.pitre@linaro.org>

On most small systems, the deadline scheduler class is a luxury that
rarely gets used, if at all. It is preferable to have the ability to
configure it out to reduce the kernel size in that case.
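Most of the call sites touched below use IS_ENABLED(CONFIG_SCHED_DL)
rather than a bare #ifdef: IS_ENABLED() expands to a constant 0 or 1, so
the compiler still parses and type-checks the deadline paths but discards
them as dead code, and no references to symbols from the omitted
deadline.o survive to link time. As a rough stand-alone sketch of the
idiom (IS_ENABLED_DL and dl_feature() below are illustrative stand-ins,
not the kernel's definitions):

  #include <stdio.h>

  /*
   * Illustrative stand-in for the kernel's IS_ENABLED(CONFIG_SCHED_DL):
   * a bool Kconfig option is either defined to 1 or not defined at all,
   * and IS_ENABLED() turns that into a constant 0-or-1 expression.
   */
  #ifdef CONFIG_SCHED_DL
  #define IS_ENABLED_DL 1
  #else
  #define IS_ENABLED_DL 0
  #endif

  static int dl_feature(void)		/* hypothetical helper */
  {
  	return 1;
  }

  int main(void)
  {
  	/*
  	 * With the option off this branch is constant-false: the call is
  	 * still parsed (so bit-rot is caught at compile time, unlike with
  	 * #ifdef) but, with optimization, no code for it is emitted.
  	 */
  	if (IS_ENABLED_DL && dl_feature())
  		printf("deadline path\n");
  	else
  		printf("deadline path compiled out\n");
  	return 0;
  }

Compiling with and without -DCONFIG_SCHED_DL flips the branch; the same
constant folding is what lets, e.g., the sched_rt_handler() hunk keep its
call to sched_dl_global_validate() visible to the compiler while dropping
it from the object code when CONFIG_SCHED_DL=n.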
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
---
 include/linux/sched.h          |  2 ++
 include/linux/sched/deadline.h |  2 +-
 init/Kconfig                   |  8 ++++++++
 kernel/locking/rtmutex.c       |  9 +++++++++
 kernel/sched/Makefile          |  5 +++--
 kernel/sched/core.c            | 37 ++++++++++++++++++++++++++-----------
 kernel/sched/debug.c           |  4 ++++
 kernel/sched/rt.c              |  7 +++++--
 kernel/sched/sched.h           |  9 +++++++--
 kernel/sched/stop_task.c       |  4 ++++
 kernel/sched/topology.c        |  6 ++++++
 11 files changed, 75 insertions(+), 18 deletions(-)

-- 
2.9.4

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2b69fc6502..ba0c203669 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -522,7 +522,9 @@ struct task_struct {
 #ifdef CONFIG_CGROUP_SCHED
 	struct task_group		*sched_task_group;
 #endif
+#ifdef CONFIG_SCHED_DL
 	struct sched_dl_entity		dl;
+#endif
 
 #ifdef CONFIG_PREEMPT_NOTIFIERS
 	/* List of struct preempt_notifier: */
diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
index 975be862e0..308ca2482a 100644
--- a/include/linux/sched/deadline.h
+++ b/include/linux/sched/deadline.h
@@ -13,7 +13,7 @@
 
 static inline int dl_prio(int prio)
 {
-	if (unlikely(prio < MAX_DL_PRIO))
+	if (IS_ENABLED(CONFIG_SCHED_DL) && unlikely(prio < MAX_DL_PRIO))
 		return 1;
 	return 0;
 }
diff --git a/init/Kconfig b/init/Kconfig
index b9aed60cac..f73e3f0940 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1303,6 +1303,14 @@ config SCHED_AUTOGROUP
 	  desktop applications.  Task group autogeneration is currently based
 	  upon task session.
 
+config SCHED_DL
+	bool "Deadline Task Scheduling" if EXPERT
+	default y
+	help
+	  This adds the sched_dl scheduling class to the kernel providing
+	  support for the SCHED_DEADLINE policy. You might want to disable
+	  this to reduce the kernel size. If unsure say y.
+
 config SYSFS_DEPRECATED
 	bool "Enable deprecated sysfs features to support old userspace tools"
 	depends on SYSFS
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 28cd09e635..f42c1b1e52 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -227,8 +227,13 @@ static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock,
 /*
  * Only use with rt_mutex_waiter_{less,equal}()
  */
+#ifdef CONFIG_SCHED_DL
 #define task_to_waiter(p) \
 	&(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = (p)->dl.deadline }
+#else
+#define task_to_waiter(p) \
+	&(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = 0 }
+#endif
 
 static inline int
 rt_mutex_waiter_less(struct rt_mutex_waiter *left,
@@ -692,7 +697,9 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
 	 * the values of the node being removed.
 	 */
 	waiter->prio = task->prio;
+#ifdef CONFIG_SCHED_DL
 	waiter->deadline = task->dl.deadline;
+#endif
 
 	rt_mutex_enqueue(lock, waiter);
 
@@ -967,7 +974,9 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
 	waiter->task = task;
 	waiter->lock = lock;
 	waiter->prio = task->prio;
+#ifdef CONFIG_SCHED_DL
 	waiter->deadline = task->dl.deadline;
+#endif
 
 	/* Get the top priority waiter on the lock */
 	if (rt_mutex_has_waiters(lock))
diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
index 5e4c2e7a63..3bd6a7c1cc 100644
--- a/kernel/sched/Makefile
+++ b/kernel/sched/Makefile
@@ -16,9 +16,10 @@ CFLAGS_core.o := $(PROFILING) -fno-omit-frame-pointer
 endif
 
 obj-y += core.o loadavg.o clock.o cputime.o
-obj-y += idle_task.o fair.o rt.o deadline.o
 obj-y += wait.o swait.o completion.o idle.o
-obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o topology.o stop_task.o
+obj-y += idle_task.o fair.o rt.o
+obj-$(CONFIG_SCHED_DL) += deadline.o $(if $(CONFIG_SMP),cpudeadline.o)
+obj-$(CONFIG_SMP) += cpupri.o topology.o stop_task.o
 obj-$(CONFIG_SCHED_AUTOGROUP) += autogroup.o
 obj-$(CONFIG_SCHEDSTATS) += stats.o
 obj-$(CONFIG_SCHED_DEBUG) += debug.o
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 93ce28ea34..d2d2791f32 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -634,9 +634,11 @@ bool sched_can_stop_tick(struct rq *rq)
 {
 	int fifo_nr_running;
 
+#ifdef CONFIG_SCHED_DL
 	/* Deadline tasks, even if single, need the tick */
 	if (rq->dl.dl_nr_running)
 		return false;
+#endif
 
 	/*
 	 * If there are more than one RR tasks, we need the tick to effect the
@@ -2174,9 +2176,11 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	memset(&p->se.statistics, 0, sizeof(p->se.statistics));
 #endif
 
+#ifdef CONFIG_SCHED_DL
 	RB_CLEAR_NODE(&p->dl.rb_node);
 	init_dl_task_timer(&p->dl);
 	__dl_clear_params(p);
+#endif
 
 	INIT_LIST_HEAD(&p->rt.run_list);
 	p->rt.timeout = 0;
@@ -3699,6 +3703,9 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
 	 *          --> -dl task blocks on mutex A and could preempt the
 	 *              running task
 	 */
+#ifdef CONFIG_SCHED_DL
+	if (dl_prio(oldprio))
+		p->dl.dl_boosted = 0;
 	if (dl_prio(prio)) {
 		if (!dl_prio(p->normal_prio) ||
 		    (pi_task && dl_entity_preempt(&pi_task->dl, &p->dl))) {
@@ -3707,15 +3714,13 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
 		} else
 			p->dl.dl_boosted = 0;
 		p->sched_class = &dl_sched_class;
-	} else if (rt_prio(prio)) {
-		if (dl_prio(oldprio))
-			p->dl.dl_boosted = 0;
+	} else
+#endif
+	if (rt_prio(prio)) {
 		if (oldprio < prio)
 			queue_flag |= ENQUEUE_HEAD;
 		p->sched_class = &rt_sched_class;
 	} else {
-		if (dl_prio(oldprio))
-			p->dl.dl_boosted = 0;
 		if (rt_prio(oldprio))
 			p->rt.timeout = 0;
 		p->sched_class = &fair_sched_class;
@@ -5266,7 +5271,8 @@ int cpuset_cpumask_can_shrink(const struct cpumask *cur,
 	if (!cpumask_weight(cur))
 		return ret;
 
-	ret = dl_cpuset_cpumask_can_shrink(cur, trial);
+	if (IS_ENABLED(CONFIG_SCHED_DL))
+		ret = dl_cpuset_cpumask_can_shrink(cur, trial);
 
 	return ret;
 }
@@ -5561,7 +5567,7 @@ static void cpuset_cpu_active(void)
 static int cpuset_cpu_inactive(unsigned int cpu)
 {
 	if (!cpuhp_tasks_frozen) {
-		if (dl_cpu_busy(cpu))
+		if (IS_ENABLED(CONFIG_SCHED_DL) && dl_cpu_busy(cpu))
 			return -EBUSY;
 		cpuset_update_active_cpus();
 	} else {
@@ -5721,7 +5727,9 @@ void __init sched_init_smp(void)
 	free_cpumask_var(non_isolated_cpus);
 
 	init_sched_rt_class();
+#ifdef CONFIG_SCHED_DL
 	init_sched_dl_class();
+#endif
 
 	sched_init_smt();
 	sched_clock_init_late();
@@ -5825,7 +5833,9 @@ void __init sched_init(void)
 #endif /* CONFIG_CPUMASK_OFFSTACK */
 
 	init_rt_bandwidth(&def_rt_bandwidth, global_rt_period(), global_rt_runtime());
+#ifdef CONFIG_SCHED_DL
 	init_dl_bandwidth(&def_dl_bandwidth, global_rt_period(), global_rt_runtime());
+#endif
 
 #ifdef CONFIG_SMP
 	init_defrootdomain();
@@ -5855,7 +5865,9 @@ void __init sched_init(void)
 		rq->calc_load_update = jiffies + LOAD_FREQ;
 		init_cfs_rq(&rq->cfs);
 		init_rt_rq(&rq->rt);
+#ifdef CONFIG_SCHED_DL
 		init_dl_rq(&rq->dl);
+#endif
 #ifdef CONFIG_FAIR_GROUP_SCHED
 		root_task_group.shares = ROOT_TASK_GROUP_LOAD;
 		INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
@@ -6518,16 +6530,19 @@ int sched_rt_handler(struct ctl_table *table, int write,
 		if (ret)
 			goto undo;
 
-		ret = sched_dl_global_validate();
-		if (ret)
-			goto undo;
+		if (IS_ENABLED(CONFIG_SCHED_DL)) {
+			ret = sched_dl_global_validate();
+			if (ret)
+				goto undo;
+		}
 
 		ret = sched_rt_global_constraints();
 		if (ret)
 			goto undo;
 
 		sched_rt_do_global();
-		sched_dl_do_global();
+		if (IS_ENABLED(CONFIG_SCHED_DL))
+			sched_dl_do_global();
 	}
 	if (0) {
 undo:
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 38f019324f..84f80a81ab 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -646,7 +646,9 @@ do {									\
 	spin_lock_irqsave(&sched_debug_lock, flags);
 	print_cfs_stats(m, cpu);
 	print_rt_stats(m, cpu);
+#ifdef CONFIG_SCHED_DL
 	print_dl_stats(m, cpu);
+#endif
 
 	print_rq(m, rq, cpu);
 	spin_unlock_irqrestore(&sched_debug_lock, flags);
@@ -954,10 +956,12 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
 #endif
 	P(policy);
 	P(prio);
+#ifdef CONFIG_SCHED_DL
 	if (p->policy == SCHED_DEADLINE) {
 		P(dl.runtime);
 		P(dl.deadline);
 	}
+#endif
 #undef PN_SCHEDSTAT
 #undef PN
 #undef __PN
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 979b734100..a3206ef3e8 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1544,9 +1544,12 @@ pick_next_task_rt(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	 * means a dl or stop task can slip in, in which case we need
 	 * to re-start task selection.
 	 */
-	if (unlikely((rq->stop && task_on_rq_queued(rq->stop)) ||
-		     rq->dl.dl_nr_running))
+	if (unlikely((rq->stop && task_on_rq_queued(rq->stop))))
 		return RETRY_TASK;
+#ifdef CONFIG_SCHED_DL
+	if (unlikely(rq->dl.dl_nr_running))
+		return RETRY_TASK;
+#endif
 	}
 
 	/*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4a845c19b8..ec9a84aad4 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -137,7 +137,7 @@ static inline int rt_policy(int policy)
 
 static inline int dl_policy(int policy)
 {
-	return policy == SCHED_DEADLINE;
+	return IS_ENABLED(CONFIG_SCHED_DL) && policy == SCHED_DEADLINE;
 }
 static inline bool valid_policy(int policy)
 {
@@ -667,7 +667,9 @@ struct rq {
 
 	struct cfs_rq cfs;
 	struct rt_rq rt;
+#ifdef CONFIG_SCHED_DL
 	struct dl_rq dl;
+#endif
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/* list of leaf cfs_rq on this cpu: */
@@ -1438,9 +1440,12 @@ static inline void set_curr_task(struct rq *rq, struct task_struct *curr)
 
 #ifdef CONFIG_SMP
 #define sched_class_highest (&stop_sched_class)
-#else
+#elif defined(CONFIG_SCHED_DL)
 #define sched_class_highest (&dl_sched_class)
+#else
+#define sched_class_highest (&rt_sched_class)
 #endif
+
 #define for_each_class(class) \
 	for (class = sched_class_highest; class; class = class->next)
 
diff --git a/kernel/sched/stop_task.c b/kernel/sched/stop_task.c
index 9f69fb6308..5632dc3e63 100644
--- a/kernel/sched/stop_task.c
+++ b/kernel/sched/stop_task.c
@@ -110,7 +110,11 @@ static void update_curr_stop(struct rq *rq)
  * Simple, special scheduling class for the per-CPU stop tasks:
  */
 const struct sched_class stop_sched_class = {
+#ifdef CONFIG_SCHED_DL
 	.next			= &dl_sched_class,
+#else
+	.next			= &rt_sched_class,
+#endif
 
 	.enqueue_task		= enqueue_task_stop,
 	.dequeue_task		= dequeue_task_stop,
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 1b0b4fb128..25328bfca6 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -195,7 +195,9 @@ static void free_rootdomain(struct rcu_head *rcu)
 	struct root_domain *rd = container_of(rcu, struct root_domain, rcu);
 
 	cpupri_cleanup(&rd->cpupri);
+#ifdef CONFIG_SCHED_DL
 	cpudl_cleanup(&rd->cpudl);
+#endif
 	free_cpumask_var(rd->dlo_mask);
 	free_cpumask_var(rd->rto_mask);
 	free_cpumask_var(rd->online);
@@ -253,16 +255,20 @@ static int init_rootdomain(struct root_domain *rd)
 	if (!zalloc_cpumask_var(&rd->rto_mask, GFP_KERNEL))
 		goto free_dlo_mask;
 
+#ifdef CONFIG_SCHED_DL
 	init_dl_bw(&rd->dl_bw);
 	if (cpudl_init(&rd->cpudl) != 0)
 		goto free_rto_mask;
+#endif
 
 	if (cpupri_init(&rd->cpupri) != 0)
 		goto free_cpudl;
 	return 0;
 
 free_cpudl:
+#ifdef CONFIG_SCHED_DL
 	cpudl_cleanup(&rd->cpudl);
+#endif
 free_rto_mask:
 	free_cpumask_var(rd->rto_mask);
 free_dlo_mask:
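
For context on the sched_class_highest and stop_sched_class.next hunks
above: the scheduler classes form a singly linked chain, highest priority
first (stop -> dl -> rt -> fair -> idle), which pick_next_task() walks via
for_each_class(), so compiling out the deadline class largely amounts to
unlinking one node. A toy, compilable model of that chain under the same
#ifdef (simplified stand-in types, SMP case only; not the kernel's
definitions):

  #include <stdio.h>

  /*
   * Toy model of the scheduler class chain this patch rewires.  The real
   * chain is stop -> dl -> rt -> fair -> idle; with CONFIG_SCHED_DL=n the
   * patch links stop directly to rt.
   */
  struct sched_class {
  	const char *name;
  	const struct sched_class *next;
  };

  static const struct sched_class idle_class = { "idle", NULL };
  static const struct sched_class fair_class = { "fair", &idle_class };
  static const struct sched_class rt_class   = { "rt",   &fair_class };
  #ifdef CONFIG_SCHED_DL
  static const struct sched_class dl_class   = { "dl",   &rt_class };
  static const struct sched_class stop_class = { "stop", &dl_class };
  #else
  static const struct sched_class stop_class = { "stop", &rt_class };
  #endif

  #define sched_class_highest (&stop_class)
  #define for_each_class(class) \
  	for (class = sched_class_highest; class; class = class->next)

  int main(void)
  {
  	const struct sched_class *class;

  	/* pick_next_task() walks the chain top down just like this. */
  	for_each_class(class)
  		printf("%s\n", class->name);
  	return 0;
  }

Built with -DCONFIG_SCHED_DL this prints all five class names; built
without, the dl hop simply disappears while the traversal logic is
untouched, which is why the generic picker needs no changes.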