From patchwork Thu Feb 11 17:10:12 2016
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 61792
Date: Thu, 11 Feb 2016 17:10:12 +0000
From: Juri Lelli
To: Steven Rostedt
Cc: luca abeni, linux-kernel@vger.kernel.org, peterz@infradead.org,
 mingo@redhat.com, vincent.guittot@linaro.org, wanpeng.li@hotmail.com
Subject: Re: [PATCH 1/2] sched/deadline: add per rq tracking of admitted
 bandwidth
Message-ID: <20160211171012.GS11415@e106622-lin>
References: <20160210113258.GX11415@e106622-lin>
 <20160210093702.10c655be@gandalf.local.home>
 <20160210162748.GI11415@e106622-lin>
 <20160211121257.GL11415@e106622-lin>
 <20160211132254.1a369fe9@utopia>
 <20160211122754.GN11415@e106622-lin>
 <20160211134018.6b15fd68@utopia>
 <20160211124959.GO11415@e106622-lin>
 <20160211140545.3c9e6e41@utopia>
 <20160211092546.5b607147@gandalf.local.home>
In-Reply-To: <20160211092546.5b607147@gandalf.local.home>
X-Mailing-List: linux-kernel@vger.kernel.org

On 11/02/16 09:25, Steven Rostedt wrote:
> On Thu, 11 Feb 2016 14:05:45 +0100
> luca abeni wrote:
>
> > Well, I never used the rq utilization to re-build the root_domain
> > utilization (and I never played with root domains too much... :)...
> > So, I do not really know. Maybe the code should do:
> > raw_spin_lock(&rq->lock);
> > raw_spin_lock(&cpu_rq(cpu)->lock);
>
> Of course you want to use double_rq_lock() here instead.
>

Right. Is something like this completely out of the question/broken?
I slightly tested it with Steve's test and I don't see the warning
anymore (sched_debug looks good as well); but my confidence is still
pretty low. :(

--->8---

From 9713e12bc682ca364e62f9d69bcd44598c50a8a9 Mon Sep 17 00:00:00 2001
From: Juri Lelli
Date: Thu, 11 Feb 2016 16:55:49 +0000
Subject: [PATCH] fixup!
 sched/deadline: add per rq tracking of admitted bandwidth

Signed-off-by: Juri Lelli
---
 include/linux/init_task.h |  1 +
 include/linux/sched.h     |  1 +
 kernel/sched/core.c       |  5 ++++-
 kernel/sched/deadline.c   | 26 +++++++++++++++++++++++++-
 4 files changed, 31 insertions(+), 2 deletions(-)

-- 
2.7.0

diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index f2cb8d4..c582f9d 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -199,6 +199,7 @@ extern struct task_group root_task_group;
 	.policy		= SCHED_NORMAL,					\
 	.cpus_allowed	= CPU_MASK_ALL,					\
 	.nr_cpus_allowed= NR_CPUS,					\
+	.fallback_cpu	= -1,						\
 	.mm		= NULL,						\
 	.active_mm	= &init_mm,					\
 	.restart_block	= {						\
diff --git a/include/linux/sched.h b/include/linux/sched.h
index a10494a..a6fc95c 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1401,6 +1401,7 @@ struct task_struct {
 	struct task_struct *last_wakee;

 	int wake_cpu;
+	int fallback_cpu;
 #endif
 	int on_rq;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7fb9246..4e4bc41 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1442,7 +1442,8 @@ static int select_fallback_rq(int cpu, struct task_struct *p)
 				continue;
 			if (!cpu_active(dest_cpu))
 				continue;
-			if (cpumask_test_cpu(dest_cpu, tsk_cpus_allowed(p)))
+			if (cpumask_test_cpu(dest_cpu, tsk_cpus_allowed(p))) {
+				p->fallback_cpu = dest_cpu;
 				return dest_cpu;
+			}
 		}
 	}
@@ -1490,6 +1491,7 @@ out:
 		}
 	}

+	p->fallback_cpu = dest_cpu;
 	return dest_cpu;
 }
@@ -1954,6 +1956,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	if (task_cpu(p) != cpu) {
 		wake_flags |= WF_MIGRATED;
 		set_task_cpu(p, cpu);
+		p->fallback_cpu = -1;
 	}

 #endif /* CONFIG_SMP */
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 6368f43..1eccecf 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1043,6 +1043,21 @@ static void yield_task_dl(struct rq *rq)

 #ifdef CONFIG_SMP

+static void swap_task_ac_bw(struct task_struct *p,
+			    struct rq *from,
+			    struct rq *to)
+{
+	unsigned long flags;
+
+	lockdep_assert_held(&p->pi_lock);
+	local_irq_save(flags);
+	double_rq_lock(from, to);
+	__dl_sub_ac(from, p->dl.dl_bw);
+	__dl_add_ac(to, p->dl.dl_bw);
+	double_rq_unlock(from, to);
+	local_irq_restore(flags);
+}
+
 static int find_later_rq(struct task_struct *task);

 static int
@@ -1077,8 +1092,10 @@ select_task_rq_dl(struct task_struct *p, int cpu, int sd_flag, int flags)
 		if (target != -1 &&
 		    (dl_time_before(p->dl.deadline,
 				    cpu_rq(target)->dl.earliest_dl.curr) ||
-		     (cpu_rq(target)->dl.dl_nr_running == 0)))
+		     (cpu_rq(target)->dl.dl_nr_running == 0))) {
 			cpu = target;
+			swap_task_ac_bw(p, rq, cpu_rq(target));
+		}
 	}
 	rcu_read_unlock();
@@ -1807,6 +1824,12 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
 		switched_to_dl(rq, p);
 }

+static void migrate_task_rq_dl(struct task_struct *p)
+{
+	if (p->fallback_cpu != -1)
+		swap_task_ac_bw(p, task_rq(p), cpu_rq(p->fallback_cpu));
+}
+
 const struct sched_class dl_sched_class = {
 	.next			= &rt_sched_class,
 	.enqueue_task		= enqueue_task_dl,
@@ -1820,6 +1843,7 @@ const struct sched_class dl_sched_class = {

 #ifdef CONFIG_SMP
 	.select_task_rq		= select_task_rq_dl,
+	.migrate_task_rq	= migrate_task_rq_dl,
 	.set_cpus_allowed	= set_cpus_allowed_dl,
 	.rq_online		= rq_online_dl,
 	.rq_offline		= rq_offline_dl,