From patchwork Mon Mar 9 07:32:28 2015
X-Patchwork-Submitter: Xunlei Pang
X-Patchwork-Id: 45528
From: Xunlei Pang <xlpang@126.com>
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Steven Rostedt, Juri Lelli, Xunlei Pang
Subject: [PATCH RESEND v4 3/3] sched/rt: Check to push the task when changing its affinity
Date: Mon, 9 Mar 2015 15:32:28 +0800
Message-Id: <1425886348-3191-3-git-send-email-xlpang@126.com>
In-Reply-To: <1425886348-3191-1-git-send-email-xlpang@126.com>
References: <1425886348-3191-1-git-send-email-xlpang@126.com>
X-Mailing-List: linux-kernel@vger.kernel.org
From: Xunlei Pang <xlpang@126.com>

We may be left with an overloaded RT runqueue purely because of task affinity, so when the affinity of any runnable RT task changes, we should check whether balancing can be triggered; otherwise the task suffers unnecessary real-time response latency. Unfortunately, the current global RT scheduler triggers nothing in this case.

For example: take a 2-CPU system with two runnable FIFO tasks of the same rt_priority bound to CPU0; name them rt1 (running) and rt2 (runnable). CPU1 has no RT tasks. Now someone sets the affinity of rt2 to 0x3 (i.e. CPU0 and CPU1), but rt2 still cannot be scheduled until rt1 enters schedule(). This clearly adds response latency, possibly large, for rt2.

So, when set_cpus_allowed_rt() detects such a case, check whether a push can be triggered.
Signed-off-by: Xunlei Pang <xlpang@126.com>
---
 kernel/sched/rt.c | 78 ++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 68 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 86cd79f..ac048d7 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1433,10 +1433,9 @@ static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
 	return next;
 }
 
-static struct task_struct *_pick_next_task_rt(struct rq *rq)
+static struct task_struct *peek_next_task_rt(struct rq *rq)
 {
 	struct sched_rt_entity *rt_se;
-	struct task_struct *p;
 	struct rt_rq *rt_rq = &rq->rt;
 
 	do {
@@ -1445,7 +1444,14 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq)
 		rt_rq = group_rt_rq(rt_se);
 	} while (rt_rq);
 
-	p = rt_task_of(rt_se);
+	return rt_task_of(rt_se);
+}
+
+static inline struct task_struct *_pick_next_task_rt(struct rq *rq)
+{
+	struct task_struct *p;
+
+	p = peek_next_task_rt(rq);
 	p->se.exec_start = rq_clock_task(rq);
 
 	return p;
@@ -1895,28 +1901,74 @@ static void set_cpus_allowed_rt(struct task_struct *p,
 			const struct cpumask *new_mask)
 {
 	struct rq *rq;
-	int weight;
+	int old_weight, new_weight;
+	int preempt_push = 0, direct_push = 0;
 
 	BUG_ON(!rt_task(p));
 
 	if (!task_on_rq_queued(p))
 		return;
 
-	weight = cpumask_weight(new_mask);
+	old_weight = p->nr_cpus_allowed;
+	new_weight = cpumask_weight(new_mask);
+
+	rq = task_rq(p);
+
+	if (new_weight > 1 &&
+	    rt_task(rq->curr) &&
+	    !test_tsk_need_resched(rq->curr)) {
+		/*
+		 * We own p->pi_lock and rq->lock. rq->lock might
+		 * get released when doing direct pushing, however
+		 * p->pi_lock is always held, so it's safe to assign
+		 * new_mask and new_weight to p below.
+		 */
+		if (!task_running(rq, p)) {
+			cpumask_copy(&p->cpus_allowed, new_mask);
+			p->nr_cpus_allowed = new_weight;
+			direct_push = 1;
+		} else if (cpumask_test_cpu(task_cpu(p), new_mask)) {
+			cpumask_copy(&p->cpus_allowed, new_mask);
+			p->nr_cpus_allowed = new_weight;
+			if (!cpupri_find(&rq->rd->cpupri, p, NULL))
+				goto update;
+
+			/*
+			 * At this point, current task gets migratable most
+			 * likely due to the change of its affinity, let's
+			 * figure out if we can migrate it.
+			 *
+			 * Is there any task with the same priority as that
+			 * of current task? If found one, we should resched.
+			 * NOTE: The target may be unpushable.
+			 */
+			if (p->prio == rq->rt.highest_prio.next) {
+				/* One target just in pushable_tasks list. */
+				requeue_task_rt(rq, p, 0);
+				preempt_push = 1;
+			} else if (rq->rt.rt_nr_total > 1) {
+				struct task_struct *next;
+
+				requeue_task_rt(rq, p, 0);
+				next = peek_next_task_rt(rq);
+				if (next != p && next->prio == p->prio)
+					preempt_push = 1;
+			}
+		}
+	}
+update:
 	/*
 	 * Only update if the process changes its state from whether it
 	 * can migrate or not.
 	 */
-	if ((p->nr_cpus_allowed > 1) == (weight > 1))
-		return;
-
-	rq = task_rq(p);
+	if ((old_weight > 1) == (new_weight > 1))
+		goto out;
 
 	/*
 	 * The process used to be able to migrate OR it can now migrate
 	 */
-	if (weight <= 1) {
+	if (new_weight <= 1) {
 		if (!task_current(rq, p))
			dequeue_pushable_task(rq, p);
 		BUG_ON(!rq->rt.rt_nr_migratory);
@@ -1928,6 +1980,12 @@ static void set_cpus_allowed_rt(struct task_struct *p,
 	}
 
 	update_rt_migration(&rq->rt);
+
+out:
+	if (direct_push)
+		push_rt_tasks(rq);
+	else if (preempt_push)
+		resched_curr(rq);
 }
 
 /* Assumes rq->lock is held */