From patchwork Sun Apr 26 17:10:58 2015
From: Xunlei Pang <xlpang@126.com>
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Steven Rostedt, Juri Lelli, Ingo Molnar, Xunlei Pang
Subject: [RFC PATCH 6/6] sched/rt: Requeue p back if the preemption of
 check_preempt_equal_prio_common() failed
Date: Mon, 27 Apr 2015 01:10:58 +0800
Message-Id: <1430068258-1960-6-git-send-email-xlpang@126.com>
In-Reply-To: <1430068258-1960-1-git-send-email-xlpang@126.com>
References: <1430068258-1960-1-git-send-email-xlpang@126.com>

check_preempt_equal_prio_common() requeues "next" ahead in the run queue so that the push logic can try to push current away. When the actual push is attempted, however, the system state may have changed and the push can fail. In that case p still becomes the new current and starts running, while the previous current is queued back and left waiting in the same run queue. This breaks FIFO ordering.

This patch adds a flag, RT_PREEMPT_PUSHAWAY, to task_struct::rt_preempt. It is set in check_preempt_equal_prio_common(), and cleared once current has been moved away (i.e. when it is dequeued).
Thus we can test this flag in p's post_schedule_rt() to determine whether the push actually happened. If it failed, requeue the previous current back at the head of its run queue and trigger a reschedule.

Signed-off-by: Xunlei Pang <xlpang@126.com>
---
 kernel/sched/rt.c | 75 +++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 67 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index e7d66eb..94789f1 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -258,6 +258,8 @@ int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
 #ifdef CONFIG_SMP
 
 #define RT_PREEMPT_QUEUEAHEAD	1UL
+#define RT_PREEMPT_PUSHAWAY	2UL
+#define RT_PREEMPT_MASK	3UL
 
 /*
  * p(current) was preempted, and to be put ahead of
@@ -268,6 +270,22 @@ static inline bool rt_preempted(struct task_struct *p)
 	return !!(p->rt_preempt & RT_PREEMPT_QUEUEAHEAD);
 }
 
+static inline struct task_struct *rt_preempting_target(struct task_struct *p)
+{
+	return (struct task_struct *) (p->rt_preempt & ~RT_PREEMPT_MASK);
+}
+
+/*
+ * p(new current) is preempting and pushing previous current away.
+ */
+static inline bool rt_preempting(struct task_struct *p)
+{
+	if ((p->rt_preempt & RT_PREEMPT_PUSHAWAY) && rt_preempting_target(p))
+		return true;
+
+	return false;
+}
+
 static inline void clear_rt_preempt(struct task_struct *p)
 {
 	p->rt_preempt = 0;
@@ -375,13 +393,17 @@ static inline int has_pushable_tasks(struct rq *rq)
 	return !plist_head_empty(&rq->rt.pushable_tasks);
 }
 
-static inline void set_post_schedule(struct rq *rq)
+static inline void set_post_schedule(struct rq *rq, struct task_struct *p)
 {
-	/*
-	 * We detect this state here so that we can avoid taking the RQ
-	 * lock again later if there is no need to push
-	 */
-	rq->post_schedule = has_pushable_tasks(rq);
+	if (rt_preempting(p))
+		/* Forced post schedule */
+		rq->post_schedule = 1;
+	else
+		/*
+		 * We detect this state here so that we can avoid taking
+		 * the RQ lock again later if there is no need to push
+		 */
+		rq->post_schedule = has_pushable_tasks(rq);
 }
 
 static void
@@ -430,6 +452,11 @@ static inline bool rt_preempted(struct task_struct *p)
 	return false;
 }
 
+static inline bool rt_preempting(struct task_struct *p)
+{
+	return false;
+}
+
 static inline void clear_rt_preempt(struct task_struct *p)
 {
 }
@@ -472,7 +499,7 @@ static inline int pull_rt_task(struct rq *this_rq)
 	return 0;
 }
 
-static inline void set_post_schedule(struct rq *rq)
+static inline void set_post_schedule(struct rq *rq, struct task_struct *p)
 {
 }
 #endif /* CONFIG_SMP */
@@ -1330,6 +1357,7 @@ static void dequeue_task_rt(struct rq *rq, struct task_struct *p, int flags)
 	dequeue_rt_entity(rt_se);
 
 	dequeue_pushable_task(rq, p);
+	clear_rt_preempt(p);
 }
 
 /*
@@ -1468,6 +1496,11 @@ static void check_preempt_equal_prio_common(struct rq *rq)
 	 * to try and push current away.
 	 */
 	requeue_task_rt(rq, next, 1);
+
+	get_task_struct(curr);
+	curr->rt_preempt |= RT_PREEMPT_PUSHAWAY;
+	next->rt_preempt = (unsigned long) curr;
+	next->rt_preempt |= RT_PREEMPT_PUSHAWAY;
 	resched_curr_preempted_rt(rq);
 }
 
@@ -1590,7 +1623,7 @@ pick_next_task_rt(struct rq *rq, struct task_struct *prev)
 	/* The running task is never eligible for pushing */
 	dequeue_pushable_task(rq, p);
 
-	set_post_schedule(rq);
+	set_post_schedule(rq, p);
 
 	return p;
 }
@@ -2151,6 +2184,32 @@ skip:
 static void post_schedule_rt(struct rq *rq)
 {
 	push_rt_tasks(rq);
+
+	if (rt_preempting(current)) {
+		struct task_struct *target;
+
+		current->rt_preempt = 0;
+		target = rt_preempting_target(current);
+		if (!(target->rt_preempt & RT_PREEMPT_PUSHAWAY))
+			goto out;
+
+		/*
+		 * target still has RT_PREEMPT_PUSHAWAY set which
+		 * means it wasn't pushed away successfully if it
+		 * is still on this rq. thus restore former status
+		 * of current and target if so.
+		 */
+		if (!task_on_rq_queued(target) ||
+		    task_cpu(target) != rq->cpu)
+			goto out;
+
+		/* target is previous current, requeue it back ahead. */
+		requeue_task_rt(rq, target, 1);
+		/* Let's preempt current, loop back to __schedule(). */
+		resched_curr_preempted_rt(rq);
+out:
+		put_task_struct(target);
+	}
 }
 
 /*