From patchwork Mon Jun 20 09:23:39 2016
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 70419
Date: Mon, 20 Jun 2016 11:23:39 +0200
From: Vincent Guittot
To: Peter Zijlstra
Cc: Yuyang Du, Ingo Molnar, linux-kernel, Mike Galbraith, Benjamin Segall,
 Paul Turner, Morten Rasmussen, Dietmar Eggemann, Matt Fleming
Subject: Re: [PATCH 4/4] sched,fair: Fix PELT integrity for new tasks
Message-ID: <20160620092339.GA4526@vingu-laptop>
References: <20160617120136.064100812@infradead.org>
 <20160617120454.150630859@infradead.org>
 <20160617142814.GT30154@twins.programming.kicks-ass.net>
 <20160617160239.GL30927@twins.programming.kicks-ass.net>
 <20160617161831.GM30927@twins.programming.kicks-ass.net>
In-Reply-To: <20160617161831.GM30927@twins.programming.kicks-ass.net>

On Friday 17 Jun 2016 at 18:18:31 (+0200), Peter Zijlstra wrote:
> On Fri, Jun 17, 2016 at 06:02:39PM +0200, Peter Zijlstra wrote:
> > So yes, ho-humm, how to go about doing that bestest. Lemme have a play.
> 
> This is what I came up with, not entirely pretty, but I suppose it'll
> have to do.
> 
> ---
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -724,6 +724,7 @@ void post_init_entity_util_avg(struct sc
>  	struct cfs_rq *cfs_rq = cfs_rq_of(se);
>  	struct sched_avg *sa = &se->avg;
>  	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
> +	u64 now = cfs_rq_clock_task(cfs_rq);
> 
>  	if (cap > 0) {
>  		if (cfs_rq->avg.util_avg != 0) {
> @@ -738,7 +739,20 @@ void post_init_entity_util_avg(struct sc
>  		sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
>  	}
> 
> -	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq, false);
> +	if (entity_is_task(se)) {

Why only for tasks?

> +		struct task_struct *p = task_of(se);
> +		if (p->sched_class != &fair_sched_class) {
> +			/*
> +			 * For !fair tasks do attach_entity_load_avg()
> +			 * followed by detach_entity_load_avg() as per
> +			 * switched_from_fair().
> +			 */
> +			se->avg.last_update_time = now;
> +			return;
> +		}
> +	}
> +
> +	update_cfs_rq_load_avg(now, cfs_rq, false);
>  	attach_entity_load_avg(cfs_rq, se);

Don't we have to do a complete attach with attach_task_cfs_rq() instead of
just the load_avg, so that the depth is also set? What about something like
below?

---
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -723,6 +723,7 @@ void post_init_entity_util_avg(struct sched_entity *se)
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	struct sched_avg *sa = &se->avg;
 	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
+	u64 now = cfs_rq_clock_task(cfs_rq);
 
 	if (cap > 0) {
 		if (cfs_rq->avg.util_avg != 0) {
@@ -737,8 +738,18 @@ void post_init_entity_util_avg(struct sched_entity *se)
 		sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
 	}
 
-	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq, false);
-	attach_entity_load_avg(cfs_rq, se);
+	if (p->sched_class == &fair_sched_class) {
+		/* fair entity must be attached to cfs_rq */
+		attach_task_cfs_rq(se);
+	} else {
+		/*
+		 * For !fair tasks do attach_entity_load_avg()
+		 * followed by detach_entity_load_avg() as per
+		 * switched_from_fair().
+		 */
+		se->avg.last_update_time = now;
+	}
+
 }
 
 static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq);
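
As written, the hunk above uses p without declaring it, and attach_task_cfs_rq()
takes the task rather than the sched_entity, so here is a minimal, not
compile-tested sketch of how the same idea could be spelled out: it reuses the
entity_is_task()/task_of() guard from the patch quoted above, passes the task
to attach_task_cfs_rq() for the "complete attach" (which also resets se->depth
under CONFIG_FAIR_GROUP_SCHED before synchronizing the averages), and keeps the
original update/attach path for anything that is not a task entity. The body of
the cap > 0 block is elided since it is unchanged.

void post_init_entity_util_avg(struct sched_entity *se)
{
	struct cfs_rq *cfs_rq = cfs_rq_of(se);
	struct sched_avg *sa = &se->avg;
	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
	u64 now = cfs_rq_clock_task(cfs_rq);

	if (cap > 0) {
		/* ... initial util_avg/util_sum setup unchanged ... */
	}

	if (entity_is_task(se)) {
		struct task_struct *p = task_of(se);

		if (p->sched_class == &fair_sched_class) {
			/*
			 * Complete attach: also resets se->depth (with
			 * FAIR_GROUP_SCHED) before attaching the load/util
			 * averages to the cfs_rq.
			 */
			attach_task_cfs_rq(p);
			return;
		}

		/*
		 * For !fair tasks do attach_entity_load_avg() followed by
		 * detach_entity_load_avg() as per switched_from_fair().
		 */
		se->avg.last_update_time = now;
		return;
	}

	/* Non-task entities keep the original path. */
	update_cfs_rq_load_avg(now, cfs_rq, false);
	attach_entity_load_avg(cfs_rq, se);
}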