From patchwork Fri Jul 19 13:59:53 2019
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 169255
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org, tj@kernel.org
Cc: linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it,
    claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it,
    bristot@redhat.com, mathieu.poirier@linaro.org, lizefan@huawei.com,
    longman@redhat.com, dietmar.eggemann@arm.com, cgroups@vger.kernel.org
Subject: [PATCH v9 1/8] sched/topology: Add partition_sched_domains_locked()
Date: Fri, 19 Jul 2019 15:59:53 +0200
Message-Id: <20190719140000.31694-2-juri.lelli@redhat.com>
In-Reply-To: <20190719140000.31694-1-juri.lelli@redhat.com>
References: <20190719140000.31694-1-juri.lelli@redhat.com>

From: Mathieu Poirier

Introduce partition_sched_domains_locked() by taking the mutex locking
code out of the original function. That way the work done by
partition_sched_domains_locked() can be reused without dropping the
mutex lock.

No change of functionality is introduced by this patch.
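The split above follows a common locking pattern: a `_locked` variant that assumes the caller already holds the mutex, plus a thin public wrapper that takes and drops it. A minimal user-space sketch of that pattern, with illustrative names only (this is not the kernel API; a pthread mutex stands in for `sched_domains_mutex`, and a comment stands in for `lockdep_assert_held()`):

```c
#include <assert.h>
#include <pthread.h>

/*
 * Sketch of the _locked/wrapper split. All names are hypothetical;
 * the real kernel code rebuilds sched domains here.
 */
static pthread_mutex_t domains_mutex = PTHREAD_MUTEX_INITIALIZER;
static int ndoms_cur;

/* Call with domains_mutex held: reusable from paths that must not drop it. */
static void rebuild_domains_locked(int ndoms_new)
{
	/* kernel code would use lockdep_assert_held(&domains_mutex) */
	ndoms_cur = ndoms_new;
}

/* Public entry point: acquires the lock, delegates, releases. */
static void rebuild_domains(int ndoms_new)
{
	pthread_mutex_lock(&domains_mutex);
	rebuild_domains_locked(ndoms_new);
	pthread_mutex_unlock(&domains_mutex);
}
```

A caller that already holds the mutex (as cpuset code holding `sched_domains_mutex` will, later in this series) calls the `_locked` variant directly; everyone else goes through the wrapper, so the behavior of the original entry point is unchanged.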
Signed-off-by: Mathieu Poirier
Acked-by: Tejun Heo
---
 include/linux/sched/topology.h | 10 ++++++++++
 kernel/sched/topology.c        | 17 +++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

-- 
2.17.2

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index cfc0a89a7159..d7166f8c0215 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -161,6 +161,10 @@ static inline struct cpumask *sched_domain_span(struct sched_domain *sd)
 	return to_cpumask(sd->span);
 }
 
+extern void partition_sched_domains_locked(int ndoms_new,
+					   cpumask_var_t doms_new[],
+					   struct sched_domain_attr *dattr_new);
+
 extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 				    struct sched_domain_attr *dattr_new);
 
@@ -213,6 +217,12 @@ unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
 
 struct sched_domain_attr;
 
+static inline void
+partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+			       struct sched_domain_attr *dattr_new)
+{
+}
+
 static inline void
 partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 			struct sched_domain_attr *dattr_new)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index f53f89df837d..362c383ec4bd 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2159,16 +2159,16 @@ static int dattrs_equal(struct sched_domain_attr *cur, int idx_cur,
  * ndoms_new == 0 is a special case for destroying existing domains,
  * and it will not create the default domain.
  *
- * Call with hotplug lock held
+ * Call with hotplug lock and sched_domains_mutex held
  */
-void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
-			     struct sched_domain_attr *dattr_new)
+void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+				    struct sched_domain_attr *dattr_new)
 {
 	bool __maybe_unused has_eas = false;
 	int i, j, n;
 	int new_topology;
 
-	mutex_lock(&sched_domains_mutex);
+	lockdep_assert_held(&sched_domains_mutex);
 
 	/* Always unregister in case we don't destroy any domains: */
 	unregister_sched_domain_sysctl();
 
@@ -2251,6 +2251,15 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 	ndoms_cur = ndoms_new;
 
 	register_sched_domain_sysctl();
+}
+
+/*
+ * Call with hotplug lock held
+ */
+void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+			     struct sched_domain_attr *dattr_new)
+{
+	mutex_lock(&sched_domains_mutex);
+	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
 
 	mutex_unlock(&sched_domains_mutex);
 }

From patchwork Fri Jul 19 13:59:54 2019
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 169254
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org, tj@kernel.org
Cc: linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it,
    claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it,
    bristot@redhat.com, mathieu.poirier@linaro.org, lizefan@huawei.com,
    longman@redhat.com, dietmar.eggemann@arm.com, cgroups@vger.kernel.org
Subject: [PATCH v9 2/8] sched/core: Streamline calls to task_rq_unlock()
Date: Fri, 19 Jul 2019 15:59:54 +0200
Message-Id: <20190719140000.31694-3-juri.lelli@redhat.com>
In-Reply-To: <20190719140000.31694-1-juri.lelli@redhat.com>
References: <20190719140000.31694-1-juri.lelli@redhat.com>

From: Mathieu Poirier

Calls to task_rq_unlock() are made several times in
__sched_setscheduler().
This is fine when only the rq lock needs to be handled, but not so much
when other locks come into play. This patch streamlines the release of
the rq lock so that only one location needs to be modified when dealing
with more than one lock.

No change of functionality is introduced by this patch.

Signed-off-by: Mathieu Poirier
Reviewed-by: Steven Rostedt (VMware)
Acked-by: Tejun Heo
---
 kernel/sched/core.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

-- 
2.17.2

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 874c427742a9..acd6a9fe85bc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4222,8 +4222,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	 * Changing the policy of the stop threads its a very bad idea:
 	 */
 	if (p == rq->stop) {
-		task_rq_unlock(rq, p, &rf);
-		return -EINVAL;
+		retval = -EINVAL;
+		goto unlock;
 	}
 
 	/*
@@ -4239,8 +4239,8 @@ static int __sched_setscheduler(struct task_struct *p,
 			goto change;
 
 		p->sched_reset_on_fork = reset_on_fork;
-		task_rq_unlock(rq, p, &rf);
-		return 0;
+		retval = 0;
+		goto unlock;
 	}
 change:
 
@@ -4253,8 +4253,8 @@ static int __sched_setscheduler(struct task_struct *p,
 		if (rt_bandwidth_enabled() && rt_policy(policy) &&
 				task_group(p)->rt_bandwidth.rt_runtime == 0 &&
 				!task_group_is_autogroup(task_group(p))) {
-			task_rq_unlock(rq, p, &rf);
-			return -EPERM;
+			retval = -EPERM;
+			goto unlock;
 		}
 #endif
 #ifdef CONFIG_SMP
@@ -4269,8 +4269,8 @@ static int __sched_setscheduler(struct task_struct *p,
 			 */
 			if (!cpumask_subset(span, &p->cpus_allowed) ||
 			    rq->rd->dl_bw.bw == 0) {
-				task_rq_unlock(rq, p, &rf);
-				return -EPERM;
+				retval = -EPERM;
+				goto unlock;
 			}
 		}
 #endif
@@ -4289,8 +4289,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	 * is available.
 	 */
 	if ((dl_policy(policy) || dl_task(p)) && sched_dl_overflow(p, policy, attr)) {
-		task_rq_unlock(rq, p, &rf);
-		return -EBUSY;
+		retval = -EBUSY;
+		goto unlock;
 	}
 
 	p->sched_reset_on_fork = reset_on_fork;
@@ -4346,6 +4346,10 @@ static int __sched_setscheduler(struct task_struct *p,
 	preempt_enable();
 
 	return 0;
+
+unlock:
+	task_rq_unlock(rq, p, &rf);
+	return retval;
 }
 
 static int _sched_setscheduler(struct task_struct *p, int policy,
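Reduced to its essentials, the transformation replaces every early `task_rq_unlock(); return -E...;` pair with `retval = -E...; goto unlock;`, so the unlock sequence lives in exactly one place and adding a second lock later touches only that one exit path. A user-space sketch of the pattern, with illustrative names and checks (a pthread mutex stands in for the rq lock; this is not the actual `__sched_setscheduler()` logic):

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Single-exit error handling: each failing check sets retval and
 * jumps to the one label that releases the lock. The checks below
 * are placeholders for the real policy validation.
 */
static int set_policy(int policy)
{
	int retval;

	pthread_mutex_lock(&rq_lock);

	if (policy < 0) {		/* invalid request */
		retval = -EINVAL;
		goto unlock;
	}
	if (policy > 7) {		/* caller not permitted */
		retval = -EPERM;
		goto unlock;
	}
	retval = 0;			/* success funnels through too */
unlock:
	pthread_mutex_unlock(&rq_lock);
	return retval;
}
```

If a second lock must later be taken alongside `rq_lock` (as the rest of this series does with `sched_domains_mutex`), only the `unlock:` block needs to grow, instead of every early-return site.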