From patchwork Wed Jan 20 11:07:39 2016
X-Patchwork-Submitter: Ulf Hansson <ulf.hansson@linaro.org>
X-Patchwork-Id: 60029
From: Ulf Hansson <ulf.hansson@linaro.org>
To: "Rafael J. Wysocki", Kevin Hilman, Ulf Hansson, linux-pm@vger.kernel.org
Cc: Len Brown, Pavel Machek, Geert Uytterhoeven, Lina Iyer,
    Lorenzo Pieralisi, Axel Haslam, Marc Titinger, Marek Szyprowski
Subject: [PATCH V2] PM / Domains: Fix potential deadlock while adding/removing subdomains
Date: Wed, 20 Jan 2016 12:07:39 +0100
Message-Id: <1453288059-1988-1-git-send-email-ulf.hansson@linaro.org>
X-Mailer: git-send-email 1.9.1

We must preserve the same order in which we acquire and release the genpd
locks, as otherwise we may encounter deadlocks.

The power on phase of a genpd starts by acquiring its lock. Then it walks
the hierarchy of its parent domains to be able to power on these first, as
per the design of genpd. From a locking perspective this means the locks of
the parents are acquired after the lock of the subdomain.
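To make the ordering hazard concrete, here is a minimal userspace sketch
(plain C with pthreads, hypothetical names; it only mirrors the locking
pattern described above, not the actual genpd code). One thread models the
power-on path, which takes the subdomain lock and then the parent lock; the
other models the old pm_genpd_add_subdomain(), which took the locks in the
opposite order. If the two threads interleave between their first and second
lock, neither can make progress.

/* ABBA deadlock sketch -- illustrative only, not the genpd implementation. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t parent_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t subdomain_lock = PTHREAD_MUTEX_INITIALIZER;

/* Models the power-on path: subdomain lock first, then the parent's lock. */
static void *poweron_path(void *unused)
{
	(void)unused;
	pthread_mutex_lock(&subdomain_lock);	/* A */
	pthread_mutex_lock(&parent_lock);	/* then B */
	puts("power-on path: locked subdomain, then parent");
	pthread_mutex_unlock(&parent_lock);
	pthread_mutex_unlock(&subdomain_lock);
	return NULL;
}

/* Models the old pm_genpd_add_subdomain(): parent first, then subdomain. */
static void *old_add_subdomain_path(void *unused)
{
	(void)unused;
	pthread_mutex_lock(&parent_lock);	/* B */
	pthread_mutex_lock(&subdomain_lock);	/* then A -- opposite order! */
	puts("old add path: locked parent, then subdomain");
	pthread_mutex_unlock(&subdomain_lock);
	pthread_mutex_unlock(&parent_lock);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	/* If t1 holds subdomain_lock while t2 holds parent_lock, both block
	 * forever on their second lock: a classic ABBA deadlock. */
	pthread_create(&t1, NULL, poweron_path, NULL);
	pthread_create(&t2, NULL, old_add_subdomain_path, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}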
Let's fix pm_genpd_add|remove_subdomain() to maintain the same order of
acquiring/releasing the genpd locks as is applied in the power on/off
sequence.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---
Changes in v2:
	Fix lockdep warning.

---
 drivers/base/power/domain.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

-- 
1.9.1

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 6ac9a7f..676d762 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -1339,8 +1339,8 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 	if (!link)
 		return -ENOMEM;
 
-	mutex_lock(&genpd->lock);
-	mutex_lock_nested(&subdomain->lock, SINGLE_DEPTH_NESTING);
+	mutex_lock(&subdomain->lock);
+	mutex_lock_nested(&genpd->lock, SINGLE_DEPTH_NESTING);
 
 	if (genpd->status == GPD_STATE_POWER_OFF
 	    && subdomain->status != GPD_STATE_POWER_OFF) {
@@ -1363,8 +1363,8 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 		genpd_sd_counter_inc(genpd);
 
  out:
-	mutex_unlock(&subdomain->lock);
 	mutex_unlock(&genpd->lock);
+	mutex_unlock(&subdomain->lock);
 	if (ret)
 		kfree(link);
 	return ret;
@@ -1385,7 +1385,8 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(subdomain))
 		return -EINVAL;
 
-	mutex_lock(&genpd->lock);
+	mutex_lock(&subdomain->lock);
+	mutex_lock_nested(&genpd->lock, SINGLE_DEPTH_NESTING);
 
 	if (!list_empty(&subdomain->slave_links) || subdomain->device_count) {
 		pr_warn("%s: unable to remove subdomain %s\n", genpd->name,
@@ -1398,22 +1399,19 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 		if (link->slave != subdomain)
 			continue;
 
-		mutex_lock_nested(&subdomain->lock, SINGLE_DEPTH_NESTING);
-
 		list_del(&link->master_node);
 		list_del(&link->slave_node);
 		kfree(link);
 		if (subdomain->status != GPD_STATE_POWER_OFF)
 			genpd_sd_counter_dec(genpd);
 
-		mutex_unlock(&subdomain->lock);
-
 		ret = 0;
 		break;
 	}
 
  out:
 	mutex_unlock(&genpd->lock);
+	mutex_unlock(&subdomain->lock);
 
 	return ret;
 }
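For reference, after this change both functions follow the same rule as the
power on/off path: take the subdomain's lock first, then the parent genpd's
lock, and release them in the reverse order. Since genpd locks typically
share a lockdep class, the inner lock is taken with
mutex_lock_nested(..., SINGLE_DEPTH_NESTING) so that lockdep does not treat
the nesting as a self-deadlock. Below is a minimal userspace sketch of that
ordering rule, again using pthreads and hypothetical names rather than the
real genpd types.

/* Consistent lock-ordering sketch -- illustrative only, not genpd code. */
#include <pthread.h>

static pthread_mutex_t parent_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t subdomain_lock = PTHREAD_MUTEX_INITIALIZER;

/* Every path that needs both locks takes them in the same order:
 * subdomain first, then parent -- matching the power on/off sequence
 * and the patched pm_genpd_add|remove_subdomain(). */
static void lock_subdomain_then_parent(void)
{
	pthread_mutex_lock(&subdomain_lock);
	pthread_mutex_lock(&parent_lock);
}

/* Release in the reverse order of acquisition, as the patch does. */
static void unlock_parent_then_subdomain(void)
{
	pthread_mutex_unlock(&parent_lock);
	pthread_mutex_unlock(&subdomain_lock);
}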