From patchwork Mon Dec 19 15:14:59 2022
X-Patchwork-Submitter: Krzysztof Kozlowski
X-Patchwork-Id: 635375
From: Krzysztof Kozlowski
Wysocki" , Len Brown , Pavel Machek , Greg Kroah-Hartman , Kevin Hilman , Ulf Hansson , Daniel Lezcano , Lorenzo Pieralisi , Sudeep Holla , linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: Krzysztof Kozlowski , Adrien Thierry , Brian Masney , linux-rt-users@vger.kernel.org Subject: [PATCH v2 1/5] PM: domains: Add GENPD_FLAG_RT_SAFE for PREEMPT_RT Date: Mon, 19 Dec 2022 16:14:59 +0100 Message-Id: <20221219151503.385816-2-krzysztof.kozlowski@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20221219151503.385816-1-krzysztof.kozlowski@linaro.org> References: <20221219151503.385816-1-krzysztof.kozlowski@linaro.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org Realtime kernels with PREEMPT_RT must use raw_spinlock_t for domains which are invoked from CPU idle (thus from atomic section). Example is cpuidle PSCI domain driver which itself is PREEMPT_RT safe, but is being called as part of cpuidle. Add a flag allowing a power domain provider to indicate it is RT safe. The flag is supposed to be used with existing GENPD_FLAG_IRQ_SAFE. Cc: Adrien Thierry Cc: Brian Masney Cc: linux-rt-users@vger.kernel.org Signed-off-by: Krzysztof Kozlowski --- Independently from Adrien, I encountered the same problem around genpd when using PREEMPT_RT kernel. Previous patch by Adrien: https://lore.kernel.org/all/20220615203605.1068453-1-athierry@redhat.com/ --- drivers/base/power/domain.c | 59 +++++++++++++++++++++++++++++++++++-- include/linux/pm_domain.h | 13 ++++++++ 2 files changed, 70 insertions(+), 2 deletions(-) diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c index 967bcf9d415e..4dfce1d476f4 100644 --- a/drivers/base/power/domain.c +++ b/drivers/base/power/domain.c @@ -119,6 +119,48 @@ static const struct genpd_lock_ops genpd_spin_ops = { .unlock = genpd_unlock_spin, }; +static void genpd_lock_rawspin(struct generic_pm_domain *genpd) + __acquires(&genpd->rslock) +{ + unsigned long flags; + + raw_spin_lock_irqsave(&genpd->rslock, flags); + genpd->rlock_flags = flags; +} + +static void genpd_lock_nested_rawspin(struct generic_pm_domain *genpd, + int depth) + __acquires(&genpd->rslock) +{ + unsigned long flags; + + raw_spin_lock_irqsave_nested(&genpd->rslock, flags, depth); + genpd->rlock_flags = flags; +} + +static int genpd_lock_interruptible_rawspin(struct generic_pm_domain *genpd) + __acquires(&genpd->rslock) +{ + unsigned long flags; + + raw_spin_lock_irqsave(&genpd->rslock, flags); + genpd->rlock_flags = flags; + return 0; +} + +static void genpd_unlock_rawspin(struct generic_pm_domain *genpd) + __releases(&genpd->rslock) +{ + raw_spin_unlock_irqrestore(&genpd->rslock, genpd->rlock_flags); +} + +static const struct genpd_lock_ops genpd_rawspin_ops = { + .lock = genpd_lock_rawspin, + .lock_nested = genpd_lock_nested_rawspin, + .lock_interruptible = genpd_lock_interruptible_rawspin, + .unlock = genpd_unlock_rawspin, +}; + #define genpd_lock(p) p->lock_ops->lock(p) #define genpd_lock_nested(p, d) p->lock_ops->lock_nested(p, d) #define genpd_lock_interruptible(p) p->lock_ops->lock_interruptible(p) @@ -126,6 +168,8 @@ static const struct genpd_lock_ops genpd_spin_ops = { #define genpd_status_on(genpd) (genpd->status == GENPD_STATE_ON) #define genpd_is_irq_safe(genpd) (genpd->flags & GENPD_FLAG_IRQ_SAFE) +#define genpd_is_rt_safe(genpd) (genpd_is_irq_safe(genpd) && \ + (genpd->flags & GENPD_FLAG_RT_SAFE)) #define genpd_is_always_on(genpd) (genpd->flags & GENPD_FLAG_ALWAYS_ON) #define 
 #define genpd_is_active_wakeup(genpd)	(genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
 #define genpd_is_cpu_domain(genpd)	(genpd->flags & GENPD_FLAG_CPU_DOMAIN)
@@ -1838,6 +1882,12 @@ static int genpd_add_subdomain(struct generic_pm_domain *genpd,
 		return -EINVAL;
 	}
 
+	if (!genpd_is_rt_safe(genpd) && genpd_is_rt_safe(subdomain)) {
+		WARN(1, "Parent %s of subdomain %s must be RT safe\n",
+		     genpd->name, subdomain->name);
+		return -EINVAL;
+	}
+
 	link = kzalloc(sizeof(*link), GFP_KERNEL);
 	if (!link)
 		return -ENOMEM;
@@ -2008,8 +2058,13 @@ static void genpd_free_data(struct generic_pm_domain *genpd)
 static void genpd_lock_init(struct generic_pm_domain *genpd)
 {
 	if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
-		spin_lock_init(&genpd->slock);
-		genpd->lock_ops = &genpd_spin_ops;
+		if (genpd->flags & GENPD_FLAG_RT_SAFE) {
+			raw_spin_lock_init(&genpd->rslock);
+			genpd->lock_ops = &genpd_rawspin_ops;
+		} else {
+			spin_lock_init(&genpd->slock);
+			genpd->lock_ops = &genpd_spin_ops;
+		}
 	} else {
 		mutex_init(&genpd->mlock);
 		genpd->lock_ops = &genpd_mtx_ops;
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 1cd41bdf73cf..0a1600244963 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -61,6 +61,14 @@
  * GENPD_FLAG_MIN_RESIDENCY:	Enable the genpd governor to consider its
  *				components' next wakeup when determining the
  *				optimal idle state.
+ *
+ * GENPD_FLAG_RT_SAFE:		When used with GENPD_FLAG_IRQ_SAFE, this informs
+ *				genpd that its backend callbacks, ->power_on|off(),
+ *				do not use other spinlocks. They might use
+ *				raw_spinlocks or other pre-emption-disable
+ *				methods, all of which are PREEMPT_RT safe. Note
+ *				that, a genpd having this flag set, requires its
+ *				masterdomains to also have it set.
  */
 #define GENPD_FLAG_PM_CLK	 (1U << 0)
 #define GENPD_FLAG_IRQ_SAFE	 (1U << 1)
@@ -69,6 +77,7 @@
 #define GENPD_FLAG_CPU_DOMAIN	 (1U << 4)
 #define GENPD_FLAG_RPM_ALWAYS_ON (1U << 5)
 #define GENPD_FLAG_MIN_RESIDENCY (1U << 6)
+#define GENPD_FLAG_RT_SAFE	 (1U << 7)
 
 enum gpd_status {
 	GENPD_STATE_ON = 0,	/* PM domain is on */
@@ -164,6 +173,10 @@ struct generic_pm_domain {
 			spinlock_t slock;
 			unsigned long lock_flags;
 		};
+		struct {
+			raw_spinlock_t rslock;
+			unsigned long rlock_flags;
+		};
 	};
 
 };
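
For reference, below is a minimal, hypothetical sketch (not part of this
patch) of how a power domain provider could opt in to the new raw-spinlock
locking. All foo_* names are made up for illustration; the assumption is
that the ->power_on()/->power_off() callbacks take no sleeping locks and no
non-raw spinlocks, and the flag is only honoured together with
GENPD_FLAG_IRQ_SAFE:

/*
 * Hypothetical provider sketch, not part of this patch: a CPU power
 * domain whose ->power_on()/->power_off() callbacks only use
 * raw_spinlock_t or otherwise preemption-disabled code paths.
 */
#include <linux/pm_domain.h>

static int foo_pd_power_on(struct generic_pm_domain *pd)
{
	/* Assumed to be callable from the CPU idle path on PREEMPT_RT. */
	return 0;
}

static int foo_pd_power_off(struct generic_pm_domain *pd)
{
	return 0;
}

static struct generic_pm_domain foo_pd = {
	.name		= "foo-cpu-pd",
	.power_on	= foo_pd_power_on,
	.power_off	= foo_pd_power_off,
	/* GENPD_FLAG_RT_SAFE is only meaningful together with IRQ_SAFE. */
	.flags		= GENPD_FLAG_IRQ_SAFE | GENPD_FLAG_RT_SAFE |
			  GENPD_FLAG_CPU_DOMAIN,
};

static int __init foo_pd_init(void)
{
	/* genpd_lock_init() then selects genpd_rawspin_ops for this domain. */
	return pm_genpd_init(&foo_pd, NULL, false);
}

Note also the new check in genpd_add_subdomain(): if such a domain is later
added as a subdomain, its parent must have GENPD_FLAG_RT_SAFE set as well.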