From patchwork Tue Jan 17 02:07:39 2023
X-Patchwork-Submitter: Srinivas Pandruvada
X-Patchwork-Id: 643628
From: Srinivas Pandruvada
To: rafael@kernel.org
Cc: linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
    daniel.lezcano@linaro.org, rui.zhang@intel.com, amitk@kernel.org,
    Srinivas Pandruvada
Subject: [PATCH v3 1/4] powercap: idle_inject: Export symbols
Date: Mon, 16 Jan 2023 18:07:39 -0800
Message-Id: <20230117020742.2760307-2-srinivas.pandruvada@linux.intel.com>
In-Reply-To: <20230117020742.2760307-1-srinivas.pandruvada@linux.intel.com>
References: <20230117020742.2760307-1-srinivas.pandruvada@linux.intel.com>
X-Mailing-List: linux-pm@vger.kernel.org

Export symbols for the external interfaces so that they can be used by
other loadable modules. The symbols are exported in the IDLE_INJECT
namespace.
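For reference, a minimal consumer module would import the namespace and use
the exported interfaces roughly as sketched below. Only the idle_inject_*()
calls and MODULE_IMPORT_NS(IDLE_INJECT) come from this series; the example_*
names and the chosen durations are illustrative:

/* example_idle_user.c - illustrative sketch, not part of this series */
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/idle_inject.h>
#include <linux/limits.h>
#include <linux/module.h>

static struct idle_inject_device *ii_dev;

static int __init example_idle_user_init(void)
{
	/* Inject idle on CPU0: 10 ms idle in every 50 ms period */
	ii_dev = idle_inject_register((struct cpumask *)cpumask_of(0));
	if (!ii_dev)
		return -EAGAIN;

	idle_inject_set_duration(ii_dev, 40000, 10000);	/* run us, idle us */
	idle_inject_set_latency(ii_dev, UINT_MAX);

	return idle_inject_start(ii_dev);
}
module_init(example_idle_user_init);

static void __exit example_idle_user_exit(void)
{
	idle_inject_stop(ii_dev);
	idle_inject_unregister(ii_dev);
}
module_exit(example_idle_user_exit);

/* Resolves the namespaced exports added by this patch */
MODULE_IMPORT_NS(IDLE_INJECT);
MODULE_LICENSE("GPL");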
Signed-off-by: Srinivas Pandruvada --- v2/v3: No change drivers/powercap/idle_inject.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/drivers/powercap/idle_inject.c b/drivers/powercap/idle_inject.c index fe86a09e3b67..dfa989182e71 100644 --- a/drivers/powercap/idle_inject.c +++ b/drivers/powercap/idle_inject.c @@ -160,6 +160,7 @@ void idle_inject_set_duration(struct idle_inject_device *ii_dev, WRITE_ONCE(ii_dev->idle_duration_us, idle_duration_us); } } +EXPORT_SYMBOL_NS_GPL(idle_inject_set_duration, IDLE_INJECT); /** * idle_inject_get_duration - idle and run duration retrieval helper @@ -174,6 +175,7 @@ void idle_inject_get_duration(struct idle_inject_device *ii_dev, *run_duration_us = READ_ONCE(ii_dev->run_duration_us); *idle_duration_us = READ_ONCE(ii_dev->idle_duration_us); } +EXPORT_SYMBOL_NS_GPL(idle_inject_get_duration, IDLE_INJECT); /** * idle_inject_set_latency - set the maximum latency allowed @@ -185,6 +187,7 @@ void idle_inject_set_latency(struct idle_inject_device *ii_dev, { WRITE_ONCE(ii_dev->latency_us, latency_us); } +EXPORT_SYMBOL_NS_GPL(idle_inject_set_latency, IDLE_INJECT); /** * idle_inject_start - start idle injections @@ -216,6 +219,7 @@ int idle_inject_start(struct idle_inject_device *ii_dev) return 0; } +EXPORT_SYMBOL_NS_GPL(idle_inject_start, IDLE_INJECT); /** * idle_inject_stop - stops idle injections @@ -262,6 +266,7 @@ void idle_inject_stop(struct idle_inject_device *ii_dev) cpu_hotplug_enable(); } +EXPORT_SYMBOL_NS_GPL(idle_inject_stop, IDLE_INJECT); /** * idle_inject_setup - prepare the current task for idle injection @@ -337,6 +342,7 @@ struct idle_inject_device *idle_inject_register(struct cpumask *cpumask) return NULL; } +EXPORT_SYMBOL_NS_GPL(idle_inject_register, IDLE_INJECT); /** * idle_inject_unregister - unregister idle injection control device @@ -357,6 +363,7 @@ void idle_inject_unregister(struct idle_inject_device *ii_dev) kfree(ii_dev); } +EXPORT_SYMBOL_NS_GPL(idle_inject_unregister, IDLE_INJECT); static struct smp_hotplug_thread idle_inject_threads = { .store = &idle_inject_thread.tsk, From patchwork Tue Jan 17 02:07:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Srinivas Pandruvada X-Patchwork-Id: 643627 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1414DC46467 for ; Tue, 17 Jan 2023 02:07:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235247AbjAQCHu (ORCPT ); Mon, 16 Jan 2023 21:07:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54322 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235150AbjAQCHq (ORCPT ); Mon, 16 Jan 2023 21:07:46 -0500 Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 80BFC1DBB6; Mon, 16 Jan 2023 18:07:45 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1673921265; x=1705457265; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=pf5odtJ7eC1FC8bG+y+UlkBGlbR6fMPmvDJxk/oztQc=; b=YO61pgNBmqTC5wrpngjDCOMsW5oUxF43Gom7AtIEsKYll/rBgFafWiBY Hg+q4gsJWJOXelgzxNO8Eezr/U+c3IohTqkfLOS6B7xdnRcy82bk/DLlq o7gww8wsB5jBnfmokuwoWf2zDpHB0IX1ZjAgc0U6/gzXeT/jJyqm/d+hy 
From: Srinivas Pandruvada
To: rafael@kernel.org
Cc: linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
    daniel.lezcano@linaro.org, rui.zhang@intel.com, amitk@kernel.org,
    Srinivas Pandruvada
Subject: [PATCH v3 2/4] powercap: idle_inject: Add update callback
Date: Mon, 16 Jan 2023 18:07:40 -0800
Message-Id: <20230117020742.2760307-3-srinivas.pandruvada@linux.intel.com>
In-Reply-To: <20230117020742.2760307-1-srinivas.pandruvada@linux.intel.com>
References: <20230117020742.2760307-1-srinivas.pandruvada@linux.intel.com>
X-Mailing-List: linux-pm@vger.kernel.org

The powercap/idle_inject core uses play_idle_precise() to inject idle
time, but play_idle_precise() cannot guarantee that the CPU stays fully
idle for the requested duration, because interrupts wake it up. To
compensate for the idle time lost to these wakeups, the caller can
adjust the requested idle time for the next cycle. Since the goal of
idle injection is to keep the system at a given idle percentage on
average, it is fine to overshoot or undershoot the instantaneous idle
time.

The idle inject core provides idle_inject_set_duration() to set the
idle and run durations. Some architectures provide an interface to read
the actual idle time observed by the hardware, so the effective idle
percentage can be adjusted using this hardware feedback. For example,
Intel CPUs provide package idle counters, which the intel_powerclamp
driver currently uses to readjust the run duration. When the caller's
desired idle time over a period is less than or greater than the idle
time actually observed by the hardware, the caller can readjust the
idle and run durations for the next cycle. Currently, the only way to
do this is to monitor the hardware idle time from a separate software
thread and readjust the durations with idle_inject_set_duration(). This
can be avoided by adding a callback which callers can register and use
to readjust the durations.

Add the ability to register an optional update() callback, which the
idle inject core calls before waking up the CPUs for idle injection.
The callback is registered via a new interface,
idle_inject_register_full(). While the idle and run durations are being
constantly adjusted, there can be cycles in which the actual idle time
already exceeds the desired one; in that case idle injection can be
skipped for a cycle. If the update() callback returns false, the idle
inject core skips waking up the CPUs for that injection cycle.
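To illustrate the intended usage, a caller could combine the new interface
with hardware feedback roughly as sketched below. idle_inject_register_full(),
the update() return-value semantics and idle_inject_set_duration() are the
interfaces added or used by this patch; the example_* helpers, the 25% target
and the durations are hypothetical:

/* Illustrative sketch only, not part of this patch */
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/idle_inject.h>

static struct idle_inject_device *ii_dev;

/* Hypothetical hardware-feedback helper (e.g. based on package idle counters) */
static unsigned int example_hw_idle_pct(void)
{
	return 20;	/* pretend the hardware reported 20% idle last window */
}

/* Called by the idle inject core right before the CPUs are woken up */
static bool example_update(void)
{
	unsigned int actual = example_hw_idle_pct();

	if (actual > 25)	/* already idler than the 25% target */
		return false;	/* skip this injection cycle */

	/* otherwise readjust run/idle time for the next cycle */
	idle_inject_set_duration(ii_dev, 72000, 24000);	/* run us, idle us */

	return true;
}

static int example_register(void)
{
	static cpumask_t mask;
	unsigned long cpu;

	for_each_present_cpu(cpu)
		cpumask_set_cpu(cpu, &mask);

	ii_dev = idle_inject_register_full(&mask, example_update);
	if (!ii_dev)
		return -EAGAIN;

	idle_inject_set_duration(ii_dev, 72000, 24000);

	return idle_inject_start(ii_dev);
}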
Signed-off-by: Srinivas Pandruvada --- v3: - Replace prepare/complete callback with update callback v2 - Replace begin/end with prepare/complete - Add new interface idle_inject_register_full with callbacks - Update kernel doc - Update commit description drivers/powercap/idle_inject.c | 50 ++++++++++++++++++++++++++++++---- include/linux/idle_inject.h | 3 ++ 2 files changed, 47 insertions(+), 6 deletions(-) diff --git a/drivers/powercap/idle_inject.c b/drivers/powercap/idle_inject.c index dfa989182e71..c57b40477246 100644 --- a/drivers/powercap/idle_inject.c +++ b/drivers/powercap/idle_inject.c @@ -63,13 +63,27 @@ struct idle_inject_thread { * @idle_duration_us: duration of CPU idle time to inject * @run_duration_us: duration of CPU run time to allow * @latency_us: max allowed latency + * @update: Optional callback deciding whether or not to skip idle + * injection in the given cycle. * @cpumask: mask of CPUs affected by idle injection + * + * This structure is used to define per instance idle inject device data. Each + * instance has an idle duration, a run duration and mask of CPUs to inject + * idle. + * Actual idle is injected by calling kernel scheduler interface + * play_idle_precise(). There is one optional callbacks which the caller can + * register by calling idle_inject_register_full(): + * update() - This callback is called just before waking up CPUs to inject + * idle. If this callback returns false, CPUs are not woken up to inject idle + * for this cycle. Also gives opportunity to the caller to readjust idle + * and run duration by calling idle_inject_set_duration() for the next cycle. */ struct idle_inject_device { struct hrtimer timer; unsigned int idle_duration_us; unsigned int run_duration_us; unsigned int latency_us; + bool (*update)(void); unsigned long cpumask[]; }; @@ -111,11 +125,12 @@ static enum hrtimer_restart idle_inject_timer_fn(struct hrtimer *timer) struct idle_inject_device *ii_dev = container_of(timer, struct idle_inject_device, timer); + if (!ii_dev->update || (ii_dev->update && ii_dev->update())) + idle_inject_wakeup(ii_dev); + duration_us = READ_ONCE(ii_dev->run_duration_us); duration_us += READ_ONCE(ii_dev->idle_duration_us); - idle_inject_wakeup(ii_dev); - hrtimer_forward_now(timer, ns_to_ktime(duration_us * NSEC_PER_USEC)); return HRTIMER_RESTART; @@ -295,17 +310,22 @@ static int idle_inject_should_run(unsigned int cpu) } /** - * idle_inject_register - initialize idle injection on a set of CPUs + * idle_inject_register_full - initialize idle injection on a set of CPUs * @cpumask: CPUs to be affected by idle injection + * @update: This callback is called just before waking up CPUs to inject + * idle * * This function creates an idle injection control device structure for the - * given set of CPUs and initializes the timer associated with it. It does not - * start any injection cycles. + * given set of CPUs and initializes the timer associated with it. This + * function also allows to register update()callback. + * It does not start any injection cycles. * * Return: NULL if memory allocation fails, idle injection control device * pointer on success. 
*/ -struct idle_inject_device *idle_inject_register(struct cpumask *cpumask) + +struct idle_inject_device *idle_inject_register_full(struct cpumask *cpumask, + bool (*update)(void)) { struct idle_inject_device *ii_dev; int cpu, cpu_rb; @@ -318,6 +338,7 @@ struct idle_inject_device *idle_inject_register(struct cpumask *cpumask) hrtimer_init(&ii_dev->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); ii_dev->timer.function = idle_inject_timer_fn; ii_dev->latency_us = UINT_MAX; + ii_dev->update = update; for_each_cpu(cpu, to_cpumask(ii_dev->cpumask)) { @@ -342,6 +363,23 @@ struct idle_inject_device *idle_inject_register(struct cpumask *cpumask) return NULL; } +EXPORT_SYMBOL_NS_GPL(idle_inject_register_full, IDLE_INJECT); + +/** + * idle_inject_register - initialize idle injection on a set of CPUs + * @cpumask: CPUs to be affected by idle injection + * + * This function creates an idle injection control device structure for the + * given set of CPUs and initializes the timer associated with it. It does not + * start any injection cycles. + * + * Return: NULL if memory allocation fails, idle injection control device + * pointer on success. + */ +struct idle_inject_device *idle_inject_register(struct cpumask *cpumask) +{ + return idle_inject_register_full(cpumask, NULL); +} EXPORT_SYMBOL_NS_GPL(idle_inject_register, IDLE_INJECT); /** diff --git a/include/linux/idle_inject.h b/include/linux/idle_inject.h index fb88e23a99d3..a85d5dd40f72 100644 --- a/include/linux/idle_inject.h +++ b/include/linux/idle_inject.h @@ -13,6 +13,9 @@ struct idle_inject_device; struct idle_inject_device *idle_inject_register(struct cpumask *cpumask); +struct idle_inject_device *idle_inject_register_full(struct cpumask *cpumask, + bool (*update)(void)); + void idle_inject_unregister(struct idle_inject_device *ii_dev); int idle_inject_start(struct idle_inject_device *ii_dev); From patchwork Tue Jan 17 02:07:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Srinivas Pandruvada X-Patchwork-Id: 645372 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 92E17C678D4 for ; Tue, 17 Jan 2023 02:07:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235063AbjAQCHv (ORCPT ); Mon, 16 Jan 2023 21:07:51 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54330 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235234AbjAQCHr (ORCPT ); Mon, 16 Jan 2023 21:07:47 -0500 Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8C0331E5D4; Mon, 16 Jan 2023 18:07:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1673921266; x=1705457266; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=mbjxjRJpKdYMyGyhoeFzAzpGOChFBeyIyypJoapGAuA=; b=Dr9aJAwoGCbGGwXXSyLcTRRa+k/zaMULkmSZMjQ7NdSHk3Oa0NAF5v0q 6EjEAM/rvPDArmD5ctBqYLrF+r+6FpnWmkAZuCwwZz9XojaMQkGNtJ6fN Sn4p6RE3N2wlb/ouO4cAbJkaV6rW1o882iNt0wxTaDghjqm0IV4FlEnLh AdOSAl7TxWg4yLQgWLp1J/LsYJ/4Q5W5A2ReebgH5A5OuizC2ZzQtWAsZ 0/u534hWcx1DJQyXW2KG/+5I2u02iV/IsCwZhsLZSMgX8IM/gLtj8z2O8 Wvb0gb8rgUsDXC2Y4Vj3PZzGg9BtWlkVy98F2R1olrW/nZBWKNGQgnvc0 A==; X-IronPort-AV: E=McAfee;i="6500,9779,10592"; 
a="323287636" X-IronPort-AV: E=Sophos;i="5.97,222,1669104000"; d="scan'208";a="323287636" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Jan 2023 18:07:44 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10592"; a="636703092" X-IronPort-AV: E=Sophos;i="5.97,222,1669104000"; d="scan'208";a="636703092" Received: from spandruv-desk.jf.intel.com ([10.54.75.8]) by orsmga006.jf.intel.com with ESMTP; 16 Jan 2023 18:07:43 -0800 From: Srinivas Pandruvada To: rafael@kernel.org Cc: linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org, daniel.lezcano@linaro.org, rui.zhang@intel.com, amitk@kernel.org, Srinivas Pandruvada , kernel test robot Subject: [PATCH v3 3/4] thermal/drivers/intel_powerclamp: Use powercap idle-inject framework Date: Mon, 16 Jan 2023 18:07:41 -0800 Message-Id: <20230117020742.2760307-4-srinivas.pandruvada@linux.intel.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20230117020742.2760307-1-srinivas.pandruvada@linux.intel.com> References: <20230117020742.2760307-1-srinivas.pandruvada@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org There are two idle injection implementation in the Linux kernel. One via intel_powerclamp and the other using powercap/idle_inject. Both implementation end up in calling play_idle* function from a FIFO priority thread. Both can't be used at the same time. Currently per core idle injection (cpuidle_cooling) is using powercap/idle_inject, which is not used in platforms where intel_powerclamp is used for system wide idle injection. So there is no conflict. But there are some use cases where per core idle injection is beneficial on the same system where system wide idle injection is also used via intel_powerclamp. To avoid conflict only one of the idle injection type must be in use at a time. This require a common framework which both per core and system wide idle injection can use. Here powercap/idle_inject can be used for both per-core and for system wide idle injection. This framework has a well defined interface which allow registry for per-core or for all CPUs (system wide). If particular CPU is already participating in idle injection, the call to registry fails. Here the registry can be done when user space changes the current cooling device state. Also one framework for idle injection is better as there is one loop calling play_idle*, instead of multiple for better maintenance. So, reuse powercap/idle_inject calls in intel_powerclamp. This simplifies the code as all per CPU kthreads which calls play_idle* can be removed. The changes include: - Remove unneeded include files - Remove per CPU kthread workers: balancing_work and idle_injection_work. - Reuse the compensation related code by moving from previous worker thread to idle_injection callback. - Adjust the idle_duration and runtime by using powercap/idle_inject interface. - Remove all variables, which are not required once powercap/idle_inject is used. - Add mutex to avoid race during removal of idle injection during module unload and user action to change idle inject percent. Also for protection during dynamic adjustment of run and idle time from update() callback. 
- Remove online/offline callbacks to designate control CPU Signed-off-by: Srinivas Pandruvada --- v3: - Use Update callback which is per device - Remove use of control_cpu and online/offline callback to set this v2: - Use idle_inject_register_full instead of idle_inject_register - Also fix dependency issue with POWERCAP config Reported-by: kernel test robot drivers/thermal/intel/Kconfig | 2 + drivers/thermal/intel/intel_powerclamp.c | 328 ++++++++--------------- 2 files changed, 119 insertions(+), 211 deletions(-) diff --git a/drivers/thermal/intel/Kconfig b/drivers/thermal/intel/Kconfig index f0c845679250..6c2a95f41c81 100644 --- a/drivers/thermal/intel/Kconfig +++ b/drivers/thermal/intel/Kconfig @@ -3,6 +3,8 @@ config INTEL_POWERCLAMP tristate "Intel PowerClamp idle injection driver" depends on X86 depends on CPU_SUP_INTEL + select POWERCAP + select IDLE_INJECT help Enable this to enable Intel PowerClamp idle injection driver. This enforce idle time which results in more package C-state residency. The diff --git a/drivers/thermal/intel/intel_powerclamp.c b/drivers/thermal/intel/intel_powerclamp.c index b80e25ec1261..777476e3d1f9 100644 --- a/drivers/thermal/intel/intel_powerclamp.c +++ b/drivers/thermal/intel/intel_powerclamp.c @@ -2,7 +2,7 @@ /* * intel_powerclamp.c - package c-state idle injection * - * Copyright (c) 2012, Intel Corporation. + * Copyright (c) 2012-2023, Intel Corporation. * * Authors: * Arjan van de Ven @@ -27,21 +27,15 @@ #include #include #include -#include #include #include -#include -#include #include #include -#include -#include +#include -#include #include #include #include -#include #define MAX_TARGET_RATIO (50U) /* For each undisturbed clamping period (no extra wake ups during idle time), @@ -60,33 +54,24 @@ static struct dentry *debug_dir; /* user selected target */ static unsigned int set_target_ratio; +static bool target_ratio_updated; static unsigned int current_ratio; static bool should_skip; -static unsigned int control_cpu; /* The cpu assigned to collect stat and update - * control parameters. default to BSP but BSP - * can be offlined. - */ -static bool clamping; - -struct powerclamp_worker_data { - struct kthread_worker *worker; - struct kthread_work balancing_work; - struct kthread_delayed_work idle_injection_work; +struct powerclamp_data { unsigned int cpu; unsigned int count; unsigned int guard; unsigned int window_size_now; unsigned int target_ratio; - unsigned int duration_jiffies; bool clamping; }; -static struct powerclamp_worker_data __percpu *worker_data; +static struct powerclamp_data powerclamp_data; + static struct thermal_cooling_device *cooling_dev; -static unsigned long *cpu_clamping_mask; /* bit map for tracking per cpu - * clamping kthread worker - */ + +static DEFINE_MUTEX(powerclamp_lock); static unsigned int duration; static unsigned int pkg_cstate_ratio_cur; @@ -344,79 +329,33 @@ static bool powerclamp_adjust_controls(unsigned int target_ratio, return set_target_ratio + guard <= current_ratio; } -static void clamp_balancing_func(struct kthread_work *work) +static unsigned int get_run_time(void) { - struct powerclamp_worker_data *w_data; - int sleeptime; - unsigned long target_jiffies; unsigned int compensated_ratio; - int interval; /* jiffies to sleep for each attempt */ - - w_data = container_of(work, struct powerclamp_worker_data, - balancing_work); + unsigned int runtime; /* * make sure user selected ratio does not take effect until * the next round. 
adjust target_ratio if user has changed * target such that we can converge quickly. */ - w_data->target_ratio = READ_ONCE(set_target_ratio); - w_data->guard = 1 + w_data->target_ratio / 20; - w_data->window_size_now = window_size; - w_data->duration_jiffies = msecs_to_jiffies(duration); - w_data->count++; + powerclamp_data.target_ratio = set_target_ratio; + powerclamp_data.guard = 1 + powerclamp_data.target_ratio / 20; + powerclamp_data.window_size_now = window_size; /* * systems may have different ability to enter package level * c-states, thus we need to compensate the injected idle ratio * to achieve the actual target reported by the HW. */ - compensated_ratio = w_data->target_ratio + - get_compensation(w_data->target_ratio); + compensated_ratio = powerclamp_data.target_ratio + + get_compensation(powerclamp_data.target_ratio); if (compensated_ratio <= 0) compensated_ratio = 1; - interval = w_data->duration_jiffies * 100 / compensated_ratio; - - /* align idle time */ - target_jiffies = roundup(jiffies, interval); - sleeptime = target_jiffies - jiffies; - if (sleeptime <= 0) - sleeptime = 1; - - if (clamping && w_data->clamping && cpu_online(w_data->cpu)) - kthread_queue_delayed_work(w_data->worker, - &w_data->idle_injection_work, - sleeptime); -} -static void clamp_idle_injection_func(struct kthread_work *work) -{ - struct powerclamp_worker_data *w_data; - - w_data = container_of(work, struct powerclamp_worker_data, - idle_injection_work.work); + runtime = duration * 100 / compensated_ratio - duration; - /* - * only elected controlling cpu can collect stats and update - * control parameters. - */ - if (w_data->cpu == control_cpu && - !(w_data->count % w_data->window_size_now)) { - should_skip = - powerclamp_adjust_controls(w_data->target_ratio, - w_data->guard, - w_data->window_size_now); - smp_mb(); - } - - if (should_skip) - goto balance; - - play_idle(jiffies_to_usecs(w_data->duration_jiffies)); - -balance: - if (clamping && w_data->clamping && cpu_online(w_data->cpu)) - kthread_queue_work(w_data->worker, &w_data->balancing_work); + return runtime; } /* @@ -452,126 +391,116 @@ static void poll_pkg_cstate(struct work_struct *dummy) msr_last = msr_now; tsc_last = tsc_now; - if (true == clamping) + if (powerclamp_data.clamping) schedule_delayed_work(&poll_pkg_cstate_work, HZ); } -static void start_power_clamp_worker(unsigned long cpu) +static struct idle_inject_device *ii_dev; + +static bool idle_inject_begin(void) { - struct powerclamp_worker_data *w_data = per_cpu_ptr(worker_data, cpu); - struct kthread_worker *worker; + bool update; - worker = kthread_create_worker_on_cpu(cpu, 0, "kidle_inj/%ld", cpu); - if (IS_ERR(worker)) - return; + mutex_lock(&powerclamp_lock); - w_data->worker = worker; - w_data->count = 0; - w_data->cpu = cpu; - w_data->clamping = true; - set_bit(cpu, cpu_clamping_mask); - sched_set_fifo(worker->task); - kthread_init_work(&w_data->balancing_work, clamp_balancing_func); - kthread_init_delayed_work(&w_data->idle_injection_work, - clamp_idle_injection_func); - kthread_queue_work(w_data->worker, &w_data->balancing_work); -} + if (!(powerclamp_data.count % powerclamp_data.window_size_now)) { -static void stop_power_clamp_worker(unsigned long cpu) -{ - struct powerclamp_worker_data *w_data = per_cpu_ptr(worker_data, cpu); + should_skip = powerclamp_adjust_controls(powerclamp_data.target_ratio, + powerclamp_data.guard, + powerclamp_data.window_size_now); + update = true; + } - if (!w_data->worker) - return; + if (update || target_ratio_updated) { + unsigned int 
runtime; - w_data->clamping = false; - /* - * Make sure that all works that get queued after this point see - * the clamping disabled. The counter part is not needed because - * there is an implicit memory barrier when the queued work - * is proceed. - */ - smp_wmb(); - kthread_cancel_work_sync(&w_data->balancing_work); - kthread_cancel_delayed_work_sync(&w_data->idle_injection_work); - /* - * The balancing work still might be queued here because - * the handling of the "clapming" variable, cancel, and queue - * operations are not synchronized via a lock. But it is not - * a big deal. The balancing work is fast and destroy kthread - * will wait for it. - */ - clear_bit(w_data->cpu, cpu_clamping_mask); - kthread_destroy_worker(w_data->worker); + runtime = get_run_time(); + idle_inject_set_duration(ii_dev, runtime, duration); + target_ratio_updated = false; + } + + powerclamp_data.count++; + + mutex_unlock(&powerclamp_lock); + + if (should_skip) + return false; - w_data->worker = NULL; + return true; } -static int start_power_clamp(void) +static void trigger_idle_injection(void) { - unsigned long cpu; + unsigned int runtime = get_run_time(); - set_target_ratio = clamp(set_target_ratio, 0U, MAX_TARGET_RATIO - 1); - /* prevent cpu hotplug */ - cpus_read_lock(); + idle_inject_set_duration(ii_dev, runtime, duration); + idle_inject_start(ii_dev); + powerclamp_data.clamping = true; +} - /* prefer BSP */ - control_cpu = cpumask_first(cpu_online_mask); +static int powerclamp_idle_injection_register(void) +{ + static cpumask_t idle_injection_cpu_mask; + unsigned long cpu; - clamping = true; - schedule_delayed_work(&poll_pkg_cstate_work, 0); + /* + * The idle inject core will only inject for online CPUs, + * So we can register for all present CPUs. In this way + * if some CPU goes online/offline while idle inject + * is registered, nothing additional calls are required. + * The same runtime and idle time is applicable for + * newly onlined CPUs if any. + */ + for_each_present_cpu(cpu) { + cpumask_set_cpu(cpu, &idle_injection_cpu_mask); + } - /* start one kthread worker per online cpu */ - for_each_online_cpu(cpu) { - start_power_clamp_worker(cpu); + ii_dev = idle_inject_register_full(&idle_injection_cpu_mask, + idle_inject_begin); + if (!ii_dev) { + pr_err("powerclamp: idle_inject_register failed\n"); + return -EAGAIN; } - cpus_read_unlock(); + + idle_inject_set_duration(ii_dev, TICK_USEC, duration); + idle_inject_set_latency(ii_dev, UINT_MAX); return 0; } -static void end_power_clamp(void) +static void remove_idle_injection(void) { - int i; + if (!powerclamp_data.clamping) + return; - /* - * Block requeuing in all the kthread workers. They will flush and - * stop faster. 
- */ - clamping = false; - for_each_set_bit(i, cpu_clamping_mask, num_possible_cpus()) { - pr_debug("clamping worker for cpu %d alive, destroy\n", i); - stop_power_clamp_worker(i); - } + powerclamp_data.clamping = false; + idle_inject_stop(ii_dev); } -static int powerclamp_cpu_online(unsigned int cpu) +static int start_power_clamp(void) { - if (clamping == false) - return 0; - start_power_clamp_worker(cpu); - /* prefer BSP as controlling CPU */ - if (cpu == 0) { - control_cpu = 0; - smp_mb(); + int ret; + + /* prevent cpu hotplug */ + cpus_read_lock(); + + ret = powerclamp_idle_injection_register(); + if (!ret) { + trigger_idle_injection(); + schedule_delayed_work(&poll_pkg_cstate_work, 0); } - return 0; -} -static int powerclamp_cpu_predown(unsigned int cpu) -{ - if (clamping == false) - return 0; + cpus_read_unlock(); - stop_power_clamp_worker(cpu); - if (cpu != control_cpu) - return 0; + return ret; +} - control_cpu = cpumask_first(cpu_online_mask); - if (control_cpu == cpu) - control_cpu = cpumask_next(cpu, cpu_online_mask); - smp_mb(); - return 0; +static void end_power_clamp(void) +{ + if (powerclamp_data.clamping) { + remove_idle_injection(); + idle_inject_unregister(ii_dev); + } } static int powerclamp_get_max_state(struct thermal_cooling_device *cdev, @@ -585,7 +514,7 @@ static int powerclamp_get_max_state(struct thermal_cooling_device *cdev, static int powerclamp_get_cur_state(struct thermal_cooling_device *cdev, unsigned long *state) { - if (true == clamping) + if (powerclamp_data.clamping) *state = pkg_cstate_ratio_cur; else /* to save power, do not poll idle ratio while not clamping */ @@ -599,24 +528,30 @@ static int powerclamp_set_cur_state(struct thermal_cooling_device *cdev, { int ret = 0; + mutex_lock(&powerclamp_lock); + new_target_ratio = clamp(new_target_ratio, 0UL, - (unsigned long) (MAX_TARGET_RATIO-1)); - if (set_target_ratio == 0 && new_target_ratio > 0) { + (unsigned long) (MAX_TARGET_RATIO - 1)); + if (!set_target_ratio && new_target_ratio > 0) { pr_info("Start idle injection to reduce power\n"); set_target_ratio = new_target_ratio; ret = start_power_clamp(); + if (ret) + set_target_ratio = 0; goto exit_set; } else if (set_target_ratio > 0 && new_target_ratio == 0) { pr_info("Stop forced idle injection\n"); end_power_clamp(); set_target_ratio = 0; + target_ratio_updated = false; } else /* adjust currently running */ { set_target_ratio = new_target_ratio; - /* make new set_target_ratio visible to other cpus */ - smp_mb(); + target_ratio_updated= true; } exit_set: + mutex_unlock(&powerclamp_lock); + return ret; } @@ -657,7 +592,6 @@ static int powerclamp_debug_show(struct seq_file *m, void *unused) { int i = 0; - seq_printf(m, "controlling cpu: %d\n", control_cpu); seq_printf(m, "pct confidence steady dynamic (compensation)\n"); for (i = 0; i < MAX_TARGET_RATIO; i++) { seq_printf(m, "%d\t%lu\t%lu\t%lu\n", @@ -680,75 +614,47 @@ static inline void powerclamp_create_debug_files(void) &powerclamp_debug_fops); } -static enum cpuhp_state hp_state; - static int __init powerclamp_init(void) { int retval; - cpu_clamping_mask = bitmap_zalloc(num_possible_cpus(), GFP_KERNEL); - if (!cpu_clamping_mask) - return -ENOMEM; - /* probe cpu features and ids here */ retval = powerclamp_probe(); if (retval) - goto exit_free; + return retval; /* set default limit, maybe adjusted during runtime based on feedback */ window_size = 2; - retval = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, - "thermal/intel_powerclamp:online", - powerclamp_cpu_online, - powerclamp_cpu_predown); - if 
(retval < 0) - goto exit_free; - - hp_state = retval; - - worker_data = alloc_percpu(struct powerclamp_worker_data); - if (!worker_data) { - retval = -ENOMEM; - goto exit_unregister; - } cooling_dev = thermal_cooling_device_register("intel_powerclamp", NULL, - &powerclamp_cooling_ops); - if (IS_ERR(cooling_dev)) { - retval = -ENODEV; - goto exit_free_thread; - } + &powerclamp_cooling_ops); + if (IS_ERR(cooling_dev)) + return -ENODEV; if (!duration) - duration = jiffies_to_msecs(DEFAULT_DURATION_JIFFIES); + duration = jiffies_to_usecs(DEFAULT_DURATION_JIFFIES); powerclamp_create_debug_files(); return 0; - -exit_free_thread: - free_percpu(worker_data); -exit_unregister: - cpuhp_remove_state_nocalls(hp_state); -exit_free: - bitmap_free(cpu_clamping_mask); - return retval; } module_init(powerclamp_init); static void __exit powerclamp_exit(void) { + mutex_lock(&powerclamp_lock); end_power_clamp(); - cpuhp_remove_state_nocalls(hp_state); - free_percpu(worker_data); + mutex_unlock(&powerclamp_lock); + thermal_cooling_device_unregister(cooling_dev); - bitmap_free(cpu_clamping_mask); cancel_delayed_work_sync(&poll_pkg_cstate_work); debugfs_remove_recursive(debug_dir); } module_exit(powerclamp_exit); +MODULE_IMPORT_NS(IDLE_INJECT); + MODULE_LICENSE("GPL"); MODULE_AUTHOR("Arjan van de Ven "); MODULE_AUTHOR("Jacob Pan "); From patchwork Tue Jan 17 02:07:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Srinivas Pandruvada X-Patchwork-Id: 645373 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 49A90C54EBE for ; Tue, 17 Jan 2023 02:07:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235235AbjAQCHs (ORCPT ); Mon, 16 Jan 2023 21:07:48 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54324 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235218AbjAQCHq (ORCPT ); Mon, 16 Jan 2023 21:07:46 -0500 Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8FF401E5C6; Mon, 16 Jan 2023 18:07:45 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1673921265; x=1705457265; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=a03wXaPuzbn9nzd/98O2P8UNKaBO2/Tln+bJKFI215w=; b=KbqLVRoAmmke/8038rOo993JHAuAA5pJUrYVbgOWVJNZ8nFAPubhUICL b8l8JxTZl74wuIimc+Edfzo//k/s/cetAQZUD433p8Z2CthJZSt2l91Jm 67xVZzH3VjijD6Lc8GWfn0hhqcuveC+uBB1XhhgGNf6ZO3XDU/4NiGG4t 8+fAhUJtrdg3dE6vBjmO8yH7kvv0NUC6mhMGXlTc2sERBafIJMJEp1UrB AeBfc8kcdZZSjDehsuG639JdttE80xlQ7NoHkXhy9gyWKo9RvhK5xC2++ MGBR243zej2L3oQ/d02NL9dY3/Yl2LuxVTe6lG/wyBkfJdN+LXwKUOmPH Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10592"; a="323287639" X-IronPort-AV: E=Sophos;i="5.97,222,1669104000"; d="scan'208";a="323287639" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Jan 2023 18:07:44 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10592"; a="636703095" X-IronPort-AV: E=Sophos;i="5.97,222,1669104000"; d="scan'208";a="636703095" Received: from spandruv-desk.jf.intel.com ([10.54.75.8]) by orsmga006.jf.intel.com with ESMTP; 16 Jan 2023 18:07:43 -0800 From: Srinivas Pandruvada To: 
rafael@kernel.org
Cc: linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
    daniel.lezcano@linaro.org, rui.zhang@intel.com, amitk@kernel.org,
    Srinivas Pandruvada
Subject: [PATCH v3 4/4] thermal/drivers/intel_cpu_idle_cooling: Introduce Intel cpu idle cooling driver
Date: Mon, 16 Jan 2023 18:07:42 -0800
Message-Id: <20230117020742.2760307-5-srinivas.pandruvada@linux.intel.com>
In-Reply-To: <20230117020742.2760307-1-srinivas.pandruvada@linux.intel.com>
References: <20230117020742.2760307-1-srinivas.pandruvada@linux.intel.com>
X-Mailing-List: linux-pm@vger.kernel.org

CPU idle cooling cools down a CPU by injecting idle cycles at runtime.
The objective is similar to the intel_powerclamp driver, which provides
system-wide cooling by injecting idle on every CPU. This driver is
modeled after drivers/thermal/cpuidle_cooling.c and reuses the
powercap/idle_inject framework.

A thermal cooling device is registered for each CPU as it comes online.
The minimum state of the cooling device is 0 and the maximum is 100.
When user space changes the current state to a non-zero value, the
driver registers with the idle inject framework and starts idle
injection. The default idle duration is 24 milliseconds, matching
intel_powerclamp, and does not change with the current cooling device
state; the run time is adjusted based on the current state.

Signed-off-by: Srinivas Pandruvada
---
v3:
 Suggestions from Daniel:
 - Remove the unused cpuidle_cpu_mask.
 - "Why not register idle_inject here before calling
   thermal_cooling_device_register() so you get rid of the lock?"
   We can't, because the cooling device is registered during CPU
   online. If idle injection were also registered there, every CPU
   would be registered with the idle inject core, so intel_powerclamp
   could not register those CPUs again. The logic here is to register
   only when the user actually changes the cooling device state from 0
   to 1. This way, either per-CPU cooling or all-CPU cooling
   (intel_powerclamp) can be used, one at a time.
 - Reuse cpuidle_cooling_device.c: the RFC tried that, but it does not
   save much reuse. The of_device separation and the separate idle
   injection registration would need separate functions, so for a clean
   implementation a new module was added. It adds very few lines of new
   code.
v2:
 - Removed callback arguments for idle_inject_register.

 drivers/thermal/intel/Kconfig                 |  10 +
 drivers/thermal/intel/Makefile                |   1 +
 .../thermal/intel/intel_cpu_idle_cooling.c    | 259 ++++++++++++++++++
 3 files changed, 270 insertions(+)
 create mode 100644 drivers/thermal/intel/intel_cpu_idle_cooling.c

diff --git a/drivers/thermal/intel/Kconfig b/drivers/thermal/intel/Kconfig
index 6c2a95f41c81..8c88d6e18414 100644
--- a/drivers/thermal/intel/Kconfig
+++ b/drivers/thermal/intel/Kconfig
@@ -115,3 +115,13 @@ config INTEL_HFI_THERMAL
 	  These capabilities may change as a result of changes in the operating
 	  conditions of the system such power and thermal limits. If selected,
 	  the kernel relays updates in CPUs' capabilities to userspace.
+
+config INTEL_CPU_IDLE_COOLING
+	tristate "Intel CPU idle cooling device"
+	depends on IDLE_INJECT
+	help
+	  This implements the CPU cooling mechanism through
+	  idle injection. This will throttle the CPU by injecting
+	  idle cycles.
+	  Unlike the Intel PowerClamp driver, this driver provides
+	  idle injection for each CPU.
diff --git a/drivers/thermal/intel/Makefile b/drivers/thermal/intel/Makefile index 9a8d8054f316..8d5f7b5cf9b7 100644 --- a/drivers/thermal/intel/Makefile +++ b/drivers/thermal/intel/Makefile @@ -14,3 +14,4 @@ obj-$(CONFIG_INTEL_TCC_COOLING) += intel_tcc_cooling.o obj-$(CONFIG_X86_THERMAL_VECTOR) += therm_throt.o obj-$(CONFIG_INTEL_MENLOW) += intel_menlow.o obj-$(CONFIG_INTEL_HFI_THERMAL) += intel_hfi.o +obj-$(CONFIG_INTEL_CPU_IDLE_COOLING) += intel_cpu_idle_cooling.o diff --git a/drivers/thermal/intel/intel_cpu_idle_cooling.c b/drivers/thermal/intel/intel_cpu_idle_cooling.c new file mode 100644 index 000000000000..f157018ffb63 --- /dev/null +++ b/drivers/thermal/intel/intel_cpu_idle_cooling.c @@ -0,0 +1,259 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Per CPU Idle injection cooling device implementation + * + * Copyright (c) 2022, Intel Corporation. + * All rights reserved. + * + */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +/* Duration match with intel_powerclamp driver */ +#define IDLE_DURATION 24000 +#define IDLE_LATENCY UINT_MAX + +static int idle_duration_us = IDLE_DURATION; +static int idle_latency_us = IDLE_LATENCY; + +module_param(idle_duration_us, int, 0644); +MODULE_PARM_DESC(idle_duration_us, + "Idle duration in us."); + +module_param(idle_latency_us, int, 0644); +MODULE_PARM_DESC(idle_latency_us, + "Idle latency in us."); + +/** + * struct cpuidle_cooling - Per instance data for cooling device + * @cpu: CPU number for this cooling device + * @ii_dev: Idle inject core instance pointer + * @cdev: Thermal core cooling device instance + * @state: Current cooling device state + * + * Stores per instance cooling device state. + */ +struct cpuidle_cooling { + int cpu; + struct idle_inject_device *ii_dev; + struct thermal_cooling_device *cdev; + unsigned long state; +}; + +static DEFINE_PER_CPU(struct cpuidle_cooling, cooling_devs); + +/* Used for module unload protection with idle injection operations */ +static DEFINE_MUTEX(idle_cooling_lock); + +static unsigned int cpuidle_cooling_runtime(unsigned int idle_duration_us, + unsigned long state) +{ + if (!state) + return 0; + + return ((idle_duration_us * 100) / state) - idle_duration_us; +} + +static int cpuidle_idle_injection_register(struct cpuidle_cooling *cooling_dev) +{ + struct idle_inject_device *ii_dev; + + ii_dev = idle_inject_register((struct cpumask *)cpumask_of(cooling_dev->cpu)); + if (!ii_dev) { + /* + * It is busy as some other device claimed idle injection for this CPU + * Also it is possible that memory allocation failure. 
+ */ + pr_err("idle_inject_register failed for cpu:%d\n", cooling_dev->cpu); + return -EAGAIN; + } + + idle_inject_set_duration(ii_dev, TICK_USEC, idle_duration_us); + idle_inject_set_latency(ii_dev, idle_latency_us); + + cooling_dev->ii_dev = ii_dev; + + return 0; +} + +static void cpuidle_idle_injection_unregister(struct cpuidle_cooling *cooling_dev) +{ + idle_inject_unregister(cooling_dev->ii_dev); +} + +static int cpuidle_cooling_get_max_state(struct thermal_cooling_device *cdev, + unsigned long *state) +{ + *state = 100; + + return 0; +} + +static int cpuidle_cooling_get_cur_state(struct thermal_cooling_device *cdev, + unsigned long *state) +{ + struct cpuidle_cooling *cooling_dev = cdev->devdata; + + *state = READ_ONCE(cooling_dev->state); + + return 0; +} + +static int cpuidle_cooling_set_cur_state(struct thermal_cooling_device *cdev, + unsigned long state) +{ + struct cpuidle_cooling *cooling_dev = cdev->devdata; + unsigned int runtime_us; + unsigned long curr_state; + int ret = 0; + + mutex_lock(&idle_cooling_lock); + + curr_state = READ_ONCE(cooling_dev->state); + + if (!curr_state && state > 0) { + /* + * This is the first time to start cooling, so register with + * idle injection framework. + */ + if (!cooling_dev->ii_dev) { + ret = cpuidle_idle_injection_register(cooling_dev); + if (ret) + goto unlock_set_state; + } + + runtime_us = cpuidle_cooling_runtime(idle_duration_us, state); + + idle_inject_set_duration(cooling_dev->ii_dev, runtime_us, idle_duration_us); + idle_inject_start(cooling_dev->ii_dev); + } else if (curr_state > 0 && state) { + /* Simply update runtime */ + runtime_us = cpuidle_cooling_runtime(idle_duration_us, state); + idle_inject_set_duration(cooling_dev->ii_dev, runtime_us, idle_duration_us); + } else if (curr_state > 0 && !state) { + idle_inject_stop(cooling_dev->ii_dev); + cpuidle_idle_injection_unregister(cooling_dev); + cooling_dev->ii_dev = NULL; + } + + WRITE_ONCE(cooling_dev->state, state); + +unlock_set_state: + mutex_unlock(&idle_cooling_lock); + + return ret; +} + +/** + * cpuidle_cooling_ops - thermal cooling device ops + */ +static struct thermal_cooling_device_ops cpuidle_cooling_ops = { + .get_max_state = cpuidle_cooling_get_max_state, + .get_cur_state = cpuidle_cooling_get_cur_state, + .set_cur_state = cpuidle_cooling_set_cur_state, +}; + +static int cpuidle_cooling_register(int cpu) +{ + struct cpuidle_cooling *cooling_dev = &per_cpu(cooling_devs, cpu); + struct thermal_cooling_device *cdev; + char name[14]; /* storage for cpuidle-XXXX */ + int ret = 0; + + mutex_lock(&idle_cooling_lock); + + snprintf(name, sizeof(name), "cpuidle-%d", cpu); + cdev = thermal_cooling_device_register(name, cooling_dev, &cpuidle_cooling_ops); + if (IS_ERR(cdev)) { + ret = PTR_ERR(cdev); + goto unlock_register; + } + + cooling_dev->cdev = cdev; + cooling_dev->cpu = cpu; + +unlock_register: + mutex_unlock(&idle_cooling_lock); + + return ret; +} + +static void cpuidle_cooling_unregister(int cpu) +{ + struct cpuidle_cooling *cooling_dev = &per_cpu(cooling_devs, cpu); + + mutex_lock(&idle_cooling_lock); + + if (cooling_dev->state) { + idle_inject_stop(cooling_dev->ii_dev); + cpuidle_idle_injection_unregister(cooling_dev); + } + + thermal_cooling_device_unregister(cooling_dev->cdev); + cooling_dev->state = 0; + + mutex_unlock(&idle_cooling_lock); +} + +static int cpuidle_cooling_cpu_online(unsigned int cpu) +{ + cpuidle_cooling_register(cpu); + + return 0; +} + +static int cpuidle_cooling_cpu_offline(unsigned int cpu) +{ + cpuidle_cooling_unregister(cpu); + + return 
0; +} + +static enum cpuhp_state cpuidle_cooling_hp_state __read_mostly; + +static const struct x86_cpu_id intel_cpuidle_cooling_ids[] __initconst = { + X86_MATCH_VENDOR_FEATURE(INTEL, X86_FEATURE_MWAIT, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, intel_cpuidle_cooling_ids); + +static int __init cpuidle_cooling_init(void) +{ + int ret; + + if (!x86_match_cpu(intel_cpuidle_cooling_ids)) + return -ENODEV; + + ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, + "thermal/cpuidle_cooling:online", + cpuidle_cooling_cpu_online, + cpuidle_cooling_cpu_offline); + if (ret < 0) + return ret; + + cpuidle_cooling_hp_state = ret; + + return 0; +} +module_init(cpuidle_cooling_init) + +static void __exit cpuidle_cooling_exit(void) +{ + cpuhp_remove_state(cpuidle_cooling_hp_state); +} +module_exit(cpuidle_cooling_exit) + +MODULE_IMPORT_NS(IDLE_INJECT); + +MODULE_LICENSE("GPL");