From patchwork Wed Oct 5 07:33:13 2016
X-Patchwork-Submitter: Daniel Lezcano <daniel.lezcano@linaro.org>
X-Patchwork-Id: 77277
From: Daniel Lezcano <daniel.lezcano@linaro.org>
To: rjw@rjwysocki.net
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Subject: [PATCH 2/2] cpuidle: governors: Move the files to the upper directory
Date: Wed, 5 Oct 2016 09:33:13 +0200
Message-Id: <1475652794-4486-2-git-send-email-daniel.lezcano@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1475652794-4486-1-git-send-email-daniel.lezcano@linaro.org>
References: <1475652794-4486-1-git-send-email-daniel.lezcano@linaro.org>

Currently the different governors are stored in the 'governors' subdirectory.
That is not a problem in itself, but it forces some private structures to be
declared in the include/linux/cpuidle.h header, because the governor files do
not have access to the private 'cpuidle.h' header located in drivers/cpuidle.

Instead of keeping the governors in a separate directory, move them alongside
the drivers and prefix them with 'governor-'. This allows a proper cleanup of
the cpuidle headers.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
---
 drivers/cpuidle/Makefile           |   7 +-
 drivers/cpuidle/governor-ladder.c  | 197 +++++++++++++++
 drivers/cpuidle/governor-menu.c    | 496 +++++++++++++++++++++++++++++++++++++
 drivers/cpuidle/governors/Makefile |   6 -
 drivers/cpuidle/governors/ladder.c | 197 ---------------
 drivers/cpuidle/governors/menu.c   | 496 -------------------------------------
 6 files changed, 699 insertions(+), 700 deletions(-)
 create mode 100644 drivers/cpuidle/governor-ladder.c
 create mode 100644 drivers/cpuidle/governor-menu.c
 delete mode 100644 drivers/cpuidle/governors/Makefile
 delete mode 100644 drivers/cpuidle/governors/ladder.c
 delete mode 100644 drivers/cpuidle/governors/menu.c

-- 
1.9.1

diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
index 3ba81b1..b21ada9 100644
--- a/drivers/cpuidle/Makefile
+++ b/drivers/cpuidle/Makefile
@@ -2,7 +2,7 @@
 # Makefile for cpuidle.
 #
 
-obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
+obj-y += cpuidle.o driver.o governor.o sysfs.o
 obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
 obj-$(CONFIG_DT_IDLE_STATES)		  += dt_idle_states.o
 
@@ -27,3 +27,8 @@ obj-$(CONFIG_MIPS_CPS_CPUIDLE)	+= cpuidle-cps.o
 # POWERPC drivers
 obj-$(CONFIG_PSERIES_CPUIDLE)	+= cpuidle-pseries.o
 obj-$(CONFIG_POWERNV_CPUIDLE)	+= cpuidle-powernv.o
+
+###############################################################################
+# Governors
+obj-$(CONFIG_CPU_IDLE_GOV_LADDER) += governor-ladder.o
+obj-$(CONFIG_CPU_IDLE_GOV_MENU) += governor-menu.o
diff --git a/drivers/cpuidle/governor-ladder.c b/drivers/cpuidle/governor-ladder.c
new file mode 100644
index 0000000..fe8f089
--- /dev/null
+++ b/drivers/cpuidle/governor-ladder.c
@@ -0,0 +1,197 @@
+/*
+ * ladder.c - the residency ladder algorithm
+ *
+ * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com>
+ * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.diefenbaugh@intel.com>
+ * Copyright (C) 2004, 2005 Dominik Brodowski <linux@brodo.de>
+ *
+ * (C) 2006-2007 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
+ *               Shaohua Li <shaohua.li@intel.com>
+ *               Adam Belay <abelay@novell.com>
+ *
+ * This code is licenced under the GPL.
+ */
+
+#include <linux/kernel.h>
+#include <linux/cpuidle.h>
+#include <linux/pm_qos.h>
+#include <linux/jiffies.h>
+#include <linux/tick.h>
+
+#include <asm/io.h>
+#include <asm/uaccess.h>
+
+#define PROMOTION_COUNT 4
+#define DEMOTION_COUNT 1
+
+struct ladder_device_state {
+	struct {
+		u32 promotion_count;
+		u32 demotion_count;
+		u32 promotion_time;
+		u32 demotion_time;
+	} threshold;
+	struct {
+		int promotion_count;
+		int demotion_count;
+	} stats;
+};
+
+struct ladder_device {
+	struct ladder_device_state states[CPUIDLE_STATE_MAX];
+	int last_state_idx;
+};
+
+static DEFINE_PER_CPU(struct ladder_device, ladder_devices);
+
+/**
+ * ladder_do_selection - prepares private data for a state change
+ * @ldev: the ladder device
+ * @old_idx: the current state index
+ * @new_idx: the new target state index
+ */
+static inline void ladder_do_selection(struct ladder_device *ldev,
+				       int old_idx, int new_idx)
+{
+	ldev->states[old_idx].stats.promotion_count = 0;
+	ldev->states[old_idx].stats.demotion_count = 0;
+	ldev->last_state_idx = new_idx;
+}
+
+/**
+ * ladder_select_state - selects the next state to enter
+ * @drv: cpuidle driver
+ * @dev: the CPU
+ */
+static int ladder_select_state(struct cpuidle_driver *drv,
+			       struct cpuidle_device *dev)
+{
+	struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
+	struct ladder_device_state *last_state;
+	int last_residency, last_idx = ldev->last_state_idx;
+	int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
+
+	/* Special case when user has set very strict latency requirement */
+	if (unlikely(latency_req == 0)) {
+		ladder_do_selection(ldev, last_idx, 0);
+		return 0;
+	}
+
+	last_state = &ldev->states[last_idx];
+
+	last_residency = cpuidle_get_last_residency(dev) - drv->states[last_idx].exit_latency;
+
+	/* consider promotion */
+	if (last_idx < drv->state_count - 1 &&
+	    !drv->states[last_idx + 1].disabled &&
+	    !dev->states_usage[last_idx + 1].disable &&
+	    last_residency > last_state->threshold.promotion_time &&
+	    drv->states[last_idx + 1].exit_latency <= latency_req) {
+		last_state->stats.promotion_count++;
+		last_state->stats.demotion_count = 0;
+		if (last_state->stats.promotion_count >= last_state->threshold.promotion_count) {
+			ladder_do_selection(ldev, last_idx, last_idx + 1);
+			return last_idx + 1;
+		}
+	}
+
+	/* consider demotion */
+	if (last_idx > CPUIDLE_DRIVER_STATE_START &&
+	    (drv->states[last_idx].disabled ||
+	    dev->states_usage[last_idx].disable ||
+	    drv->states[last_idx].exit_latency > latency_req)) {
+		int i;
+
+		for (i = last_idx - 1; i > CPUIDLE_DRIVER_STATE_START; i--) {
+			if (drv->states[i].exit_latency <= latency_req)
+				break;
+		}
+		ladder_do_selection(ldev, last_idx, i);
+		return i;
+	}
+
+	if (last_idx > CPUIDLE_DRIVER_STATE_START &&
+	    last_residency < last_state->threshold.demotion_time) {
+		last_state->stats.demotion_count++;
+		last_state->stats.promotion_count = 0;
+		if (last_state->stats.demotion_count >= last_state->threshold.demotion_count) {
+			ladder_do_selection(ldev, last_idx, last_idx - 1);
+			return last_idx - 1;
+		}
+	}
+
+	/* otherwise remain at the current state */
+	return last_idx;
+}
+
+/**
+ * ladder_enable_device - setup for the governor
+ * @drv: cpuidle driver
+ * @dev: the CPU
+ */
+static int ladder_enable_device(struct cpuidle_driver *drv,
+				struct cpuidle_device *dev)
+{
+	int i;
+	struct ladder_device *ldev = &per_cpu(ladder_devices, dev->cpu);
+	struct ladder_device_state *lstate;
+	struct cpuidle_state *state;
+
+	ldev->last_state_idx = CPUIDLE_DRIVER_STATE_START;
+
+	for (i = CPUIDLE_DRIVER_STATE_START; i < drv->state_count; i++) {
+		state = &drv->states[i];
+		lstate = &ldev->states[i];
+
+		lstate->stats.promotion_count = 0;
+		lstate->stats.demotion_count = 0;
+
+		lstate->threshold.promotion_count = PROMOTION_COUNT;
+		lstate->threshold.demotion_count = DEMOTION_COUNT;
+
+		if (i < drv->state_count - 1)
+			lstate->threshold.promotion_time = state->exit_latency;
+		if (i > CPUIDLE_DRIVER_STATE_START)
+			lstate->threshold.demotion_time = state->exit_latency;
+	}
+
+	return 0;
+}
+
+/**
+ * ladder_reflect - update the correct last_state_idx
+ * @dev: the CPU
+ * @index: the index of actual state entered
+ */
+static void ladder_reflect(struct cpuidle_device *dev, int index)
+{
+	struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
+	if (index > 0)
+		ldev->last_state_idx = index;
+}
+
+static struct cpuidle_governor ladder_governor = {
+	.name =		"ladder",
+	.rating =	10,
+	.enable =	ladder_enable_device,
+	.select =	ladder_select_state,
+	.reflect =	ladder_reflect,
+};
+
+/**
+ * init_ladder - initializes the governor
+ */
+static int __init init_ladder(void)
+{
+	/*
+	 * When NO_HZ is disabled, or when booting with nohz=off, the ladder
+	 * governor is better so give it a higher rating than the menu
+	 * governor.
+	 */
+	if (!tick_nohz_enabled)
+		ladder_governor.rating = 25;
+
+	return cpuidle_register_governor(&ladder_governor);
+}
+
+postcore_initcall(init_ladder);
diff --git a/drivers/cpuidle/governor-menu.c b/drivers/cpuidle/governor-menu.c
new file mode 100644
index 0000000..d9b5b93
--- /dev/null
+++ b/drivers/cpuidle/governor-menu.c
@@ -0,0 +1,496 @@
+/*
+ * menu.c - the menu idle governor
+ *
+ * Copyright (C) 2006-2007 Adam Belay <abelay@novell.com>
+ * Copyright (C) 2009 Intel Corporation
+ * Author:
+ *        Arjan van de Ven <arjan@linux.intel.com>
+ *
+ * This code is licenced under the GPL version 2 as described
+ * in the COPYING file that accompanies the Linux Kernel.
+ */
+
+#include <linux/kernel.h>
+#include <linux/cpuidle.h>
+#include <linux/pm_qos.h>
+#include <linux/time.h>
+#include <linux/ktime.h>
+#include <linux/hrtimer.h>
+#include <linux/tick.h>
+#include <linux/sched.h>
+#include <linux/math64.h>
+
+/*
+ * Please note when changing the tuning values:
+ * If (MAX_INTERESTING-1) * RESOLUTION > UINT_MAX, the result of
+ * a scaling operation multiplication may overflow on 32 bit platforms.
+ * In that case, #define RESOLUTION as ULL to get 64 bit result:
+ * #define RESOLUTION 1024ULL
+ *
+ * The default values do not overflow.
+ */
+#define BUCKETS 12
+#define INTERVAL_SHIFT 3
+#define INTERVALS (1UL << INTERVAL_SHIFT)
+#define RESOLUTION 1024
+#define DECAY 8
+#define MAX_INTERESTING 50000
+
+
+/*
+ * Concepts and ideas behind the menu governor
+ *
+ * For the menu governor, there are 3 decision factors for picking a C
+ * state:
+ * 1) Energy break even point
+ * 2) Performance impact
+ * 3) Latency tolerance (from pmqos infrastructure)
+ * These three factors are treated independently.
+ *
+ * Energy break even point
+ * -----------------------
+ * C state entry and exit have an energy cost, and a certain amount of time in
+ * the C state is required to actually break even on this cost. CPUIDLE
+ * provides us this duration in the "target_residency" field. So all that we
+ * need is a good prediction of how long we'll be idle. Like the traditional
+ * menu governor, we start with the actual known "next timer event" time.
+ *
+ * Since there are other sources of wakeups (interrupts for example) than
+ * the next timer event, this estimation is rather optimistic. To get a
+ * more realistic estimate, a correction factor is applied to the estimate,
+ * that is based on historic behavior. For example, if in the past the actual
+ * duration always was 50% of the next timer tick, the correction factor will
+ * be 0.5.
+ *
+ * menu uses a running average for this correction factor, however it uses a
+ * set of factors, not just a single factor. This stems from the realization
+ * that the ratio is dependent on the order of magnitude of the expected
+ * duration; if we expect 500 milliseconds of idle time the likelihood of
+ * getting an interrupt very early is much higher than if we expect 50 micro
+ * seconds of idle time. A second independent factor that has big impact on
+ * the actual factor is if there is (disk) IO outstanding or not.
+ * (as a special twist, we consider every sleep longer than 50 milliseconds
+ * as perfect; there are no power gains for sleeping longer than this)
+ *
+ * For these two reasons we keep an array of 12 independent factors, that gets
+ * indexed based on the magnitude of the expected duration as well as the
+ * "is IO outstanding" property.
+ *
+ * Repeatable-interval-detector
+ * ----------------------------
+ * There are some cases where "next timer" is a completely unusable predictor:
+ * Those cases where the interval is fixed, for example due to hardware
+ * interrupt mitigation, but also due to fixed transfer rate devices such as
+ * mice.
+ * For this, we use a different predictor: We track the duration of the last 8
+ * intervals and if the standard deviation of these 8 intervals is below a
+ * threshold value, we use the average of these intervals as prediction.
+ *
+ * Limiting Performance Impact
+ * ---------------------------
+ * C states, especially those with large exit latencies, can have a real
+ * noticeable impact on workloads, which is not acceptable for most sysadmins,
+ * and in addition, less performance has a power price of its own.
+ *
+ * As a general rule of thumb, menu assumes that the following heuristic
+ * holds:
+ *     The busier the system, the less impact of C states is acceptable
+ *
+ * This rule-of-thumb is implemented using a performance-multiplier:
+ * If the exit latency times the performance multiplier is longer than
+ * the predicted duration, the C state is not considered a candidate
+ * for selection due to a too high performance impact. So the higher
+ * this multiplier is, the longer we need to be idle to pick a deep C
+ * state, and thus the less likely a busy CPU will hit such a deep
+ * C state.
+ *
+ * Two factors are used in determining this multiplier:
+ * a value of 10 is added for each point of "per cpu load average" we have.
+ * a value of 5 points is added for each process that is waiting for
+ * IO on this CPU.
+ * (these values are experimentally determined)
+ *
+ * The load average factor gives a longer term (few seconds) input to the
+ * decision, while the iowait value gives a cpu local instantaneous input.
+ * The iowait factor may look low, but realize that this is also already
+ * represented in the system load average.
+ *
+ */
+
+struct menu_device {
+	int		last_state_idx;
+	int		needs_update;
+
+	unsigned int	next_timer_us;
+	unsigned int	predicted_us;
+	unsigned int	bucket;
+	unsigned int	correction_factor[BUCKETS];
+	unsigned int	intervals[INTERVALS];
+	int		interval_ptr;
+};
+
+
+#define LOAD_INT(x) ((x) >> FSHIFT)
+#define LOAD_FRAC(x) LOAD_INT(((x) & (FIXED_1-1)) * 100)
+
+static inline int get_loadavg(unsigned long load)
+{
+	return LOAD_INT(load) * 10 + LOAD_FRAC(load) / 10;
+}
+
+static inline int which_bucket(unsigned int duration, unsigned long nr_iowaiters)
+{
+	int bucket = 0;
+
+	/*
+	 * We keep two groups of stats; one with IO pending, one without.
+	 * This allows us to calculate
+	 * E(duration)|iowait
+	 */
+	if (nr_iowaiters)
+		bucket = BUCKETS/2;
+
+	if (duration < 10)
+		return bucket;
+	if (duration < 100)
+		return bucket + 1;
+	if (duration < 1000)
+		return bucket + 2;
+	if (duration < 10000)
+		return bucket + 3;
+	if (duration < 100000)
+		return bucket + 4;
+	return bucket + 5;
+}
+
+/*
+ * Return a multiplier for the exit latency that is intended
+ * to take performance requirements into account.
+ * The more performance critical we estimate the system
+ * to be, the higher this multiplier, and thus the higher
+ * the barrier to go to an expensive C state.
+ */
+static inline int performance_multiplier(unsigned long nr_iowaiters, unsigned long load)
+{
+	int mult = 1;
+
+	/* for higher loadavg, we are more reluctant */
+
+	mult += 2 * get_loadavg(load);
+
+	/* for IO wait tasks (per cpu!) we add 5x each */
+	mult += 10 * nr_iowaiters;
+
+	return mult;
+}
+
+static DEFINE_PER_CPU(struct menu_device, menu_devices);
+
+static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev);
+
+/*
+ * Try detecting repeating patterns by keeping track of the last 8
+ * intervals, and checking if the standard deviation of that set
+ * of points is below a threshold. If it is... then use the
+ * average of these 8 points as the estimated value.
+ */
+static unsigned int get_typical_interval(struct menu_device *data)
+{
+	int i, divisor;
+	unsigned int max, thresh, avg;
+	uint64_t sum, variance;
+
+	thresh = UINT_MAX; /* Discard outliers above this value */
+
+again:
+
+	/* First calculate the average of past intervals */
+	max = 0;
+	sum = 0;
+	divisor = 0;
+	for (i = 0; i < INTERVALS; i++) {
+		unsigned int value = data->intervals[i];
+		if (value <= thresh) {
+			sum += value;
+			divisor++;
+			if (value > max)
+				max = value;
+		}
+	}
+	if (divisor == INTERVALS)
+		avg = sum >> INTERVAL_SHIFT;
+	else
+		avg = div_u64(sum, divisor);
+
+	/* Then try to determine variance */
+	variance = 0;
+	for (i = 0; i < INTERVALS; i++) {
+		unsigned int value = data->intervals[i];
+		if (value <= thresh) {
+			int64_t diff = (int64_t)value - avg;
+			variance += diff * diff;
+		}
+	}
+	if (divisor == INTERVALS)
+		variance >>= INTERVAL_SHIFT;
+	else
+		do_div(variance, divisor);
+
+	/*
+	 * The typical interval is obtained when standard deviation is
+	 * small (stddev <= 20 us, variance <= 400 us^2) or standard
+	 * deviation is small compared to the average interval (avg >
+	 * 6*stddev, avg^2 > 36*variance). The average is smaller than
+	 * UINT_MAX aka U32_MAX, so computing its square does not
+	 * overflow a u64. We simply reject this candidate average if
+	 * the standard deviation is greater than 715 s (which is
+	 * rather unlikely).
+	 *
+	 * Use this result only if there is no timer to wake us up sooner.
+	 */
+	if (likely(variance <= U64_MAX/36)) {
+		if ((((u64)avg*avg > variance*36) && (divisor * 4 >= INTERVALS * 3))
+							|| variance <= 400) {
+			return avg;
+		}
+	}
+
+	/*
+	 * If we have outliers to the upside in our distribution, discard
+	 * those by setting the threshold to exclude these outliers, then
+	 * calculate the average and standard deviation again. Once we get
+	 * down to the bottom 3/4 of our samples, stop excluding samples.
+	 *
+	 * This can deal with workloads that have long pauses interspersed
+	 * with sporadic activity with a bunch of short pauses.
+	 */
+	if ((divisor * 4) <= INTERVALS * 3)
+		return UINT_MAX;
+
+	thresh = max - 1;
+	goto again;
+}
+
+/**
+ * menu_select - selects the next idle state to enter
+ * @drv: cpuidle driver containing state data
+ * @dev: the CPU
+ */
+static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
+{
+	struct menu_device *data = this_cpu_ptr(&menu_devices);
+	int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
+	int i;
+	unsigned int interactivity_req;
+	unsigned int expected_interval;
+	unsigned long nr_iowaiters, cpu_load;
+
+	if (data->needs_update) {
+		menu_update(drv, dev);
+		data->needs_update = 0;
+	}
+
+	/* Special case when user has set very strict latency requirement */
+	if (unlikely(latency_req == 0))
+		return 0;
+
+	/* determine the expected residency time, round up */
+	data->next_timer_us = ktime_to_us(tick_nohz_get_sleep_length());
+
+	get_iowait_load(&nr_iowaiters, &cpu_load);
+	data->bucket = which_bucket(data->next_timer_us, nr_iowaiters);
+
+	/*
+	 * Force the result of multiplication to be 64 bits even if both
+	 * operands are 32 bits.
+	 * Make sure to round up for half microseconds.
+	 */
+	data->predicted_us = DIV_ROUND_CLOSEST_ULL((uint64_t)data->next_timer_us *
+					 data->correction_factor[data->bucket],
+					 RESOLUTION * DECAY);
+
+	expected_interval = get_typical_interval(data);
+	expected_interval = min(expected_interval, data->next_timer_us);
+
+	if (CPUIDLE_DRIVER_STATE_START > 0) {
+		struct cpuidle_state *s = &drv->states[CPUIDLE_DRIVER_STATE_START];
+		unsigned int polling_threshold;
+
+		/*
+		 * We want to default to C1 (hlt), not to busy polling
+		 * unless the timer is happening really really soon, or
+		 * C1's exit latency exceeds the user configured limit.
+		 */
+		polling_threshold = max_t(unsigned int, 20, s->target_residency);
+		if (data->next_timer_us > polling_threshold &&
+		    latency_req > s->exit_latency && !s->disabled &&
+		    !dev->states_usage[CPUIDLE_DRIVER_STATE_START].disable)
+			data->last_state_idx = CPUIDLE_DRIVER_STATE_START;
+		else
+			data->last_state_idx = CPUIDLE_DRIVER_STATE_START - 1;
+	} else {
+		data->last_state_idx = CPUIDLE_DRIVER_STATE_START;
+	}
+
+	/*
+	 * Use the lowest expected idle interval to pick the idle state.
+	 */
+	data->predicted_us = min(data->predicted_us, expected_interval);
+
+	/*
+	 * Use the performance multiplier and the user-configurable
+	 * latency_req to determine the maximum exit latency.
+	 */
+	interactivity_req = data->predicted_us / performance_multiplier(nr_iowaiters, cpu_load);
+	if (latency_req > interactivity_req)
+		latency_req = interactivity_req;
+
+	/*
+	 * Find the idle state with the lowest power while satisfying
+	 * our constraints.
+	 */
+	for (i = data->last_state_idx + 1; i < drv->state_count; i++) {
+		struct cpuidle_state *s = &drv->states[i];
+		struct cpuidle_state_usage *su = &dev->states_usage[i];
+
+		if (s->disabled || su->disable)
+			continue;
+		if (s->target_residency > data->predicted_us)
+			continue;
+		if (s->exit_latency > latency_req)
+			continue;
+
+		data->last_state_idx = i;
+	}
+
+	return data->last_state_idx;
+}
+
+/**
+ * menu_reflect - records that data structures need update
+ * @dev: the CPU
+ * @index: the index of actual entered state
+ *
+ * NOTE: it's important to be fast here because this operation will add to
+ * the overall exit latency.
+ */
+static void menu_reflect(struct cpuidle_device *dev, int index)
+{
+	struct menu_device *data = this_cpu_ptr(&menu_devices);
+
+	data->last_state_idx = index;
+	data->needs_update = 1;
+}
+
+/**
+ * menu_update - attempts to guess what happened after entry
+ * @drv: cpuidle driver containing state data
+ * @dev: the CPU
+ */
+static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
+{
+	struct menu_device *data = this_cpu_ptr(&menu_devices);
+	int last_idx = data->last_state_idx;
+	struct cpuidle_state *target = &drv->states[last_idx];
+	unsigned int measured_us;
+	unsigned int new_factor;
+
+	/*
+	 * Try to figure out how much time passed between entry to low
+	 * power state and occurrence of the wakeup event.
+	 *
+	 * If the entered idle state didn't support residency measurements,
+	 * we use them anyway if they are short, and if long,
+	 * truncate to the whole expected time.
+	 *
+	 * Any measured amount of time will include the exit latency.
+	 * Since we are interested in when the wakeup began, not when it
+	 * was completed, we must subtract the exit latency. However, if
+	 * the measured amount of time is less than the exit latency,
+	 * assume the state was never reached and the exit latency is 0.
+	 */
+
+	/* measured value */
+	measured_us = cpuidle_get_last_residency(dev);
+
+	/* Deduct exit latency */
+	if (measured_us > 2 * target->exit_latency)
+		measured_us -= target->exit_latency;
+	else
+		measured_us /= 2;
+
+	/* Make sure our coefficients do not exceed unity */
+	if (measured_us > data->next_timer_us)
+		measured_us = data->next_timer_us;
+
+	/* Update our correction ratio */
+	new_factor = data->correction_factor[data->bucket];
+	new_factor -= new_factor / DECAY;
+
+	if (data->next_timer_us > 0 && measured_us < MAX_INTERESTING)
+		new_factor += RESOLUTION * measured_us / data->next_timer_us;
+	else
+		/*
+		 * we were idle so long that we count it as a perfect
+		 * prediction
+		 */
+		new_factor += RESOLUTION;
+
+	/*
+	 * We don't want 0 as factor; we always want at least
+	 * a tiny bit of estimated time. Fortunately, due to rounding,
+	 * new_factor will stay nonzero regardless of measured_us values
+	 * and the compiler can eliminate this test as long as DECAY > 1.
+	 */
+	if (DECAY == 1 && unlikely(new_factor == 0))
+		new_factor = 1;
+
+	data->correction_factor[data->bucket] = new_factor;
+
+	/* update the repeating-pattern data */
+	data->intervals[data->interval_ptr++] = measured_us;
+	if (data->interval_ptr >= INTERVALS)
+		data->interval_ptr = 0;
+}
+
+/**
+ * menu_enable_device - scans a CPU's states and does setup
+ * @drv: cpuidle driver
+ * @dev: the CPU
+ */
+static int menu_enable_device(struct cpuidle_driver *drv,
+				struct cpuidle_device *dev)
+{
+	struct menu_device *data = &per_cpu(menu_devices, dev->cpu);
+	int i;
+
+	memset(data, 0, sizeof(struct menu_device));
+
+	/*
+	 * if the correction factor is 0 (eg first time init or cpu hotplug
+	 * etc), we actually want to start out with a unity factor.
+	 */
+	for(i = 0; i < BUCKETS; i++)
+		data->correction_factor[i] = RESOLUTION * DECAY;
+
+	return 0;
+}
+
+static struct cpuidle_governor menu_governor = {
+	.name =		"menu",
+	.rating =	20,
+	.enable =	menu_enable_device,
+	.select =	menu_select,
+	.reflect =	menu_reflect,
+};
+
+/**
+ * init_menu - initializes the governor
+ */
+static int __init init_menu(void)
+{
+	return cpuidle_register_governor(&menu_governor);
+}
+
+postcore_initcall(init_menu);
diff --git a/drivers/cpuidle/governors/Makefile b/drivers/cpuidle/governors/Makefile
deleted file mode 100644
index 1b51272..0000000
--- a/drivers/cpuidle/governors/Makefile
+++ /dev/null
@@ -1,6 +0,0 @@
-#
-# Makefile for cpuidle governors.
-#
-
-obj-$(CONFIG_CPU_IDLE_GOV_LADDER) += ladder.o
-obj-$(CONFIG_CPU_IDLE_GOV_MENU) += menu.o
diff --git a/drivers/cpuidle/governors/ladder.c b/drivers/cpuidle/governors/ladder.c
deleted file mode 100644
index fe8f089..0000000
--- a/drivers/cpuidle/governors/ladder.c
+++ /dev/null
@@ -1,197 +0,0 @@
-/*
- * ladder.c - the residency ladder algorithm
- *
- * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com>
- * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.diefenbaugh@intel.com>
- * Copyright (C) 2004, 2005 Dominik Brodowski <linux@brodo.de>
- *
- * (C) 2006-2007 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
- *               Shaohua Li <shaohua.li@intel.com>
- *               Adam Belay <abelay@novell.com>
- *
- * This code is licenced under the GPL.
- */
-
-#include <linux/kernel.h>
-#include <linux/cpuidle.h>
-#include <linux/pm_qos.h>
-#include <linux/jiffies.h>
-#include <linux/tick.h>
-
-#include <asm/io.h>
-#include <asm/uaccess.h>
-
-#define PROMOTION_COUNT 4
-#define DEMOTION_COUNT 1
-
-struct ladder_device_state {
-	struct {
-		u32 promotion_count;
-		u32 demotion_count;
-		u32 promotion_time;
-		u32 demotion_time;
-	} threshold;
-	struct {
-		int promotion_count;
-		int demotion_count;
-	} stats;
-};
-
-struct ladder_device {
-	struct ladder_device_state states[CPUIDLE_STATE_MAX];
-	int last_state_idx;
-};
-
-static DEFINE_PER_CPU(struct ladder_device, ladder_devices);
-
-/**
- * ladder_do_selection - prepares private data for a state change
- * @ldev: the ladder device
- * @old_idx: the current state index
- * @new_idx: the new target state index
- */
-static inline void ladder_do_selection(struct ladder_device *ldev,
-				       int old_idx, int new_idx)
-{
-	ldev->states[old_idx].stats.promotion_count = 0;
-	ldev->states[old_idx].stats.demotion_count = 0;
-	ldev->last_state_idx = new_idx;
-}
-
-/**
- * ladder_select_state - selects the next state to enter
- * @drv: cpuidle driver
- * @dev: the CPU
- */
-static int ladder_select_state(struct cpuidle_driver *drv,
-			       struct cpuidle_device *dev)
-{
-	struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
-	struct ladder_device_state *last_state;
-	int last_residency, last_idx = ldev->last_state_idx;
-	int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
-
-	/* Special case when user has set very strict latency requirement */
-	if (unlikely(latency_req == 0)) {
-		ladder_do_selection(ldev, last_idx, 0);
-		return 0;
-	}
-
-	last_state = &ldev->states[last_idx];
-
-	last_residency = cpuidle_get_last_residency(dev) - drv->states[last_idx].exit_latency;
-
-	/* consider promotion */
-	if (last_idx < drv->state_count - 1 &&
-	    !drv->states[last_idx + 1].disabled &&
-	    !dev->states_usage[last_idx + 1].disable &&
-	    last_residency > last_state->threshold.promotion_time &&
-	    drv->states[last_idx + 1].exit_latency <= latency_req) {
-		last_state->stats.promotion_count++;
-		last_state->stats.demotion_count = 0;
-		if (last_state->stats.promotion_count >= last_state->threshold.promotion_count) {
-			ladder_do_selection(ldev, last_idx, last_idx + 1);
-			return last_idx + 1;
-		}
-	}
-
-	/* consider demotion */
-	if (last_idx > CPUIDLE_DRIVER_STATE_START &&
-	    (drv->states[last_idx].disabled ||
-	    dev->states_usage[last_idx].disable ||
-	    drv->states[last_idx].exit_latency > latency_req)) {
-		int i;
-
-		for (i = last_idx - 1; i > CPUIDLE_DRIVER_STATE_START; i--) {
-			if (drv->states[i].exit_latency <= latency_req)
-				break;
-		}
-		ladder_do_selection(ldev, last_idx, i);
-		return i;
-	}
-
-	if (last_idx > CPUIDLE_DRIVER_STATE_START &&
-	    last_residency < last_state->threshold.demotion_time) {
-		last_state->stats.demotion_count++;
-		last_state->stats.promotion_count = 0;
-		if (last_state->stats.demotion_count >= last_state->threshold.demotion_count) {
-			ladder_do_selection(ldev, last_idx, last_idx - 1);
-			return last_idx - 1;
-		}
-	}
-
-	/* otherwise remain at the current state */
-	return last_idx;
-}
-
-/**
- * ladder_enable_device - setup for the governor
- * @drv: cpuidle driver
- * @dev: the CPU
- */
-static int ladder_enable_device(struct cpuidle_driver *drv,
-				struct cpuidle_device *dev)
-{
-	int i;
-	struct ladder_device *ldev = &per_cpu(ladder_devices, dev->cpu);
-	struct ladder_device_state *lstate;
-	struct cpuidle_state *state;
-
-	ldev->last_state_idx = CPUIDLE_DRIVER_STATE_START;
-
-	for (i = CPUIDLE_DRIVER_STATE_START; i < drv->state_count; i++) {
-		state = &drv->states[i];
-		lstate = &ldev->states[i];
-
-		lstate->stats.promotion_count = 0;
-		lstate->stats.demotion_count = 0;
-
-		lstate->threshold.promotion_count = PROMOTION_COUNT;
-		lstate->threshold.demotion_count = DEMOTION_COUNT;
-
-		if (i < drv->state_count - 1)
-			lstate->threshold.promotion_time = state->exit_latency;
-		if (i > CPUIDLE_DRIVER_STATE_START)
-			lstate->threshold.demotion_time = state->exit_latency;
-	}
-
-	return 0;
-}
-
-/**
- * ladder_reflect - update the correct last_state_idx
- * @dev: the CPU
- * @index: the index of actual state entered
- */
-static void ladder_reflect(struct cpuidle_device *dev, int index)
-{
-	struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
-	if (index > 0)
-		ldev->last_state_idx = index;
-}
-
-static struct cpuidle_governor ladder_governor = {
-	.name =		"ladder",
-	.rating =	10,
-	.enable =	ladder_enable_device,
-	.select =	ladder_select_state,
-	.reflect =	ladder_reflect,
-};
-
-/**
- * init_ladder - initializes the governor
- */
-static int __init init_ladder(void)
-{
-	/*
-	 * When NO_HZ is disabled, or when booting with nohz=off, the ladder
-	 * governor is better so give it a higher rating than the menu
-	 * governor.
-	 */
-	if (!tick_nohz_enabled)
-		ladder_governor.rating = 25;
-
-	return cpuidle_register_governor(&ladder_governor);
-}
-
-postcore_initcall(init_ladder);
diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
deleted file mode 100644
index d9b5b93..0000000
--- a/drivers/cpuidle/governors/menu.c
+++ /dev/null
@@ -1,496 +0,0 @@
-/*
- * menu.c - the menu idle governor
- *
- * Copyright (C) 2006-2007 Adam Belay <abelay@novell.com>
- * Copyright (C) 2009 Intel Corporation
- * Author:
- *        Arjan van de Ven <arjan@linux.intel.com>
- *
- * This code is licenced under the GPL version 2 as described
- * in the COPYING file that accompanies the Linux Kernel.
- */
-
-#include <linux/kernel.h>
-#include <linux/cpuidle.h>
-#include <linux/pm_qos.h>
-#include <linux/time.h>
-#include <linux/ktime.h>
-#include <linux/hrtimer.h>
-#include <linux/tick.h>
-#include <linux/sched.h>
-#include <linux/math64.h>
-
-/*
- * Please note when changing the tuning values:
- * If (MAX_INTERESTING-1) * RESOLUTION > UINT_MAX, the result of
- * a scaling operation multiplication may overflow on 32 bit platforms.
- * In that case, #define RESOLUTION as ULL to get 64 bit result:
- * #define RESOLUTION 1024ULL
- *
- * The default values do not overflow.
- */
-#define BUCKETS 12
-#define INTERVAL_SHIFT 3
-#define INTERVALS (1UL << INTERVAL_SHIFT)
-#define RESOLUTION 1024
-#define DECAY 8
-#define MAX_INTERESTING 50000
-
-
-/*
- * Concepts and ideas behind the menu governor
- *
- * For the menu governor, there are 3 decision factors for picking a C
- * state:
- * 1) Energy break even point
- * 2) Performance impact
- * 3) Latency tolerance (from pmqos infrastructure)
- * These three factors are treated independently.
- *
- * Energy break even point
- * -----------------------
- * C state entry and exit have an energy cost, and a certain amount of time in
- * the C state is required to actually break even on this cost. CPUIDLE
- * provides us this duration in the "target_residency" field. So all that we
- * need is a good prediction of how long we'll be idle. Like the traditional
- * menu governor, we start with the actual known "next timer event" time.
- *
- * Since there are other sources of wakeups (interrupts for example) than
- * the next timer event, this estimation is rather optimistic. To get a
- * more realistic estimate, a correction factor is applied to the estimate,
- * that is based on historic behavior. For example, if in the past the actual
- * duration always was 50% of the next timer tick, the correction factor will
- * be 0.5.
- *
- * menu uses a running average for this correction factor, however it uses a
- * set of factors, not just a single factor. This stems from the realization
- * that the ratio is dependent on the order of magnitude of the expected
- * duration; if we expect 500 milliseconds of idle time the likelihood of
- * getting an interrupt very early is much higher than if we expect 50 micro
- * seconds of idle time. A second independent factor that has big impact on
- * the actual factor is if there is (disk) IO outstanding or not.
- * (as a special twist, we consider every sleep longer than 50 milliseconds
- * as perfect; there are no power gains for sleeping longer than this)
- *
- * For these two reasons we keep an array of 12 independent factors, that gets
- * indexed based on the magnitude of the expected duration as well as the
- * "is IO outstanding" property.
- *
- * Repeatable-interval-detector
- * ----------------------------
- * There are some cases where "next timer" is a completely unusable predictor:
- * Those cases where the interval is fixed, for example due to hardware
- * interrupt mitigation, but also due to fixed transfer rate devices such as
- * mice.
- * For this, we use a different predictor: We track the duration of the last 8
- * intervals and if the standard deviation of these 8 intervals is below a
- * threshold value, we use the average of these intervals as prediction.
- *
- * Limiting Performance Impact
- * ---------------------------
- * C states, especially those with large exit latencies, can have a real
- * noticeable impact on workloads, which is not acceptable for most sysadmins,
- * and in addition, less performance has a power price of its own.
- *
- * As a general rule of thumb, menu assumes that the following heuristic
- * holds:
- *     The busier the system, the less impact of C states is acceptable
- *
- * This rule-of-thumb is implemented using a performance-multiplier:
- * If the exit latency times the performance multiplier is longer than
- * the predicted duration, the C state is not considered a candidate
- * for selection due to a too high performance impact. So the higher
- * this multiplier is, the longer we need to be idle to pick a deep C
- * state, and thus the less likely a busy CPU will hit such a deep
- * C state.
- *
- * Two factors are used in determining this multiplier:
- * a value of 10 is added for each point of "per cpu load average" we have.
- * a value of 5 points is added for each process that is waiting for
- * IO on this CPU.
- * (these values are experimentally determined)
- *
- * The load average factor gives a longer term (few seconds) input to the
- * decision, while the iowait value gives a cpu local instantaneous input.
- * The iowait factor may look low, but realize that this is also already
- * represented in the system load average.
- *
- */
-
-struct menu_device {
-	int		last_state_idx;
-	int		needs_update;
-
-	unsigned int	next_timer_us;
-	unsigned int	predicted_us;
-	unsigned int	bucket;
-	unsigned int	correction_factor[BUCKETS];
-	unsigned int	intervals[INTERVALS];
-	int		interval_ptr;
-};
-
-
-#define LOAD_INT(x) ((x) >> FSHIFT)
-#define LOAD_FRAC(x) LOAD_INT(((x) & (FIXED_1-1)) * 100)
-
-static inline int get_loadavg(unsigned long load)
-{
-	return LOAD_INT(load) * 10 + LOAD_FRAC(load) / 10;
-}
-
-static inline int which_bucket(unsigned int duration, unsigned long nr_iowaiters)
-{
-	int bucket = 0;
-
-	/*
-	 * We keep two groups of stats; one with IO pending, one without.
-	 * This allows us to calculate
-	 * E(duration)|iowait
-	 */
-	if (nr_iowaiters)
-		bucket = BUCKETS/2;
-
-	if (duration < 10)
-		return bucket;
-	if (duration < 100)
-		return bucket + 1;
-	if (duration < 1000)
-		return bucket + 2;
-	if (duration < 10000)
-		return bucket + 3;
-	if (duration < 100000)
-		return bucket + 4;
-	return bucket + 5;
-}
-
-/*
- * Return a multiplier for the exit latency that is intended
- * to take performance requirements into account.
- * The more performance critical we estimate the system
- * to be, the higher this multiplier, and thus the higher
- * the barrier to go to an expensive C state.
- */
-static inline int performance_multiplier(unsigned long nr_iowaiters, unsigned long load)
-{
-	int mult = 1;
-
-	/* for higher loadavg, we are more reluctant */
-
-	mult += 2 * get_loadavg(load);
-
-	/* for IO wait tasks (per cpu!) we add 5x each */
-	mult += 10 * nr_iowaiters;
-
-	return mult;
-}
-
-static DEFINE_PER_CPU(struct menu_device, menu_devices);
-
-static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev);
-
-/*
- * Try detecting repeating patterns by keeping track of the last 8
- * intervals, and checking if the standard deviation of that set
- * of points is below a threshold. If it is... then use the
- * average of these 8 points as the estimated value.
- */
-static unsigned int get_typical_interval(struct menu_device *data)
-{
-	int i, divisor;
-	unsigned int max, thresh, avg;
-	uint64_t sum, variance;
-
-	thresh = UINT_MAX; /* Discard outliers above this value */
-
-again:
-
-	/* First calculate the average of past intervals */
-	max = 0;
-	sum = 0;
-	divisor = 0;
-	for (i = 0; i < INTERVALS; i++) {
-		unsigned int value = data->intervals[i];
-		if (value <= thresh) {
-			sum += value;
-			divisor++;
-			if (value > max)
-				max = value;
-		}
-	}
-	if (divisor == INTERVALS)
-		avg = sum >> INTERVAL_SHIFT;
-	else
-		avg = div_u64(sum, divisor);
-
-	/* Then try to determine variance */
-	variance = 0;
-	for (i = 0; i < INTERVALS; i++) {
-		unsigned int value = data->intervals[i];
-		if (value <= thresh) {
-			int64_t diff = (int64_t)value - avg;
-			variance += diff * diff;
-		}
-	}
-	if (divisor == INTERVALS)
-		variance >>= INTERVAL_SHIFT;
-	else
-		do_div(variance, divisor);
-
-	/*
-	 * The typical interval is obtained when standard deviation is
-	 * small (stddev <= 20 us, variance <= 400 us^2) or standard
-	 * deviation is small compared to the average interval (avg >
-	 * 6*stddev, avg^2 > 36*variance). The average is smaller than
-	 * UINT_MAX aka U32_MAX, so computing its square does not
-	 * overflow a u64. We simply reject this candidate average if
-	 * the standard deviation is greater than 715 s (which is
-	 * rather unlikely).
-	 *
-	 * Use this result only if there is no timer to wake us up sooner.
-	 */
-	if (likely(variance <= U64_MAX/36)) {
-		if ((((u64)avg*avg > variance*36) && (divisor * 4 >= INTERVALS * 3))
-							|| variance <= 400) {
-			return avg;
-		}
-	}
-
-	/*
-	 * If we have outliers to the upside in our distribution, discard
-	 * those by setting the threshold to exclude these outliers, then
-	 * calculate the average and standard deviation again. Once we get
-	 * down to the bottom 3/4 of our samples, stop excluding samples.
-	 *
-	 * This can deal with workloads that have long pauses interspersed
-	 * with sporadic activity with a bunch of short pauses.
-	 */
-	if ((divisor * 4) <= INTERVALS * 3)
-		return UINT_MAX;
-
-	thresh = max - 1;
-	goto again;
-}
-
-/**
- * menu_select - selects the next idle state to enter
- * @drv: cpuidle driver containing state data
- * @dev: the CPU
- */
-static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
-{
-	struct menu_device *data = this_cpu_ptr(&menu_devices);
-	int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
-	int i;
-	unsigned int interactivity_req;
-	unsigned int expected_interval;
-	unsigned long nr_iowaiters, cpu_load;
-
-	if (data->needs_update) {
-		menu_update(drv, dev);
-		data->needs_update = 0;
-	}
-
-	/* Special case when user has set very strict latency requirement */
-	if (unlikely(latency_req == 0))
-		return 0;
-
-	/* determine the expected residency time, round up */
-	data->next_timer_us = ktime_to_us(tick_nohz_get_sleep_length());
-
-	get_iowait_load(&nr_iowaiters, &cpu_load);
-	data->bucket = which_bucket(data->next_timer_us, nr_iowaiters);
-
-	/*
-	 * Force the result of multiplication to be 64 bits even if both
-	 * operands are 32 bits.
-	 * Make sure to round up for half microseconds.
-	 */
-	data->predicted_us = DIV_ROUND_CLOSEST_ULL((uint64_t)data->next_timer_us *
-					 data->correction_factor[data->bucket],
-					 RESOLUTION * DECAY);
-
-	expected_interval = get_typical_interval(data);
-	expected_interval = min(expected_interval, data->next_timer_us);
-
-	if (CPUIDLE_DRIVER_STATE_START > 0) {
-		struct cpuidle_state *s = &drv->states[CPUIDLE_DRIVER_STATE_START];
-		unsigned int polling_threshold;
-
-		/*
-		 * We want to default to C1 (hlt), not to busy polling
-		 * unless the timer is happening really really soon, or
-		 * C1's exit latency exceeds the user configured limit.
-		 */
-		polling_threshold = max_t(unsigned int, 20, s->target_residency);
-		if (data->next_timer_us > polling_threshold &&
-		    latency_req > s->exit_latency && !s->disabled &&
-		    !dev->states_usage[CPUIDLE_DRIVER_STATE_START].disable)
-			data->last_state_idx = CPUIDLE_DRIVER_STATE_START;
-		else
-			data->last_state_idx = CPUIDLE_DRIVER_STATE_START - 1;
-	} else {
-		data->last_state_idx = CPUIDLE_DRIVER_STATE_START;
-	}
-
-	/*
-	 * Use the lowest expected idle interval to pick the idle state.
-	 */
-	data->predicted_us = min(data->predicted_us, expected_interval);
-
-	/*
-	 * Use the performance multiplier and the user-configurable
-	 * latency_req to determine the maximum exit latency.
-	 */
-	interactivity_req = data->predicted_us / performance_multiplier(nr_iowaiters, cpu_load);
-	if (latency_req > interactivity_req)
-		latency_req = interactivity_req;
-
-	/*
-	 * Find the idle state with the lowest power while satisfying
-	 * our constraints.
-	 */
-	for (i = data->last_state_idx + 1; i < drv->state_count; i++) {
-		struct cpuidle_state *s = &drv->states[i];
-		struct cpuidle_state_usage *su = &dev->states_usage[i];
-
-		if (s->disabled || su->disable)
-			continue;
-		if (s->target_residency > data->predicted_us)
-			continue;
-		if (s->exit_latency > latency_req)
-			continue;
-
-		data->last_state_idx = i;
-	}
-
-	return data->last_state_idx;
-}
-
-/**
- * menu_reflect - records that data structures need update
- * @dev: the CPU
- * @index: the index of actual entered state
- *
- * NOTE: it's important to be fast here because this operation will add to
- * the overall exit latency.
- */
-static void menu_reflect(struct cpuidle_device *dev, int index)
-{
-	struct menu_device *data = this_cpu_ptr(&menu_devices);
-
-	data->last_state_idx = index;
-	data->needs_update = 1;
-}
-
-/**
- * menu_update - attempts to guess what happened after entry
- * @drv: cpuidle driver containing state data
- * @dev: the CPU
- */
-static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
-{
-	struct menu_device *data = this_cpu_ptr(&menu_devices);
-	int last_idx = data->last_state_idx;
-	struct cpuidle_state *target = &drv->states[last_idx];
-	unsigned int measured_us;
-	unsigned int new_factor;
-
-	/*
-	 * Try to figure out how much time passed between entry to low
-	 * power state and occurrence of the wakeup event.
-	 *
-	 * If the entered idle state didn't support residency measurements,
-	 * we use them anyway if they are short, and if long,
-	 * truncate to the whole expected time.
-	 *
-	 * Any measured amount of time will include the exit latency.
-	 * Since we are interested in when the wakeup began, not when it
-	 * was completed, we must subtract the exit latency. However, if
-	 * the measured amount of time is less than the exit latency,
-	 * assume the state was never reached and the exit latency is 0.
-	 */
-
-	/* measured value */
-	measured_us = cpuidle_get_last_residency(dev);
-
-	/* Deduct exit latency */
-	if (measured_us > 2 * target->exit_latency)
-		measured_us -= target->exit_latency;
-	else
-		measured_us /= 2;
-
-	/* Make sure our coefficients do not exceed unity */
-	if (measured_us > data->next_timer_us)
-		measured_us = data->next_timer_us;
-
-	/* Update our correction ratio */
-	new_factor = data->correction_factor[data->bucket];
-	new_factor -= new_factor / DECAY;
-
-	if (data->next_timer_us > 0 && measured_us < MAX_INTERESTING)
-		new_factor += RESOLUTION * measured_us / data->next_timer_us;
-	else
-		/*
-		 * we were idle so long that we count it as a perfect
-		 * prediction
-		 */
-		new_factor += RESOLUTION;
-
-	/*
-	 * We don't want 0 as factor; we always want at least
-	 * a tiny bit of estimated time. Fortunately, due to rounding,
-	 * new_factor will stay nonzero regardless of measured_us values
-	 * and the compiler can eliminate this test as long as DECAY > 1.
-	 */
-	if (DECAY == 1 && unlikely(new_factor == 0))
-		new_factor = 1;
-
-	data->correction_factor[data->bucket] = new_factor;
-
-	/* update the repeating-pattern data */
-	data->intervals[data->interval_ptr++] = measured_us;
-	if (data->interval_ptr >= INTERVALS)
-		data->interval_ptr = 0;
-}
-
-/**
- * menu_enable_device - scans a CPU's states and does setup
- * @drv: cpuidle driver
- * @dev: the CPU
- */
-static int menu_enable_device(struct cpuidle_driver *drv,
-				struct cpuidle_device *dev)
-{
-	struct menu_device *data = &per_cpu(menu_devices, dev->cpu);
-	int i;
-
-	memset(data, 0, sizeof(struct menu_device));
-
-	/*
-	 * if the correction factor is 0 (eg first time init or cpu hotplug
-	 * etc), we actually want to start out with a unity factor.
-	 */
-	for(i = 0; i < BUCKETS; i++)
-		data->correction_factor[i] = RESOLUTION * DECAY;
-
-	return 0;
-}
-
-static struct cpuidle_governor menu_governor = {
-	.name =		"menu",
-	.rating =	20,
-	.enable =	menu_enable_device,
-	.select =	menu_select,
-	.reflect =	menu_reflect,
-};
-
-/**
- * init_menu - initializes the governor
- */
-static int __init init_menu(void)
-{
-	return cpuidle_register_governor(&menu_governor);
-}
-
-postcore_initcall(init_menu);
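
As an aside on the code being moved: the correction-factor update in
menu_update() above is an exponentially weighted moving average. Each wakeup
removes 1/DECAY of the old factor and adds a new sample scaled by RESOLUTION,
so the factor converges on DECAY * RESOLUTION * measured_us / next_timer_us,
which is exactly what menu_select() divides out again. The standalone
userspace sketch below (not part of the patch; the sleep lengths are
invented) shows that convergence:

	#include <stdio.h>

	#define RESOLUTION 1024
	#define DECAY 8

	int main(void)
	{
		unsigned int factor = RESOLUTION * DECAY; /* unity, as in menu_enable_device() */
		unsigned int next_timer_us = 1000;	/* invented predicted sleep length */
		unsigned int measured_us = 500;		/* invented observed sleep length */
		int i;

		for (i = 0; i < 32; i++) {
			/* the same two steps menu_update() performs on each wakeup */
			factor -= factor / DECAY;
			factor += RESOLUTION * measured_us / next_timer_us;
		}

		/* prints a ratio of ~0.500, i.e. measured_us / next_timer_us */
		printf("factor = %u, unity = %u, ratio = %.3f\n",
		       factor, RESOLUTION * DECAY,
		       (double)factor / (RESOLUTION * DECAY));
		return 0;
	}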
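
In the same spirit, get_typical_interval() is a plain average/variance
computation with progressive rejection of upside outliers. A userspace
rendition of the same logic (again not part of the patch; the eight sample
intervals are invented) predicts the ~1000 us period of a mouse-like wakeup
pattern despite one long outlier:

	#include <stdio.h>
	#include <stdint.h>
	#include <limits.h>

	#define INTERVALS 8

	static unsigned int get_typical(const unsigned int *intervals)
	{
		unsigned int thresh = UINT_MAX;	/* discard values above this */

		for (;;) {
			uint64_t sum = 0, variance = 0;
			unsigned int max = 0, avg;
			int i, divisor = 0;

			/* average of the samples below the threshold */
			for (i = 0; i < INTERVALS; i++) {
				unsigned int value = intervals[i];
				if (value <= thresh) {
					sum += value;
					divisor++;
					if (value > max)
						max = value;
				}
			}
			avg = sum / divisor;

			/* variance of the same samples */
			for (i = 0; i < INTERVALS; i++) {
				unsigned int value = intervals[i];
				if (value <= thresh) {
					int64_t diff = (int64_t)value - avg;
					variance += diff * diff;
				}
			}
			variance /= divisor;

			/* stddev <= 20 us, or avg > 6*stddev with >= 3/4 of samples */
			if (variance <= 400 ||
			    ((uint64_t)avg * avg > 36 * variance &&
			     divisor * 4 >= INTERVALS * 3))
				return avg;

			/* give up once we are down to 3/4 of the samples */
			if (divisor * 4 <= INTERVALS * 3)
				return UINT_MAX;

			thresh = max - 1; /* exclude the largest sample, retry */
		}
	}

	int main(void)
	{
		/* seven ~1000 us intervals plus one 50000 us outlier */
		unsigned int intervals[INTERVALS] = {
			1000, 1010, 990, 1005, 995, 50000, 1000, 1008
		};

		printf("predicted interval: %u us\n", get_typical(intervals));
		return 0;
	}

The first pass rejects nothing and fails both tests; the second pass drops
the 50000 us sample and returns 1001 us.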
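
The ladder governor, by contrast, is a simple hysteresis machine: it climbs
one state after PROMOTION_COUNT consecutive residencies above the promotion
threshold and falls one state after a single residency below the demotion
threshold. A toy walk through that state machine (not part of the patch; the
thresholds and residencies are invented, and the latency-limit and
state-disable checks of the real governor are omitted):

	#include <stdio.h>

	#define PROMOTION_COUNT 4
	#define DEMOTION_COUNT 1

	struct state {
		int promotion_time;	/* residency above this argues for a deeper state */
		int demotion_time;	/* residency below this argues for a shallower state */
		int promotions;
		int demotions;
	};

	int main(void)
	{
		/* invented thresholds (us) for three idle states */
		struct state states[3] = {
			{ .promotion_time = 10,  .demotion_time = 0   },
			{ .promotion_time = 100, .demotion_time = 10  },
			{ .promotion_time = 0,   .demotion_time = 100 },
		};
		/* invented observed residencies, in microseconds */
		int residency[] = { 50, 60, 55, 70, 500, 400, 450, 600, 5 };
		int idx = 0;
		unsigned int i;

		for (i = 0; i < sizeof(residency) / sizeof(residency[0]); i++) {
			struct state *s = &states[idx];

			if (idx < 2 && residency[i] > s->promotion_time) {
				s->demotions = 0;
				if (++s->promotions >= PROMOTION_COUNT) {
					s->promotions = 0;
					idx++;	/* promote to a deeper state */
				}
			} else if (idx > 0 && residency[i] < s->demotion_time) {
				s->promotions = 0;
				if (++s->demotions >= DEMOTION_COUNT) {
					s->demotions = 0;
					idx--;	/* demote to a shallower state */
				}
			}
			printf("residency %4d us -> state %d\n", residency[i], idx);
		}
		return 0;
	}

Four long residencies promote 0->1, four more promote 1->2, and the single
short residency at the end immediately demotes back to state 1, mirroring
ladder_select_state() above.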