From patchwork Thu Jun 11 10:51:51 2015
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 49745
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Rafael Wysocki, Preeti U Murthy, ke.wang@spreadtrum.com
Cc: linaro-kernel@lists.linaro.org, linux-pm@vger.kernel.org, ego@linux.vnet.ibm.com, paulus@samba.org, shilpa.bhat@linux.vnet.ibm.com, prarit@redhat.com, robert.schoene@tu-dresden.de, skannan@codeaurora.org, Viresh Kumar
Subject: [PATCH 08/12] cpufreq: governor: synchronize work-handler with governor callbacks
Date: Thu, 11 Jun 2015 16:21:51 +0530
X-Mailer: git-send-email 2.4.0
List-ID: <linux-pm.vger.kernel.org>

Currently, the work handlers for all CPUs are synchronized with one another, but not with cpufreq_governor_dbs(). Both the work handlers and cpufreq_governor_dbs() enqueue work, and this lack of synchronization is an open invitation to all kinds of races.

So, this patch replaces 'timer_mutex' with 'cdata->mutex', so that the work handlers and the governor callbacks are serialized by the same lock.

Also, update_sampling_rate() currently drops the mutex while canceling delayed work. There is no need to do that; keep the lock held for the entire length of the routine, as it also enqueues work.
Signed-off-by: Viresh Kumar
---
 drivers/cpufreq/cpufreq_governor.c | 14 +++-----------
 drivers/cpufreq/cpufreq_governor.h |  6 ------
 drivers/cpufreq/cpufreq_ondemand.c | 13 ++++---------
 3 files changed, 7 insertions(+), 26 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 08bb67225b85..aa24aa9a9eb3 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -228,13 +228,12 @@ static void dbs_timer(struct work_struct *work)
 {
 	struct cpu_dbs_info *cdbs = container_of(work, struct cpu_dbs_info,
 						 dwork.work);
-	struct cpu_common_dbs_info *ccdbs = cdbs->ccdbs;
-	struct cpufreq_policy *policy = ccdbs->policy;
+	struct cpufreq_policy *policy = cdbs->ccdbs->policy;
 	struct dbs_data *dbs_data = policy->governor_data;
 	unsigned int sampling_rate, delay;
 	bool modify_all = true;
 
-	mutex_lock(&ccdbs->timer_mutex);
+	mutex_lock(&dbs_data->cdata->mutex);
 
 	if (dbs_data->cdata->governor == GOV_CONSERVATIVE) {
 		struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
@@ -252,7 +251,7 @@ static void dbs_timer(struct work_struct *work)
 	delay = dbs_data->cdata->gov_dbs_timer(cdbs, dbs_data, modify_all);
 	gov_queue_work(dbs_data, policy, delay, modify_all);
 
-	mutex_unlock(&ccdbs->timer_mutex);
+	mutex_unlock(&dbs_data->cdata->mutex);
 }
 
 static void set_sampling_rate(struct dbs_data *dbs_data,
@@ -424,7 +423,6 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy,
 	}
 
 	ccdbs->time_stamp = ktime_get();
-	mutex_init(&ccdbs->timer_mutex);
 
 	for_each_cpu(j, policy->cpus) {
 		struct cpu_dbs_info *j_cdbs = cdata->get_cpu_cdbs(j);
@@ -470,8 +468,6 @@ static void cpufreq_governor_stop(struct cpufreq_policy *policy,
 {
 	struct common_dbs_data *cdata = dbs_data->cdata;
 	unsigned int cpu = policy->cpu;
-	struct cpu_dbs_info *cdbs = cdata->get_cpu_cdbs(cpu);
-	struct cpu_common_dbs_info *ccdbs = cdbs->ccdbs;
 
 	gov_cancel_work(dbs_data, policy);
 
@@ -481,8 +477,6 @@ static void cpufreq_governor_stop(struct cpufreq_policy *policy,
 
 		cs_dbs_info->enable = 0;
 	}
-
-	mutex_destroy(&ccdbs->timer_mutex);
 }
 
 static void cpufreq_governor_limits(struct cpufreq_policy *policy,
@@ -495,7 +489,6 @@ static void cpufreq_governor_limits(struct cpufreq_policy *policy,
 	if (!cdbs->ccdbs)
 		return;
 
-	mutex_lock(&cdbs->ccdbs->timer_mutex);
 	if (policy->max < cdbs->ccdbs->policy->cur)
 		__cpufreq_driver_target(cdbs->ccdbs->policy, policy->max,
 					CPUFREQ_RELATION_H);
@@ -503,7 +496,6 @@ static void cpufreq_governor_limits(struct cpufreq_policy *policy,
 		__cpufreq_driver_target(cdbs->ccdbs->policy, policy->min,
 					CPUFREQ_RELATION_L);
 	dbs_check_cpu(dbs_data, cpu);
-	mutex_unlock(&cdbs->ccdbs->timer_mutex);
 }
 
 int cpufreq_governor_dbs(struct cpufreq_policy *policy,
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index 2125c299c602..2b2884f91fe7 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -131,12 +131,6 @@ static void *get_cpu_dbs_info_s(int cpu)				\
 
 /* Common to all CPUs of a policy */
 struct cpu_common_dbs_info {
 	struct cpufreq_policy *policy;
-	/*
-	 * percpu mutex that serializes governor limit change with dbs_timer
-	 * invocation. We do not want dbs_timer to run when user is changing
-	 * the governor or limits.
-	 */
-	struct mutex timer_mutex;
 	ktime_t time_stamp;
 };
 
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 11db20079fc6..87813830e169 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -249,6 +249,8 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 	struct od_dbs_tuners *od_tuners = dbs_data->tuners;
 	int cpu;
 
+	mutex_lock(&dbs_data->cdata->mutex);
+
 	od_tuners->sampling_rate = new_rate = max(new_rate,
 			dbs_data->min_sampling_rate);
 
@@ -267,28 +269,21 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 		dbs_info = &per_cpu(od_cpu_dbs_info, cpu);
 		cpufreq_cpu_put(policy);
 
-		mutex_lock(&dbs_info->cdbs.ccdbs->timer_mutex);
-
-		if (!delayed_work_pending(&dbs_info->cdbs.dwork)) {
-			mutex_unlock(&dbs_info->cdbs.ccdbs->timer_mutex);
+		if (!delayed_work_pending(&dbs_info->cdbs.dwork))
 			continue;
-		}
 
 		next_sampling = jiffies + usecs_to_jiffies(new_rate);
 		appointed_at = dbs_info->cdbs.dwork.timer.expires;
 
 		if (time_before(next_sampling, appointed_at)) {
-			mutex_unlock(&dbs_info->cdbs.ccdbs->timer_mutex);
 			cancel_delayed_work_sync(&dbs_info->cdbs.dwork);
-			mutex_lock(&dbs_info->cdbs.ccdbs->timer_mutex);
-
 			gov_queue_work(dbs_data, policy,
 				       usecs_to_jiffies(new_rate), true);
 		}
-
-		mutex_unlock(&dbs_info->cdbs.ccdbs->timer_mutex);
 	}
+
+	mutex_unlock(&dbs_data->cdata->mutex);
 }
 
 static ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,