From patchwork Tue Jul 23 03:31:21 2013
X-Patchwork-Submitter: Nicolas Pitre
X-Patchwork-Id: 18527
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: dave.martin@linaro.org, lorenzo.pieralisi@arm.com, patches@linaro.org
Subject: [PATCH 05/13] ARM: bL_switcher: move to dedicated threads rather than workqueues
Date: Mon, 22 Jul 2013 23:31:21 -0400
Message-id: <1374550289-25305-6-git-send-email-nicolas.pitre@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-reply-to: <1374550289-25305-1-git-send-email-nicolas.pitre@linaro.org>
References: <1374550289-25305-1-git-send-email-nicolas.pitre@linaro.org>

The workqueues are problematic as they may be contended and they cannot
be scheduled with top priority. Also, the optimization in
bL_switch_request() that skipped the workqueue entirely when the target
CPU and the calling CPU were the same did not allow bL_switch_request()
to be called from atomic context, as might be the case for some cpufreq
drivers.

Let's move to dedicated kthreads instead.
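As a purely illustrative sketch (not part of this patch), a hypothetical
cpufreq backend could now request a switch from a context where sleeping
is not allowed and check the new return value; the function name
example_cpufreq_switch below is made up for this example:

#include <linux/printk.h>
#include <asm/bL_switcher.h>

/* Hypothetical caller, for illustration only -- not part of this patch. */
static int example_cpufreq_switch(unsigned int cpu, unsigned int new_cluster_id)
{
	int ret;

	/*
	 * bL_switch_request() now only records the wanted cluster and
	 * wakes the per-CPU switcher thread, so it may be called with
	 * interrupts disabled; the switch itself happens asynchronously
	 * in the kswitcher_<cpu> thread.
	 */
	ret = bL_switch_request(cpu, new_cluster_id);
	if (ret)
		pr_warn("cluster switch request for CPU %u failed: %d\n",
			cpu, ret);
	return ret;
}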
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
---
 arch/arm/common/bL_switcher.c      | 101 ++++++++++++++++++++++++++++---------
 arch/arm/include/asm/bL_switcher.h |   2 +-
 2 files changed, 79 insertions(+), 24 deletions(-)

diff --git a/arch/arm/common/bL_switcher.c b/arch/arm/common/bL_switcher.c
index d6f7e507d9..c2355cafc9 100644
--- a/arch/arm/common/bL_switcher.c
+++ b/arch/arm/common/bL_switcher.c
@@ -15,8 +15,10 @@
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/sched.h>
 #include <linux/interrupt.h>
-#include <linux/workqueue.h>
+#include <linux/kthread.h>
+#include <linux/wait.h>
 #include <linux/clockchips.h>
 #include <linux/hrtimer.h>
 #include <linux/tick.h>
@@ -225,15 +227,48 @@ static int bL_switch_to(unsigned int new_cluster_id)
 	return ret;
 }
 
-struct switch_args {
-	unsigned int cluster;
-	struct work_struct work;
+struct bL_thread {
+	struct task_struct *task;
+	wait_queue_head_t wq;
+	int wanted_cluster;
 };
 
-static void __bL_switch_to(struct work_struct *work)
+static struct bL_thread bL_threads[MAX_CPUS_PER_CLUSTER];
+
+static int bL_switcher_thread(void *arg)
+{
+	struct bL_thread *t = arg;
+	struct sched_param param = { .sched_priority = 1 };
+	int cluster;
+
+	sched_setscheduler_nocheck(current, SCHED_FIFO, &param);
+
+	do {
+		if (signal_pending(current))
+			flush_signals(current);
+		wait_event_interruptible(t->wq,
+				t->wanted_cluster != -1 ||
+				kthread_should_stop());
+		cluster = xchg(&t->wanted_cluster, -1);
+		if (cluster != -1)
+			bL_switch_to(cluster);
+	} while (!kthread_should_stop());
+
+	return 0;
+}
+
+static struct task_struct * __init bL_switcher_thread_create(int cpu, void *arg)
 {
-	struct switch_args *args = container_of(work, struct switch_args, work);
-	bL_switch_to(args->cluster);
+	struct task_struct *task;
+
+	task = kthread_create_on_node(bL_switcher_thread, arg,
+				      cpu_to_node(cpu), "kswitcher_%d", cpu);
+	if (!IS_ERR(task)) {
+		kthread_bind(task, cpu);
+		wake_up_process(task);
+	} else
+		pr_err("%s failed for CPU %d\n", __func__, cpu);
+	return task;
 }
 
 /*
@@ -242,26 +277,46 @@ static void __bL_switch_to(struct work_struct *work)
  * @cpu: the CPU to switch
  * @new_cluster_id: the ID of the cluster to switch to.
  *
- * This function causes a cluster switch on the given CPU. If the given
- * CPU is the same as the calling CPU then the switch happens right away.
- * Otherwise the request is put on a work queue to be scheduled on the
- * remote CPU.
+ * This function causes a cluster switch on the given CPU by waking up
+ * the appropriate switcher thread. This function may or may not return
+ * before the switch has occurred.
  */
-void bL_switch_request(unsigned int cpu, unsigned int new_cluster_id)
+int bL_switch_request(unsigned int cpu, unsigned int new_cluster_id)
 {
-	unsigned int this_cpu = get_cpu();
-	struct switch_args args;
+	struct bL_thread *t;
 
-	if (cpu == this_cpu) {
-		bL_switch_to(new_cluster_id);
-		put_cpu();
-		return;
+	if (cpu >= MAX_CPUS_PER_CLUSTER) {
+		pr_err("%s: cpu %d out of bounds\n", __func__, cpu);
+		return -EINVAL;
 	}
-	put_cpu();
 
-	args.cluster = new_cluster_id;
-	INIT_WORK_ONSTACK(&args.work, __bL_switch_to);
-	schedule_work_on(cpu, &args.work);
-	flush_work(&args.work);
+	t = &bL_threads[cpu];
+	if (IS_ERR(t->task))
+		return PTR_ERR(t->task);
+	if (!t->task)
+		return -ESRCH;
+
+	t->wanted_cluster = new_cluster_id;
+	wake_up(&t->wq);
+	return 0;
 }
 EXPORT_SYMBOL_GPL(bL_switch_request);
+
+static int __init bL_switcher_init(void)
+{
+	int cpu;
+
+	pr_info("big.LITTLE switcher initializing\n");
+
+	for_each_online_cpu(cpu) {
+		struct bL_thread *t = &bL_threads[cpu];
+		init_waitqueue_head(&t->wq);
+		t->wanted_cluster = -1;
+		t->task = bL_switcher_thread_create(cpu, t);
+	}
+
+	pr_info("big.LITTLE switcher initialized\n");
+	return 0;
+}
+
+late_initcall(bL_switcher_init);
diff --git a/arch/arm/include/asm/bL_switcher.h b/arch/arm/include/asm/bL_switcher.h
index 72efe3f349..e0c0bba70b 100644
--- a/arch/arm/include/asm/bL_switcher.h
+++ b/arch/arm/include/asm/bL_switcher.h
@@ -12,6 +12,6 @@
 #ifndef ASM_BL_SWITCHER_H
 #define ASM_BL_SWITCHER_H
 
-void bL_switch_request(unsigned int cpu, unsigned int new_cluster_id);
+int bL_switch_request(unsigned int cpu, unsigned int new_cluster_id);
 
 #endif