From patchwork Thu Jul 18 03:28:35 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nicolas Pitre <nicolas.pitre@linaro.org>
X-Patchwork-Id: 18407
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: lorenzo.pieralisi@arm.com, dave.martin@arm.com, pawel.moll@arm.com,
 olof@lixom.net, tixy@linaro.org, achin.gupta@arm.com,
 sudeep.karkadanagesha@arm.com, patches@linaro.org
Subject: [PATCH 3/4] ARM: vexpress/TC2: basic PM support
Date: Wed, 17 Jul 2013 23:28:35 -0400
Message-id: <1374118116-16836-4-git-send-email-nicolas.pitre@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-reply-to: <1374118116-16836-1-git-send-email-nicolas.pitre@linaro.org>
References: <1374118116-16836-1-git-send-email-nicolas.pitre@linaro.org>

This is the MCPM backend for the Versatile Express A15x2 A7x3 CoreTile,
aka TC2. It provides the cluster management required for SMP secondary
boot and CPU hotplug.

Signed-off-by: Nicolas Pitre
Acked-by: Pawel Moll
Reviewed-by: Lorenzo Pieralisi
---
 arch/arm/mach-vexpress/Kconfig  |   9 ++
 arch/arm/mach-vexpress/Makefile |   1 +
 arch/arm/mach-vexpress/tc2_pm.c | 287 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 297 insertions(+)
 create mode 100644 arch/arm/mach-vexpress/tc2_pm.c

diff --git a/arch/arm/mach-vexpress/Kconfig b/arch/arm/mach-vexpress/Kconfig
index b8bbabec63..4b082f8912 100644
--- a/arch/arm/mach-vexpress/Kconfig
+++ b/arch/arm/mach-vexpress/Kconfig
@@ -66,4 +66,13 @@ config ARCH_VEXPRESS_DCSCB
 	  This is needed to provide CPU and cluster power management
 	  on RTSM implementing big.LITTLE.
 
+config ARCH_VEXPRESS_TC2_PM
+	bool "Versatile Express TC2 power management"
+	depends on MCPM
+	select VEXPRESS_SPC
+	select ARM_CCI
+	help
+	  Support for CPU and cluster power management on Versatile Express
+	  with a TC2 (A15x2 A7x3) big.LITTLE core tile.
+
 endmenu

diff --git a/arch/arm/mach-vexpress/Makefile b/arch/arm/mach-vexpress/Makefile
index 48ba89a814..0853da6395 100644
--- a/arch/arm/mach-vexpress/Makefile
+++ b/arch/arm/mach-vexpress/Makefile
@@ -7,5 +7,6 @@ ccflags-$(CONFIG_ARCH_MULTIPLATFORM) := -I$(srctree)/$(src)/include \
 obj-y					:= v2m.o
 obj-$(CONFIG_ARCH_VEXPRESS_CA9X4)	+= ct-ca9x4.o
 obj-$(CONFIG_ARCH_VEXPRESS_DCSCB)	+= dcscb.o dcscb_setup.o
+obj-$(CONFIG_ARCH_VEXPRESS_TC2_PM)	+= tc2_pm.o
 obj-$(CONFIG_SMP)			+= platsmp.o
 obj-$(CONFIG_HOTPLUG_CPU)		+= hotplug.o

diff --git a/arch/arm/mach-vexpress/tc2_pm.c b/arch/arm/mach-vexpress/tc2_pm.c
new file mode 100644
index 0000000000..26dbe782fb
--- /dev/null
+++ b/arch/arm/mach-vexpress/tc2_pm.c
@@ -0,0 +1,287 @@
+/*
+ * arch/arm/mach-vexpress/tc2_pm.c - TC2 power management support
+ *
+ * Created by:	Nicolas Pitre, October 2012
+ * Copyright:	(C) 2012-2013  Linaro Limited
+ *
+ * Some portions of this file were originally written by Achin Gupta
+ * Copyright:	(C) 2012  ARM Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/spinlock.h>
+#include <linux/errno.h>
+
+#include <asm/mcpm.h>
+#include <asm/proc-fns.h>
+#include <asm/cacheflush.h>
+#include <asm/cputype.h>
+#include <asm/cp15.h>
+
+#include <linux/arm-cci.h>
+#include <linux/vexpress.h>
+
+/*
+ * We can't use regular spinlocks. In the switcher case, it is possible
+ * for an outbound CPU to call power_down() after its inbound counterpart
+ * is already live using the same logical CPU number which trips lockdep
+ * debugging.
+ */
+static arch_spinlock_t tc2_pm_lock = __ARCH_SPIN_LOCK_UNLOCKED;
+
+#define TC2_CLUSTERS			2
+#define TC2_MAX_CPUS_PER_CLUSTER	3
+
+/* Keep per-cpu usage count to cope with unordered up/down requests */
+static int tc2_pm_use_count[TC2_MAX_CPUS_PER_CLUSTER][TC2_CLUSTERS];
+
+#define tc2_cluster_unused(cluster) \
+	(!tc2_pm_use_count[0][cluster] && \
+	 !tc2_pm_use_count[1][cluster] && \
+	 !tc2_pm_use_count[2][cluster])
+
+static int tc2_pm_power_up(unsigned int cpu, unsigned int cluster)
+{
+	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
+	if (cluster >= TC2_CLUSTERS || cpu >= ve_spc_get_nr_cpus(cluster))
+		return -EINVAL;
+
+	/*
+	 * Since this is called with IRQs enabled, and no arch_spin_lock_irq
+	 * variant exists, we need to disable IRQs manually here.
+	 */
+	local_irq_disable();
+	arch_spin_lock(&tc2_pm_lock);
+
+	if (tc2_cluster_unused(cluster))
+		ve_spc_powerdown(cluster, false);
+
+	tc2_pm_use_count[cpu][cluster]++;
+	if (tc2_pm_use_count[cpu][cluster] == 1) {
+		ve_spc_set_resume_addr(cluster, cpu,
+				       virt_to_phys(mcpm_entry_point));
+		ve_spc_cpu_wakeup_irq(cluster, cpu, true);
+	} else if (tc2_pm_use_count[cpu][cluster] != 2) {
+		/*
+		 * The only possible values are:
+		 * 0 = CPU down
+		 * 1 = CPU (still) up
+		 * 2 = CPU requested to be up before it had a chance
+		 *     to actually make itself down.
+		 * Any other value is a bug.
+		 */
+		BUG();
+	}
+
+	arch_spin_unlock(&tc2_pm_lock);
+	local_irq_enable();
+
+	return 0;
+}
+
+static void tc2_pm_power_down(void)
+{
+	unsigned int mpidr, cpu, cluster;
+	bool last_man = false, skip_wfi = false;
+
+	mpidr = read_cpuid_mpidr();
+	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+
+	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
+	BUG_ON(cluster >= TC2_CLUSTERS || cpu >= TC2_MAX_CPUS_PER_CLUSTER);
+
+	__mcpm_cpu_going_down(cpu, cluster);
+
+	arch_spin_lock(&tc2_pm_lock);
+	BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
+	tc2_pm_use_count[cpu][cluster]--;
+	if (tc2_pm_use_count[cpu][cluster] == 0) {
+		ve_spc_cpu_wakeup_irq(cluster, cpu, true);
+		if (tc2_cluster_unused(cluster)) {
+			ve_spc_powerdown(cluster, true);
+			ve_spc_global_wakeup_irq(true);
+			last_man = true;
+		}
+	} else if (tc2_pm_use_count[cpu][cluster] == 1) {
+		/*
+		 * A power_up request went ahead of us.
+		 * Even if we do not want to shut this CPU down,
+		 * the caller expects a certain state as if the WFI
+		 * was aborted.  So let's continue with cache cleaning.
+		 */
+		skip_wfi = true;
+	} else
+		BUG();
+
+	if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
+		arch_spin_unlock(&tc2_pm_lock);
+
+		if (read_cpuid_part_number() == ARM_CPU_PART_CORTEX_A15) {
+			/*
+			 * On the Cortex-A15 we need to disable
+			 * L2 prefetching before flushing the cache.
+			 */
+			asm volatile(
+			"mcr	p15, 1, %0, c15, c0, 3 \n\t"
+			"isb	\n\t"
+			"dsb	"
+			: : "r" (0x400) );
+		}
+
+		/*
+		 * We need to disable and flush the whole (L1 and L2) cache.
+		 * Let's do it in the safest possible way i.e. with
+		 * no memory access within the following sequence
+		 * including the stack.
+		 */
+		asm volatile(
+		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
+		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
+		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
+		"isb	\n\t"
+		"bl	v7_flush_dcache_all \n\t"
+		"clrex	\n\t"
+		"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t"
+		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
+		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
+		"isb	\n\t"
+		"dsb	"
+		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
+		      "r9","r10","r11","lr","memory");
+
+		cci_disable_port_by_cpu(mpidr);
+
+		__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
+	} else {
+		/*
+		 * If last man then undo any setup done previously.
+		 */
+		if (last_man) {
+			ve_spc_powerdown(cluster, false);
+			ve_spc_global_wakeup_irq(false);
+		}
+
+		arch_spin_unlock(&tc2_pm_lock);
+
+		/*
+		 * We need to disable and flush only the L1 cache.
+		 * Let's do it in the safest possible way as above.
+		 */
+		asm volatile(
+		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
+		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
+		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
+		"isb	\n\t"
+		"bl	v7_flush_dcache_louis \n\t"
+		"clrex	\n\t"
+		"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t"
+		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
+		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
+		"isb	\n\t"
+		"dsb	"
+		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
+		      "r9","r10","r11","lr","memory");
+	}
+
+	__mcpm_cpu_down(cpu, cluster);
+
+	/* Now we are prepared for power-down, do it: */
+	if (!skip_wfi)
+		wfi();
+
+	/* Not dead at this point?  Let our caller cope. */
+}
+
+static void tc2_pm_powered_up(void)
+{
+	unsigned int mpidr, cpu, cluster;
+	unsigned long flags;
+
+	mpidr = read_cpuid_mpidr();
+	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+
+	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
+	BUG_ON(cluster >= TC2_CLUSTERS || cpu >= TC2_MAX_CPUS_PER_CLUSTER);
+
+	local_irq_save(flags);
+	arch_spin_lock(&tc2_pm_lock);
+
+	if (tc2_cluster_unused(cluster)) {
+		ve_spc_powerdown(cluster, false);
+		ve_spc_global_wakeup_irq(false);
+	}
+
+	if (!tc2_pm_use_count[cpu][cluster])
+		tc2_pm_use_count[cpu][cluster] = 1;
+
+	ve_spc_cpu_wakeup_irq(cluster, cpu, false);
+	ve_spc_set_resume_addr(cluster, cpu, 0);
+
+	arch_spin_unlock(&tc2_pm_lock);
+	local_irq_restore(flags);
+}
+
+static const struct mcpm_platform_ops tc2_pm_power_ops = {
+	.power_up	= tc2_pm_power_up,
+	.power_down	= tc2_pm_power_down,
+	.powered_up	= tc2_pm_powered_up,
+};
+
+static bool __init tc2_pm_usage_count_init(void)
+{
+	unsigned int mpidr, cpu, cluster;
+
+	mpidr = read_cpuid_mpidr();
+	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+
+	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
+	if (cluster >= TC2_CLUSTERS || cpu >= ve_spc_get_nr_cpus(cluster)) {
+		pr_err("%s: boot CPU is out of bound!\n", __func__);
+		return false;
+	}
+	tc2_pm_use_count[cpu][cluster] = 1;
+	return true;
+}
+
+/*
+ * Enable cluster-level coherency, in preparation for turning on the MMU.
+ */
+static void __naked tc2_pm_power_up_setup(unsigned int affinity_level)
+{
+	asm volatile (" \n"
+"	cmp	r0, #1 \n"
+"	bxne	lr \n"
+"	b	cci_enable_port_for_self ");
+}
+
+static int __init tc2_pm_init(void)
+{
+	int ret;
+
+	ret = ve_spc_init();
+	if (ret)
+		return ret;
+
+	if (!cci_probed())
+		return -ENODEV;
+
+	if (!tc2_pm_usage_count_init())
+		return -EINVAL;
+
+	ret = mcpm_platform_register(&tc2_pm_power_ops);
+	if (!ret) {
+		mcpm_sync_init(tc2_pm_power_up_setup);
+		pr_info("TC2 power management initialized\n");
+	}
+	return ret;
+}
+
+early_initcall(tc2_pm_init);
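
---

For readers less familiar with MCPM: nothing on the TC2 platform calls the ops above
directly. Once mcpm_platform_register() has installed them, they are invoked through
the generic MCPM layer when a secondary CPU is booted or hot-unplugged. The snippet
below is a minimal, illustrative sketch of that call path, assuming the MCPM API used
by this series (mcpm_cpu_power_up() and mcpm_cpu_power_down() from <asm/mcpm.h>); the
example_* function names are hypothetical, and the real call sites live in the common
MCPM SMP glue, not in this patch.

	/* Illustrative sketch only -- not part of this patch. */
	#include <asm/mcpm.h>
	#include <asm/cputype.h>
	#include <asm/smp_plat.h>

	/*
	 * Bringing a secondary CPU online: translate the logical CPU number
	 * into a physical cpu/cluster pair and ask the registered backend
	 * (tc2_pm_power_up() above) to power it on and release it into
	 * mcpm_entry_point.
	 */
	static int example_boot_secondary(unsigned int cpu)
	{
		unsigned int mpidr = cpu_logical_map(cpu);
		unsigned int pcpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
		unsigned int pcluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);

		return mcpm_cpu_power_up(pcpu, pcluster);  /* -> tc2_pm_power_up() */
	}

	/*
	 * Hot-unplug: runs on the dying CPU itself, so it takes no target;
	 * the backend derives its own cpu/cluster from the MPIDR and only
	 * returns if the power-down was aborted by a racing power_up request.
	 */
	static void example_cpu_die(void)
	{
		mcpm_cpu_power_down();  /* -> tc2_pm_power_down() */
	}

The asymmetry is deliberate: power_up takes an explicit cpu/cluster pair because it
runs on another, already-live CPU, while power_down takes no arguments because it runs
on the CPU that is going away and works out its own identity from the MPIDR, exactly
as tc2_pm_power_down() does above.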