From patchwork Mon Apr 14 01:42:22 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Neil Zhang
X-Patchwork-Id: 28302
From: Neil Zhang
To: ,
CC: , , Sudeep KarkadaNagesha , Neil Zhang
Subject: [PATCH v2] ARM: perf: save/restore pmu registers in pm notifier
Date: Mon, 14 Apr 2014 09:42:22 +0800
Message-ID: <1397439742-28337-1-git-send-email-zhangwm@marvell.com>
X-Mailer: git-send-email 1.7.9.5
X-Mailing-List: linux-kernel@vger.kernel.org

From: Sudeep KarkadaNagesha

This adds core support for saving and restoring the CPU PMU registers
across suspend/resume, i.e. across the deeper C-states in cpuidle terms.
The patch implements save/restore only for the ARMv7 PMU registers; it
would need to be extended to cover XScale and ARMv6 if required.

[Neil] We found that DS-5 did not work on our CA7-based SoCs. After
debugging, we found that the PMU registers were lost when the core
powered down. Sudeep posted a patch to fix this about two years ago,
but it never made it into mainline, so this simply ports that patch.
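For context, the CPU_PM_ENTER and CPU_PM_EXIT events handled by the new
notifier are raised by the platform's low-power code via cpu_pm_enter()
and cpu_pm_exit(). The sketch below shows that calling side in minimal
form; my_deep_idle() and platform_enter_deep_idle() are placeholders for
platform-specific code, not functions added by this patch:

#include <linux/cpu_pm.h>

/* Placeholder for the platform-specific power-down sequence. */
static void platform_enter_deep_idle(void);

static int my_deep_idle(void)
{
        int ret;

        /* Registered CPU PM notifiers (such as the PMU one added here) save per-CPU state. */
        ret = cpu_pm_enter();
        if (ret)
                return ret;     /* a notifier vetoed the low-power transition */

        /* Core logic, including the PMU, may lose power here. */
        platform_enter_deep_idle();

        /* Notifiers restore what they saved, e.g. via ->restore_regs(). */
        cpu_pm_exit();

        return 0;
}

With this in place, a perf or DS-5 session keeps its counter
configuration across such a power-down instead of silently losing it.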
Signed-off-by: Sudeep KarkadaNagesha
Signed-off-by: Neil Zhang
---
 arch/arm/include/asm/pmu.h       |   11 +++++++++
 arch/arm/kernel/perf_event_cpu.c |   29 ++++++++++++++++++++++-
 arch/arm/kernel/perf_event_v7.c  |   47 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 86 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/pmu.h b/arch/arm/include/asm/pmu.h
index ae1919b..f37f048 100644
--- a/arch/arm/include/asm/pmu.h
+++ b/arch/arm/include/asm/pmu.h
@@ -62,6 +62,15 @@ struct pmu_hw_events {
 	raw_spinlock_t pmu_lock;
 };
 
+struct cpupmu_regs {
+	u32 pmc;
+	u32 pmcntenset;
+	u32 pmuseren;
+	u32 pmintenset;
+	u32 pmxevttype[8];
+	u32 pmxevtcnt[8];
+};
+
 struct arm_pmu {
 	struct pmu	pmu;
 	cpumask_t	active_irqs;
@@ -83,6 +92,8 @@ struct arm_pmu {
 	int	(*request_irq)(struct arm_pmu *, irq_handler_t handler);
 	void	(*free_irq)(struct arm_pmu *);
 	int	(*map_event)(struct perf_event *event);
+	void	(*save_regs)(struct arm_pmu *, struct cpupmu_regs *);
+	void	(*restore_regs)(struct arm_pmu *, struct cpupmu_regs *);
 	int	num_events;
 	atomic_t	active_events;
 	struct mutex	reserve_mutex;
diff --git a/arch/arm/kernel/perf_event_cpu.c b/arch/arm/kernel/perf_event_cpu.c
index 51798d7..7f1c756 100644
--- a/arch/arm/kernel/perf_event_cpu.c
+++ b/arch/arm/kernel/perf_event_cpu.c
@@ -19,6 +19,7 @@
 #define pr_fmt(fmt) "CPU PMU: " fmt
 
 #include
+#include <linux/cpu_pm.h>
 #include
 #include
 #include
@@ -39,6 +40,7 @@ static DEFINE_PER_CPU(struct arm_pmu *, percpu_pmu);
 static DEFINE_PER_CPU(struct perf_event * [ARMPMU_MAX_HWEVENTS], hw_events);
 static DEFINE_PER_CPU(unsigned long [BITS_TO_LONGS(ARMPMU_MAX_HWEVENTS)], used_mask);
 static DEFINE_PER_CPU(struct pmu_hw_events, cpu_hw_events);
+static DEFINE_PER_CPU(struct cpupmu_regs, cpu_pmu_regs);
 
 /*
  * Despite the names, these two functions are CPU-specific and are used
@@ -217,6 +219,23 @@ static struct notifier_block cpu_pmu_hotplug_notifier = {
 	.notifier_call = cpu_pmu_notify,
 };
 
+static int cpu_pmu_pm_notify(struct notifier_block *b,
+			     unsigned long action, void *hcpu)
+{
+	struct cpupmu_regs *pmuregs = this_cpu_ptr(&cpu_pmu_regs);
+
+	if (action == CPU_PM_ENTER && cpu_pmu->save_regs)
+		cpu_pmu->save_regs(cpu_pmu, pmuregs);
+	else if (action == CPU_PM_EXIT && cpu_pmu->restore_regs)
+		cpu_pmu->restore_regs(cpu_pmu, pmuregs);
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block cpu_pmu_pm_notifier = {
+	.notifier_call = cpu_pmu_pm_notify,
+};
+
 /*
  * PMU platform driver and devicetree bindings.
 */
@@ -349,9 +368,17 @@ static int __init register_pmu_driver(void)
 	if (err)
 		return err;
 
+	err = cpu_pm_register_notifier(&cpu_pmu_pm_notifier);
+	if (err) {
+		unregister_cpu_notifier(&cpu_pmu_hotplug_notifier);
+		return err;
+	}
+
 	err = platform_driver_register(&cpu_pmu_driver);
-	if (err)
+	if (err) {
+		cpu_pm_unregister_notifier(&cpu_pmu_pm_notifier);
 		unregister_cpu_notifier(&cpu_pmu_hotplug_notifier);
+	}
 
 	return err;
 }
diff --git a/arch/arm/kernel/perf_event_v7.c b/arch/arm/kernel/perf_event_v7.c
index f4ef398..29ae8f1 100644
--- a/arch/arm/kernel/perf_event_v7.c
+++ b/arch/arm/kernel/perf_event_v7.c
@@ -1237,6 +1237,51 @@ static void armv7_pmnc_dump_regs(struct arm_pmu *cpu_pmu)
 }
 #endif
 
+static void armv7pmu_save_regs(struct arm_pmu *cpu_pmu,
+				struct cpupmu_regs *regs)
+{
+	unsigned int cnt;
+	asm volatile("mrc p15, 0, %0, c9, c12, 0" : "=r" (regs->pmc));
+	if (!(regs->pmc & ARMV7_PMNC_E))
+		return;
+
+	asm volatile("mrc p15, 0, %0, c9, c12, 1" : "=r" (regs->pmcntenset));
+	asm volatile("mrc p15, 0, %0, c9, c14, 0" : "=r" (regs->pmuseren));
+	asm volatile("mrc p15, 0, %0, c9, c14, 1" : "=r" (regs->pmintenset));
+	asm volatile("mrc p15, 0, %0, c9, c13, 0" : "=r" (regs->pmxevtcnt[0]));
+	for (cnt = ARMV7_IDX_COUNTER0;
+	     cnt <= ARMV7_IDX_COUNTER_LAST(cpu_pmu); cnt++) {
+		armv7_pmnc_select_counter(cnt);
+		asm volatile("mrc p15, 0, %0, c9, c13, 1"
+			     : "=r"(regs->pmxevttype[cnt]));
+		asm volatile("mrc p15, 0, %0, c9, c13, 2"
+			     : "=r"(regs->pmxevtcnt[cnt]));
+	}
+	return;
+}
+
+static void armv7pmu_restore_regs(struct arm_pmu *cpu_pmu,
+				struct cpupmu_regs *regs)
+{
+	unsigned int cnt;
+	if (!(regs->pmc & ARMV7_PMNC_E))
+		return;
+
+	asm volatile("mcr p15, 0, %0, c9, c12, 1" : : "r" (regs->pmcntenset));
+	asm volatile("mcr p15, 0, %0, c9, c14, 0" : : "r" (regs->pmuseren));
+	asm volatile("mcr p15, 0, %0, c9, c14, 1" : : "r" (regs->pmintenset));
+	asm volatile("mcr p15, 0, %0, c9, c13, 0" : : "r" (regs->pmxevtcnt[0]));
+	for (cnt = ARMV7_IDX_COUNTER0;
+	     cnt <= ARMV7_IDX_COUNTER_LAST(cpu_pmu); cnt++) {
+		armv7_pmnc_select_counter(cnt);
+		asm volatile("mcr p15, 0, %0, c9, c13, 1"
+			     : : "r"(regs->pmxevttype[cnt]));
+		asm volatile("mcr p15, 0, %0, c9, c13, 2"
+			     : : "r"(regs->pmxevtcnt[cnt]));
+	}
+	asm volatile("mcr p15, 0, %0, c9, c12, 0" : : "r" (regs->pmc));
+}
+
 static void armv7pmu_enable_event(struct perf_event *event)
 {
 	unsigned long flags;
@@ -1528,6 +1573,8 @@ static void armv7pmu_init(struct arm_pmu *cpu_pmu)
 	cpu_pmu->start		= armv7pmu_start;
 	cpu_pmu->stop		= armv7pmu_stop;
 	cpu_pmu->reset		= armv7pmu_reset;
+	cpu_pmu->save_regs	= armv7pmu_save_regs;
+	cpu_pmu->restore_regs	= armv7pmu_restore_regs;
 	cpu_pmu->max_period	= (1LLU << 32) - 1;
 };
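For reference (not part of the patch), the raw mrc/mcr accesses above use
the architected ARMv7 PMU encodings in the cp15 c9 space. A purely
illustrative sketch with named helpers follows; the helper names are made
up for clarity and only cover three of the registers touched here (PMCR,
PMSELR, PMXEVTYPER):

#include <linux/types.h>

/* PMCR, the overall control register: CRn=c9, opc1=0, CRm=c12, opc2=0. */
static inline u32 read_pmcr(void)
{
	u32 val;
	asm volatile("mrc p15, 0, %0, c9, c12, 0" : "=r" (val));
	return val;
}

/* PMSELR, the counter selector used by armv7_pmnc_select_counter(): c9, c12, 5. */
static inline void write_pmselr(u32 counter)
{
	asm volatile("mcr p15, 0, %0, c9, c12, 5" : : "r" (counter));
}

/* PMXEVTYPER, the event type of the currently selected counter: c9, c13, 1. */
static inline u32 read_pmxevtyper(void)
{
	u32 val;
	asm volatile("mrc p15, 0, %0, c9, c13, 1" : "=r" (val));
	return val;
}

Reading PMCR first and bailing out when ARMV7_PMNC_E is clear keeps the
save/restore paths cheap when no perf session is active.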