From patchwork Fri Nov 7 16:25:28 2014
X-Patchwork-Submitter: Mark Rutland <mark.rutland@arm.com>
X-Patchwork-Id: 40433
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, will.deacon@arm.com, Mark Rutland <mark.rutland@arm.com>
Subject: [PATCH 03/11] arm: perf: treat PMUs as CPU affine
Date: Fri, 7 Nov 2014 16:25:28 +0000
Message-Id: <1415377536-12841-4-git-send-email-mark.rutland@arm.com>
In-Reply-To: <1415377536-12841-1-git-send-email-mark.rutland@arm.com>
References: <1415377536-12841-1-git-send-email-mark.rutland@arm.com>

In multi-cluster systems the PMUs can differ across clusters, so our
logical PMU may not be able to schedule events on all CPUs.

This patch adds a cpumask encoding which CPUs a PMU driver supports
controlling events for, and limits the driver to scheduling events,
and to enabling and disabling the physical PMUs, on those CPUs only.
For now the cpumask is set to match all CPUs.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm/include/asm/pmu.h       |  1 +
 arch/arm/kernel/perf_event.c     | 25 +++++++++++++++++++++++++
 arch/arm/kernel/perf_event_cpu.c | 10 +++++++++-
 3 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/pmu.h b/arch/arm/include/asm/pmu.h
index b1596bd..b630a44 100644
--- a/arch/arm/include/asm/pmu.h
+++ b/arch/arm/include/asm/pmu.h
@@ -92,6 +92,7 @@ struct pmu_hw_events {
 struct arm_pmu {
 	struct pmu	pmu;
 	cpumask_t	active_irqs;
+	cpumask_t	supported_cpus;
 	char		*name;
 	irqreturn_t	(*handle_irq)(int irq_num, void *dev);
 	void		(*enable)(struct perf_event *event);
diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
index e34934f..9ad21ab 100644
--- a/arch/arm/kernel/perf_event.c
+++ b/arch/arm/kernel/perf_event.c
@@ -11,6 +11,7 @@
  */
 #define pr_fmt(fmt) "hw perfevents: " fmt
 
+#include <linux/cpumask.h>
 #include <linux/kernel.h>
 #include <linux/platform_device.h>
 #include <linux/pm_runtime.h>
@@ -223,6 +224,10 @@ armpmu_add(struct perf_event *event, int flags)
 	int idx;
 	int err = 0;
 
+	/* An event following a process won't be stopped earlier */
+	if (!cpumask_test_cpu(smp_processor_id(), &armpmu->supported_cpus))
+		return -ENOENT;
+
 	perf_pmu_disable(event->pmu);
 
 	/* If we don't have a space for the counter then finish early. */
@@ -439,6 +444,17 @@ static int armpmu_event_init(struct perf_event *event)
 	int err = 0;
 	atomic_t *active_events = &armpmu->active_events;
 
+	/*
+	 * Reject CPU-affine events for CPUs that are of a different class to
+	 * that which this PMU handles. Process-following events (where
+	 * event->cpu == -1) can be migrated between CPUs, and thus we have to
+	 * reject them later (in armpmu_add) if they're scheduled on a
+	 * different class of CPU.
+	 */
+	if (event->cpu != -1 &&
+		!cpumask_test_cpu(event->cpu, &armpmu->supported_cpus))
+		return -ENOENT;
+
 	/* does not support taken branch sampling */
 	if (has_branch_stack(event))
 		return -EOPNOTSUPP;
@@ -474,6 +490,10 @@ static void armpmu_enable(struct pmu *pmu)
 	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
 	int enabled = bitmap_weight(hw_events->used_mask, armpmu->num_events);
 
+	/* For task-bound events we may be called on other CPUs */
+	if (!cpumask_test_cpu(smp_processor_id(), &armpmu->supported_cpus))
+		return;
+
 	if (enabled)
 		armpmu->start(armpmu);
 }
@@ -481,6 +501,11 @@ static void armpmu_enable(struct pmu *pmu)
 static void armpmu_disable(struct pmu *pmu)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(pmu);
+
+	/* For task-bound events we may be called on other CPUs */
+	if (!cpumask_test_cpu(smp_processor_id(), &armpmu->supported_cpus))
+		return;
+
 	armpmu->stop(armpmu);
 }
 
diff --git a/arch/arm/kernel/perf_event_cpu.c b/arch/arm/kernel/perf_event_cpu.c
index 59c0642..ce35149 100644
--- a/arch/arm/kernel/perf_event_cpu.c
+++ b/arch/arm/kernel/perf_event_cpu.c
@@ -169,11 +169,15 @@ static int cpu_pmu_request_irq(struct arm_pmu *cpu_pmu, irq_handler_t handler)
 static int cpu_pmu_notify(struct notifier_block *b, unsigned long action,
 			  void *hcpu)
 {
+	int cpu = (unsigned long)hcpu;
 	struct arm_pmu *pmu = container_of(b, struct arm_pmu, hotplug_nb);
 
 	if ((action & ~CPU_TASKS_FROZEN) != CPU_STARTING)
 		return NOTIFY_DONE;
 
+	if (!cpumask_test_cpu(cpu, &pmu->supported_cpus))
+		return NOTIFY_DONE;
+
 	if (pmu->reset)
 		pmu->reset(pmu);
 	else
@@ -209,7 +213,8 @@ static int cpu_pmu_init(struct arm_pmu *cpu_pmu)
 
 	/* Ensure the PMU has sane values out of reset. */
 	if (cpu_pmu->reset)
-		on_each_cpu(cpu_pmu->reset, cpu_pmu, 1);
+		on_each_cpu_mask(&cpu_pmu->supported_cpus, cpu_pmu->reset,
+				 cpu_pmu, 1);
 
 	/* If no interrupts available, set the corresponding capability flag */
 	if (!platform_get_irq(cpu_pmu->plat_device, 0))
@@ -311,6 +316,9 @@ static int cpu_pmu_device_probe(struct platform_device *pdev)
 	cpu_pmu = pmu;
 	cpu_pmu->plat_device = pdev;
 
+	/* Assume by default that we're on a homogeneous system */
+	cpumask_setall(&pmu->supported_cpus);
+
 	if (node && (of_id = of_match_node(cpu_pmu_of_device_ids, pdev->dev.of_node))) {
 		init_fn = of_id->data;
 		ret = init_fn(pmu);
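
The supported_cpus pattern above can also be illustrated outside the
kernel. What follows is a minimal user-space sketch of the same idea,
not part of the patch: fake_pmu, fake_pmu_add_event() and the choice of
CPUs 2-3 as the "big" cluster are invented for illustration, with
cpu_set_t standing in for cpumask_t and sched_getcpu() for
smp_processor_id().

#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <stdio.h>

/* Hypothetical stand-in for struct arm_pmu. */
struct fake_pmu {
	const char *name;
	cpu_set_t supported_cpus;
};

/*
 * Mirrors the check added to armpmu_add(): refuse to schedule an event
 * unless the CPU we are currently running on is in the PMU's mask.
 */
static int fake_pmu_add_event(struct fake_pmu *pmu)
{
	int cpu = sched_getcpu();

	if (cpu < 0 || !CPU_ISSET(cpu, &pmu->supported_cpus))
		return -ENOENT;

	printf("%s: event scheduled on CPU %d\n", pmu->name, cpu);
	return 0;
}

int main(void)
{
	struct fake_pmu big = { .name = "big-cluster-pmu" };

	/*
	 * Pretend CPUs 2 and 3 form the "big" cluster; the patch's
	 * cpumask_setall() default would instead cover every CPU.
	 */
	CPU_ZERO(&big.supported_cpus);
	CPU_SET(2, &big.supported_cpus);
	CPU_SET(3, &big.supported_cpus);

	if (fake_pmu_add_event(&big) == -ENOENT)
		printf("CPU %d not supported by %s, would fall back\n",
		       sched_getcpu(), big.name);
	return 0;
}

As in the patch, -ENOENT is the interesting return value: at
armpmu_event_init() time it tells the perf core this PMU cannot handle
the event so another PMU may be tried, and in armpmu_add() it stops a
task-following event from being scheduled on a CPU of the wrong class.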