From patchwork Fri Nov 7 16:25:29 2014
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 40442
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, will.deacon@arm.com,
	Mark Rutland <mark.rutland@arm.com>
Subject: [PATCH 04/11] arm: perf: filter unschedulable events
Date: Fri, 7 Nov 2014 16:25:29 +0000
Message-Id: <1415377536-12841-5-git-send-email-mark.rutland@arm.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1415377536-12841-1-git-send-email-mark.rutland@arm.com>
References: <1415377536-12841-1-git-send-email-mark.rutland@arm.com>

Different CPU microarchitectures implement different PMU events, and
thus events which can be scheduled on one microarchitecture cannot
necessarily be scheduled on another, and vice versa. Some architected
events behave differently across microarchitectures, and thus cannot be
meaningfully summed. Due to this, we reject the scheduling of an event
on a CPU of a different microarchitecture to the one the event targets.

When the core perf code is scheduling events and encounters an event
which cannot be scheduled, it stops attempting to schedule further
events. As the perf core periodically rotates the list of events, for
some proportion of the time unschedulable events will block schedulable
events, resulting in low utilisation of the hardware counters.

This patch implements a pmu::filter_match callback so that we can
detect and skip such events early during scheduling, before they can
block the schedulable events. This prevents the low HW counter
utilisation issue.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm/kernel/perf_event.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
index 9ad21ab..b00f6aa 100644
--- a/arch/arm/kernel/perf_event.c
+++ b/arch/arm/kernel/perf_event.c
@@ -509,6 +509,18 @@ static void armpmu_disable(struct pmu *pmu)
 	armpmu->stop(armpmu);
 }
 
+/*
+ * In heterogeneous systems, events are specific to a particular
+ * microarchitecture, and aren't suitable for another. Thus, only match CPUs of
+ * the same microarchitecture.
+ */
+static int armpmu_filter_match(struct perf_event *event)
+{
+	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
+	unsigned int cpu = smp_processor_id();
+	return cpumask_test_cpu(cpu, &armpmu->supported_cpus);
+}
+
 #ifdef CONFIG_PM_RUNTIME
 static int armpmu_runtime_resume(struct device *dev)
 {
@@ -549,6 +561,7 @@ static void armpmu_init(struct arm_pmu *armpmu)
 		.start		= armpmu_start,
 		.stop		= armpmu_stop,
 		.read		= armpmu_read,
+		.filter_match	= armpmu_filter_match,
 	};
 }
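
For readers new to the mechanism, the stand-alone C sketch below models the
behaviour the commit message describes: each event belongs to a PMU with a
mask of supported CPUs, and a per-event filter callback lets the scheduling
pass step over events that cannot run on the current CPU instead of letting
them block later, schedulable events. This is a minimal illustrative sketch,
not the kernel code; all names in it (fake_pmu, fake_event, fake_filter_match,
schedule_events) are invented for the example.

#include <stdio.h>

struct fake_pmu {
	unsigned long supported_cpus;	/* bitmask of CPUs this PMU can count on */
};

struct fake_event {
	const char *name;
	struct fake_pmu *pmu;
};

/* Rough analogue of pmu::filter_match: non-zero if the event can run on @cpu. */
static int fake_filter_match(const struct fake_event *event, unsigned int cpu)
{
	return (event->pmu->supported_cpus >> cpu) & 1;
}

/*
 * Walk the event list for @cpu, skipping events whose PMU does not support
 * this CPU rather than aborting the whole scheduling pass.
 */
static void schedule_events(const struct fake_event **events, int nr,
			    unsigned int cpu)
{
	for (int i = 0; i < nr; i++) {
		if (!fake_filter_match(events[i], cpu)) {
			printf("cpu%u: skip %s (unsupported microarchitecture)\n",
			       cpu, events[i]->name);
			continue;
		}
		printf("cpu%u: schedule %s\n", cpu, events[i]->name);
	}
}

int main(void)
{
	struct fake_pmu big_pmu    = { .supported_cpus = 0xc };	/* CPUs 2-3 */
	struct fake_pmu little_pmu = { .supported_cpus = 0x3 };	/* CPUs 0-1 */

	struct fake_event big_cycles    = { "big/cycles",    &big_pmu };
	struct fake_event little_cycles = { "little/cycles", &little_pmu };
	const struct fake_event *events[] = { &big_cycles, &little_cycles };

	schedule_events(events, 2, 0);	/* only little/cycles is schedulable */
	schedule_events(events, 2, 3);	/* only big/cycles is schedulable */
	return 0;
}

The point of the filter is the continue: an event targeting the other
cluster is stepped over rather than terminating the pass, so events that
do match the current CPU still get hardware counters.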