From patchwork Fri Nov 7 16:25:30 2014
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 40434
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, will.deacon@arm.com, Mark Rutland
Subject: [PATCH 05/11] arm: perf: reject multi-pmu groups
Date:
Fri, 7 Nov 2014 16:25:30 +0000
Message-Id: <1415377536-12841-6-git-send-email-mark.rutland@arm.com>
In-Reply-To: <1415377536-12841-1-git-send-email-mark.rutland@arm.com>
References: <1415377536-12841-1-git-send-email-mark.rutland@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

An event group spanning multiple CPU PMUs can never be scheduled, as at least one of its events is always guaranteed to fail to be scheduled; such groups are therefore nonsensical. Additionally, groups spanning multiple PMUs would require extra validation logic throughout the driver to prevent the CPU PMUs from stepping on each other's internal state. Given that such groups are nonsensical to begin with, the simple option is to reject them entirely. Groups consisting of software events and CPU PMU events are benign, so long as the CPU PMU events target only a single CPU PMU.

This patch ensures that we reject the creation of event groups which span multiple CPU PMUs, avoiding the issues described above.

The addition of this_pmu to the validation logic made the name fake_pmu more confusing than it already was, so it is renamed to the more accurate hw_events. As hw_events was being modified anyway, the initialisation of hw_events.used_mask is also simplified by using a designated initializer rather than the existing memset.
Signed-off-by: Mark Rutland
---
 arch/arm/kernel/perf_event.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
index b00f6aa..41dcfc0 100644
--- a/arch/arm/kernel/perf_event.c
+++ b/arch/arm/kernel/perf_event.c
@@ -258,13 +258,17 @@ out:
 }
 
 static int
-validate_event(struct pmu_hw_events *hw_events,
+validate_event(struct pmu *this_pmu,
+	       struct pmu_hw_events *hw_events,
 	       struct perf_event *event)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
 
 	if (is_software_event(event))
 		return 1;
+
+	if (event->pmu != this_pmu)
+		return 0;
 
 	if (event->state < PERF_EVENT_STATE_OFF)
 		return 1;
@@ -279,23 +283,20 @@ static int
 validate_group(struct perf_event *event)
 {
 	struct perf_event *sibling, *leader = event->group_leader;
-	struct pmu_hw_events fake_pmu;
-
-	/*
-	 * Initialise the fake PMU. We only need to populate the
-	 * used_mask for the purposes of validation.
-	 */
-	memset(&fake_pmu.used_mask, 0, sizeof(fake_pmu.used_mask));
+	struct pmu *this_pmu = event->pmu;
+	struct pmu_hw_events hw_events = {
+		.used_mask = { 0 },
+	};
 
-	if (!validate_event(&fake_pmu, leader))
+	if (!validate_event(this_pmu, &hw_events, leader))
 		return -EINVAL;
 
 	list_for_each_entry(sibling, &leader->sibling_list, group_entry) {
-		if (!validate_event(&fake_pmu, sibling))
+		if (!validate_event(this_pmu, &hw_events, sibling))
 			return -EINVAL;
 	}
 
-	if (!validate_event(&fake_pmu, event))
+	if (!validate_event(this_pmu, &hw_events, event))
 		return -EINVAL;
 
 	return 0;