From patchwork Thu Jun 22 14:41:38 2017
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 106218
Delivered-To: patch@linaro.org
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Alexander Shishkin, Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra, Zhou Chengming
Subject: [PATCH] perf/core: fix group {cpu,task} validation
Date: Thu, 22 Jun 2017 15:41:38 +0100
Message-Id: <1498142498-15758-1-git-send-email-mark.rutland@arm.com>
X-Mailer: git-send-email 1.9.1
List-ID: <linux-kernel.vger.kernel.org>

Regardless of which events form a group, it does not make sense for the
events to target different tasks and/or CPUs, as this leaves the group
inconsistent and impossible to schedule. The core perf code assumes that
these are consistent across (successfully initialised) groups.
Core perf code only verifies this when moving SW events into a HW
context. Thus, we can violate this requirement for pure SW groups and
pure HW groups, unless the relevant PMU driver happens to perform this
verification itself.

These mismatched groups subsequently wreak havoc elsewhere. For example,
we handle watchpoints as SW events, and reserve watchpoint HW on a
per-cpu basis at pmu::event_init() time to ensure that any event that is
initialised is guaranteed to have a slot at pmu::add() time. However,
the core code only checks the group leader's cpu filter (via
event_filter_match()), and can thus install follower events onto CPUs
violating their (mismatched) CPU filters, potentially installing them
into a CPU without sufficient reserved slots.

This can be triggered with the below test case, resulting in warnings
from arch backends.

  #define _GNU_SOURCE
  #include <linux/hw_breakpoint.h>
  #include <linux/perf_event.h>
  #include <sched.h>
  #include <stdio.h>
  #include <sys/prctl.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
			     int cpu, int group_fd, unsigned long flags)
  {
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
  }

  char watched_char;

  struct perf_event_attr wp_attr = {
	.type = PERF_TYPE_BREAKPOINT,
	.bp_type = HW_BREAKPOINT_RW,
	.bp_addr = (unsigned long)&watched_char,
	.bp_len = 1,
	.size = sizeof(wp_attr),
  };

  int main(int argc, char *argv[])
  {
	int leader, ret;
	cpu_set_t cpus;

	/*
	 * Force use of CPU0 to ensure our CPU0-bound events get scheduled.
	 */
	CPU_ZERO(&cpus);
	CPU_SET(0, &cpus);
	ret = sched_setaffinity(0, sizeof(cpus), &cpus);
	if (ret) {
		printf("Unable to set cpu affinity\n");
		return 1;
	}

	/* open leader event, bound to this task, CPU0 only */
	leader = perf_event_open(&wp_attr, 0, 0, -1, 0);
	if (leader < 0) {
		printf("Couldn't open leader: %d\n", leader);
		return 1;
	}

	/*
	 * Open a follower event that is bound to the same task, but a
	 * different CPU. This means that the group should never be possible to
	 * schedule.
	 */
	ret = perf_event_open(&wp_attr, 0, 1, leader, 0);
	if (ret < 0) {
		printf("Couldn't open mismatched follower: %d\n", ret);
		return 1;
	} else {
		printf("Opened leader/follower with mismatched CPUs\n");
	}

	/*
	 * Open as many independent events as we can, all bound to the same
	 * task, CPU0 only.
	 */
	do {
		ret = perf_event_open(&wp_attr, 0, 0, -1, 0);
	} while (ret >= 0);

	/*
	 * Force enable/disable all events to trigger the erroneous
	 * installation of the follower event.
	 */
	printf("Opened all events. Toggling..\n");
	for (;;) {
		prctl(PR_TASK_PERF_EVENTS_DISABLE, 0, 0, 0, 0);
		prctl(PR_TASK_PERF_EVENTS_ENABLE, 0, 0, 0, 0);
	}

	return 0;
  }

Fix this by validating this requirement regardless of whether we're
moving events.

Signed-off-by: Mark Rutland
Cc: Alexander Shishkin
Cc: Arnaldo Carvalho de Melo
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Zhou Chengming
Cc: linux-kernel@vger.kernel.org
---
 kernel/events/core.c | 39 +++++++++++++++++++--------------------
 1 file changed, 19 insertions(+), 20 deletions(-)

-- 
1.9.1

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 6c4e523..1dca484 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -10010,28 +10010,27 @@ static int perf_event_set_clock(struct perf_event *event, clockid_t clk_id)
 		goto err_context;
 
 	/*
-	 * Do not allow to attach to a group in a different
-	 * task or CPU context:
+	 * Make sure we're both events for the same CPU;
+	 * grouping events for different CPUs is broken; since
+	 * you can never concurrently schedule them anyhow.
 	 */
-	if (move_group) {
-		/*
-		 * Make sure we're both on the same task, or both
-		 * per-cpu events.
-		 */
-		if (group_leader->ctx->task != ctx->task)
-			goto err_context;
+	if (group_leader->cpu != event->cpu)
+		goto err_context;
 
-		/*
-		 * Make sure we're both events for the same CPU;
-		 * grouping events for different CPUs is broken; since
-		 * you can never concurrently schedule them anyhow.
-		 */
-		if (group_leader->cpu != event->cpu)
-			goto err_context;
-	} else {
-		if (group_leader->ctx != ctx)
-			goto err_context;
-	}
+	/*
+	 * Make sure we're both on the same task, or both
+	 * per-cpu events.
+	 */
+	if (group_leader->ctx->task != ctx->task)
+		goto err_context;
+
+	/*
+	 * Do not allow to attach to a group in a different task
+	 * or CPU context. If we're moving SW events, we'll fix
+	 * this up later, so allow that.
+	 */
+	if (!move_group && group_leader->ctx != ctx)
+		goto err_context;
 
 	/*
 	 * Only a group leader can be exclusive or pinned
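[Editor's note] The net effect of the reordered checks in the diff above can be sketched in isolation. The stub types and the helper name below are illustrative only (not kernel code); the three checks mirror the post-patch order: CPU and task consistency are now validated unconditionally, and only the same-context check is relaxed when events are being moved.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stub stand-ins for the kernel's context/event types (illustrative). */
struct ctx { void *task; };
struct evt { int cpu; struct ctx *ctx; };

/*
 * Mirrors the validation order after the patch:
 *  1. leader and new event must share a CPU filter,
 *  2. they must target the same task (or both be per-cpu),
 *  3. they must share a context, unless we're moving SW events
 *     into a HW context (fixed up later in that case).
 */
static bool group_valid(const struct evt *leader, const struct evt *event,
			const struct ctx *ctx, bool move_group)
{
	if (leader->cpu != event->cpu)
		return false;			/* mismatched CPU filters */
	if (leader->ctx->task != ctx->task)
		return false;			/* mismatched tasks */
	if (!move_group && leader->ctx != ctx)
		return false;			/* different context, not moving */
	return true;
}
```

With the pre-patch code, the first two checks only ran under move_group, which is what let the mismatched-CPU follower in the reproducer slip through.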