From patchwork Tue Dec 5 16:13:15 2017
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 120692
From: John Garry
Subject: [RFC PATCH 1/5] perf jevents: add support for pmu events vendor subdirectory
Date: Wed, 6 Dec 2017 00:13:15 +0800
Message-ID: <1512490399-94107-2-git-send-email-john.garry@huawei.com>
In-Reply-To: <1512490399-94107-1-git-send-email-john.garry@huawei.com>
References: <1512490399-94107-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

For some architectures (like arm64), it is necessary to support a vendor sub-directory rather than putting all the JSONs for a given vendor in the same directory. This is because all the events for a vendor would otherwise end up in the same pmu events table, which can cause conflicts when the vendor's custom implemented events do not have the same meaning on its different platforms. In addition, the "perf list" command may list events which are not supported on a given platform.

This patch adds support for an arch/vendor/platform directory hierarchy, while maintaining support for the existing arch/platform structure. With this, each platform always has its own pmu events table.

In the generated file pmu_events.c, each platform table name is in the format pme{_vendor}_platform, like this:

struct pmu_events_map pmu_events_map[] = {
{
	.cpuid = "0x00000000420f5160",
	.version = "v1",
	.type = "core",
	.table = pme_cavium_thunderx2
},
{
	.cpuid = 0,
	.version = 0,
	.type = 0,
	.table = 0,
},
};

or this:

struct pmu_events_map pmu_events_map[] = {
{
	.cpuid = "GenuineIntel-6-56",
	.version = "v5",
	.type = "core",
	.table = pme_broadwellde
},
[snip]
{
	.cpuid = 0,
	.version = 0,
	.type = 0,
	.table = 0,
},
};

Signed-off-by: John Garry --- tools/perf/pmu-events/jevents.c | 57 ++++++++++++++++++++++++++++++++++++++--- 1 file changed, 53 insertions(+), 4 deletions(-) -- 1.9.1 diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c index b578aa2..a0d489e 100644 --- a/tools/perf/pmu-events/jevents.c +++ b/tools/perf/pmu-events/jevents.c @@ -588,7 +588,7 @@ static char *file_name_to_table_name(char *fname) * Derive rest of table name from basename of the JSON file, * replacing hyphens and stripping out .json suffix.
*/ - n = asprintf(&tblname, "pme_%s", basename(fname)); + n = asprintf(&tblname, "pme_%s", fname); if (n < 0) { pr_info("%s: asprintf() error %s for file %s\n", prog, strerror(errno), fname); @@ -598,7 +598,7 @@ static char *file_name_to_table_name(char *fname) for (i = 0; i < strlen(tblname); i++) { c = tblname[i]; - if (c == '-') + if (c == '-' || c == '/') tblname[i] = '_'; else if (c == '.') { tblname[i] = '\0'; @@ -755,15 +755,52 @@ static int get_maxfds(void) static FILE *eventsfp; static char *mapfile; +static int isLeafDir(const char *fpath) +{ + DIR *d; + struct dirent *dir; + int res = 1; + d = opendir(fpath); + if (!d) + return 0; + + while ((dir = readdir(d)) != NULL) { + if (dir-> d_type == DT_DIR && dir->d_name[0] != '.') { + res = 0; + break; + } + } + + closedir(d); + + return res; +} + static int process_one_file(const char *fpath, const struct stat *sb, int typeflag, struct FTW *ftwbuf) { - char *tblname, *bname = (char *) fpath + ftwbuf->base; + char *tblname, *bname; int is_dir = typeflag == FTW_D; int is_file = typeflag == FTW_F; int level = ftwbuf->level; int err = 0; + if (level == 2 && is_dir) { + /* + * For level 2 directory, bname will include parent name, + * like vendor/platform. So search back from platform dir + * to find this. + */ + bname = (char *) fpath + ftwbuf->base - 2; + while (true) { + if (*bname == '/') + break; + bname--; + } + bname++; + } else + bname = (char *) fpath + ftwbuf->base; + pr_debug("%s %d %7jd %-20s %s\n", is_file ? "f" : is_dir ? "d" : "x", level, sb->st_size, bname, fpath); @@ -773,7 +810,7 @@ static int process_one_file(const char *fpath, const struct stat *sb, return 0; /* model directory, reset topic */ - if (level == 1 && is_dir) { + if (level == 1 && is_dir && isLeafDir(fpath)) { if (close_table) print_events_table_suffix(eventsfp); @@ -791,6 +828,18 @@ static int process_one_file(const char *fpath, const struct stat *sb, print_events_table_prefix(eventsfp, tblname); return 0; + } else if (level == 2 && is_dir) { + if (close_table) + print_events_table_suffix(eventsfp); + + tblname = file_name_to_table_name(bname); + if (!tblname) { + pr_info("%s: Error determining table name for %s, exiting\n", prog, + bname); + return -1; + } + + print_events_table_prefix(eventsfp, tblname); } /*
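To make the table naming concrete, below is a small standalone sketch (illustration only, not part of the patch) of what the modified file_name_to_table_name() does with a vendor/platform path: prepend "pme_", map '-' and '/' to '_', and cut the name at the '.json' suffix.

#define _GNU_SOURCE		/* for asprintf() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *sketch_table_name(const char *fname)
{
	char *tblname;
	size_t i;

	if (asprintf(&tblname, "pme_%s", fname) < 0)
		return NULL;

	for (i = 0; i < strlen(tblname); i++) {
		if (tblname[i] == '-' || tblname[i] == '/')
			tblname[i] = '_';
		else if (tblname[i] == '.') {
			tblname[i] = '\0';
			break;
		}
	}
	return tblname;
}

int main(void)
{
	char *name = sketch_table_name("cavium/thunderx2");

	if (name)
		printf("%s\n", name);	/* prints: pme_cavium_thunderx2 */
	free(name);
	return 0;
}

So a level 2 directory such as arch/arm64/cavium/thunderx2 produces the pme_cavium_thunderx2 table shown in the commit message above.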
From patchwork Tue Dec 5 16:13:16 2017
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 120690
From: John Garry
Subject: [RFC PATCH 2/5] perf jevents: add support for arch recommended events
Date: Wed, 6 Dec 2017 00:13:16 +0800
Message-ID: <1512490399-94107-3-git-send-email-john.garry@huawei.com>
In-Reply-To: <1512490399-94107-1-git-send-email-john.garry@huawei.com>
References: <1512490399-94107-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

For some architectures (like arm64), there are architecture-defined recommended events. Vendors are not obliged to follow the recommendation and may implement their own pmu event for a specific event code.

This patch adds support for parsing events from arch-defined recommended JSONs, and then fixing up vendor events when the vendor has implemented them as recommended.

In the vendor JSON, to specify that an event is implemented according to the recommendation, only the event code is added to the JSON entry - no other event elements need to be added, like below:

[
    {
        "EventCode": "0x40",
    },
]

The pmu event parsing checks only for the presence of the "BriefDescription" field to decide this. If "BriefDescription" is present, it is implied that the vendor has implemented their own custom event, and no fixup is done. Other fields are ignored.
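As an illustration of the fixup (using recommended event 0x40 from the armv8 JSON added later in this series), a vendor entry carrying only the event code:

    {
        "EventCode": "0x40",
    },

is matched by event code against the arch-defined recommended entry:

    {
        "PublicDescription": "Attributable Level 1 data cache access, read",
        "EventCode": "0x40",
        "EventName": "L1D_CACHE_RD",
        "BriefDescription": "L1D cache access, read",
    },

and the generated table entry for the vendor platform then carries the recommended name and descriptions, exactly as if the vendor JSON had spelled them out.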
*TODO: update documentation Signed-off-by: John Garry --- tools/perf/pmu-events/jevents.c | 215 ++++++++++++++++++++++++++++++++++++---- 1 file changed, 198 insertions(+), 17 deletions(-) -- 1.9.1 diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c index a0d489e..a820ed4 100644 --- a/tools/perf/pmu-events/jevents.c +++ b/tools/perf/pmu-events/jevents.c @@ -42,6 +42,7 @@ #include #include /* getrlimit */ #include /* getrlimit */ +#include #include #include #include "jsmn.h" @@ -366,6 +367,94 @@ static int print_events_table_entry(void *data, char *name, char *event, return 0; } +struct event_struct { + char *name; + char *event; + char *desc; + char *long_desc; + char *pmu; + char *unit; + char *perpkg; + char *metric_expr; + char *metric_name; + char *metric_group; + LIST_ENTRY(event_struct) list; + char strings[]; +}; + +LIST_HEAD(listhead, event_struct) recommended_events; + +static int save_recommended_events(void *data, char *name, char *event, + char *desc, char *long_desc, + char *pmu, char *unit, char *perpkg, + char *metric_expr, + char *metric_name, char *metric_group) +{ + static int count = 0; + char temp[1024]; + struct event_struct *es; + struct stat *sb = data; + int len = 0; + char *strings; + + /* + * Lazily allocate size of the JSON file to hold the + * strings, which would be more than large enough. + */ + len = sb->st_size; + + es = malloc(sizeof(*es) + len); + if (!es) + return -ENOMEM; + memset(es, 0, sizeof(*es)); + LIST_INSERT_HEAD(&recommended_events, es, list); + + strings = &es->strings[0]; + + if (name) { + es->name = strings; + strings += snprintf(strings, len, "%s", name) + 1; + } + if (event) { + es->event = strings; + strings += snprintf(strings, len, "%s", event) + 1; + } + if (desc) { + es->desc = strings; + strings += snprintf(strings, len, "%s", desc) + 1; + } + if (long_desc) { + es->long_desc = strings; + strings += snprintf(strings, len, "%s", long_desc) + 1; + } + if (pmu) { + es->pmu = strings; + strings += snprintf(strings, len, "%s", pmu) + 1; + } + if (unit) { + es->unit = strings; + strings += snprintf(strings, len, "%s", unit) + 1; + } + if (perpkg) { + es->perpkg = strings; + strings += snprintf(strings, len, "%s", perpkg) + 1; + } + if (metric_expr) { + es->metric_expr = strings; + strings += snprintf(strings, len, "%s", metric_expr) + 1; + } + if (metric_name) { + es->metric_name = strings; + strings += snprintf(strings, len, "%s", metric_name) + 1; + } + if (metric_group) { + es->metric_group = strings; + strings += snprintf(strings, len, "%s", metric_group) + 1; + } + + return 0; +} + static void print_events_table_suffix(FILE *outfp) { fprintf(outfp, "{\n"); @@ -407,6 +496,61 @@ static char *real_event(const char *name, char *event) return event; } +static void fixup_field(char *from, char **to) +{ + /* + * If we already had a valid pointer (string), then + * don't allocate a new one, just reuse and overwrite. 
+ */ + if (!*to) + *to = malloc(strlen(from)); + + strcpy(*to, from); +} + +static int try_fixup(const char *fn, char *event, char **desc, char **name, char **long_desc, char **pmu, char **filter, + char **perpkg, char **unit, char **metric_expr, char **metric_name, char **metric_group) +{ + /* try to find matching event from recommended values */ + struct event_struct *es; + + LIST_FOREACH(es, &recommended_events, list) { + if (!strcmp(event, es->event)) { + /* now fixup */ + if (es->desc) + fixup_field(es->desc, desc); + if (es->name) + fixup_field(es->name, name); + if (es->long_desc) + fixup_field(es->long_desc, long_desc); + if (es->pmu) + fixup_field(es->pmu, pmu); + // if (event_struct->filter) + // fixup_field(event_struct->filter, filter); + if (es->perpkg) + fixup_field(es->perpkg, perpkg); + if (es->unit) + fixup_field(es->unit, unit); + if (es->metric_expr) + fixup_field(es->metric_expr, metric_expr); + if (es->metric_name) + fixup_field(es->metric_name, metric_name); + if (es->metric_group) + fixup_field(es->metric_group, metric_group); + + return 0; + } + } + + pr_err("%s: could not find matching %s for %s\n", prog, event, fn); + return -1; +} + +#define FREE_MEMORIES \ + free(event); free(desc); free(name); free(long_desc); \ + free(extra_desc); free(pmu); free(filter); free(perpkg); \ + free(unit); free(metric_expr); free(metric_name); + /* Call func with each event in the json file */ int json_events(const char *fn, int (*func)(void *data, char *name, char *event, char *desc, @@ -551,20 +695,22 @@ int json_events(const char *fn, if (name) fixname(name); + if (!desc) { + /* + * If we have no valid desc, then fixup *all* values from recommended + * by matching the event. + */ + err = try_fixup(fn, event, &desc, &name, &long_desc, &pmu, &filter, &perpkg, &unit, &metric_expr, + &metric_name, &metric_group); + if (err) { + FREE_MEMORIES + goto out_free; + } + } + err = func(data, name, real_event(name, event), desc, long_desc, pmu, unit, perpkg, metric_expr, metric_name, metric_group); - free(event); - free(desc); - free(name); - free(long_desc); - free(extra_desc); - free(pmu); - free(filter); - free(perpkg); - free(unit); - free(metric_expr); - free(metric_name); - free(metric_group); + FREE_MEMORIES if (err) break; tok += j; @@ -776,6 +922,32 @@ static int isLeafDir(const char *fpath) return res; } +static int isJsonFile(const char *name) +{ + const char *suffix; + + if (strlen(name) < 5) + return 0; + + suffix = name + strlen(name) - 5; + + if (strncmp(suffix, ".json", 5) == 0) + return 1; + return 0; +} + +static int preprocess_level0_files(const char *fpath, const struct stat *sb, + int typeflag, struct FTW *ftwbuf) +{ + int level = ftwbuf->level; + int is_file = typeflag == FTW_F; + + if (level == 1 && is_file && isJsonFile(fpath)) + return json_events(fpath, save_recommended_events, (void *)sb); + + return 0; +} + static int process_one_file(const char *fpath, const struct stat *sb, int typeflag, struct FTW *ftwbuf) { @@ -806,8 +978,10 @@ static int process_one_file(const char *fpath, const struct stat *sb, level, sb->st_size, bname, fpath); /* base dir */ - if (level == 0) - return 0; + if (level == 0) { + LIST_INIT(&recommended_events); + return nftw(fpath, preprocess_level0_files, get_maxfds(), 0); + } /* model directory, reset topic */ if (level == 1 && is_dir && isLeafDir(fpath)) { @@ -869,9 +1043,7 @@ static int process_one_file(const char *fpath, const struct stat *sb, * ignore it. It could be a readme.txt for instance. 
*/ if (is_file) { - char *suffix = bname + strlen(bname) - 5; - - if (strncmp(suffix, ".json", 5)) { + if (!isJsonFile(bname)) { pr_info("%s: Ignoring file without .json suffix %s\n", prog, fpath); return 0; @@ -933,6 +1105,7 @@ int main(int argc, char *argv[]) const char *output_file; const char *start_dirname; struct stat stbuf; + struct event_struct *es1, *es2; prog = basename(argv[0]); if (argc < 4) { @@ -988,6 +1161,14 @@ int main(int argc, char *argv[]) goto empty_map; } + /* Free struct for recommended events */ + es1 = LIST_FIRST(&recommended_events); + while (es1) { + es2 = LIST_NEXT(es1, list); + free(es1); + es1 = es2; + } + if (close_table) print_events_table_suffix(eventsfp);
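The recommended events are kept on a BSD-style linked list built with the LIST_* macros (presumably from <sys/queue.h>; the added #include line above lost its header name in this archive). A minimal self-contained sketch of the same pattern, inserting entries and then looking one up by event string the way try_fixup() does:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/queue.h>

struct event_struct {
	const char *event;
	const char *name;
	LIST_ENTRY(event_struct) list;
};

static LIST_HEAD(, event_struct) recommended = LIST_HEAD_INITIALIZER(recommended);

static void add_event(const char *event, const char *name)
{
	struct event_struct *es = calloc(1, sizeof(*es));

	if (!es)
		return;
	es->event = event;
	es->name = name;
	LIST_INSERT_HEAD(&recommended, es, list);
}

int main(void)
{
	struct event_struct *es;

	add_event("event=0x40", "L1D_CACHE_RD");
	add_event("event=0x41", "L1D_CACHE_WR");

	/* what try_fixup() does: match the vendor event string */
	LIST_FOREACH(es, &recommended, list)
		if (!strcmp(es->event, "event=0x40"))
			printf("matched %s\n", es->name);

	return 0;
}

Note that "event=0x40" is used here only as a plausible event string; the exact string stored by save_recommended_events() is whatever json_events() passes through for the entry.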
From patchwork Tue Dec 5 16:13:17 2017
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 120694
From: John Garry
Subject: [RFC PATCH 3/5] perf vendor events arm64: add armv8 recommended events JSON
Date: Wed, 6 Dec 2017 00:13:17 +0800
Message-ID: <1512490399-94107-4-git-send-email-john.garry@huawei.com>
In-Reply-To: <1512490399-94107-1-git-send-email-john.garry@huawei.com>
References: <1512490399-94107-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Add JSON for arm64 IMPLEMENTATION DEFINED recommended events.
Signed-off-by: John Garry Signed-off-by: Shaokun Zhang --- .../pmu-events/arch/arm64/armv8-recommended.json | 452 +++++++++++++++++++++ 1 file changed, 452 insertions(+) create mode 100644 tools/perf/pmu-events/arch/arm64/armv8-recommended.json -- 1.9.1 diff --git a/tools/perf/pmu-events/arch/arm64/armv8-recommended.json b/tools/perf/pmu-events/arch/arm64/armv8-recommended.json new file mode 100644 index 0000000..5584c31 --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/armv8-recommended.json @@ -0,0 +1,452 @@ +[ + { + "PublicDescription": "Attributable Level 1 data cache access, read", + "EventCode": "0x40", + "EventName": "L1D_CACHE_RD", + "BriefDescription": "L1D cache access, read", + }, + { + "PublicDescription": "Attributable Level 1 data cache access, write", + "EventCode": "0x41", + "EventName": "L1D_CACHE_WR", + "BriefDescription": "L1D cache access, write", + }, + { + "PublicDescription": "Attributable Level 1 data cache refill, read", + "EventCode": "0x42", + "EventName": "L1D_CACHE_REFILL_RD", + "BriefDescription": "L1D cache refill, read", + }, + { + "PublicDescription": "Attributable Level 1 data cache refill, write", + "EventCode": "0x43", + "EventName": "L1D_CACHE_REFILL_WR", + "BriefDescription": "L1D cache refill, write", + }, + { + "PublicDescription": "Attributable Level 1 data cache refill, inner", + "EventCode": "0x44", + "EventName": "L1D_CACHE_REFILL_INNER", + "BriefDescription": "L1D cache refill, inner", + }, + { + "PublicDescription": "Attributable Level 1 data cache refill, outer", + "EventCode": "0x45", + "EventName": "L1D_CACHE_REFILL_OUTER", + "BriefDescription": "L1D cache refill, outer", + }, + { + "PublicDescription": "Attributable Level 1 data cache Write-Back, victim", + "EventCode": "0x46", + "EventName": "L1D_CACHE_WB_VICTIM", + "BriefDescription": "L1D cache Write-Back, victim", + }, + { + "PublicDescription": "Level 1 data cache Write-Back, cleaning and coherency", + "EventCode": "0x47", + "EventName": "L1D_CACHE_WB_CLEAN", + "BriefDescription": "L1D cache Write-Back, cleaning and coherency", + }, + { + "PublicDescription": "Attributable Level 1 data cache invalidate", + "EventCode": "0x48", + "EventName": "L1D_CACHE_INVAL", + "BriefDescription": "L1D cache invalidate", + }, + { + "PublicDescription": "Attributable Level 1 data TLB refill, read", + "EventCode": "0x4C", + "EventName": "L1D_TLB_REFILL_RD", + "BriefDescription": "L1D tlb refill, read", + }, + { + "PublicDescription": "Attributable Level 1 data TLB refill, write", + "EventCode": "0x4D", + "EventName": "L1D_TLB_REFILL_WR", + "BriefDescription": "L1D tlb refill, write", + }, + { + "PublicDescription": "Attributable Level 1 data or unified TLB access, read", + "EventCode": "0x4E", + "EventName": "L1D_TLB_RD", + "BriefDescription": "L1D tlb access, read", + }, + { + "PublicDescription": "Attributable Level 1 data or unified TLB access, write", + "EventCode": "0x4F", + "EventName": "L1D_TLB_WR", + "BriefDescription": "L1D tlb access, write", + }, + { + "PublicDescription": "Attributable Level 2 data cache access, read", + "EventCode": "0x50", + "EventName": "L2D_CACHE_RD", + "BriefDescription": "L2D cache access, read", + }, + { + "PublicDescription": "Attributable Level 2 data cache access, write", + "EventCode": "0x51", + "EventName": "L2D_CACHE_WR", + "BriefDescription": "L2D cache access, write", + }, + { + "PublicDescription": "Attributable Level 2 data cache refill, read", + "EventCode": "0x52", + "EventName": "L2D_CACHE_REFILL_RD", + "BriefDescription": "L2D cache refill, read", + 
}, + { + "PublicDescription": "Attributable Level 2 data cache refill, write", + "EventCode": "0x53", + "EventName": "L2D_CACHE_REFILL_WR", + "BriefDescription": "L2D cache refill, write", + }, + { + "PublicDescription": "Attributable Level 2 data cache Write-Back, victim", + "EventCode": "0x56", + "EventName": "L2D_CACHE_WB_VICTIM", + "BriefDescription": "L2D cache Write-Back, victim", + }, + { + "PublicDescription": "Level 2 data cache Write-Back, cleaning and coherency", + "EventCode": "0x57", + "EventName": "L2D_CACHE_WB_CLEAN", + "BriefDescription": "L2D cache Write-Back, cleaning and coherency", + }, + { + "PublicDescription": "Attributable Level 2 data cache invalidate", + "EventCode": "0x58", + "EventName": "L2D_CACHE_INVAL", + "BriefDescription": "L2D cache invalidate", + }, + { + "PublicDescription": "Attributable Level 2 data or unified TLB refill, read", + "EventCode": "0x5c", + "EventName": "L2D_TLB_REFILL_RD", + "BriefDescription": "L2D cache refill, read", + }, + { + "PublicDescription": "Attributable Level 2 data or unified TLB refill, write", + "EventCode": "0x5d", + "EventName": "L2D_TLB_REFILL_WR", + "BriefDescription": "L2D cache refill, write", + }, + { + "PublicDescription": "Attributable Level 2 data or unified TLB access, read", + "EventCode": "0x5e", + "EventName": "L2D_TLB_RD", + "BriefDescription": "L2D cache access, read", + }, + { + "PublicDescription": "Attributable Level 2 data or unified TLB access, write", + "EventCode": "0x5f", + "EventName": "L2D_TLB_WR", + "BriefDescription": "L2D cache access, write", + }, + { + "PublicDescription": "Bus access read", + "EventCode": "0x60", + "EventName": "BUS_ACCESS_RD", + "BriefDescription": "Bus access read", + }, + { + "PublicDescription": "Bus access write", + "EventCode": "0x61", + "EventName": "BUS_ACCESS_WR", + "BriefDescription": "Bus access write", + } + { + "PublicDescription": "Bus access, Normal, Cacheable, Shareable", + "EventCode": "0x62", + "EventName": "BUS_ACCESS_SHARED", + "BriefDescription": "Bus access, Normal, Cacheable, Shareable", + } + { + "PublicDescription": "Bus access, not Normal, Cacheable, Shareable", + "EventCode": "0x63", + "EventName": "BUS_ACCESS_NOT_SHARED", + "BriefDescription": "Bus access, not Normal, Cacheable, Shareable", + } + { + "PublicDescription": "Bus access, Normal", + "EventCode": "0x64", + "EventName": "BUS_ACCESS_NORMAL", + "BriefDescription": "Bus access, Normal", + } + { + "PublicDescription": "Bus access, peripheral", + "EventCode": "0x65", + "EventName": "BUS_ACCESS_PERIPH", + "BriefDescription": "Bus access, peripheral", + } + { + "PublicDescription": "Data memory access, read", + "EventCode": "0x66", + "EventName": "MEM_ACCESS_RD", + "BriefDescription": "Data memory access, read", + } + { + "PublicDescription": "Data memory access, write", + "EventCode": "0x67", + "EventName": "MEM_ACCESS_WR", + "BriefDescription": "Data memory access, write", + } + { + "PublicDescription": "Unaligned access, read", + "EventCode": "0x68", + "EventName": "UNALIGNED_LD_SPEC", + "BriefDescription": "Unaligned access, read", + } + { + "PublicDescription": "Unaligned access, write", + "EventCode": "0x69", + "EventName": "UNALIGNED_ST_SPEC", + "BriefDescription": "Unaligned access, write", + } + { + "PublicDescription": "Unaligned access", + "EventCode": "0x6a", + "EventName": "UNALIGNED_LDST_SPEC", + "BriefDescription": "Unaligned access", + } + { + "PublicDescription": "Exclusive operation speculatively executed, LDREX or LDX", + "EventCode": "0x6c", + "EventName": "LDREX_SPEC", + 
"BriefDescription": "Exclusive operation speculatively executed, LDREX or LDX", + } + { + "PublicDescription": "Exclusive operation speculatively executed, STREX or STX pass", + "EventCode": "0x6d", + "EventName": "STREX_PASS_SPEC", + "BriefDescription": "Exclusive operation speculatively executed, STREX or STX pass", + } + { + "PublicDescription": "Exclusive operation speculatively executed, STREX or STX fail", + "EventCode": "0x6e", + "EventName": "STREX_FAIL_SPEC", + "BriefDescription": "Exclusive operation speculatively executed, STREX or STX fail", + } + { + "PublicDescription": "Exclusive operation speculatively executed, STREX or STX", + "EventCode": "0x6f", + "EventName": "STREX_SPEC", + "BriefDescription": "Exclusive operation speculatively executed, STREX or STX", + } + { + "PublicDescription": "Operation speculatively executed, load", + "EventCode": "0x70", + "EventName": "LD_SPEC", + "BriefDescription": "Operation speculatively executed, load", + } + { + "PublicDescription": "Operation speculatively executed, store", + "EventCode": "0x71", + "EventName": "ST_SPEC", + "BriefDescription": "Operation speculatively executed, store", + } + { + "PublicDescription": "Operation speculatively executed, load or store", + "EventCode": "0x72", + "EventName": "LDST_SPEC", + "BriefDescription": "Operation speculatively executed, load or store", + } + { + "PublicDescription": "Operation speculatively executed, integer data processing", + "EventCode": "0x73", + "EventName": "DP_SPEC", + "BriefDescription": "Operation speculatively executed, integer data processing", + } + { + "PublicDescription": "Operation speculatively executed, Advanced SIMD instruction", + "EventCode": "0x74", + "EventName": "ASE_SPEC", + "BriefDescription": "Operation speculatively executed, Advanced SIMD instruction", + } + { + "PublicDescription": "Operation speculatively executed, floating-point instruction", + "EventCode": "0x75", + "EventName": "VFP_SPEC", + "BriefDescription": "Operation speculatively executed, floating-point instruction", + } + { + "PublicDescription": "Operation speculatively executed, software change of the PC", + "EventCode": "0x76", + "EventName": "PC_WRITE_SPEC", + "BriefDescription": "Operation speculatively executed, software change of the PC", + } + { + "PublicDescription": "Operation speculatively executed, Cryptographic instruction", + "EventCode": "0x77", + "EventName": "CRYPTO_SPEC", + "BriefDescription": "Operation speculatively executed, Cryptographic instruction", + } + { + "PublicDescription": "Branch speculatively executed, immediate branch" + "EventCode": "0x78", + "EventName": "BR_IMMED_SPEC", + "BriefDescription": "Branch speculatively executed, immediate branch" + } + { + "PublicDescription": "Branch speculatively executed, procedure return" + "EventCode": "0x79", + "EventName": "BR_RETURN_SPEC", + "BriefDescription": "Branch speculatively executed, procedure return" + } + { + "PublicDescription": "Branch speculatively executed, indirect branch" + "EventCode": "0x7a", + "EventName": "BR_INDIRECT_SPEC", + "BriefDescription": "Branch speculatively executed, indirect branch" + } + { + "PublicDescription": "Barrier speculatively executed, ISB" + "EventCode": "0x7c", + "EventName": "ISB_SPEC", + "BriefDescription": "Barrier speculatively executed, ISB" + } + { + "PublicDescription": "Barrier speculatively executed, DSB" + "EventCode": "0x7d", + "EventName": "DSB_SPEC", + "BriefDescription": "Barrier speculatively executed, DSB" + } + { + "PublicDescription": "Barrier speculatively 
executed, DMB" + "EventCode": "0x7e", + "EventName": "DMB_SPEC", + "BriefDescription": "Barrier speculatively executed, DMB" + } + { + "PublicDescription": "Exception taken, Other synchronous" + "EventCode": "0x81", + "EventName": "EXC_UNDEF", + "BriefDescription": "Exception taken, Other synchronous" + } + { + "PublicDescription": "Exception taken, Supervisor Call" + "EventCode": "0x82", + "EventName": "EXC_SVC", + "BriefDescription": "Exception taken, Supervisor Call" + } + { + "PublicDescription": "Exception taken, Instruction Abort" + "EventCode": "0x83", + "EventName": "EXC_PABORT", + "BriefDescription": "Exception taken, Instruction Abort" + } + { + "PublicDescription": "Exception taken, Data Abort and SError" + "EventCode": "0x84", + "EventName": "EXC_DABORT", + "BriefDescription": "Exception taken, Data Abort and SError" + } + { + "PublicDescription": "Exception taken, IRQ" + "EventCode": "0x86", + "EventName": "EXC_IRQ", + "BriefDescription": "Exception taken, IRQ" + } + { + "PublicDescription": "Exception taken, FIQ" + "EventCode": "0x87", + "EventName": "EXC_FIQ", + "BriefDescription": "Exception taken, FIQ" + } + { + "PublicDescription": "Exception taken, Secure Monitor Call" + "EventCode": "0x88", + "EventName": "EXC_SMC", + "BriefDescription": "Exception taken, Secure Monitor Call" + } + { + "PublicDescription": "Exception taken, Hypervisor Call" + "EventCode": "0x8a", + "EventName": "EXC_HVC", + "BriefDescription": "Exception taken, Hypervisor Call" + } + { + "PublicDescription": "Exception taken, Instruction Abort not taken locally" + "EventCode": "0x8b", + "EventName": "EXC_TRAP_PABORT", + "BriefDescription": "Exception taken, Instruction Abort not taken locally" + } + { + "PublicDescription": "Exception taken, Data Abort or SError not taken locally" + "EventCode": "0x8c", + "EventName": "EXC_TRAP_DABORT", + "BriefDescription": "Exception taken, Data Abort or SError not taken locally" + } + { + "PublicDescription": "Exception taken, Other traps not taken locally" + "EventCode": "0x8d", + "EventName": "EXC_TRAP_OTHER", + "BriefDescription": "Exception taken, Other traps not taken locally" + } + { + "PublicDescription": "Exception taken, IRQ not taken locally" + "EventCode": "0x8e", + "EventName": "EXC_TRAP_IRQ", + "BriefDescription": "Exception taken, IRQ not taken locally" + } + { + "PublicDescription": "Exception taken, FIQ not taken locally" + "EventCode": "0x8f", + "EventName": "EXC_TRAP_FIQ", + "BriefDescription": "Exception taken, FIQ not taken locally" + } + { + "PublicDescription": "Release consistency operation speculatively executed, Load-Acquire" + "EventCode": "0x90", + "EventName": "RC_LD_SPEC", + "BriefDescription": "Release consistency operation speculatively executed, Load-Acquire" + } + { + "PublicDescription": "Release consistency operation speculatively executed, Store-Release" + "EventCode": "0x91", + "EventName": "RC_ST_SPEC", + "BriefDescription": "Release consistency operation speculatively executed, Store-Release" + } + { + "PublicDescription": "Attributable Level 3 data or unified cache access, read" + "EventCode": "0xa0", + "EventName": "L3D_CACHE_RD", + "BriefDescription": "Attributable Level 3 data or unified cache access, read" + } + { + "PublicDescription": "Attributable Level 3 data or unified cache access, write" + "EventCode": "0xa1", + "EventName": "L3D_CACHE_WR", + "BriefDescription": "Attributable Level 3 data or unified cache access, write" + } + { + "PublicDescription": "Attributable Level 3 data or unified cache refill, read" + 
"EventCode": "0xa2", + "EventName": "L3D_CACHE_REFILL_RD", + "BriefDescription": "Attributable Level 3 data or unified cache refill, read" + } + { + "PublicDescription": "Attributable Level 3 data or unified cache refill, write" + "EventCode": "0xa3", + "EventName": "L3D_CACHE_REFILL_WR", + "BriefDescription": "Attributable Level 3 data or unified cache refill, write" + } + { + "PublicDescription": "Attributable Level 3 data or unified cache Write-Back, victim" + "EventCode": "0xa6", + "EventName": "L3D_CACHE_WB_VICTIM", + "BriefDescription": "Attributable Level 3 data or unified cache Write-Back, victim" + } + { + "PublicDescription": "Attributable Level 3 data or unified cache Write-Back, cache clean" + "EventCode": "0xa7", + "EventName": "L3D_CACHE_WB_CLEAN", + "BriefDescription": "Attributable Level 3 data or unified cache Write-Back, cache clean" + } + { + "PublicDescription": "Attributable Level 3 data or unified cache access, invalidate" + "EventCode": "0xa8", + "EventName": "L3D_CACHE_INVAL", + "BriefDescription": "Attributable Level 3 data or unified cache access, invalidate" + } +] From patchwork Tue Dec 5 16:13:18 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: John Garry X-Patchwork-Id: 120691 Delivered-To: patch@linaro.org Received: by 10.140.22.227 with SMTP id 90csp5889299qgn; Tue, 5 Dec 2017 07:31:49 -0800 (PST) X-Google-Smtp-Source: AGs4zMa9XGzjNYRSe8j4VytYwcbUzFuAVY14gshDSUNYZwAmiSyajenb7Pgmvn3EkHuo+OO6oP1g X-Received: by 10.98.65.197 with SMTP id g66mr23545439pfd.60.1512487909335; Tue, 05 Dec 2017 07:31:49 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1512487909; cv=none; d=google.com; s=arc-20160816; b=c3UUySPN292JKrdU08pOurzhLHqZ6JnSXM+rTfU2yymq6R0PM99UFgdXFbCI0+Q90m 7X7F99YDOSi6T3QixEGHpKyLI0KMYWOYv1p1dqrvEKmk4+n0rndOL6F2pQrzkdlUU2LU uReLwbNsi5/o113H3PPP1PRhtob9eIKTPrr9l+qxlBNPs/5UsO3kQqzVfDJajiL9Zvza eFSUwbCYOjChEpWTzACn/foiA+AnRNXCp54nxv2otW1Sken3uDm9rYLgvI1kAMVfySe8 L8ZEgPqJV5nmYUyFXgB4DQ4b2ofbyZ8xcbg+RbZK9XIqfNX0T0jTNOZdbPiX9OtlgGox /33A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:arc-authentication-results; bh=Ql12s1oOCLOJ9rugFjjehLeEa4JzKWoVG1XyXKjswwE=; b=pTwWZair1F1Qq1Zg+cOGBm/shqUy5hnEX1nd0ZJWin8Aaa1xKo+lhuPBJMIHhf6QjG WE/Ra2eiBqcLbYJuSnOiIm8sVpXnkRiyxnDSV/bbuUIBxRmO0ncC0VOFgXXMvhPvgTN5 GbiT4gGrtLLqT87j/euJRhhbdDZmPwLW0O2sVCejD2j5bC1GHq5E/ep/vN5Q4PZQlWY1 AoYknHE+U4QCJ0YhV7Y8BleBj59UhFvk1Z6GsepDXdUlEXJ9Rv9dMn6JhgwsmfqfqxEY Ky2vpW41RelkfOfFu4UnRZWqnUPIlvH2Bv4J41ERT33n/I+CdsjsaCVj4/yS6EuCM7du yjRw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
From patchwork Tue Dec 5 16:13:18 2017
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 120691
From: John Garry
Subject: [RFC PATCH 4/5] perf vendor events arm64: relocate thunderx2 JSON
Date: Wed, 6 Dec 2017 00:13:18 +0800
Message-ID: <1512490399-94107-5-git-send-email-john.garry@huawei.com>
In-Reply-To: <1512490399-94107-1-git-send-email-john.garry@huawei.com>
References: <1512490399-94107-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Since the pmu events directory structure now supports an arch/vendor/platform hierarchy, relocate the ThunderX2 JSON accordingly. Also, since Cavium ThunderX2 implements its events according to the ARM recommendation, remove all fields apart from "EventCode".
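For orientation, the resulting arm64 layout for this platform looks like the sketch below (paths as used in this series):

    tools/perf/pmu-events/arch/arm64/
        armv8-recommended.json          (level 1: arch-defined recommended events)
        mapfile.csv
        cavium/
            thunderx2/
                core-imp-def.json       (level 2: vendor/platform events)

The mapfile row "0x00000000420f5160,v1,cavium/thunderx2,core" now names the vendor/platform directory, so jevents emits the pme_cavium_thunderx2 table shown in patch 1 for that cpuid.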
Signed-off-by: John Garry --- .../arch/arm64/cavium/thunderx2-imp-def.json | 62 ---------------------- .../arch/arm64/cavium/thunderx2/core-imp-def.json | 32 +++++++++++ tools/perf/pmu-events/arch/arm64/mapfile.csv | 2 +- 3 files changed, 33 insertions(+), 63 deletions(-) delete mode 100644 tools/perf/pmu-events/arch/arm64/cavium/thunderx2-imp-def.json create mode 100644 tools/perf/pmu-events/arch/arm64/cavium/thunderx2/core-imp-def.json -- 1.9.1 diff --git a/tools/perf/pmu-events/arch/arm64/cavium/thunderx2-imp-def.json b/tools/perf/pmu-events/arch/arm64/cavium/thunderx2-imp-def.json deleted file mode 100644 index 2db45c4..0000000 --- a/tools/perf/pmu-events/arch/arm64/cavium/thunderx2-imp-def.json +++ /dev/null @@ -1,62 +0,0 @@ -[ - { - "PublicDescription": "Attributable Level 1 data cache access, read", - "EventCode": "0x40", - "EventName": "l1d_cache_rd", - "BriefDescription": "L1D cache read", - }, - { - "PublicDescription": "Attributable Level 1 data cache access, write ", - "EventCode": "0x41", - "EventName": "l1d_cache_wr", - "BriefDescription": "L1D cache write", - }, - { - "PublicDescription": "Attributable Level 1 data cache refill, read", - "EventCode": "0x42", - "EventName": "l1d_cache_refill_rd", - "BriefDescription": "L1D cache refill read", - }, - { - "PublicDescription": "Attributable Level 1 data cache refill, write", - "EventCode": "0x43", - "EventName": "l1d_cache_refill_wr", - "BriefDescription": "L1D refill write", - }, - { - "PublicDescription": "Attributable Level 1 data TLB refill, read", - "EventCode": "0x4C", - "EventName": "l1d_tlb_refill_rd", - "BriefDescription": "L1D tlb refill read", - }, - { - "PublicDescription": "Attributable Level 1 data TLB refill, write", - "EventCode": "0x4D", - "EventName": "l1d_tlb_refill_wr", - "BriefDescription": "L1D tlb refill write", - }, - { - "PublicDescription": "Attributable Level 1 data or unified TLB access, read", - "EventCode": "0x4E", - "EventName": "l1d_tlb_rd", - "BriefDescription": "L1D tlb read", - }, - { - "PublicDescription": "Attributable Level 1 data or unified TLB access, write", - "EventCode": "0x4F", - "EventName": "l1d_tlb_wr", - "BriefDescription": "L1D tlb write", - }, - { - "PublicDescription": "Bus access read", - "EventCode": "0x60", - "EventName": "bus_access_rd", - "BriefDescription": "Bus access read", - }, - { - "PublicDescription": "Bus access write", - "EventCode": "0x61", - "EventName": "bus_access_wr", - "BriefDescription": "Bus access write", - } -] diff --git a/tools/perf/pmu-events/arch/arm64/cavium/thunderx2/core-imp-def.json b/tools/perf/pmu-events/arch/arm64/cavium/thunderx2/core-imp-def.json new file mode 100644 index 0000000..99313eb --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/cavium/thunderx2/core-imp-def.json @@ -0,0 +1,32 @@ +[ + { + "EventCode": "0x40", + }, + { + "EventCode": "0x41", + }, + { + "EventCode": "0x42", + }, + { + "EventCode": "0x43", + }, + { + "EventCode": "0x4C", + }, + { + "EventCode": "0x4D", + }, + { + "EventCode": "0x4E", + }, + { + "EventCode": "0x4F", + }, + { + "EventCode": "0x60", + }, + { + "EventCode": "0x61", + } +] diff --git a/tools/perf/pmu-events/arch/arm64/mapfile.csv b/tools/perf/pmu-events/arch/arm64/mapfile.csv index 219d675..32fa0d1 100644 --- a/tools/perf/pmu-events/arch/arm64/mapfile.csv +++ b/tools/perf/pmu-events/arch/arm64/mapfile.csv @@ -12,4 +12,4 @@ # # #Family-model,Version,Filename,EventType -0x00000000420f5160,v1,cavium,core +0x00000000420f5160,v1,cavium/thunderx2,core From patchwork Tue Dec 5 16:13:19 2017 Content-Type: 
text/plain; charset="utf-8"
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 120693
From: John Garry
Subject: [RFC PATCH 5/5] perf vendor events arm64: add HiSilicon hip08 JSON
Date: Wed, 6 Dec 2017 00:13:19 +0800
Message-ID: <1512490399-94107-6-git-send-email-john.garry@huawei.com>
In-Reply-To: <1512490399-94107-1-git-send-email-john.garry@huawei.com>
References: <1512490399-94107-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Add the HiSilicon hip08 JSON. Since hip08 implements its events according to the ARM recommendation, only the "EventCode" field is added where applicable - hip08 also implements some other custom events, which are described in full.
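Per entry, the rule from patch 2 then applies to the hip08 JSON below; a simplified restatement follows (not the literal jevents code):

#include <stdio.h>

/* one hip08 JSON entry, reduced to the two fields that matter here */
static void classify(const char *event_code, const char *brief_desc)
{
	if (!brief_desc)
		/* e.g. "0x40".."0x58": recommended event; name and
		 * descriptions are inherited from armv8-recommended.json
		 * via the try_fixup() path added in patch 2 */
		printf("%s: fixed up from recommended events\n", event_code);
	else
		/* e.g. "0x102e" (L1I_CACHE_PRF): hip08-specific event;
		 * the entry is used as-is, no fixup */
		printf("%s: vendor-specific, used as-is\n", event_code);
}

int main(void)
{
	classify("0x40", NULL);
	classify("0x102e", "L1I cache prefetch access count");
	return 0;
}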
Signed-off-by: John Garry Signed-off-by: Shaokun Zhang --- .../arch/arm64/hisilicon/hip08/core-imp-def.json | 122 +++++++++++++++++++++ tools/perf/pmu-events/arch/arm64/mapfile.csv | 1 + 2 files changed, 123 insertions(+) create mode 100644 tools/perf/pmu-events/arch/arm64/hisilicon/hip08/core-imp-def.json -- 1.9.1 diff --git a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/core-imp-def.json b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/core-imp-def.json new file mode 100644 index 0000000..94fde40 --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/core-imp-def.json @@ -0,0 +1,122 @@ +[ + { + "EventCode": "0x40", + }, + { + "EventCode": "0x41", + }, + { + "EventCode": "0x42", + }, + { + "EventCode": "0x43", + }, + { + "EventCode": "0x46", + }, + { + "EventCode": "0x47", + }, + { + "EventCode": "0x48", + }, + { + "EventCode": "0x4C", + }, + { + "EventCode": "0x4D", + }, + { + "EventCode": "0x4E", + }, + { + "EventCode": "0x4F", + }, + { + "EventCode": "0x50", + }, + { + "EventCode": "0x51", + }, + { + "EventCode": "0x52", + }, + { + "EventCode": "0x53", + }, + { + "EventCode": "0x56", + }, + { + "EventCode": "0x57", + }, + { + "EventCode": "0x58", + }, + { + "PublicDescription": "Level 1 instruction cache prefetch access count", + "EventCode": "0x102e", + "EventName": "L1I_CACHE_PRF", + "BriefDescription": "L1I cache prefetch access count", + }, + { + "PublicDescription": "Level 1 instruction cache miss due to prefetch access count", + "EventCode": "0x102f", + "EventName": "L1I_CACHE_PRF_REFILL", + "BriefDescription": "L1I cache miss due to prefetch access count", + }, + { + "PublicDescription": "Instruction queue is empty", + "EventCode": "0x1043", + "EventName": "IQ_IS_EMPTY", + "BriefDescription": "Instruction queue is empty", + }, + { + "PublicDescription": "Instruction fetch stall cycles", + "EventCode": "0x1044", + "EventName": "IF_IS_STALL", + "BriefDescription": "Instruction fetch stall cycles", + }, + { + "PublicDescription": "Instructions can receive, but not send", + "EventCode": "0x2014", + "EventName": "FETCH_BUBBLE", + "BriefDescription": "Instructions can receive, but not send", + }, + { + "PublicDescription": "Prefetch request from LSU", + "EventCode": "0x6013", + "EventName": "PRF_REQ", + "BriefDescription": "Prefetch request from LSU", + }, + { + "PublicDescription": "Hit on prefetched data", + "EventCode": "0x6014", + "EventName": "HIT_ON_PRF", + "BriefDescription": "Hit on prefetched data", + }, + { + "PublicDescription": "Cycles of that the number of issuing micro operations are less than 4", + "EventCode": "0x7001", + "EventName": "EXE_STALL_CYCLE", + "BriefDescription": "Cycles of that the number of issue ups are less than 4", + }, + { + "PublicDescription": "No any micro operation is issued and meanwhile any load operation is not resolved", + "EventCode": "0x7004", + "EventName": "MEM_STALL_ANYLOAD", + "BriefDescription": "No any micro operation is issued and meanwhile any load operation is not resolved", + }, + { + "PublicDescription": "No any micro operation is issued and meanwhile there is any load operation missing L1 cache and pending data refill", + "EventCode": "0x7006", + "EventName": "MEM_STALL_L1MISS", + "BriefDescription": "No any micro operation is issued and meanwhile there is any load operation missing L1 cache and pending data refill", + }, + { + "PublicDescription": "No any micro operation is issued and meanwhile there is any load operation missing both L1 and L2 cache and pending data refill from L3 cache", + "EventCode": 
"0x7007", + "EventName": "MEM_STALL_L2MISS", + "BriefDescription": "No any micro operation is issued and meanwhile there is any load operation missing both L1 and L2 cache and pending data refill from L3 cache", + }, +] diff --git a/tools/perf/pmu-events/arch/arm64/mapfile.csv b/tools/perf/pmu-events/arch/arm64/mapfile.csv index 32fa0d1..9cc42da 100644 --- a/tools/perf/pmu-events/arch/arm64/mapfile.csv +++ b/tools/perf/pmu-events/arch/arm64/mapfile.csv @@ -13,3 +13,4 @@ # #Family-model,Version,Filename,EventType 0x00000000420f5160,v1,cavium/thunderx2,core +0x00000000480fd010,v1,hisilicon/hip08,core