From patchwork Fri May 13 07:56:03 2016
X-Patchwork-Submitter: Wang Nan
X-Patchwork-Id: 67741
From: Wang Nan
To: linux-kernel@vger.kernel.org
Cc: Wang Nan, He Kuang, Arnaldo Carvalho de Melo, Jiri Olsa,
 Masami Hiramatsu, Namhyung Kim, Zefan Li
Subject: [PATCH 06/17] perf tools: Squash overwrite setting into channel
Date: Fri, 13 May 2016 07:56:03 +0000
Message-ID:
 <1463126174-119290-7-git-send-email-wangnan0@huawei.com>
X-Mailer: git-send-email 1.8.3.4
In-Reply-To: <1463126174-119290-1-git-send-email-wangnan0@huawei.com>
References: <1463126174-119290-1-git-send-email-wangnan0@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Make 'overwrite' a per-channel configuration rather than an evlist-wide
option. With this setting, an evlist can have two channels: a normal
channel and an overwritable channel. perf_evlist__channel_for_evsel()
ensures that events with the 'overwrite' configuration are inserted
into the overwritable channel.

Signed-off-by: Wang Nan
Signed-off-by: He Kuang
Cc: Arnaldo Carvalho de Melo
Cc: Jiri Olsa
Cc: Masami Hiramatsu
Cc: Namhyung Kim
Cc: Zefan Li
Cc: pi3orama@163.com
---
 tools/perf/builtin-record.c |  2 +-
 tools/perf/util/evlist.c    | 43 ++++++++++++++++++++++++++++---------------
 tools/perf/util/evlist.h    |  7 +++----
 tools/perf/util/evsel.h     |  1 +
 4 files changed, 33 insertions(+), 20 deletions(-)

-- 
1.8.3.4

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 81c700d..5e87602 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -325,7 +325,7 @@ try_again:
 	}
 
 	perf_evlist__channel_reset(evlist);
-	if (perf_evlist__mmap_ex(evlist, opts->mmap_pages, false,
+	if (perf_evlist__mmap_ex(evlist, opts->mmap_pages,
 				 opts->auxtrace_mmap_pages,
 				 opts->auxtrace_snapshot_mode) < 0) {
 		if (errno == EPERM) {
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index eefa33b..abce588 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -789,6 +789,7 @@ union perf_event *perf_evlist__mmap_read_ex(struct perf_evlist *evlist,
 	struct perf_mmap *md = &evlist->mmap[idx];
 	u64 head, old;
 	int err = perf_evlist__channel_idx(evlist, &channel, &idx);
+	bool rdonly;
 
 	if (err || !perf_evlist__channel_is_enabled(evlist, channel)) {
 		pr_err("ERROR: invalid mmap index: channel %d, idx: %d\n",
@@ -805,8 +806,8 @@ union perf_event *perf_evlist__mmap_read_ex(struct perf_evlist *evlist,
 
 	head = perf_mmap__read_head(md);
 
-	return __perf_evlist__mmap_read(md, evlist->overwrite, head,
-					old, &md->prev);
+	rdonly = perf_evlist__channel_check(evlist, channel, RDONLY);
+	return __perf_evlist__mmap_read(md, rdonly, old, head, &md->prev);
 }
 
 union perf_event *
@@ -894,7 +895,7 @@ void perf_evlist__mmap_consume_ex(struct perf_evlist *evlist,
 		return;
 	}
 
-	if (!evlist->overwrite) {
+	if (!perf_evlist__channel_check(evlist, channel, RDONLY)) {
 		u64 old = md->prev;
 
 		perf_mmap__write_tail(md, old);
@@ -987,7 +988,6 @@ static int perf_evlist__alloc_mmap(struct perf_evlist *evlist)
 }
 
 struct mmap_params {
-	int prot;
 	int mask;
 	struct auxtrace_mmap_params auxtrace_mp;
 };
@@ -995,6 +995,15 @@ struct mmap_params {
 static int __perf_evlist__mmap(struct perf_evlist *evlist, int idx,
 			       struct mmap_params *mp, int fd)
 {
+	int channel = perf_evlist__idx_channel(evlist, idx);
+	int prot = PROT_READ;
+
+	if (channel < 0)
+		return -1;
+
+	if (!perf_evlist__channel_check(evlist, channel, RDONLY))
+		prot |= PROT_WRITE;
+
 	/*
 	 * The last one will be done at perf_evlist__mmap_consume(), so that we
 	 * make sure we don't prevent tools from consuming every last event in
@@ -1011,7 +1020,7 @@ static int __perf_evlist__mmap(struct perf_evlist *evlist, int idx,
 	atomic_set(&evlist->mmap[idx].refcnt, 2);
 	evlist->mmap[idx].prev = 0;
 	evlist->mmap[idx].mask = mp->mask;
-	evlist->mmap[idx].base = mmap(NULL, evlist->mmap_len, mp->prot,
+	evlist->mmap[idx].base = mmap(NULL, evlist->mmap_len, prot,
 				      MAP_SHARED, fd, 0);
 	if (evlist->mmap[idx].base == MAP_FAILED) {
 		pr_debug2("failed to mmap perf event ring buffer, error %d\n",
@@ -1030,7 +1039,11 @@ static int __perf_evlist__mmap(struct perf_evlist *evlist, int idx,
 static unsigned long
 perf_evlist__channel_for_evsel(struct perf_evsel *evsel __maybe_unused)
 {
-	return 0;
+	unsigned long flag = 0;
+
+	if (evsel->overwrite)
+		flag |= PERF_EVLIST__CHANNEL_RDONLY;
+	return flag;
 }
 
 static int
@@ -1286,11 +1299,10 @@ int perf_evlist__parse_mmap_pages(const struct option *opt, const char *str,
  * perf_evlist__mmap_ex - Create mmaps to receive events.
  * @evlist: list of events
  * @pages: map length in pages
- * @overwrite: overwrite older events?
  * @auxtrace_pages - auxtrace map length in pages
  * @auxtrace_overwrite - overwrite older auxtrace data?
  *
- * If @overwrite is %false the user needs to signal event consumption using
+ * For writable channel, the user needs to signal event consumption using
  * perf_mmap__write_tail(). Using perf_evlist__mmap_read() does this
  * automatically.
  *
@@ -1300,16 +1312,13 @@ int perf_evlist__parse_mmap_pages(const struct option *opt, const char *str,
  * Return: %0 on success, negative error code otherwise.
  */
 int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
-			 bool overwrite, unsigned int auxtrace_pages,
-			 bool auxtrace_overwrite)
+			 unsigned int auxtrace_pages, bool auxtrace_overwrite)
 {
 	int err;
 	struct perf_evsel *evsel;
 	const struct cpu_map *cpus = evlist->cpus;
 	const struct thread_map *threads = evlist->threads;
-	struct mmap_params mp = {
-		.prot = PROT_READ | (overwrite ? 0 : PROT_WRITE),
-	};
+	struct mmap_params mp;
 
 	err = perf_evlist__channel_complete(evlist);
 	if (err)
@@ -1321,7 +1330,6 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 	if (evlist->pollfd.entries == NULL && perf_evlist__alloc_pollfd(evlist) < 0)
 		return -ENOMEM;
 
-	evlist->overwrite = overwrite;
 	evlist->mmap_len = perf_evlist__mmap_size(pages);
 	pr_debug("mmap size %zuB\n", evlist->mmap_len);
 	mp.mask = evlist->mmap_len - page_size - 1;
@@ -1345,8 +1353,13 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 int perf_evlist__mmap(struct perf_evlist *evlist, unsigned int pages,
 		      bool overwrite)
 {
+	struct perf_evsel *evsel;
+
 	perf_evlist__channel_reset(evlist);
-	return perf_evlist__mmap_ex(evlist, pages, overwrite, 0, false);
+	evlist__for_each(evlist, evsel)
+		evsel->overwrite = overwrite;
+
+	return perf_evlist__mmap_ex(evlist, pages, 0, false);
 }
 
 int perf_evlist__create_maps(struct perf_evlist *evlist, struct target *target)
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index 188f0c7..c53bdbd 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -20,9 +20,10 @@ struct record_opts;
 #define PERF_EVLIST__HLIST_BITS 8
 #define PERF_EVLIST__HLIST_SIZE (1 << PERF_EVLIST__HLIST_BITS)
 
-#define PERF_EVLIST__NR_CHANNELS 2
+#define PERF_EVLIST__NR_CHANNELS 3
 enum perf_evlist_mmap_flag {
 	PERF_EVLIST__CHANNEL_ENABLED = 1,
+	PERF_EVLIST__CHANNEL_RDONLY  = 2,
 };
@@ -45,7 +46,6 @@ struct perf_evlist {
 	int		nr_entries;
 	int		nr_groups;
 	int		nr_mmaps;
-	bool		overwrite;
 	bool		enabled;
 	bool		has_user_cpus;
 	size_t		mmap_len;
@@ -223,8 +223,7 @@ int perf_evlist__parse_mmap_pages(const struct option *opt,
 unsigned long perf_event_mlock_kb_in_pages(void);
 
 int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
-			 bool overwrite, unsigned int auxtrace_pages,
-			 bool auxtrace_overwrite);
+			 unsigned int auxtrace_pages, bool auxtrace_overwrite);
 int perf_evlist__mmap(struct perf_evlist *evlist, unsigned int pages,
 		      bool overwrite);
 void perf_evlist__munmap(struct perf_evlist *evlist);
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index 8a644fe..c1f1015 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -112,6 +112,7 @@ struct perf_evsel {
 	bool			tracking;
 	bool			per_pkg;
 	bool			precise_max;
+	bool			overwrite;
 	/* parse modifier helper */
 	int			exclude_GH;
 	int			nr_members;