From patchwork Mon May 9 01:47:50 2016
X-Patchwork-Submitter: Wang Nan
X-Patchwork-Id: 67332
From: Wang Nan <wangnan0@huawei.com>
Subject: [PATCH v3 1/2] perf tools: Support reading from backward ring buffer
Date: Mon, 9 May 2016 01:47:50 +0000
Message-ID: <1462758471-89706-2-git-send-email-wangnan0@huawei.com>
In-Reply-To: <1462758471-89706-1-git-send-email-wangnan0@huawei.com>
References: <1462758471-89706-1-git-send-email-wangnan0@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

perf_evlist__mmap_read_backward() is introduced for reading from a
backward ring buffer. Since the direction in which such a ring buffer
is read differs from the direction in which the kernel writes to it,
and since the user needs to fetch the most recent records from it,
perf_evlist__mmap_read_catchup() is also introduced to move the read
pointer to the end of the buffer.
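
As an illustration only (not part of this patch), here is a minimal
sketch of how a consumer might use the pair, assuming the evlist has
already been mmapped with a backward ring buffer; process_event() is a
hypothetical consumer, not an existing helper:

	union perf_event *event;

	/*
	 * Move the read pointer to the kernel's write head, then walk
	 * records starting from the most recent one.
	 */
	perf_evlist__mmap_read_catchup(evlist, idx);
	while ((event = perf_evlist__mmap_read_backward(evlist, idx)) != NULL)
		process_event(event);	/* hypothetical consumer */
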
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Arnaldo Carvalho de Melo
Cc: Peter Zijlstra
Cc: Zefan Li
Cc: pi3orama@163.com
---
 tools/perf/util/evlist.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++
 tools/perf/util/evlist.h |  4 ++++
 2 files changed, 54 insertions(+)

-- 
1.8.3.4

diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 17cd014..c4bfe11 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -766,6 +766,56 @@ union perf_event *perf_evlist__mmap_read(struct perf_evlist *evlist, int idx)
 	return perf_mmap__read(md, evlist->overwrite, old, head, &md->prev);
 }
 
+union perf_event *
+perf_evlist__mmap_read_backward(struct perf_evlist *evlist, int idx)
+{
+	struct perf_mmap *md = &evlist->mmap[idx];
+	u64 head, end;
+	u64 start = md->prev;
+
+	/*
+	 * Check if event was unmapped due to a POLLHUP/POLLERR.
+	 */
+	if (!atomic_read(&md->refcnt))
+		return NULL;
+
+	head = perf_mmap__read_head(md);
+	if (!head)
+		return NULL;
+
+	/*
+	 * The 'head' pointer starts from 0. The kernel subtracts
+	 * sizeof(record) from it each time it writes a record, so in
+	 * fact 'head' is negative. The 'end' pointer is computed by
+	 * adding the ring buffer size to 'head', so the valid data to
+	 * read spans the whole ring buffer. If 'end' is positive, the
+	 * ring buffer has not been fully filled, so adjust 'end' to 0.
+	 *
+	 * However, since both 'head' and 'end' are unsigned, we can't
+	 * simply compare 'end' against 0. Instead compare '-head' with
+	 * the ring buffer size, where '-head' is the number of bytes
+	 * the kernel has written to the ring buffer.
+	 */
+	if (-head < (u64)(md->mask + 1))
+		end = 0;
+	else
+		end = head + md->mask + 1;
+
+	return perf_mmap__read(md, false, start, end, &md->prev);
+}
+
+void perf_evlist__mmap_read_catchup(struct perf_evlist *evlist, int idx)
+{
+	struct perf_mmap *md = &evlist->mmap[idx];
+	u64 head;
+
+	if (!atomic_read(&md->refcnt))
+		return;
+
+	head = perf_mmap__read_head(md);
+	md->prev = head;
+}
+
 static bool perf_mmap__empty(struct perf_mmap *md)
 {
 	return perf_mmap__read_head(md) == md->prev && !md->auxtrace_mmap.base;
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index 208897a..85d1b59 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -129,6 +129,10 @@ struct perf_sample_id *perf_evlist__id2sid(struct perf_evlist *evlist, u64 id);
 
 union perf_event *perf_evlist__mmap_read(struct perf_evlist *evlist, int idx);
 
+union perf_event *perf_evlist__mmap_read_backward(struct perf_evlist *evlist,
+						  int idx);
+void perf_evlist__mmap_read_catchup(struct perf_evlist *evlist, int idx);
+
 void perf_evlist__mmap_consume(struct perf_evlist *evlist, int idx);
 
 int perf_evlist__open(struct perf_evlist *evlist);
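
As a side note on the '-head' comparison above, the following
standalone sketch walks through the unsigned arithmetic (illustration
only, not perf code; the buffer size and byte count are made-up
numbers):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t size = 4096;           /* ring buffer size (md->mask + 1) */
		uint64_t head = (uint64_t)-128; /* kernel wrote 128 bytes so far */
		uint64_t end;

		/*
		 * 'head + size > 0' can't be tested on unsigned types, so
		 * test the bytes written (-head) against the buffer size.
		 */
		if (-head < size)
			end = 0;           /* not yet wrapped: valid data is [head, 0) */
		else
			end = head + size; /* wrapped: valid data is one whole lap */

		/* Prints "bytes written: 128, end: 0" */
		printf("bytes written: %llu, end: %llu\n",
		       (unsigned long long)-head, (unsigned long long)end);
		return 0;
	}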