From patchwork Fri Oct 16 07:42:12 2015
X-Patchwork-Submitter: Kaixu Xia <xiakaixu@huawei.com>
X-Patchwork-Id: 55083
From: Kaixu Xia <xiakaixu@huawei.com>
Subject: [PATCH V3 1/2] bpf: control the trace data output on current cpu when perf sampling
Date: Fri, 16 Oct 2015 07:42:12 +0000
Message-ID: <1444981333-70429-2-git-send-email-xiakaixu@huawei.com>
In-Reply-To: <1444981333-70429-1-git-send-email-xiakaixu@huawei.com>
References: <1444981333-70429-1-git-send-email-xiakaixu@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

This patch adds the flag dump_enable to control the trace data output process during perf sampling. By setting this flag and integrating it with eBPF, we can control the data output process and capture only the samples we are most interested in. The bpf helper bpf_perf_event_dump_control() controls the perf_event on the current cpu.
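For illustration only, here is a minimal sketch (not part of this patch) of how a kprobe BPF program might use the new helper to mute and unmute sampling on the current cpu. It is written in the style of samples/bpf; the wrapper declaration for bpf_perf_event_dump_control, the map name and the probed function are assumptions, not code from this series:

	/*
	 * Hypothetical sketch, not from this series. It assumes a
	 * samples/bpf-style wrapper for the new helper is declared,
	 * e.g. in bpf_helpers.h:
	 *
	 *   static int (*bpf_perf_event_dump_control)(void *map, int index,
	 *           unsigned long flags) =
	 *           (void *) BPF_FUNC_perf_event_dump_control;
	 */
	#include <uapi/linux/bpf.h>
	#include <uapi/linux/ptrace.h>
	#include "bpf_helpers.h"

	struct bpf_map_def SEC("maps") ctl_map = {
		.type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
		.key_size = sizeof(int),
		.value_size = sizeof(u32),
		.max_entries = 32,
	};

	SEC("kprobe/sys_write")
	int bpf_prog(struct pt_regs *ctx)
	{
		int index = bpf_get_smp_processor_id();

		/*
		 * Flag bit 0 (BIT_DUMP_CTL) set: decrement dump_enable,
		 * i.e. stop trace output for the event on this cpu.
		 * Bit 0 clear: increment dump_enable, i.e. resume output.
		 * All higher flag bits must be zero, otherwise the helper
		 * returns -EINVAL.
		 */
		bpf_perf_event_dump_control(&ctl_map, index, 1);
		return 0;
	}

Note that the helper only accepts a BPF_MAP_TYPE_PERF_EVENT_ARRAY map; the verifier enforces this through the func_limit[] table in the patch below.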
Signed-off-by: Kaixu Xia <xiakaixu@huawei.com>
---
 include/linux/perf_event.h      |  1 +
 include/uapi/linux/bpf.h        |  5 +++++
 include/uapi/linux/perf_event.h |  3 ++-
 kernel/bpf/verifier.c           |  3 ++-
 kernel/events/core.c            | 13 ++++++++++++
 kernel/trace/bpf_trace.c        | 44 +++++++++++++++++++++++++++++++++++++++++
 6 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 092a0e8..2af527e 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -472,6 +472,7 @@ struct perf_event {
 	struct irq_work			pending;
 
 	atomic_t			event_limit;
+	atomic_t			dump_enable;
 
 	void (*destroy)(struct perf_event *);
 	struct rcu_head			rcu_head;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 564f1f0..ba08034 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -287,6 +287,11 @@ enum bpf_func_id {
 	 * Return: realm if != 0
 	 */
 	BPF_FUNC_get_route_realm,
+
+	/**
+	 * u64 bpf_perf_event_dump_control(&map, index, flag)
+	 */
+	BPF_FUNC_perf_event_dump_control,
 	__BPF_FUNC_MAX_ID,
 };
 
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 2881145..f4b8f08 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -331,7 +331,8 @@ struct perf_event_attr {
 				comm_exec      :  1, /* flag comm events that are due to an exec */
 				use_clockid    :  1, /* use @clockid for time fields */
 				context_switch :  1, /* context switch data */
-				__reserved_1   : 37;
+				dump_enable    :  1, /* enable trace data output on samples */
+				__reserved_1   : 36;
 
 	union {
 		__u32		wakeup_events;	  /* wakeup every n events */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1d6b97b..26b55f2 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -245,6 +245,7 @@ static const struct {
 } func_limit[] = {
 	{BPF_MAP_TYPE_PROG_ARRAY, BPF_FUNC_tail_call},
 	{BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_read},
+	{BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_dump_control},
 };
 
 static void print_verifier_state(struct verifier_env *env)
@@ -910,7 +911,7 @@ static int check_map_func_compatibility(struct bpf_map *map, int func_id)
 		 * don't allow any other map type to be passed into
 		 * the special func;
 		 */
-		if (bool_map != bool_func)
+		if (bool_func && bool_map != bool_func)
 			return -EINVAL;
 	}
 
diff --git a/kernel/events/core.c b/kernel/events/core.c
index b11756f..74a16af 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6337,6 +6337,9 @@ static int __perf_event_overflow(struct perf_event *event,
 		irq_work_queue(&event->pending);
 	}
 
+	if (!atomic_read(&event->dump_enable))
+		return ret;
+
 	if (event->overflow_handler)
 		event->overflow_handler(event, data, regs);
 	else
@@ -7709,6 +7712,14 @@ static void account_event(struct perf_event *event)
 		account_event_cpu(event, event->cpu);
 }
 
+static void perf_event_check_dump_flag(struct perf_event *event)
+{
+	if (event->attr.dump_enable == 1)
+		atomic_set(&event->dump_enable, 1);
+	else
+		atomic_set(&event->dump_enable, 0);
+}
+
 /*
  * Allocate and initialize a event structure
  */
@@ -7840,6 +7851,8 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
 		}
 	}
 
+	perf_event_check_dump_flag(event);
+
 	return event;
 
 err_per_task:
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 0fe96c7..3175600 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -215,6 +215,48 @@ const struct bpf_func_proto bpf_perf_event_read_proto = {
 	.arg2_type	= ARG_ANYTHING,
 };
 
+/* flags for PERF_EVENT_ARRAY maps */
+enum {
+	PERF_EVENT_CTL_BIT_DUMP = 0,
+	_NR_PERF_EVENT_CTL_BITS,
+};
+
+#define BIT_FLAG_CHECK	GENMASK_ULL(63, _NR_PERF_EVENT_CTL_BITS)
+#define BIT_DUMP_CTL	BIT_ULL(PERF_EVENT_CTL_BIT_DUMP)
+
+static u64 bpf_perf_event_dump_control(u64 r1, u64 index, u64 flag, u64 r4, u64 r5)
+{
+	struct bpf_map *map = (struct bpf_map *) (unsigned long) r1;
+	struct bpf_array *array = container_of(map, struct bpf_array, map);
+	struct perf_event *event;
+
+	if (unlikely(index >= array->map.max_entries))
+		return -E2BIG;
+
+	if (flag & BIT_FLAG_CHECK)
+		return -EINVAL;
+
+	event = (struct perf_event *)array->ptrs[index];
+	if (!event)
+		return -ENOENT;
+
+	if (flag & BIT_DUMP_CTL)
+		atomic_dec_if_positive(&event->dump_enable);
+	else
+		atomic_inc_unless_negative(&event->dump_enable);
+
+	return 0;
+}
+
+static const struct bpf_func_proto bpf_perf_event_dump_control_proto = {
+	.func		= bpf_perf_event_dump_control,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+	.arg2_type	= ARG_ANYTHING,
+	.arg3_type	= ARG_ANYTHING,
+};
+
 static const struct bpf_func_proto *kprobe_prog_func_proto(enum bpf_func_id func_id)
 {
 	switch (func_id) {
@@ -242,6 +284,8 @@ static const struct bpf_func_proto *kprobe_prog_func_proto(enum bpf_func
 		return &bpf_get_smp_processor_id_proto;
 	case BPF_FUNC_perf_event_read:
 		return &bpf_perf_event_read_proto;
+	case BPF_FUNC_perf_event_dump_control:
+		return &bpf_perf_event_dump_control_proto;
 	default:
 		return NULL;
 	}
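
For completeness, a sketch of the user-space side follows. It is an assumption for illustration, not code from this series: the event is opened with the new attr bit set, so trace data output starts enabled and the BPF program can then mute or unmute it per cpu. Building it requires the patched uapi perf_event.h; the function name is hypothetical.

	/*
	 * Hypothetical sketch: open a per-cpu sampling event with trace
	 * data output initially enabled (needs the patched perf_event.h,
	 * which adds attr.dump_enable).
	 */
	#include <linux/perf_event.h>
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <string.h>

	static int open_sampling_event(int cpu)
	{
		struct perf_event_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = PERF_TYPE_HARDWARE;
		attr.config = PERF_COUNT_HW_CPU_CYCLES;
		attr.sample_period = 100000;
		attr.dump_enable = 1;	/* start with output enabled */

		/* pid = -1, group_fd = -1: count on one cpu, no group */
		return syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
	}

The returned fd would then be stored into the BPF_MAP_TYPE_PERF_EVENT_ARRAY map (via bpf_map_update_elem) at the index the program uses, so the helper can look up the event for that cpu.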