From patchwork Mon Oct 12 09:02:43 2015
X-Patchwork-Submitter: Kaixu Xia
X-Patchwork-Id: 54748
From: Kaixu Xia <xiakaixu@huawei.com>
Subject: [RFC PATCH 2/2] bpf: Implement bpf_perf_event_sample_enable/disable() helpers
Date: Mon, 12 Oct 2015 09:02:43 +0000
Message-ID: <1444640563-159175-3-git-send-email-xiakaixu@huawei.com>
In-Reply-To: <1444640563-159175-1-git-send-email-xiakaixu@huawei.com>
References: <1444640563-159175-1-git-send-email-xiakaixu@huawei.com>

The bpf_perf_event_sample_enable/disable() helpers set and clear the
perf_sample_disable flag on a perf event array map, so a BPF program can
turn the output of sampled trace data on and off at run time.
Signed-off-by: Kaixu Xia <xiakaixu@huawei.com>
---
 include/linux/bpf.h      |  2 ++
 include/uapi/linux/bpf.h |  2 ++
 kernel/bpf/verifier.c    |  4 +++-
 kernel/trace/bpf_trace.c | 34 ++++++++++++++++++++++++++++++++++
 4 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 25e073d..09148ff 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -192,6 +192,8 @@ extern const struct bpf_func_proto bpf_map_update_elem_proto;
 extern const struct bpf_func_proto bpf_map_delete_elem_proto;
 extern const struct bpf_func_proto bpf_perf_event_read_proto;
+extern const struct bpf_func_proto bpf_perf_event_sample_enable_proto;
+extern const struct bpf_func_proto bpf_perf_event_sample_disable_proto;
 extern const struct bpf_func_proto bpf_get_prandom_u32_proto;
 extern const struct bpf_func_proto bpf_get_smp_processor_id_proto;
 extern const struct bpf_func_proto bpf_tail_call_proto;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 92a48e2..5229c550 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -272,6 +272,8 @@ enum bpf_func_id {
 	BPF_FUNC_skb_get_tunnel_key,
 	BPF_FUNC_skb_set_tunnel_key,
 	BPF_FUNC_perf_event_read,	/* u64 bpf_perf_event_read(&map, index) */
+	BPF_FUNC_perf_event_sample_enable,	/* u64 bpf_perf_event_enable(&map) */
+	BPF_FUNC_perf_event_sample_disable,	/* u64 bpf_perf_event_disable(&map) */
 	__BPF_FUNC_MAX_ID,
 };
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b074b23..6428daf 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -244,6 +244,8 @@ static const struct {
 } func_limit[] = {
 	{BPF_MAP_TYPE_PROG_ARRAY, BPF_FUNC_tail_call},
 	{BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_read},
+	{BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_sample_enable},
+	{BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_sample_disable},
 };

 static void print_verifier_state(struct verifier_env *env)
@@ -860,7 +862,7 @@ static int check_map_func_compatibility(struct bpf_map *map, int func_id)
 		 * don't allow any other map type to be passed into
 		 * the special func;
 		 */
-		if (bool_map != bool_func)
+		if (bool_func && bool_map != bool_func)
 			return -EINVAL;
 	}
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 0fe96c7..abe943a 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -215,6 +215,36 @@ const struct bpf_func_proto bpf_perf_event_read_proto = {
 	.arg2_type	= ARG_ANYTHING,
 };

+static u64 bpf_perf_event_sample_enable(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
+{
+	struct bpf_map *map = (struct bpf_map *) (unsigned long) r1;
+
+	atomic_set(&map->perf_sample_disable, 0);
+	return 0;
+}
+
+const struct bpf_func_proto bpf_perf_event_sample_enable_proto = {
+	.func		= bpf_perf_event_sample_enable,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+};
+
+static u64 bpf_perf_event_sample_disable(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
+{
+	struct bpf_map *map = (struct bpf_map *) (unsigned long) r1;
+
+	atomic_set(&map->perf_sample_disable, 1);
+	return 0;
+}
+
+const struct bpf_func_proto bpf_perf_event_sample_disable_proto = {
+	.func		= bpf_perf_event_sample_disable,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+};
+
 static const struct bpf_func_proto *kprobe_prog_func_proto(enum bpf_func_id func_id)
 {
 	switch (func_id) {
@@ -242,6 +272,10 @@ static const struct bpf_func_proto *kprobe_prog_func_proto(enum bpf_func_id func
 		return &bpf_get_smp_processor_id_proto;
 	case BPF_FUNC_perf_event_read:
 		return &bpf_perf_event_read_proto;
+	case BPF_FUNC_perf_event_sample_enable:
+		return &bpf_perf_event_sample_enable_proto;
+	case BPF_FUNC_perf_event_sample_disable:
+		return &bpf_perf_event_sample_disable_proto;
 	default:
 		return NULL;
 	}