From patchwork Tue Apr 13 12:15:10 2021
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 420690
From: Jiri Olsa
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: kernel test robot, netdev@vger.kernel.org, bpf@vger.kernel.org,
    Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend, KP Singh,
    Daniel Xu, Steven Rostedt, Jesper Brouer, Toke Høiland-Jørgensen,
    Viktor Malik
Subject: [PATCHv2 RFC bpf-next 1/7] bpf: Move bpf_prog_start/end functions to generic place
Date: Tue, 13 Apr 2021 14:15:10 +0200
Message-Id: <20210413121516.1467989-2-jolsa@kernel.org>
In-Reply-To: <20210413121516.1467989-1-jolsa@kernel.org>
References: <20210413121516.1467989-1-jolsa@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

Moving the bpf_prog_start/end functions, plus the related static
helpers, to a generic place so they can also be used when trampolines
are disabled.
Reported-by: kernel test robot
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/bpf/syscall.c    | 97 +++++++++++++++++++++++++++++++++++++++++
 kernel/bpf/trampoline.c | 97 -----------------------------------------
 2 files changed, 97 insertions(+), 97 deletions(-)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 6428634da57e..90cd58520bd4 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -4494,3 +4494,100 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size)
 	return err;
 }
+
+#define NO_START_TIME 1
+static u64 notrace bpf_prog_start_time(void)
+{
+	u64 start = NO_START_TIME;
+
+	if (static_branch_unlikely(&bpf_stats_enabled_key)) {
+		start = sched_clock();
+		if (unlikely(!start))
+			start = NO_START_TIME;
+	}
+	return start;
+}
+
+static void notrace inc_misses_counter(struct bpf_prog *prog)
+{
+	struct bpf_prog_stats *stats;
+
+	stats = this_cpu_ptr(prog->stats);
+	u64_stats_update_begin(&stats->syncp);
+	stats->misses++;
+	u64_stats_update_end(&stats->syncp);
+}
+
+/* The logic is similar to BPF_PROG_RUN, but with an explicit
+ * rcu_read_lock() and migrate_disable() which are required
+ * for the trampoline. The macro is split into
+ * call __bpf_prog_enter
+ * call prog->bpf_func
+ * call __bpf_prog_exit
+ *
+ * __bpf_prog_enter returns:
+ * 0 - skip execution of the bpf prog
+ * 1 - execute bpf prog
+ * [2..MAX_U64] - execute bpf prog and record execution time.
+ *     This is start time.
+ */
+u64 notrace __bpf_prog_enter(struct bpf_prog *prog)
+	__acquires(RCU)
+{
+	rcu_read_lock();
+	migrate_disable();
+	if (unlikely(__this_cpu_inc_return(*(prog->active)) != 1)) {
+		inc_misses_counter(prog);
+		return 0;
+	}
+	return bpf_prog_start_time();
+}
+
+static void notrace update_prog_stats(struct bpf_prog *prog,
+				      u64 start)
+{
+	struct bpf_prog_stats *stats;
+
+	if (static_branch_unlikely(&bpf_stats_enabled_key) &&
+	    /* static_key could be enabled in __bpf_prog_enter*
+	     * and disabled in __bpf_prog_exit*.
+	     * And vice versa.
+	     * Hence check that 'start' is valid.
+	     */
+	    start > NO_START_TIME) {
+		stats = this_cpu_ptr(prog->stats);
+		u64_stats_update_begin(&stats->syncp);
+		stats->cnt++;
+		stats->nsecs += sched_clock() - start;
+		u64_stats_update_end(&stats->syncp);
+	}
+}
+
+void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start)
+	__releases(RCU)
+{
+	update_prog_stats(prog, start);
+	__this_cpu_dec(*(prog->active));
+	migrate_enable();
+	rcu_read_unlock();
+}
+
+u64 notrace __bpf_prog_enter_sleepable(struct bpf_prog *prog)
+{
+	rcu_read_lock_trace();
+	migrate_disable();
+	might_fault();
+	if (unlikely(__this_cpu_inc_return(*(prog->active)) != 1)) {
+		inc_misses_counter(prog);
+		return 0;
+	}
+	return bpf_prog_start_time();
+}
+
+void notrace __bpf_prog_exit_sleepable(struct bpf_prog *prog, u64 start)
+{
+	update_prog_stats(prog, start);
+	__this_cpu_dec(*(prog->active));
+	migrate_enable();
+	rcu_read_unlock_trace();
+}
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 1f3a4be4b175..951cad26c5a9 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -489,103 +489,6 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
 	mutex_unlock(&trampoline_mutex);
 }
 
-#define NO_START_TIME 1
-static u64 notrace bpf_prog_start_time(void)
-{
-	u64 start = NO_START_TIME;
-
-	if (static_branch_unlikely(&bpf_stats_enabled_key)) {
-		start = sched_clock();
-		if (unlikely(!start))
-			start = NO_START_TIME;
-	}
-	return start;
-}
-
-static void notrace inc_misses_counter(struct bpf_prog *prog)
-{
-	struct bpf_prog_stats *stats;
-
-	stats = this_cpu_ptr(prog->stats);
-	u64_stats_update_begin(&stats->syncp);
-	stats->misses++;
-	u64_stats_update_end(&stats->syncp);
-}
-
-/* The logic is similar to BPF_PROG_RUN, but with an explicit
- * rcu_read_lock() and migrate_disable() which are required
- * for the trampoline. The macro is split into
- * call __bpf_prog_enter
- * call prog->bpf_func
- * call __bpf_prog_exit
- *
- * __bpf_prog_enter returns:
- * 0 - skip execution of the bpf prog
- * 1 - execute bpf prog
- * [2..MAX_U64] - excute bpf prog and record execution time.
- *     This is start time.
- */
-u64 notrace __bpf_prog_enter(struct bpf_prog *prog)
-	__acquires(RCU)
-{
-	rcu_read_lock();
-	migrate_disable();
-	if (unlikely(__this_cpu_inc_return(*(prog->active)) != 1)) {
-		inc_misses_counter(prog);
-		return 0;
-	}
-	return bpf_prog_start_time();
-}
-
-static void notrace update_prog_stats(struct bpf_prog *prog,
-				      u64 start)
-{
-	struct bpf_prog_stats *stats;
-
-	if (static_branch_unlikely(&bpf_stats_enabled_key) &&
-	    /* static_key could be enabled in __bpf_prog_enter*
-	     * and disabled in __bpf_prog_exit*.
-	     * And vice versa.
-	     * Hence check that 'start' is valid.
-	     */
-	    start > NO_START_TIME) {
-		stats = this_cpu_ptr(prog->stats);
-		u64_stats_update_begin(&stats->syncp);
-		stats->cnt++;
-		stats->nsecs += sched_clock() - start;
-		u64_stats_update_end(&stats->syncp);
-	}
-}
-
-void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start)
-	__releases(RCU)
-{
-	update_prog_stats(prog, start);
-	__this_cpu_dec(*(prog->active));
-	migrate_enable();
-	rcu_read_unlock();
-}
-
-u64 notrace __bpf_prog_enter_sleepable(struct bpf_prog *prog)
-{
-	rcu_read_lock_trace();
-	migrate_disable();
-	might_fault();
-	if (unlikely(__this_cpu_inc_return(*(prog->active)) != 1)) {
-		inc_misses_counter(prog);
-		return 0;
-	}
-	return bpf_prog_start_time();
-}
-
-void notrace __bpf_prog_exit_sleepable(struct bpf_prog *prog, u64 start)
-{
-	update_prog_stats(prog, start);
-	__this_cpu_dec(*(prog->active));
-	migrate_enable();
-	rcu_read_unlock_trace();
-}
-
 void notrace __bpf_tramp_enter(struct bpf_tramp_image *tr)
 {
 	percpu_ref_get(&tr->pcref);
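The comment block being moved spells out the contract for these
helpers: run the program only when __bpf_prog_enter() returns
non-zero, and always pair the call with __bpf_prog_exit() so the
per-CPU active counter and the stats stay balanced. A minimal sketch
of a caller honoring that contract follows; the wrapper name is made
up here, but patch 3/7 below does essentially this from its ftrace
handler:

	/* Sketch only, not part of the patch. */
	static void run_prog(struct bpf_prog *prog, u64 ip, u64 parent_ip)
	{
		u64 start;

		/* 0 means the prog is already active on this CPU; skip it */
		start = __bpf_prog_enter(prog);
		if (start)
			bpf_trace_run2(prog, ip, parent_ip);
		/* always paired with enter, even when the run was skipped,
		 * since enter incremented the active counter either way */
		__bpf_prog_exit(prog, start);
	}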
From patchwork Tue Apr 13 12:15:12 2021
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 420688
From: Jiri Olsa
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Martin KaFai Lau,
    Song Liu, Yonghong Song, John Fastabend, KP Singh, Daniel Xu,
    Steven Rostedt, Jesper Brouer, Toke Høiland-Jørgensen, Viktor Malik
Subject: [PATCHv2 RFC bpf-next 3/7] bpf: Add support to attach program to ftrace probe
Date: Tue, 13 Apr 2021 14:15:12 +0200
Message-Id: <20210413121516.1467989-4-jolsa@kernel.org>
In-Reply-To: <20210413121516.1467989-1-jolsa@kernel.org>
References: <20210413121516.1467989-1-jolsa@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

Adding support to attach bpf programs to ftrace probes. The program
needs to be loaded with BPF_TRACE_FTRACE_ENTRY as its
expected_attach_type. With such a program we can create a link using
the new 'funcs_fd' field, which holds the fd of a bpf_functions
object. The attach will create an ftrace_ops object and set its
filter to the functions from the bpf_functions object.

The ftrace bpf program gets the following arguments on entry:

  ip, parent_ip

It's possible to add registers in the future, but I have no use for
them at the moment. Currently bpftrace is using 'ip' to identify the
probe.

Adding 'entry' support for now; 'exit' support can be added later,
when it's supported in ftrace.

Forcing userspace to use the bpf_ftrace_probe BTF ID as the probed
function, which the verifier uses to check the probe's data accesses.
The verifier now checks that bpf_ftrace_probe is used directly as the
probe, but we could change it to accept any function with the same
prototype if needed.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/uapi/linux/bpf.h       |   3 +
 kernel/bpf/syscall.c           | 147 +++++++++++++++++++++++++++++++++
 kernel/bpf/verifier.c          |  30 +++++++
 net/bpf/test_run.c             |   1 +
 tools/include/uapi/linux/bpf.h |   3 +
 5 files changed, 184 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 5d616735fe1b..dbedbcdc8122 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -980,6 +980,7 @@ enum bpf_attach_type {
 	BPF_SK_LOOKUP,
 	BPF_XDP,
 	BPF_SK_SKB_VERDICT,
+	BPF_TRACE_FTRACE_ENTRY,
 	__MAX_BPF_ATTACH_TYPE
 };
 
@@ -993,6 +994,7 @@ enum bpf_link_type {
 	BPF_LINK_TYPE_ITER = 4,
 	BPF_LINK_TYPE_NETNS = 5,
 	BPF_LINK_TYPE_XDP = 6,
+	BPF_LINK_TYPE_FTRACE = 7,
 	MAX_BPF_LINK_TYPE,
 };
 
@@ -1427,6 +1429,7 @@ union bpf_attr {
 			__aligned_u64	iter_info;	/* extra bpf_iter_link_info */
 			__u32		iter_info_len;	/* iter_info length */
 		};
+		__u32		funcs_fd;
 	};
 } link_create;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index b240a500cae5..c83515d41020 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1965,6 +1965,11 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
 	    prog_type != BPF_PROG_TYPE_EXT)
 		return -EINVAL;
 
+	if (prog_type == BPF_PROG_TYPE_TRACING &&
+	    expected_attach_type == BPF_TRACE_FTRACE_ENTRY &&
+	    !IS_ENABLED(CONFIG_FUNCTION_TRACER))
+		return -EINVAL;
+
 	switch (prog_type) {
 	case BPF_PROG_TYPE_CGROUP_SOCK:
 		switch (expected_attach_type) {
@@ -2861,6 +2866,144 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
 	return err;
 }
 
+#ifdef CONFIG_FUNCTION_TRACER
+struct bpf_tracing_ftrace_link {
+	struct bpf_link link;
+	enum bpf_attach_type attach_type;
+	struct ftrace_ops ops;
+};
+
+static void bpf_tracing_ftrace_link_release(struct bpf_link *link)
+{
+	struct bpf_tracing_ftrace_link *tr_link =
+		container_of(link, struct bpf_tracing_ftrace_link, link);
+
+	WARN_ON(unregister_ftrace_function(&tr_link->ops));
+}
+
+static void bpf_tracing_ftrace_link_dealloc(struct bpf_link *link)
+{
+	struct bpf_tracing_ftrace_link *tr_link =
+		container_of(link, struct bpf_tracing_ftrace_link, link);
+
+	kfree(tr_link);
+}
+
+static void bpf_tracing_ftrace_link_show_fdinfo(const struct bpf_link *link,
+						struct seq_file *seq)
+{
+	struct bpf_tracing_ftrace_link *tr_link =
+		container_of(link, struct bpf_tracing_ftrace_link, link);
+
+	seq_printf(seq,
+		   "attach_type:\t%d\n",
+		   tr_link->attach_type);
+}
+
+static int bpf_tracing_ftrace_link_fill_link_info(const struct bpf_link *link,
+						  struct bpf_link_info *info)
+{
+	struct bpf_tracing_ftrace_link *tr_link =
+		container_of(link, struct bpf_tracing_ftrace_link, link);
+
+	info->tracing.attach_type = tr_link->attach_type;
+	return 0;
+}
+
+static const struct bpf_link_ops bpf_tracing_ftrace_lops = {
+	.release = bpf_tracing_ftrace_link_release,
+	.dealloc = bpf_tracing_ftrace_link_dealloc,
+	.show_fdinfo = bpf_tracing_ftrace_link_show_fdinfo,
+	.fill_link_info = bpf_tracing_ftrace_link_fill_link_info,
+};
+
+static void
+bpf_ftrace_function_call(unsigned long ip, unsigned long parent_ip,
+			 struct ftrace_ops *ops, struct ftrace_regs *fregs)
+{
+	struct bpf_tracing_ftrace_link *tr_link;
+	struct bpf_prog *prog;
+	u64 start;
+
+	tr_link = container_of(ops, struct bpf_tracing_ftrace_link, ops);
+	prog = tr_link->link.prog;
+
+	if (prog->aux->sleepable)
+		start = __bpf_prog_enter_sleepable(prog);
+	else
+		start = __bpf_prog_enter(prog);
+
+	if (start)
+		bpf_trace_run2(tr_link->link.prog, ip, parent_ip);
+
+	if (prog->aux->sleepable)
+		__bpf_prog_exit_sleepable(prog, start);
+	else
+		__bpf_prog_exit(prog, start);
+}
+
+static int bpf_tracing_ftrace_attach(struct bpf_prog *prog, int funcs_fd)
+{
+	struct bpf_tracing_ftrace_link *link;
+	struct bpf_link_primer link_primer;
+	struct bpf_functions *funcs;
+	struct ftrace_ops *ops;
+	int err = -ENOMEM;
+	struct fd orig;
+	int i;
+
+	if (prog->type != BPF_PROG_TYPE_TRACING)
+		return -EINVAL;
+
+	if (prog->expected_attach_type != BPF_TRACE_FTRACE_ENTRY)
+		return -EINVAL;
+
+	funcs = bpf_functions_get_from_fd(funcs_fd, &orig);
+	if (IS_ERR(funcs))
+		return PTR_ERR(funcs);
+
+	link = kzalloc(sizeof(*link), GFP_USER);
+	if (!link)
+		goto out_free;
+
+	ops = &link->ops;
+	ops->func = bpf_ftrace_function_call;
+	ops->flags = FTRACE_OPS_FL_DYNAMIC;
+
+	bpf_link_init(&link->link, BPF_LINK_TYPE_FTRACE,
+		      &bpf_tracing_ftrace_lops, prog);
+	link->attach_type = prog->expected_attach_type;
+
+	err = bpf_link_prime(&link->link, &link_primer);
+	if (err)
+		goto out_free;
+
+	for (i = 0; i < funcs->cnt; i++) {
+		err = ftrace_set_filter_ip(ops, funcs->addrs[i], 0, 0);
+		if (err)
+			goto out_free;
+	}
+
+	err = register_ftrace_function(ops);
+	if (err)
+		goto out_free;
+
+	fdput(orig);
+	return bpf_link_settle(&link_primer);
+
+out_free:
+	kfree(link);
+	fdput(orig);
+	return err;
+}
+#else
+static int bpf_tracing_ftrace_attach(struct bpf_prog *prog __maybe_unused,
+				     int funcs_fd __maybe_unused)
+{
+	return -ENODEV;
+}
+#endif /* CONFIG_FUNCTION_TRACER */
+
 struct bpf_raw_tp_link {
 	struct bpf_link link;
 	struct bpf_raw_event_map *btp;
@@ -3093,6 +3236,7 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type)
 	case BPF_CGROUP_GETSOCKOPT:
 	case BPF_CGROUP_SETSOCKOPT:
 		return BPF_PROG_TYPE_CGROUP_SOCKOPT;
+	case BPF_TRACE_FTRACE_ENTRY:
 	case BPF_TRACE_ITER:
 		return BPF_PROG_TYPE_TRACING;
 	case BPF_SK_LOOKUP:
@@ -4149,6 +4293,9 @@ static int tracing_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
 	if (prog->expected_attach_type == BPF_TRACE_ITER)
 		return bpf_iter_link_attach(attr, prog);
+	else if (prog->expected_attach_type == BPF_TRACE_FTRACE_ENTRY)
+		return bpf_tracing_ftrace_attach(prog,
+						 attr->link_create.funcs_fd);
 	else if (prog->type == BPF_PROG_TYPE_EXT)
 		return bpf_tracing_prog_attach(prog,
 					       attr->link_create.target_fd,
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 852541a435ef..ea001aec66f6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -8946,6 +8946,7 @@ static int check_return_code(struct bpf_verifier_env *env)
 		break;
 	case BPF_TRACE_RAW_TP:
 	case BPF_MODIFY_RETURN:
+	case BPF_TRACE_FTRACE_ENTRY:
 		return 0;
 	case BPF_TRACE_ITER:
 		break;
@@ -12794,6 +12795,14 @@ static int check_non_sleepable_error_inject(u32 btf_id)
 	return btf_id_set_contains(&btf_non_sleepable_error_inject, btf_id);
 }
 
+__maybe_unused
+void bpf_ftrace_probe(unsigned long ip __maybe_unused,
+		      unsigned long parent_ip __maybe_unused)
+{
+}
+
+BTF_ID_LIST_SINGLE(btf_ftrace_probe_id, func, bpf_ftrace_probe);
+
 int bpf_check_attach_target(struct bpf_verifier_log *log,
 			    const struct bpf_prog *prog,
 			    const struct bpf_prog *tgt_prog,
@@ -13021,6 +13030,25 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
 		}
 		break;
+	case BPF_TRACE_FTRACE_ENTRY:
+		if (tgt_prog) {
+			bpf_log(log,
+				"Only FENTRY/FEXIT progs are attachable to another BPF prog\n");
+			return -EINVAL;
+		}
+		if (btf_id != btf_ftrace_probe_id[0]) {
+			bpf_log(log,
+				"Only btf id '%d' allowed for ftrace probe\n",
+				btf_ftrace_probe_id[0]);
+			return -EINVAL;
+		}
+		t = btf_type_by_id(btf, t->type);
+		if (!btf_type_is_func_proto(t))
+			return -EINVAL;
+		ret = btf_distill_func_proto(log, btf, t, tname, &tgt_info->fmodel);
+		if (ret < 0)
+			return ret;
+		break;
 	}
 	tgt_info->tgt_addr = addr;
 	tgt_info->tgt_name = tname;
@@ -13081,6 +13109,8 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 		if (!bpf_iter_prog_supported(prog))
 			return -EINVAL;
 		return 0;
+	} else if (prog->expected_attach_type == BPF_TRACE_FTRACE_ENTRY) {
+		return 0;
 	}
 
 	if (prog->type == BPF_PROG_TYPE_LSM) {
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index a5d72c48fb66..0a891c27bad0 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -285,6 +285,7 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog,
 	switch (prog->expected_attach_type) {
 	case BPF_TRACE_FENTRY:
 	case BPF_TRACE_FEXIT:
+	case BPF_TRACE_FTRACE_ENTRY:
 		if (bpf_fentry_test1(1) != 2 ||
 		    bpf_fentry_test2(2, 3) != 5 ||
 		    bpf_fentry_test3(4, 5, 6) != 15 ||
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 5d616735fe1b..dbedbcdc8122 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -980,6 +980,7 @@ enum bpf_attach_type {
 	BPF_SK_LOOKUP,
 	BPF_XDP,
 	BPF_SK_SKB_VERDICT,
+	BPF_TRACE_FTRACE_ENTRY,
 	__MAX_BPF_ATTACH_TYPE
 };
 
@@ -993,6 +994,7 @@ enum bpf_link_type {
 	BPF_LINK_TYPE_ITER = 4,
 	BPF_LINK_TYPE_NETNS = 5,
 	BPF_LINK_TYPE_XDP = 6,
+	BPF_LINK_TYPE_FTRACE = 7,
 	MAX_BPF_LINK_TYPE,
 };
 
@@ -1427,6 +1429,7 @@ union bpf_attr {
 			__aligned_u64	iter_info;	/* extra bpf_iter_link_info */
 			__u32		iter_info_len;	/* iter_info length */
 		};
+		__u32		funcs_fd;
 	};
 } link_create;
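Taken together with the bpf_functions object from patch 2/7 (not
included in this excerpt), the new UAPI is driven in two steps: build
up a set of functions with BPF_FUNCTIONS_ADD, then pass the resulting
fd in link_create.funcs_fd. A rough user-space sketch of that flow
follows; the sys_bpf() wrapper, the error handling, and the exact
semantics of the initial -1 fd are assumptions inferred from how
patch 5/7 uses the command:

	#include <string.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/bpf.h>

	static int sys_bpf(int cmd, union bpf_attr *attr, unsigned int size)
	{
		return syscall(__NR_bpf, cmd, attr, size);
	}

	/* prog_fd: loaded BPF_PROG_TYPE_TRACING prog with
	 * expected_attach_type == BPF_TRACE_FTRACE_ENTRY;
	 * btf_ids: BTF ids of the kernel functions to trace */
	static int attach_ftrace_prog(int prog_fd, __s32 *btf_ids, int cnt)
	{
		union bpf_attr attr;
		int funcs_fd = -1, err, i;

		for (i = 0; i < cnt; i++) {
			memset(&attr, 0, sizeof(attr));
			attr.functions_add.fd = funcs_fd;	/* -1 creates the object */
			attr.functions_add.btf_id = btf_ids[i];
			err = sys_bpf(BPF_FUNCTIONS_ADD, &attr, sizeof(attr));
			if (err < 0)
				return err;
			if (funcs_fd == -1)
				funcs_fd = err;	/* first call returns the new fd */
		}

		memset(&attr, 0, sizeof(attr));
		attr.link_create.prog_fd = prog_fd;
		attr.link_create.attach_type = BPF_TRACE_FTRACE_ENTRY;
		attr.link_create.funcs_fd = funcs_fd;
		return sys_bpf(BPF_LINK_CREATE, &attr, sizeof(attr));	/* link fd */
	}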
From patchwork Tue Apr 13 12:15:14 2021
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 420689
From: Jiri Olsa
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Martin KaFai Lau,
    Song Liu, Yonghong Song, John Fastabend, KP Singh, Daniel Xu,
    Steven Rostedt, Jesper Brouer, Toke Høiland-Jørgensen, Viktor Malik
Subject: [PATCHv2 RFC bpf-next 5/7] libbpf: Add support to load and attach ftrace probe
Date: Tue, 13 Apr 2021 14:15:14 +0200
Message-Id: <20210413121516.1467989-6-jolsa@kernel.org>
In-Reply-To: <20210413121516.1467989-1-jolsa@kernel.org>
References: <20210413121516.1467989-1-jolsa@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

Adding support to load and attach an ftrace probe.

Adding the new section type 'fentry.ftrace', which identifies an
ftrace probe and assigns BPF_TRACE_FTRACE_ENTRY to the prog's
expected_attach_type.

The attach function creates a bpf_functions object and makes an
ftrace link with the program.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/lib/bpf/bpf.c      | 12 +++++++
 tools/lib/bpf/bpf.h      |  5 ++-
 tools/lib/bpf/libbpf.c   | 74 ++++++++++++++++++++++++++++++++++++++++
 tools/lib/bpf/libbpf.map |  1 +
 4 files changed, 91 insertions(+), 1 deletion(-)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index bba48ff4c5c0..b3195ac3e32e 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -643,6 +643,7 @@ int bpf_link_create(int prog_fd, int target_fd,
 	attr.link_create.target_fd = target_fd;
 	attr.link_create.attach_type = attach_type;
 	attr.link_create.flags = OPTS_GET(opts, flags, 0);
+	attr.link_create.funcs_fd = OPTS_GET(opts, funcs_fd, 0);
 
 	if (iter_info_len) {
 		attr.link_create.iter_info =
@@ -971,3 +972,14 @@ int bpf_prog_bind_map(int prog_fd, int map_fd,
 	return sys_bpf(BPF_PROG_BIND_MAP, &attr, sizeof(attr));
 }
+
+int bpf_functions_add(int fd, int btf_id)
+{
+	union bpf_attr attr;
+
+	memset(&attr, 0, sizeof(attr));
+	attr.functions_add.fd = fd;
+	attr.functions_add.btf_id = btf_id;
+
+	return sys_bpf(BPF_FUNCTIONS_ADD, &attr, sizeof(attr));
+}
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 875dde20d56e..f677fe06262b 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -175,8 +175,9 @@ struct bpf_link_create_opts {
 	union bpf_iter_link_info *iter_info;
 	__u32 iter_info_len;
 	__u32 target_btf_id;
+	__u32 funcs_fd;
 };
-#define bpf_link_create_opts__last_field target_btf_id
+#define bpf_link_create_opts__last_field funcs_fd
 
 LIBBPF_API int bpf_link_create(int prog_fd, int target_fd,
 			       enum bpf_attach_type attach_type,
@@ -278,6 +279,8 @@ struct bpf_test_run_opts {
 LIBBPF_API int bpf_prog_test_run_opts(int prog_fd,
 				      struct bpf_test_run_opts *opts);
 
+LIBBPF_API int bpf_functions_add(int fd, int btf_id);
+
 #ifdef __cplusplus
 } /* extern "C" */
 #endif
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index ed5586cce227..b3cb43990524 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -8838,6 +8838,10 @@ static const struct bpf_sec_def section_defs[] = {
 		.expected_attach_type = BPF_TRACE_ITER,
 		.is_attach_btf = true,
 		.attach_fn = attach_iter),
+	SEC_DEF("fentry.ftrace/", TRACING,
+		.expected_attach_type = BPF_TRACE_FTRACE_ENTRY,
+		.is_attach_btf = true,
+		.attach_fn = attach_trace),
 	BPF_EAPROG_SEC("xdp_devmap/",		BPF_PROG_TYPE_XDP,
 						BPF_XDP_DEVMAP),
 	BPF_EAPROG_SEC("xdp_cpumap/",		BPF_PROG_TYPE_XDP,
@@ -9125,6 +9129,7 @@ static int bpf_object__collect_st_ops_relos(struct bpf_object *obj,
 #define BTF_TRACE_PREFIX "btf_trace_"
 #define BTF_LSM_PREFIX "bpf_lsm_"
 #define BTF_ITER_PREFIX "bpf_iter_"
+#define BTF_FTRACE_PROBE "bpf_ftrace_probe"
 #define BTF_MAX_NAME_SIZE 128
 
 static int find_btf_by_prefix_kind(const struct btf *btf, const char *prefix,
@@ -9158,6 +9163,9 @@ static inline int find_attach_btf_id(struct btf *btf, const char *name,
 	else if (attach_type == BPF_TRACE_ITER)
 		err = find_btf_by_prefix_kind(btf, BTF_ITER_PREFIX, name,
 					      BTF_KIND_FUNC);
+	else if (attach_type == BPF_TRACE_FTRACE_ENTRY)
+		err = btf__find_by_name_kind(btf, BTF_FTRACE_PROBE,
+					     BTF_KIND_FUNC);
 	else
 		err = btf__find_by_name_kind(btf, name, BTF_KIND_FUNC);
 
@@ -10191,8 +10199,74 @@ static struct bpf_link *bpf_program__attach_btf_id(struct bpf_program *prog)
 	return (struct bpf_link *)link;
 }
 
+static struct bpf_link *bpf_program__attach_ftrace(struct bpf_program *prog)
+{
+	char *pattern = prog->sec_name + prog->sec_def->len;
+	DECLARE_LIBBPF_OPTS(bpf_link_create_opts, opts);
+	int prog_fd, link_fd, cnt, err, i;
+	enum bpf_attach_type attach_type;
+	struct bpf_link *link = NULL;
+	__s32 *ids = NULL;
+	int funcs_fd = -1;
+
+	prog_fd = bpf_program__fd(prog);
+	if (prog_fd < 0) {
+		pr_warn("prog '%s': can't attach before loaded\n", prog->name);
+		return ERR_PTR(-EINVAL);
+	}
+
+	err = bpf_object__load_vmlinux_btf(prog->obj, true);
+	if (err)
+		return ERR_PTR(err);
+
+	cnt = btf__find_by_pattern_kind(prog->obj->btf_vmlinux, pattern,
+					BTF_KIND_FUNC, &ids);
+	if (cnt <= 0)
+		return ERR_PTR(-EINVAL);
+
+	for (i = 0; i < cnt; i++) {
+		err = bpf_functions_add(funcs_fd, ids[i]);
+		if (err < 0) {
+			pr_warn("prog '%s': can't attach function BTF ID %d\n",
+				prog->name, ids[i]);
+			goto out_err;
+		}
+		if (funcs_fd == -1)
+			funcs_fd = err;
+	}
+
+	link = calloc(1, sizeof(*link));
+	if (!link) {
+		err = -ENOMEM;
+		goto out_err;
+	}
+	link->detach = &bpf_link__detach_fd;
+
+	opts.funcs_fd = funcs_fd;
+
+	attach_type = bpf_program__get_expected_attach_type(prog);
+	link_fd = bpf_link_create(prog_fd, 0, attach_type, &opts);
+	if (link_fd < 0) {
+		err = -errno;
+		goto out_err;
+	}
+	link->fd = link_fd;
+	free(ids);
+	return link;
+
+out_err:
+	if (funcs_fd != -1)
+		close(funcs_fd);
+	free(link);
+	free(ids);
+	return ERR_PTR(err);
+}
+
 struct bpf_link *bpf_program__attach_trace(struct bpf_program *prog)
 {
+	if (prog->expected_attach_type == BPF_TRACE_FTRACE_ENTRY)
+		return bpf_program__attach_ftrace(prog);
+
 	return bpf_program__attach_btf_id(prog);
 }
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index b9b29baf1df8..69cbe54125e3 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -355,6 +355,7 @@ LIBBPF_0.4.0 {
 	global:
 		btf__add_float;
 		btf__add_type;
+		bpf_functions_add;
 		bpf_linker__add_file;
 		bpf_linker__finalize;
 		bpf_linker__free;
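With the section definition above in place, wiring a program to a set
of kernel functions is the usual libbpf open/load/attach cycle; the
wildcard after 'fentry.ftrace/' is resolved against vmlinux BTF by
btf__find_by_pattern_kind(), added in patch 4/7 (not shown here). A
sketch of both sides, where 'probe.bpf.o', 'on_entry' and the ksys_*
pattern are made-up names and error handling is omitted:

	/* BPF side (probe.bpf.c); the two entry arguments are ip, parent_ip */
	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	char LICENSE[] SEC("license") = "GPL";

	SEC("fentry.ftrace/ksys_*")
	int BPF_PROG(on_entry, unsigned long ip, unsigned long parent_ip)
	{
		bpf_printk("entry: ip %lx", ip);
		return 0;
	}

	/* user-space side */
	struct bpf_object *obj = bpf_object__open_file("probe.bpf.o", NULL);
	struct bpf_program *prog;
	struct bpf_link *link;

	bpf_object__load(obj);
	prog = bpf_object__find_program_by_name(obj, "on_entry");
	link = bpf_program__attach_trace(prog);	/* routes to attach_ftrace */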
From patchwork Tue Apr 13 12:15:15 2021
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 420687
From: Jiri Olsa
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Martin KaFai Lau,
    Song Liu, Yonghong Song, John Fastabend, KP Singh, Daniel Xu,
    Steven Rostedt, Jesper Brouer, Toke Høiland-Jørgensen, Viktor Malik
Subject: [PATCHv2 RFC bpf-next 6/7] selftests/bpf: Add ftrace probe to fentry test
Date: Tue, 13 Apr 2021 14:15:15 +0200
Message-Id: <20210413121516.1467989-7-jolsa@kernel.org>
In-Reply-To: <20210413121516.1467989-1-jolsa@kernel.org>
References: <20210413121516.1467989-1-jolsa@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

Adding 2 more tests to the fentry test, to show/test the ftrace probe.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../selftests/bpf/prog_tests/fentry_test.c      |  5 ++++-
 tools/testing/selftests/bpf/progs/fentry_test.c | 16 ++++++++++++++++
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/fentry_test.c b/tools/testing/selftests/bpf/prog_tests/fentry_test.c
index 04ebbf1cb390..70f414cb3bfd 100644
--- a/tools/testing/selftests/bpf/prog_tests/fentry_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/fentry_test.c
@@ -26,12 +26,15 @@ void test_fentry_test(void)
 	      err, errno, retval, duration);
 
 	result = (__u64 *)fentry_skel->bss;
-	for (i = 0; i < 6; i++) {
+	for (i = 0; i < 8; i++) {
 		if (CHECK(result[i] != 1, "result",
 			  "fentry_test%d failed err %lld\n", i + 1, result[i]))
 			goto cleanup;
 	}
 
+	ASSERT_EQ(result[8], 8, "result");
+	ASSERT_EQ(result[9], 2, "result");
+
 cleanup:
 	fentry_test__destroy(fentry_skel);
 }
diff --git a/tools/testing/selftests/bpf/progs/fentry_test.c b/tools/testing/selftests/bpf/progs/fentry_test.c
index 52a550d281d9..b32b589923a4 100644
--- a/tools/testing/selftests/bpf/progs/fentry_test.c
+++ b/tools/testing/selftests/bpf/progs/fentry_test.c
@@ -77,3 +77,19 @@ int BPF_PROG(test8, struct bpf_fentry_test_t *arg)
 	test8_result = 1;
 	return 0;
 }
+
+__u64 test9_result = 0;
+SEC("fentry.ftrace/bpf_fentry_test*")
+int BPF_PROG(test9)
+{
+	test9_result++;
+	return 0;
+}
+
+__u64 test10_result = 0;
+SEC("fentry.ftrace/bpf_fentry_test1|bpf_fentry_test2")
+int BPF_PROG(test10)
+{
+	test10_result++;
+	return 0;
+}
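The expected values in the new ASSERT_EQ checks follow from the
section patterns: test9's 'bpf_fentry_test*' matches all eight
bpf_fentry_test functions, each called once per test run, so its
counter ends at 8, while test10's 'bpf_fentry_test1|bpf_fentry_test2'
matches only two of them, so it ends at 2.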