[PATCHv3,bpf-next,0/7] bpf, x86: Add bpf_get_func_ip helper

Message ID 20210707214751.159713-1-jolsa@kernel.org

Jiri Olsa July 7, 2021, 9:47 p.m. UTC
hi,
adding bpf_get_func_ip helper that returns the IP address of the
caller function for trampoline and kprobe programs.

There are two separate implementations of the bpf_get_func_ip
helper, one for trampoline progs and one for kprobe/kretprobe
progs.

The trampoline helper call is replaced/inlined by the verifier
with a single load instruction. The kprobe/kretprobe version is
an actual helper call that returns the prepared caller address.

Also available at:
  https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git
  bpf/get_func_ip

v3 changes:
  - resend with Masami in cc and v3 in each patch subject

v2 changes:
  - use kprobe_running to get kprobe instead of cpu var [Masami]
  - added support to add kprobe on function+offset
    and test for that [Alan]

thanks,
jirka


---
Alan Maguire (1):
      libbpf: allow specification of "kprobe/function+offset"

Jiri Olsa (6):
      bpf, x86: Store caller's ip in trampoline stack
      bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip
      bpf: Add bpf_get_func_ip helper for tracing programs
      bpf: Add bpf_get_func_ip helper for kprobe programs
      selftests/bpf: Add test for bpf_get_func_ip helper
      selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe

 arch/x86/net/bpf_jit_comp.c                               | 19 +++++++++++++++++++
 include/linux/bpf.h                                       |  5 +++++
 include/linux/filter.h                                    |  3 ++-
 include/uapi/linux/bpf.h                                  |  7 +++++++
 kernel/bpf/trampoline.c                                   | 12 +++++++++---
 kernel/bpf/verifier.c                                     | 55 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/trace/bpf_trace.c                                  | 32 ++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h                            |  7 +++++++
 tools/lib/bpf/libbpf.c                                    | 20 +++++++++++++++++---
 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/progs/get_func_ip_test.c      | 75 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 11 files changed, 270 insertions(+), 7 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_test.c

Comments

Andrii Nakryiko July 8, 2021, 12:06 a.m. UTC | #1
On Wed, Jul 7, 2021 at 2:53 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> Adding bpf_get_func_ip helper for BPF_PROG_TYPE_TRACING programs,
> specifically for all trampoline attach types.
>
> The trampoline's caller IP address is stored in (ctx - 8) address.
> so there's no reason to actually call the helper, but rather fixup
> the call instruction and return [ctx - 8] value directly (suggested
> by Alexei).
>
> [fixed has_get_func_ip wrong return type]
> Reported-by: kernel test robot <lkp@intel.com>
> Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  include/uapi/linux/bpf.h       |  7 +++++
>  kernel/bpf/verifier.c          | 53 ++++++++++++++++++++++++++++++++++
>  kernel/trace/bpf_trace.c       | 15 ++++++++++
>  tools/include/uapi/linux/bpf.h |  7 +++++
>  4 files changed, 82 insertions(+)
>

[...]

>  static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>                              int *insn_idx_p)
>  {
> @@ -6225,6 +6256,12 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
>         if (func_id == BPF_FUNC_get_stackid || func_id == BPF_FUNC_get_stack)
>                 env->prog->call_get_stack = true;
>
> +       if (func_id == BPF_FUNC_get_func_ip) {
> +               if (has_get_func_ip(env))

from has_xxx name I'd expect it returns true/false, so this reads
super confusing. check_get_func_ip would be a bit more consistent with
other cases like this (still reads confusing to me, but that's ok)

> +                       return -ENOTSUPP;
> +               env->prog->call_get_func_ip = true;
> +       }
> +
>         if (changes_data)
>                 clear_all_pkt_pointers(env);
>         return 0;

[...]
Andrii Nakryiko July 8, 2021, 12:12 a.m. UTC | #2
On Wed, Jul 7, 2021 at 2:54 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> Adding test for bpf_get_func_ip helper for fentry, fexit,
> kprobe, kretprobe and fmod_ret programs.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  .../bpf/prog_tests/get_func_ip_test.c         | 42 +++++++++++++
>  .../selftests/bpf/progs/get_func_ip_test.c    | 62 +++++++++++++++++++
>  2 files changed, 104 insertions(+)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
>  create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_test.c
>

[...]

> +       ASSERT_OK(err, "test_run");
> +
> +       result = (__u64 *)skel->bss;
> +       for (i = 0; i < sizeof(*skel->bss) / sizeof(__u64); i++) {
> +               if (!ASSERT_EQ(result[i], 1, "fentry_result"))
> +                       break;
> +       }

I dislike such generic loop over results. It's super error prone and
takes the same 5 lines of code that you'd write for explicit

ASSERT_EQ(testX_result, 1, "testX_result"); /* 5 times */

> +
> +       get_func_ip_test__detach(skel);

no need to explicitly detach, __destroy does that automatically

> +
> +cleanup:
> +       get_func_ip_test__destroy(skel);
> +}

[...]
Andrii Nakryiko July 8, 2021, 12:18 a.m. UTC | #3
On Wed, Jul 7, 2021 at 2:54 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> Adding test for bpf_get_func_ip in kprobe+ofset probe.

typo: offset

> Because of the offset value it's arch specific, adding
> it only for x86_64 architecture.

I'm not following, you specified +0x5 offset explicitly, why is this
arch-specific?

>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  .../testing/selftests/bpf/progs/get_func_ip_test.c  | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> index 8ca54390d2b1..e8a9428a0ea3 100644
> --- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> +++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> @@ -10,6 +10,7 @@ extern const void bpf_fentry_test2 __ksym;
>  extern const void bpf_fentry_test3 __ksym;
>  extern const void bpf_fentry_test4 __ksym;
>  extern const void bpf_modify_return_test __ksym;
> +extern const void bpf_fentry_test6 __ksym;
>
>  __u64 test1_result = 0;
>  SEC("fentry/bpf_fentry_test1")
> @@ -60,3 +61,15 @@ int BPF_PROG(fmod_ret_test, int a, int *b, int ret)
>         test5_result = (const void *) addr == &bpf_modify_return_test;
>         return ret;
>  }
> +
> +#ifdef __x86_64__
> +__u64 test6_result = 0;

see, and you just forgot to update the user-space part of the test to
even check test6_result...

please group variables together and do explicit ASSERT_EQ

> +SEC("kprobe/bpf_fentry_test6+0x5")
> +int test6(struct pt_regs *ctx)
> +{
> +       __u64 addr = bpf_get_func_ip(ctx);
> +
> +       test6_result = (const void *) addr == &bpf_fentry_test6 + 5;
> +       return 0;
> +}
> +#endif
> --
> 2.31.1
>
Alexei Starovoitov July 8, 2021, 2:11 a.m. UTC | #4
On Wed, Jul 07, 2021 at 11:47:47PM +0200, Jiri Olsa wrote:
>
> +static bool allow_get_func_ip_tracing(struct bpf_verifier_env *env)
> +{
> +	return env->prog->jit_requested && IS_ENABLED(CONFIG_X86_64);

Why does it have to be gated by 'jited && x86_64' ?
It's gated by bpf trampoline and it's only implemented on x86_64 so far.
The trampoline has plenty of features. I would expect bpf trampoline
for arm64 to implement all of them. If not the func_ip would be just
one of the trampoline features that couldn't be implemented and at that
time we'd need a flag mask of a sort, but I'd rather push for feature
equivalence between trampoline implementations.

Then jited part also doesn't seem to be necessary.
The trampoline passed pointer to a stack in R1.
Interpreter should deal with BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8) insn
the same way and it should work, since trampoline prepared it.
What did I miss?

> +static int has_get_func_ip(struct bpf_verifier_env *env)
> +{
> +	enum bpf_attach_type eatype = env->prog->expected_attach_type;
> +	enum bpf_prog_type type = resolve_prog_type(env->prog);
> +	int func_id = BPF_FUNC_get_func_ip;
> +
> +	if (type == BPF_PROG_TYPE_TRACING) {
> +		if (eatype != BPF_TRACE_FENTRY && eatype != BPF_TRACE_FEXIT &&
> +		    eatype != BPF_MODIFY_RETURN) {
> +			verbose(env, "func %s#%d supported only for fentry/fexit/fmod_ret programs\n",
> +				func_id_name(func_id), func_id);
> +			return -ENOTSUPP;
> +		}
> +		if (!allow_get_func_ip_tracing(env)) {
> +			verbose(env, "func %s#%d for tracing programs supported only for JITed x86_64\n",
> +				func_id_name(func_id), func_id);
> +			return -ENOTSUPP;
> +		}
> +		return 0;
> +	}
> +
> +	verbose(env, "func %s#%d not supported for program type %d\n",
> +		func_id_name(func_id), func_id, type);
> +	return -ENOTSUPP;
> +}
> +
>  static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  			     int *insn_idx_p)
>  {
> @@ -6225,6 +6256,12 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
>  	if (func_id == BPF_FUNC_get_stackid || func_id == BPF_FUNC_get_stack)
>  		env->prog->call_get_stack = true;
>  
> +	if (func_id == BPF_FUNC_get_func_ip) {
> +		if (has_get_func_ip(env))
> +			return -ENOTSUPP;
> +		env->prog->call_get_func_ip = true;
> +	}
> +
>  	if (changes_data)
>  		clear_all_pkt_pointers(env);
>  	return 0;
> @@ -12369,6 +12406,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>  {
>  	struct bpf_prog *prog = env->prog;
>  	bool expect_blinding = bpf_jit_blinding_enabled(prog);
> +	enum bpf_prog_type prog_type = resolve_prog_type(prog);
>  	struct bpf_insn *insn = prog->insnsi;
>  	const struct bpf_func_proto *fn;
>  	const int insn_cnt = prog->len;
> @@ -12702,6 +12740,21 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>  			continue;
>  		}
>  
> +		/* Implement bpf_get_func_ip inline. */
> +		if (prog_type == BPF_PROG_TYPE_TRACING &&
> +		    insn->imm == BPF_FUNC_get_func_ip) {
> +			/* Load IP address from ctx - 8 */
> +			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
> +
> +			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, 1);
> +			if (!new_prog)
> +				return -ENOMEM;
> +
> +			env->prog = prog = new_prog;
> +			insn      = new_prog->insnsi + i + delta;
> +			continue;
> +		}
> +
>  patch_call_imm:
>  		fn = env->ops->get_func_proto(insn->imm, env->prog);
>  		/* all functions that have prototype and verifier allowed
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 64bd2d84367f..9edd3b1a00ad 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -948,6 +948,19 @@ const struct bpf_func_proto bpf_snprintf_btf_proto = {
>  	.arg5_type	= ARG_ANYTHING,
>  };
>  
> +BPF_CALL_1(bpf_get_func_ip_tracing, void *, ctx)
> +{
> +	/* Stub, the helper call is inlined in the program. */
> +	return 0;
> +}

may be add a WARN in here that it should never be executed ?
Or may be add an actual implementation:
 return ((u64 *)ctx)[-1];
and check that it works without inlining by the verifier?
Jiri Olsa July 11, 2021, 2:48 p.m. UTC | #5
On Wed, Jul 07, 2021 at 07:11:23PM -0700, Alexei Starovoitov wrote:
> On Wed, Jul 07, 2021 at 11:47:47PM +0200, Jiri Olsa wrote:
> >
> > +static bool allow_get_func_ip_tracing(struct bpf_verifier_env *env)
> > +{
> > +	return env->prog->jit_requested && IS_ENABLED(CONFIG_X86_64);
>
> Why does it have to be gated by 'jited && x86_64' ?
> It's gated by bpf trampoline and it's only implemented on x86_64 so far.
> The trampoline has plenty of features. I would expect bpf trampoline
> for arm64 to implement all of them. If not the func_ip would be just
> one of the trampoline features that couldn't be implemented and at that
> time we'd need a flag mask of a sort, but I'd rather push for feature
> equivalence between trampoline implementations.

ok, check for trampoline's prog types should be enough

>
> Then jited part also doesn't seem to be necessary.
> The trampoline passed pointer to a stack in R1.
> Interpreter should deal with BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8) insn
> the same way and it should work, since trampoline prepared it.
> What did I miss?

ah right.. will remove that

SNIP

> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 64bd2d84367f..9edd3b1a00ad 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -948,6 +948,19 @@ const struct bpf_func_proto bpf_snprintf_btf_proto = {
> >  	.arg5_type	= ARG_ANYTHING,
> >  };
> >  
> > +BPF_CALL_1(bpf_get_func_ip_tracing, void *, ctx)
> > +{
> > +	/* Stub, the helper call is inlined in the program. */
> > +	return 0;
> > +}
>
> may be add a WARN in here that it should never be executed ?
> Or may be add an actual implementation:
>  return ((u64 *)ctx)[-1];
> and check that it works without inlining by the verifier?
>

sure, but having tracing program with this helper, it will be
always inlined, right? I can't see how it could be skipped

thanks,
jirka
Jiri Olsa July 11, 2021, 2:48 p.m. UTC | #6
On Wed, Jul 07, 2021 at 05:18:49PM -0700, Andrii Nakryiko wrote:
> On Wed, Jul 7, 2021 at 2:54 PM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > Adding test for bpf_get_func_ip in kprobe+ofset probe.
>
> typo: offset
>
> > Because of the offset value it's arch specific, adding
> > it only for x86_64 architecture.
>
> I'm not following, you specified +0x5 offset explicitly, why is this
> arch-specific?

I need some instruction offset != 0 in the traced function,
x86_64's fentry jump is 5 bytes, other archs will be different

>
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  .../testing/selftests/bpf/progs/get_func_ip_test.c  | 13 +++++++++++++
> >  1 file changed, 13 insertions(+)
> >
> > diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > index 8ca54390d2b1..e8a9428a0ea3 100644
> > --- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > +++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > @@ -10,6 +10,7 @@ extern const void bpf_fentry_test2 __ksym;
> >  extern const void bpf_fentry_test3 __ksym;
> >  extern const void bpf_fentry_test4 __ksym;
> >  extern const void bpf_modify_return_test __ksym;
> > +extern const void bpf_fentry_test6 __ksym;
> >
> >  __u64 test1_result = 0;
> >  SEC("fentry/bpf_fentry_test1")
> > @@ -60,3 +61,15 @@ int BPF_PROG(fmod_ret_test, int a, int *b, int ret)
> >         test5_result = (const void *) addr == &bpf_modify_return_test;
> >         return ret;
> >  }
> > +
> > +#ifdef __x86_64__
> > +__u64 test6_result = 0;
>
> see, and you just forgot to update the user-space part of the test to
> even check test6_result...
>
> please group variables together and do explicit ASSERT_EQ

right.. will change

thanks,
jirka

>
> > +SEC("kprobe/bpf_fentry_test6+0x5")
> > +int test6(struct pt_regs *ctx)
> > +{
> > +       __u64 addr = bpf_get_func_ip(ctx);
> > +
> > +       test6_result = (const void *) addr == &bpf_fentry_test6 + 5;
> > +       return 0;
> > +}
> > +#endif
> > --
> > 2.31.1
> >
Jiri Olsa July 11, 2021, 2:48 p.m. UTC | #7
On Wed, Jul 07, 2021 at 05:06:17PM -0700, Andrii Nakryiko wrote:
> On Wed, Jul 7, 2021 at 2:53 PM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > Adding bpf_get_func_ip helper for BPF_PROG_TYPE_TRACING programs,
> > specifically for all trampoline attach types.
> >
> > The trampoline's caller IP address is stored in (ctx - 8) address.
> > so there's no reason to actually call the helper, but rather fixup
> > the call instruction and return [ctx - 8] value directly (suggested
> > by Alexei).
> >
> > [fixed has_get_func_ip wrong return type]
> > Reported-by: kernel test robot <lkp@intel.com>
> > Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  include/uapi/linux/bpf.h       |  7 +++++
> >  kernel/bpf/verifier.c          | 53 ++++++++++++++++++++++++++++++++++
> >  kernel/trace/bpf_trace.c       | 15 ++++++++++
> >  tools/include/uapi/linux/bpf.h |  7 +++++
> >  4 files changed, 82 insertions(+)
> >
>
> [...]
>
> >  static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
> >                              int *insn_idx_p)
> >  {
> > @@ -6225,6 +6256,12 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> >         if (func_id == BPF_FUNC_get_stackid || func_id == BPF_FUNC_get_stack)
> >                 env->prog->call_get_stack = true;
> >
> > +       if (func_id == BPF_FUNC_get_func_ip) {
> > +               if (has_get_func_ip(env))
>
> from has_xxx name I'd expect it returns true/false, so this reads
> super confusing. check_get_func_ip would be a bit more consistent with
> other cases like this (still reads confusing to me, but that's ok)

ok, will change

jirka

>
> > +                       return -ENOTSUPP;
> > +               env->prog->call_get_func_ip = true;
> > +       }
> > +
> >         if (changes_data)
> >                 clear_all_pkt_pointers(env);
> >         return 0;
>
> [...]
>
Jiri Olsa July 11, 2021, 2:48 p.m. UTC | #8
On Wed, Jul 07, 2021 at 05:12:07PM -0700, Andrii Nakryiko wrote:
> On Wed, Jul 7, 2021 at 2:54 PM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > Adding test for bpf_get_func_ip helper for fentry, fexit,
> > kprobe, kretprobe and fmod_ret programs.
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  .../bpf/prog_tests/get_func_ip_test.c         | 42 +++++++++++++
> >  .../selftests/bpf/progs/get_func_ip_test.c    | 62 +++++++++++++++++++
> >  2 files changed, 104 insertions(+)
> >  create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_test.c
> >
>
> [...]
>
> > +       ASSERT_OK(err, "test_run");
> > +
> > +       result = (__u64 *)skel->bss;
> > +       for (i = 0; i < sizeof(*skel->bss) / sizeof(__u64); i++) {
> > +               if (!ASSERT_EQ(result[i], 1, "fentry_result"))
> > +                       break;
> > +       }
>
> I dislike such generic loop over results. It's super error prone and
> takes the same 5 lines of code that you'd write for explicit
>
> ASSERT_EQ(testX_result, 1, "testX_result"); /* 5 times */

ok

>
> > +
> > +       get_func_ip_test__detach(skel);
>
> no need to explicitly detach, __destroy does that automatically

I see, will remove that

thanks,
jirka

>
> > +
> > +cleanup:
> > +       get_func_ip_test__destroy(skel);
> > +}
>
> [...]
>
Andrii Nakryiko July 12, 2021, 11:32 p.m. UTC | #9
On Sun, Jul 11, 2021 at 7:48 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Wed, Jul 07, 2021 at 05:18:49PM -0700, Andrii Nakryiko wrote:
> > On Wed, Jul 7, 2021 at 2:54 PM Jiri Olsa <jolsa@redhat.com> wrote:
> > >
> > > Adding test for bpf_get_func_ip in kprobe+ofset probe.
> >
> > typo: offset
> >
> > > Because of the offset value it's arch specific, adding
> > > it only for x86_64 architecture.
> >
> > I'm not following, you specified +0x5 offset explicitly, why is this
> > arch-specific?
>
> I need some instruction offset != 0 in the traced function,
> x86_64's fentry jump is 5 bytes, other archs will be different

Right, ok. I don't see an easy way to detect this offset, but the
#ifdef __x86_64__ detection doesn't work because we are compiling with
-target bpf. Please double-check that it actually worked in the first
place.

I think a better way would be to have test6 defined unconditionally in
BPF code, but then disable loading test6 program on anything but
x86_64 platform at runtime with bpf_program__set_autoload(false).

>
> >
> > >
> > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > ---
> > >  .../testing/selftests/bpf/progs/get_func_ip_test.c  | 13 +++++++++++++
> > >  1 file changed, 13 insertions(+)
> > >
> > > diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > > index 8ca54390d2b1..e8a9428a0ea3 100644
> > > --- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > > +++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > > @@ -10,6 +10,7 @@ extern const void bpf_fentry_test2 __ksym;
> > >  extern const void bpf_fentry_test3 __ksym;
> > >  extern const void bpf_fentry_test4 __ksym;
> > >  extern const void bpf_modify_return_test __ksym;
> > > +extern const void bpf_fentry_test6 __ksym;
> > >
> > >  __u64 test1_result = 0;
> > >  SEC("fentry/bpf_fentry_test1")
> > > @@ -60,3 +61,15 @@ int BPF_PROG(fmod_ret_test, int a, int *b, int ret)
> > >         test5_result = (const void *) addr == &bpf_modify_return_test;
> > >         return ret;
> > >  }
> > > +
> > > +#ifdef __x86_64__
> > > +__u64 test6_result = 0;
> >
> > see, and you just forgot to update the user-space part of the test to
> > even check test6_result...
> >
> > please group variables together and do explicit ASSERT_EQ
>
> right.. will change
>
> thanks,
> jirka
>
> >
> > > +SEC("kprobe/bpf_fentry_test6+0x5")
> > > +int test6(struct pt_regs *ctx)
> > > +{
> > > +       __u64 addr = bpf_get_func_ip(ctx);
> > > +
> > > +       test6_result = (const void *) addr == &bpf_fentry_test6 + 5;
> > > +       return 0;
> > > +}
> > > +#endif
> > > --
> > > 2.31.1
> > >
> >
>
Jiri Olsa July 13, 2021, 2:15 p.m. UTC | #10
On Mon, Jul 12, 2021 at 04:32:25PM -0700, Andrii Nakryiko wrote:
> On Sun, Jul 11, 2021 at 7:48 AM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > On Wed, Jul 07, 2021 at 05:18:49PM -0700, Andrii Nakryiko wrote:
> > > On Wed, Jul 7, 2021 at 2:54 PM Jiri Olsa <jolsa@redhat.com> wrote:
> > > >
> > > > Adding test for bpf_get_func_ip in kprobe+ofset probe.
> > >
> > > typo: offset
> > >
> > > > Because of the offset value it's arch specific, adding
> > > > it only for x86_64 architecture.
> > >
> > > I'm not following, you specified +0x5 offset explicitly, why is this
> > > arch-specific?
> >
> > I need some instruction offset != 0 in the traced function,
> > x86_64's fentry jump is 5 bytes, other archs will be different
>
> Right, ok. I don't see an easy way to detect this offset, but the
> #ifdef __x86_64__ detection doesn't work because we are compiling with
> -target bpf. Please double-check that it actually worked in the first
> place.

ugh, right

>
> I think a better way would be to have test6 defined unconditionally in
> BPF code, but then disable loading test6 program on anything but
> x86_64 platform at runtime with bpf_program__set_autoload(false).

great, I did not know about this function, will be easier

thanks,
jirka

>
> >
> > >
> > > >
> > > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > > ---
> > > >  .../testing/selftests/bpf/progs/get_func_ip_test.c  | 13 +++++++++++++
> > > >  1 file changed, 13 insertions(+)
> > > >
> > > > diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > > > index 8ca54390d2b1..e8a9428a0ea3 100644
> > > > --- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > > > +++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > > > @@ -10,6 +10,7 @@ extern const void bpf_fentry_test2 __ksym;
> > > >  extern const void bpf_fentry_test3 __ksym;
> > > >  extern const void bpf_fentry_test4 __ksym;
> > > >  extern const void bpf_modify_return_test __ksym;
> > > > +extern const void bpf_fentry_test6 __ksym;
> > > >
> > > >  __u64 test1_result = 0;
> > > >  SEC("fentry/bpf_fentry_test1")
> > > > @@ -60,3 +61,15 @@ int BPF_PROG(fmod_ret_test, int a, int *b, int ret)
> > > >         test5_result = (const void *) addr == &bpf_modify_return_test;
> > > >         return ret;
> > > >  }
> > > > +
> > > > +#ifdef __x86_64__
> > > > +__u64 test6_result = 0;
> > >
> > > see, and you just forgot to update the user-space part of the test to
> > > even check test6_result...
> > >
> > > please group variables together and do explicit ASSERT_EQ
> >
> > right.. will change
> >
> > thanks,
> > jirka
> >
> > >
> > > > +SEC("kprobe/bpf_fentry_test6+0x5")
> > > > +int test6(struct pt_regs *ctx)
> > > > +{
> > > > +       __u64 addr = bpf_get_func_ip(ctx);
> > > > +
> > > > +       test6_result = (const void *) addr == &bpf_fentry_test6 + 5;
> > > > +       return 0;
> > > > +}
> > > > +#endif
> > > > --
> > > > 2.31.1
> > > >
> > >
> >
>