Message ID: 20231002135242.247536-1-asavkov@redhat.com
State:      New
Series:     [RFC] tracing: change syscall number type in struct syscall_trace_*
On Mon, Oct 2, 2023 at 6:53 AM Artem Savkov <asavkov@redhat.com> wrote:
>
> linux-rt-devel tree contains a patch that adds an extra member to struct
> trace_entry.

Can you please point to the patch itself that makes that change?

> This causes the offset of the args field in struct
> trace_event_raw_sys_enter to be different from the one in struct
> syscall_trace_enter:
>
> [...]
On Mon, 2 Oct 2023 15:52:42 +0200
Artem Savkov <asavkov@redhat.com> wrote:

> linux-rt-devel tree contains a patch that adds an extra member to struct
> trace_entry. This causes the offset of the args field in struct
> trace_event_raw_sys_enter to be different from the one in struct
> syscall_trace_enter:

This patch looks like it's fixing the symptom and not the issue. No code
should rely on the two event structures being related. That's an unwanted
coupling that will likely cause issues down the road (like the RT patch
you mentioned).

> [...]
>
> This, in turn, causes perf_event_set_bpf_prog() to fail while running the
> bpf test_profiler testcase, because max_ctx_offset is calculated based on
> the former struct, while off is based on the latter:

The above appears to be pointing to the real bug: "is calculated based on
the former struct, while off is based on the latter". Why are the two being
used together? They are supposed to be *unrelated*!

> 10488          if (is_tracepoint || is_syscall_tp) {
> 10489                  int off = trace_event_get_offsets(event->tp_event);

So basically this is clumping together the raw_syscalls and the syscalls
events as if they are the same. But they are not; they are created
differently. It's basically like using one structure to get the offsets of
another structure. That would be a bug anyplace else in the kernel. Sounds
like it's a bug here too.

I think the issue is with this code, not the tracing code.

We could expose struct syscall_trace_enter and struct syscall_trace_exit if
the offsets to those are needed.

-- Steve

> 10490
> 10491                  if (prog->aux->max_ctx_offset > off)
> 10492                          return -EACCES;
> 10493          }
>
> This patch changes the type of the nr member in the syscall_trace_* structs
> to long so that the "args" offset is equal to that in struct
> trace_event_raw_sys_enter.
On Tue, Oct 03, 2023 at 03:11:15PM -0700, Andrii Nakryiko wrote:
> On Mon, Oct 2, 2023 at 6:53 AM Artem Savkov <asavkov@redhat.com> wrote:
> >
> > linux-rt-devel tree contains a patch that adds an extra member to struct
> > trace_entry.
>
> Can you please point to the patch itself that makes that change?

Of course, some context would be useful. The patch in question is
b1773eac3f29c ("sched: Add support for lazy preemption") from the rt-devel
tree [0]. It came up a couple of times before: [1] [2] [3] [4].

[0] https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git/commit/?id=b1773eac3f29cbdcdfd16e0339f1a164066e9f71
[1] https://lore.kernel.org/linux-rt-users/20200221153541.681468-1-jolsa@kernel.org/t/#u
[2] https://github.com/iovisor/bpftrace/commit/a2e3d5dbc03ceb49b776cf5602d31896158844a7
[3] https://lore.kernel.org/bpf/xunyjzy64q9b.fsf@redhat.com/t/#u
[4] https://lore.kernel.org/bpf/20230727150647.397626-1-ykaliuta@redhat.com/t/#u

> > [...]
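For reference, the upstream struct trace_entry (include/linux/trace_events.h)
is 8 bytes with no tail padding; the referenced lazy-preemption patch adds one
more byte-sized counter, which pads the struct to the 12 bytes seen in the
pahole output quoted in this thread. A simplified sketch -- the name of the
added member is assumed from the rt-devel patch, not quoted from it:

struct trace_entry {
	unsigned short	type;
	unsigned char	flags;
	unsigned char	preempt_count;
	int		pid;			/* upstream: 8 bytes total, no tail padding */
	unsigned char	preempt_lazy_count;	/* added by the rt-devel patch (assumed name);
						 * 9 bytes of data padded to 12, matching the
						 * pahole dumps quoted above */
};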
On Tue, Oct 03, 2023 at 09:38:44PM -0400, Steven Rostedt wrote:
> On Mon, 2 Oct 2023 15:52:42 +0200
> Artem Savkov <asavkov@redhat.com> wrote:
>
> > linux-rt-devel tree contains a patch that adds an extra member to struct
> > trace_entry. This causes the offset of the args field in struct
> > trace_event_raw_sys_enter to be different from the one in struct
> > syscall_trace_enter:
>
> This patch looks like it's fixing the symptom and not the issue. No code
> should rely on the two event structures being related. That's an unwanted
> coupling that will likely cause issues down the road (like the RT patch
> you mentioned).

I agree, but I didn't see a better solution, and this was my way of starting
the conversation, thus the RFC.

> [...]
>
> So basically this is clumping together the raw_syscalls and the syscalls
> events as if they are the same. But they are not; they are created
> differently. It's basically like using one structure to get the offsets of
> another structure. That would be a bug anyplace else in the kernel. Sounds
> like it's a bug here too.
>
> I think the issue is with this code, not the tracing code.
>
> We could expose struct syscall_trace_enter and struct syscall_trace_exit if
> the offsets to those are needed.

I don't think we need the syscall_trace_* offsets; it looks like
trace_event_get_offsets() should return the offset in struct
trace_event_raw_sys_enter instead. I am still trying to figure out how all
of this works together. Maybe Alexei or Andrii have more context here.
On Wed, Oct 04, 2023 at 02:55:47PM +0200, Artem Savkov wrote:
> On Tue, Oct 03, 2023 at 09:38:44PM -0400, Steven Rostedt wrote:
> > [...]
> >
> > We could expose struct syscall_trace_enter and struct syscall_trace_exit if
> > the offsets to those are needed.
>
> I don't think we need the syscall_trace_* offsets; it looks like
> trace_event_get_offsets() should return the offset in struct
> trace_event_raw_sys_enter instead. I am still trying to figure out how all
> of this works together. Maybe Alexei or Andrii have more context here.

Turns out it is even more confusing. The tests dereference the context as
struct trace_event_raw_sys_enter, so the bpf verifier sets max_ctx_offset
based on that; perf_event_set_bpf_prog() then checks this offset against the
one in struct syscall_trace_enter; but what the bpf program really gets is a
pointer to struct syscall_tp_t from kernel/trace/trace_syscalls.c.

I don't know the history behind these decisions, but should the tests
dereference the context as struct syscall_trace_enter instead, and should
struct syscall_tp_t be changed to have syscall_nr as an int?
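To make the mismatch concrete, here is a minimal userspace sketch with the
structs reduced to their layout-relevant members (the real definitions live in
include/linux/trace_events.h and kernel/trace/trace.h; the preempt_lazy_count
name is an assumption about the rt-devel patch). With the 12-byte trace_entry,
args lands at offset 24 in the raw tracepoint struct but stays at 16 in
syscall_trace_enter, which is exactly the gap perf_event_set_bpf_prog() trips
over:

#include <stdio.h>
#include <stddef.h>

/* 12-byte trace_entry as in the rt-devel tree (upstream it is 8 bytes and
 * the two offsets printed below would both be 16). */
struct trace_entry {
	unsigned short type;
	unsigned char  flags;
	unsigned char  preempt_count;
	int            pid;
	unsigned char  preempt_lazy_count;	/* assumed extra member */
};

/* What the BPF verifier computes max_ctx_offset against. */
struct trace_event_raw_sys_enter {
	struct trace_entry ent;
	long id;		/* 8-byte aligned -> 4-byte hole after ent */
	unsigned long args[6];	/* ends up at offset 24 */
	char __data[];
};

/* What trace_event_get_offsets() describes for syscall tracepoints. */
struct syscall_trace_enter {
	struct trace_entry ent;
	int nr;			/* fills the hole -> args at offset 16 */
	unsigned long args[];
};

int main(void)
{
	printf("trace_event_raw_sys_enter.args at %zu\n",
	       offsetof(struct trace_event_raw_sys_enter, args));	/* 24 */
	printf("syscall_trace_enter.args       at %zu\n",
	       offsetof(struct syscall_trace_enter, args));		/* 16 */
	return 0;
}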
On Thu, Oct 12, 2023 at 6:43 AM Steven Rostedt <rostedt@goodmis.org> wrote:
>
> On Thu, 12 Oct 2023 13:45:50 +0200
> Artem Savkov <asavkov@redhat.com> wrote:
>
> > linux-rt-devel tree contains a patch (b1773eac3f29c ("sched: Add support
> > for lazy preemption")) that adds an extra member to struct trace_entry.
> > This causes the offset of the args field in struct trace_event_raw_sys_enter
> > to be different from the one in struct syscall_trace_enter:
> >
> > [...]
> >
> > This, in turn, causes perf_event_set_bpf_prog() to fail while running the
> > bpf test_profiler testcase, because max_ctx_offset is calculated based on
> > the former struct, while off is based on the latter:
> >
> > 10488          if (is_tracepoint || is_syscall_tp) {
> > 10489                  int off = trace_event_get_offsets(event->tp_event);
> > 10490
> > 10491                  if (prog->aux->max_ctx_offset > off)
> > 10492                          return -EACCES;
> > 10493          }
> >
> > What the bpf program is actually getting is a pointer to struct
> > syscall_tp_t, defined in kernel/trace/trace_syscalls.c. This patch fixes
> > the problem by aligning struct syscall_tp_t with struct
> > syscall_trace_(enter|exit) and changing the tests to use these structs
> > to dereference the context.
> >
> > Signed-off-by: Artem Savkov <asavkov@redhat.com>

I think these changes make sense regardless; can you please resend the patch
without the RFC tag so that our CI can run tests for it?

> Thanks for doing a proper fix.
>
> Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>

But looking at [0] and briefly reading some of the discussions you, Steven,
had, I'm just wondering if it would be best to avoid increasing struct
trace_entry altogether? It seems like preempt_count is actually a 4-bit field
in the trace context, so it doesn't seem like we really need to allocate an
entire byte for both preempt_count and preempt_lazy_count. Why can't we just
combine them and not waste 8 extra bytes for each trace event in the ring
buffer?

[0] https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git/commit/?id=b1773eac3f29cbdcdfd16e0339f1a164066e9f71

> -- Steve
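A rough sketch of the packing Andrii is suggesting, assuming both counters can
indeed be capped at 4 bits; the helper and macro names here are hypothetical
and not taken from any tree:

/* Hypothetical packing: keep trace_entry at 8 bytes by storing both counters
 * in the existing preempt_count byte, 4 bits each. */
#define TRACE_PREEMPT_COUNT_MASK	0x0f
#define TRACE_PREEMPT_LAZY_SHIFT	4

static inline unsigned char trace_pack_preempt_counts(unsigned int preempt_count,
						       unsigned int preempt_lazy_count)
{
	return (preempt_count & TRACE_PREEMPT_COUNT_MASK) |
	       ((preempt_lazy_count & TRACE_PREEMPT_COUNT_MASK) << TRACE_PREEMPT_LAZY_SHIFT);
}

static inline unsigned int trace_unpack_preempt_count(unsigned char packed)
{
	return packed & TRACE_PREEMPT_COUNT_MASK;
}

static inline unsigned int trace_unpack_preempt_lazy_count(unsigned char packed)
{
	return packed >> TRACE_PREEMPT_LAZY_SHIFT;
}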
On Thu, Oct 12, 2023 at 04:32:51PM -0700, Andrii Nakryiko wrote:
> On Thu, Oct 12, 2023 at 6:43 AM Steven Rostedt <rostedt@goodmis.org> wrote:
> >
> > [...]
>
> I think these changes make sense regardless; can you please resend the patch
> without the RFC tag so that our CI can run tests for it?

Ok, didn't know it was set up like that.

> > Thanks for doing a proper fix.
> >
> > Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
>
> But looking at [0] and briefly reading some of the discussions you, Steven,
> had, I'm just wondering if it would be best to avoid increasing struct
> trace_entry altogether? It seems like preempt_count is actually a 4-bit field
> in the trace context, so it doesn't seem like we really need to allocate an
> entire byte for both preempt_count and preempt_lazy_count. Why can't we just
> combine them and not waste 8 extra bytes for each trace event in the ring
> buffer?
>
> [0] https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git/commit/?id=b1773eac3f29cbdcdfd16e0339f1a164066e9f71

I agree that avoiding an increase in struct trace_entry size would be very
desirable, but I have no knowledge of whether the rt developers had reasons
to do it like this.

Nevertheless, I think the issue with the verifier running against the wrong
struct still needs to be addressed.
On Fri, 13 Oct 2023 08:01:34 +0200
Artem Savkov <asavkov@redhat.com> wrote:

> > But looking at [0] and briefly reading some of the discussions you, Steven,
> > had, I'm just wondering if it would be best to avoid increasing struct
> > trace_entry altogether? It seems like preempt_count is actually a 4-bit
> > field in the trace context, so it doesn't seem like we really need to
> > allocate an entire byte for both preempt_count and preempt_lazy_count. Why
> > can't we just combine them and not waste 8 extra bytes for each trace event
> > in the ring buffer?
> >
> > [0] https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git/commit/?id=b1773eac3f29cbdcdfd16e0339f1a164066e9f71
>
> I agree that avoiding an increase in struct trace_entry size would be very
> desirable, but I have no knowledge of whether the rt developers had reasons
> to do it like this.
>
> Nevertheless, I think the issue with the verifier running against the wrong
> struct still needs to be addressed.

Correct. My Ack is based on the current way things are done upstream. It was
just that linux-rt showed the issue, where the code was not as robust as it
should have been. To me this was a correctness issue, not an issue that had
to do with how things are done in linux-rt.

As for the changes in linux-rt, they are not upstream yet. I'll have my
comments on that code when that happens.

-- Steve
On Fri, Oct 13, 2023 at 7:00 AM Steven Rostedt <rostedt@goodmis.org> wrote:
>
> On Fri, 13 Oct 2023 08:01:34 +0200
> Artem Savkov <asavkov@redhat.com> wrote:
>
> > [...]
> >
> > Nevertheless, I think the issue with the verifier running against the wrong
> > struct still needs to be addressed.
>
> Correct. My Ack is based on the current way things are done upstream. It was
> just that linux-rt showed the issue, where the code was not as robust as it
> should have been. To me this was a correctness issue, not an issue that had
> to do with how things are done in linux-rt.

I think we should at least add some BUILD_BUG_ON() that validates that the
offsets in syscall_tp_t match the ones in syscall_trace_enter and
syscall_trace_exit, to fail more loudly if there is any mismatch in the
future. WDYT?

> As for the changes in linux-rt, they are not upstream yet. I'll have my
> comments on that code when that happens.

Ah, ok, cool. I'd appreciate you cc'ing bpf@vger.kernel.org in that
discussion, thank you!

> -- Steve
On Fri, 13 Oct 2023 12:43:18 -0700
Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:

> > Correct. My Ack is based on the current way things are done upstream. It was
> > just that linux-rt showed the issue, where the code was not as robust as it
> > should have been. To me this was a correctness issue, not an issue that had
> > to do with how things are done in linux-rt.
>
> I think we should at least add some BUILD_BUG_ON() that validates that the
> offsets in syscall_tp_t match the ones in syscall_trace_enter and
> syscall_trace_exit, to fail more loudly if there is any mismatch in the
> future. WDYT?

If you want to, feel free to send a patch.

> > As for the changes in linux-rt, they are not upstream yet. I'll have my
> > comments on that code when that happens.
>
> Ah, ok, cool. I'd appreciate you cc'ing bpf@vger.kernel.org in that
> discussion, thank you!

If I remember ;-)

-- Steve
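A minimal sketch of the compile-time checks Andrii proposes, not taken from
any posted patch. It assumes the statements sit somewhere the local struct
syscall_tp_t definition in kernel/trace/trace_syscalls.c is in scope (it is
defined inside the perf BPF call helpers), and that its members are named
syscall_nr and args as discussed earlier in the thread:

	/* Illustrative: fail the build if syscall_tp_t ever drifts from the
	 * syscall_trace_enter layout that perf_event_set_bpf_prog() validates
	 * max_ctx_offset against. */
	BUILD_BUG_ON(offsetof(struct syscall_tp_t, syscall_nr) !=
		     offsetof(struct syscall_trace_enter, nr));
	BUILD_BUG_ON(offsetof(struct syscall_tp_t, args) !=
		     offsetof(struct syscall_trace_enter, args));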
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 77debe53f07cf..cd1d24df85364 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -135,13 +135,13 @@ enum trace_type {
  */
 struct syscall_trace_enter {
 	struct trace_entry ent;
-	int nr;
+	long nr;
 	unsigned long args[];
 };
 
 struct syscall_trace_exit {
 	struct trace_entry ent;
-	int nr;
+	long nr;
 	long ret;
 };
 
diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index de753403cdafb..c26939119f2e4 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -101,7 +101,7 @@ find_syscall_meta(unsigned long syscall)
 	return NULL;
 }
 
-static struct syscall_metadata *syscall_nr_to_meta(int nr)
+static struct syscall_metadata *syscall_nr_to_meta(long nr)
 {
 	if (IS_ENABLED(CONFIG_HAVE_SPARSE_SYSCALL_NR))
 		return xa_load(&syscalls_metadata_sparse, (unsigned long)nr);
@@ -132,7 +132,8 @@ print_syscall_enter(struct trace_iterator *iter, int flags,
 	struct trace_entry *ent = iter->ent;
 	struct syscall_trace_enter *trace;
 	struct syscall_metadata *entry;
-	int i, syscall;
+	int i;
+	long syscall;
 
 	trace = (typeof(trace))ent;
 	syscall = trace->nr;
@@ -177,7 +178,7 @@ print_syscall_exit(struct trace_iterator *iter, int flags,
 	struct trace_seq *s = &iter->seq;
 	struct trace_entry *ent = iter->ent;
 	struct syscall_trace_exit *trace;
-	int syscall;
+	long syscall;
 	struct syscall_metadata *entry;
 
 	trace = (typeof(trace))ent;
linux-rt-devel tree contains a patch that adds an extra member to struct
trace_entry. This causes the offset of the args field in struct
trace_event_raw_sys_enter to be different from the one in struct
syscall_trace_enter:

struct trace_event_raw_sys_enter {
	struct trace_entry ent;          /*     0    12 */

	/* XXX last struct has 3 bytes of padding */
	/* XXX 4 bytes hole, try to pack */

	long int           id;           /*    16     8 */
	long unsigned int  args[6];      /*    24    48 */
	/* --- cacheline 1 boundary (64 bytes) was 8 bytes ago --- */
	char               __data[];     /*    72     0 */

	/* size: 72, cachelines: 2, members: 4 */
	/* sum members: 68, holes: 1, sum holes: 4 */
	/* paddings: 1, sum paddings: 3 */
	/* last cacheline: 8 bytes */
};

struct syscall_trace_enter {
	struct trace_entry ent;          /*     0    12 */

	/* XXX last struct has 3 bytes of padding */

	int                nr;           /*    12     4 */
	long unsigned int  args[];       /*    16     0 */

	/* size: 16, cachelines: 1, members: 3 */
	/* paddings: 1, sum paddings: 3 */
	/* last cacheline: 16 bytes */
};

This, in turn, causes perf_event_set_bpf_prog() to fail while running the bpf
test_profiler testcase, because max_ctx_offset is calculated based on the
former struct, while off is based on the latter:

10488         if (is_tracepoint || is_syscall_tp) {
10489                 int off = trace_event_get_offsets(event->tp_event);
10490
10491                 if (prog->aux->max_ctx_offset > off)
10492                         return -EACCES;
10493         }

This patch changes the type of the nr member in the syscall_trace_* structs
to long so that the "args" offset is equal to that in struct
trace_event_raw_sys_enter.

Signed-off-by: Artem Savkov <asavkov@redhat.com>
---
 kernel/trace/trace.h          | 4 ++--
 kernel/trace/trace_syscalls.c | 7 ++++---
 2 files changed, 6 insertions(+), 5 deletions(-)
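Continuing the earlier layout sketch: with nr widened to long, the compiler
leaves the same 4-byte hole after the 12-byte trace_entry, nr takes the
8-byte-aligned slot at 16, and args lands at 24 in both structs. A compact
check under the same assumptions (simplified structs, assumed
preempt_lazy_count member):

#include <stddef.h>

struct trace_entry {		/* 12-byte rt-devel layout, as sketched earlier */
	unsigned short type;
	unsigned char flags, preempt_count;
	int pid;
	unsigned char preempt_lazy_count;	/* assumed extra member */
};

/* With the RFC applied, nr is long: it sits at offset 16 and args at 24,
 * the same offset trace_event_raw_sys_enter reports above. */
struct syscall_trace_enter {
	struct trace_entry ent;
	long nr;
	unsigned long args[];
};

_Static_assert(offsetof(struct syscall_trace_enter, args) == 24,
	       "args offset now matches trace_event_raw_sys_enter");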