
[RFC,2/2] bpf: Implement bpf_perf_event_sample_enable/disable() helpers

Message ID 1444640563-159175-3-git-send-email-xiakaixu@huawei.com
State New

Commit Message

Kaixu Xia Oct. 12, 2015, 9:02 a.m. UTC
The functions bpf_perf_event_sample_enable/disable() can set the
flag sample_disable to enable/disable outputting trace data on samples.

Signed-off-by: Kaixu Xia <xiakaixu@huawei.com>
---
 include/linux/bpf.h      |  2 ++
 include/uapi/linux/bpf.h |  2 ++
 kernel/bpf/verifier.c    |  4 +++-
 kernel/trace/bpf_trace.c | 34 ++++++++++++++++++++++++++++++++++
 4 files changed, 41 insertions(+), 1 deletion(-)

Comments

Alexei Starovoitov Oct. 12, 2015, 7:29 p.m. UTC | #1
On 10/12/15 2:02 AM, Kaixu Xia wrote:
> +extern const struct bpf_func_proto bpf_perf_event_sample_enable_proto;
> +extern const struct bpf_func_proto bpf_perf_event_sample_disable_proto;

externs are unnecessary. Just make them static.
Also I prefer a single helper that takes a flag, so we can extend it
instead of adding a func_id for every little operation.
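
Roughly something like this (untested sketch; the helper name and the
BPF_SAMPLE_CTL_* flags are made up here, only map->perf_sample_disable
is from your patch):

  /* hypothetical flag bits for a single sample-control helper */
  #define BPF_SAMPLE_CTL_DISABLE	(1ULL << 0)
  #define BPF_SAMPLE_CTL_MASK	BPF_SAMPLE_CTL_DISABLE

  static u64 bpf_perf_event_sample_control(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
  {
          struct bpf_map *map = (struct bpf_map *) (unsigned long) r1;
          u64 flags = r2;

          /* reject unknown bits, so new operations can become new
           * flag bits later instead of new func_ids */
          if (flags & ~BPF_SAMPLE_CTL_MASK)
                  return -EINVAL;

          atomic_set(&map->perf_sample_disable,
                     (flags & BPF_SAMPLE_CTL_DISABLE) ? 1 : 0);
          return 0;
  }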

To avoid conflicts if you touch kernel/bpf/* or bpf.h, please always
base your patches on net-next.

> +	atomic_set(&map->perf_sample_disable, 0);

a global flag per map is a no-go.
events are independent and should be treated as such.

Please squash these two patches, since they're part of one logical
feature. Splitting them like this only makes review harder.

Wang Nan Oct. 13, 2015, 3:27 a.m. UTC | #2
On 2015/10/13 3:29, Alexei Starovoitov wrote:
> On 10/12/15 2:02 AM, Kaixu Xia wrote:
>> +extern const struct bpf_func_proto bpf_perf_event_sample_enable_proto;
>> +extern const struct bpf_func_proto bpf_perf_event_sample_disable_proto;
>
> externs are unnecessary. Just make them static.
> Also I prefer a single helper that takes a flag, so we can extend it
> instead of adding a func_id for every little operation.
>
> To avoid conflicts if you touch kernel/bpf/* or bpf.h, please always
> base your patches on net-next.
>
> > +    atomic_set(&map->perf_sample_disable, 0);
>
> a global flag per map is a no-go.
> events are independent and should be treated as such.
>

Then how do we avoid racing? For example, when one core is disabling all
events in a map, another core may be enabling all of them. This race may
cause several perf events in a map to dump samples while other events
don't. To avoid such racing I think some locking must be introduced, and
then the cost is even higher.

The reason why we introduce an atomic pointer is that each operation
should control a set of events, not one event, due to the per-cpu manner
of perf events.

Thank you.

> Please squash these two patches, since they're part of one logical
> feature. Splitting them like this only makes review harder.


Alexei Starovoitov Oct. 13, 2015, 3:39 a.m. UTC | #3
On 10/12/15 8:27 PM, Wangnan (F) wrote:
> Then how do we avoid racing? For example, when one core is disabling all
> events in a map, another core may be enabling all of them. This race may
> cause several perf events in a map to dump samples while other events
> don't. To avoid such racing I think some locking must be introduced, and
> then the cost is even higher.
>
> The reason why we introduce an atomic pointer is that each operation
> should control a set of events, not one event, due to the per-cpu manner
> of perf events.

why is 'set disable' needed?
the example given in the cover letter shows the use case where you want
to receive samples only within the sys_write() syscall.
The example makes sense, but sys_write() is running on this cpu, so just
disabling it on the current one is enough.
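
a per-event variant could take an index the same way
bpf_perf_event_read(&map, index) does. Hypothetical sketch from the bpf
program side (my_map being the PERF_EVENT_ARRAY; the two-argument
disable helper does not exist in this patchset):

  /* disable sampling only for the current cpu's slot in the array */
  u32 key = bpf_get_smp_processor_id();
  bpf_perf_event_sample_disable(&my_map, key);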

Wang Nan Oct. 13, 2015, 3:51 a.m. UTC | #4
On 2015/10/13 11:39, Alexei Starovoitov wrote:
> On 10/12/15 8:27 PM, Wangnan (F) wrote:
>> Then how do we avoid racing? For example, when one core is disabling all
>> events in a map, another core may be enabling all of them. This race may
>> cause several perf events in a map to dump samples while other events
>> don't. To avoid such racing I think some locking must be introduced, and
>> then the cost is even higher.
>>
>> The reason why we introduce an atomic pointer is that each operation
>> should control a set of events, not one event, due to the per-cpu manner
>> of perf events.
>
> why is 'set disable' needed?
> the example given in the cover letter shows the use case where you want
> to receive samples only within the sys_write() syscall.
> The example makes sense, but sys_write() is running on this cpu, so just
> disabling it on the current one is enough.
>

Our real use case is controlling system-wide sampling. For example,
we need to sample all CPUs when a smartphone starts refreshing its display.
We need all CPUs because in an Android system plenty of threads get
involved in this behavior. We can't achieve this by controlling sampling
on only one CPU. This is the reason we need 'set enable' and 'set disable'.

Thank you.

Alexei Starovoitov Oct. 13, 2015, 4:16 a.m. UTC | #5
On 10/12/15 8:51 PM, Wangnan (F) wrote:
>> why is 'set disable' needed?
>> the example given in the cover letter shows the use case where you want
>> to receive samples only within the sys_write() syscall.
>> The example makes sense, but sys_write() is running on this cpu, so just
>> disabling it on the current one is enough.
>>
>
> Our real use case is controlling system-wide sampling. For example,
> we need to sample all CPUs when a smartphone starts refreshing its display.
> We need all CPUs because in an Android system plenty of threads get
> involved in this behavior. We can't achieve this by controlling sampling
> on only one CPU. This is the reason we need 'set enable' and 'set disable'.

ok, but that use case may have a different enable/disable pattern.
In the sys_write example ultra-fast enable/disable is a must-have, since
the whole syscall is fast and the overhead should be minimal.
but for display refresh? we're talking milliseconds, no?
Can you just ioctl() it from user space?
If the cost of enable/disable is high or the time range between toggling is
long, then doing it from the bpf program doesn't make sense. Instead
the program can do bpf_perf_event_output() to send a notification to
user space that the condition is met, and user space can ioctl() the events.
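
the user-space side is just the existing perf ioctls (sketch; perf_fd[]
stands for whatever perf_event_open() returned per cpu):

  /* on a notification from the bpf program via the ring buffer: */
  for (int cpu = 0; cpu < ncpus; cpu++)
          ioctl(perf_fd[cpu], PERF_EVENT_IOC_ENABLE, 0);

  /* ... and PERF_EVENT_IOC_DISABLE to turn them back off */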

Wang Nan Oct. 13, 2015, 4:34 a.m. UTC | #6
On 2015/10/13 12:16, Alexei Starovoitov wrote:
> On 10/12/15 8:51 PM, Wangnan (F) wrote:
>>> why is 'set disable' needed?
>>> the example given in the cover letter shows the use case where you want
>>> to receive samples only within the sys_write() syscall.
>>> The example makes sense, but sys_write() is running on this cpu, so just
>>> disabling it on the current one is enough.
>>>
>>
>> Our real use case is controlling system-wide sampling. For example,
>> we need to sample all CPUs when a smartphone starts refreshing its display.
>> We need all CPUs because in an Android system plenty of threads get
>> involved in this behavior. We can't achieve this by controlling sampling
>> on only one CPU. This is the reason we need 'set enable' and 'set disable'.
>
> ok, but that use case may have a different enable/disable pattern.
> In the sys_write example ultra-fast enable/disable is a must-have, since
> the whole syscall is fast and the overhead should be minimal.
> but for display refresh? we're talking milliseconds, no?
> Can you just ioctl() it from user space?
> If the cost of enable/disable is high or the time range between toggling is
> long, then doing it from the bpf program doesn't make sense. Instead
> the program can do bpf_perf_event_output() to send a notification to
> user space that the condition is met, and user space can ioctl() the events.
>

OK. I think I understand your design principle: everything inside BPF
should be as fast as possible.

Making userspace control events using ioctl() makes things harder. You know
that 'perf record' itself doesn't care too much about the events it
receives. It only copies data to perf.data, but what we want is to use
perf record simply like this:

  # perf record -e evt=cycles -e control.o/pmu=evt/ -a sleep 100

And in control.o we create uprobe points to mark the start and finish of
a frame:

  SEC("target=/a/b/c.o\nstartFrame=0x123456")
  int startFrame(void *ctx) {
    bpf_pmu_enable(pmu);
    return 1;
  }

  SEC("target=/a/b/c.o\nfinishFrame=0x234568")
  int finishFrame(void *ctx) {
    bpf_pmu_disable(pmu);
    return 1;
  }

I think it makes sense also.

I still think perf events are not necessarily independent of each other.
You know we have PERF_EVENT_IOC_SET_OUTPUT, which can make multiple events
output through one ring buffer. This way perf events are connected.

I think the 'set disable/enable' design in this patchset satisfies the
design goal that in a BPF program we only do simple and fast things. The
only inconvenience is that we add something to the map, which is ugly.
What about a similar implementation to PERF_EVENT_IOC_SET_OUTPUT: create a
new ioctl like PERF_EVENT_IOC_SET_ENABLER, let perf select an event as the
'enabler', and then BPF can still control one atomic variable to
enable/disable a set of events.
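
To make the analogy concrete (PERF_EVENT_IOC_SET_OUTPUT is existing API;
PERF_EVENT_IOC_SET_ENABLER is the hypothetical new ioctl):

  /* existing: route ev's samples into the leader's ring buffer */
  ioctl(ev_fd, PERF_EVENT_IOC_SET_OUTPUT, leader_fd);

  /* proposed: gate ev's sampling on the enabler's atomic flag, which
   * a BPF helper can then flip for the whole set at once */
  ioctl(ev_fd, PERF_EVENT_IOC_SET_ENABLER, enabler_fd);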

Thank you.

Alexei Starovoitov Oct. 13, 2015, 5:15 a.m. UTC | #7
On 10/12/15 9:34 PM, Wangnan (F) wrote:
>
>
> On 2015/10/13 12:16, Alexei Starovoitov wrote:
>> On 10/12/15 8:51 PM, Wangnan (F) wrote:
>>>> why is 'set disable' needed?
>>>> the example given in the cover letter shows the use case where you want
>>>> to receive samples only within the sys_write() syscall.
>>>> The example makes sense, but sys_write() is running on this cpu, so
>>>> just disabling it on the current one is enough.
>>>>
>>>
>>> Our real use case is controlling system-wide sampling. For example,
>>> we need to sample all CPUs when a smartphone starts refreshing its display.
>>> We need all CPUs because in an Android system plenty of threads get
>>> involved in this behavior. We can't achieve this by controlling sampling
>>> on only one CPU. This is the reason we need 'set enable' and 'set disable'.
>>
>> ok, but that use case may have a different enable/disable pattern.
>> In the sys_write example ultra-fast enable/disable is a must-have, since
>> the whole syscall is fast and the overhead should be minimal.
>> but for display refresh? we're talking milliseconds, no?
>> Can you just ioctl() it from user space?
>> If the cost of enable/disable is high or the time range between toggling is
>> long, then doing it from the bpf program doesn't make sense. Instead
>> the program can do bpf_perf_event_output() to send a notification to
>> user space that the condition is met, and user space can ioctl() the events.
>>
>
> OK. I think I understand your design principle: everything inside BPF
> should be as fast as possible.
>
> Making userspace control events using ioctl() makes things harder. You
> know that 'perf record' itself doesn't care too much about the events it
> receives. It only copies data to perf.data, but what we want is to use
> perf record simply like this:
>
>   # perf record -e evt=cycles -e control.o/pmu=evt/ -a sleep 100
>
> And in control.o we create uprobe points to mark the start and finish of
> a frame:
>
>   SEC("target=/a/b/c.o\nstartFrame=0x123456")
>   int startFrame(void *ctx) {
>     bpf_pmu_enable(pmu);
>     return 1;
>   }
>
>   SEC("target=/a/b/c.o\nfinishFrame=0x234568")
>   int finishFrame(void *ctx) {
>     bpf_pmu_disable(pmu);
>     return 1;
>   }
>
> I think it makes sense also.

yes, that looks quite useful,
but did you consider re-entrant startFrame()?
start << here sampling starts
   start
   finish << here all samples disabled?!
finish
and startFrame()/finishFrame() running on all cpus of that user app?
One cpu enters startFrame() while another cpu is doing finishFrame():
what should the behavior be? is sampling still enabled on all cpus? or off?
Either case doesn't seem to work with a simple enable/disable.
A few emails back in this thread, I mentioned inc/dec of a flag
to solve that.
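
i.e. something like this (rough sketch; perf_sample_enable_cnt is an
invented name and the counter could live elsewhere than the map):

  static u64 bpf_perf_event_sample_enable(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
  {
          struct bpf_map *map = (struct bpf_map *) (unsigned long) r1;

          /* nested or concurrent start/finish pairs simply stack up */
          atomic_inc(&map->perf_sample_enable_cnt);
          return 0;
  }

the disable side does atomic_dec(), and the sample output path checks
atomic_read(&map->perf_sample_enable_cnt) > 0 instead of a 0/1 flag.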

> What about a similar implementation to PERF_EVENT_IOC_SET_OUTPUT:
> create a new ioctl like PERF_EVENT_IOC_SET_ENABLER, let perf select an
> event as the 'enabler', and then BPF can still control one atomic
> variable to enable/disable a set of events.

you lost me on that last sentence. How would this 'enabler' work?
Also I'm still missing what's wrong with perf doing ioctl() on
the events on all cpus manually when the bpf program tells it to do so.
Is it speed you're concerned about or the extra work in perf?

Wang Nan Oct. 13, 2015, 6:57 a.m. UTC | #8
On 2015/10/13 13:15, Alexei Starovoitov wrote:
> On 10/12/15 9:34 PM, Wangnan (F) wrote:
>>
>>
>> On 2015/10/13 12:16, Alexei Starovoitov wrote:
>>> On 10/12/15 8:51 PM, Wangnan (F) wrote:
>>>>> why is 'set disable' needed?
>>>>> the example given in the cover letter shows the use case where you want
>>>>> to receive samples only within the sys_write() syscall.
>>>>> The example makes sense, but sys_write() is running on this cpu, so
>>>>> just disabling it on the current one is enough.
>>>>>
>>>>
>>>> Our real use case is controlling system-wide sampling. For example,
>>>> we need to sample all CPUs when a smartphone starts refreshing its
>>>> display.
>>>> We need all CPUs because in an Android system plenty of threads get
>>>> involved in this behavior. We can't achieve this by controlling sampling
>>>> on only one CPU. This is the reason we need 'set enable' and 'set disable'.
>>>
>>> ok, but that use case may have a different enable/disable pattern.
>>> In the sys_write example ultra-fast enable/disable is a must-have, since
>>> the whole syscall is fast and the overhead should be minimal.
>>> but for display refresh? we're talking milliseconds, no?
>>> Can you just ioctl() it from user space?
>>> If the cost of enable/disable is high or the time range between toggling is
>>> long, then doing it from the bpf program doesn't make sense. Instead
>>> the program can do bpf_perf_event_output() to send a notification to
>>> user space that the condition is met, and user space can ioctl() the events.
>>>
>>
>> OK. I think I understand your design principle: everything inside BPF
>> should be as fast as possible.
>>
>> Making userspace control events using ioctl() makes things harder. You
>> know that 'perf record' itself doesn't care too much about the events it
>> receives. It only copies data to perf.data, but what we want is to use
>> perf record simply like this:
>>
>>   # perf record -e evt=cycles -e control.o/pmu=evt/ -a sleep 100
>>
>> And in control.o we create uprobe points to mark the start and finish of
>> a frame:
>>
>>   SEC("target=/a/b/c.o\nstartFrame=0x123456")
>>   int startFrame(void *ctx) {
>>     bpf_pmu_enable(pmu);
>>     return 1;
>>   }
>>
>>   SEC("target=/a/b/c.o\nfinishFrame=0x234568")
>>   int finishFrame(void *ctx) {
>>     bpf_pmu_disable(pmu);
>>     return 1;
>>   }
>>
>> I think it makes sense also.
>
> yes, that looks quite useful,
> but did you consider re-entrant startFrame()?
> start << here sampling starts
>   start
>   finish << here all samples disabled?!
> finish
> and startFrame()/finishFrame() running on all cpus of that user app?
> One cpu enters startFrame() while another cpu is doing finishFrame():
> what should the behavior be? is sampling still enabled on all cpus? or off?
> Either case doesn't seem to work with a simple enable/disable.
> A few emails back in this thread, I mentioned inc/dec of a flag
> to solve that.

Correct.

>
>> What about a similar implementation to PERF_EVENT_IOC_SET_OUTPUT:
>> create a new ioctl like PERF_EVENT_IOC_SET_ENABLER, let perf select an
>> event as the 'enabler', and then BPF can still control one atomic
>> variable to enable/disable a set of events.
>
> you lost me on that last sentence. How would this 'enabler' work?

Like what we did in this patchset: add an atomic flag to perf_event, and
connect perf_events to the enabler event via PERF_EVENT_IOC_SET_ENABLER.
At runtime, check the enabler's atomic flag. This way we use one atomic
variable to control a set of perf_events. Finally, create a BPF helper
function to control that atomic variable.
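
Roughly like this (sketch; the enabler field, the sample_disable flag and
PERF_EVENT_IOC_SET_ENABLER itself are the hypothetical parts):

  /* event->enabler would be set by PERF_EVENT_IOC_SET_ENABLER;
   * events without an enabler behave as they do today */
  static bool perf_event_sample_enabled(struct perf_event *event)
  {
          if (event->enabler)
                  return !atomic_read(&event->enabler->sample_disable);
          return true;
  }

The sample output path would call this before dumping a sample, so the
BPF helper only has to flip the enabler's flag.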

> Also I'm still missing what's wrong with perf doing ioctl() on
> the events on all cpus manually when the bpf program tells it to do so.
> Is it speed you're concerned about or the extra work in perf?
>

I think both speed and extra work are concerns.

Say we use perf to enable/disable sampling, using the above example to
describe it: when the smartphone starts refreshing its display, we write
something into the ring buffer, then the display refresh starts. We have
to wait for perf to be scheduled in, parse the events it gets (perf record
doesn't do this currently), discover the trigger event, and then enable
the sampling perf events on all cpus. That makes trigger and action
asynchronous. I'm not sure how many ns or ms it needs, and I believe the
asynchrony itself introduces complexity, which I think should be avoided
unless we can explain the advantages the asynchrony brings.

But yes, a perf-based implementation can shut down the PMU completely,
which is better than the current lightweight implementation.

In summary:

  - In the next version we will use a counter-based flag instead of the
    current 0/1 switch, considering the reentrancy problem.

  - I think we both agree that we need a lightweight solution with which
    we can enable/disable sampling at function level. This lightweight
    solution can be applied to only one perf event.

  - Our disagreement is whether to introduce a heavyweight solution based
    on perf to enable/disable a group of perf events. For me, a perf-based
    solution can shut down the PMU completely, which is good. However, it
    introduces asynchrony and extra work in perf. I think we can do it in
    a much simpler, fully BPF way. The enabler solution I mentioned above
    is a candidate.

Thank you.

He Kuang Oct. 13, 2015, 10:54 a.m. UTC | #9
hi, Alexei

>> What about a similar implementation to PERF_EVENT_IOC_SET_OUTPUT:
>> create a new ioctl like PERF_EVENT_IOC_SET_ENABLER, let perf select an
>> event as the 'enabler', and then BPF can still control one atomic
>> variable to enable/disable a set of events.
>
> you lost me on that last sentence. How would this 'enabler' work?
> Also I'm still missing what's wrong with perf doing ioctl() on
> the events on all cpus manually when the bpf program tells it to do so.
> Is it speed you're concerned about or the extra work in perf?
>
>

To avoid too many wakeups, the perf ring buffer has a watermark
limit to batch events and reduce wakeups, which means the perf
userspace tool cannot receive perf events immediately.
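
For reference, these are the knobs involved, a simplified sketch of
struct perf_event_attr usage rather than the exact perf tool code
(the byte value is illustrative):

  struct perf_event_attr attr = {0};

  /* byte-based wakeup: the reader is woken only after this many bytes */
  attr.watermark = 1;
  attr.wakeup_watermark = 64 * 1024;

  /* or count-based wakeup (wakeup_events shares a union with
   * wakeup_watermark, so pick one); perf record --no-buffering
   * effectively sets wakeup_events = 1 */
  attr.watermark = 0;
  attr.wakeup_events = 1;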

Here's a simple demo example to prove it: 'sleep_exec' does some
writes and prints a timestamp every second, and a label is
printed when perf poll gets events.

   $ perf record -m 2 -e syscalls:sys_enter_write sleep_exec 1000
   userspace sleep time: 0 seconds
   userspace sleep time: 1 seconds
   userspace sleep time: 2 seconds
   userspace sleep time: 3 seconds
   perf record wakeup onetime 0
   userspace sleep time: 4 seconds
   userspace sleep time: 5 seconds
   userspace sleep time: 6 seconds
   userspace sleep time: 7 seconds
   perf record wakeup onetime 1
   userspace sleep time: 8 seconds
   perf record wakeup onetime 2
   ..

   $ perf record -m 1 -e syscalls:sys_enter_write sleep_exec 1000
   userspace sleep time: 0 seconds
   userspace sleep time: 1 seconds
   perf record wakeup onetime 0
   userspace sleep time: 2 seconds
   userspace sleep time: 3 seconds
   perf record wakeup onetime 1
   userspace sleep time: 4 seconds
   userspace sleep time: 5 seconds
   ..

By default, if no mmap_pages is specified, perf wakes up only
when the target executable finishes:

   $ perf record -e syscalls:sys_enter_write sleep_exec 5
   userspace sleep time: 0 seconds
   userspace sleep time: 1 seconds
   userspace sleep time: 2 seconds
   userspace sleep time: 3 seconds
   userspace sleep time: 4 seconds
   perf record wakeup onetime 0
   [ perf record: Woken up 1 times to write data ]
   [ perf record: Captured and wrote 0.006 MB perf.data (54 samples) ]

If we want perf to react as soon as our sample event is generated,
--no-buffering should be used, but this option has a greater
impact on performance.

   $ perf record --no-buffering -e syscalls:sys_enter_write sleep_exec 1000
   userspace sleep time: 0 seconds
   perf record wakeup onetime 0
   perf record wakeup onetime 1
   perf record wakeup onetime 2
   perf record wakeup onetime 3
   perf record wakeup onetime 4
   perf record wakeup onetime 5
   perf record wakeup onetime 6
   ..

Thank you

Wang Nan Oct. 13, 2015, 11:07 a.m. UTC | #10
On 2015/10/13 18:54, He Kuang wrote:
> hi, Alexei
>
>>> What about a similar implementation to PERF_EVENT_IOC_SET_OUTPUT:
>>> create a new ioctl like PERF_EVENT_IOC_SET_ENABLER, let perf select an
>>> event as the 'enabler', and then BPF can still control one atomic
>>> variable to enable/disable a set of events.
>>
>> you lost me on that last sentence. How would this 'enabler' work?
>> Also I'm still missing what's wrong with perf doing ioctl() on
>> the events on all cpus manually when the bpf program tells it to do so.
>> Is it speed you're concerned about or the extra work in perf?
>>
>>
>
> To avoid too many wakeups, the perf ring buffer has a watermark
> limit to batch events and reduce wakeups, which means the perf
> userspace tool cannot receive perf events immediately.
>
> Here's a simple demo example to prove it: 'sleep_exec' does some
> writes and prints a timestamp every second, and a label is
> printed when perf poll gets events.
>
>   $ perf record -m 2 -e syscalls:sys_enter_write sleep_exec 1000
>   userspace sleep time: 0 seconds
>   userspace sleep time: 1 seconds
>   userspace sleep time: 2 seconds
>   userspace sleep time: 3 seconds
>   perf record wakeup onetime 0
>   userspace sleep time: 4 seconds
>   userspace sleep time: 5 seconds
>   userspace sleep time: 6 seconds
>   userspace sleep time: 7 seconds
>   perf record wakeup onetime 1
>   userspace sleep time: 8 seconds
>   perf record wakeup onetime 2
>   ..
>
>   $ perf record -m 1 -e syscalls:sys_enter_write sleep_exec 1000
>   userspace sleep time: 0 seconds
>   userspace sleep time: 1 seconds
>   perf record wakeup onetime 0
>   userspace sleep time: 2 seconds
>   userspace sleep time: 3 seconds
>   perf record wakeup onetime 1
>   userspace sleep time: 4 seconds
>   userspace sleep time: 5 seconds
>   ..
>
> By default, if no mmap_pages is specified, perf wakes up only
> when the target executable finishes:
>
>   $ perf record -e syscalls:sys_enter_write sleep_exec 5
>   userspace sleep time: 0 seconds
>   userspace sleep time: 1 seconds
>   userspace sleep time: 2 seconds
>   userspace sleep time: 3 seconds
>   userspace sleep time: 4 seconds
>   perf record wakeup onetime 0
>   [ perf record: Woken up 1 times to write data ]
>   [ perf record: Captured and wrote 0.006 MB perf.data (54 samples) ]
>
> If we want perf to react as soon as our sample event is generated,
> --no-buffering should be used, but this option has a greater
> impact on performance.
>
>   $ perf record --no-buffering -e syscalls:sys_enter_write sleep_exec 1000
>   userspace sleep time: 0 seconds
>   perf record wakeup onetime 0
>   perf record wakeup onetime 1
>   perf record wakeup onetime 2
>   perf record wakeup onetime 3
>   perf record wakeup onetime 4
>   perf record wakeup onetime 5
>   perf record wakeup onetime 6
>   ..
>

Hi Alexei,

Based on He Kuang's test results, if we choose to use perf to control perf
events and output the trigger event through bpf_perf_event_output(), with
the default settings we have to wait several seconds until perf gets the
first trigger event if the trigger event's frequency is low. In my display
refreshing example, this causes trigger events to be lost. From the user's
view, random frames would be missed.

With --no-buffering things become faster, but --no-buffering causes perf
to be scheduled in more often than normal, which conflicts with the goal
of event disabling: reducing recording overhead as much as possible.

Thank you.

> Thank you
>


Alexei Starovoitov Oct. 14, 2015, 5:14 a.m. UTC | #11
On 10/13/15 3:54 AM, He Kuang wrote:
> If we want perf to react as soon as our sample event is generated,
> --no-buffering should be used, but this option has a greater
> impact on performance.

no_buffering doesn't have to be applied to all events, obviously.


Patch

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 25e073d..09148ff 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -192,6 +192,8 @@  extern const struct bpf_func_proto bpf_map_update_elem_proto;
 extern const struct bpf_func_proto bpf_map_delete_elem_proto;
 
 extern const struct bpf_func_proto bpf_perf_event_read_proto;
+extern const struct bpf_func_proto bpf_perf_event_sample_enable_proto;
+extern const struct bpf_func_proto bpf_perf_event_sample_disable_proto;
 extern const struct bpf_func_proto bpf_get_prandom_u32_proto;
 extern const struct bpf_func_proto bpf_get_smp_processor_id_proto;
 extern const struct bpf_func_proto bpf_tail_call_proto;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 92a48e2..5229c550 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -272,6 +272,8 @@  enum bpf_func_id {
 	BPF_FUNC_skb_get_tunnel_key,
 	BPF_FUNC_skb_set_tunnel_key,
 	BPF_FUNC_perf_event_read,	/* u64 bpf_perf_event_read(&map, index) */
+	BPF_FUNC_perf_event_sample_enable,	/* u64 bpf_perf_event_sample_enable(&map) */
+	BPF_FUNC_perf_event_sample_disable,	/* u64 bpf_perf_event_sample_disable(&map) */
 	__BPF_FUNC_MAX_ID,
 };
 
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b074b23..6428daf 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -244,6 +244,8 @@  static const struct {
 } func_limit[] = {
 	{BPF_MAP_TYPE_PROG_ARRAY, BPF_FUNC_tail_call},
 	{BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_read},
+	{BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_sample_enable},
+	{BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_sample_disable},
 };
 
 static void print_verifier_state(struct verifier_env *env)
@@ -860,7 +862,7 @@  static int check_map_func_compatibility(struct bpf_map *map, int func_id)
 		 * don't allow any other map type to be passed into
 		 * the special func;
 		 */
-		if (bool_map != bool_func)
+		if (bool_func && bool_map != bool_func)
 			return -EINVAL;
 	}
 
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 0fe96c7..abe943a 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -215,6 +215,36 @@  const struct bpf_func_proto bpf_perf_event_read_proto = {
 	.arg2_type	= ARG_ANYTHING,
 };
 
+static u64 bpf_perf_event_sample_enable(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
+{
+	struct bpf_map *map = (struct bpf_map *) (unsigned long) r1;
+
+	atomic_set(&map->perf_sample_disable, 0);
+	return 0;
+}
+
+const struct bpf_func_proto bpf_perf_event_sample_enable_proto = {
+	.func		= bpf_perf_event_sample_enable,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+};
+
+static u64 bpf_perf_event_sample_disable(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
+{
+	struct bpf_map *map = (struct bpf_map *) (unsigned long) r1;
+
+	atomic_set(&map->perf_sample_disable, 1);
+	return 0;
+}
+
+const struct bpf_func_proto bpf_perf_event_sample_disable_proto = {
+	.func		= bpf_perf_event_sample_disable,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+};
+
 static const struct bpf_func_proto *kprobe_prog_func_proto(enum bpf_func_id func_id)
 {
 	switch (func_id) {
@@ -242,6 +272,10 @@  static const struct bpf_func_proto *kprobe_prog_func_proto(enum bpf_func_id func
 		return &bpf_get_smp_processor_id_proto;
 	case BPF_FUNC_perf_event_read:
 		return &bpf_perf_event_read_proto;
+	case BPF_FUNC_perf_event_sample_enable:
+		return &bpf_perf_event_sample_enable_proto;
+	case BPF_FUNC_perf_event_sample_disable:
+		return &bpf_perf_event_sample_disable_proto;
 	default:
 		return NULL;
 	}