From patchwork Wed Aug 24 11:17:29 2016
X-Patchwork-Submitter: Binoy Jayan
X-Patchwork-Id: 74586
From: Binoy Jayan <binoy.jayan@linaro.org>
To: Arnd Bergmann, linaro-kernel@lists.linaro.org
Cc: Daniel Wagner, Carsten Emde, linux-kernel@vger.kernel.org,
    "Steven Rostedt (Red Hat)", Binoy Jayan
Subject: [PATCH v2 2/3] tracing: Add trace_irqsoff tracepoints
Date: Wed, 24 Aug 2016 16:47:29 +0530
Message-Id: <1472037450-8907-3-git-send-email-binoy.jayan@linaro.org>
In-Reply-To: <1472037450-8907-1-git-send-email-binoy.jayan@linaro.org>
References: <1472037450-8907-1-git-send-email-binoy.jayan@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

This work is based on work by Daniel Wagner. A few tracepoints are added
at the end of the critical sections. With the hist trigger in place,
histogram plots can be generated with a per-cpu breakdown of the events
captured. It is based on the Linux kernel's event infrastructure.

The following filters may be used:

'hist:key=latency.log2:val=hitcount:sort=latency'
'hist:key=cpu,latency:val=hitcount:sort=latency if cpu==1'
'hist:key=common_pid.execname'

This captures only the latencies introduced by disabled irqs and
preemption. Additional per-process data has to be captured to calculate
the effective latencies introduced for individual processes.

Initial work - latency.patch [1]

[1] https://git.kernel.org/cgit/linux/kernel/git/rt/linux-stable-rt.git/commit/?h=v3.14-rt-rebase&id=56d50cc34943bbba12b8c5942ee1ae3b29f73acb

Signed-off-by: Binoy Jayan <binoy.jayan@linaro.org>
---
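For reference, the filters above are meant to be applied through the
hist trigger interface of the event subsystem (CONFIG_HIST_TRIGGERS).
A usage sketch, assuming tracefs is mounted at /sys/kernel/debug/tracing
and the new events appear under events/latency/ as defined by
TRACE_SYSTEM latency:

  # attach a log2 latency histogram to the preempt-off tracepoint
  echo 'hist:key=latency.log2:val=hitcount:sort=latency' > \
       /sys/kernel/debug/tracing/events/latency/latency_preempt/trigger

  # per-cpu breakdown of the irqs-off latencies, restricted to cpu 1
  echo 'hist:key=cpu,latency:val=hitcount:sort=latency if cpu==1' > \
       /sys/kernel/debug/tracing/events/latency/latency_irqs/trigger

  # read back the accumulated histogram
  cat /sys/kernel/debug/tracing/events/latency/latency_preempt/hist
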
 include/trace/events/latency.h | 43 ++++++++++++++++++++++++++++++++++++++++++
 kernel/trace/trace_irqsoff.c   | 42 ++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 84 insertions(+), 1 deletion(-)
 create mode 100644 include/trace/events/latency.h

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

diff --git a/include/trace/events/latency.h b/include/trace/events/latency.h
new file mode 100644
index 0000000..77896c7
--- /dev/null
+++ b/include/trace/events/latency.h
@@ -0,0 +1,43 @@
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM latency
+
+#if !defined(_TRACE_HIST_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_HIST_H
+
+#include <linux/tracepoint.h>
+
+DECLARE_EVENT_CLASS(latency_template,
+	TP_PROTO(int cpu, cycles_t latency),
+
+	TP_ARGS(cpu, latency),
+
+	TP_STRUCT__entry(
+		__field(int,		cpu)
+		__field(cycles_t,	latency)
+	),
+
+	TP_fast_assign(
+		__entry->cpu = cpu;
+		__entry->latency = latency;
+	),
+
+	TP_printk("cpu=%d, latency=%lu", __entry->cpu,
+		  (unsigned long) __entry->latency)
+);
+
+DEFINE_EVENT(latency_template, latency_irqs,
+	    TP_PROTO(int cpu, cycles_t latency),
+	    TP_ARGS(cpu, latency));
+
+DEFINE_EVENT(latency_template, latency_preempt,
+	    TP_PROTO(int cpu, cycles_t latency),
+	    TP_ARGS(cpu, latency));
+
+DEFINE_EVENT(latency_template, latency_critical_timings,
+	    TP_PROTO(int cpu, cycles_t latency),
+	    TP_ARGS(cpu, latency));
+
+#endif /* _TRACE_HIST_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 03cdff8..3fcf446 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -13,14 +13,22 @@
 #include <linux/uaccess.h>
 #include <linux/module.h>
 #include <linux/ftrace.h>
+#include <linux/percpu-defs.h>
+
+#include <trace/events/latency.h>
 
 #include "trace.h"
 
+#define CREATE_TRACE_POINTS
+#include <trace/events/latency.h>
+
 static struct trace_array	*irqsoff_trace __read_mostly;
 static int			tracer_enabled __read_mostly;
 
 static DEFINE_PER_CPU(int, tracing_cpu);
-
+static DEFINE_PER_CPU(cycle_t __maybe_unused, ts_irqs);
+static DEFINE_PER_CPU(cycle_t __maybe_unused, ts_preempt);
+static DEFINE_PER_CPU(cycle_t __maybe_unused, ts_critical_timings);
 static DEFINE_RAW_SPINLOCK(max_trace_lock);
 
 enum {
@@ -419,9 +427,17 @@ stop_critical_timing(unsigned long ip, unsigned long parent_ip)
 	atomic_dec(&data->disabled);
 }
 
+static inline cycle_t get_delta(cycle_t __percpu *ts)
+{
+	return (cycle_t) trace_clock_local() - this_cpu_read(*ts);
+}
 /* start and stop critical timings used to for stoppage (in idle) */
 void start_critical_timings(void)
 {
+	if (trace_latency_critical_timings_enabled())
+		this_cpu_write(ts_critical_timings,
+			(cycle_t) trace_clock_local());
+
 	if (preempt_trace() || irq_trace())
 		start_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
 }
@@ -431,6 +447,13 @@ void stop_critical_timings(void)
 {
 	if (preempt_trace() || irq_trace())
 		stop_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
+
+	if (trace_latency_critical_timings_enabled()) {
+		trace_latency_critical_timings(
+				raw_smp_processor_id(),
+				get_delta(&ts_critical_timings));
+	}
+
 }
 EXPORT_SYMBOL_GPL(stop_critical_timings);
 
@@ -438,6 +461,10 @@ EXPORT_SYMBOL_GPL(stop_critical_timings);
 #ifdef CONFIG_PROVE_LOCKING
 void time_hardirqs_on(unsigned long a0, unsigned long a1)
 {
+	if (trace_latency_irqs_enabled()) {
+		trace_latency_irqs(raw_smp_processor_id(),
+				get_delta(&ts_irqs));
+	}
 	if (!preempt_trace() && irq_trace())
 		stop_critical_timing(a0, a1);
 }
@@ -446,6 +473,10 @@ void time_hardirqs_off(unsigned long a0, unsigned long a1)
 {
 	if (!preempt_trace() && irq_trace())
 		start_critical_timing(a0, a1);
+
+	if (trace_latency_irqs_enabled()) {
+		this_cpu_write(ts_irqs, (cycle_t) trace_clock_local());
+	}
 }
 
 #else /* !CONFIG_PROVE_LOCKING */
@@ -503,6 +534,11 @@ EXPORT_SYMBOL(trace_hardirqs_off_caller);
 #ifdef CONFIG_PREEMPT_TRACER
 void trace_preempt_on(unsigned long a0, unsigned long a1)
 {
+	if (trace_latency_preempt_enabled()) {
+		trace_latency_preempt(raw_smp_processor_id(),
+				get_delta(&ts_preempt));
+	}
+
 	if (preempt_trace() && !irq_trace())
 		stop_critical_timing(a0, a1);
 }
@@ -511,6 +547,10 @@ void trace_preempt_off(unsigned long a0, unsigned long a1)
 {
 	if (preempt_trace() && !irq_trace())
 		start_critical_timing(a0, a1);
+
+	if (trace_latency_preempt_enabled()) {
+		this_cpu_write(ts_preempt, (cycle_t) trace_clock_local());
+	}
 }
 #endif /* CONFIG_PREEMPT_TRACER */