From patchwork Tue Sep 6 18:00:09 2011
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 3914
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: mingo@elte.hu, laijs@cn.fujitsu.com, dipankar@in.ibm.com,
    akpm@linux-foundation.org, mathieu.desnoyers@polymtl.ca,
    josh@joshtriplett.org, niv@us.ibm.com, tglx@linutronix.de,
    peterz@infradead.org, rostedt@goodmis.org, Valdis.Kletnieks@vt.edu,
    dhowells@redhat.com, eric.dumazet@gmail.com, darren@dvhart.com,
    patches@linaro.org
Subject: [PATCH tip/core/rcu 15/55] rcu: Event-trace markers for computing RCU CPU utilization
Date: Tue, 6 Sep 2011 11:00:09 -0700
Message-Id: <1315332049-2604-15-git-send-email-paulmck@linux.vnet.ibm.com>
X-Mailer: git-send-email 1.7.3.2
In-Reply-To: <20110906180015.GA2560@linux.vnet.ibm.com>
References: <20110906180015.GA2560@linux.vnet.ibm.com>

From: Paul E. McKenney

This commit adds the trace_rcu_utilization() marker, which allows
postprocessing scripts to compute RCU's CPU utilization, give or take
event-trace overhead.  Note that RCU's dyntick-idle interface is not
covered, because event tracing requires RCU protection, which is not
available in dyntick-idle mode.

Signed-off-by: Paul E. McKenney
Signed-off-by: Paul E. McKenney
---
 include/trace/events/rcu.h |   73 ++++++++++++++++++++++++++++++++------------
 kernel/rcutree.c           |   16 +++++++++-
 2 files changed, 68 insertions(+), 21 deletions(-)

diff --git a/include/trace/events/rcu.h b/include/trace/events/rcu.h
index db3f6e9..ab458eb 100644
--- a/include/trace/events/rcu.h
+++ b/include/trace/events/rcu.h
@@ -7,29 +7,58 @@
 #include

 /*
- * Tracepoint for calling rcu_do_batch, performed to start callback invocation:
+ * Tracepoint for start/end markers used for utilization calculations.
+ * By convention, the string is of the following forms:
+ *
+ * "Start " -- Mark the start of the specified activity,
+ *             such as "context switch".  Nesting is permitted.
+ * "End " -- Mark the end of the specified activity.
+ */
+TRACE_EVENT(rcu_utilization,
+
+	TP_PROTO(char *s),
+
+	TP_ARGS(s),
+
+	TP_STRUCT__entry(
+		__field(char *, s)
+	),
+
+	TP_fast_assign(
+		__entry->s = s;
+	),
+
+	TP_printk("%s", __entry->s)
+);
+
+/*
+ * Tracepoint for marking the beginning rcu_do_batch, performed to start
+ * RCU callback invocation.  The first argument is the total number of
+ * callbacks (including those that are not yet ready to be invoked),
+ * and the second argument is the current RCU-callback batch limit.
  */
 TRACE_EVENT(rcu_batch_start,

-	TP_PROTO(long callbacks_ready, int blimit),
+	TP_PROTO(long qlen, int blimit),

-	TP_ARGS(callbacks_ready, blimit),
+	TP_ARGS(qlen, blimit),

 	TP_STRUCT__entry(
-		__field( long, callbacks_ready )
-		__field( int, blimit )
+		__field(long, qlen)
+		__field(int, blimit)
 	),

 	TP_fast_assign(
-		__entry->callbacks_ready = callbacks_ready;
-		__entry->blimit = blimit;
+		__entry->qlen = qlen;
+		__entry->blimit = blimit;
 	),

-	TP_printk("CBs=%ld bl=%d", __entry->callbacks_ready, __entry->blimit)
+	TP_printk("CBs=%ld bl=%d", __entry->qlen, __entry->blimit)
 );

 /*
- * Tracepoint for the invocation of a single RCU callback
+ * Tracepoint for the invocation of a single RCU callback function.
+ * The argument is a pointer to the RCU callback itself.
  */
 TRACE_EVENT(rcu_invoke_callback,

@@ -38,20 +67,23 @@ TRACE_EVENT(rcu_invoke_callback,
 	TP_ARGS(rhp),

 	TP_STRUCT__entry(
-		__field( void *, rhp )
-		__field( void *, func )
+		__field(void *, rhp)
+		__field(void *, func)
 	),

 	TP_fast_assign(
-		__entry->rhp = rhp;
-		__entry->func = rhp->func;
+		__entry->rhp = rhp;
+		__entry->func = rhp->func;
 	),

 	TP_printk("rhp=%p func=%pf", __entry->rhp, __entry->func)
 );

 /*
- * Tracepoint for the invocation of a single RCU kfree callback
+ * Tracepoint for the invocation of a single RCU callback of the special
+ * kfree() form.  The first argument is a pointer to the RCU callback
+ * and the second argument is the offset of the callback within the
+ * enclosing RCU-protected data structure.
  */
 TRACE_EVENT(rcu_invoke_kfree_callback,

@@ -60,12 +92,12 @@ TRACE_EVENT(rcu_invoke_kfree_callback,
 	TP_ARGS(rhp, offset),

 	TP_STRUCT__entry(
-		__field(void *, rhp )
-		__field(unsigned long, offset )
+		__field(void *, rhp)
+		__field(unsigned long, offset)
 	),

 	TP_fast_assign(
-		__entry->rhp = rhp;
+		__entry->rhp = rhp;
 		__entry->offset = offset;
 	),

@@ -73,7 +105,8 @@ TRACE_EVENT(rcu_invoke_kfree_callback,
 );

 /*
- * Tracepoint for leaving rcu_do_batch, performed after callback invocation:
+ * Tracepoint for exiting rcu_do_batch after RCU callbacks have been
+ * invoked.  The first argument is the number of callbacks actually invoked.
  */
 TRACE_EVENT(rcu_batch_end,

@@ -82,11 +115,11 @@ TRACE_EVENT(rcu_batch_end,
 	TP_ARGS(callbacks_invoked),

 	TP_STRUCT__entry(
-		__field( int, callbacks_invoked )
+		__field(int, callbacks_invoked)
 	),

 	TP_fast_assign(
-		__entry->callbacks_invoked = callbacks_invoked;
+		__entry->callbacks_invoked = callbacks_invoked;
 	),

 	TP_printk("CBs-invoked=%d", __entry->callbacks_invoked)
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 45dcc20..2a9643b 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -184,8 +184,10 @@ void rcu_bh_qs(int cpu)
  */
 void rcu_note_context_switch(int cpu)
 {
+	trace_rcu_utilization("Start context switch");
 	rcu_sched_qs(cpu);
 	rcu_preempt_note_context_switch(cpu);
+	trace_rcu_utilization("End context switch");
 }
 EXPORT_SYMBOL_GPL(rcu_note_context_switch);

@@ -1275,6 +1277,7 @@ static void rcu_do_batch(struct rcu_state *rsp, struct rcu_data *rdp)
  */
 void rcu_check_callbacks(int cpu, int user)
 {
+	trace_rcu_utilization("Start scheduler-tick");
 	if (user ||
 	    (idle_cpu(cpu) && rcu_scheduler_active &&
 	     !in_softirq() && hardirq_count() <= (1 << HARDIRQ_SHIFT))) {
@@ -1308,6 +1311,7 @@ void rcu_check_callbacks(int cpu, int user)
 	rcu_preempt_check_callbacks(cpu);
 	if (rcu_pending(cpu))
 		invoke_rcu_core();
+	trace_rcu_utilization("End scheduler-tick");
 }

 #ifdef CONFIG_SMP

@@ -1369,10 +1373,14 @@ static void force_quiescent_state(struct rcu_state *rsp, int relaxed)
 	unsigned long flags;
 	struct rcu_node *rnp = rcu_get_root(rsp);

-	if (!rcu_gp_in_progress(rsp))
+	trace_rcu_utilization("Start fqs");
+	if (!rcu_gp_in_progress(rsp)) {
+		trace_rcu_utilization("End fqs");
 		return;  /* No grace period in progress, nothing to force. */
+	}
 	if (!raw_spin_trylock_irqsave(&rsp->fqslock, flags)) {
 		rsp->n_force_qs_lh++; /* Inexact, can lose counts.  Tough! */
+		trace_rcu_utilization("End fqs");
 		return;	/* Someone else is already on the job. */
 	}
 	if (relaxed && ULONG_CMP_GE(rsp->jiffies_force_qs, jiffies))
@@ -1421,11 +1429,13 @@ static void force_quiescent_state(struct rcu_state *rsp, int relaxed)
 		raw_spin_unlock(&rsp->fqslock); /* irqs remain disabled */
 		rsp->fqs_need_gp = 0;
 		rcu_start_gp(rsp, flags);  /* releases rnp->lock */
+		trace_rcu_utilization("End fqs");
 		return;
 	}
 	raw_spin_unlock(&rnp->lock);  /* irqs remain disabled */
 unlock_fqs_ret:
 	raw_spin_unlock_irqrestore(&rsp->fqslock, flags);
+	trace_rcu_utilization("End fqs");
 }

 #else /* #ifdef CONFIG_SMP */

@@ -1481,6 +1491,7 @@ __rcu_process_callbacks(struct rcu_state *rsp, struct rcu_data *rdp)
  */
 static void rcu_process_callbacks(struct softirq_action *unused)
 {
+	trace_rcu_utilization("Start RCU core");
 	__rcu_process_callbacks(&rcu_sched_state,
 				&__get_cpu_var(rcu_sched_data));
 	__rcu_process_callbacks(&rcu_bh_state, &__get_cpu_var(rcu_bh_data));
@@ -1488,6 +1499,7 @@ static void rcu_process_callbacks(struct softirq_action *unused)

 	/* If we are last CPU on way to dyntick-idle mode, accelerate it. */
 	rcu_needs_cpu_flush();
+	trace_rcu_utilization("End RCU core");
 }

 /*
@@ -1910,6 +1922,7 @@ static int __cpuinit rcu_cpu_notify(struct notifier_block *self,
 	struct rcu_data *rdp = per_cpu_ptr(rcu_state->rda, cpu);
 	struct rcu_node *rnp = rdp->mynode;

+	trace_rcu_utilization("Start CPU hotplug");
 	switch (action) {
 	case CPU_UP_PREPARE:
 	case CPU_UP_PREPARE_FROZEN:
@@ -1945,6 +1958,7 @@ static int __cpuinit rcu_cpu_notify(struct notifier_block *self,
 	default:
 		break;
 	}
+	trace_rcu_utilization("End CPU hotplug");
 	return NOTIFY_OK;
 }
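For readers wondering what the "postprocessing scripts" mentioned in the commit
message might look like: below is a minimal sketch of such a utilization pass,
not code from this patch series.  It assumes a hypothetical simplified trace
format ("<timestamp> rcu_utilization: Start <activity>" per line); real ftrace
output carries additional per-CPU, comm, and pid fields that a production
script would have to parse.  Per the tracepoint comment above, Start/End
markers may nest, so only the outermost pair contributes time.

```python
import re

# Hypothetical simplified line format; the actual ftrace format differs.
_MARK = re.compile(r"^\s*([0-9.]+)\s+rcu_utilization:\s+(Start|End)\b")

def rcu_busy_fraction(lines, total_time):
    """Return the fraction of total_time spent between rcu_utilization
    Start/End markers.  Nested markers are handled by counting depth:
    time is accumulated only across the outermost Start/End pair."""
    depth = 0
    busy = 0.0
    start = 0.0
    for line in lines:
        m = _MARK.match(line)
        if m is None:
            continue                 # not an rcu_utilization event
        t = float(m.group(1))
        if m.group(2) == "Start":
            if depth == 0:
                start = t            # outermost Start: open an interval
            depth += 1
        elif depth > 0:
            depth -= 1
            if depth == 0:
                busy += t - start    # outermost End: close the interval
    return busy / total_time
```

Note that the nesting rule matters: in the fqs-within-RCU-core case, counting
the inner "Start fqs"/"End fqs" pair separately would double-count that time.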