From patchwork Tue Aug 30 08:07:28 2016
X-Patchwork-Submitter: Chunyan Zhang
X-Patchwork-Id: 74954
From: Chunyan Zhang <zhang.chunyan@linaro.org>
To: rostedt@goodmis.org, mathieu.poirier@linaro.org,
 alexander.shishkin@linux.intel.com, mingo@redhat.com
Cc: arnd@arndb.de, mike.leach@arm.com, tor@ti.com, philippe.langlais@st.com,
 nicolas.guion@st.com, felipe.balbi@linux.intel.com, zhang.lyra@gmail.com,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCHV5 1/3] tracing: add a possibility of exporting function
 trace to other places instead of ring buffer only
Date: Tue, 30 Aug 2016 16:07:28 +0800
Message-Id: <1472544450-9915-2-git-send-email-zhang.chunyan@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1472544450-9915-1-git-send-email-zhang.chunyan@linaro.org>
References: <1472544450-9915-1-git-send-email-zhang.chunyan@linaro.org>

Currently, function traces can be exported only to the ring buffer. This
patch adds the trace_export concept, which can process traces and export
them to a registered destination in addition to Ftrace's only current
output, the ring buffer.

With this in place, if we want function traces to be sent to a destination
other than the ring buffer, we only need to register a new trace_export and
either implement its own .commit() callback or simply use
trace_generic_commit(), which this patch also adds, and hook up a .write()
function for writing traces to the storage.

With this patch, only the function trace type (TRACE_FN) is supported.

Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>
---
(A minimal usage sketch of the new interface is appended after the diff.)

 include/linux/trace.h |  35 ++++++++++++
 kernel/trace/trace.c  | 155 +++++++++++++++++++++++++++++++++++++++++++++++++-
 kernel/trace/trace.h  |   1 +
 3 files changed, 190 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/trace.h

-- 
2.7.4

diff --git a/include/linux/trace.h b/include/linux/trace.h
new file mode 100644
index 0000000..30ded92
--- /dev/null
+++ b/include/linux/trace.h
@@ -0,0 +1,35 @@
+#ifndef _LINUX_TRACE_H
+#define _LINUX_TRACE_H
+
+#include <linux/ring_buffer.h>
+struct trace_array;
+
+#ifdef CONFIG_TRACING
+/*
+ * The trace export - an export of Ftrace. The trace_export can process
+ * traces and export them to a registered destination as an addition to
+ * the current only output of Ftrace - i.e. the ring buffer.
+ *
+ * If you want traces to be sent to some other place rather than the
+ * ring buffer only, you just need to register a new trace_export and
+ * implement its own .commit() callback, or just directly use
+ * 'trace_generic_commit()' and hook up its own .write() function
+ * for writing traces to the storage.
+ *
+ * next   - pointer to the next trace_export
+ * commit - commit the traces to the destination
+ * write  - copy traces which have been dealt with by ->commit() to
+ *          the destination
+ */
+struct trace_export {
+	struct trace_export __rcu	*next;
+	void (*commit)(struct trace_array *, struct ring_buffer_event *);
+	void (*write)(const char *, unsigned int);
+};
+
+int register_ftrace_export(struct trace_export *export);
+int unregister_ftrace_export(struct trace_export *export);
+
+#endif	/* CONFIG_TRACING */
+
+#endif	/* _LINUX_TRACE_H */
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index dade4c9..3163fa6 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -40,6 +40,7 @@
 #include <linux/poll.h>
 #include <linux/nmi.h>
 #include <linux/fs.h>
+#include <linux/trace.h>
 #include <linux/sched/rt.h>
 
 #include "trace.h"
@@ -2128,6 +2129,155 @@ void trace_buffer_unlock_commit_regs(struct trace_array *tr,
 	ftrace_trace_userstack(buffer, flags, pc);
 }
 
+static DEFINE_STATIC_KEY_FALSE(ftrace_exports_enabled);
+
+static void ftrace_exports_enable(void)
+{
+	static_branch_enable(&ftrace_exports_enabled);
+}
+
+static void ftrace_exports_disable(void)
+{
+	static_branch_disable(&ftrace_exports_enabled);
+}
+
+static size_t trace_size[] = {
+	[TRACE_FN]		= sizeof(struct ftrace_entry),
+	[TRACE_CTX]		= sizeof(struct ctx_switch_entry),
+	[TRACE_WAKE]		= sizeof(struct ctx_switch_entry),
+	[TRACE_STACK]		= sizeof(struct stack_entry),
+	[TRACE_PRINT]		= sizeof(struct print_entry),
+	[TRACE_BPRINT]		= sizeof(struct bprint_entry),
+	[TRACE_MMIO_RW]		= sizeof(struct trace_mmiotrace_rw),
+	[TRACE_MMIO_MAP]	= sizeof(struct trace_mmiotrace_map),
+	[TRACE_BRANCH]		= sizeof(struct trace_branch),
+	[TRACE_GRAPH_RET]	= sizeof(struct ftrace_graph_ret_entry),
+	[TRACE_GRAPH_ENT]	= sizeof(struct ftrace_graph_ent_entry),
+	[TRACE_USER_STACK]	= sizeof(struct userstack_entry),
+	[TRACE_BPUTS]		= sizeof(struct bputs_entry),
+};
+
+static void
+trace_generic_commit(struct trace_array *tr,
+		     struct ring_buffer_event *event)
+{
+	struct trace_entry *entry;
+	struct trace_export *export = tr->export;
+	unsigned int size = 0;
+
+	entry = ring_buffer_event_data(event);
+
+	size = trace_size[entry->type];
+	if (!size)
+		return;
+
+	if (export && export->write)
+		export->write((char *)entry, size);
+}
+
+static DEFINE_MUTEX(ftrace_export_lock);
+
+static struct trace_export __rcu *ftrace_exports_list __read_mostly;
+
+static inline void
+ftrace_exports(struct trace_array *tr, struct ring_buffer_event *event)
+{
+	struct trace_export *export;
+
+	preempt_disable_notrace();
+
+	for (export = rcu_dereference_raw_notrace(ftrace_exports_list);
+	     export && export->commit;
+	     export = rcu_dereference_raw_notrace(export->next)) {
+		tr->export = export;
+		export->commit(tr, event);
+	}
+
+	preempt_enable_notrace();
+}
+
+static inline void
+add_trace_export(struct trace_export **list, struct trace_export *export)
+{
+	rcu_assign_pointer(export->next, *list);
+	/*
+	 * We are entering export into the list but another
+	 * CPU might be walking that list. We need to make sure
+	 * the export->next pointer is valid before another CPU sees
+	 * the export pointer included into the list.
+	 */
+	rcu_assign_pointer(*list, export);
+}
+
+static inline int
+rm_trace_export(struct trace_export **list, struct trace_export *export)
+{
+	struct trace_export **p;
+
+	for (p = list; *p != NULL; p = &(*p)->next)
+		if (*p == export)
+			break;
+
+	if (*p != export)
+		return -1;
+
+	rcu_assign_pointer(*p, (*p)->next);
+
+	return 0;
+}
+
+static inline void
+add_ftrace_export(struct trace_export **list, struct trace_export *export)
+{
+	if (*list == NULL)
+		ftrace_exports_enable();
+
+	add_trace_export(list, export);
+}
+
+static inline int
+rm_ftrace_export(struct trace_export **list, struct trace_export *export)
+{
+	int ret;
+
+	ret = rm_trace_export(list, export);
+	if (*list == NULL)
+		ftrace_exports_disable();
+
+	return ret;
+}
+
+int register_ftrace_export(struct trace_export *export)
+{
+	if (WARN_ON_ONCE(!export->write))
+		return -1;
+
+	mutex_lock(&ftrace_export_lock);
+
+	export->commit = trace_generic_commit;
+
+	add_ftrace_export(&ftrace_exports_list, export);
+
+	mutex_unlock(&ftrace_export_lock);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(register_ftrace_export);
+
+int unregister_ftrace_export(struct trace_export *export)
+{
+	int ret;
+
+	mutex_lock(&ftrace_export_lock);
+
+	ret = rm_ftrace_export(&ftrace_exports_list, export);
+
+	mutex_unlock(&ftrace_export_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(unregister_ftrace_export);
+
 void
 trace_function(struct trace_array *tr,
 	       unsigned long ip, unsigned long parent_ip, unsigned long flags,
@@ -2146,8 +2296,11 @@ trace_function(struct trace_array *tr,
 	entry->ip = ip;
 	entry->parent_ip = parent_ip;
 
-	if (!call_filter_check_discard(call, entry, buffer, event))
+	if (!call_filter_check_discard(call, entry, buffer, event)) {
+		if (static_branch_unlikely(&ftrace_exports_enabled))
+			ftrace_exports(tr, event);
 		__buffer_unlock_commit(buffer, event);
+	}
 }
 
 #ifdef CONFIG_STACKTRACE
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index f783df4..26a3088 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -260,6 +260,7 @@ struct trace_array {
 	/* function tracing enabled */
 	int			function_enabled;
 #endif
+	struct trace_export	*export;
 };
 
 enum {
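
For illustration, here is a minimal, hypothetical consumer of the interface
added by this patch. Only struct trace_export, its .write() callback, and
register_ftrace_export()/unregister_ftrace_export() come from the patch
itself; the module name, the byte counter and the message below are made-up
placeholders, not part of the submission.

/*
 * Hypothetical exporter sketch (not part of this patch): register a
 * trace_export whose .write() callback receives each function-trace
 * entry as a raw byte buffer. register_ftrace_export() hooks up
 * .commit = trace_generic_commit for us, so only .write is needed.
 * A real exporter would copy the bytes to its destination; this
 * sketch just counts them.
 */
#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/trace.h>

static atomic64_t example_bytes_exported = ATOMIC64_INIT(0);

/* Reached from trace_function() via trace_generic_commit(). */
static void example_export_write(const char *buf, unsigned int len)
{
	atomic64_add(len, &example_bytes_exported);
}

static struct trace_export example_export = {
	.write = example_export_write,
};

static int __init example_export_init(void)
{
	return register_ftrace_export(&example_export);
}

static void __exit example_export_exit(void)
{
	unregister_ftrace_export(&example_export);
	pr_info("example_export: %lld bytes of function trace seen\n",
		(long long)atomic64_read(&example_bytes_exported));
}

module_init(example_export_init);
module_exit(example_export_exit);
MODULE_LICENSE("GPL");

Note that ftrace_exports() above invokes .commit()/.write() with preemption
disabled, straight from the function-tracing path, so the callback has to be
fast and must not sleep.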