From patchwork Fri Jan 23 14:22:29 2015
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 43643
From: Daniel Thompson <daniel.thompson@linaro.org>
To: Thomas Gleixner
Cc: Daniel Thompson, Jason Cooper, Russell King, Will Deacon,
    Catalin Marinas, Marc Zyngier, Stephen Boyd, John Stultz,
    Steven Rostedt, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, patches@linaro.org,
    linaro-kernel@lists.linaro.org, Sumit Semwal, Dirk Behme,
    Daniel Drake, Dmitry Pervushin, Tim Sander
Subject: [PATCH 3.19-rc2 v15 5/8] printk: Simple implementation for NMI backtracing
Date: Fri, 23 Jan 2015 14:22:29 +0000
Message-Id: <1422022952-31552-6-git-send-email-daniel.thompson@linaro.org>
X-Mailer: git-send-email 1.9.3
In-Reply-To: <1422022952-31552-1-git-send-email-daniel.thompson@linaro.org>
References: <1422022952-31552-1-git-send-email-daniel.thompson@linaro.org>

Currently there is quite a pile of code sitting in
arch/x86/kernel/apic/hw_nmi.c to support safe all-cpu backtracing from
NMI. The code is inaccessible to backtrace implementations for other
architectures, which is a shame because they would probably like to be
safe too.

Copy this code into printk. We'll port the x86 NMI backtrace to it in a
later patch.

Incidentally, technically I think it might be safe to call
prepare_nmi_printk() from NMI, provided care were taken to honour the
return code. complete_nmi_printk() cannot be called from NMI but could
be scheduled using irq_work_queue(). However, honouring the return code
means it is sometimes impossible to get the message out, so I'd say
using this code in such a way should probably attract sympathy and/or
derision rather than admiration.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Steven Rostedt
---
 arch/Kconfig           |   3 ++
 include/linux/printk.h |  22 +++++
 kernel/printk/printk.c | 122 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 147 insertions(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index 05d7a8a458d5..50c9412a77d0 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -309,6 +309,9 @@ config ARCH_WANT_OLD_COMPAT_IPC
 	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
 	bool
 
+config ARCH_WANT_NMI_PRINTK
+	bool
+
 config HAVE_ARCH_SECCOMP_FILTER
 	bool
 	help
diff --git a/include/linux/printk.h b/include/linux/printk.h
index c8f170324e64..188fdc2c1efd 100644
--- a/include/linux/printk.h
+++ b/include/linux/printk.h
@@ -219,6 +219,28 @@ static inline void show_regs_print_info(const char *log_lvl)
 }
 #endif
 
+#ifdef CONFIG_ARCH_WANT_NMI_PRINTK
+extern __printf(1, 0) int nmi_vprintk(const char *fmt, va_list args);
+
+struct cpumask;
+extern int prepare_nmi_printk(struct cpumask *cpus);
+extern void complete_nmi_printk(struct cpumask *cpus);
+
+/*
+ * Replace printk to write into the NMI seq.
+ *
+ * To avoid include hell this is a macro rather than an inline function
+ * (printk_func is not declared in this header file).
+ */
+#define this_cpu_begin_nmi_printk() ({ \
+	printk_func_t __orig = this_cpu_read(printk_func); \
+	this_cpu_write(printk_func, nmi_vprintk); \
+	__orig; \
+})
+#define this_cpu_end_nmi_printk(fn) this_cpu_write(printk_func, fn)
+
+#endif
+
 extern asmlinkage void dump_stack(void) __cold;
 
 #ifndef pr_fmt
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 02d6b6d28796..774119e27e0b 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -1805,6 +1805,127 @@ asmlinkage int printk_emit(int facility, int level,
 }
 EXPORT_SYMBOL(printk_emit);
 
+#ifdef CONFIG_ARCH_WANT_NMI_PRINTK
+
+#define NMI_BUF_SIZE		4096
+
+struct nmi_seq_buf {
+	unsigned char		buffer[NMI_BUF_SIZE];
+	struct seq_buf		seq;
+};
+
+/* Safe printing in NMI context */
+static DEFINE_PER_CPU(struct nmi_seq_buf, nmi_print_seq);
+
+/* "in progress" flag of NMI printing */
+static unsigned long nmi_print_flag;
+
+/*
+ * It is not safe to call printk() directly from NMI handlers.
+ * It may be fine if the NMI detected a lock up and we have no choice
+ * but to do so, but doing an NMI on all other CPUs to get a back trace
+ * can be done with a sysrq-l. We don't want that to lock up, which
+ * can happen if the NMI interrupts a printk in progress.
+ *
+ * Instead, we redirect the vprintk() to this nmi_vprintk() that writes
+ * the content into a per cpu seq_buf buffer. Then when the NMIs are
+ * all done, we can safely dump the contents of the seq_buf to a printk()
+ * from a non NMI context.
+ *
+ * This is not a generic printk() implementation and must be used with
+ * great care. In particular there is a static limit on the quantity of
+ * data that may be emitted during NMI, only one client can be active at
+ * one time (arbitrated by the return value of prepare_nmi_printk()), and
+ * it is required that something at task or interrupt context be scheduled
+ * to issue the output.
+ */
+int nmi_vprintk(const char *fmt, va_list args)
+{
+	struct nmi_seq_buf *s = this_cpu_ptr(&nmi_print_seq);
+	unsigned int len = seq_buf_used(&s->seq);
+
+	seq_buf_vprintf(&s->seq, fmt, args);
+	return seq_buf_used(&s->seq) - len;
+}
+EXPORT_SYMBOL_GPL(nmi_vprintk);
+
+/*
+ * Check for concurrent usage and set up per_cpu seq_buf buffers that the NMIs
+ * running on the other CPUs will write to. Provides the mask of CPUs it is
+ * safe to write from (i.e. a copy of the online mask).
+ */
+int prepare_nmi_printk(struct cpumask *cpus)
+{
+	struct nmi_seq_buf *s;
+	int cpu;
+
+	if (test_and_set_bit(0, &nmi_print_flag)) {
+		/*
+		 * If something is already using the NMI print facility we
+		 * can't allow a second one...
+		 */
+		return -EBUSY;
+	}
+
+	cpumask_copy(cpus, cpu_online_mask);
+
+	for_each_cpu(cpu, cpus) {
+		s = &per_cpu(nmi_print_seq, cpu);
+		seq_buf_init(&s->seq, s->buffer, NMI_BUF_SIZE);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(prepare_nmi_printk);
+
+static void print_seq_line(struct nmi_seq_buf *s, int start, int end)
+{
+	const char *buf = s->buffer + start;
+
+	printk("%.*s", (end - start) + 1, buf);
+}
+
+void complete_nmi_printk(struct cpumask *cpus)
+{
+	struct nmi_seq_buf *s;
+	int len;
+	int cpu;
+	int i;
+
+	/*
+	 * Now that all the NMIs have triggered, we can dump out their
+	 * back traces safely to the console.
+	 */
+	for_each_cpu(cpu, cpus) {
+		int last_i = 0;
+
+		s = &per_cpu(nmi_print_seq, cpu);
+
+		len = seq_buf_used(&s->seq);
+		if (!len)
+			continue;
+
+		/* Print line by line. */
+		for (i = 0; i < len; i++) {
+			if (s->buffer[i] == '\n') {
+				print_seq_line(s, last_i, i);
+				last_i = i + 1;
+			}
+		}
+		/* Check if there was a partial line. */
+		if (last_i < len) {
+			print_seq_line(s, last_i, len - 1);
+			pr_cont("\n");
+		}
+	}
+
+	clear_bit(0, &nmi_print_flag);
+	smp_mb__after_atomic();
+}
+EXPORT_SYMBOL_GPL(complete_nmi_printk);
+
+#endif /* CONFIG_ARCH_WANT_NMI_PRINTK */
+
 int vprintk_default(const char *fmt, va_list args)
 {
 	int r;
@@ -1829,6 +1950,7 @@ EXPORT_SYMBOL_GPL(vprintk_default);
  */
 DEFINE_PER_CPU(printk_func_t, printk_func) = vprintk_default;
 
+
 /**
  * printk - print a kernel message
  * @fmt: format string
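
As an aside (not part of the patch itself), below is a minimal sketch of
the intended calling pattern: prepare_nmi_printk() claims the buffers from
task context, each NMI handler diverts printk() with
this_cpu_begin_nmi_printk()/this_cpu_end_nmi_printk(), and
complete_nmi_printk() replays the buffers afterwards. The
raise_backtrace_nmi() hook, the function names and the 10 second timeout
are hypothetical stand-ins, loosely modelled on the existing x86 code.

/* Sketch only: hypothetical architecture glue for the new printk API. */
#include <linux/cpumask.h>
#include <linux/delay.h>
#include <linux/percpu.h>
#include <linux/printk.h>
#include <linux/sched.h>
#include <linux/smp.h>

static struct cpumask pending_mask;	/* CPUs still writing their trace */
static struct cpumask print_mask;	/* CPUs set up by prepare_nmi_printk() */

void trigger_all_cpu_backtrace_sketch(void)
{
	int i;

	/* Claim the per-cpu seq_buf buffers; only one user at a time. */
	if (prepare_nmi_printk(&print_mask))
		return;

	cpumask_copy(&pending_mask, &print_mask);
	raise_backtrace_nmi(&pending_mask);	/* hypothetical arch hook */

	/* Give the other CPUs up to 10 seconds to write their traces. */
	for (i = 0; i < 10 * 1000; i++) {
		if (cpumask_empty(&pending_mask))
			break;
		mdelay(1);
	}

	/* Back in task context: replay each CPU's buffer through printk(). */
	complete_nmi_printk(&print_mask);
}

/* Runs on each target CPU from its NMI handler. */
void backtrace_nmi_handler_sketch(struct pt_regs *regs)
{
	printk_func_t orig;
	int cpu = smp_processor_id();

	if (!cpumask_test_cpu(cpu, &pending_mask))
		return;

	/* Divert this CPU's printk() output into its seq_buf... */
	orig = this_cpu_begin_nmi_printk();
	pr_warn("NMI backtrace for cpu %d\n", cpu);
	show_regs(regs);
	/* ...and restore the original printk() behaviour. */
	this_cpu_end_nmi_printk(orig);

	cpumask_clear_cpu(cpu, &pending_mask);
}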