From patchwork Tue Sep 9 14:15:10 2014
X-Patchwork-Submitter: Daniel Thompson <daniel.thompson@linaro.org>
X-Patchwork-Id: 37114
From: Daniel Thompson <daniel.thompson@linaro.org>
To: Russell King
Cc: Russell King, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, patches@linaro.org,
	linaro-kernel@lists.linaro.org, John Stultz, Thomas Gleixner,
	Sumit Semwal, Daniel Thompson
Subject: [PATCH v4 5/6] ARM: add basic support for on-demand backtrace of other CPUs
Date: Tue, 9 Sep 2014 15:15:10 +0100
Message-Id: <1410272111-30516-6-git-send-email-daniel.thompson@linaro.org>
X-Mailer: git-send-email 1.9.3
In-Reply-To: <1410272111-30516-1-git-send-email-daniel.thompson@linaro.org>
References: <1410190115-32604-1-git-send-email-daniel.thompson@linaro.org>
 <1410272111-30516-1-git-send-email-daniel.thompson@linaro.org>

From: Russell King

Add basic infrastructure for triggering a backtrace of other CPUs
via an IPI, preferably at FIQ level. It is intended that this shall
be used for cases where we have detected that something has already
failed in the kernel.

Signed-off-by: Russell King
Signed-off-by: Daniel Thompson
---
 arch/arm/include/asm/irq.h |  5 ++++
 arch/arm/kernel/smp.c      | 62 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 67 insertions(+)

diff --git a/arch/arm/include/asm/irq.h b/arch/arm/include/asm/irq.h
index 53c15de..be1d07d 100644
--- a/arch/arm/include/asm/irq.h
+++ b/arch/arm/include/asm/irq.h
@@ -35,6 +35,11 @@ extern void (*handle_arch_irq)(struct pt_regs *);
 extern void set_handle_irq(void (*handle_irq)(struct pt_regs *));
 #endif
 
+#ifdef CONFIG_SMP
+extern void arch_trigger_all_cpu_backtrace(bool);
+#define arch_trigger_all_cpu_backtrace(x) arch_trigger_all_cpu_backtrace(x)
+#endif
+
 #endif
 
 #endif
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 9388a3d..94959f9 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -72,8 +72,12 @@ enum ipi_msg_type {
 	IPI_CPU_STOP,
 	IPI_IRQ_WORK,
 	IPI_COMPLETION,
+	IPI_CPU_BACKTRACE,
 };
 
+/* For reliability, we're prepared to waste bits here. */
+static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;
+
 static DECLARE_COMPLETION(cpu_running);
 
 static struct smp_operations smp_ops;
@@ -539,6 +543,21 @@ static void ipi_cpu_stop(unsigned int cpu)
 		cpu_relax();
 }
 
+static void ipi_cpu_backtrace(struct pt_regs *regs)
+{
+	int cpu = smp_processor_id();
+
+	if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
+		static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
+
+		arch_spin_lock(&lock);
+		printk(KERN_WARNING "FIQ backtrace for cpu %d\n", cpu);
+		show_regs(regs);
+		arch_spin_unlock(&lock);
+		cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));
+	}
+}
+
 static DEFINE_PER_CPU(struct completion *, cpu_completion);
 
 int register_ipi_completion(struct completion *completion, int cpu)
@@ -618,6 +637,12 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
 		irq_exit();
 		break;
 
+	case IPI_CPU_BACKTRACE:
+		irq_enter();
+		ipi_cpu_backtrace(regs);
+		irq_exit();
+		break;
+
 	default:
 		printk(KERN_CRIT "CPU%u: Unknown IPI message 0x%x\n",
 		       cpu, ipinr);
@@ -712,3 +737,40 @@ static int __init register_cpufreq_notifier(void)
 
 core_initcall(register_cpufreq_notifier);
 #endif
+
+void arch_trigger_all_cpu_backtrace(bool include_self)
+{
+	static unsigned long backtrace_flag;
+	int i, cpu = get_cpu();
+
+	if (test_and_set_bit(0, &backtrace_flag)) {
+		/*
+		 * If there is already a trigger_all_cpu_backtrace() in progress
+		 * (backtrace_flag == 1), don't output double cpu dump infos.
+		 */
+		put_cpu();
+		return;
+	}
+
+	cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
+	if (!include_self)
+		cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));
+
+	if (!cpumask_empty(to_cpumask(backtrace_mask))) {
+		pr_info("Sending FIQ to %s CPUs:\n",
+			(include_self ? "all" : "other"));
+		smp_cross_call(to_cpumask(backtrace_mask), IPI_CPU_BACKTRACE);
+	}
+
+	/* Wait for up to 10 seconds for all CPUs to do the backtrace */
+	for (i = 0; i < 10 * 1000; i++) {
+		if (cpumask_empty(to_cpumask(backtrace_mask)))
+			break;
+
+		mdelay(1);
+	}
+
+	clear_bit(0, &backtrace_flag);
+	smp_mb__after_atomic();
+	put_cpu();
+}
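
For context, the self-referential macro added to asm/irq.h,
#define arch_trigger_all_cpu_backtrace(x) arch_trigger_all_cpu_backtrace(x),
exists so that generic code can detect the arch hook with a plain #ifdef.
The sketch below shows approximately how the generic wrappers in
include/linux/nmi.h consumed it in kernels of this era; it is an
illustration reconstructed from memory of that header, not part of this
patch.

/*
 * Approximate shape of the generic consumer side (include/linux/nmi.h,
 * roughly as it looked around v3.17). When the architecture does not
 * define arch_trigger_all_cpu_backtrace, the wrappers return false so
 * callers can fall back to dumping only the local CPU.
 */
#ifdef arch_trigger_all_cpu_backtrace
static inline bool trigger_all_cpu_backtrace(void)
{
	arch_trigger_all_cpu_backtrace(true);
	return true;
}

static inline bool trigger_allbutself_cpu_backtrace(void)
{
	arch_trigger_all_cpu_backtrace(false);
	return true;
}
#else
static inline bool trigger_all_cpu_backtrace(void)
{
	return false;
}

static inline bool trigger_allbutself_cpu_backtrace(void)
{
	return false;
}
#endif

Raising the IPI preferably at FIQ level, as the commit message says, is
intended to let CPUs that are spinning with normal interrupts masked still
respond and dump their registers, which is exactly the "something has
already failed" situation this infrastructure targets.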
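A hypothetical caller-side illustration follows; report_suspected_stall()
is invented for this note and is not part of the patch or the kernel. It
shows how a diagnostic path might use the wrappers above: ask every other
CPU for a backtrace and note when no architecture backend is available.

/*
 * Hypothetical illustration only: request backtraces from the other CPUs
 * when a stall is suspected, falling back gracefully if the architecture
 * provides no backend.
 */
#include <linux/nmi.h>
#include <linux/printk.h>

static void report_suspected_stall(void)
{
	pr_warn("suspected stall, requesting backtraces from other CPUs\n");

	if (!trigger_allbutself_cpu_backtrace())
		pr_warn("no arch support for remote CPU backtraces\n");
}

An in-tree consumer of this interface at the time was, for instance, the
sysrq 'l' handler ("show backtrace of all active CPUs"), which falls back
to a workqueue-based dump when trigger_all_cpu_backtrace() returns false.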