From patchwork Fri Sep 5 15:33:16 2014
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 36864
From: Daniel Thompson <daniel.thompson@linaro.org>
To: Russell King
Cc: Daniel Thompson, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, patches@linaro.org,
 linaro-kernel@lists.linaro.org, John Stultz, Thomas Gleixner,
 Sumit Semwal, Nicolas Pitre, Catalin Marinas
Subject: [PATCH v2 3/5] arm: fiq: Replace default FIQ handler
Date: Fri, 5 Sep 2014 16:33:16 +0100
Message-Id: <1409931198-22600-4-git-send-email-daniel.thompson@linaro.org>
X-Mailer: git-send-email 1.9.3
In-Reply-To: <1409931198-22600-1-git-send-email-daniel.thompson@linaro.org>
References: <1409846620-14542-1-git-send-email-daniel.thompson@linaro.org>
 <1409931198-22600-1-git-send-email-daniel.thompson@linaro.org>

This patch introduces a new default FIQ handler that is structured in a
similar way to the existing ARM exception handlers and results in the
FIQ being handled by C code running on the SVC stack. Note, however,
that code run from the FIQ handler is subject to severe limitations
with respect to locking, making normal interaction with the kernel
impossible. This default handler allows concepts that on x86 would be
handled using NMIs to be realized on ARM.

Credit:
    This patch is a near complete re-write of a patch originally
    provided by Anton Vorontsov. Today only a couple of small fragments
    survive; however, without Anton's work to build upon, this patch
    would not exist.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Russell King
Cc: Nicolas Pitre
Cc: Catalin Marinas
---
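For background: even with this patch applied, the FIQ remains claimable
by platform code, because set_fiq_handler() copies private code directly
over the FIQ vector and thereby bypasses handle_fiq_as_nmi() entirely.
The fragment below is a minimal sketch of that override path using the
existing claim_fiq()/set_fiq_handler() interface from <asm/fiq.h>. The
my_fiq_* names are hypothetical, and my_fiq_start/my_fiq_end are assumed
to bound a hand-written FIQ-mode assembly routine; none of this is part
of the patch itself.

#include <linux/errno.h>
#include <linux/init.h>
#include <asm/fiq.h>

/* Bounds of a private FIQ-mode assembly handler (hypothetical). */
extern unsigned char my_fiq_start, my_fiq_end;

static int my_fiq_op(void *dev_id, int relinquish)
{
	/* Refuse to hand the FIQ back once we own it. */
	return relinquish ? -EBUSY : 0;
}

static struct fiq_handler my_fiq = {
	.name	= "my-fiq",
	.fiq_op	= my_fiq_op,
};

static int __init my_fiq_init(void)
{
	int ret;

	/* Register ownership of the FIQ with the core FIQ code. */
	ret = claim_fiq(&my_fiq);
	if (ret)
		return ret;

	/* Copy the private handler over the FIQ vector entry. */
	set_fiq_handler(&my_fiq_start, &my_fiq_end - &my_fiq_start);
	return 0;
}

Existing users such as the RiscPC floppy code follow this pattern, which
is why the handler installed by this patch is only a default.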

 arch/arm/include/asm/smp.h   |   3 +
 arch/arm/kernel/entry-armv.S | 137 +++++++++++++++++++++++++++++++++++++++----
 arch/arm/kernel/setup.c      |   8 ++-
 arch/arm/kernel/smp.c        |   4 +-
 arch/arm/kernel/traps.c      |  28 +++++++++
 5 files changed, 168 insertions(+), 12 deletions(-)

diff --git a/arch/arm/include/asm/smp.h b/arch/arm/include/asm/smp.h
index 2ec765c..0580270 100644
--- a/arch/arm/include/asm/smp.h
+++ b/arch/arm/include/asm/smp.h
@@ -18,6 +18,8 @@
 # error "<asm/smp.h> included in non-SMP build"
 #endif
 
+#define SMP_IPI_FIQ_MASK 0x0100
+
 #define raw_smp_processor_id() (current_thread_info()->cpu)
 
 struct seq_file;
@@ -85,6 +87,7 @@ extern void arch_send_call_function_single_ipi(int cpu);
 extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
 extern void arch_send_wakeup_ipi_mask(const struct cpumask *mask);
 
+extern void ipi_cpu_backtrace(struct pt_regs *regs);
 extern int register_ipi_completion(struct completion *completion, int cpu);
 
 struct smp_operations {
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 36276cd..eb37aa5 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -146,7 +146,7 @@ ENDPROC(__und_invalid)
 #define SPFIX(code...)
 #endif
 
-	.macro	svc_entry, stack_hole=0
+	.macro	svc_entry, stack_hole=0, call_trace=1
 UNWIND(.fnstart		)
 UNWIND(.save {r0 - pc}		)
 	sub	sp, sp, #(S_FRAME_SIZE + \stack_hole - 4)
@@ -183,7 +183,51 @@ ENDPROC(__und_invalid)
 	stmia	r7, {r2 - r6}
 
 #ifdef CONFIG_TRACE_IRQFLAGS
+	.if	\call_trace
 	bl	trace_hardirqs_off
+	.endif
 #endif
 	.endm
 
+@
+@ svc_exit_via_fiq - similar to svc_exit but switches to FIQ mode before exit
+@
+@ This macro acts in a similar manner to svc_exit but switches to FIQ
+@ mode to restore the final part of the register state.
+@
+@ We cannot use the normal svc_exit procedure because that would
+@ clobber spsr_svc (FIQ could be delivered during the first few instructions
+@ of vector_swi meaning its contents have not been saved anywhere).
+@
+@ Note that, unlike svc_exit, this macro also does not allow a caller
+@ supplied rpsr. This is because the FIQ exceptions are not re-entrant
+@ and the handlers cannot call into the scheduler (meaning the value
+@ on the stack remains correct).
+@
+	.macro	svc_exit_via_fiq
+#ifndef CONFIG_THUMB2_KERNEL
+	mov	r0, sp
+	ldmib	r0, {r1 - r14}	@ abort is deadly from here onward (it will
+				@ clobber state restored below)
+	msr	cpsr_c, #FIQ_MODE | PSR_I_BIT | PSR_F_BIT
+	add	r8, r0, #S_PC
+	ldr	r9, [r0, #S_PSR]
+	msr	spsr_cxsf, r9
+	ldr	r0, [r0, #S_R0]
+	ldmia	r8, {pc}^
+#else
+	add	r0, sp, #S_R2
+	ldr	lr, [sp, #S_LR]
+	ldr	sp, [sp, #S_SP]	@ abort is deadly from here onward (it will
+				@ clobber state restored below)
+	ldmia	r0, {r2 - r12}
+	mov	r1, #FIQ_MODE | PSR_I_BIT | PSR_F_BIT
+	msr	cpsr_c, r1
+	sub	r0, #S_R2
+	add	r8, r0, #S_PC
+	ldmia	r0, {r0 - r1}
+	rfeia	r8
+#endif
+	.endm
@@ -295,6 +339,15 @@ __pabt_svc:
 ENDPROC(__pabt_svc)
 
 	.align	5
+__fiq_svc:
+	svc_entry 0, 0
+	mov	r0, sp				@ struct pt_regs *regs
+	bl	handle_fiq_as_nmi
+	svc_exit_via_fiq
+ UNWIND(.fnend		)
+ENDPROC(__fiq_svc)
+
+	.align	5
 .LCcralign:
 	.word	cr_alignment
 #ifdef MULTI_DABORT
@@ -305,6 +358,46 @@ ENDPROC(__pabt_svc)
 	.word	fp_enter
 
 /*
+ * Abort mode handlers
+ */
+
+@
+@ Taking a FIQ in abort mode is similar to taking a FIQ in SVC mode
+@ and reuses the same macros. However in abort mode we must also
+@ save/restore lr_abt and spsr_abt to make nested aborts safe.
+@
+	.align 5
+__fiq_abt:
+	svc_entry 0, 0
+
+ ARM(	msr	cpsr_c, #ABT_MODE | PSR_I_BIT | PSR_F_BIT )
+ THUMB( mov	r0, #ABT_MODE | PSR_I_BIT | PSR_F_BIT )
+ THUMB( msr	cpsr_c, r0 )
+	mov	r1, lr		@ Save lr_abt
+	mrs	r2, spsr	@ Save spsr_abt, abort is now safe
+ ARM(	msr	cpsr_c, #SVC_MODE | PSR_I_BIT | PSR_F_BIT )
+ THUMB( mov	r0, #SVC_MODE | PSR_I_BIT | PSR_F_BIT )
+ THUMB( msr	cpsr_c, r0 )
+	push	{r1 - r2}
+
+	sub	r0, sp, #8	@ struct pt_regs *regs
+	bl	handle_fiq_as_nmi
+
+	pop	{r1 - r2}
+ ARM(	msr	cpsr_c, #ABT_MODE | PSR_I_BIT | PSR_F_BIT )
+ THUMB( mov	r0, #ABT_MODE | PSR_I_BIT | PSR_F_BIT )
+ THUMB( msr	cpsr_c, r0 )
+	mov	lr, r1		@ Restore lr_abt, abort is unsafe
+	msr	spsr_cxsf, r2	@ Restore spsr_abt
+ ARM(	msr	cpsr_c, #SVC_MODE | PSR_I_BIT | PSR_F_BIT )
+ THUMB( mov	r0, #SVC_MODE | PSR_I_BIT | PSR_F_BIT )
+ THUMB( msr	cpsr_c, r0 )
+
+	svc_exit_via_fiq
+ UNWIND(.fnend		)
+ENDPROC(__fiq_abt)
+
+/*
  * User mode handlers
  *
  * EABI note: sp_svc is always 64-bit aligned here, so should S_FRAME_SIZE
@@ -683,6 +776,18 @@ ENTRY(ret_from_exception)
 ENDPROC(__pabt_usr)
 ENDPROC(ret_from_exception)
 
+	.align	5
+__fiq_usr:
+	usr_entry
+	kuser_cmpxchg_check
+	mov	r0, sp			@ struct pt_regs *regs
+	bl	handle_fiq_as_nmi
+	get_thread_info tsk
+	mov	why, #0
+	b	ret_to_user_from_irq
+ UNWIND(.fnend		)
+ENDPROC(__fiq_usr)
+
 /*
  * Register switch for ARMv3 and ARMv4 processors
  * r0 = previous task_struct, r1 = previous thread_info, r2 = next thread_info
@@ -1118,17 +1223,29 @@ vector_addrexcptn:
 	b	vector_addrexcptn
 
 /*=============================================================================
- * Undefined FIQs
+ * FIQ "NMI" handler
  *-----------------------------------------------------------------------------
- * Enter in FIQ mode, spsr = ANY CPSR, lr = ANY PC
- * MUST PRESERVE SVC SPSR, but need to switch to SVC mode to show our msg.
- * Basically to switch modes, we *HAVE* to clobber one register... brain
- * damage alert! I don't think that we can execute any code in here in any
- * other mode than FIQ... Ok you can switch to another mode, but you can't
- * get out of that mode without clobbering one register.
+ * Handle a FIQ using the SVC stack, allowing FIQ to act like NMI on x86
+ * systems.
  */
-vector_fiq:
-	subs	pc, lr, #4
+	vector_stub	fiq, FIQ_MODE, 4
+
+	.long	__fiq_usr			@  0  (USR_26 / USR_32)
+	.long	__fiq_svc			@  1  (FIQ_26 / FIQ_32)
+	.long	__fiq_svc			@  2  (IRQ_26 / IRQ_32)
+	.long	__fiq_svc			@  3  (SVC_26 / SVC_32)
+	.long	__fiq_svc			@  4
+	.long	__fiq_svc			@  5
+	.long	__fiq_svc			@  6
+	.long	__fiq_abt			@  7
+	.long	__fiq_svc			@  8
+	.long	__fiq_svc			@  9
+	.long	__fiq_svc			@  a
+	.long	__fiq_svc			@  b
+	.long	__fiq_svc			@  c
+	.long	__fiq_svc			@  d
+	.long	__fiq_svc			@  e
+	.long	__fiq_svc			@  f
 
 	.globl	vector_fiq_offset
 	.equ	vector_fiq_offset, vector_fiq
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 84db893d..c031063 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -133,6 +133,7 @@ struct stack {
 	u32 irq[3];
 	u32 abt[3];
 	u32 und[3];
+	u32 fiq[3];
 } ____cacheline_aligned;
 
 #ifndef CONFIG_CPU_V7M
@@ -470,7 +471,10 @@ void notrace cpu_init(void)
 	"msr	cpsr_c, %5\n\t"
 	"add	r14, %0, %6\n\t"
 	"mov	sp, r14\n\t"
-	"msr	cpsr_c, %7"
+	"msr	cpsr_c, %7\n\t"
+	"add	r14, %0, %8\n\t"
+	"mov	sp, r14\n\t"
+	"msr	cpsr_c, %9"
 	    :
 	    : "r" (stk),
 	      PLC (PSR_F_BIT | PSR_I_BIT | IRQ_MODE),
@@ -479,6 +483,8 @@ void notrace cpu_init(void)
 	      "I" (offsetof(struct stack, abt[0])),
 	      PLC (PSR_F_BIT | PSR_I_BIT | UND_MODE),
 	      "I" (offsetof(struct stack, und[0])),
+	      PLC (PSR_F_BIT | PSR_I_BIT | FIQ_MODE),
+	      "I" (offsetof(struct stack, fiq[0])),
 	      PLC (PSR_F_BIT | PSR_I_BIT | SVC_MODE)
 	    : "r14");
 #endif
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 94959f9..7a79d11 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -543,7 +543,7 @@ static void ipi_cpu_stop(unsigned int cpu)
 		cpu_relax();
 }
 
-static void ipi_cpu_backtrace(struct pt_regs *regs)
+void ipi_cpu_backtrace(struct pt_regs *regs)
 {
 	int cpu = smp_processor_id();
 
@@ -584,6 +584,8 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
 	unsigned int cpu = smp_processor_id();
 	struct pt_regs *old_regs = set_irq_regs(regs);
 
+	BUILD_BUG_ON(SMP_IPI_FIQ_MASK != BIT(IPI_CPU_BACKTRACE));
+
 	if ((unsigned)ipinr < NR_IPI) {
 		trace_ipi_entry(ipi_types[ipinr]);
 		__inc_irq_stat(cpu, ipi_irqs[ipinr]);
diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
index a447dcc..f8189c2 100644
--- a/arch/arm/kernel/traps.c
+++ b/arch/arm/kernel/traps.c
@@ -25,6 +25,7 @@
 #include <linux/delay.h>
 #include <linux/init.h>
 #include <linux/sched.h>
+#include <linux/irq.h>
 
 #include <linux/atomic.h>
 #include <asm/cacheflush.h>
@@ -461,6 +462,33 @@ die_sig:
 }
 
 /*
+ * Handle FIQ similarly to NMI on x86 systems.
+ *
+ * The runtime environment for NMIs is extremely restrictive (NMIs can
+ * pre-empt critical sections, meaning almost all locking is forbidden),
+ * so this default FIQ handling must only be used in circumstances where
+ * non-maskability improves robustness, such as watchdog or debug logic.
+ *
+ * This handler is not appropriate for general purpose use in drivers or
+ * platform code and can be overridden using set_fiq_handler.
+ */
+asmlinkage void __exception_irq_entry handle_fiq_as_nmi(struct pt_regs *regs)
+{
+	struct pt_regs *old_regs = set_irq_regs(regs);
+
+	nmi_enter();
+
+#ifdef CONFIG_SMP
+	ipi_cpu_backtrace(regs);
+#endif
+
+	nmi_exit();
+
+	set_irq_regs(old_regs);
+}
+
+/*
  * bad_mode handles the impossible case in the vectors. If you see one of
  * these, then it's extremely serious, and could mean you have buggy hardware.
  * It never returns, and never tries to sync. We hope that we can at least