From patchwork Thu Apr 27 17:44:36 2017
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 98294
From: Mark Rutland
To: will.deacon@arm.com, catalin.marinas@arm.com, tglx@linutronix.de
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    bigeasy@linutronix.de, jbaron@akamai.com, mark.rutland@arm.com,
    peterz@infradead.org, rostedt@goodmis.org, suzuki.poulose@arm.com
Subject: [PATCHv2 1/2] jump_label: Provide static_key_[enable|slow_inc]_cpuslocked()
Date: Thu, 27 Apr 2017 18:44:36 +0100
Message-Id: <1493315077-19496-2-git-send-email-mark.rutland@arm.com>
In-Reply-To: <1493315077-19496-1-git-send-email-mark.rutland@arm.com>
References: <1493315077-19496-1-git-send-email-mark.rutland@arm.com>

From: Sebastian Andrzej Siewior

Provide static_key_[enable|slow_inc]_cpuslocked() variants that don't
take cpu_hotplug_lock().
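[Editorial aside, not part of the submission: a minimal, hedged sketch of how a
caller that already runs under the CPU hotplug lock would use the new variant.
The key name example_key and the wrapper function are hypothetical; only
DEFINE_STATIC_KEY_FALSE(), get_online_cpus()/put_online_cpus() and the
static_branch_enable_cpuslocked() added by this patch are relied on.]

#include <linux/cpu.h>
#include <linux/jump_label.h>

/* Hypothetical key, for illustration only. */
static DEFINE_STATIC_KEY_FALSE(example_key);

static void example_enable_key(void)
{
	get_online_cpus();
	/*
	 * The hotplug lock is already held here, so the plain
	 * static_branch_enable(), which takes it internally, must not be
	 * used; the _cpuslocked variant skips that acquisition.
	 */
	static_branch_enable_cpuslocked(&example_key);
	put_online_cpus();
}

This mirrors the update_boot_cpu_errata_workarounds() pattern introduced in the
second patch.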
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Mark Rutland
Cc: Peter Zijlstra
Cc: Sebastian Siewior
Cc: Steven Rostedt
Cc: jbaron@akamai.com
---
 include/linux/jump_label.h | 7 +++++++
 kernel/jump_label.c        | 10 ++++++++++
 2 files changed, 17 insertions(+)

-- 
1.9.1

diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index d7b17d1..c80d8b1 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -164,6 +164,7 @@ extern void arch_jump_label_transform_static(struct jump_entry *entry,
 extern void jump_label_apply_nops(struct module *mod);
 extern int static_key_count(struct static_key *key);
 extern void static_key_enable(struct static_key *key);
+extern void static_key_enable_cpuslocked(struct static_key *key);
 extern void static_key_disable(struct static_key *key);
 extern void static_key_disable_cpuslocked(struct static_key *key);
 
@@ -252,6 +253,11 @@ static inline void static_key_enable(struct static_key *key)
 	static_key_slow_inc(key);
 }
 
+static inline void static_key_enable_cpuslocked(struct static_key *key)
+{
+	static_key_enable(key);
+}
+
 static inline void static_key_disable(struct static_key *key)
 {
 	int count = static_key_count(key);
@@ -429,6 +435,7 @@ struct static_key_false {
  */
 
 #define static_branch_enable(x)			static_key_enable(&(x)->key)
+#define static_branch_enable_cpuslocked(x)	static_key_enable_cpuslocked(&(x)->key)
 #define static_branch_disable(x)		static_key_disable(&(x)->key)
 #define static_branch_disable_cpuslocked(x)	static_key_disable_cpuslocked(&(x)->key)

diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index d71124e..6343f4c 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -90,6 +90,16 @@ void static_key_enable(struct static_key *key)
 }
 EXPORT_SYMBOL_GPL(static_key_enable);
 
+void static_key_enable_cpuslocked(struct static_key *key)
+{
+	int count = static_key_count(key);
+
+	WARN_ON_ONCE(count < 0 || count > 1);
+
+	if (!count)
+		static_key_slow_inc_cpuslocked(key);
+}
+
 void static_key_disable(struct static_key *key)
 {
 	int count = static_key_count(key);

From patchwork Thu Apr 27 17:44:37 2017
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 98293
From: Mark Rutland
To: will.deacon@arm.com, catalin.marinas@arm.com, tglx@linutronix.de
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    bigeasy@linutronix.de, jbaron@akamai.com, mark.rutland@arm.com,
    peterz@infradead.org, rostedt@goodmis.org, suzuki.poulose@arm.com
Subject: [PATCHv2 2/2] arm64: cpufeature: use static_branch_enable_cpuslocked()
Date: Thu, 27 Apr 2017 18:44:37 +0100
Message-Id: <1493315077-19496-3-git-send-email-mark.rutland@arm.com>
In-Reply-To: <1493315077-19496-1-git-send-email-mark.rutland@arm.com>
References: <1493315077-19496-1-git-send-email-mark.rutland@arm.com>

Recently, the hotplug locking was converted to use a percpu rwsem.
Unlike the existing {get,put}_online_cpus() logic, this can't nest.
Unfortunately, in arm64's secondary boot path we can end up nesting via
static_branch_enable() in cpus_set_cap() when we detect an erratum.
This leads to a stream of messages as below, where the secondary
attempts to schedule before it has been fully onlined. As the CPU
orchestrating the onlining holds the rwsem, this hangs the system.
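[Editorial aside, not part of the submission: a simplified sketch of the
pre-patch cpus_set_cap() (compare the '-' lines in the cpufeature.h hunk
further down), annotated with where the problematic lock acquisition happens;
the early-return form and the comments are editorial additions.]

static inline void cpus_set_cap(unsigned int num)
{
	if (num >= ARM64_NCAPS)
		return;

	__set_bit(num, cpu_hwcaps);
	/*
	 * static_branch_enable() ends up in static_key_enable(), which
	 * now calls get_online_cpus(), i.e. percpu_down_read() on the
	 * hotplug rwsem. On the secondary boot path that rwsem is held
	 * by the CPU doing the onlining, and the secondary is still
	 * atomic, so it first splats ("scheduling while atomic") and
	 * then blocks, hanging the boot.
	 */
	static_branch_enable(&cpu_hwcap_keys[num]);
}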
[ 0.250334] BUG: scheduling while atomic: swapper/1/0/0x00000002
[ 0.250337] Modules linked in:
[ 0.250346] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.11.0-rc7-next-20170424 #2
[ 0.250349] Hardware name: ARM Juno development board (r1) (DT)
[ 0.250353] Call trace:
[ 0.250365] [] dump_backtrace+0x0/0x238
[ 0.250371] [] show_stack+0x14/0x20
[ 0.250377] [] dump_stack+0x9c/0xc0
[ 0.250384] [] __schedule_bug+0x50/0x70
[ 0.250391] [] __schedule+0x52c/0x5a8
[ 0.250395] [] schedule+0x38/0xa0
[ 0.250400] [] rwsem_down_read_failed+0xc4/0x108
[ 0.250407] [] __percpu_down_read+0x100/0x118
[ 0.250414] [] get_online_cpus+0x70/0x78
[ 0.250420] [] static_key_enable+0x28/0x48
[ 0.250425] [] update_cpu_capabilities+0x78/0xf8
[ 0.250430] [] update_cpu_errata_workarounds+0x1c/0x28
[ 0.250435] [] check_local_cpu_capabilities+0xf4/0x128
[ 0.250440] [] secondary_start_kernel+0x8c/0x118
[ 0.250444] [<000000008093d1b4>] 0x8093d1b4

We call cpus_set_cap() from update_cpu_capabilities(), which is called
from the secondary boot path (where the CPU orchestrating the onlining
holds the hotplug rwsem), and from the primary boot path, where this is
not held.

This patch makes cpus_set_cap() use static_branch_enable_cpuslocked(),
and updates all of the callers of update_cpu_capabilities() to be
consistent with the change.

Signed-off-by: Mark Rutland
Reported-by: Catalin Marinas
Suggested-by: Sebastian Andrzej Siewior
Suggested-by: Thomas Gleixner
Cc: Will Deacon
Signed-off-by: Suzuki K Poulose
[Mark: minor fixups]
Signed-off-by: Mark Rutland
---
 arch/arm64/include/asm/cpufeature.h | 5 +++--
 arch/arm64/kernel/cpu_errata.c      | 13 ++++++++++++-
 arch/arm64/kernel/cpufeature.c      | 5 ++++-
 arch/arm64/kernel/smp.c             | 7 +++----
 4 files changed, 22 insertions(+), 8 deletions(-)

-- 
1.9.1

Acked-by: Will Deacon

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index f31c48d..c96353a 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -145,7 +145,7 @@ static inline void cpus_set_cap(unsigned int num)
 			num, ARM64_NCAPS);
 	} else {
 		__set_bit(num, cpu_hwcaps);
-		static_branch_enable(&cpu_hwcap_keys[num]);
+		static_branch_enable_cpuslocked(&cpu_hwcap_keys[num]);
 	}
 }
 
@@ -222,7 +222,8 @@ void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 void enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps);
 void check_local_cpu_capabilities(void);
 
-void update_cpu_errata_workarounds(void);
+void update_secondary_cpu_errata_workarounds(void);
+void update_boot_cpu_errata_workarounds(void);
 void __init enable_errata_workarounds(void);
 void verify_local_cpu_errata_workarounds(void);

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index f6cc67e..379ad8d 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -175,9 +175,20 @@ void verify_local_cpu_errata_workarounds(void)
 	}
 }
 
-void update_cpu_errata_workarounds(void)
+/*
+ * Secondary CPUs are booted with the waker holding the
+ * CPU hotplug lock, hence we don't need to lock it here again.
+ */
+void update_secondary_cpu_errata_workarounds(void)
+{
+	update_cpu_capabilities(arm64_errata, "enabling workaround for");
+}
+
+void update_boot_cpu_errata_workarounds(void)
 {
+	get_online_cpus();
 	update_cpu_capabilities(arm64_errata, "enabling workaround for");
+	put_online_cpus();
 }
 
 void __init enable_errata_workarounds(void)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index abda8e8..62d3a12 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -956,6 +956,7 @@ static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
 			cap_set_elf_hwcap(hwcaps);
 }
 
+/* Should be called with CPU hotplug lock held */
 void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 			    const char *info)
 {
@@ -1075,14 +1076,16 @@ void check_local_cpu_capabilities(void)
 	 * advertised capabilities.
 	 */
 	if (!sys_caps_initialised)
-		update_cpu_errata_workarounds();
+		update_secondary_cpu_errata_workarounds();
 	else
 		verify_local_cpu_capabilities();
 }
 
 static void __init setup_feature_capabilities(void)
 {
+	get_online_cpus();
 	update_cpu_capabilities(arm64_features, "detected feature:");
+	put_online_cpus();
 	enable_cpu_capabilities(arm64_features);
 }
 
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 9b10365..d9ddd5b 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -447,11 +447,10 @@ void __init smp_prepare_boot_cpu(void)
 	cpuinfo_store_boot_cpu();
 	save_boot_cpu_run_el();
 	/*
-	 * Run the errata work around checks on the boot CPU, once we have
-	 * initialised the cpu feature infrastructure from
-	 * cpuinfo_store_boot_cpu() above.
+	 * Run the errata work around checks on the boot CPU, now that
+	 * cpuinfo_store_boot_cpu() has set things up.
 	 */
-	update_cpu_errata_workarounds();
+	update_boot_cpu_errata_workarounds();
 }
 
 static u64 __init of_get_cpu_mpidr(struct device_node *dn)