From patchwork Mon May 18 17:37:07 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 225828
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Sergei Trofimovich,
 Borislav Petkov, Kalle Valo
Subject: [PATCH 4.14 095/114] x86: Fix early boot crash on gcc-10, third try
Date: Mon, 18 May 2020 19:37:07 +0200
Message-Id: <20200518173519.031923237@linuxfoundation.org>
In-Reply-To: <20200518173503.033975649@linuxfoundation.org>
References: <20200518173503.033975649@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
X-Mailing-List: stable@vger.kernel.org

From: Borislav Petkov

commit a9a3ed1eff3601b63aea4fb462d8b3b92c7c1e7e upstream.

... or the odyssey of trying to disable the stack protector for the
function which generates the stack canary value.

The whole story started with Sergei reporting a boot crash with a
kernel built with gcc-10:

  Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
  CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.6.0-rc5-00235-gfffb08b37df9 #139
  Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./H77M-D3H, BIOS F12 11/14/2013
  Call Trace:
   dump_stack
   panic
   ? start_secondary
   __stack_chk_fail
   start_secondary
   secondary_startup_64
  ---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary

This happens because gcc-10 tail-call optimizes the last function call
in start_secondary() - cpu_startup_entry() - and thus emits a stack
canary check which fails because the canary value changes after the
boot_init_stack_canary() call.
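As a rough stand-alone sketch of that failure mode (all names below are
illustrative, not the kernel's, and many x86-64 user-space targets keep
the guard in a TLS slot rather than in a global __stack_chk_guard - but
the mechanics are the same):

  /* sketch; build with e.g. gcc-10 -O2 -fstack-protector-all
   * on a target that defines __stack_chk_guard */
  extern unsigned long __stack_chk_guard;    /* the live guard value */

  static void change_guard(void)             /* boot_init_stack_canary() analog */
  {
          __stack_chk_guard = 0xdeadbeefUL;  /* guard changes mid-function */
  }

  extern void next_stage(void);              /* cpu_startup_entry() analog */

  void entry(void)                           /* start_secondary() analog */
  {
          /* the prologue stored the OLD guard in this frame's canary slot */
          change_guard();
          /* tail-call optimized: the epilogue's canary check runs BEFORE
           * the jmp, compares the stale slot against the new guard and
           * calls __stack_chk_fail() */
          next_stage();
  }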
To fix that, the initial attempt was to mark the one function which
generates the stack canary with:

  __attribute__((optimize("-fno-stack-protector"))) ... start_secondary(void *unused)

however, using the optimize attribute doesn't work cumulatively as the
attribute does not add to but rather replaces previously supplied
optimization options - roughly all -fxxx options. The key one among
them is -fno-omit-frame-pointer: dropping it leaves the function
without a frame pointer, which the kernel needs.

The next attempt to prevent compilers from tail-call optimizing the
last function call, cpu_startup_entry(), shy of carving out
start_secondary() into a separate compilation unit and building it with
-fno-stack-protector, was to add an empty asm("").

That solution was short and sweet, and reportedly supported by both
compilers, but we didn't get very far this time: future (LTO?)
optimization passes could potentially eliminate it, which leads us to
the third attempt: having an actual memory barrier there which the
compiler cannot ignore or move around, etc.

That should hold for a long time, but hey we said that about the other
two solutions too so...
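As a stand-alone illustration of why a real barrier does the job
(stage1()/stage2() are made-up names, and the macro body is roughly
what mb() expands to on 64-bit x86):

  #define prevent_tail_call_optimization() asm volatile("mfence" ::: "memory")

  extern void stage2(void);   /* never returns, like cpu_startup_entry() */

  void stage1(void)
  {
          stage2();
          /* Without the barrier, gcc -O2 emits "jmp stage2" and runs
           * stage1's epilogue - canary check included - before the jump.
           * The volatile asm must execute after stage2() returns, so the
           * compiler has to emit a real "call stage2", and the canary
           * check moves after a return that never comes. */
          prevent_tail_call_optimization();
  }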
Reported-by: Sergei Trofimovich
Signed-off-by: Borislav Petkov
Tested-by: Kalle Valo
Cc:
Link: https://lkml.kernel.org/r/20200314164451.346497-1-slyfox@gentoo.org
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/stackprotector.h |    7 ++++++-
 arch/x86/kernel/smpboot.c             |    8 ++++++++
 arch/x86/xen/smp_pv.c                 |    1 +
 include/linux/compiler.h              |    6 ++++++
 init/main.c                           |    2 ++
 5 files changed, 23 insertions(+), 1 deletion(-)

--- a/arch/x86/include/asm/stackprotector.h
+++ b/arch/x86/include/asm/stackprotector.h
@@ -55,8 +55,13 @@
 /*
  * Initialize the stackprotector canary value.
  *
- * NOTE: this must only be called from functions that never return,
+ * NOTE: this must only be called from functions that never return
  * and it must always be inlined.
+ *
+ * In addition, it should be called from a compilation unit for which
+ * stack protector is disabled. Alternatively, the caller should not end
+ * with a function call which gets tail-call optimized as that would
+ * lead to checking a modified canary value.
  */
 static __always_inline void boot_init_stack_canary(void)
 {
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -270,6 +270,14 @@ static void notrace start_secondary(void
 	wmb();
 
 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+
+	/*
+	 * Prevent tail call to cpu_startup_entry() because the stack protector
+	 * guard has been changed a couple of function calls up, in
+	 * boot_init_stack_canary() and must not be checked before tail calling
+	 * another function.
+	 */
+	prevent_tail_call_optimization();
 }
 
 /**
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -89,6 +89,7 @@ asmlinkage __visible void cpu_bringup_an
 {
 	cpu_bringup();
 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+	prevent_tail_call_optimization();
 }
 
 void xen_smp_intr_free_pv(unsigned int cpu)
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -382,4 +382,10 @@ unsigned long read_word_at_a_time(const
 	(_________p1); \
 })
 
+/*
+ * This is needed in functions which generate the stack canary, see
+ * arch/x86/kernel/smpboot.c::start_secondary() for an example.
+ */
+#define prevent_tail_call_optimization() mb()
+
 #endif /* __LINUX_COMPILER_H */
--- a/init/main.c
+++ b/init/main.c
@@ -706,6 +706,8 @@ asmlinkage __visible void __init start_k
 
 	/* Do the rest non-__init'ed, we're now alive */
 	rest_init();
+
+	prevent_tail_call_optimization();
 }
 
 /* Call all constructor functions linked into the kernel. */
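A quick, hypothetical way to eyeball the result in a built x86-64 tree
(addresses elided; exact output will differ):

  $ objdump -d vmlinux | less      # search for <start_secondary>:
  ...
  call   <cpu_startup_entry>       # a real call now, not a "jmp"
  mfence                           # the barrier that keeps it that way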