From patchwork Fri Nov 20 11:03:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 329896 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E3703C64E7A for ; Fri, 20 Nov 2020 11:04:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 81D6D2242B for ; Fri, 20 Nov 2020 11:04:45 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=linuxfoundation.org header.i=@linuxfoundation.org header.b="mOz6rIiu" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727959AbgKTLE0 (ORCPT ); Fri, 20 Nov 2020 06:04:26 -0500 Received: from mail.kernel.org ([198.145.29.99]:51950 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727940AbgKTLE0 (ORCPT ); Fri, 20 Nov 2020 06:04:26 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 98DC92236F; Fri, 20 Nov 2020 11:04:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1605870265; bh=yVP9HCp5Xi6g7SjY83SY5YsHLJNL3JHWYJ+JMcjpzw0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mOz6rIiu6spW2LbfFdpFyDbysb1Qz2n0Plg0IE56UlxpezUbdlFXNu7OX1kd25HcJ cgL8rx8ine2QE0QhtVJd+ppzrDDe686Uzl9Soml3rCeolDb9DhpZEx72Ii3txff63S QLu7KeVGKIUdMTDGpVJEUwnuSCx5J4plFgc2LzZ4= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , dja@axtens.net Subject: [PATCH 4.9 02/16] powerpc/64s: move some exception handlers out of line Date: Fri, 20 Nov 2020 12:03:07 +0100 Message-Id: <20201120104539.835892188@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201120104539.706905067@linuxfoundation.org> References: <20201120104539.706905067@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Daniel Axtens (backport only) We're about to grow the exception handlers, which will make a bunch of them no longer fit within the space available. We move them out of line. This is a fiddly and error-prone business, so in the interests of reviewability I haven't merged this in with the addition of the entry flush. 
Signed-off-by: Daniel Axtens Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/kernel/exceptions-64s.S | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -519,6 +519,10 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TY EXC_REAL_BEGIN(data_access_slb, 0x380, 0x400) SET_SCRATCH0(r13) EXCEPTION_PROLOG_0(PACA_EXSLB) + b tramp_data_access_slb +EXC_REAL_END(data_access_slb, 0x380, 0x400) + +TRAMP_REAL_BEGIN(tramp_data_access_slb) EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x380) std r3,PACA_EXSLB+EX_R3(r13) mfspr r3,SPRN_DAR @@ -537,7 +541,6 @@ EXC_REAL_BEGIN(data_access_slb, 0x380, 0 mtctr r10 bctr #endif -EXC_REAL_END(data_access_slb, 0x380, 0x400) EXC_VIRT_BEGIN(data_access_slb, 0x4380, 0x4400) SET_SCRATCH0(r13) @@ -587,6 +590,10 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TY EXC_REAL_BEGIN(instruction_access_slb, 0x480, 0x500) SET_SCRATCH0(r13) EXCEPTION_PROLOG_0(PACA_EXSLB) + b tramp_instruction_access_slb +EXC_REAL_END(instruction_access_slb, 0x480, 0x500) + +TRAMP_REAL_BEGIN(tramp_instruction_access_slb) EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x480) std r3,PACA_EXSLB+EX_R3(r13) mfspr r3,SPRN_SRR0 /* SRR0 is faulting address */ @@ -600,7 +607,6 @@ EXC_REAL_BEGIN(instruction_access_slb, 0 mtctr r10 bctr #endif -EXC_REAL_END(instruction_access_slb, 0x480, 0x500) EXC_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x4500) SET_SCRATCH0(r13) From patchwork Fri Nov 20 11:03:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 329863 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C5285C5519F for ; Fri, 20 Nov 2020 11:11:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 70D772222F for ; Fri, 20 Nov 2020 11:11:04 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=linuxfoundation.org header.i=@linuxfoundation.org header.b="iUIPGh06" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727246AbgKTLEe (ORCPT ); Fri, 20 Nov 2020 06:04:34 -0500 Received: from mail.kernel.org ([198.145.29.99]:52048 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727973AbgKTLEc (ORCPT ); Fri, 20 Nov 2020 06:04:32 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id A4596206E3; Fri, 20 Nov 2020 11:04:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1605870271; bh=iIm/zTG25a4+AjfBGOKSunF46YRxDd+azgdtBOiJf40=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iUIPGh06hV6w6voaQlHeUj+yOA2PneyGMvwiWqUo/0uNIUMlcjDTC/5bnJ1X+A2/F MMBXmb77euANuD7HR7dEtpIKavSmbjEUeX4BKQbolwB/Vxq0+qJky3b3t/Y+j9NpMG dgP+vPf5M0zqIUbZL31sOpBKR8gGR06r/yB63g4E= From: Greg Kroah-Hartman 
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , dja@axtens.net, Christophe Leroy , Russell Currey , Michael Ellerman Subject: [PATCH 4.9 04/16] powerpc: Add a framework for user access tracking Date: Fri, 20 Nov 2020 12:03:09 +0100 Message-Id: <20201120104539.940454331@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201120104539.706905067@linuxfoundation.org> References: <20201120104539.706905067@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Christophe Leroy Backported from commit de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace Access Protection"). Here we don't try to add the KUAP framework, we just want the helper functions because we want to put uaccess flush helpers in them. In terms of fixes, we don't need commit 1d8f739b07bd ("powerpc/kuap: Fix set direction in allow/prevent_user_access()") as we don't have real KUAP. Likewise, as all our allows are no-ops and all our prevents are just flushes, we don't need commit 9dc086f1e9ef ("powerpc/futex: Fix incorrect user access blocking"). The other 2 fixes we do need. The original description is: This patch implements a framework for Kernel Userspace Access Protection. Then subarches will have the possibility to provide their own implementation by providing setup_kuap() and allow/prevent_user_access(). Some platforms will need to know the area accessed and whether it is accessed from read, write or both. Therefore source, destination and size are handed over to the two functions. mpe: Rename to allow/prevent rather than unlock/lock, and add read/write wrappers. Drop the 32-bit code for now until we have an implementation for it. Add kuap to pt_regs for 64-bit as well as 32-bit. Don't split strings, use pr_crit_ratelimited().
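To make the intent concrete, a minimal sketch of the pattern the uaccess changes below follow once the helpers exist (illustrative only; the copy_from_user_sketch name is invented for this note, and the real call sites keep their access_ok()/check_object_size() handling):

static inline unsigned long
copy_from_user_sketch(void *to, const void __user *from, unsigned long n)
{
	unsigned long left;

	allow_read_from_user(from, n);		/* open the user-access window */
	left = __copy_tofrom_user((__force void __user *)to, from, n);
	prevent_read_from_user(from, n);	/* close it again */

	return left;				/* bytes that could not be copied */
}

In this backport every allow_*() helper is a no-op and every prevent_*() helper is only a hook point; a later patch in the series turns the prevent side into an L1-D flush.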
Signed-off-by: Christophe Leroy Signed-off-by: Russell Currey Signed-off-by: Michael Ellerman Signed-off-by: Daniel Axtens Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/include/asm/futex.h | 4 +++ arch/powerpc/include/asm/kup.h | 36 ++++++++++++++++++++++++++++++++ arch/powerpc/include/asm/uaccess.h | 39 ++++++++++++++++++++++++++--------- arch/powerpc/lib/checksum_wrappers.c | 4 +++ 4 files changed, 74 insertions(+), 9 deletions(-) create mode 100644 arch/powerpc/include/asm/kup.h --- a/arch/powerpc/include/asm/futex.h +++ b/arch/powerpc/include/asm/futex.h @@ -36,6 +36,7 @@ static inline int arch_futex_atomic_op_i { int oldval = 0, ret; + allow_write_to_user(uaddr, sizeof(*uaddr)); pagefault_disable(); switch (op) { @@ -62,6 +63,7 @@ static inline int arch_futex_atomic_op_i *oval = oldval; + prevent_write_to_user(uaddr, sizeof(*uaddr)); return ret; } @@ -75,6 +77,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32))) return -EFAULT; + allow_write_to_user(uaddr, sizeof(*uaddr)); __asm__ __volatile__ ( PPC_ATOMIC_ENTRY_BARRIER "1: lwarx %1,0,%3 # futex_atomic_cmpxchg_inatomic\n\ @@ -97,6 +100,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, : "cc", "memory"); *uval = prev; + prevent_write_to_user(uaddr, sizeof(*uaddr)); return ret; } --- /dev/null +++ b/arch/powerpc/include/asm/kup.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_POWERPC_KUP_H_ +#define _ASM_POWERPC_KUP_H_ + +#ifndef __ASSEMBLY__ + +#include + +static inline void allow_user_access(void __user *to, const void __user *from, + unsigned long size) { } +static inline void prevent_user_access(void __user *to, const void __user *from, + unsigned long size) { } + +static inline void allow_read_from_user(const void __user *from, unsigned long size) +{ + allow_user_access(NULL, from, size); +} + +static inline void allow_write_to_user(void __user *to, unsigned long size) +{ + allow_user_access(to, NULL, size); +} + +static inline void prevent_read_from_user(const void __user *from, unsigned long size) +{ + prevent_user_access(NULL, from, size); +} + +static inline void prevent_write_to_user(void __user *to, unsigned long size) +{ + prevent_user_access(to, NULL, size); +} + +#endif /* !__ASSEMBLY__ */ + +#endif /* _ASM_POWERPC_KUP_H_ */ --- a/arch/powerpc/include/asm/uaccess.h +++ b/arch/powerpc/include/asm/uaccess.h @@ -9,6 +9,7 @@ #include #include #include +#include #define VERIFY_READ 0 #define VERIFY_WRITE 1 @@ -164,6 +165,7 @@ extern long __put_user_bad(void); #define __put_user_size(x, ptr, size, retval) \ do { \ retval = 0; \ + allow_write_to_user(ptr, size); \ switch (size) { \ case 1: __put_user_asm(x, ptr, retval, "stb"); break; \ case 2: __put_user_asm(x, ptr, retval, "sth"); break; \ @@ -171,6 +173,7 @@ do { \ case 8: __put_user_asm2(x, ptr, retval); break; \ default: __put_user_bad(); \ } \ + prevent_write_to_user(ptr, size); \ } while (0) #define __put_user_nocheck(x, ptr, size) \ @@ -252,6 +255,7 @@ do { \ __chk_user_ptr(ptr); \ if (size > sizeof(x)) \ (x) = __get_user_bad(); \ + allow_read_from_user(ptr, size); \ switch (size) { \ case 1: __get_user_asm(x, ptr, retval, "lbz"); break; \ case 2: __get_user_asm(x, ptr, retval, "lhz"); break; \ @@ -259,6 +263,7 @@ do { \ case 8: __get_user_asm2(x, ptr, retval); break; \ default: (x) = __get_user_bad(); \ } \ + prevent_read_from_user(ptr, size); \ } while (0) #define __get_user_nocheck(x, ptr, size) \ @@ -312,9 +317,14 @@ extern unsigned long __copy_tofrom_user( static inline unsigned long 
copy_from_user(void *to, const void __user *from, unsigned long n) { + unsigned long ret; + if (likely(access_ok(VERIFY_READ, from, n))) { check_object_size(to, n, false); - return __copy_tofrom_user((__force void __user *)to, from, n); + allow_user_access(to, from, n); + ret = __copy_tofrom_user((__force void __user *)to, from, n); + prevent_user_access(to, from, n); + return ret; } memset(to, 0, n); return n; @@ -347,8 +357,9 @@ extern unsigned long copy_in_user(void _ static inline unsigned long __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n) { + unsigned long ret; if (__builtin_constant_p(n) && (n <= 8)) { - unsigned long ret = 1; + ret = 1; switch (n) { case 1: @@ -375,14 +386,18 @@ static inline unsigned long __copy_from_ check_object_size(to, n, false); barrier_nospec(); - return __copy_tofrom_user((__force void __user *)to, from, n); + allow_read_from_user(from, n); + ret = __copy_tofrom_user((__force void __user *)to, from, n); + prevent_read_from_user(from, n); + return ret; } static inline unsigned long __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n) { + unsigned long ret; if (__builtin_constant_p(n) && (n <= 8)) { - unsigned long ret = 1; + ret = 1; switch (n) { case 1: @@ -403,8 +418,10 @@ static inline unsigned long __copy_to_us } check_object_size(from, n, true); - - return __copy_tofrom_user(to, (__force const void __user *)from, n); + allow_write_to_user(to, n); + ret = __copy_tofrom_user(to, (__force const void __user *)from, n); + prevent_write_to_user(to, n); + return ret; } static inline unsigned long __copy_from_user(void *to, @@ -425,10 +442,14 @@ extern unsigned long __clear_user(void _ static inline unsigned long clear_user(void __user *addr, unsigned long size) { + unsigned long ret = size; might_fault(); - if (likely(access_ok(VERIFY_WRITE, addr, size))) - return __clear_user(addr, size); - return size; + if (likely(access_ok(VERIFY_WRITE, addr, size))) { + allow_write_to_user(addr, size); + ret = __clear_user(addr, size); + prevent_write_to_user(addr, size); + } + return ret; } extern long strncpy_from_user(char *dst, const char __user *src, long count); --- a/arch/powerpc/lib/checksum_wrappers.c +++ b/arch/powerpc/lib/checksum_wrappers.c @@ -29,6 +29,7 @@ __wsum csum_and_copy_from_user(const voi unsigned int csum; might_sleep(); + allow_read_from_user(src, len); *err_ptr = 0; @@ -60,6 +61,7 @@ __wsum csum_and_copy_from_user(const voi } out: + prevent_read_from_user(src, len); return (__force __wsum)csum; } EXPORT_SYMBOL(csum_and_copy_from_user); @@ -70,6 +72,7 @@ __wsum csum_and_copy_to_user(const void unsigned int csum; might_sleep(); + allow_write_to_user(dst, len); *err_ptr = 0; @@ -97,6 +100,7 @@ __wsum csum_and_copy_to_user(const void } out: + prevent_write_to_user(dst, len); return (__force __wsum)csum; } EXPORT_SYMBOL(csum_and_copy_to_user); From patchwork Fri Nov 20 11:03:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 329894 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org 
[198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F2D48C83011 for ; Fri, 20 Nov 2020 11:04:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 777C72240C for ; Fri, 20 Nov 2020 11:04:48 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=linuxfoundation.org header.i=@linuxfoundation.org header.b="2nporCt9" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726118AbgKTLEp (ORCPT ); Fri, 20 Nov 2020 06:04:45 -0500 Received: from mail.kernel.org ([198.145.29.99]:52180 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728022AbgKTLEo (ORCPT ); Fri, 20 Nov 2020 06:04:44 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 41B812242B; Fri, 20 Nov 2020 11:04:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1605870282; bh=y5vtgsJjOSM1hXZdUk3/7LCfJHk3Q+B/PENNgMV8Bd0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=2nporCt9sg8ObFbPV8HHSEya7elTNcONsx5lhc2oeW0m+n4PNfSbqBJHaEzqiAlF4 GD1JaDCOnxMJJAIynWk9yfGDESASHtdXqOrKvoWtOUv3m4f7XZEvW7/Wo59B6V7UaD iXynPsd5t9SWpyS//03S+Conggv65Fwjrq7nA1qY= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , dja@axtens.net, Nicholas Piggin Subject: [PATCH 4.9 08/16] powerpc/64s: flush L1D after user accesses Date: Fri, 20 Nov 2020 12:03:13 +0100 Message-Id: <20201120104540.138288851@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201120104539.706905067@linuxfoundation.org> References: <20201120104539.706905067@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Nicholas Piggin commit 9a32a7e78bd0cd9a9b6332cbdc345ee5ffd0c5de upstream. IBM Power9 processors can speculatively operate on data in the L1 cache before it has been completely validated, via a way-prediction mechanism. It is not possible for an attacker to determine the contents of impermissible memory using this method, since these systems implement a combination of hardware and software security measures to prevent scenarios where protected data could be leaked. However these measures don't address the scenario where an attacker induces the operating system to speculatively execute instructions using data that the attacker controls. This can be used for example to speculatively bypass "kernel user access prevention" techniques, as discovered by Anthony Steinhauser of Google's Safeside Project. This is not an attack by itself, but there is a possibility it could be used in conjunction with side-channels or other weaknesses in the privileged code to construct an attack. This issue can be mitigated by flushing the L1 cache between privilege boundaries of concern. This patch flushes the L1 cache after user accesses. This is part of the fix for CVE-2020-4788. 
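Condensed, the Book3S-64 hook added below (kup-radix.h) boils down to the following sketch: user accesses themselves stay unrestricted, but closing the access window flushes the L1-D cache when the mitigation is enabled.

static inline void prevent_user_access(void __user *to, const void __user *from,
				       unsigned long size)
{
	/* Flush L1-D when leaving a user-access window, if enabled. */
	if (static_branch_unlikely(&uaccess_flush_key))
		do_uaccess_flush();	/* displacement flush in exceptions-64s.S */
}

The flush can be disabled at boot with no_uaccess_flush, or toggled later through the uaccess_flush debugfs file that this patch also adds.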
Signed-off-by: Nicholas Piggin Signed-off-by: Daniel Axtens Signed-off-by: Greg Kroah-Hartman --- Documentation/kernel-parameters.txt | 4 arch/powerpc/include/asm/book3s/64/kup-radix.h | 22 ++++ arch/powerpc/include/asm/feature-fixups.h | 9 + arch/powerpc/include/asm/kup.h | 4 arch/powerpc/include/asm/security_features.h | 3 arch/powerpc/include/asm/setup.h | 1 arch/powerpc/kernel/exceptions-64s.S | 123 +++++++++---------------- arch/powerpc/kernel/setup_64.c | 62 ++++++++++++ arch/powerpc/kernel/vmlinux.lds.S | 7 + arch/powerpc/lib/feature-fixups.c | 50 ++++++++++ arch/powerpc/platforms/powernv/setup.c | 7 + arch/powerpc/platforms/pseries/setup.c | 4 12 files changed, 217 insertions(+), 79 deletions(-) create mode 100644 arch/powerpc/include/asm/book3s/64/kup-radix.h --- a/Documentation/kernel-parameters.txt +++ b/Documentation/kernel-parameters.txt @@ -2528,6 +2528,7 @@ bytes respectively. Such letter suffixes tsx_async_abort=off [X86] kvm.nx_huge_pages=off [X86] no_entry_flush [PPC] + no_uaccess_flush [PPC] Exceptions: This does not have any effect on @@ -2885,6 +2886,9 @@ bytes respectively. Such letter suffixes nospec_store_bypass_disable [HW] Disable all mitigations for the Speculative Store Bypass vulnerability + no_uaccess_flush + [PPC] Don't flush the L1-D cache after accessing user data. + noxsave [BUGS=X86] Disables x86 extended register state save and restore using xsave. The kernel will fallback to enabling legacy floating-point and sse state. --- /dev/null +++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H +#define _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H + +DECLARE_STATIC_KEY_FALSE(uaccess_flush_key); + +/* Prototype for function defined in exceptions-64s.S */ +void do_uaccess_flush(void); + +static __always_inline void allow_user_access(void __user *to, const void __user *from, + unsigned long size) +{ +} + +static inline void prevent_user_access(void __user *to, const void __user *from, + unsigned long size) +{ + if (static_branch_unlikely(&uaccess_flush_key)) + do_uaccess_flush(); +} + +#endif /* _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H */ --- a/arch/powerpc/include/asm/feature-fixups.h +++ b/arch/powerpc/include/asm/feature-fixups.h @@ -205,6 +205,14 @@ void setup_feature_keys(void); FTR_ENTRY_OFFSET 955b-956b; \ .popsection; +#define UACCESS_FLUSH_FIXUP_SECTION \ +959: \ + .pushsection __uaccess_flush_fixup,"a"; \ + .align 2; \ +960: \ + FTR_ENTRY_OFFSET 959b-960b; \ + .popsection; + #define ENTRY_FLUSH_FIXUP_SECTION \ 957: \ .pushsection __entry_flush_fixup,"a"; \ @@ -247,6 +255,7 @@ extern long stf_barrier_fallback; extern long entry_flush_fallback; extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup; extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup; +extern long __start___uaccess_flush_fixup, __stop___uaccess_flush_fixup; extern long __start___entry_flush_fixup, __stop___entry_flush_fixup; extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup; extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup; --- a/arch/powerpc/include/asm/kup.h +++ b/arch/powerpc/include/asm/kup.h @@ -6,10 +6,14 @@ #include +#ifdef CONFIG_PPC_BOOK3S_64 +#include +#else static inline void allow_user_access(void __user *to, const void __user *from, unsigned long size) { } static inline void prevent_user_access(void __user *to, const void __user *from, unsigned long size) { } +#endif /* CONFIG_PPC_BOOK3S_64 */ static 
inline void allow_read_from_user(const void __user *from, unsigned long size) { --- a/arch/powerpc/include/asm/security_features.h +++ b/arch/powerpc/include/asm/security_features.h @@ -87,6 +87,8 @@ static inline bool security_ftr_enabled( // The L1-D cache should be flushed when entering the kernel #define SEC_FTR_L1D_FLUSH_ENTRY 0x0000000000004000ull +// The L1-D cache should be flushed after user accesses from the kernel +#define SEC_FTR_L1D_FLUSH_UACCESS 0x0000000000008000ull // Features enabled by default #define SEC_FTR_DEFAULT \ @@ -94,6 +96,7 @@ static inline bool security_ftr_enabled( SEC_FTR_L1D_FLUSH_PR | \ SEC_FTR_BNDS_CHK_SPEC_BAR | \ SEC_FTR_L1D_FLUSH_ENTRY | \ + SEC_FTR_L1D_FLUSH_UACCESS | \ SEC_FTR_FAVOUR_SECURITY) #endif /* _ASM_POWERPC_SECURITY_FEATURES_H */ --- a/arch/powerpc/include/asm/setup.h +++ b/arch/powerpc/include/asm/setup.h @@ -58,6 +58,7 @@ void setup_barrier_nospec(void); #else static inline void setup_barrier_nospec(void) { }; #endif +void do_uaccess_flush_fixups(enum l1d_flush_type types); void do_entry_flush_fixups(enum l1d_flush_type types); void do_barrier_nospec_fixups(bool enable); extern bool barrier_nospec_enabled; --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -1377,6 +1377,48 @@ TRAMP_REAL_BEGIN(stf_barrier_fallback) .endr blr +/* Clobbers r10, r11, ctr */ +.macro L1D_DISPLACEMENT_FLUSH + ld r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13) + ld r11,PACA_L1D_FLUSH_SIZE(r13) + srdi r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */ + mtctr r11 + DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */ + + /* order ld/st prior to dcbt stop all streams with flushing */ + sync + + /* + * The load adresses are at staggered offsets within cachelines, + * which suits some pipelines better (on others it should not + * hurt). + */ +1: + ld r11,(0x80 + 8)*0(r10) + ld r11,(0x80 + 8)*1(r10) + ld r11,(0x80 + 8)*2(r10) + ld r11,(0x80 + 8)*3(r10) + ld r11,(0x80 + 8)*4(r10) + ld r11,(0x80 + 8)*5(r10) + ld r11,(0x80 + 8)*6(r10) + ld r11,(0x80 + 8)*7(r10) + addi r10,r10,0x80*8 + bdnz 1b +.endm + +USE_TEXT_SECTION() + +_GLOBAL(do_uaccess_flush) + UACCESS_FLUSH_FIXUP_SECTION + nop + nop + nop + blr + L1D_DISPLACEMENT_FLUSH + blr +_ASM_NOKPROBE_SYMBOL(do_uaccess_flush) +EXPORT_SYMBOL(do_uaccess_flush) + /* * Real mode exceptions actually use this too, but alternate * instruction code patches (which end up in the common .text area) @@ -1632,32 +1674,7 @@ rfi_flush_fallback: std r10,PACA_EXRFI+EX_R10(r13) std r11,PACA_EXRFI+EX_R11(r13) mfctr r9 - ld r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13) - ld r11,PACA_L1D_FLUSH_SIZE(r13) - srdi r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */ - mtctr r11 - DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */ - - /* order ld/st prior to dcbt stop all streams with flushing */ - sync - - /* - * The load adresses are at staggered offsets within cachelines, - * which suits some pipelines better (on others it should not - * hurt). 
- */ -1: - ld r11,(0x80 + 8)*0(r10) - ld r11,(0x80 + 8)*1(r10) - ld r11,(0x80 + 8)*2(r10) - ld r11,(0x80 + 8)*3(r10) - ld r11,(0x80 + 8)*4(r10) - ld r11,(0x80 + 8)*5(r10) - ld r11,(0x80 + 8)*6(r10) - ld r11,(0x80 + 8)*7(r10) - addi r10,r10,0x80*8 - bdnz 1b - + L1D_DISPLACEMENT_FLUSH mtctr r9 ld r9,PACA_EXRFI+EX_R9(r13) ld r10,PACA_EXRFI+EX_R10(r13) @@ -1673,32 +1690,7 @@ hrfi_flush_fallback: std r10,PACA_EXRFI+EX_R10(r13) std r11,PACA_EXRFI+EX_R11(r13) mfctr r9 - ld r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13) - ld r11,PACA_L1D_FLUSH_SIZE(r13) - srdi r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */ - mtctr r11 - DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */ - - /* order ld/st prior to dcbt stop all streams with flushing */ - sync - - /* - * The load adresses are at staggered offsets within cachelines, - * which suits some pipelines better (on others it should not - * hurt). - */ -1: - ld r11,(0x80 + 8)*0(r10) - ld r11,(0x80 + 8)*1(r10) - ld r11,(0x80 + 8)*2(r10) - ld r11,(0x80 + 8)*3(r10) - ld r11,(0x80 + 8)*4(r10) - ld r11,(0x80 + 8)*5(r10) - ld r11,(0x80 + 8)*6(r10) - ld r11,(0x80 + 8)*7(r10) - addi r10,r10,0x80*8 - bdnz 1b - + L1D_DISPLACEMENT_FLUSH mtctr r9 ld r9,PACA_EXRFI+EX_R9(r13) ld r10,PACA_EXRFI+EX_R10(r13) @@ -1712,32 +1704,7 @@ entry_flush_fallback: std r10,PACA_EXRFI+EX_R10(r13) std r11,PACA_EXRFI+EX_R11(r13) mfctr r9 - ld r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13) - ld r11,PACA_L1D_FLUSH_SIZE(r13) - srdi r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */ - mtctr r11 - DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */ - - /* order ld/st prior to dcbt stop all streams with flushing */ - sync - - /* - * The load addresses are at staggered offsets within cachelines, - * which suits some pipelines better (on others it should not - * hurt). - */ -1: - ld r11,(0x80 + 8)*0(r10) - ld r11,(0x80 + 8)*1(r10) - ld r11,(0x80 + 8)*2(r10) - ld r11,(0x80 + 8)*3(r10) - ld r11,(0x80 + 8)*4(r10) - ld r11,(0x80 + 8)*5(r10) - ld r11,(0x80 + 8)*6(r10) - ld r11,(0x80 + 8)*7(r10) - addi r10,r10,0x80*8 - bdnz 1b - + L1D_DISPLACEMENT_FLUSH mtctr r9 ld r9,PACA_EXRFI+EX_R9(r13) ld r10,PACA_EXRFI+EX_R10(r13) --- a/arch/powerpc/kernel/setup_64.c +++ b/arch/powerpc/kernel/setup_64.c @@ -686,8 +686,12 @@ static enum l1d_flush_type enabled_flush static void *l1d_flush_fallback_area; static bool no_rfi_flush; static bool no_entry_flush; +static bool no_uaccess_flush; bool rfi_flush; bool entry_flush; +bool uaccess_flush; +DEFINE_STATIC_KEY_FALSE(uaccess_flush_key); +EXPORT_SYMBOL(uaccess_flush_key); static int __init handle_no_rfi_flush(char *p) { @@ -705,6 +709,14 @@ static int __init handle_no_entry_flush( } early_param("no_entry_flush", handle_no_entry_flush); +static int __init handle_no_uaccess_flush(char *p) +{ + pr_info("uaccess-flush: disabled on command line."); + no_uaccess_flush = true; + return 0; +} +early_param("no_uaccess_flush", handle_no_uaccess_flush); + /* * The RFI flush is not KPTI, but because users will see doco that says to use * nopti we hijack that option here to also disable the RFI flush. 
@@ -748,6 +760,20 @@ void entry_flush_enable(bool enable) entry_flush = enable; } +void uaccess_flush_enable(bool enable) +{ + if (enable) { + do_uaccess_flush_fixups(enabled_flush_types); + static_branch_enable(&uaccess_flush_key); + on_each_cpu(do_nothing, NULL, 1); + } else { + static_branch_disable(&uaccess_flush_key); + do_uaccess_flush_fixups(L1D_FLUSH_NONE); + } + + uaccess_flush = enable; +} + static void __ref init_fallback_flush(void) { u64 l1d_size, limit; @@ -802,6 +828,15 @@ void setup_entry_flush(bool enable) entry_flush_enable(enable); } +void setup_uaccess_flush(bool enable) +{ + if (cpu_mitigations_off()) + return; + + if (!no_uaccess_flush) + uaccess_flush_enable(enable); +} + #ifdef CONFIG_DEBUG_FS static int rfi_flush_set(void *data, u64 val) { @@ -855,10 +890,37 @@ static int entry_flush_get(void *data, u DEFINE_SIMPLE_ATTRIBUTE(fops_entry_flush, entry_flush_get, entry_flush_set, "%llu\n"); +static int uaccess_flush_set(void *data, u64 val) +{ + bool enable; + + if (val == 1) + enable = true; + else if (val == 0) + enable = false; + else + return -EINVAL; + + /* Only do anything if we're changing state */ + if (enable != uaccess_flush) + uaccess_flush_enable(enable); + + return 0; +} + +static int uaccess_flush_get(void *data, u64 *val) +{ + *val = uaccess_flush ? 1 : 0; + return 0; +} + +DEFINE_SIMPLE_ATTRIBUTE(fops_uaccess_flush, uaccess_flush_get, uaccess_flush_set, "%llu\n"); + static __init int rfi_flush_debugfs_init(void) { debugfs_create_file("rfi_flush", 0600, powerpc_debugfs_root, NULL, &fops_rfi_flush); debugfs_create_file("entry_flush", 0600, powerpc_debugfs_root, NULL, &fops_entry_flush); + debugfs_create_file("uaccess_flush", 0600, powerpc_debugfs_root, NULL, &fops_uaccess_flush); return 0; } device_initcall(rfi_flush_debugfs_init); --- a/arch/powerpc/kernel/vmlinux.lds.S +++ b/arch/powerpc/kernel/vmlinux.lds.S @@ -141,6 +141,13 @@ SECTIONS } . = ALIGN(8); + __uaccess_flush_fixup : AT(ADDR(__uaccess_flush_fixup) - LOAD_OFFSET) { + __start___uaccess_flush_fixup = .; + *(__uaccess_flush_fixup) + __stop___uaccess_flush_fixup = .; + } + + . 
= ALIGN(8); __entry_flush_fixup : AT(ADDR(__entry_flush_fixup) - LOAD_OFFSET) { __start___entry_flush_fixup = .; *(__entry_flush_fixup) --- a/arch/powerpc/lib/feature-fixups.c +++ b/arch/powerpc/lib/feature-fixups.c @@ -232,6 +232,56 @@ void do_stf_barrier_fixups(enum stf_barr do_stf_exit_barrier_fixups(types); } +void do_uaccess_flush_fixups(enum l1d_flush_type types) +{ + unsigned int instrs[4], *dest; + long *start, *end; + int i; + + start = PTRRELOC(&__start___uaccess_flush_fixup); + end = PTRRELOC(&__stop___uaccess_flush_fixup); + + instrs[0] = 0x60000000; /* nop */ + instrs[1] = 0x60000000; /* nop */ + instrs[2] = 0x60000000; /* nop */ + instrs[3] = 0x4e800020; /* blr */ + + i = 0; + if (types == L1D_FLUSH_FALLBACK) { + instrs[3] = 0x60000000; /* nop */ + /* fallthrough to fallback flush */ + } + + if (types & L1D_FLUSH_ORI) { + instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */ + instrs[i++] = 0x63de0000; /* ori 30,30,0 L1d flush*/ + } + + if (types & L1D_FLUSH_MTTRIG) + instrs[i++] = 0x7c12dba6; /* mtspr TRIG2,r0 (SPR #882) */ + + for (i = 0; start < end; start++, i++) { + dest = (void *)start + *start; + + pr_devel("patching dest %lx\n", (unsigned long)dest); + + patch_instruction(dest, instrs[0]); + + patch_instruction((dest + 1), instrs[1]); + patch_instruction((dest + 2), instrs[2]); + patch_instruction((dest + 3), instrs[3]); + } + + printk(KERN_DEBUG "uaccess-flush: patched %d locations (%s flush)\n", i, + (types == L1D_FLUSH_NONE) ? "no" : + (types == L1D_FLUSH_FALLBACK) ? "fallback displacement" : + (types & L1D_FLUSH_ORI) ? (types & L1D_FLUSH_MTTRIG) + ? "ori+mttrig type" + : "ori type" : + (types & L1D_FLUSH_MTTRIG) ? "mttrig type" + : "unknown"); +} + void do_entry_flush_fixups(enum l1d_flush_type types) { unsigned int instrs[3], *dest; --- a/arch/powerpc/platforms/powernv/setup.c +++ b/arch/powerpc/platforms/powernv/setup.c @@ -126,9 +126,10 @@ static void pnv_setup_rfi_flush(void) /* * 4.9 doesn't support Power9 bare metal, so we don't need to flush - * here - the flush fixes a P9 specific vulnerability. + * here - the flushes fix a P9 specific vulnerability. 
*/ security_ftr_clear(SEC_FTR_L1D_FLUSH_ENTRY); + security_ftr_clear(SEC_FTR_L1D_FLUSH_UACCESS); enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \ (security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) || \ @@ -140,6 +141,10 @@ static void pnv_setup_rfi_flush(void) enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && security_ftr_enabled(SEC_FTR_L1D_FLUSH_ENTRY); setup_entry_flush(enable); + + enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && + security_ftr_enabled(SEC_FTR_L1D_FLUSH_UACCESS); + setup_uaccess_flush(enable); } static void __init pnv_setup_arch(void) --- a/arch/powerpc/platforms/pseries/setup.c +++ b/arch/powerpc/platforms/pseries/setup.c @@ -539,6 +539,10 @@ void pseries_setup_rfi_flush(void) enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && security_ftr_enabled(SEC_FTR_L1D_FLUSH_ENTRY); setup_entry_flush(enable); + + enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && + security_ftr_enabled(SEC_FTR_L1D_FLUSH_UACCESS); + setup_uaccess_flush(enable); } static void __init pSeries_setup_arch(void) From patchwork Fri Nov 20 11:03:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 329865 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B3085C64E90 for ; Fri, 20 Nov 2020 11:11:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6BC7F2222F for ; Fri, 20 Nov 2020 11:11:03 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=linuxfoundation.org header.i=@linuxfoundation.org header.b="Z//6eD64" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728036AbgKTLEr (ORCPT ); Fri, 20 Nov 2020 06:04:47 -0500 Received: from mail.kernel.org ([198.145.29.99]:52234 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728031AbgKTLEq (ORCPT ); Fri, 20 Nov 2020 06:04:46 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 1B68B2236F; Fri, 20 Nov 2020 11:04:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1605870285; bh=uZP5PO/o0bTiMJkMMZ9aGNNs5w+4eUQLtZYBwDLUAVE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Z//6eD64mvnAWHnqScatzt46mykDwIhxitCuKJHuQUjVa7mC+/xrREBVUxJmW3KYk W2LKBLYYnghdF2bwtvf/GSncxnu6U9nwBW3lvEOPsfykKV3fCPBOTs62poEFPMUMKb GF14C8Ctv4X8VhQreBnymoNLrTBt2BoxTwl308VA= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Lucas Stach , Philipp Zabel , Wolfram Sang , Sudip Mukherjee Subject: [PATCH 4.9 09/16] i2c: imx: use clk notifier for rate changes Date: Fri, 20 Nov 2020 12:03:14 +0100 Message-Id: <20201120104540.189421521@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201120104539.706905067@linuxfoundation.org> 
References: <20201120104539.706905067@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Lucas Stach commit 90ad2cbe88c22d0215225ab9594eeead0eb24fde upstream Instead of repeatedly calling clk_get_rate for each transfer, register a clock notifier to update the cached divider value each time the clock rate actually changes. Signed-off-by: Lucas Stach Reviewed-by: Philipp Zabel Signed-off-by: Wolfram Sang Signed-off-by: Sudip Mukherjee Signed-off-by: Greg Kroah-Hartman --- drivers/i2c/busses/i2c-imx.c | 32 +++++++++++++++++++++++++------- 1 file changed, 25 insertions(+), 7 deletions(-) --- a/drivers/i2c/busses/i2c-imx.c +++ b/drivers/i2c/busses/i2c-imx.c @@ -194,6 +194,7 @@ struct imx_i2c_dma { struct imx_i2c_struct { struct i2c_adapter adapter; struct clk *clk; + struct notifier_block clk_change_nb; void __iomem *base; wait_queue_head_t queue; unsigned long i2csr; @@ -468,15 +469,14 @@ static int i2c_imx_acked(struct imx_i2c_ return 0; } -static void i2c_imx_set_clk(struct imx_i2c_struct *i2c_imx) +static void i2c_imx_set_clk(struct imx_i2c_struct *i2c_imx, + unsigned int i2c_clk_rate) { struct imx_i2c_clk_pair *i2c_clk_div = i2c_imx->hwdata->clk_div; - unsigned int i2c_clk_rate; unsigned int div; int i; /* Divider value calculation */ - i2c_clk_rate = clk_get_rate(i2c_imx->clk); if (i2c_imx->cur_clk == i2c_clk_rate) return; @@ -511,6 +511,20 @@ static void i2c_imx_set_clk(struct imx_i #endif } +static int i2c_imx_clk_notifier_call(struct notifier_block *nb, + unsigned long action, void *data) +{ + struct clk_notifier_data *ndata = data; + struct imx_i2c_struct *i2c_imx = container_of(&ndata->clk, + struct imx_i2c_struct, + clk); + + if (action & POST_RATE_CHANGE) + i2c_imx_set_clk(i2c_imx, ndata->new_rate); + + return NOTIFY_OK; +} + static int i2c_imx_start(struct imx_i2c_struct *i2c_imx) { unsigned int temp = 0; @@ -518,8 +532,6 @@ static int i2c_imx_start(struct imx_i2c_ dev_dbg(&i2c_imx->adapter.dev, "<%s>\n", __func__); - i2c_imx_set_clk(i2c_imx); - imx_i2c_write_reg(i2c_imx->ifdr, i2c_imx, IMX_I2C_IFDR); /* Enable I2C controller */ imx_i2c_write_reg(i2c_imx->hwdata->i2sr_clr_opcode, i2c_imx, IMX_I2C_I2SR); @@ -1131,6 +1143,9 @@ static int i2c_imx_probe(struct platform "clock-frequency", &i2c_imx->bitrate); if (ret < 0 && pdata && pdata->bitrate) i2c_imx->bitrate = pdata->bitrate; + i2c_imx->clk_change_nb.notifier_call = i2c_imx_clk_notifier_call; + clk_notifier_register(i2c_imx->clk, &i2c_imx->clk_change_nb); + i2c_imx_set_clk(i2c_imx, clk_get_rate(i2c_imx->clk)); /* Set up chip registers to defaults */ imx_i2c_write_reg(i2c_imx->hwdata->i2cr_ien_opcode ^ I2CR_IEN, @@ -1141,12 +1156,12 @@ static int i2c_imx_probe(struct platform ret = i2c_imx_init_recovery_info(i2c_imx, pdev); /* Give it another chance if pinctrl used is not ready yet */ if (ret == -EPROBE_DEFER) - goto rpm_disable; + goto clk_notifier_unregister; /* Add I2C adapter */ ret = i2c_add_numbered_adapter(&i2c_imx->adapter); if (ret < 0) - goto rpm_disable; + goto clk_notifier_unregister; pm_runtime_mark_last_busy(&pdev->dev); pm_runtime_put_autosuspend(&pdev->dev); @@ -1162,6 +1177,8 @@ static int i2c_imx_probe(struct platform return 0; /* Return OK */ +clk_notifier_unregister: + clk_notifier_unregister(i2c_imx->clk, &i2c_imx->clk_change_nb); rpm_disable: pm_runtime_put_noidle(&pdev->dev); pm_runtime_disable(&pdev->dev); @@ -1195,6 +1212,7 @@ static int i2c_imx_remove(struct platfor imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2CR); 
imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2SR); + clk_notifier_unregister(i2c_imx->clk, &i2c_imx->clk_change_nb); clk_disable_unprepare(i2c_imx->clk); pm_runtime_put_noidle(&pdev->dev); From patchwork Fri Nov 20 11:03:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 329899 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9B550C83018 for ; Fri, 20 Nov 2020 11:04:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 43E9F22264 for ; Fri, 20 Nov 2020 11:04:14 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=linuxfoundation.org header.i=@linuxfoundation.org header.b="2qFZSN/C" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727882AbgKTLEJ (ORCPT ); Fri, 20 Nov 2020 06:04:09 -0500 Received: from mail.kernel.org ([198.145.29.99]:51138 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727479AbgKTLEJ (ORCPT ); Fri, 20 Nov 2020 06:04:09 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id CB7102245A; Fri, 20 Nov 2020 11:04:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1605870248; bh=0y8pARHSu06BzIOetLFWR+XIxZY/Sk47qNXdJ+ZBZY0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=2qFZSN/CQxjbk8IwNU9JJMTx6a4PMB00QMtdJNFX2zd/i0CCbux2PRv/00vnQWw6b ASW10q6YqFv24A4R2A9o/h9xTiUtpebHc1o5GpU5XTaUIOc70/bwh/YNozuMWYcixT QxZmxzDObj4KSGYyIy4oJbhm2cPSG607/sdA4GOY= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Mike Looijmans , Peter Rosin , Hauke Mehrtens Subject: [PATCH 4.9 11/16] i2c: mux: pca954x: Add missing pca9546 definition to chip_desc Date: Fri, 20 Nov 2020 12:03:16 +0100 Message-Id: <20201120104540.289446648@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201120104539.706905067@linuxfoundation.org> References: <20201120104539.706905067@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Mike Looijmans commit dbe4d69d252e9e65c6c46826980b77b11a142065 upstream. The spec for the pca9546 was missing. This chip is the same as the pca9545 except that it lacks interrupt lines. While the i2c_device_id table mapped the pca9546 to the pca9545 definition the compatible table did not. 
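The new entry simply mirrors the pca9545 switch description, minus the interrupt lines the pca9546 does not have, so the change below reduces to a sketch like this (driver-internal types assumed):

	[pca_9546] = {
		.nchans  = 4,			/* four downstream channels */
		.muxtype = pca954x_isswi,	/* a switch, not a mux */
	},
	...
	{ "pca9546", pca_9546 },		/* previously mapped to pca_9545 */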
Signed-off-by: Mike Looijmans Signed-off-by: Peter Rosin Cc: Hauke Mehrtens Signed-off-by: Greg Kroah-Hartman --- drivers/i2c/muxes/i2c-mux-pca954x.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) --- a/drivers/i2c/muxes/i2c-mux-pca954x.c +++ b/drivers/i2c/muxes/i2c-mux-pca954x.c @@ -96,6 +96,10 @@ static const struct chip_desc chips[] = .nchans = 4, .muxtype = pca954x_isswi, }, + [pca_9546] = { + .nchans = 4, + .muxtype = pca954x_isswi, + }, [pca_9547] = { .nchans = 8, .enable = 0x8, @@ -113,7 +117,7 @@ static const struct i2c_device_id pca954 { "pca9543", pca_9543 }, { "pca9544", pca_9544 }, { "pca9545", pca_9545 }, - { "pca9546", pca_9545 }, + { "pca9546", pca_9546 }, { "pca9547", pca_9547 }, { "pca9548", pca_9548 }, { } From patchwork Fri Nov 20 11:03:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 329862 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 63615C8300A for ; Fri, 20 Nov 2020 11:11:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 28C0922201 for ; Fri, 20 Nov 2020 11:11:05 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=linuxfoundation.org header.i=@linuxfoundation.org header.b="zVis4Cs4" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727898AbgKTLEM (ORCPT ); Fri, 20 Nov 2020 06:04:12 -0500 Received: from mail.kernel.org ([198.145.29.99]:51316 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727479AbgKTLEL (ORCPT ); Fri, 20 Nov 2020 06:04:11 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 9200722255; Fri, 20 Nov 2020 11:04:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1605870251; bh=AzG+hYW+2MMV6CP9u3mWyQwef1/Y96kbJ63kZ3Op9R8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=zVis4Cs4765sEicjkF5hkHwHSAKvetvOAuylhH0hhpzob2po7IkZ5+M+SxR3WcZJi eZggogNTkzmAExfA//P9FkuLwPLuyv43TRpa+gckQK42tWItEpAhIwYcyhXC8Ro99W PaeeY9jRYokBGBSJ3K5rubnRmnPTdk8nPM6/jfHc= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Christophe Leroy , Michael Ellerman Subject: [PATCH 4.9 12/16] powerpc/8xx: Always fault when _PAGE_ACCESSED is not set Date: Fri, 20 Nov 2020 12:03:17 +0100 Message-Id: <20201120104540.338331426@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201120104539.706905067@linuxfoundation.org> References: <20201120104539.706905067@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Christophe Leroy commit 29daf869cbab69088fe1755d9dd224e99ba78b56 upstream. The kernel expects pte_young() to work regardless of CONFIG_SWAP. 
Make sure a minor fault is taken to set _PAGE_ACCESSED when it is not already set, regardless of the selection of CONFIG_SWAP. This adds at least 3 instructions to the TLB miss exception handlers fast path. Following patch will reduce this overhead. Also update the rotation instruction to the correct number of bits to reflect all changes done to _PAGE_ACCESSED over time. Fixes: d069cb4373fe ("powerpc/8xx: Don't touch ACCESSED when no SWAP.") Fixes: 5f356497c384 ("powerpc/8xx: remove unused _PAGE_WRITETHRU") Fixes: e0a8e0d90a9f ("powerpc/8xx: Handle PAGE_USER via APG bits") Fixes: 5b2753fc3e8a ("powerpc/8xx: Implementation of PAGE_EXEC") Fixes: a891c43b97d3 ("powerpc/8xx: Prepare handlers for _PAGE_HUGE for 512k pages.") Cc: stable@vger.kernel.org Signed-off-by: Christophe Leroy Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/af834e8a0f1fa97bfae65664950f0984a70c4750.1602492856.git.christophe.leroy@csgroup.eu Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/kernel/head_8xx.S | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) --- a/arch/powerpc/kernel/head_8xx.S +++ b/arch/powerpc/kernel/head_8xx.S @@ -359,11 +359,9 @@ InstructionTLBMiss: /* Load the MI_TWC with the attributes for this "segment." */ MTSPR_CPU6(SPRN_MI_TWC, r11, r3) /* Set segment attributes */ -#ifdef CONFIG_SWAP - rlwinm r11, r10, 32-5, _PAGE_PRESENT + rlwinm r11, r10, 32-11, _PAGE_PRESENT and r11, r11, r10 rlwimi r10, r11, 0, _PAGE_PRESENT -#endif li r11, RPN_PATTERN /* The Linux PTE won't go exactly into the MMU TLB. * Software indicator bits 20-23 and 28 must be clear. @@ -443,11 +441,9 @@ _ENTRY(DTLBMiss_jmp) * r11 = ((r10 & PRESENT) & ((r10 & ACCESSED) >> 5)); * r10 = (r10 & ~PRESENT) | r11; */ -#ifdef CONFIG_SWAP - rlwinm r11, r10, 32-5, _PAGE_PRESENT + rlwinm r11, r10, 32-11, _PAGE_PRESENT and r11, r11, r10 rlwimi r10, r11, 0, _PAGE_PRESENT -#endif /* The Linux PTE won't go exactly into the MMU TLB. * Software indicator bits 22 and 28 must be clear. 
* Software indicator bits 24, 25, 26, and 27 must be From patchwork Fri Nov 20 11:03:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 329864 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 31183C8300B for ; Fri, 20 Nov 2020 11:11:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E656F2222F for ; Fri, 20 Nov 2020 11:11:04 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=linuxfoundation.org header.i=@linuxfoundation.org header.b="BmJ8tHur" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727907AbgKTLEQ (ORCPT ); Fri, 20 Nov 2020 06:04:16 -0500 Received: from mail.kernel.org ([198.145.29.99]:51394 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727479AbgKTLEO (ORCPT ); Fri, 20 Nov 2020 06:04:14 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 8ACCD2240C; Fri, 20 Nov 2020 11:04:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1605870254; bh=CS1m08S/M49rUEL6Eyl6/INRnwZpMB9PJeYy452BtIY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BmJ8tHurGMNHLOMuQ7OPSaDir6x1x8W/BQ4/iPY97afug1gBHDXOQwNtnsi2E6wYk GIyjQN5TIVfLrAky2p+c32KcGOB+0GZBI8QvU+Y88/bROv3yETBePdT/KNfhinBxwA HdkfqvAQ8Hn+F1p8p3HGzqt0EwzXxZQ1eOZIs9L4= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Bodong Zhao , Dmitry Torokhov Subject: [PATCH 4.9 13/16] Input: sunkbd - avoid use-after-free in teardown paths Date: Fri, 20 Nov 2020 12:03:18 +0100 Message-Id: <20201120104540.385301968@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201120104539.706905067@linuxfoundation.org> References: <20201120104539.706905067@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Dmitry Torokhov commit 77e70d351db7de07a46ac49b87a6c3c7a60fca7e upstream. We need to make sure we cancel the reinit work before we tear down the driver structures. 
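The fix follows the usual teardown ordering for deferred work; condensed from the sunkbd_enable(false) path below (the _sketch name is only for illustration):

static void sunkbd_disable_sketch(struct sunkbd *sunkbd)
{
	serio_pause_rx(sunkbd->serio);		/* stop the ISR scheduling new work */
	sunkbd->enabled = false;
	serio_continue_rx(sunkbd->serio);

	wake_up_interruptible(&sunkbd->wait);	/* unblock sunkbd_reinit() */
	cancel_work_sync(&sunkbd->tq);		/* and wait for it to finish */
}

Only after cancel_work_sync() returns is it safe to unregister the input device and free the sunkbd structure.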
Reported-by: Bodong Zhao Tested-by: Bodong Zhao Cc: stable@vger.kernel.org Signed-off-by: Dmitry Torokhov Signed-off-by: Greg Kroah-Hartman --- drivers/input/keyboard/sunkbd.c | 41 ++++++++++++++++++++++++++++++++-------- 1 file changed, 33 insertions(+), 8 deletions(-) --- a/drivers/input/keyboard/sunkbd.c +++ b/drivers/input/keyboard/sunkbd.c @@ -115,7 +115,8 @@ static irqreturn_t sunkbd_interrupt(stru switch (data) { case SUNKBD_RET_RESET: - schedule_work(&sunkbd->tq); + if (sunkbd->enabled) + schedule_work(&sunkbd->tq); sunkbd->reset = -1; break; @@ -216,16 +217,12 @@ static int sunkbd_initialize(struct sunk } /* - * sunkbd_reinit() sets leds and beeps to a state the computer remembers they - * were in. + * sunkbd_set_leds_beeps() sets leds and beeps to a state the computer remembers + * they were in. */ -static void sunkbd_reinit(struct work_struct *work) +static void sunkbd_set_leds_beeps(struct sunkbd *sunkbd) { - struct sunkbd *sunkbd = container_of(work, struct sunkbd, tq); - - wait_event_interruptible_timeout(sunkbd->wait, sunkbd->reset >= 0, HZ); - serio_write(sunkbd->serio, SUNKBD_CMD_SETLED); serio_write(sunkbd->serio, (!!test_bit(LED_CAPSL, sunkbd->dev->led) << 3) | @@ -238,11 +235,39 @@ static void sunkbd_reinit(struct work_st SUNKBD_CMD_BELLOFF - !!test_bit(SND_BELL, sunkbd->dev->snd)); } + +/* + * sunkbd_reinit() wait for the keyboard reset to complete and restores state + * of leds and beeps. + */ + +static void sunkbd_reinit(struct work_struct *work) +{ + struct sunkbd *sunkbd = container_of(work, struct sunkbd, tq); + + /* + * It is OK that we check sunkbd->enabled without pausing serio, + * as we only want to catch true->false transition that will + * happen once and we will be woken up for it. + */ + wait_event_interruptible_timeout(sunkbd->wait, + sunkbd->reset >= 0 || !sunkbd->enabled, + HZ); + + if (sunkbd->reset >= 0 && sunkbd->enabled) + sunkbd_set_leds_beeps(sunkbd); +} + static void sunkbd_enable(struct sunkbd *sunkbd, bool enable) { serio_pause_rx(sunkbd->serio); sunkbd->enabled = enable; serio_continue_rx(sunkbd->serio); + + if (!enable) { + wake_up_interruptible(&sunkbd->wait); + cancel_work_sync(&sunkbd->tq); + } } /* From patchwork Fri Nov 20 11:03:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 329895 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 111D7C64E7D for ; Fri, 20 Nov 2020 11:04:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BD5A322264 for ; Fri, 20 Nov 2020 11:04:45 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=linuxfoundation.org header.i=@linuxfoundation.org header.b="WT3YJICO" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727922AbgKTLET (ORCPT ); Fri, 20 Nov 2020 06:04:19 -0500 Received: from mail.kernel.org ([198.145.29.99]:51524 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
S1727479AbgKTLER (ORCPT ); Fri, 20 Nov 2020 06:04:17 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 38CB122264; Fri, 20 Nov 2020 11:04:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1605870256; bh=YyNMP4s+fAVEs4wzrJXXlfuTu+zbfEM82cunMjayh9g=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=WT3YJICOZyB3K55uaQonRd1TdGQBFL98lQrkwGGlJ1UfLS8HUztrxZrpTGA8NZ3dq l/6PqBGlCN2UDwYqPrtIa1mDVwTFTMM5SX1Kw6GJ34ymVVzIyb5ozM4aKpdKLE3itU F75vWIiqXlfYDnHOQDMfNJnb8Mhpgb/FIhdHEgGE= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, syzbot+2e293dbd67de2836ba42@syzkaller.appspotmail.com, Johannes Berg Subject: [PATCH 4.9 14/16] mac80211: always wind down STA state Date: Fri, 20 Nov 2020 12:03:19 +0100 Message-Id: <20201120104540.427104816@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201120104539.706905067@linuxfoundation.org> References: <20201120104539.706905067@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Johannes Berg commit dcd479e10a0510522a5d88b29b8f79ea3467d501 upstream. When (for example) an IBSS station is pre-moved to AUTHORIZED before it's inserted, and then the insertion fails, we don't clean up the fast RX/TX states that might already have been created, since we don't go through all the state transitions again on the way down. Do that, if it hasn't been done already, when the station is freed. I considered only freeing the fast TX/RX state there, but we might add more state so it's more robust to wind down the state properly. Note that we warn if the station was ever inserted, it should have been properly cleaned up in that case, and the driver will probably not like things happening out of order. Reported-by: syzbot+2e293dbd67de2836ba42@syzkaller.appspotmail.com Link: https://lore.kernel.org/r/20201009141710.7223b322a955.I95bd08b9ad0e039c034927cce0b75beea38e059b@changeid Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman --- net/mac80211/sta_info.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) --- a/net/mac80211/sta_info.c +++ b/net/mac80211/sta_info.c @@ -243,6 +243,24 @@ struct sta_info *sta_info_get_by_idx(str */ void sta_info_free(struct ieee80211_local *local, struct sta_info *sta) { + /* + * If we had used sta_info_pre_move_state() then we might not + * have gone through the state transitions down again, so do + * it here now (and warn if it's inserted). + * + * This will clear state such as fast TX/RX that may have been + * allocated during state transitions. 
+ */ + while (sta->sta_state > IEEE80211_STA_NONE) { + int ret; + + WARN_ON_ONCE(test_sta_flag(sta, WLAN_STA_INSERTED)); + + ret = sta_info_move_state(sta, sta->sta_state - 1); + if (WARN_ONCE(ret, "sta_info_move_state() returned %d\n", ret)) + break; + } + if (sta->rate_ctrl) rate_control_free_sta(sta); From patchwork Fri Nov 20 11:03:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 329149 Delivered-To: patch@linaro.org Received: by 2002:a17:907:2110:0:0:0:0 with SMTP id qn16csp1166289ejb; Fri, 20 Nov 2020 03:04:46 -0800 (PST) X-Google-Smtp-Source: ABdhPJyhjTfWRkER+FQ8tPS00F72EsW75WNumyn94JMD1BUt6fxV+ZoFOoHwEuOZ0Mgpa3piBWcX X-Received: by 2002:a05:6402:2363:: with SMTP id a3mr9592331eda.388.1605870286464; Fri, 20 Nov 2020 03:04:46 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1605870286; cv=none; d=google.com; s=arc-20160816; b=XzvWbFLA2kH7ibpJ3PtUcmZQKNU8oEiWhkDjwI7NaF5yP5lFpxAtaHcSuxrIfGvDwp TfTn/z0eLFyjBJpfXCAspFkKhJ4FZ2wWz6Y3aCZg/fyFpYNsMJwwh9PsvQ74g94nTTsz WmNRNpOollV4Ehr96A27cvnWsG35nD3AE8fRKzijN4hMVTXcU/Lu9CI02oX7delf3i3L IGynBdqG8IwRT4s3aqXh2s57nr84Bin34cXMGv892tyxVvN7/WEpcVqg1oHCDHS/5gTN NEnXYJsuZNMiEjO9BzOI+HpGOR9j8fIE7hIDky/4UNx5cEwnpxMS6qhozy+BqtIJIlln LEhg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=YSLkabvB5PDtpavmmsKJMM7OWpFDAzlnbnlp2OM8hLk=; b=PpocC5BFRfnMmHWej8PfD7017BATPGsjqfhdNMz2X/mDV3VSGqiwdC1nF6MbHZ6Eul +iQfjhfQeqxwSTwaoFm6GmDh2SY24Ae+ZprcyC7+DoF4h6k5KDZPiQRz8N+AYbOOAXah Pr4VJYsT2Z/Dhdk660+f5meAms2PT4jSQGvGjh02ypyiDJsJ89KlcFBYUbskN1qB8uaI eZdp2Xn5TPXtUZVSimkDOoKibI+D/aWS0rInq7mzUKMHN0aRn4TaFrggmMPyMoobr7eO WNcBK8V70IP9ZZsRsYCkzcsg178rWcckIQXtnl9INAne9Xp1w0mw0t0O8huynJ03qXaX TqWQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linuxfoundation.org header.s=korg header.b=WCCFBIJQ; spf=pass (google.com: domain of stable-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linuxfoundation.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[23.128.96.18]) by mx.google.com with ESMTP id bx13si1538655edb.284.2020.11.20.03.04.46; Fri, 20 Nov 2020 03:04:46 -0800 (PST) Received-SPF: pass (google.com: domain of stable-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) client-ip=23.128.96.18; Authentication-Results: mx.google.com; dkim=pass header.i=@linuxfoundation.org header.s=korg header.b=WCCFBIJQ; spf=pass (google.com: domain of stable-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linuxfoundation.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727942AbgKTLEX (ORCPT + 14 others); Fri, 20 Nov 2020 06:04:23 -0500 Received: from mail.kernel.org ([198.145.29.99]:51820 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727940AbgKTLEX (ORCPT ); Fri, 20 Nov 2020 06:04:23 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id D260B2242B; Fri, 20 Nov 2020 11:04:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1605870262; bh=OFN24E43YJvgaH4jHFbBy/LxYpvbUaHvyiPjFyTyebU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=WCCFBIJQT42JE0pfmmh/RBTLpdeRw0JG78bDzxSOWY4RNONCqumpWfh7GOCSPTkKM 78sZJzOP6SM17SvpS1rw0g4APuXWqYrVLcjIwqx5LHggDL3Ir+2N4RkdyH9vfI46S/ 60yUzk7WGgYPyfIz7eehVmTvGpqYJOfDaMxLvuyg= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Ard Biesheuvel , Nick Desaulniers , "Rafael J. Wysocki" Subject: [PATCH 4.9 16/16] ACPI: GED: fix -Wformat Date: Fri, 20 Nov 2020 12:03:21 +0100 Message-Id: <20201120104540.521558662@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201120104539.706905067@linuxfoundation.org> References: <20201120104539.706905067@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Nick Desaulniers commit 9debfb81e7654fe7388a49f45bc4d789b94c1103 upstream. Clang is more aggressive about -Wformat warnings when the format flag specifies a type smaller than the parameter. It turns out that gsi is an int. Fixes: drivers/acpi/evged.c:105:48: warning: format specifies type 'unsigned char' but the argument has type 'unsigned int' [-Wformat] trigger == ACPI_EDGE_SENSITIVE ? 'E' : 'L', gsi); ^~~ Link: https://github.com/ClangBuiltLinux/linux/issues/378 Fixes: ea6f3af4c5e6 ("ACPI: GED: add support for _Exx / _Lxx handler methods") Acked-by: Ard Biesheuvel Signed-off-by: Nick Desaulniers Signed-off-by: Rafael J. Wysocki Signed-off-by: Greg Kroah-Hartman --- drivers/acpi/evged.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/drivers/acpi/evged.c +++ b/drivers/acpi/evged.c @@ -104,7 +104,7 @@ static acpi_status acpi_ged_request_inte switch (gsi) { case 0 ... 255: - sprintf(ev_name, "_%c%02hhX", + sprintf(ev_name, "_%c%02X", trigger == ACPI_EDGE_SENSITIVE ? 'E' : 'L', gsi); if (ACPI_SUCCESS(acpi_get_handle(handle, ev_name, &evt_handle)))
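The warning is easy to reproduce outside the kernel; a minimal userspace sketch (not kernel code) of why clang complains and why the one-character change silences it:

#include <stdio.h>

int main(void)
{
	int gsi = 0x2a;				/* gsi is an int, as in evged.c */
	char ev_name[5];

	/* sprintf(ev_name, "_%c%02hhX", 'E', gsi);  -Wformat: %hhX expects a char-sized value */
	sprintf(ev_name, "_%c%02X", 'E', gsi);	/* OK: %X matches the (promoted) int */

	puts(ev_name);				/* prints "_E2A" */
	return 0;
}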