From patchwork Fri Nov 20 00:06:57 2020
X-Patchwork-Submitter: Daniel Axtens <dja@axtens.net>
X-Patchwork-Id: 329102
From: Daniel Axtens <dja@axtens.net>
To: stable@vger.kernel.org
Cc: dja@axtens.net
Subject: [PATCH 4.4 1/8] powerpc/64s: Define MASKABLE_RELON_EXCEPTION_PSERIES_OOL
Date: Fri, 20 Nov 2020 11:06:57 +1100
Message-Id: <20201120000704.374811-2-dja@axtens.net>
In-Reply-To: <20201120000704.374811-1-dja@axtens.net>
References: <20201120000704.374811-1-dja@axtens.net>

Add a definition provided by mpe and fixed up for 4.4. It doesn't exist
in 4.4 and we'd quite like to use it.

Signed-off-by: Daniel Axtens <dja@axtens.net>
---
 arch/powerpc/include/asm/exception-64s.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 3ed536bec462..26f00ab2d0c9 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -597,6 +597,12 @@ label##_relon_hv: \
     EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_HV, vec); \
     EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV);
 
+#define MASKABLE_RELON_EXCEPTION_PSERIES_OOL(vec, label) \
+    .globl label##_relon_pSeries; \
+label##_relon_pSeries: \
+    EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_PR, vec); \
+    EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD)
+
 /*
  * Our exception common code can be passed various "additions"
  * to specify the behaviour of interrupts, whether to kick the
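For readers unfamiliar with the token pasting these exception macros rely on,
here is a minimal, hypothetical C sketch of the same label##_suffix technique;
the names are illustrative only, not taken from the kernel:

    #include <stdio.h>

    /* Paste a label prefix onto fixed suffixes, the same way
     * MASKABLE_RELON_EXCEPTION_PSERIES_OOL builds label##_relon_pSeries
     * and label##_common from a single argument. */
    #define DEFINE_HANDLER(label) \
        void label##_relon_pSeries(void) { label##_common(); }

    static void decrementer_common(void) { puts("decrementer_common"); }

    DEFINE_HANDLER(decrementer)  /* defines decrementer_relon_pSeries() */

    int main(void)
    {
        decrementer_relon_pSeries();
        return 0;
    }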
From patchwork Fri Nov 20 00:06:58 2020
X-Patchwork-Submitter: Daniel Axtens <dja@axtens.net>
X-Patchwork-Id: 329101
From: Daniel Axtens <dja@axtens.net>
To: stable@vger.kernel.org
Cc: dja@axtens.net
Subject: [PATCH 4.4 2/8] powerpc/64s: move some exception handlers out of line
Date: Fri, 20 Nov 2020 11:06:58 +1100
Message-Id: <20201120000704.374811-3-dja@axtens.net>
In-Reply-To: <20201120000704.374811-1-dja@axtens.net>
References: <20201120000704.374811-1-dja@axtens.net>

(backport only)

We're about to grow the exception handlers, which will make a bunch of
them no longer fit within the space available. We move them out of line.

This is a fiddly and error-prone business, so in the interests of
reviewability I haven't merged this in with the addition of the entry
flush.

Signed-off-by: Daniel Axtens <dja@axtens.net>
---
 arch/powerpc/kernel/exceptions-64s.S | 138 +++++++++++++++++----------
 1 file changed, 90 insertions(+), 48 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 536718ed033f..3d843e1a162c 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -202,8 +202,8 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 data_access_pSeries:
     HMT_MEDIUM_PPR_DISCARD
     SET_SCRATCH0(r13)
-    EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, data_access_common, EXC_STD,
-                             KVMTEST, 0x300)
+    EXCEPTION_PROLOG_0(PACA_EXGEN)
+    b   data_access_pSeries_ool
 
     . = 0x380
     .globl data_access_slb_pSeries
@@ -211,31 +211,15 @@ data_access_slb_pSeries:
     HMT_MEDIUM_PPR_DISCARD
     SET_SCRATCH0(r13)
     EXCEPTION_PROLOG_0(PACA_EXSLB)
-    EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST, 0x380)
-    std r3,PACA_EXSLB+EX_R3(r13)
-    mfspr   r3,SPRN_DAR
-#ifdef __DISABLED__
-    /* Keep that around for when we re-implement dynamic VSIDs */
-    cmpdi   r3,0
-    bge slb_miss_user_pseries
-#endif /* __DISABLED__ */
-    mfspr   r12,SPRN_SRR1
-#ifndef CONFIG_RELOCATABLE
-    b   slb_miss_realmode
-#else
-    /*
-     * We can't just use a direct branch to slb_miss_realmode
-     * because the distance from here to there depends on where
-     * the kernel ends up being put.
-     */
-    mfctr   r11
-    ld  r10,PACAKBASE(r13)
-    LOAD_HANDLER(r10, slb_miss_realmode)
-    mtctr   r10
-    bctr
-#endif
+    b   data_access_slb_pSeries_ool
 
-    STD_EXCEPTION_PSERIES(0x400, 0x400, instruction_access)
+    . = 0x400
+    .globl instruction_access_pSeries
+instruction_access_pSeries:
+    HMT_MEDIUM_PPR_DISCARD
+    SET_SCRATCH0(r13)
+    EXCEPTION_PROLOG_0(PACA_EXGEN)
+    b   instruction_access_pSeries_ool
 
     . = 0x480
     .globl instruction_access_slb_pSeries
@@ -243,24 +227,7 @@ instruction_access_slb_pSeries:
     HMT_MEDIUM_PPR_DISCARD
     SET_SCRATCH0(r13)
     EXCEPTION_PROLOG_0(PACA_EXSLB)
-    EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x480)
-    std r3,PACA_EXSLB+EX_R3(r13)
-    mfspr   r3,SPRN_SRR0    /* SRR0 is faulting address */
-#ifdef __DISABLED__
-    /* Keep that around for when we re-implement dynamic VSIDs */
-    cmpdi   r3,0
-    bge slb_miss_user_pseries
-#endif /* __DISABLED__ */
-    mfspr   r12,SPRN_SRR1
-#ifndef CONFIG_RELOCATABLE
-    b   slb_miss_realmode
-#else
-    mfctr   r11
-    ld  r10,PACAKBASE(r13)
-    LOAD_HANDLER(r10, slb_miss_realmode)
-    mtctr   r10
-    bctr
-#endif
+    b   instruction_access_slb_pSeries_ool
 
 /* We open code these as we can't have a ". = x" (even with
  * x = "." within a feature section
@@ -291,13 +258,19 @@ hardware_interrupt_hv:
     KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x800)
 
     . = 0x900
-    .globl decrementer_pSeries
-decrementer_pSeries:
+    .globl decrementer_trampoline
+decrementer_trampoline:
     SET_SCRATCH0(r13)
     EXCEPTION_PROLOG_0(PACA_EXGEN)
     b   decrementer_ool
 
-    STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
+    . = 0x980
+    .globl hdecrementer_trampoline
+hdecrementer_trampoline:
+    HMT_MEDIUM_PPR_DISCARD;
+    SET_SCRATCH0(r13);
+    EXCEPTION_PROLOG_0(PACA_EXGEN)
+    b   hdecrementer_hv
 
     MASKABLE_EXCEPTION_PSERIES(0xa00, 0xa00, doorbell_super)
     KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xa00)
@@ -545,6 +518,64 @@ machine_check_pSeries_0:
     KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x900)
     KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x982)
 
+/* moved from 0x300 */
+    .globl data_access_pSeries_ool
+data_access_pSeries_ool:
+    EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST, 0x300)
+    EXCEPTION_PROLOG_PSERIES_1(data_access_common, EXC_STD)
+
+    .globl data_access_slb_pSeries_ool
+data_access_slb_pSeries_ool:
+    EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST, 0x380)
+    std r3,PACA_EXSLB+EX_R3(r13)
+    mfspr   r3,SPRN_DAR
+#ifdef __DISABLED__
+    /* Keep that around for when we re-implement dynamic VSIDs */
+    cmpdi   r3,0
+    bge slb_miss_user_pseries
+#endif /* __DISABLED__ */
+    mfspr   r12,SPRN_SRR1
+#ifndef CONFIG_RELOCATABLE
+    b   slb_miss_realmode
+#else
+    /*
+     * We can't just use a direct branch to slb_miss_realmode
+     * because the distance from here to there depends on where
+     * the kernel ends up being put.
+     */
+    mfctr   r11
+    ld  r10,PACAKBASE(r13)
+    LOAD_HANDLER(r10, slb_miss_realmode)
+    mtctr   r10
+    bctr
+#endif
+
+    .globl instruction_access_pSeries_ool
+instruction_access_pSeries_ool:
+    EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_PR, 0x400)
+    EXCEPTION_PROLOG_PSERIES_1(instruction_access_common, EXC_STD)
+
+    .globl instruction_access_slb_pSeries_ool
+instruction_access_slb_pSeries_ool:
+    EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x480)
+    std r3,PACA_EXSLB+EX_R3(r13)
+    mfspr   r3,SPRN_SRR0    /* SRR0 is faulting address */
+#ifdef __DISABLED__
+    /* Keep that around for when we re-implement dynamic VSIDs */
+    cmpdi   r3,0
+    bge slb_miss_user_pseries
+#endif /* __DISABLED__ */
+    mfspr   r12,SPRN_SRR1
+#ifndef CONFIG_RELOCATABLE
+    b   slb_miss_realmode
+#else
+    mfctr   r11
+    ld  r10,PACAKBASE(r13)
+    LOAD_HANDLER(r10, slb_miss_realmode)
+    mtctr   r10
+    bctr
+#endif
+
 #ifdef CONFIG_PPC_DENORMALISATION
 denorm_assist:
 BEGIN_FTR_SECTION
@@ -612,6 +643,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
     .align  7
     /* moved from 0xe00 */
     MASKABLE_EXCEPTION_OOL(0x900, decrementer)
+    STD_EXCEPTION_HV_OOL(0x982, hdecrementer)
     STD_EXCEPTION_HV_OOL(0xe02, h_data_storage)
     KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0xe02)
     STD_EXCEPTION_HV_OOL(0xe22, h_instr_storage)
@@ -894,7 +926,15 @@ hardware_interrupt_relon_hv:
     STD_RELON_EXCEPTION_PSERIES(0x4600, 0x600, alignment)
     STD_RELON_EXCEPTION_PSERIES(0x4700, 0x700, program_check)
     STD_RELON_EXCEPTION_PSERIES(0x4800, 0x800, fp_unavailable)
-    MASKABLE_RELON_EXCEPTION_PSERIES(0x4900, 0x900, decrementer)
+
+    . = 0x4900
+    .globl decrementer_relon_trampoline
+decrementer_relon_trampoline:
+    HMT_MEDIUM_PPR_DISCARD
+    SET_SCRATCH0(r13)
+    EXCEPTION_PROLOG_0(PACA_EXGEN)
+    b   decrementer_relon_pSeries
+
     STD_RELON_EXCEPTION_HV(0x4980, 0x982, hdecrementer)
     MASKABLE_RELON_EXCEPTION_PSERIES(0x4a00, 0xa00, doorbell_super)
     STD_RELON_EXCEPTION_PSERIES(0x4b00, 0xb00, trap_0b)
@@ -1244,6 +1284,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 __end_handlers:
 
 /* Equivalents to the above handlers for relocation-on interrupt vectors */
+    MASKABLE_RELON_EXCEPTION_PSERIES_OOL(0x900, decrementer)
+
     STD_RELON_EXCEPTION_HV_OOL(0xe40, emulation_assist)
     MASKABLE_RELON_EXCEPTION_HV_OOL(0xe80, h_doorbell)
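The move above can be pictured in plain C: each exception vector is a
fixed-size slot, and when a handler outgrows its slot, the slot keeps only a
short trampoline that branches to the full handler placed elsewhere. The slot
size and names below are a hypothetical model, not kernel code:

    #include <stdio.h>

    #define SLOT_SIZE 0x80   /* illustrative vector-slot size */

    struct vector_slot {
        void (*branch_target)(void);  /* models the "b ..._ool" branch */
        unsigned char pad[SLOT_SIZE - sizeof(void (*)(void))];
    };

    /* Full handler, "out of line": too big to live in the slot itself. */
    static void data_access_pSeries_ool(void)
    {
        puts("0x300 data access handled out of line");
    }

    int main(void)
    {
        struct vector_slot vec_0x300 = {
            .branch_target = data_access_pSeries_ool,
        };
        vec_0x300.branch_target();  /* the exception is "taken" */
        return 0;
    }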
From patchwork Thu Nov 19 23:57:38 2020
X-Patchwork-Submitter: Daniel Axtens <dja@axtens.net>
X-Patchwork-Id: 329104
From: Daniel Axtens <dja@axtens.net>
To: stable@vger.kernel.org
Cc: dja@axtens.net
Subject: [PATCH 4.9 3/8] powerpc/64s: flush L1D on kernel entry
Date: Fri, 20 Nov 2020 10:57:38 +1100
Message-Id: <20201119235743.373635-4-dja@axtens.net>
In-Reply-To: <20201119235743.373635-1-dja@axtens.net>
References: <20201119235743.373635-1-dja@axtens.net>

From: Nicholas Piggin

commit f79643787e0a0762d2409b7b8334e83f22d85695 upstream.

IBM Power9 processors can speculatively operate on data in the L1 cache
before it has been completely validated, via a way-prediction mechanism. It
is not possible for an attacker to determine the contents of impermissible
memory using this method, since these systems implement a combination of
hardware and software security measures to prevent scenarios where
protected data could be leaked.

However these measures don't address the scenario where an attacker induces
the operating system to speculatively execute instructions using data that
the attacker controls. This can be used for example to speculatively bypass
"kernel user access prevention" techniques, as discovered by Anthony
Steinhauser of Google's Safeside Project. This is not an attack by itself,
but there is a possibility it could be used in conjunction with
side-channels or other weaknesses in the privileged code to construct an
attack.
This issue can be mitigated by flushing the L1 cache between privilege
boundaries of concern. This patch flushes the L1 cache on kernel entry.

This is part of the fix for CVE-2020-4788.

Signed-off-by: Nicholas Piggin
Signed-off-by: Daniel Axtens <dja@axtens.net>
---
 Documentation/kernel-parameters.txt          |  3 +
 arch/powerpc/include/asm/exception-64s.h     |  9 ++-
 arch/powerpc/include/asm/feature-fixups.h    | 10 ++++
 arch/powerpc/include/asm/security_features.h |  4 ++
 arch/powerpc/include/asm/setup.h             |  3 +
 arch/powerpc/kernel/exceptions-64s.S         | 49 +++++++++++++++--
 arch/powerpc/kernel/setup_64.c               | 58 ++++++++++++++++++++
 arch/powerpc/kernel/vmlinux.lds.S            |  7 +++
 arch/powerpc/lib/feature-fixups.c            | 54 ++++++++++++++++++
 arch/powerpc/platforms/powernv/setup.c       | 10 ++++
 arch/powerpc/platforms/pseries/setup.c       |  4 ++
 11 files changed, 205 insertions(+), 6 deletions(-)

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 40602517ca52..92ec5ab0f3e9 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2527,6 +2527,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
                mds=off [X86]
                tsx_async_abort=off [X86]
                kvm.nx_huge_pages=off [X86]
+               no_entry_flush [PPC]
 
                Exceptions:
                       This does not have any effect on
@@ -2833,6 +2834,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 
     noefi      Disable EFI runtime services support.
 
+    no_entry_flush  [PPC] Don't flush the L1-D cache when entering the kernel.
+
     noexec     [IA-64]
 
     noexec     [X86]
diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 9616fe842202..6ffec5b18a6d 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -66,11 +66,18 @@
     nop; \
     nop
 
+#define ENTRY_FLUSH_SLOT \
+    ENTRY_FLUSH_FIXUP_SECTION; \
+    nop; \
+    nop; \
+    nop;
+
 /*
  * r10 must be free to use, r13 must be paca
  */
 #define INTERRUPT_TO_KERNEL \
-    STF_ENTRY_BARRIER_SLOT
+    STF_ENTRY_BARRIER_SLOT; \
+    ENTRY_FLUSH_SLOT
 
 /*
  * Macros for annotating the expected destination of (h)rfid
diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
index 175128e19025..db8d384f7b09 100644
--- a/arch/powerpc/include/asm/feature-fixups.h
+++ b/arch/powerpc/include/asm/feature-fixups.h
@@ -205,6 +205,14 @@ void setup_feature_keys(void);
     FTR_ENTRY_OFFSET 955b-956b; \
     .popsection;
 
+#define ENTRY_FLUSH_FIXUP_SECTION \
+957: \
+    .pushsection __entry_flush_fixup,"a"; \
+    .align 2; \
+958: \
+    FTR_ENTRY_OFFSET 957b-958b; \
+    .popsection;
+
 #define RFI_FLUSH_FIXUP_SECTION \
 951: \
     .pushsection __rfi_flush_fixup,"a"; \
@@ -236,8 +244,10 @@ void setup_feature_keys(void);
 #ifndef __ASSEMBLY__
 
 extern long stf_barrier_fallback;
+extern long entry_flush_fallback;
 extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
 extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
+extern long __start___entry_flush_fixup, __stop___entry_flush_fixup;
 extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
 extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
 extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
index ccf44c135389..082b56bf678d 100644
--- a/arch/powerpc/include/asm/security_features.h
+++ b/arch/powerpc/include/asm/security_features.h
@@ -84,12 +84,16 @@ static inline bool security_ftr_enabled(unsigned long feature)
 // Software required to flush link stack on context switch
 #define SEC_FTR_FLUSH_LINK_STACK    0x0000000000001000ull
 
+// The L1-D cache should be flushed when entering the kernel
+#define SEC_FTR_L1D_FLUSH_ENTRY     0x0000000000004000ull
+
 // Features enabled by default
 #define SEC_FTR_DEFAULT \
     (SEC_FTR_L1D_FLUSH_HV | \
      SEC_FTR_L1D_FLUSH_PR | \
      SEC_FTR_BNDS_CHK_SPEC_BAR | \
+     SEC_FTR_L1D_FLUSH_ENTRY | \
      SEC_FTR_FAVOUR_SECURITY)
 
 #endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 862ebce3ae54..da9ae3a1bfd7 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -50,12 +50,15 @@ enum l1d_flush_type {
 };
 
 void setup_rfi_flush(enum l1d_flush_type, bool enable);
+void setup_entry_flush(bool enable);
+void setup_uaccess_flush(bool enable);
 void do_rfi_flush_fixups(enum l1d_flush_type types);
 #ifdef CONFIG_PPC_BARRIER_NOSPEC
 void setup_barrier_nospec(void);
 #else
 static inline void setup_barrier_nospec(void) { };
 #endif
+void do_entry_flush_fixups(enum l1d_flush_type types);
 void do_barrier_nospec_fixups(bool enable);
 extern bool barrier_nospec_enabled;
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 05c1f0c90316..e31c362e6d83 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -487,7 +487,7 @@ EXC_COMMON_BEGIN(unrecover_mce)
     b   1b
 
 
-EXC_REAL(data_access, 0x300, 0x380)
+EXC_REAL_OOL(data_access, 0x300, 0x380)
 EXC_VIRT(data_access, 0x4300, 0x4380, 0x300)
 TRAMP_KVM_SKIP(PACA_EXGEN, 0x300)
 
@@ -567,7 +567,7 @@ EXC_VIRT_END(data_access_slb, 0x4380, 0x4400)
 TRAMP_KVM_SKIP(PACA_EXSLB, 0x380)
 
 
-EXC_REAL(instruction_access, 0x400, 0x480)
+EXC_REAL_OOL(instruction_access, 0x400, 0x480)
 EXC_VIRT(instruction_access, 0x4400, 0x4480, 0x400)
 TRAMP_KVM(PACA_EXGEN, 0x400)
 
@@ -857,13 +857,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
 
 
 EXC_REAL_OOL_MASKABLE(decrementer, 0x900, 0x980)
-EXC_VIRT_MASKABLE(decrementer, 0x4900, 0x4980, 0x900)
+EXC_VIRT_OOL_MASKABLE(decrementer, 0x4900, 0x4980, 0x900)
 TRAMP_KVM(PACA_EXGEN, 0x900)
 EXC_COMMON_ASYNC(decrementer_common, 0x900, timer_interrupt)
 
 
-EXC_REAL_HV(hdecrementer, 0x980, 0xa00)
-EXC_VIRT_HV(hdecrementer, 0x4980, 0x4a00, 0x980)
+EXC_REAL_OOL_HV(hdecrementer, 0x980, 0xa00)
+EXC_VIRT_OOL_HV(hdecrementer, 0x4980, 0x4a00, 0x980)
 TRAMP_KVM_HV(PACA_EXGEN, 0x980)
 EXC_COMMON(hdecrementer_common, 0x980, hdec_interrupt)
 
@@ -1706,6 +1706,45 @@ hrfi_flush_fallback:
     GET_SCRATCH0(r13);
     hrfid
 
+    .globl entry_flush_fallback
+entry_flush_fallback:
+    std r9,PACA_EXRFI+EX_R9(r13)
+    std r10,PACA_EXRFI+EX_R10(r13)
+    std r11,PACA_EXRFI+EX_R11(r13)
+    mfctr   r9
+    ld  r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13)
+    ld  r11,PACA_L1D_FLUSH_SIZE(r13)
+    srdi    r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */
+    mtctr   r11
+    DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */
+
+    /* order ld/st prior to dcbt stop all streams with flushing */
+    sync
+
+    /*
+     * The load addresses are at staggered offsets within cachelines,
+     * which suits some pipelines better (on others it should not
+     * hurt).
+     */
+1:
+    ld  r11,(0x80 + 8)*0(r10)
+    ld  r11,(0x80 + 8)*1(r10)
+    ld  r11,(0x80 + 8)*2(r10)
+    ld  r11,(0x80 + 8)*3(r10)
+    ld  r11,(0x80 + 8)*4(r10)
+    ld  r11,(0x80 + 8)*5(r10)
+    ld  r11,(0x80 + 8)*6(r10)
+    ld  r11,(0x80 + 8)*7(r10)
+    addi    r10,r10,0x80*8
+    bdnz    1b
+
+    mtctr   r9
+    ld  r9,PACA_EXRFI+EX_R9(r13)
+    ld  r10,PACA_EXRFI+EX_R10(r13)
+    ld  r11,PACA_EXRFI+EX_R11(r13)
+    blr
+
+
 /*
  * Called from arch_local_irq_enable when an interrupt needs
  * to be resent. r3 contains 0x500, 0x900, 0xa00 or 0xe80 to indicate
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index fdba10695208..217785eb5ddc 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -685,7 +685,9 @@ early_initcall(disable_hardlockup_detector);
 static enum l1d_flush_type enabled_flush_types;
 static void *l1d_flush_fallback_area;
 static bool no_rfi_flush;
+static bool no_entry_flush;
 bool rfi_flush;
+bool entry_flush;
 
 static int __init handle_no_rfi_flush(char *p)
 {
@@ -695,6 +697,14 @@ static int __init handle_no_rfi_flush(char *p)
 }
 early_param("no_rfi_flush", handle_no_rfi_flush);
 
+static int __init handle_no_entry_flush(char *p)
+{
+    pr_info("entry-flush: disabled on command line.");
+    no_entry_flush = true;
+    return 0;
+}
+early_param("no_entry_flush", handle_no_entry_flush);
+
 /*
  * The RFI flush is not KPTI, but because users will see doco that says to use
  * nopti we hijack that option here to also disable the RFI flush.
@@ -726,6 +736,18 @@ void rfi_flush_enable(bool enable)
     rfi_flush = enable;
 }
 
+void entry_flush_enable(bool enable)
+{
+    if (enable) {
+        do_entry_flush_fixups(enabled_flush_types);
+        on_each_cpu(do_nothing, NULL, 1);
+    } else {
+        do_entry_flush_fixups(L1D_FLUSH_NONE);
+    }
+
+    entry_flush = enable;
+}
+
 static void __ref init_fallback_flush(void)
 {
     u64 l1d_size, limit;
@@ -771,6 +793,15 @@ void setup_rfi_flush(enum l1d_flush_type types, bool enable)
     rfi_flush_enable(enable);
 }
 
+void setup_entry_flush(bool enable)
+{
+    if (cpu_mitigations_off())
+        return;
+
+    if (!no_entry_flush)
+        entry_flush_enable(enable);
+}
+
 #ifdef CONFIG_DEBUG_FS
 static int rfi_flush_set(void *data, u64 val)
 {
@@ -798,9 +829,36 @@ static int rfi_flush_get(void *data, u64 *val)
 
 DEFINE_SIMPLE_ATTRIBUTE(fops_rfi_flush, rfi_flush_get, rfi_flush_set, "%llu\n");
 
+static int entry_flush_set(void *data, u64 val)
+{
+    bool enable;
+
+    if (val == 1)
+        enable = true;
+    else if (val == 0)
+        enable = false;
+    else
+        return -EINVAL;
+
+    /* Only do anything if we're changing state */
+    if (enable != entry_flush)
+        entry_flush_enable(enable);
+
+    return 0;
+}
+
+static int entry_flush_get(void *data, u64 *val)
+{
+    *val = entry_flush ? 1 : 0;
+    return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_entry_flush, entry_flush_get, entry_flush_set, "%llu\n");
+
 static __init int rfi_flush_debugfs_init(void)
 {
     debugfs_create_file("rfi_flush", 0600, powerpc_debugfs_root, NULL, &fops_rfi_flush);
+    debugfs_create_file("entry_flush", 0600, powerpc_debugfs_root, NULL, &fops_entry_flush);
     return 0;
 }
 device_initcall(rfi_flush_debugfs_init);
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index c20510497c49..61975435e502 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -140,6 +140,13 @@ SECTIONS
         __stop___stf_entry_barrier_fixup = .;
     }
 
+    . = ALIGN(8);
+    __entry_flush_fixup : AT(ADDR(__entry_flush_fixup) - LOAD_OFFSET) {
+        __start___entry_flush_fixup = .;
+        *(__entry_flush_fixup)
+        __stop___entry_flush_fixup = .;
+    }
+
     . = ALIGN(8);
     __stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) {
         __start___stf_exit_barrier_fixup = .;
diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
index e6ed0ec94bc8..9adbbf2d2fb9 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -232,6 +232,60 @@ void do_stf_barrier_fixups(enum stf_barrier_type types)
     do_stf_exit_barrier_fixups(types);
 }
 
+void do_entry_flush_fixups(enum l1d_flush_type types)
+{
+    unsigned int instrs[3], *dest;
+    long *start, *end;
+    int i;
+
+    start = PTRRELOC(&__start___entry_flush_fixup);
+    end = PTRRELOC(&__stop___entry_flush_fixup);
+
+    instrs[0] = 0x60000000; /* nop */
+    instrs[1] = 0x60000000; /* nop */
+    instrs[2] = 0x60000000; /* nop */
+
+    i = 0;
+    if (types == L1D_FLUSH_FALLBACK) {
+        instrs[i++] = 0x7d4802a6; /* mflr r10 */
+        instrs[i++] = 0x60000000; /* branch patched below */
+        instrs[i++] = 0x7d4803a6; /* mtlr r10 */
+    }
+
+    if (types & L1D_FLUSH_ORI) {
+        instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */
+        instrs[i++] = 0x63de0000; /* ori 30,30,0 L1d flush*/
+    }
+
+    if (types & L1D_FLUSH_MTTRIG)
+        instrs[i++] = 0x7c12dba6; /* mtspr TRIG2,r0 (SPR #882) */
+
+    for (i = 0; start < end; start++, i++) {
+        dest = (void *)start + *start;
+
+        pr_devel("patching dest %lx\n", (unsigned long)dest);
+
+        patch_instruction(dest, instrs[0]);
+
+        if (types == L1D_FLUSH_FALLBACK)
+            patch_branch((dest + 1), (unsigned long)&entry_flush_fallback,
+                         BRANCH_SET_LINK);
+        else
+            patch_instruction((dest + 1), instrs[1]);
+
+        patch_instruction((dest + 2), instrs[2]);
+    }
+
+    printk(KERN_DEBUG "entry-flush: patched %d locations (%s flush)\n", i,
+        (types == L1D_FLUSH_NONE)     ? "no" :
+        (types == L1D_FLUSH_FALLBACK) ? "fallback displacement" :
+        (types & L1D_FLUSH_ORI)       ? (types & L1D_FLUSH_MTTRIG)
+                                            ? "ori+mttrig type"
+                                            : "ori type" :
+        (types & L1D_FLUSH_MTTRIG)    ? "mttrig type"
+                                      : "unknown");
+}
+
 void do_rfi_flush_fixups(enum l1d_flush_type types)
 {
     unsigned int instrs[3], *dest;
diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index 365e2b620201..7787b4b061df 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -124,12 +124,22 @@ static void pnv_setup_rfi_flush(void)
         type = L1D_FLUSH_ORI;
     }
 
+    /*
+     * 4.9 doesn't support Power9 bare metal, so we don't need to flush
+     * here - the flush fixes a P9 specific vulnerability.
+     */
+    security_ftr_clear(SEC_FTR_L1D_FLUSH_ENTRY);
+
     enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \
          (security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) || \
           security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV));
 
     setup_rfi_flush(type, enable);
     setup_count_cache_flush();
+
+    enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+         security_ftr_enabled(SEC_FTR_L1D_FLUSH_ENTRY);
+    setup_entry_flush(enable);
 }
 
 static void __init pnv_setup_arch(void)
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 30782859d898..d9e0db9513d0 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -535,6 +535,10 @@ void pseries_setup_rfi_flush(void)
 
     setup_rfi_flush(types, enable);
     setup_count_cache_flush();
+
+    enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+         security_ftr_enabled(SEC_FTR_L1D_FLUSH_ENTRY);
+    setup_entry_flush(enable);
 }
 
 static void __init pSeries_setup_arch(void)
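The fallback path above works by displacement: it reads a dedicated region at
least as large as the L1D so every previously cached line is evicted. A
hypothetical userspace sketch of that strategy follows; the sizes (32kB L1D,
128-byte lines, a doubled region) are assumptions for illustration, and the
kernel version is the ctr-driven, 8x-unrolled asm loop shown in the patch:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define L1D_SIZE  (32 * 1024)  /* assumed L1D capacity */
    #define LINE_SIZE 128          /* assumed cache-line size */

    static volatile uint64_t sink;

    /* Touch one word in every cache line of the flush area. */
    static void displacement_flush(const unsigned char *area, size_t size)
    {
        for (size_t off = 0; off < size; off += LINE_SIZE)
            sink += *(const uint64_t *)(area + off);
    }

    int main(void)
    {
        unsigned char *area = calloc(2 * L1D_SIZE, 1); /* flush area */
        if (!area)
            return 1;
        displacement_flush(area, 2 * L1D_SIZE);
        puts("L1D displaced");
        free(area);
        return 0;
    }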
From patchwork Fri Nov 20 00:07:00 2020
X-Patchwork-Submitter: Daniel Axtens <dja@axtens.net>
X-Patchwork-Id: 329100
From: Daniel Axtens <dja@axtens.net>
To: stable@vger.kernel.org
Cc: dja@axtens.net
Subject: [PATCH 4.4 4/8] powerpc: Add a framework for user access tracking
Date: Fri, 20 Nov 2020 11:07:00 +1100
Message-Id: <20201120000704.374811-5-dja@axtens.net>
In-Reply-To: <20201120000704.374811-1-dja@axtens.net>
References: <20201120000704.374811-1-dja@axtens.net>

From: Christophe Leroy

Backported from commit de78a9c42a79 ("powerpc: Add a framework for Kernel
Userspace Access Protection"). Here we don't try to add the KUAP framework,
we just want the helper functions because we want to put uaccess flush
helpers in them.

In terms of fixes, we don't need commit 1d8f739b07bd ("powerpc/kuap: Fix
set direction in allow/prevent_user_access()") as we don't have real KUAP.
Likewise, as all our allows are noops and all our prevents are just
flushes, we don't need commit 9dc086f1e9ef ("powerpc/futex: Fix incorrect
user access blocking"). The other two fixes we do need.

The original description is:

This patch implements a framework for Kernel Userspace Access Protection.

Then subarches will have the possibility to provide their own
implementation by providing setup_kuap() and allow/prevent_user_access().

Some platforms will need to know the area accessed and whether it is
accessed from read, write or both. Therefore source, destination and size
are handed over to the two functions.

mpe: Rename to allow/prevent rather than unlock/lock, and add read/write
wrappers. Drop the 32-bit code for now until we have an implementation for
it. Add kuap to pt_regs for 64-bit as well as 32-bit. Don't split strings,
use pr_crit_ratelimited().
Signed-off-by: Christophe Leroy
Signed-off-by: Russell Currey
Signed-off-by: Michael Ellerman
Signed-off-by: Daniel Axtens <dja@axtens.net>
---
 arch/powerpc/include/asm/futex.h        |  4 +++
 arch/powerpc/include/asm/kup.h          | 36 +++++++++++++++++++++
 arch/powerpc/include/asm/uaccess.h      | 38 +++++++++++++++++------
 arch/powerpc/lib/checksum_wrappers_64.c |  4 +++
 4 files changed, 74 insertions(+), 8 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kup.h

diff --git a/arch/powerpc/include/asm/futex.h b/arch/powerpc/include/asm/futex.h
index b73ab8a7ebc3..10746519b351 100644
--- a/arch/powerpc/include/asm/futex.h
+++ b/arch/powerpc/include/asm/futex.h
@@ -36,6 +36,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
 {
     int oldval = 0, ret;
 
+    allow_write_to_user(uaddr, sizeof(*uaddr));
     pagefault_disable();
 
     switch (op) {
@@ -62,6 +63,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
         *oval = oldval;
 
+    prevent_write_to_user(uaddr, sizeof(*uaddr));
     return ret;
 }
 
@@ -75,6 +77,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
     if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
         return -EFAULT;
 
+    allow_write_to_user(uaddr, sizeof(*uaddr));
     __asm__ __volatile__ (
         PPC_ATOMIC_ENTRY_BARRIER
 "1:     lwarx   %1,0,%3         # futex_atomic_cmpxchg_inatomic\n\
@@ -97,6 +100,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
     : "cc", "memory");
 
     *uval = prev;
+    prevent_write_to_user(uaddr, sizeof(*uaddr));
     return ret;
 }
 
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
new file mode 100644
index 000000000000..7895d5eeaf21
--- /dev/null
+++ b/arch/powerpc/include/asm/kup.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_KUP_H_
+#define _ASM_POWERPC_KUP_H_
+
+#ifndef __ASSEMBLY__
+
+#include
+
+static inline void allow_user_access(void __user *to, const void __user *from,
+                                     unsigned long size) { }
+static inline void prevent_user_access(void __user *to, const void __user *from,
+                                       unsigned long size) { }
+
+static inline void allow_read_from_user(const void __user *from, unsigned long size)
+{
+    allow_user_access(NULL, from, size);
+}
+
+static inline void allow_write_to_user(void __user *to, unsigned long size)
+{
+    allow_user_access(to, NULL, size);
+}
+
+static inline void prevent_read_from_user(const void __user *from, unsigned long size)
+{
+    prevent_user_access(NULL, from, size);
+}
+
+static inline void prevent_write_to_user(void __user *to, unsigned long size)
+{
+    prevent_user_access(to, NULL, size);
+}
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_POWERPC_KUP_H_ */
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index e51ce5a0e221..50d3c953b33e 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include <asm/kup.h>
 
 #define VERIFY_READ 0
 #define VERIFY_WRITE 1
@@ -164,6 +165,7 @@ extern long __put_user_bad(void);
 #define __put_user_size(x, ptr, size, retval) \
 do { \
     retval = 0; \
+    allow_write_to_user(ptr, size); \
     switch (size) { \
       case 1: __put_user_asm(x, ptr, retval, "stb"); break; \
       case 2: __put_user_asm(x, ptr, retval, "sth"); break; \
@@ -171,6 +173,7 @@ do { \
       case 8: __put_user_asm2(x, ptr, retval); break; \
       default: __put_user_bad(); \
     } \
+    prevent_write_to_user(ptr, size); \
 } while (0)
 
 #define __put_user_nocheck(x, ptr, size) \
@@ -252,6 +255,7 @@ do { \
     __chk_user_ptr(ptr); \
     if (size > sizeof(x)) \
         (x) = __get_user_bad(); \
+    allow_read_from_user(ptr, size); \
     switch (size) { \
       case 1: __get_user_asm(x, ptr, retval, "lbz"); break; \
       case 2: __get_user_asm(x, ptr, retval, "lhz"); break; \
@@ -259,6 +263,7 @@ do { \
       case 8: __get_user_asm2(x, ptr, retval); break; \
       default: (x) = __get_user_bad(); \
     } \
+    prevent_read_from_user(ptr, size); \
 } while (0)
 
 #define __get_user_nocheck(x, ptr, size) \
@@ -328,9 +333,14 @@ extern unsigned long __copy_tofrom_user(void __user *to,
 static inline unsigned long copy_from_user(void *to,
         const void __user *from, unsigned long n)
 {
+    unsigned long ret;
+
     if (likely(access_ok(VERIFY_READ, from, n))) {
+        allow_user_access(to, from, n);
         barrier_nospec();
-        return __copy_tofrom_user((__force void __user *)to, from, n);
+        ret = __copy_tofrom_user((__force void __user *)to, from, n);
+        prevent_user_access(to, from, n);
+        return ret;
     }
     memset(to, 0, n);
     return n;
@@ -361,8 +371,9 @@ extern unsigned long copy_in_user(void __user *to, const void __user *from,
 static inline unsigned long __copy_from_user_inatomic(void *to,
         const void __user *from, unsigned long n)
 {
+    unsigned long ret;
     if (__builtin_constant_p(n) && (n <= 8)) {
-        unsigned long ret = 1;
+        ret = 1;
 
         switch (n) {
         case 1:
@@ -387,14 +398,18 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
     }
 
     barrier_nospec();
-    return __copy_tofrom_user((__force void __user *)to, from, n);
+    allow_read_from_user(from, n);
+    ret = __copy_tofrom_user((__force void __user *)to, from, n);
+    prevent_read_from_user(from, n);
+    return ret;
 }
 
 static inline unsigned long __copy_to_user_inatomic(void __user *to,
         const void *from, unsigned long n)
 {
+    unsigned long ret;
     if (__builtin_constant_p(n) && (n <= 8)) {
-        unsigned long ret = 1;
+        ret = 1;
 
         switch (n) {
         case 1:
@@ -414,7 +429,10 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
             return 0;
     }
 
-    return __copy_tofrom_user(to, (__force const void __user *)from, n);
+    allow_write_to_user(to, n);
+    ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
+    prevent_write_to_user(to, n);
+    return ret;
 }
 
 static inline unsigned long __copy_from_user(void *to,
@@ -435,10 +453,14 @@ extern unsigned long __clear_user(void __user *addr, unsigned long size);
 
 static inline unsigned long clear_user(void __user *addr, unsigned long size)
 {
+    unsigned long ret = size;
     might_fault();
-    if (likely(access_ok(VERIFY_WRITE, addr, size)))
-        return __clear_user(addr, size);
-    return size;
+    if (likely(access_ok(VERIFY_WRITE, addr, size))) {
+        allow_write_to_user(addr, size);
+        ret = __clear_user(addr, size);
+        prevent_write_to_user(addr, size);
+    }
+    return ret;
 }
 
 extern long strncpy_from_user(char *dst, const char __user *src, long count);
diff --git a/arch/powerpc/lib/checksum_wrappers_64.c b/arch/powerpc/lib/checksum_wrappers_64.c
index 08e3a3356c40..11b58949eb62 100644
--- a/arch/powerpc/lib/checksum_wrappers_64.c
+++ b/arch/powerpc/lib/checksum_wrappers_64.c
@@ -29,6 +29,7 @@ __wsum csum_and_copy_from_user(const void __user *src, void *dst,
     unsigned int csum;
 
     might_sleep();
+    allow_read_from_user(src, len);
 
     *err_ptr = 0;
 
@@ -60,6 +61,7 @@ __wsum csum_and_copy_from_user(const void __user *src, void *dst,
     }
 
 out:
+    prevent_read_from_user(src, len);
     return (__force __wsum)csum;
 }
 EXPORT_SYMBOL(csum_and_copy_from_user);
 
@@ -70,6 +72,7 @@ __wsum csum_and_copy_to_user(const void *src, void __user *dst, int len,
     unsigned int csum;
 
     might_sleep();
+    allow_write_to_user(dst, len);
 
     *err_ptr = 0;
 
@@ -97,6 +100,7 @@ __wsum csum_and_copy_to_user(const void *src, void __user *dst, int len,
     }
 
 out:
+    prevent_write_to_user(dst, len);
     return (__force __wsum)csum;
 }
 EXPORT_SYMBOL(csum_and_copy_to_user);
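The pattern this patch installs can be seen in isolation: every user access
is bracketed by allow/prevent hooks that are no-ops for now, exactly as in
the backport, but give later patches a single place to hang a flush. The
following stand-alone C sketch is hypothetical; memcpy stands in for
__copy_tofrom_user():

    #include <stdio.h>
    #include <string.h>

    /* No-op hooks, mirroring the kup.h helpers above. */
    static void allow_write_to_user(void *to, unsigned long size)
    { (void)to; (void)size; }
    static void prevent_write_to_user(void *to, unsigned long size)
    { (void)to; (void)size; }

    static unsigned long copy_to_user_model(void *to, const void *from,
                                            unsigned long n)
    {
        allow_write_to_user(to, n);
        memcpy(to, from, n);           /* the actual access */
        prevent_write_to_user(to, n);
        return 0;                      /* 0 bytes left uncopied */
    }

    int main(void)
    {
        char dst[8];
        copy_to_user_model(dst, "hello", 6);
        puts(dst);
        return 0;
    }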
From patchwork Thu Nov 19 23:57:40 2020
X-Patchwork-Submitter: Daniel Axtens <dja@axtens.net>
X-Patchwork-Id: 329105
From: Daniel Axtens <dja@axtens.net>
To: stable@vger.kernel.org
Cc: dja@axtens.net
Subject: [PATCH 4.9 5/8] powerpc: Implement user_access_begin and friends
Date: Fri, 20 Nov 2020 10:57:40 +1100
Message-Id: <20201119235743.373635-6-dja@axtens.net>
In-Reply-To: <20201119235743.373635-1-dja@axtens.net>
References: <20201119235743.373635-1-dja@axtens.net>

From: Christophe Leroy

commit 5cd623333e7cf4e3a334c70529268b65f2a6c2c7 upstream.

Today, when a function like strncpy_from_user() is called, the userspace
access protection is de-activated and re-activated for every word read.

By implementing user_access_begin and friends, the protection is
de-activated at the beginning of the copy and re-activated at the end.

Implement user_access_begin(), user_access_end() and unsafe_get_user(),
unsafe_put_user() and unsafe_copy_to_user().

For the time being, we keep user_access_save() and user_access_restore()
as nops.

Signed-off-by: Christophe Leroy
Signed-off-by: Michael Ellerman
Link: https://lore.kernel.org/r/36d4fbf9e56a75994aca4ee2214c77b26a5a8d35.1579866752.git.christophe.leroy@c-s.fr
Signed-off-by: Daniel Axtens <dja@axtens.net>
---
 arch/powerpc/include/asm/uaccess.h | 60 +++++++++++++++++++++++-------
 1 file changed, 46 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 9521028eebfa..a395e440c320 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -106,9 +106,14 @@ struct exception_table_entry {
     __put_user_check((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)))
 
 #define __get_user(x, ptr) \
-    __get_user_nocheck((x), (ptr), sizeof(*(ptr)))
+    __get_user_nocheck((x), (ptr), sizeof(*(ptr)), true)
 #define __put_user(x, ptr) \
-    __put_user_nocheck((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)))
+    __put_user_nocheck((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)), true)
+
+#define __get_user_allowed(x, ptr) \
+    __get_user_nocheck((x), (ptr), sizeof(*(ptr)), false)
+#define __put_user_allowed(x, ptr) \
+    __put_user_nocheck((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)), false)
 
 #define __get_user_inatomic(x, ptr) \
     __get_user_nosleep((x), (ptr), sizeof(*(ptr)))
@@ -162,10 +167,9 @@ extern long __put_user_bad(void);
     : "r" (x), "b" (addr), "i" (-EFAULT), "0" (err))
 #endif /* __powerpc64__ */
 
-#define __put_user_size(x, ptr, size, retval) \
+#define __put_user_size_allowed(x, ptr, size, retval) \
 do { \
     retval = 0; \
-    allow_write_to_user(ptr, size); \
     switch (size) { \
       case 1: __put_user_asm(x, ptr, retval, "stb"); break; \
       case 2: __put_user_asm(x, ptr, retval, "sth"); break; \
@@ -173,17 +177,26 @@ do { \
       case 8: __put_user_asm2(x, ptr, retval); break; \
       default: __put_user_bad(); \
     } \
+} while (0)
+
+#define __put_user_size(x, ptr, size, retval) \
+do { \
+    allow_write_to_user(ptr, size); \
+    __put_user_size_allowed(x, ptr, size, retval); \
     prevent_write_to_user(ptr, size); \
 } while (0)
 
-#define __put_user_nocheck(x, ptr, size) \
+#define __put_user_nocheck(x, ptr, size, do_allow) \
 ({ \
     long __pu_err; \
     __typeof__(*(ptr)) __user *__pu_addr = (ptr); \
     if (!is_kernel_addr((unsigned long)__pu_addr)) \
         might_fault(); \
     __chk_user_ptr(ptr); \
-    __put_user_size((x), __pu_addr, (size), __pu_err); \
+    if (do_allow) \
+        __put_user_size((x), __pu_addr, (size), __pu_err); \
+    else \
+        __put_user_size_allowed((x), __pu_addr, (size), __pu_err); \
     __pu_err; \
 })
 
@@ -249,13 +262,12 @@ extern long __get_user_bad(void);
     : "b" (addr), "i" (-EFAULT), "0" (err))
 #endif /* __powerpc64__ */
 
-#define __get_user_size(x, ptr, size, retval) \
+#define __get_user_size_allowed(x, ptr, size, retval) \
 do { \
     retval = 0; \
     __chk_user_ptr(ptr); \
     if (size > sizeof(x)) \
         (x) = __get_user_bad(); \
-    allow_read_from_user(ptr, size); \
     switch (size) { \
       case 1: __get_user_asm(x, ptr, retval, "lbz"); break; \
       case 2: __get_user_asm(x, ptr, retval, "lhz"); break; \
@@ -263,10 +275,16 @@ do { \
       case 8: __get_user_asm2(x, ptr, retval); break; \
       default: (x) = __get_user_bad(); \
     } \
+} while (0)
+
+#define __get_user_size(x, ptr, size, retval) \
+do { \
+    allow_read_from_user(ptr, size); \
+    __get_user_size_allowed(x, ptr, size, retval); \
     prevent_read_from_user(ptr, size); \
 } while (0)
 
-#define __get_user_nocheck(x, ptr, size) \
+#define __get_user_nocheck(x, ptr, size, do_allow) \
 ({ \
     long __gu_err; \
     unsigned long __gu_val; \
@@ -275,7 +293,10 @@ do { \
     if (!is_kernel_addr((unsigned long)__gu_addr)) \
         might_fault(); \
     barrier_nospec(); \
-    __get_user_size(__gu_val, __gu_addr, (size), __gu_err); \
+    if (do_allow) \
+        __get_user_size(__gu_val, __gu_addr, (size), __gu_err); \
+    else \
+        __get_user_size_allowed(__gu_val, __gu_addr, (size), __gu_err); \
     (x) = (__typeof__(*(ptr)))__gu_val; \
     __gu_err; \
 })
 
@@ -396,21 +417,22 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
     const void *from, unsigned long n)
 {
     unsigned long ret;
+
     if (__builtin_constant_p(n) && (n <= 8)) {
         ret = 1;
 
         switch (n) {
         case 1:
-            __put_user_size(*(u8 *)from, (u8 __user *)to, 1, ret);
+            __put_user_size_allowed(*(u8 *)from, (u8 __user *)to, 1, ret);
             break;
         case 2:
-            __put_user_size(*(u16 *)from, (u16 __user *)to, 2, ret);
+            __put_user_size_allowed(*(u16 *)from, (u16 __user *)to, 2, ret);
             break;
         case 4:
-            __put_user_size(*(u32 *)from, (u32 __user *)to, 4, ret);
+            __put_user_size_allowed(*(u32 *)from, (u32 __user *)to, 4, ret);
             break;
        case 8:
-            __put_user_size(*(u64 *)from, (u64 __user *)to, 8, ret);
+            __put_user_size_allowed(*(u64 *)from, (u64 __user *)to, 8, ret);
             break;
         }
         if (ret == 0)
@@ -456,6 +478,16 @@ extern long strncpy_from_user(char *dst, const char __user *src, long count);
 extern __must_check long strlen_user(const char __user *str);
 extern __must_check long strnlen_user(const char __user *str, long n);
 
+
+#define user_access_begin() do { } while (0)
+#define user_access_end()   prevent_user_access(NULL, NULL, ~0ul)
+
+#define unsafe_op_wrap(op, err) do { if (unlikely(op)) goto err; } while (0)
+#define unsafe_get_user(x, p, e) unsafe_op_wrap(__get_user_allowed(x, p), e)
+#define unsafe_put_user(x, p, e) unsafe_op_wrap(__put_user_allowed(x, p), e)
+#define unsafe_copy_to_user(d, s, l, e) \
+    unsafe_op_wrap(__copy_to_user_inatomic(d, s, l), e)
+
 #endif /* __ASSEMBLY__ */
 #endif /* __KERNEL__ */
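The calling pattern enabled by this API is one open/close pair around many
"unsafe" accesses, instead of toggling protection per word. Here is a
hypothetical, self-contained C sketch of that shape; the macros below are
simplified stand-ins for the kernel ones, and real callers use the kernel
versions:

    #include <stdio.h>

    /* Simplified models: begin/end are no-ops, and the access jumps to
     * the error label on a NULL pointer instead of a page fault. */
    #define user_access_begin()      do { } while (0)
    #define user_access_end()        do { } while (0)
    #define unsafe_get_user(x, p, e) \
        do { if (!(p)) goto e; (x) = *(p); } while (0)

    static int sum_two(const int *uptr, int *out)
    {
        int a, b;

        user_access_begin();             /* open the window once */
        unsafe_get_user(a, &uptr[0], err);
        unsafe_get_user(b, &uptr[1], err);
        user_access_end();               /* close it once */

        *out = a + b;
        return 0;
    err:
        user_access_end();
        return -14; /* -EFAULT */
    }

    int main(void)
    {
        int vals[2] = { 2, 3 }, out;
        if (sum_two(vals, &out) == 0)
            printf("%d\n", out);
        return 0;
    }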
From patchwork Fri Nov 20 00:07:02 2020
X-Patchwork-Submitter: Daniel Axtens <dja@axtens.net>
X-Patchwork-Id: 329099
From: Daniel Axtens <dja@axtens.net>
To: stable@vger.kernel.org
Cc: dja@axtens.net
Subject: [PATCH 4.4 6/8] powerpc: Fix __clear_user() with KUAP enabled
Date: Fri, 20 Nov 2020 11:07:02 +1100
Message-Id: <20201120000704.374811-7-dja@axtens.net>
In-Reply-To: <20201120000704.374811-1-dja@axtens.net>
References: <20201120000704.374811-1-dja@axtens.net>

From: Andrew Donnellan

commit 61e3acd8c693a14fc69b824cb5b08d02cb90a6e7 upstream.

The KUAP implementation adds calls in clear_user() to enable and disable
access to userspace memory.
However, it doesn't add these to __clear_user(), which is used in the ptrace regset code. As there's only one direct user of __clear_user() (the regset code), and the time taken to set the AMR for KUAP purposes is going to dominate the cost of a quick access_ok(), there's not much point having a separate path. Rename __clear_user() to __arch_clear_user(), and make __clear_user() just call clear_user(). Reported-by: syzbot+f25ecf4b2982d8c7a640@syzkaller-ppc64.appspotmail.com Reported-by: Daniel Axtens Suggested-by: Michael Ellerman Fixes: de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace Access Protection") Signed-off-by: Andrew Donnellan [mpe: Use __arch_clear_user() for the asm version like arm64 & nds32] Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20191209132221.15328-1-ajd@linux.ibm.com Signed-off-by: Daniel Axtens --- arch/powerpc/include/asm/uaccess.h | 9 +++++++-- arch/powerpc/kernel/ppc_ksyms.c | 3 +++ arch/powerpc/lib/string.S | 2 +- arch/powerpc/lib/string_64.S | 4 ++-- 4 files changed, 13 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h index f0195ad25836..edf211b5ada0 100644 --- a/arch/powerpc/include/asm/uaccess.h +++ b/arch/powerpc/include/asm/uaccess.h @@ -471,7 +471,7 @@ static inline unsigned long __copy_to_user(void __user *to, return __copy_to_user_inatomic(to, from, size); } -extern unsigned long __clear_user(void __user *addr, unsigned long size); +unsigned long __arch_clear_user(void __user *addr, unsigned long size); static inline unsigned long clear_user(void __user *addr, unsigned long size) { @@ -479,12 +479,17 @@ static inline unsigned long clear_user(void __user *addr, unsigned long size) might_fault(); if (likely(access_ok(VERIFY_WRITE, addr, size))) { allow_write_to_user(addr, size); - ret = __clear_user(addr, size); + ret = __arch_clear_user(addr, size); prevent_write_to_user(addr, size); } return ret; } +static inline unsigned long __clear_user(void __user *addr, unsigned long size) +{ + return clear_user(addr, size); +} + extern long strncpy_from_user(char *dst, const char __user *src, long count); extern __must_check long strlen_user(const char __user *str); extern __must_check long strnlen_user(const char __user *str, long n); diff --git a/arch/powerpc/kernel/ppc_ksyms.c b/arch/powerpc/kernel/ppc_ksyms.c index 202963ee013a..b92debacb821 100644 --- a/arch/powerpc/kernel/ppc_ksyms.c +++ b/arch/powerpc/kernel/ppc_ksyms.c @@ -5,6 +5,7 @@ #include #include #include +#include EXPORT_SYMBOL(flush_dcache_range); EXPORT_SYMBOL(flush_icache_range); @@ -43,3 +44,5 @@ EXPORT_SYMBOL(epapr_hypercall_start); #endif EXPORT_SYMBOL(current_stack_pointer); + +EXPORT_SYMBOL(__arch_clear_user); diff --git a/arch/powerpc/lib/string.S b/arch/powerpc/lib/string.S index c80fb49ce607..93c4c34ad091 100644 --- a/arch/powerpc/lib/string.S +++ b/arch/powerpc/lib/string.S @@ -122,7 +122,7 @@ _GLOBAL(memchr) blr #ifdef CONFIG_PPC32 -_GLOBAL(__clear_user) +_GLOBAL(__arch_clear_user) addi r6,r3,-4 li r3,0 li r5,0 diff --git a/arch/powerpc/lib/string_64.S b/arch/powerpc/lib/string_64.S index 7bd9549a90a2..14d26ad2cd69 100644 --- a/arch/powerpc/lib/string_64.S +++ b/arch/powerpc/lib/string_64.S @@ -27,7 +27,7 @@ PPC64_CACHES: .section ".text" /** - * __clear_user: - Zero a block of memory in user space, with less checking. + * __arch_clear_user: - Zero a block of memory in user space, with less checking. * @to: Destination address, in user space. * @n: Number of bytes to zero. 
 *
@@ -77,7 +77,7 @@ err3;	stb	r0,0(r3)
 	mr	r3,r4
 	blr

-_GLOBAL_TOC(__clear_user)
+_GLOBAL_TOC(__arch_clear_user)
 	cmpdi	r4,32
 	neg	r6,r3
 	li	r0,0
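
For orientation, a sketch of the user-clearing paths after this patch (the caller is hypothetical, standing in for the ptrace regset code; the callees are the ones in the hunks above):

	/* With this patch, __clear_user() routes through clear_user(), so
	 * the KUAP window is opened and closed around the asm routine:
	 * access_ok() -> allow_write_to_user() -> __arch_clear_user() ->
	 * prevent_write_to_user(). */
	static int zero_pad_to_user(void __user *ubuf, unsigned long len)
	{
		if (__clear_user(ubuf, len))
			return -EFAULT;
		return 0;
	}
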
From patchwork Thu Nov 19 23:57:42 2020
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 329103
From: Daniel Axtens
To: stable@vger.kernel.org
Cc: dja@axtens.net
Subject: [PATCH 4.9 7/8] powerpc/uaccess: Evaluate macro arguments once, before user access is allowed
Date: Fri, 20 Nov 2020 10:57:42 +1100
Message-Id: <20201119235743.373635-8-dja@axtens.net>
In-Reply-To: <20201119235743.373635-1-dja@axtens.net>
References: <20201119235743.373635-1-dja@axtens.net>

From: Nicholas Piggin

commit d02f6b7dab8228487268298ea1f21081c0b4b3eb upstream.

get/put_user() can be called with nontrivial arguments. fs/proc/page.c
has a good example:

    if (put_user(stable_page_flags(ppage), out)) {

stable_page_flags() is quite a lot of code, including spin locks in the
page allocator.

Ensure these arguments are evaluated before user access is allowed.

This improves security by reducing code with access to userspace, but
it also fixes a PREEMPT bug with KUAP on powerpc/64s:
stable_page_flags() is currently called with AMR set to allow writes,
it ends up calling spin_unlock(), which can call preempt_schedule. But
the task switch code can not be called with AMR set (it relies on
interrupts saving the register), so this blows up.

It's fine if the code inside allow_user_access() is preemptible,
because a timer or IPI will save the AMR, but it's not okay to
explicitly cause a reschedule.

Fixes: de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace Access Protection")
Signed-off-by: Nicholas Piggin
Signed-off-by: Michael Ellerman
Link: https://lore.kernel.org/r/20200407041245.600651-1-npiggin@gmail.com
Signed-off-by: Daniel Axtens
---
 arch/powerpc/include/asm/uaccess.h | 49 +++++++++++++++++++++---------
 1 file changed, 35 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 5fc6a9f410f2..fde865a4e2cb 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -190,13 +190,17 @@ do {								\
 ({								\
 	long __pu_err;						\
 	__typeof__(*(ptr)) __user *__pu_addr = (ptr);		\
+	__typeof__(*(ptr)) __pu_val = (x);			\
+	__typeof__(size) __pu_size = (size);			\
+								\
 	if (!is_kernel_addr((unsigned long)__pu_addr))		\
 		might_fault();					\
-	__chk_user_ptr(ptr);					\
+	__chk_user_ptr(__pu_addr);				\
 	if (do_allow)						\
-		__put_user_size((x), __pu_addr, (size), __pu_err);	\
+		__put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err);	\
 	else							\
-		__put_user_size_allowed((x), __pu_addr, (size), __pu_err); \
+		__put_user_size_allowed(__pu_val, __pu_addr, __pu_size, __pu_err); \
+								\
 	__pu_err;						\
 })

@@ -204,9 +208,13 @@ do {								\
 ({								\
 	long __pu_err = -EFAULT;				\
 	__typeof__(*(ptr)) __user *__pu_addr = (ptr);		\
+	__typeof__(*(ptr)) __pu_val = (x);			\
+	__typeof__(size) __pu_size = (size);			\
+								\
 	might_fault();						\
-	if (access_ok(VERIFY_WRITE, __pu_addr, size))		\
-		__put_user_size((x), __pu_addr, (size), __pu_err); \
+	if (access_ok(VERIFY_WRITE, __pu_addr, __pu_size))	\
+		__put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err); \
+								\
 	__pu_err;						\
 })

@@ -214,8 +222,12 @@ do {								\
 ({								\
 	long __pu_err;						\
 	__typeof__(*(ptr)) __user *__pu_addr = (ptr);		\
-	__chk_user_ptr(ptr);					\
-	__put_user_size((x), __pu_addr, (size), __pu_err);	\
+	__typeof__(*(ptr)) __pu_val = (x);			\
+	__typeof__(size) __pu_size = (size);			\
+								\
+	__chk_user_ptr(__pu_addr);				\
+	__put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err);	\
+								\
 	__pu_err;						\
 })

@@ -289,15 +301,18 @@ do {								\
 	long __gu_err;						\
 	unsigned long __gu_val;					\
 	__typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
-	__chk_user_ptr(ptr);					\
+	__typeof__(size) __gu_size = (size);			\
+								\
+	__chk_user_ptr(__gu_addr);				\
 	if (!is_kernel_addr((unsigned long)__gu_addr))		\
 		might_fault();					\
 	barrier_nospec();					\
 	if (do_allow)						\
-		__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
+		__get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err);	\
 	else							\
-		__get_user_size_allowed(__gu_val, __gu_addr, (size), __gu_err); \
+		__get_user_size_allowed(__gu_val, __gu_addr, __gu_size, __gu_err); \
 	(x) = (__typeof__(*(ptr)))__gu_val;			\
+								\
 	__gu_err;						\
 })

@@ -306,12 +321,15 @@ do {								\
 	long __gu_err = -EFAULT;				\
 	unsigned long __gu_val = 0;				\
 	__typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
+	__typeof__(size) __gu_size = (size);			\
+								\
 	might_fault();						\
-	if (access_ok(VERIFY_READ, __gu_addr, (size))) {	\
+	if (access_ok(VERIFY_READ, __gu_addr, __gu_size)) {	\
 		barrier_nospec();				\
-		__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
+		__get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err);	\
 	}							\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;		\
+								\
 	__gu_err;						\
 })

@@ -320,10 +338,13 @@ do {								\
 	long __gu_err;						\
 	unsigned long __gu_val;					\
 	__typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
-	__chk_user_ptr(ptr);					\
+	__typeof__(size) __gu_size = (size);			\
+								\
+	__chk_user_ptr(__gu_addr);				\
 	barrier_nospec();					\
-	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
+	__get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err);	\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;		\
+								\
 	__gu_err;						\
 })
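
The hazard is easy to model outside the kernel. A self-contained userspace sketch (GNU C; every name in it is made up, it only mirrors the shape of the macros patched above):

	#include <stdio.h>

	/* Stand-in for the AMR state that allow/prevent_user_access toggle. */
	static int user_access_open;

	static long expensive_arg(void)
	{
		/* In the kernel this could be stable_page_flags(): lots of
		 * code, locks, maybe a reschedule. It must not run with the
		 * user-access window open. */
		printf("expensive_arg() runs with window %s\n",
		       user_access_open ? "OPEN (bad)" : "closed (good)");
		return 42;
	}

	/* Old shape: (x) is expanded, hence evaluated, after "allow". */
	#define put_user_old(x, ptr) do {	\
		user_access_open = 1;		\
		*(ptr) = (x);			\
		user_access_open = 0;		\
	} while (0)

	/* New shape: bind the argument to a local before opening the window. */
	#define put_user_new(x, ptr) do {		\
		__typeof__(*(ptr)) __val = (x);		\
		user_access_open = 1;			\
		*(ptr) = __val;				\
		user_access_open = 0;			\
	} while (0)

	int main(void)
	{
		long out;
		put_user_old(expensive_arg(), &out);	/* prints "OPEN (bad)" */
		put_user_new(expensive_arg(), &out);	/* prints "closed (good)" */
		return 0;
	}
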
From patchwork Fri Nov 20 00:07:04 2020
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 329098
From: Daniel Axtens
To: stable@vger.kernel.org
Cc: dja@axtens.net
Subject: [PATCH 4.4 8/8] powerpc/64s: flush L1D after user accesses
Date: Fri, 20 Nov 2020 11:07:04 +1100
Message-Id: <20201120000704.374811-9-dja@axtens.net>
In-Reply-To: <20201120000704.374811-1-dja@axtens.net>
References: <20201120000704.374811-1-dja@axtens.net>

From: Nicholas Piggin

commit 9a32a7e78bd0cd9a9b6332cbdc345ee5ffd0c5de upstream.

IBM Power9 processors can speculatively operate on data in the L1 cache
before it has been completely validated, via a way-prediction mechanism. It
is not possible for an attacker to determine the contents of impermissible
memory using this method, since these systems implement a combination of
hardware and software security measures to prevent scenarios where
protected data could be leaked.

However these measures don't address the scenario where an attacker induces
the operating system to speculatively execute instructions using data that
the attacker controls. This can be used for example to speculatively bypass
"kernel user access prevention" techniques, as discovered by Anthony
Steinhauser of Google's Safeside Project. This is not an attack by itself,
but there is a possibility it could be used in conjunction with
side-channels or other weaknesses in the privileged code to construct an
attack.

This issue can be mitigated by flushing the L1 cache between privilege
boundaries of concern. This patch flushes the L1 cache after user accesses.

This is part of the fix for CVE-2020-4788.
Signed-off-by: Nicholas Piggin Signed-off-by: Daniel Axtens --- Documentation/kernel-parameters.txt | 4 + .../powerpc/include/asm/book3s/64/kup-radix.h | 23 +++++ arch/powerpc/include/asm/feature-fixups.h | 9 ++ arch/powerpc/include/asm/kup.h | 4 + arch/powerpc/include/asm/security_features.h | 3 + arch/powerpc/include/asm/setup.h | 1 + arch/powerpc/kernel/exceptions-64s.S | 86 ++++++------------- arch/powerpc/kernel/ppc_ksyms.c | 7 ++ arch/powerpc/kernel/setup_64.c | 80 +++++++++++++++++ arch/powerpc/kernel/vmlinux.lds.S | 7 ++ arch/powerpc/lib/feature-fixups.c | 50 +++++++++++ arch/powerpc/platforms/powernv/setup.c | 7 +- arch/powerpc/platforms/pseries/setup.c | 4 + 13 files changed, 224 insertions(+), 61 deletions(-) create mode 100644 arch/powerpc/include/asm/book3s/64/kup-radix.h diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt index 007f12c79365..2c579b53d582 100644 --- a/Documentation/kernel-parameters.txt +++ b/Documentation/kernel-parameters.txt @@ -2197,6 +2197,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted. mds=off [X86] tsx_async_abort=off [X86] no_entry_flush [PPC] + no_uaccess_flush [PPC] auto (default) Mitigate all CPU vulnerabilities, but leave SMT @@ -2521,6 +2522,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted. nospec_store_bypass_disable [HW] Disable all mitigations for the Speculative Store Bypass vulnerability + no_uaccess_flush + [PPC] Don't flush the L1-D cache after accessing user data. + noxsave [BUGS=X86] Disables x86 extended register state save and restore using xsave. The kernel will fallback to enabling legacy floating-point and sse state. diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h new file mode 100644 index 000000000000..cce8e7497d72 --- /dev/null +++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H +#define _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H +#include + +DECLARE_STATIC_KEY_FALSE(uaccess_flush_key); + +/* Prototype for function defined in exceptions-64s.S */ +void do_uaccess_flush(void); + +static __always_inline void allow_user_access(void __user *to, const void __user *from, + unsigned long size) +{ +} + +static inline void prevent_user_access(void __user *to, const void __user *from, + unsigned long size) +{ + if (static_branch_unlikely(&uaccess_flush_key)) + do_uaccess_flush(); +} + +#endif /* _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H */ diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h index a963c26b2d34..83219710e904 100644 --- a/arch/powerpc/include/asm/feature-fixups.h +++ b/arch/powerpc/include/asm/feature-fixups.h @@ -200,6 +200,14 @@ label##3: \ FTR_ENTRY_OFFSET 955b-956b; \ .popsection; +#define UACCESS_FLUSH_FIXUP_SECTION \ +959: \ + .pushsection __uaccess_flush_fixup,"a"; \ + .align 2; \ +960: \ + FTR_ENTRY_OFFSET 959b-960b; \ + .popsection; + #define ENTRY_FLUSH_FIXUP_SECTION \ 957: \ .pushsection __entry_flush_fixup,"a"; \ @@ -242,6 +250,7 @@ extern long stf_barrier_fallback; extern long entry_flush_fallback; extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup; extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup; +extern long __start___uaccess_flush_fixup, __stop___uaccess_flush_fixup; extern long __start___entry_flush_fixup, __stop___entry_flush_fixup; extern long __start___rfi_flush_fixup, 
__stop___rfi_flush_fixup; extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup; diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h index 7895d5eeaf21..f0f8e36ad71f 100644 --- a/arch/powerpc/include/asm/kup.h +++ b/arch/powerpc/include/asm/kup.h @@ -6,10 +6,14 @@ #include +#ifdef CONFIG_PPC_BOOK3S_64 +#include +#else static inline void allow_user_access(void __user *to, const void __user *from, unsigned long size) { } static inline void prevent_user_access(void __user *to, const void __user *from, unsigned long size) { } +#endif /* CONFIG_PPC_BOOK3S_64 */ static inline void allow_read_from_user(const void __user *from, unsigned long size) { diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h index 082b56bf678d..3b45a64e491e 100644 --- a/arch/powerpc/include/asm/security_features.h +++ b/arch/powerpc/include/asm/security_features.h @@ -87,6 +87,8 @@ static inline bool security_ftr_enabled(unsigned long feature) // The L1-D cache should be flushed when entering the kernel #define SEC_FTR_L1D_FLUSH_ENTRY 0x0000000000004000ull +// The L1-D cache should be flushed after user accesses from the kernel +#define SEC_FTR_L1D_FLUSH_UACCESS 0x0000000000008000ull // Features enabled by default #define SEC_FTR_DEFAULT \ @@ -94,6 +96,7 @@ static inline bool security_ftr_enabled(unsigned long feature) SEC_FTR_L1D_FLUSH_PR | \ SEC_FTR_BNDS_CHK_SPEC_BAR | \ SEC_FTR_L1D_FLUSH_ENTRY | \ + SEC_FTR_L1D_FLUSH_UACCESS | \ SEC_FTR_FAVOUR_SECURITY) #endif /* _ASM_POWERPC_SECURITY_FEATURES_H */ diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h index 26b55f23cf64..1ccf474f08ab 100644 --- a/arch/powerpc/include/asm/setup.h +++ b/arch/powerpc/include/asm/setup.h @@ -46,6 +46,7 @@ void setup_barrier_nospec(void); #else static inline void setup_barrier_nospec(void) { }; #endif +void do_uaccess_flush_fixups(enum l1d_flush_type types); void do_entry_flush_fixups(enum l1d_flush_type types); void do_barrier_nospec_fixups(bool enable); extern bool barrier_nospec_enabled; diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 7715fd89bb94..7f8e1bdbe3e2 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -1630,14 +1630,9 @@ stf_barrier_fallback: .endr blr - .globl rfi_flush_fallback -rfi_flush_fallback: - SET_SCRATCH0(r13); - GET_PACA(r13); - std r9,PACA_EXRFI+EX_R9(r13) - std r10,PACA_EXRFI+EX_R10(r13) - std r11,PACA_EXRFI+EX_R11(r13) - mfctr r9 + +/* Clobbers r10, r11, ctr */ +.macro L1D_DISPLACEMENT_FLUSH ld r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13) ld r11,PACA_L1D_FLUSH_SIZE(r13) srdi r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */ @@ -1663,7 +1658,18 @@ rfi_flush_fallback: ld r11,(0x80 + 8)*7(r10) addi r10,r10,0x80*8 bdnz 1b +.endm + + .globl rfi_flush_fallback +rfi_flush_fallback: + SET_SCRATCH0(r13); + GET_PACA(r13); + std r9,PACA_EXRFI+EX_R9(r13) + std r10,PACA_EXRFI+EX_R10(r13) + std r11,PACA_EXRFI+EX_R11(r13) + mfctr r9 + L1D_DISPLACEMENT_FLUSH mtctr r9 ld r9,PACA_EXRFI+EX_R9(r13) ld r10,PACA_EXRFI+EX_R10(r13) @@ -1679,32 +1685,7 @@ hrfi_flush_fallback: std r10,PACA_EXRFI+EX_R10(r13) std r11,PACA_EXRFI+EX_R11(r13) mfctr r9 - ld r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13) - ld r11,PACA_L1D_FLUSH_SIZE(r13) - srdi r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */ - mtctr r11 - DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */ - - /* order ld/st prior to dcbt stop all streams with flushing */ - sync - - /* - * 
The load adresses are at staggered offsets within cachelines, - * which suits some pipelines better (on others it should not - * hurt). - */ -1: - ld r11,(0x80 + 8)*0(r10) - ld r11,(0x80 + 8)*1(r10) - ld r11,(0x80 + 8)*2(r10) - ld r11,(0x80 + 8)*3(r10) - ld r11,(0x80 + 8)*4(r10) - ld r11,(0x80 + 8)*5(r10) - ld r11,(0x80 + 8)*6(r10) - ld r11,(0x80 + 8)*7(r10) - addi r10,r10,0x80*8 - bdnz 1b - + L1D_DISPLACEMENT_FLUSH mtctr r9 ld r9,PACA_EXRFI+EX_R9(r13) ld r10,PACA_EXRFI+EX_R10(r13) @@ -1718,38 +1699,14 @@ entry_flush_fallback: std r10,PACA_EXRFI+EX_R10(r13) std r11,PACA_EXRFI+EX_R11(r13) mfctr r9 - ld r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13) - ld r11,PACA_L1D_FLUSH_SIZE(r13) - srdi r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */ - mtctr r11 - DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */ - - /* order ld/st prior to dcbt stop all streams with flushing */ - sync - - /* - * The load addresses are at staggered offsets within cachelines, - * which suits some pipelines better (on others it should not - * hurt). - */ -1: - ld r11,(0x80 + 8)*0(r10) - ld r11,(0x80 + 8)*1(r10) - ld r11,(0x80 + 8)*2(r10) - ld r11,(0x80 + 8)*3(r10) - ld r11,(0x80 + 8)*4(r10) - ld r11,(0x80 + 8)*5(r10) - ld r11,(0x80 + 8)*6(r10) - ld r11,(0x80 + 8)*7(r10) - addi r10,r10,0x80*8 - bdnz 1b - + L1D_DISPLACEMENT_FLUSH mtctr r9 ld r9,PACA_EXRFI+EX_R9(r13) ld r10,PACA_EXRFI+EX_R10(r13) ld r11,PACA_EXRFI+EX_R11(r13) blr + /* * Hash table stuff */ @@ -1909,3 +1866,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR) 1: addi r3,r1,STACK_FRAME_OVERHEAD bl kernel_bad_stack b 1b + +_KPROBE(do_uaccess_flush) + UACCESS_FLUSH_FIXUP_SECTION + nop + nop + nop + blr + L1D_DISPLACEMENT_FLUSH + blr diff --git a/arch/powerpc/kernel/ppc_ksyms.c b/arch/powerpc/kernel/ppc_ksyms.c index b92debacb821..80eb47113d5d 100644 --- a/arch/powerpc/kernel/ppc_ksyms.c +++ b/arch/powerpc/kernel/ppc_ksyms.c @@ -6,6 +6,9 @@ #include #include #include +#ifdef CONFIG_PPC64 +#include +#endif EXPORT_SYMBOL(flush_dcache_range); EXPORT_SYMBOL(flush_icache_range); @@ -46,3 +49,7 @@ EXPORT_SYMBOL(epapr_hypercall_start); EXPORT_SYMBOL(current_stack_pointer); EXPORT_SYMBOL(__arch_clear_user); + +#ifdef CONFIG_PPC64 +EXPORT_SYMBOL(do_uaccess_flush); +#endif diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c index cd405eaffa23..3c6717569360 100644 --- a/arch/powerpc/kernel/setup_64.c +++ b/arch/powerpc/kernel/setup_64.c @@ -845,8 +845,12 @@ static enum l1d_flush_type enabled_flush_types; static void *l1d_flush_fallback_area; static bool no_rfi_flush; static bool no_entry_flush; +static bool no_uaccess_flush; bool rfi_flush; bool entry_flush; +bool uaccess_flush; +DEFINE_STATIC_KEY_FALSE(uaccess_flush_key); +EXPORT_SYMBOL(uaccess_flush_key); static int __init handle_no_rfi_flush(char *p) { @@ -864,6 +868,14 @@ static int __init handle_no_entry_flush(char *p) } early_param("no_entry_flush", handle_no_entry_flush); +static int __init handle_no_uaccess_flush(char *p) +{ + pr_info("uaccess-flush: disabled on command line."); + no_uaccess_flush = true; + return 0; +} +early_param("no_uaccess_flush", handle_no_uaccess_flush); + /* * The RFI flush is not KPTI, but because users will see doco that says to use * nopti we hijack that option here to also disable the RFI flush. 
@@ -907,6 +919,23 @@ void entry_flush_enable(bool enable) entry_flush = enable; } +void uaccess_flush_enable(bool enable) +{ + if (enable) { + do_uaccess_flush_fixups(enabled_flush_types); + if (static_key_initialized) + static_branch_enable(&uaccess_flush_key); + else + printk(KERN_DEBUG "uaccess-flush: deferring static key until after static key initialization\n"); + on_each_cpu(do_nothing, NULL, 1); + } else { + static_branch_disable(&uaccess_flush_key); + do_uaccess_flush_fixups(L1D_FLUSH_NONE); + } + + uaccess_flush = enable; +} + static void __ref init_fallback_flush(void) { u64 l1d_size, limit; @@ -961,6 +990,15 @@ void setup_entry_flush(bool enable) entry_flush_enable(enable); } +void setup_uaccess_flush(bool enable) +{ + if (cpu_mitigations_off()) + return; + + if (!no_uaccess_flush) + uaccess_flush_enable(enable); +} + #ifdef CONFIG_DEBUG_FS static int rfi_flush_set(void *data, u64 val) { @@ -1014,12 +1052,54 @@ static int entry_flush_get(void *data, u64 *val) DEFINE_SIMPLE_ATTRIBUTE(fops_entry_flush, entry_flush_get, entry_flush_set, "%llu\n"); +static int uaccess_flush_set(void *data, u64 val) +{ + bool enable; + + if (val == 1) + enable = true; + else if (val == 0) + enable = false; + else + return -EINVAL; + + /* Only do anything if we're changing state */ + if (enable != uaccess_flush) + uaccess_flush_enable(enable); + + return 0; +} + +static int uaccess_flush_get(void *data, u64 *val) +{ + *val = uaccess_flush ? 1 : 0; + return 0; +} + +DEFINE_SIMPLE_ATTRIBUTE(fops_uaccess_flush, uaccess_flush_get, uaccess_flush_set, "%llu\n"); + + static __init int rfi_flush_debugfs_init(void) { debugfs_create_file("rfi_flush", 0600, powerpc_debugfs_root, NULL, &fops_rfi_flush); debugfs_create_file("entry_flush", 0600, powerpc_debugfs_root, NULL, &fops_entry_flush); + debugfs_create_file("uaccess_flush", 0600, powerpc_debugfs_root, NULL, &fops_uaccess_flush); return 0; } device_initcall(rfi_flush_debugfs_init); #endif + +/* + * setup_uaccess_flush runs before jump_label_init, so we can't do the setup + * there. Do it now instead. + */ +static __init int uaccess_flush_static_key_init(void) +{ + if (uaccess_flush) { + printk(KERN_DEBUG "uaccess-flush: switching on static key\n"); + static_branch_enable(&uaccess_flush_key); + } + return 0; +} +early_initcall(uaccess_flush_static_key_init); #endif /* CONFIG_PPC_BOOK3S_64 */ diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S index 43a8cfa5e2fb..f820d03335eb 100644 --- a/arch/powerpc/kernel/vmlinux.lds.S +++ b/arch/powerpc/kernel/vmlinux.lds.S @@ -80,6 +80,13 @@ SECTIONS __stop___stf_entry_barrier_fixup = .; } + . = ALIGN(8); + __uaccess_flush_fixup : AT(ADDR(__uaccess_flush_fixup) - LOAD_OFFSET) { + __start___uaccess_flush_fixup = .; + *(__uaccess_flush_fixup) + __stop___uaccess_flush_fixup = .; + } + . 
= ALIGN(8); __entry_flush_fixup : AT(ADDR(__entry_flush_fixup) - LOAD_OFFSET) { __start___entry_flush_fixup = .; diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c index 4f13bba13596..40b134bf5a68 100644 --- a/arch/powerpc/lib/feature-fixups.c +++ b/arch/powerpc/lib/feature-fixups.c @@ -229,6 +229,56 @@ void do_stf_barrier_fixups(enum stf_barrier_type types) do_stf_exit_barrier_fixups(types); } +void do_uaccess_flush_fixups(enum l1d_flush_type types) +{ + unsigned int instrs[4], *dest; + long *start, *end; + int i; + + start = PTRRELOC(&__start___uaccess_flush_fixup); + end = PTRRELOC(&__stop___uaccess_flush_fixup); + + instrs[0] = 0x60000000; /* nop */ + instrs[1] = 0x60000000; /* nop */ + instrs[2] = 0x60000000; /* nop */ + instrs[3] = 0x4e800020; /* blr */ + + i = 0; + if (types == L1D_FLUSH_FALLBACK) { + instrs[3] = 0x60000000; /* nop */ + /* fallthrough to fallback flush */ + } + + if (types & L1D_FLUSH_ORI) { + instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */ + instrs[i++] = 0x63de0000; /* ori 30,30,0 L1d flush*/ + } + + if (types & L1D_FLUSH_MTTRIG) + instrs[i++] = 0x7c12dba6; /* mtspr TRIG2,r0 (SPR #882) */ + + for (i = 0; start < end; start++, i++) { + dest = (void *)start + *start; + + pr_devel("patching dest %lx\n", (unsigned long)dest); + + patch_instruction(dest, instrs[0]); + + patch_instruction((dest + 1), instrs[1]); + patch_instruction((dest + 2), instrs[2]); + patch_instruction((dest + 3), instrs[3]); + } + + printk(KERN_DEBUG "uaccess-flush: patched %d locations (%s flush)\n", i, + (types == L1D_FLUSH_NONE) ? "no" : + (types == L1D_FLUSH_FALLBACK) ? "fallback displacement" : + (types & L1D_FLUSH_ORI) ? (types & L1D_FLUSH_MTTRIG) + ? "ori+mttrig type" + : "ori type" : + (types & L1D_FLUSH_MTTRIG) ? "mttrig type" + : "unknown"); +} + void do_entry_flush_fixups(enum l1d_flush_type types) { unsigned int instrs[3], *dest; diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c index fe3f1f438f78..6259228a0e18 100644 --- a/arch/powerpc/platforms/powernv/setup.c +++ b/arch/powerpc/platforms/powernv/setup.c @@ -126,9 +126,10 @@ static void pnv_setup_rfi_flush(void) /* * 4.4 doesn't support Power9 bare metal, so we don't need to flush - * here - the flush fixes a P9 specific vulnerability. + * here - the flushes fix a P9 specific vulnerability. 
*/ security_ftr_clear(SEC_FTR_L1D_FLUSH_ENTRY); + security_ftr_clear(SEC_FTR_L1D_FLUSH_UACCESS); enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \ (security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) || \ @@ -140,6 +141,10 @@ static void pnv_setup_rfi_flush(void) enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && security_ftr_enabled(SEC_FTR_L1D_FLUSH_ENTRY); setup_entry_flush(enable); + + enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && + security_ftr_enabled(SEC_FTR_L1D_FLUSH_UACCESS); + setup_uaccess_flush(enable); } static void __init pnv_setup_arch(void) diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c index 69f1808ecbd2..498c5092bd23 100644 --- a/arch/powerpc/platforms/pseries/setup.c +++ b/arch/powerpc/platforms/pseries/setup.c @@ -588,6 +588,10 @@ void pseries_setup_rfi_flush(void) enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && security_ftr_enabled(SEC_FTR_L1D_FLUSH_ENTRY); setup_entry_flush(enable); + + enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && + security_ftr_enabled(SEC_FTR_L1D_FLUSH_UACCESS); + setup_uaccess_flush(enable); } static void __init pSeries_setup_arch(void)
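
Taken together with the preceding patches, the uaccess flush hangs off the same window-closing path that every uaccess primitive already funnels through. A sketch of the composition (the wrapper function is hypothetical; the callees are the ones added in the hunks above):

	/* Sketch: what closing the user-access window amounts to on 64s
	 * once this series is applied. */
	static inline void uaccess_window_close(void __user *to,
						const void __user *from,
						unsigned long size)
	{
		prevent_user_access(to, from, size);
		/* ...which, from the kup-radix.h hunk, does:
		 *
		 *   if (static_branch_unlikely(&uaccess_flush_key))
		 *           do_uaccess_flush();
		 *
		 * do_uaccess_flush() itself is patched at boot by
		 * do_uaccess_flush_fixups(): either the fallback
		 * L1D_DISPLACEMENT_FLUSH sequence or the ori/mttrig
		 * forms, depending on the enabled l1d_flush_type. */
	}
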