From patchwork Sun Oct 27 21:00:16 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 177830 Delivered-To: patch@linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org.
[209.132.180.67]) by mx.google.com with ESMTP id e7si5142608ejk.20.2019.10.27.14.08.35; Sun, 27 Oct 2019 14:08:35 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=hH+ojIv3; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727930AbfJ0VIe (ORCPT + 14 others); Sun, 27 Oct 2019 17:08:34 -0400 Received: from mail.kernel.org ([198.145.29.99]:54768 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728744AbfJ0VId (ORCPT ); Sun, 27 Oct 2019 17:08:33 -0400 Received: from localhost (100.50.158.77.rev.sfr.net [77.158.50.100]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 2977A20B7C; Sun, 27 Oct 2019 21:08:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1572210511; bh=El9ZsWCJ0ZN4AZBe+Cgor0HJAgaRb2PCeyU5H1wVL+o=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=hH+ojIv3KXoOb9g87yvyTE3wrTK+73l5sWKHIozupLvp4OH9KlAxkymQWYdYnKvmC H/z8YGnBduNYWqpR7RnlXTvtwRf7b8JwEqrChA21Ni94Cs2+CYA5oWdtih9zMp/VcZ 4gOD7nQkTzVkStTr0qNuOE2h30/0ZIXFc/MJUw3Q= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Catalin Marinas , Dave Martin , Mark Rutland , Will Deacon , Suzuki K Poulose , Ard Biesheuvel Subject: [PATCH 4.14 039/119] arm64: Fix the feature type for ID register fields Date: Sun, 27 Oct 2019 22:00:16 +0100 Message-Id: <20191027203313.723141654@linuxfoundation.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20191027203259.948006506@linuxfoundation.org> References: <20191027203259.948006506@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Sender: stable-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Suzuki K Poulose [ Upstream commit 5bdecb7971572a1aef828df507558e7a4dfe25ec ] Now that the ARM ARM clearly specifies the rules for inferring the values of the ID register fields, fix the types of the feature bits we have in the kernel. As per ARM ARM DDI0487B.b, section D10.1.4 "Principles of the ID scheme for fields in ID registers" lists the registers to which the scheme applies along with the exceptions. This patch changes the relevant feature bits from FTR_EXACT to FTR_LOWER_SAFE to select the safer value. This will enable an older kernel running on a new CPU detect the safer option rather than completely disabling the feature. Cc: Catalin Marinas Cc: Dave Martin Cc: Mark Rutland Cc: Will Deacon Signed-off-by: Suzuki K Poulose Signed-off-by: Will Deacon Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman --- arch/arm64/kernel/cpufeature.c | 102 ++++++++++++++++++++--------------------- 1 file changed, 51 insertions(+), 51 deletions(-) --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -107,11 +107,11 @@ cpufeature_pan_not_uao(const struct arm6 * sync with the documentation of the CPU feature register ABI. 
*/ static const struct arm64_ftr_bits ftr_id_aa64isar0[] = { - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_DP_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_SM4_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_SM3_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_SHA3_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_RDM_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM3_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA3_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RDM_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_CRC32_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA2_SHIFT, 4, 0), @@ -121,36 +121,36 @@ static const struct arm64_ftr_bits ftr_i }; static const struct arm64_ftr_bits ftr_id_aa64isar1[] = { - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_FCMA_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_DPB_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = { ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64PFR0_GIC_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0), S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI), S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI), /* Linux doesn't care about the EL3 */ - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64PFR0_EL3_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64PFR0_EL2_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = { - 
S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI), - S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR0_BIGENDEL0_SHIFT, 4, 0), + S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI), + S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL0_SHIFT, 4, 0), /* Linux shouldn't care about secure memory */ - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR0_BIGENDEL_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR0_ASID_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0), /* * Differing PARange is fine as long as all peripherals and memory are mapped * within the minimum PARange of all CPUs @@ -161,20 +161,20 @@ static const struct arm64_ftr_bits ftr_i static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = { ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_PAN_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR1_LOR_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR1_HPD_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR1_VHE_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR1_VMIDBITS_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR1_HADBS_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_LOR_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HPD_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VMIDBITS_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_LVA_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_IESB_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_LSM_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_UAO_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_CNP_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LSM_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_UAO_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64MMFR2_CNP_SHIFT, 4, 0), ARM64_FTR_END, }; @@ -201,14 +201,14 @@ struct arm64_ftr_reg arm64_ftr_reg_ctrel }; static const struct arm64_ftr_bits ftr_id_mmfr0[] = { - S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 28, 4, 0xf), /* InnerShr */ - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 24, 4, 0), /* FCSE */ + S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 28, 4, 0xf), /* InnerShr */ + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 24, 4, 0), /* FCSE */ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, 20, 4, 0), /* AuxReg */ - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 16, 4, 0), /* TCM */ - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 12, 4, 0), /* ShareLvl */ - S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 8, 4, 0xf), /* OuterShr */ - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 4, 4, 0), /* PMSA */ - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 0, 4, 0), /* VMSA */ + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 16, 4, 0), /* TCM */ + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 12, 4, 0), /* ShareLvl */ + S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 8, 4, 0xf), /* OuterShr */ + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 4, 4, 0), /* PMSA */ + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0), /* VMSA */ ARM64_FTR_END, }; @@ -229,8 +229,8 @@ static const struct arm64_ftr_bits ftr_i }; static const struct arm64_ftr_bits ftr_mvfr2[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 4, 4, 0), /* FPMisc */ - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 0, 4, 0), /* SIMDMisc */ + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 4, 4, 0), /* FPMisc */ + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0), /* SIMDMisc */ ARM64_FTR_END, }; @@ -242,25 +242,25 @@ static const struct arm64_ftr_bits ftr_d static const struct arm64_ftr_bits ftr_id_isar5[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_ISAR5_RDM_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_ISAR5_CRC32_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_ISAR5_SHA2_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_ISAR5_SHA1_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_ISAR5_AES_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_ISAR5_SEVL_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SEVL_SHIFT, 4, 0), ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_id_mmfr4[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 4, 4, 0), /* ac2 */ + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 4, 4, 0), /* ac2 */ ARM64_FTR_END, }; static const struct arm64_ftr_bits ftr_id_pfr0[] = { - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 12, 4, 0), /* State3 */ - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 8, 4, 0), /* State2 */ - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 4, 4, 0), /* State1 */ - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 0, 4, 0), /* State0 */ + ARM64_FTR_BITS(FTR_HIDDEN, 
FTR_STRICT, FTR_LOWER_SAFE, 12, 4, 0), /* State3 */ + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 8, 4, 0), /* State2 */ + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 4, 4, 0), /* State1 */ + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0), /* State0 */ ARM64_FTR_END, }; From patchwork Sun Oct 27 21:00:18 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 177881 Delivered-To: patch@linaro.org Received: by 2002:a92:409a:0:0:0:0:0 with SMTP id d26csp2552279ill; Sun, 27 Oct 2019 14:35:40 -0700 (PDT) X-Google-Smtp-Source: APXvYqxhrrEeYsbUDvyN7c+NDz0WBDtax03v5nInNPCde3PAMQsJxRCgum9R3BqQRv3RDM892AxD X-Received: by 2002:aa7:db17:: with SMTP id t23mr16104610eds.135.1572212140840; Sun, 27 Oct 2019 14:35:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1572212140; cv=none; d=google.com; s=arc-20160816; b=OTHCPklqZrpzwGWWyA7YtDSWS5EAIGpokvDSLdVOgfkixuPob8O8zpIIP2nL1DGdQR /NNcbAw3pfcKY3jdHoPqsuHfQTnRyRSxYesuwnWg9rye4sNwVIkOuFGaxZVmfWEk4FKT l+AB7cZfKsTpKE7ChWDJBomhGJklsFORGRQrk4AQhQXi+lxfjBL4PHg87VLPQ4DdH1sN R6TfMozI8EfSLD24OwkCMKtKOxRHzrIZ+KXMfDCQRxHi9f+564Ez2EkS0lIz91qr7ZZD Ww9vha7bcmccxSQKAC8VanofEo8hNGYXY+FACxbjsQAjpsjE/3zPlnFfrRgNq7i8uUHx y+xw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=E/d+FdwFTvwgEWG4ylsVjcvrRfOtui44Zrbxp710KCk=; b=vFlD94EMo3HbSnSo5XEgD2j/7ZRjzN2Ck0exuLXPvg2TL49WzNBqtDozJbrmw/q2yC a8zQ6Nk7HFF/eaBn0cuAqc3ulsO73WBLhFp3F/ThU/tqbfe9MsUeuBXCQ8lbBM0Ahbd6 9ksr2LLSSoH5U/W+RtFn4Ehb5yD6FI2zNvv8LNCuTqDKS2Zx+zE61rMhqEqF4BBk37SH 8jhIdWlxTtDYAAIldJiD5glgFXQcMIHRrqHbD0jnl+G66I6rYwHPnloXsKg4qixJ9Ird fQmFe/ttOcO3t1evXN8W6MNRVWZ3ChzoqS7oGWDQXx40TfrHcAwibMj6dMa3WxB+zaee yRrQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=swR2oEST; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id u1si5357353ejt.18.2019.10.27.14.35.40; Sun, 27 Oct 2019 14:35:40 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=swR2oEST; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728362AbfJ0Vfj (ORCPT + 14 others); Sun, 27 Oct 2019 17:35:39 -0400 Received: from mail.kernel.org ([198.145.29.99]:54894 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728766AbfJ0VIi (ORCPT ); Sun, 27 Oct 2019 17:08:38 -0400 Received: from localhost (100.50.158.77.rev.sfr.net [77.158.50.100]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 8F75020B7C; Sun, 27 Oct 2019 21:08:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1572210517; bh=MbJJGwCPobPXC73iVz1NPLAWQ5aGkr8Eo6NSbR550Mw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=swR2oESTSRZaWjW4oWfksFEOXt90FldNxoMnAeS1FxDV7wWrYdIwULPBzGNjfn5bB dobBu674tS9zrsOxg3Vt+7DLcEpr2C1Vk0QyQVLVKGPDO4K8i0YjtUvQswUvm1jMmE 9P9l+T3VgTMG7TEiMx2RMmHKbxxxRxQmdJuCXAIM= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Catalin Marinas , Will Deacon , Mark Rutland , Dave Martin , Suzuki K Poulose , Ard Biesheuvel Subject: [PATCH 4.14 041/119] arm64: Documentation: cpu-feature-registers: Remove RES0 fields Date: Sun, 27 Oct 2019 22:00:18 +0100 Message-Id: <20191027203315.175774322@linuxfoundation.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20191027203259.948006506@linuxfoundation.org> References: <20191027203259.948006506@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Sender: stable-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Suzuki K Poulose [ Upstream commit 847ecd3fa311cde0f10a1b66c572abb136742b1d ] Remove the invisible RES0 field entries from the table, listing fields in CPU ID feature registers, as : 1) We are only interested in the user visible fields. 2) The field description may not be up-to-date, as the field could be assigned a new meaning. 3) We already explain the rules of the fields which are not visible. 
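Expanding on point 1), the fields kept in these tables are exactly the ones userspace can observe through the MRS emulation this document describes; invisible fields are reported as if the feature were absent. A minimal userspace sketch, not part of this patch, assuming an arm64 kernel with that emulation and using the AES [7-4] and CRC32 [19-16] positions from the ID_AA64ISAR0_EL1 table:

/*
 * Illustrative userspace sketch only: read the sanitised ID_AA64ISAR0_EL1
 * value via the kernel's MRS emulation. Assumes an arm64 kernel providing
 * the emulation described in cpu-feature-registers.txt.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t isar0;

	/* Traps to the kernel, which returns the system-wide sanitised value. */
	asm volatile("mrs %0, ID_AA64ISAR0_EL1" : "=r" (isar0));

	printf("AES   [7-4]  : %llx\n", (unsigned long long)((isar0 >> 4) & 0xf));
	printf("CRC32 [19-16]: %llx\n", (unsigned long long)((isar0 >> 16) & 0xf));
	return 0;
}
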
Cc: Catalin Marinas Cc: Will Deacon Acked-by: Mark Rutland Reviewed-by: Dave Martin Signed-off-by: Suzuki K Poulose Signed-off-by: Will Deacon [ardb: fix up for missing SVE in context] Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman --- Documentation/arm64/cpu-feature-registers.txt | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) --- a/Documentation/arm64/cpu-feature-registers.txt +++ b/Documentation/arm64/cpu-feature-registers.txt @@ -110,7 +110,6 @@ infrastructure: x--------------------------------------------------x | Name | bits | visible | |--------------------------------------------------| - | RES0 | [63-52] | n | |--------------------------------------------------| | FHM | [51-48] | y | |--------------------------------------------------| @@ -124,8 +123,6 @@ infrastructure: |--------------------------------------------------| | RDM | [31-28] | y | |--------------------------------------------------| - | RES0 | [27-24] | n | - |--------------------------------------------------| | ATOMICS | [23-20] | y | |--------------------------------------------------| | CRC32 | [19-16] | y | @@ -135,8 +132,6 @@ infrastructure: | SHA1 | [11-8] | y | |--------------------------------------------------| | AES | [7-4] | y | - |--------------------------------------------------| - | RES0 | [3-0] | n | x--------------------------------------------------x @@ -144,7 +139,8 @@ infrastructure: x--------------------------------------------------x | Name | bits | visible | |--------------------------------------------------| - | RES0 | [63-28] | n | + |--------------------------------------------------| + | SVE | [35-32] | y | |--------------------------------------------------| | GIC | [27-24] | n | |--------------------------------------------------| From patchwork Sun Oct 27 21:00:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 177836 Delivered-To: patch@linaro.org Received: by 2002:a92:409a:0:0:0:0:0 with SMTP id d26csp2531397ill; Sun, 27 Oct 2019 14:09:03 -0700 (PDT) X-Google-Smtp-Source: APXvYqwzy5pI8JJStyI8sdNDbig+Fypo5qY9jKhrMZNhVHPhDeMtc9IZXvYMiq0IF7OQi/Lr8QEg X-Received: by 2002:a05:6402:3c7:: with SMTP id t7mr16100055edw.232.1572210543642; Sun, 27 Oct 2019 14:09:03 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1572210543; cv=none; d=google.com; s=arc-20160816; b=zMEuvsf2m9R3wKjOQxspxV9X1z2fwYNG8TvKI3cM1wyCQjL1VGfBFO8UFW1gZCL4Mr JSCcSPGqmWp172kWlJIMjVZxIvBpBGP8GBdxyARJWojcSXpnldnNuQqytfXkxPoX/PzI YbOo2oBcRCv9dDHlNO6Y3CTimAemwGxurjlTvJmn4GjIVDK/635RemBp2w9sO+SM9inm 3xsY2nVYUhUBEVwzThwzLRcwPBYZomX41nm5YJOGeOqZQi3Gt1nXblajs06+oQ9UFM5G 6q/g1Xtbze/Keqb9ubxCXJKcJ9NYJiQaI8K7nGhcA2rDjmX50RZ//bPxMJRMQ55xIK1h UmmQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=U/4cT6XByVFDjhejXLDQ5SlEDBZNx6RWpUAjPiFk5BI=; b=UIq6/7p366HHJygbnWMVlK+T/pB4GmLJTNGeFxPbwaYgVoGlYYyuSR8EpzOuAjaZhK wJc73nxEzBaFW8g1CznViDJuJJMX6g3BBGz91wuLtbqwU+BofSUZIdTLAK1k2Wq9QGpO 3c7z0yxAa3D6p3ccXfvcLV1tnlWIf5ZjjNfGygl80l2ua5dwa3LH/yIqL9N0Lbbm6V7h DjpsZqINBCwn/vk2c5HXwOYkOpzp6irnFOHm2Z3qo1E1MNCyiaOMQWY7Vp4H8WSBJy0o Q8Ye3fApiTEROPOHHxgfdYrJ567NQzjjk5s1R0uHP8LqSiogFGmcBI9m/p1OSuOI2C65 Ys5g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b="1/YPhprM"; spf=pass 
(google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id d10si6595307edn.266.2019.10.27.14.09.03; Sun, 27 Oct 2019 14:09:03 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b="1/YPhprM"; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727967AbfJ0VJB (ORCPT + 14 others); Sun, 27 Oct 2019 17:09:01 -0400 Received: from mail.kernel.org ([198.145.29.99]:55330 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729455AbfJ0VJB (ORCPT ); Sun, 27 Oct 2019 17:09:01 -0400 Received: from localhost (100.50.158.77.rev.sfr.net [77.158.50.100]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 841282064A; Sun, 27 Oct 2019 21:08:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1572210540; bh=JwJbmvmue4rSUIF6gaZB1bl6wcv45ZlgHlBozyiTPtA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=1/YPhprMV9xrGYedamsbLpCSqfMG9RDk07lOAKbK6SnlZGwbv5j0I0uSSax9m7vYq HeB36d8hGBfHqFBKUrpn9x2k8FmttrRS2AS73Vy6tGta5BQoQv4fMMwfteN+wSsNDU LTusBUk7p6wfUdYHqXzbP5wl/6LQ/AXtTSy4+lOA= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Will Deacon , Marc Zyngier , Mark Rutland , Andre Przywara , Dave Martin , Suzuki K Poulose , Ard Biesheuvel Subject: [PATCH 4.14 048/119] arm64: capabilities: Move errata processing code Date: Sun, 27 Oct 2019 22:00:25 +0100 Message-Id: <20191027203319.576429658@linuxfoundation.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20191027203259.948006506@linuxfoundation.org> References: <20191027203259.948006506@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Sender: stable-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Suzuki K Poulose [ Upstream commit 1e89baed5d50d2b8d9fd420830902570270703f1 ] We have errata work around processing code in cpu_errata.c, which calls back into helpers defined in cpufeature.c. Now that we are going to make the handling of capabilities generic, by adding the information to each capability, move the errata work around specific processing code. No functional changes. 
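To see why the move is natural, it helps to picture the table-driven pattern that the errata and feature paths share: each entry carries its own matches() and enable() hooks, and one generic walker can serve either list. A stand-alone sketch of that pattern (names here are illustrative, not the kernel's; the kernel helpers carry more per-entry state and split detection from enabling):

/* Stand-alone model of a capability-table walk, for illustration only. */
#include <stdbool.h>
#include <stdio.h>

struct capability {
	const char *desc;
	bool (*matches)(void);
	void (*enable)(void);
};

static bool always_affected(void) { return true; }
static void apply_workaround(void) { printf("workaround applied\n"); }

static const struct capability errata[] = {
	{ "example erratum", always_affected, apply_workaround },
	{ 0 }	/* sentinel: matches == NULL terminates the walk */
};

static void update_capabilities(const struct capability *caps, const char *info)
{
	for (; caps->matches; caps++) {
		if (!caps->matches())
			continue;
		printf("%s %s\n", info, caps->desc);
		if (caps->enable)
			caps->enable();
	}
}

int main(void)
{
	update_capabilities(errata, "enabling workaround for");
	return 0;
}
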
Cc: Will Deacon Cc: Marc Zyngier Cc: Mark Rutland Cc: Andre Przywara Reviewed-by: Dave Martin Signed-off-by: Suzuki K Poulose Signed-off-by: Will Deacon Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman --- arch/arm64/include/asm/cpufeature.h | 7 ----- arch/arm64/kernel/cpu_errata.c | 33 --------------------------- arch/arm64/kernel/cpufeature.c | 43 +++++++++++++++++++++++++++++++++--- 3 files changed, 40 insertions(+), 43 deletions(-) --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -230,15 +230,8 @@ static inline bool id_aa64pfr0_32bit_el0 } void __init setup_cpu_features(void); - -void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps, - const char *info); -void enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps); void check_local_cpu_capabilities(void); -void update_cpu_errata_workarounds(void); -void __init enable_errata_workarounds(void); -void verify_local_cpu_errata_workarounds(void); u64 read_sanitised_ftr_reg(u32 id); --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -621,36 +621,3 @@ const struct arm64_cpu_capabilities arm6 { } }; - -/* - * The CPU Errata work arounds are detected and applied at boot time - * and the related information is freed soon after. If the new CPU requires - * an errata not detected at boot, fail this CPU. - */ -void verify_local_cpu_errata_workarounds(void) -{ - const struct arm64_cpu_capabilities *caps = arm64_errata; - - for (; caps->matches; caps++) { - if (cpus_have_cap(caps->capability)) { - if (caps->cpu_enable) - caps->cpu_enable(caps); - } else if (caps->matches(caps, SCOPE_LOCAL_CPU)) { - pr_crit("CPU%d: Requires work around for %s, not detected" - " at boot time\n", - smp_processor_id(), - caps->desc ? : "an erratum"); - cpu_die_early(); - } - } -} - -void update_cpu_errata_workarounds(void) -{ - update_cpu_capabilities(arm64_errata, "enabling workaround for"); -} - -void __init enable_errata_workarounds(void) -{ - enable_cpu_capabilities(arm64_errata); -} --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -484,6 +484,9 @@ static void __init init_cpu_ftr_reg(u32 reg->user_mask = user_mask; } +extern const struct arm64_cpu_capabilities arm64_errata[]; +static void update_cpu_errata_workarounds(void); + void __init init_cpu_features(struct cpuinfo_arm64 *info) { /* Before we start using the tables, make sure it is sorted */ @@ -1160,8 +1163,8 @@ static bool __this_cpu_has_cap(const str return false; } -void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps, - const char *info) +static void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps, + const char *info) { for (; caps->matches; caps++) { if (!caps->matches(caps, caps->def_scope)) @@ -1185,7 +1188,8 @@ static int __enable_cpu_capability(void * Run through the enabled capabilities and enable() it on all active * CPUs */ -void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps) +static void __init +enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps) { for (; caps->matches; caps++) { unsigned int num = caps->capability; @@ -1268,6 +1272,39 @@ verify_local_cpu_features(const struct a } /* + * The CPU Errata work arounds are detected and applied at boot time + * and the related information is freed soon after. If the new CPU requires + * an errata not detected at boot, fail this CPU. 
+ */ +static void verify_local_cpu_errata_workarounds(void) +{ + const struct arm64_cpu_capabilities *caps = arm64_errata; + + for (; caps->matches; caps++) { + if (cpus_have_cap(caps->capability)) { + if (caps->cpu_enable) + caps->cpu_enable(caps); + } else if (caps->matches(caps, SCOPE_LOCAL_CPU)) { + pr_crit("CPU%d: Requires work around for %s, not detected" + " at boot time\n", + smp_processor_id(), + caps->desc ? : "an erratum"); + cpu_die_early(); + } + } +} + +static void update_cpu_errata_workarounds(void) +{ + update_cpu_capabilities(arm64_errata, "enabling workaround for"); +} + +static void __init enable_errata_workarounds(void) +{ + enable_cpu_capabilities(arm64_errata); +} + +/* * Run through the enabled system capabilities and enable() it on this CPU. * The capabilities were decided based on the available CPUs at the boot time. * Any new CPU should match the system wide status of the capability. If the From patchwork Sun Oct 27 21:00:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 177837 Delivered-To: patch@linaro.org Received: by 2002:a92:409a:0:0:0:0:0 with SMTP id d26csp2531729ill; Sun, 27 Oct 2019 14:09:24 -0700 (PDT) X-Google-Smtp-Source: APXvYqw3rXpUJrZXZepWx2cRSLf6dogUqgPOLEWi7kn96+MDyHBfn+SGRQbQGQ+DbLsTpKNnrPNV X-Received: by 2002:aa7:d615:: with SMTP id c21mr16003799edr.167.1572210564269; Sun, 27 Oct 2019 14:09:24 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1572210564; cv=none; d=google.com; s=arc-20160816; b=Gez+g4tDzLD1M9YM0+76NKrCpCdsfXWe1mRDcUu29jfm0OvYNK8/ZvxFrAlvOjLrey 5t6cQSi4mwB0Kl4paOYKXtzSr9JouV9pG6oTtBBvLhsHUdGuJ3lrNiz4ESmINP2qVKk4 b4xIUno+BuNAOy75iySS9IP1wevvJYSjcOdGE7b1dtus+U9dzc9dJmzi9U/fcwKyvHrC yJySxyId0El0hJUDe/TVcvxEXeGEPSFgxVM+xmh4nF7RfET8B5ZsUCwqtuZmBNElWDE9 RaxmLa2XzoofIx6cvjfWfQGpxsOSnVQmAgm4nT7faKQIVjczwZrPws9rbs/EBe52/ohw gyrA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=8dzCTi7GEvRzk/F4ib0vUWfzdtWluzduai6V9DJxGEY=; b=GnWiQ+z5M1hHx22pdj+PECf/jPgNJYxCQ5ZNtHO8vO8dl0pI8CAk+KUTtrNLnN5eeo yfEx87HWJpBZcR5bNMYUubfuw+P3FPeFJrAePqcHoFn+om3KCPkluxrzsfes4PfB473c NDXTbgMafPg0LROh5ffLHSW8t8mahMAOCIxGgzujnl+pFSvF7vxFfkF9OcMDO80RbzTs ZSTH+D5pxhdE4dPGgoYzC3CjwphWg9YC/CnjJ9tsH8fOof3uN9HB3Z26KMvXnDDMi9US 5cu2rGGhU+7igmxezc2w1jBwUWuHIf/4l9WaQXNfjhV3pFd9BLAwZ1fQ2T8b+Cc77Kui ATuA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=PoNmzatR; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id f19si6250342eda.435.2019.10.27.14.09.23; Sun, 27 Oct 2019 14:09:24 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=PoNmzatR; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727718AbfJ0VJW (ORCPT + 14 others); Sun, 27 Oct 2019 17:09:22 -0400 Received: from mail.kernel.org ([198.145.29.99]:55714 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729487AbfJ0VJU (ORCPT ); Sun, 27 Oct 2019 17:09:20 -0400 Received: from localhost (100.50.158.77.rev.sfr.net [77.158.50.100]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 8D151208C0; Sun, 27 Oct 2019 21:09:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1572210559; bh=tO/fMj0rNXURUSNa/LcjmZo8h8zJc4QFkkoZPUTx8c4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=PoNmzatRZa9Yvo27sjgG85wikZpElsV5nOn1V+XRXV571dF5HiFfcaAwWSaiKP4UJ gFDur0EzTPi8EOgkmowDzG5ooG8jGeyD/eLhcjg0Bt9B1TgFpN1vsdOghsou1QgPdF OWbiTAviSMhtYS7EPMumvwz1zArEqbASeRLbzjTs= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Will Deacon , Mark Rutland , Dave Martin , Suzuki K Poulose , Ard Biesheuvel Subject: [PATCH 4.14 050/119] arm64: capabilities: Add flags to handle the conflicts on late CPU Date: Sun, 27 Oct 2019 22:00:27 +0100 Message-Id: <20191027203320.879287292@linuxfoundation.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20191027203259.948006506@linuxfoundation.org> References: <20191027203259.948006506@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Sender: stable-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Suzuki K Poulose [ Upstream commit 5b4747c5dce7a873e1e7fe1608835825f714267a ] When a CPU is brought up, it is checked against the caps that are known to be enabled on the system (via verify_local_cpu_capabilities()). Based on the state of the capability on the CPU vs. that of System we could have the following combinations of conflict. x-----------------------------x | Type | System | Late CPU | |-----------------------------| | a | y | n | |-----------------------------| | b | n | y | x-----------------------------x Case (a) is not permitted for caps which are system features, which the system expects all the CPUs to have (e.g VHE). While (a) is ignored for all errata work arounds. However, there could be exceptions to the plain filtering approach. e.g, KPTI is an optional feature for a late CPU as long as the system already enables it. Case (b) is not permitted for errata work arounds that cannot be activated after the kernel has finished booting.And we ignore (b) for features. Here, yet again, KPTI is an exception, where if a late CPU needs KPTI we are too late to enable it (because we change the allocation of ASIDs etc). Add two different flags to indicate how the conflict should be handled. 
ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU - CPUs may have the capability ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU - CPUs may not have the cappability. Now that we have the flags to describe the behavior of the errata and the features, as we treat them, define types for ERRATUM and FEATURE. Cc: Will Deacon Cc: Mark Rutland Reviewed-by: Dave Martin Signed-off-by: Suzuki K Poulose Signed-off-by: Will Deacon Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman --- arch/arm64/include/asm/cpufeature.h | 68 ++++++++++++++++++++++++++++++++++++ arch/arm64/kernel/cpu_errata.c | 12 +++--- arch/arm64/kernel/cpufeature.c | 26 ++++++------- 3 files changed, 87 insertions(+), 19 deletions(-) --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -149,6 +149,7 @@ extern struct arm64_ftr_reg arm64_ftr_re * an action, based on the severity (e.g, a CPU could be prevented from * booting or cause a kernel panic). The CPU is allowed to "affect" the * state of the capability, if it has not been finalised already. + * See section 5 for more details on conflicts. * * 4) Action: As mentioned in (2), the kernel can take an action for each * detected capability, on all CPUs on the system. Appropriate actions @@ -166,6 +167,34 @@ extern struct arm64_ftr_reg arm64_ftr_re * * check_local_cpu_capabilities() -> verify_local_cpu_capabilities() * + * 5) Conflicts: Based on the state of the capability on a late CPU vs. + * the system state, we could have the following combinations : + * + * x-----------------------------x + * | Type | System | Late CPU | + * |-----------------------------| + * | a | y | n | + * |-----------------------------| + * | b | n | y | + * x-----------------------------x + * + * Two separate flag bits are defined to indicate whether each kind of + * conflict can be allowed: + * ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU - Case(a) is allowed + * ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU - Case(b) is allowed + * + * Case (a) is not permitted for a capability that the system requires + * all CPUs to have in order for the capability to be enabled. This is + * typical for capabilities that represent enhanced functionality. + * + * Case (b) is not permitted for a capability that must be enabled + * during boot if any CPU in the system requires it in order to run + * safely. This is typical for erratum work arounds that cannot be + * enabled after the corresponding capability is finalised. + * + * In some non-typical cases either both (a) and (b), or neither, + * should be permitted. This can be described by including neither + * or both flags in the capability's type field. */ @@ -179,6 +208,33 @@ extern struct arm64_ftr_reg arm64_ftr_re #define SCOPE_SYSTEM ARM64_CPUCAP_SCOPE_SYSTEM #define SCOPE_LOCAL_CPU ARM64_CPUCAP_SCOPE_LOCAL_CPU +/* + * Is it permitted for a late CPU to have this capability when system + * hasn't already enabled it ? + */ +#define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU ((u16)BIT(4)) +/* Is it safe for a late CPU to miss this capability when system has it */ +#define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU ((u16)BIT(5)) + +/* + * CPU errata workarounds that need to be enabled at boot time if one or + * more CPUs in the system requires it. When one of these capabilities + * has been enabled, it is safe to allow any CPU to boot that doesn't + * require the workaround. However, it is not safe if a "late" CPU + * requires a workaround and the system hasn't enabled it already. 
+ */ +#define ARM64_CPUCAP_LOCAL_CPU_ERRATUM \ + (ARM64_CPUCAP_SCOPE_LOCAL_CPU | ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU) +/* + * CPU feature detected at boot time based on system-wide value of a + * feature. It is safe for a late CPU to have this feature even though + * the system hasn't enabled it, although the featuer will not be used + * by Linux in this case. If the system has enabled this feature already, + * then every late CPU must have it. + */ +#define ARM64_CPUCAP_SYSTEM_FEATURE \ + (ARM64_CPUCAP_SCOPE_SYSTEM | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU) + struct arm64_cpu_capabilities { const char *desc; u16 capability; @@ -212,6 +268,18 @@ static inline int cpucap_default_scope(c return cap->type & ARM64_CPUCAP_SCOPE_MASK; } +static inline bool +cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap) +{ + return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU); +} + +static inline bool +cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap) +{ + return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU); +} + extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS); extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS]; extern struct static_key_false arm64_const_caps_ready; --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -406,14 +406,14 @@ static bool has_ssbd_mitigation(const st #endif /* CONFIG_ARM64_SSBD */ #define MIDR_RANGE(model, min, max) \ - .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU, \ + .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, \ .matches = is_affected_midr_range, \ .midr_model = model, \ .midr_range_min = min, \ .midr_range_max = max #define MIDR_ALL_VERSIONS(model) \ - .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU, \ + .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, \ .matches = is_affected_midr_range, \ .midr_model = model, \ .midr_range_min = 0, \ @@ -517,14 +517,14 @@ const struct arm64_cpu_capabilities arm6 .desc = "Mismatched cache line size", .capability = ARM64_MISMATCHED_CACHE_LINE_SIZE, .matches = has_mismatched_cache_type, - .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU, + .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, .cpu_enable = cpu_enable_trap_ctr_access, }, { .desc = "Mismatched cache type", .capability = ARM64_MISMATCHED_CACHE_TYPE, .matches = has_mismatched_cache_type, - .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU, + .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, .cpu_enable = cpu_enable_trap_ctr_access, }, #ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003 @@ -538,7 +538,7 @@ const struct arm64_cpu_capabilities arm6 { .desc = "Qualcomm Technologies Kryo erratum 1003", .capability = ARM64_WORKAROUND_QCOM_FALKOR_E1003, - .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU, + .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, .midr_model = MIDR_QCOM_KRYO, .matches = is_kryo_midr, }, @@ -613,7 +613,7 @@ const struct arm64_cpu_capabilities arm6 #ifdef CONFIG_ARM64_SSBD { .desc = "Speculative Store Bypass Disable", - .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU, + .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, .capability = ARM64_SSBD, .matches = has_ssbd_mitigation, }, --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -924,7 +924,7 @@ static const struct arm64_cpu_capabiliti { .desc = "GIC system register CPU interface", .capability = ARM64_HAS_SYSREG_GIC_CPUIF, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = has_useable_gicv3_cpuif, .sys_reg = SYS_ID_AA64PFR0_EL1, .field_pos = ID_AA64PFR0_GIC_SHIFT, @@ -935,7 +935,7 @@ static const struct arm64_cpu_capabiliti { .desc = "Privileged Access Never", .capability = ARM64_HAS_PAN, - .type = 
ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = has_cpuid_feature, .sys_reg = SYS_ID_AA64MMFR1_EL1, .field_pos = ID_AA64MMFR1_PAN_SHIFT, @@ -948,7 +948,7 @@ static const struct arm64_cpu_capabiliti { .desc = "LSE atomic instructions", .capability = ARM64_HAS_LSE_ATOMICS, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = has_cpuid_feature, .sys_reg = SYS_ID_AA64ISAR0_EL1, .field_pos = ID_AA64ISAR0_ATOMICS_SHIFT, @@ -959,14 +959,14 @@ static const struct arm64_cpu_capabiliti { .desc = "Software prefetching using PRFM", .capability = ARM64_HAS_NO_HW_PREFETCH, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = has_no_hw_prefetch, }, #ifdef CONFIG_ARM64_UAO { .desc = "User Access Override", .capability = ARM64_HAS_UAO, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = has_cpuid_feature, .sys_reg = SYS_ID_AA64MMFR2_EL1, .field_pos = ID_AA64MMFR2_UAO_SHIFT, @@ -980,21 +980,21 @@ static const struct arm64_cpu_capabiliti #ifdef CONFIG_ARM64_PAN { .capability = ARM64_ALT_PAN_NOT_UAO, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = cpufeature_pan_not_uao, }, #endif /* CONFIG_ARM64_PAN */ { .desc = "Virtualization Host Extensions", .capability = ARM64_HAS_VIRT_HOST_EXTN, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = runs_at_el2, .cpu_enable = cpu_copy_el2regs, }, { .desc = "32-bit EL0 Support", .capability = ARM64_HAS_32BIT_EL0, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = has_cpuid_feature, .sys_reg = SYS_ID_AA64PFR0_EL1, .sign = FTR_UNSIGNED, @@ -1004,14 +1004,14 @@ static const struct arm64_cpu_capabiliti { .desc = "Reduced HYP mapping offset", .capability = ARM64_HYP_OFFSET_LOW, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = hyp_offset_low, }, #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 { .desc = "Kernel page table isolation (KPTI)", .capability = ARM64_UNMAP_KERNEL_AT_EL0, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = unmap_kernel_at_el0, .cpu_enable = kpti_install_ng_mappings, }, @@ -1019,7 +1019,7 @@ static const struct arm64_cpu_capabiliti { /* FP/SIMD is not implemented */ .capability = ARM64_HAS_NO_FPSIMD, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .min_field_value = 0, .matches = has_no_fpsimd, }, @@ -1027,7 +1027,7 @@ static const struct arm64_cpu_capabiliti { .desc = "Data cache clean to Point of Persistence", .capability = ARM64_HAS_DCPOP, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = has_cpuid_feature, .sys_reg = SYS_ID_AA64ISAR1_EL1, .field_pos = ID_AA64ISAR1_DPB_SHIFT, @@ -1040,7 +1040,7 @@ static const struct arm64_cpu_capabiliti #define HWCAP_CAP(reg, field, s, min_value, cap_type, cap) \ { \ .desc = #cap, \ - .type = ARM64_CPUCAP_SCOPE_SYSTEM, \ + .type = ARM64_CPUCAP_SYSTEM_FEATURE, \ .matches = has_cpuid_feature, \ .sys_reg = reg, \ .field_pos = field, \ From patchwork Sun Oct 27 21:00:28 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 177845 Delivered-To: patch@linaro.org Received: by 2002:a92:409a:0:0:0:0:0 with SMTP id d26csp2532101ill; Sun, 27 Oct 2019 14:09:53 -0700 (PDT) X-Google-Smtp-Source: APXvYqzpzbt2l9qJJc2QgnO9xyUhAUcxZKPYSj3P2mzx8iLVJLSA7f8rfgA8GJjYAzQFK5Z5n86I X-Received: 
by 2002:a50:9a46:: with SMTP id o64mr16354987edb.191.1572210593866; Sun, 27 Oct 2019 14:09:53 -0700 (PDT) From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Dave Martin , Suzuki K Poulose , Will Deacon , Ard Biesheuvel Subject: [PATCH 4.14 051/119] arm64: capabilities: Unify the verification Date: Sun, 27 Oct 2019 22:00:28 +0100 Message-Id: <20191027203321.576169722@linuxfoundation.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20191027203259.948006506@linuxfoundation.org> References:
<20191027203259.948006506@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Sender: stable-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Suzuki K Poulose [ Upstream commit eaac4d83daa50fc1b9b7850346e9a62adfd4647e ] Now that each capability describes how to treat the conflicts of CPU cap state vs System wide cap state, we can unify the verification logic to a single place. Reviewed-by: Dave Martin Signed-off-by: Suzuki K Poulose Signed-off-by: Will Deacon Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman --- arch/arm64/kernel/cpufeature.c | 91 ++++++++++++++++++++++++++--------------- 1 file changed, 58 insertions(+), 33 deletions(-) --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1229,6 +1229,58 @@ static inline void set_sys_caps_initiali } /* + * Run through the list of capabilities to check for conflicts. + * If the system has already detected a capability, take necessary + * action on this CPU. + * + * Returns "false" on conflicts. + */ +static bool +__verify_local_cpu_caps(const struct arm64_cpu_capabilities *caps_list) +{ + bool cpu_has_cap, system_has_cap; + const struct arm64_cpu_capabilities *caps; + + for (caps = caps_list; caps->matches; caps++) { + cpu_has_cap = __this_cpu_has_cap(caps_list, caps->capability); + system_has_cap = cpus_have_cap(caps->capability); + + if (system_has_cap) { + /* + * Check if the new CPU misses an advertised feature, + * which is not safe to miss. + */ + if (!cpu_has_cap && !cpucap_late_cpu_optional(caps)) + break; + /* + * We have to issue cpu_enable() irrespective of + * whether the CPU has it or not, as it is enabeld + * system wide. It is upto the call back to take + * appropriate action on this CPU. + */ + if (caps->cpu_enable) + caps->cpu_enable(caps); + } else { + /* + * Check if the CPU has this capability if it isn't + * safe to have when the system doesn't. + */ + if (cpu_has_cap && !cpucap_late_cpu_permitted(caps)) + break; + } + } + + if (caps->matches) { + pr_crit("CPU%d: Detected conflict for capability %d (%s), System: %d, CPU: %d\n", + smp_processor_id(), caps->capability, + caps->desc, system_has_cap, cpu_has_cap); + return false; + } + + return true; +} + +/* * Check for CPU features that are used in early boot * based on the Boot CPU value. */ @@ -1250,25 +1302,10 @@ verify_local_elf_hwcaps(const struct arm } } -static void -verify_local_cpu_features(const struct arm64_cpu_capabilities *caps_list) +static void verify_local_cpu_features(void) { - const struct arm64_cpu_capabilities *caps = caps_list; - for (; caps->matches; caps++) { - if (!cpus_have_cap(caps->capability)) - continue; - /* - * If the new CPU misses an advertised feature, we cannot proceed - * further, park the cpu. 
- */ - if (!__this_cpu_has_cap(caps_list, caps->capability)) { - pr_crit("CPU%d: missing feature: %s\n", - smp_processor_id(), caps->desc); - cpu_die_early(); - } - if (caps->cpu_enable) - caps->cpu_enable(caps); - } + if (!__verify_local_cpu_caps(arm64_features)) + cpu_die_early(); } /* @@ -1278,20 +1315,8 @@ verify_local_cpu_features(const struct a */ static void verify_local_cpu_errata_workarounds(void) { - const struct arm64_cpu_capabilities *caps = arm64_errata; - - for (; caps->matches; caps++) { - if (cpus_have_cap(caps->capability)) { - if (caps->cpu_enable) - caps->cpu_enable(caps); - } else if (caps->matches(caps, SCOPE_LOCAL_CPU)) { - pr_crit("CPU%d: Requires work around for %s, not detected" - " at boot time\n", - smp_processor_id(), - caps->desc ? : "an erratum"); - cpu_die_early(); - } - } + if (!__verify_local_cpu_caps(arm64_errata)) + cpu_die_early(); } static void update_cpu_errata_workarounds(void) @@ -1315,7 +1340,7 @@ static void __init enable_errata_workaro static void verify_local_cpu_capabilities(void) { verify_local_cpu_errata_workarounds(); - verify_local_cpu_features(arm64_features); + verify_local_cpu_features(); verify_local_elf_hwcaps(arm64_elf_hwcaps); if (system_supports_32bit_el0()) verify_local_elf_hwcaps(compat_elf_hwcaps); From patchwork Sun Oct 27 21:00:37 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 177839 Delivered-To: patch@linaro.org Received: by 2002:a92:409a:0:0:0:0:0 with SMTP id d26csp2531819ill; Sun, 27 Oct 2019 14:09:31 -0700 (PDT) X-Google-Smtp-Source: APXvYqy5V3nM1iGZh1tFNWycZqWfgfMeJh9tv5aXALyR0Lq5dyEI6yyT/8ZrczU4IOneOHXnWL8j X-Received: by 2002:a05:6402:b06:: with SMTP id bm6mr16240678edb.160.1572210571723; Sun, 27 Oct 2019 14:09:31 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1572210571; cv=none; d=google.com; s=arc-20160816; b=TS5rMqnBszBQPFu2S6AZbbLLSpPWrNWOou9ZG7zaST0boTWYj/h79l1HNwS5pAP/+e IdOFdhYTkCHmR/Ui1COXh64JrJfDe488K8Q5fgvCz6dHlzy/RZkwFI/IsX/OV+Y6UoQL shyzvFVN7roNjJ//ESRx8zCJtUJ3mij/I/GdfwqGs3OL+XdxkQQjSPq82wgPAbvPA2N8 Xq0+LKCMVwc6vtP59LdvrMpwu9j7p5a/4Q1ct2mSTPtaR2+m1jWHxIXjO1LnFc1Cxri+ KQYKiQWp3/lHTTCzChDOSNlw3M9k3SY3UzVk4sRVn8c6/QzgxWjqSLPkXe3cp9YetQhE 9Wbg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=HJGK2HzgrHqiA62+65RI9k8SZD/RRIabXwezHf0r0Ok=; b=v+YyV/UcHsBKnp5KfQ4Ha/g/veY21WsPMQBkJG858/DF3oKWm7gSHazmLHG5o9qxhe tUgRAZx0KOoPwNkzHWh3rEH4Nx9n1HMF6xo7WcjHRtJjiotbJAmVDgCXoIaKYPDR0pyp FKAA53GJjMUoxvmLYySFUMBdbNAx+w0pK/vBPXkrcpU93cqG6nap7rKkQWvrL0KpAX0r 3pM7KaWuTJDVo4+1OSUFQYvsA7gNibTfwA74golbPs0S2RJHSkO/47WVDUWr29t34Gin 6YM/PBNianqdCQo7HIMtiJ+byT3bF57VE2B0KIH//cVhRuqHWXNdlhqY5weugbEkZkrd 4Apw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=f1LYp7Nx; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id g33si5955070eda.117.2019.10.27.14.09.31; Sun, 27 Oct 2019 14:09:31 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=f1LYp7Nx; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729504AbfJ0VJ2 (ORCPT + 14 others); Sun, 27 Oct 2019 17:09:28 -0400 Received: from mail.kernel.org ([198.145.29.99]:55808 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728255AbfJ0VJ0 (ORCPT ); Sun, 27 Oct 2019 17:09:26 -0400 Received: from localhost (100.50.158.77.rev.sfr.net [77.158.50.100]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id DC66020873; Sun, 27 Oct 2019 21:09:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1572210565; bh=5rGOHxdHV9ZIQXqg1PPXAB+NumLzZ2ZkyPS4HKwri8I=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=f1LYp7NxR6NMvt4fuPzr187wJcPGBiHIDOkWMDKBK/fzYLD2MgyqTwNU+m+XU3wTv GFUPSjkPM3l5/rb79y3ZBuawin7V07ORsF8ptRLYyY/VSCWZ16mtmTZG1mN6BeiXeS sbPJVCx8lnb2is0liENyfu07xvf8vZ9/X05JaUrw= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Marc Zyngier , Will Deacon , Dave Martin , Suzuki K Poulose , Ard Biesheuvel Subject: [PATCH 4.14 060/119] arm64: capabilities: Change scope of VHE to Boot CPU feature Date: Sun, 27 Oct 2019 22:00:37 +0100 Message-Id: <20191027203326.326345015@linuxfoundation.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20191027203259.948006506@linuxfoundation.org> References: <20191027203259.948006506@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Sender: stable-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Suzuki K Poulose [ Upstream commit 830dcc9f9a7cd26a812522a26efaacf7df6fc365 ] We expect all CPUs to be running at the same EL inside the kernel with or without VHE enabled and we have strict checks to ensure that any mismatch triggers a kernel panic. If VHE is enabled, we use the feature based on the boot CPU and all other CPUs should follow. This makes it a perfect candidate for a capability based on the boot CPU, which should be matched by all the CPUs (both when is ON and OFF). This saves us some not-so-pretty hooks and special code, just for verifying the conflict. The patch also makes the VHE capability entry depend on CONFIG_ARM64_VHE. 
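[Illustration, not part of the patch: the strict boot-CPU capability rule that replaces the old verify_cpu_run_el() hook can be summed up in a minimal sketch. The names below are made up for the example and are not kernel symbols.]

/*
 * Illustrative sketch only: a strict boot-CPU capability tolerates no
 * mismatch in either direction, so a secondary that disagrees with the
 * boot CPU (here, on whether the kernel runs at EL2) is a conflict and
 * would be parked early.
 */
static bool strict_boot_cap_ok(bool boot_cpu_state, bool this_cpu_state)
{
	/* ON->OFF and OFF->ON are both conflicts for a strict boot capability. */
	return boot_cpu_state == this_cpu_state;
}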
Cc: Marc Zyngier Cc: Will Deacon Reviewed-by: Dave Martin Signed-off-by: Suzuki K Poulose Signed-off-by: Will Deacon Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman --- arch/arm64/include/asm/cpufeature.h | 6 +++++ arch/arm64/include/asm/virt.h | 6 ----- arch/arm64/kernel/cpufeature.c | 5 ++-- arch/arm64/kernel/smp.c | 38 ------------------------------------ 4 files changed, 9 insertions(+), 46 deletions(-) --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -283,6 +283,12 @@ extern struct arm64_ftr_reg arm64_ftr_re (ARM64_CPUCAP_SCOPE_LOCAL_CPU | \ ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU) +/* + * CPU feature used early in the boot based on the boot CPU. All secondary + * CPUs must match the state of the capability as detected by the boot CPU. + */ +#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU + struct arm64_cpu_capabilities { const char *desc; u16 capability; --- a/arch/arm64/include/asm/virt.h +++ b/arch/arm64/include/asm/virt.h @@ -102,12 +102,6 @@ static inline bool has_vhe(void) return false; } -#ifdef CONFIG_ARM64_VHE -extern void verify_cpu_run_el(void); -#else -static inline void verify_cpu_run_el(void) {} -#endif - #endif /* __ASSEMBLY__ */ #endif /* ! __ASM__VIRT_H */ --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -982,13 +982,15 @@ static const struct arm64_cpu_capabiliti .matches = cpufeature_pan_not_uao, }, #endif /* CONFIG_ARM64_PAN */ +#ifdef CONFIG_ARM64_VHE { .desc = "Virtualization Host Extensions", .capability = ARM64_HAS_VIRT_HOST_EXTN, - .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE, .matches = runs_at_el2, .cpu_enable = cpu_copy_el2regs, }, +#endif /* CONFIG_ARM64_VHE */ { .desc = "32-bit EL0 Support", .capability = ARM64_HAS_32BIT_EL0, @@ -1332,7 +1334,6 @@ static bool verify_local_cpu_caps(u16 sc */ static void check_early_cpu_features(void) { - verify_cpu_run_el(); verify_cpu_asid_bits(); /* * Early features are used by the kernel already. If there --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -83,43 +83,6 @@ enum ipi_msg_type { IPI_WAKEUP }; -#ifdef CONFIG_ARM64_VHE - -/* Whether the boot CPU is running in HYP mode or not*/ -static bool boot_cpu_hyp_mode; - -static inline void save_boot_cpu_run_el(void) -{ - boot_cpu_hyp_mode = is_kernel_in_hyp_mode(); -} - -static inline bool is_boot_cpu_in_hyp_mode(void) -{ - return boot_cpu_hyp_mode; -} - -/* - * Verify that a secondary CPU is running the kernel at the same - * EL as that of the boot CPU. - */ -void verify_cpu_run_el(void) -{ - bool in_el2 = is_kernel_in_hyp_mode(); - bool boot_cpu_el2 = is_boot_cpu_in_hyp_mode(); - - if (in_el2 ^ boot_cpu_el2) { - pr_crit("CPU%d: mismatched Exception Level(EL%d) with boot CPU(EL%d)\n", - smp_processor_id(), - in_el2 ? 2 : 1, - boot_cpu_el2 ? 
2 : 1); - cpu_panic_kernel(); - } -} - -#else -static inline void save_boot_cpu_run_el(void) {} -#endif - #ifdef CONFIG_HOTPLUG_CPU static int op_cpu_kill(unsigned int cpu); #else @@ -448,7 +411,6 @@ void __init smp_prepare_boot_cpu(void) */ jump_label_init(); cpuinfo_store_boot_cpu(); - save_boot_cpu_run_el(); } static u64 __init of_get_cpu_mpidr(struct device_node *dn) From patchwork Sun Oct 27 21:00:40 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 177841 Delivered-To: patch@linaro.org Received: by 2002:a92:409a:0:0:0:0:0 with SMTP id d26csp2531897ill; Sun, 27 Oct 2019 14:09:37 -0700 (PDT) X-Google-Smtp-Source: APXvYqw5Pt6xUrDur5ip7hi+6dnHe+a+rEoD1RPJTZXRahSPXuk1AgaIVemdb/Wi79grQw0KFscT X-Received: by 2002:a17:906:5919:: with SMTP id h25mr2942470ejq.222.1572210577233; Sun, 27 Oct 2019 14:09:37 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1572210577; cv=none; d=google.com; s=arc-20160816; b=i8LZsZbXYKNYSYl7XxrbLjqXPWFYfBRrQG+pFxFtIIJzFJj73pQcL4un59j/SNm47m 4+mzFdNzVDwhBBGu5KXTwISMstetM7EokhDDsiSfolpx82wG9PgggLJAvpKCmRXYKd05 1kuPLbvhwEKOEDqJahLHMX4nxzj4b/yrETAVxzv3ncB0GN/OdGUeFzkmzk1x7bQwg94H 0QHHjU7X4xU4urFr0+mv+pZEVEaJeNFvnZTK85GA6uCPMA+6S7RORYrlWIjMKlVpBWcZ EEE7Ih8cPYynPq8xhoM//Xo/FrZy+P3fcJSZvO3DHpy7587zaN7yPML09Tfmu+f+vJLv gWaQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=pro8sTKNLytY7NEoTEgxaOxrqdZaXG7mUmh9ZRr/M7k=; b=rl+HetN3Smbfy3l7hdcDVQBi5efoDhFT65PItxMTeH7Xp5nzSXAsciyLYhOsUYhnTl yc2hVOkr49vMgmf7XsRJsIAnnSwpLqrCF6wr0zGTabEG0/GtYr7SFpvsnfsqUWupLO6g j83reXknSQrkvFg/RxhENtNIIkm+yJTvRV1xR78Zs7nOuS7MOfF0dzErV3zC/8WfJByn H+x9FWuYd0ooFF86ukWkBMhIJSeePWlRJcnHXqhqCGmfeP3VZ0wQL8VJnK4s9oM9Rh6l T5ITZkbubSgmnklZqhFjbxzt7J5kM6fMdyCizut5aQFEP5UHanqcmCozmy360gXM1WCY evYA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=PTNxSSSu; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id e8si2439900edk.444.2019.10.27.14.09.37; Sun, 27 Oct 2019 14:09:37 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=PTNxSSSu; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729519AbfJ0VJe (ORCPT + 14 others); Sun, 27 Oct 2019 17:09:34 -0400 Received: from mail.kernel.org ([198.145.29.99]:55952 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728976AbfJ0VJe (ORCPT ); Sun, 27 Oct 2019 17:09:34 -0400 Received: from localhost (100.50.158.77.rev.sfr.net [77.158.50.100]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 1F16020873; Sun, 27 Oct 2019 21:09:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1572210573; bh=guMc78V86eIqIsGt3antPGPaD57jfmvrq6eLIvdlquc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=PTNxSSSuH+Bci+VndZ+4agnxHo8q/VwCFWF6KRtJvL4YmDMc3mu+FR82ebYdGdWS8 cYFAYQIKS0kFskuqnPExQTmkUX40AI6dD3LBL/SrDrBKZL5vDAHhandeQq1xWEZohO TeDjKxlspaGPVydni9QzM0h7OJuaH9HT99Pxh+gg= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Mark Rutland , Dave Martin , Suzuki K Poulose , Will Deacon , Ard Biesheuvel Subject: [PATCH 4.14 063/119] arm64: Add MIDR encoding for Arm Cortex-A55 and Cortex-A35 Date: Sun, 27 Oct 2019 22:00:40 +0100 Message-Id: <20191027203328.879416494@linuxfoundation.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20191027203259.948006506@linuxfoundation.org> References: <20191027203259.948006506@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Sender: stable-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Suzuki K Poulose [ Upstream commit 6e616864f21160d8d503523b60a53a29cecc6f24 ] Update the MIDR encodings for the Cortex-A55 and Cortex-A35 Cc: Mark Rutland Reviewed-by: Dave Martin Signed-off-by: Suzuki K Poulose Signed-off-by: Will Deacon Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman --- arch/arm64/include/asm/cputype.h | 4 ++++ 1 file changed, 4 insertions(+) --- a/arch/arm64/include/asm/cputype.h +++ b/arch/arm64/include/asm/cputype.h @@ -85,6 +85,8 @@ #define ARM_CPU_PART_CORTEX_A53 0xD03 #define ARM_CPU_PART_CORTEX_A73 0xD09 #define ARM_CPU_PART_CORTEX_A75 0xD0A +#define ARM_CPU_PART_CORTEX_A35 0xD04 +#define ARM_CPU_PART_CORTEX_A55 0xD05 #define APM_CPU_PART_POTENZA 0x000 @@ -108,6 +110,8 @@ #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72) #define MIDR_CORTEX_A73 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A73) #define MIDR_CORTEX_A75 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A75) +#define MIDR_CORTEX_A35 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A35) +#define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55) #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX) #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX) #define 
MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX) From patchwork Sun Oct 27 21:00:44 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 177843 Delivered-To: patch@linaro.org Received: by 2002:a92:409a:0:0:0:0:0 with SMTP id d26csp2532034ill; Sun, 27 Oct 2019 14:09:47 -0700 (PDT) X-Google-Smtp-Source: APXvYqyBehV5hI8+XcadRFI+ZCVhUWDgrN+bk1F92adCBccXAPZNyCTFuQQ23t5xvswA4m4M92Q9 X-Received: by 2002:a17:906:1a13:: with SMTP id i19mr12055706ejf.46.1572210587391; Sun, 27 Oct 2019 14:09:47 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1572210587; cv=none; d=google.com; s=arc-20160816; b=mBQuPXLaRl/uen6qYbjwreN2GReEHiRfWgXeTVBURysZTeqcWuK/Plk18MX5g43NWj 2CSn8auIc9yoJOukqAWd7IIoel+92Vy79kXAJxp+AWxSXWPdR0UF5D2YWVA3QFiZCqMb EVLkQiko73DmHhzZYePuRrFvxVfIGcnlX5eK/KscJ0oWN0khhsKHedVde7UAh6s15ZMc jLXfTZYvUJpStBFSCiKktM6mTVkKACzrO43octjhNH9vQHPf21RQKhmcW6+B+L89gF04 p7JgzO6TjDH12BkZOtAGXIcNOKewrHq03TKkkyfNT3jqHC/cp2G23z60dIToegkGUkNr RHeQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=VDC3ParIOojNVFPRxO7OWnhPJUFTynDyspWNzqFhelo=; b=Q7oWmJMPfygICq7hayxk0dpiRqc/Pp3U/Ue0cJkS4E74qE3ogsAhf2MP34eweeZizf qavKcWxAw9c4T2FD5tpwDBbov8u6bAfapOfMB/FWeN4M/jZtVvVT7lzCm/b4H7GSV4+v uxknPIoUOM1YQSbCEc2vU15TfCJFvs6+rdPG6Oq235eaCFbyEtQXOZlptwRSJDRmNv/g Km5YXDFmlx9pfgj/mhODIbgHolo7D2RLKDgG8Ve+scCwVpQp4GcBcbzMBJ4pNip/Gu7a KXOF5BZE2bW5m1r+z0MtnQy+L9s3oIp3o8EW9fauu+vraKxcOd+91WezbqiaWyOooWga 5PtA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=GvygN+YW; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id z7si6146903edi.318.2019.10.27.14.09.47; Sun, 27 Oct 2019 14:09:47 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=GvygN+YW; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726944AbfJ0VJp (ORCPT + 14 others); Sun, 27 Oct 2019 17:09:45 -0400 Received: from mail.kernel.org ([198.145.29.99]:56146 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729551AbfJ0VJp (ORCPT ); Sun, 27 Oct 2019 17:09:45 -0400 Received: from localhost (100.50.158.77.rev.sfr.net [77.158.50.100]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id E151F2064A; Sun, 27 Oct 2019 21:09:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1572210584; bh=uoJolbFr7OvIbu2blQ9jg1khCWXqzO03hk38WiCuQxQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=GvygN+YW3rF832jJj2MHTR56ErBxPAQfHUsuBrPcbKpUhZz5uXSrH4Si1hcr4Fs2Y LD6p8Nen5BOTPZ0jB82voES6imX/CSzj1MBvIgy9Sje5DtcKA4xZ7xPLMk0XIZS0O4 lmMwoyz4rqdeFMC7tSWSJ9lYVlGFIhXybjuGxywo= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Marc Zyngier , Will Deacon , Ard Biesheuvel Subject: [PATCH 4.14 067/119] arm64: Get rid of __smccc_workaround_1_hvc_* Date: Sun, 27 Oct 2019 22:00:44 +0100 Message-Id: <20191027203333.787221281@linuxfoundation.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20191027203259.948006506@linuxfoundation.org> References: <20191027203259.948006506@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Sender: stable-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Marc Zyngier [ Upstream commit 22765f30dbaf1118c6ff0fcb8b99c9f2b4d396d5 ] The very existence of __smccc_workaround_1_hvc_* is a thinko, as KVM will never use a HVC call to perform the branch prediction invalidation. Even as a nested hypervisor, it would use an SMC instruction. Let's get rid of it. 
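[Illustration, not part of the patch: guests using the HVC conduit still reach their host through the SMCCC 1.1 helper, which is why only the SMC-flavoured vector sequence needs to remain in bpi.S. A minimal sketch, assuming <linux/arm-smccc.h> as used elsewhere in cpu_errata.c:]

#include <linux/arm-smccc.h>

static void guest_invalidate_branch_predictor(void)
{
	/* ARM_SMCCC_ARCH_WORKAROUND_1 takes no arguments and needs no result. */
	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
}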
Signed-off-by: Marc Zyngier Signed-off-by: Will Deacon Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman --- arch/arm64/kernel/bpi.S | 12 ++---------- arch/arm64/kernel/cpu_errata.c | 9 +++------ 2 files changed, 5 insertions(+), 16 deletions(-) --- a/arch/arm64/kernel/bpi.S +++ b/arch/arm64/kernel/bpi.S @@ -56,21 +56,13 @@ ENTRY(__bp_harden_hyp_vecs_start) ENTRY(__bp_harden_hyp_vecs_end) -.macro smccc_workaround_1 inst +ENTRY(__smccc_workaround_1_smc_start) sub sp, sp, #(8 * 4) stp x2, x3, [sp, #(8 * 0)] stp x0, x1, [sp, #(8 * 2)] mov w0, #ARM_SMCCC_ARCH_WORKAROUND_1 - \inst #0 + smc #0 ldp x2, x3, [sp, #(8 * 0)] ldp x0, x1, [sp, #(8 * 2)] add sp, sp, #(8 * 4) -.endm - -ENTRY(__smccc_workaround_1_smc_start) - smccc_workaround_1 smc ENTRY(__smccc_workaround_1_smc_end) - -ENTRY(__smccc_workaround_1_hvc_start) - smccc_workaround_1 hvc -ENTRY(__smccc_workaround_1_hvc_end) --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -85,8 +85,6 @@ DEFINE_PER_CPU_READ_MOSTLY(struct bp_har #ifdef CONFIG_KVM extern char __smccc_workaround_1_smc_start[]; extern char __smccc_workaround_1_smc_end[]; -extern char __smccc_workaround_1_hvc_start[]; -extern char __smccc_workaround_1_hvc_end[]; static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start, const char *hyp_vecs_end) @@ -131,8 +129,6 @@ static void __install_bp_hardening_cb(bp #else #define __smccc_workaround_1_smc_start NULL #define __smccc_workaround_1_smc_end NULL -#define __smccc_workaround_1_hvc_start NULL -#define __smccc_workaround_1_hvc_end NULL static void __install_bp_hardening_cb(bp_hardening_cb_t fn, const char *hyp_vecs_start, @@ -206,8 +202,9 @@ enable_smccc_arch_workaround_1(const str if ((int)res.a0 < 0) return; cb = call_hvc_arch_workaround_1; - smccc_start = __smccc_workaround_1_hvc_start; - smccc_end = __smccc_workaround_1_hvc_end; + /* This is a guest, no need to patch KVM vectors */ + smccc_start = NULL; + smccc_end = NULL; break; case PSCI_CONDUIT_SMC: From patchwork Sun Oct 27 21:00:45 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 177844 Delivered-To: patch@linaro.org Received: by 2002:a92:409a:0:0:0:0:0 with SMTP id d26csp2532065ill; Sun, 27 Oct 2019 14:09:50 -0700 (PDT) X-Google-Smtp-Source: APXvYqzwX/vQzcUOyX0yTePug58iNP0vSbcuZCyijtKiSvJSHY/thQI8JFVA5GpRWosV0tFZuBY/ X-Received: by 2002:a17:906:5007:: with SMTP id s7mr13775439ejj.181.1572210590423; Sun, 27 Oct 2019 14:09:50 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1572210590; cv=none; d=google.com; s=arc-20160816; b=DXjVnTRSe+siv2q1yayAo+GXccBsEuLQ5U0z14aSF9uubCH4W79QHFijpr0KwTVMPx OmtQqJTTuYtt72OfTExkOqx2q60L1gVuhfay+PUHOFAowgY2yG9VwnKaE1F/EW7LGr7S vEE5bpS186ZsvPeGwg/IJ6JzPblBIEpkuDAhsr1F316Kj5yNpsa+UDNyQh6kCywVnAFl +UFr7hb9cZ87Ci/ylkmXWTsO90jOrmzJDdBt6pLyjwNkfHduwqz/R4SeThWol+W7SmtA rutsPJqOL4nNL6bwCqupva+OasMeSTfOINOcsJwrH4k80GhCnMESCAWxum2Y50MR0UtS 4tZw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=pkp1XO70GYIWUhgGNXkRoHuLz3Vx0dnFVA+6RCBTJZE=; b=u+64LfGTLcz3LGeBLYDpc8kx6CwEQq+TJAbjGGTASSSQKOsG1NqGtz+olw8IsT8mJF RPohpsmozf78R5CSckGDkSyN7PRpvhGSh1u3Is6M6UX9mHLpaZF5iphpNmuZbTIeC76z yd8FX5UXQR77g1+shHGWbtqTGDI23p4mFEfwyefGp5pwKTVf+PmVJBWDh6eGuupsjKr4 vV0zKIPZFTvcQXWi9up+OU1PEixzs1vPI9ru1CypQhwgq48uGVIFl+GVnXdShG3b8Km4 
qhWFPN8HGV7DvDUdmd0yQ5JEjranOIdV4a+uIhXopmsbA6EZIjW5wCXj/mjCnx08lnrE zt1A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b="owzv1C/p"; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id z7si6146903edi.318.2019.10.27.14.09.50; Sun, 27 Oct 2019 14:09:50 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b="owzv1C/p"; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729562AbfJ0VJs (ORCPT + 14 others); Sun, 27 Oct 2019 17:09:48 -0400 Received: from mail.kernel.org ([198.145.29.99]:56200 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729553AbfJ0VJs (ORCPT ); Sun, 27 Oct 2019 17:09:48 -0400 Received: from localhost (100.50.158.77.rev.sfr.net [77.158.50.100]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 900782064A; Sun, 27 Oct 2019 21:09:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1572210587; bh=5IYsMnwEVU7S0aJmWZ1jf2IkeC4mjTIg6SGyMtNRQpE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=owzv1C/pCyqHU6dhx76/IjY/vC8X8DFVlNyDmFbLsISexpM6ygILZ9PAbVYZPQjfZ 0GL5gMfbl2lHu1y0XqFCbFZt+/FFRcDvS9lpItlfXJ6d8zFv6RfhxScWMEbw0sssae Q6n7uFp5HiPR7s3cYuPh7rz1yJfchBDcYX2ButWE= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Suzuki K Poulose , Will Deacon , Catalin Marinas , Ard Biesheuvel Subject: [PATCH 4.14 068/119] arm64: cpufeature: Detect SSBS and advertise to userspace Date: Sun, 27 Oct 2019 22:00:45 +0100 Message-Id: <20191027203334.219935634@linuxfoundation.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20191027203259.948006506@linuxfoundation.org> References: <20191027203259.948006506@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Sender: stable-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Will Deacon [ Upstream commit d71be2b6c0e19180b5f80a6d42039cc074a693a2 ] Armv8.5 introduces a new PSTATE bit known as Speculative Store Bypass Safe (SSBS) which can be used as a mitigation against Spectre variant 4. Additionally, a CPU may provide instructions to manipulate PSTATE.SSBS directly, so that userspace can toggle the SSBS control without trapping to the kernel. This patch probes for the existence of SSBS and advertise the new instructions to userspace if they exist. 
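[Illustration, not part of the patch: once HWCAP_SSBS is advertised, a thread can toggle PSTATE.SSBS itself with the MSR-immediate form. A hedged userspace sketch, assuming glibc's getauxval() and a toolchain that recognises the "ssbs" PSTATE name (e.g. ARMv8.5-aware binutils); the HWCAP value mirrors the one added by this patch.]

#include <sys/auxv.h>

#ifndef HWCAP_SSBS
#define HWCAP_SSBS	(1 << 28)	/* value introduced by this patch */
#endif

static void allow_speculative_store_bypass(void)
{
	if (getauxval(AT_HWCAP) & HWCAP_SSBS)
		asm volatile("msr ssbs, #1" ::: "memory");	/* SSBS=1: bypass allowed */
}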
Reviewed-by: Suzuki K Poulose Signed-off-by: Will Deacon Signed-off-by: Catalin Marinas Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman --- arch/arm64/include/asm/cpucaps.h | 3 ++- arch/arm64/include/asm/sysreg.h | 16 ++++++++++++---- arch/arm64/include/uapi/asm/hwcap.h | 1 + arch/arm64/kernel/cpufeature.c | 19 +++++++++++++++++-- arch/arm64/kernel/cpuinfo.c | 1 + 5 files changed, 33 insertions(+), 7 deletions(-) --- a/arch/arm64/include/asm/cpucaps.h +++ b/arch/arm64/include/asm/cpucaps.h @@ -44,7 +44,8 @@ #define ARM64_HARDEN_BRANCH_PREDICTOR 24 #define ARM64_SSBD 25 #define ARM64_MISMATCHED_CACHE_TYPE 26 +#define ARM64_SSBS 27 -#define ARM64_NCAPS 27 +#define ARM64_NCAPS 28 #endif /* __ASM_CPUCAPS_H */ --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -297,6 +297,7 @@ #define SYS_ICH_LR15_EL2 __SYS__LR8_EL2(7) /* Common SCTLR_ELx flags. */ +#define SCTLR_ELx_DSSBS (1UL << 44) #define SCTLR_ELx_EE (1 << 25) #define SCTLR_ELx_WXN (1 << 19) #define SCTLR_ELx_I (1 << 12) @@ -316,7 +317,7 @@ (1 << 10) | (1 << 13) | (1 << 14) | (1 << 15) | \ (1 << 17) | (1 << 20) | (1 << 21) | (1 << 24) | \ (1 << 26) | (1 << 27) | (1 << 30) | (1 << 31) | \ - (0xffffffffUL << 32)) + (0xffffefffUL << 32)) #ifdef CONFIG_CPU_BIG_ENDIAN #define ENDIAN_SET_EL2 SCTLR_ELx_EE @@ -330,7 +331,7 @@ #define SCTLR_EL2_SET (ENDIAN_SET_EL2 | SCTLR_EL2_RES1) #define SCTLR_EL2_CLEAR (SCTLR_ELx_M | SCTLR_ELx_A | SCTLR_ELx_C | \ SCTLR_ELx_SA | SCTLR_ELx_I | SCTLR_ELx_WXN | \ - ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0) + SCTLR_ELx_DSSBS | ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0) #if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffffffffffff #error "Inconsistent SCTLR_EL2 set/clear bits" @@ -354,7 +355,7 @@ (1 << 29)) #define SCTLR_EL1_RES0 ((1 << 6) | (1 << 10) | (1 << 13) | (1 << 17) | \ (1 << 21) | (1 << 27) | (1 << 30) | (1 << 31) | \ - (0xffffffffUL << 32)) + (0xffffefffUL << 32)) #ifdef CONFIG_CPU_BIG_ENDIAN #define ENDIAN_SET_EL1 (SCTLR_EL1_E0E | SCTLR_ELx_EE) @@ -371,7 +372,7 @@ SCTLR_EL1_UCI | SCTLR_EL1_RES1) #define SCTLR_EL1_CLEAR (SCTLR_ELx_A | SCTLR_EL1_CP15BEN | SCTLR_EL1_ITD |\ SCTLR_EL1_UMA | SCTLR_ELx_WXN | ENDIAN_CLEAR_EL1 |\ - SCTLR_EL1_RES0) + SCTLR_ELx_DSSBS | SCTLR_EL1_RES0) #if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffffffffffff #error "Inconsistent SCTLR_EL1 set/clear bits" @@ -417,6 +418,13 @@ #define ID_AA64PFR0_EL0_64BIT_ONLY 0x1 #define ID_AA64PFR0_EL0_32BIT_64BIT 0x2 +/* id_aa64pfr1 */ +#define ID_AA64PFR1_SSBS_SHIFT 4 + +#define ID_AA64PFR1_SSBS_PSTATE_NI 0 +#define ID_AA64PFR1_SSBS_PSTATE_ONLY 1 +#define ID_AA64PFR1_SSBS_PSTATE_INSNS 2 + /* id_aa64mmfr0 */ #define ID_AA64MMFR0_TGRAN4_SHIFT 28 #define ID_AA64MMFR0_TGRAN64_SHIFT 24 --- a/arch/arm64/include/uapi/asm/hwcap.h +++ b/arch/arm64/include/uapi/asm/hwcap.h @@ -48,5 +48,6 @@ #define HWCAP_USCAT (1 << 25) #define HWCAP_ILRCPC (1 << 26) #define HWCAP_FLAGM (1 << 27) +#define HWCAP_SSBS (1 << 28) #endif /* _UAPI__ASM_HWCAP_H */ --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -145,6 +145,11 @@ static const struct arm64_ftr_bits ftr_i ARM64_FTR_END, }; +static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = { + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SSBS_SHIFT, 4, ID_AA64PFR1_SSBS_PSTATE_NI), + ARM64_FTR_END, +}; + static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = { S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI), S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI), @@ -345,7 +350,7 @@ static const struct __ftr_reg_entry { /* Op1 = 0, CRn = 0, CRm = 4 */ ARM64_FTR_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0), - ARM64_FTR_REG(SYS_ID_AA64PFR1_EL1, ftr_raz), + ARM64_FTR_REG(SYS_ID_AA64PFR1_EL1, ftr_id_aa64pfr1), /* Op1 = 0, CRn = 0, CRm = 5 */ ARM64_FTR_REG(SYS_ID_AA64DFR0_EL1, ftr_id_aa64dfr0), @@ -625,7 +630,6 @@ void update_cpu_features(int cpu, /* * EL3 is not our concern. - * ID_AA64PFR1 is currently RES0. */ taint |= check_update_ftr_reg(SYS_ID_AA64PFR0_EL1, cpu, info->reg_id_aa64pfr0, boot->reg_id_aa64pfr0); @@ -1045,6 +1049,16 @@ static const struct arm64_cpu_capabiliti .min_field_value = 1, }, #endif + { + .desc = "Speculative Store Bypassing Safe (SSBS)", + .capability = ARM64_SSBS, + .type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE, + .matches = has_cpuid_feature, + .sys_reg = SYS_ID_AA64PFR1_EL1, + .field_pos = ID_AA64PFR1_SSBS_SHIFT, + .sign = FTR_UNSIGNED, + .min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY, + }, {}, }; @@ -1087,6 +1101,7 @@ static const struct arm64_cpu_capabiliti HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_LRCPC), HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_ILRCPC), HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_USCAT), + HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, HWCAP_SSBS), {}, }; --- a/arch/arm64/kernel/cpuinfo.c +++ b/arch/arm64/kernel/cpuinfo.c @@ -80,6 +80,7 @@ static const char *const hwcap_str[] = { "uscat", "ilrcpc", "flagm", + "ssbs", NULL }; From patchwork Sun Oct 27 21:00:46 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 177846 Delivered-To: patch@linaro.org Received: by 2002:a92:409a:0:0:0:0:0 with SMTP id d26csp2532153ill; Sun, 27 Oct 2019 14:09:57 -0700 (PDT) X-Google-Smtp-Source: APXvYqziFMz6+B52lIB3PDENiJgt1nAKWeQCxb89xkPWKZBV+M+RWDMEixGcCyaeGVDuIg+OzieR X-Received: by 2002:aa7:cd01:: with SMTP id b1mr16153670edw.122.1572210596898; Sun, 27 Oct 2019 14:09:56 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1572210596; cv=none; d=google.com; s=arc-20160816; b=voGDo5V31rNVpPGP81K0+1MN9Qd4zdh+gRpUOQuREpuXvYIFdl1M7sNSnGp19WPSeQ oBK0llhYnhSMrpSuDSN3TX2wtKBBAX1PwesEfkNYMjY+7yckCu29ndUg+uNSp0zcZoz9 UkE8Nk/2Dlqtxo0PN9Vl+KK0jApwTby5MWD9ULpTKLRhherH19RtIizdYNoepXcBmfHe FHrrS+AUhc1m1X/WsU/WaGE8NNehuwuLBuZjrJZC+OyqMn3QlupiG0fquckCkFb3119g N0rVLj0tp/xR7xgknE/f1KlPvbydsxkqSB82gnuIKP3+6torJdY8hKz/b54Bq+78Y3DW iLlw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=cbH02QU1uwCA/wPAwm5jmm1CRP9cWszRvd+LANfa7DY=; b=mWfAaE3I2LhsiP25kVtI6Yn5/G6yvSyXXwlhWcKN47pKjXr2cRjHXEQyQIJFJg1OeD 0ABXVLkZbTVGOEB7D5xu7a1GH8nDqqUywbF/bPQfiuZIczb7e4lPSxcze18/EVyqioqw HvxoC9/M5kitsYPxxDUpNcKuxW84pAr63muApA+R/FpNWNnz7hbskE82HXXoFe56hujQ E+lH7bBpLccuW5HzGycDt4iKTha51AiV8OVPlCAmg0MqQzdaIHTU/zzAcHazy5MXILi0 /mSk5kgZQX8yk3ajRhgWjpc6b1pSa+wfJtRO71wCBaV5bvRDRUjRZ81Ml7sgbkzxaheH rptA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=DWnq3DVP; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted 
sender) smtp.mailfrom=stable-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id p5si5076390eja.141.2019.10.27.14.09.56; Sun, 27 Oct 2019 14:09:56 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=DWnq3DVP; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729582AbfJ0VJy (ORCPT + 14 others); Sun, 27 Oct 2019 17:09:54 -0400 Received: from mail.kernel.org ([198.145.29.99]:56304 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728381AbfJ0VJy (ORCPT ); Sun, 27 Oct 2019 17:09:54 -0400 Received: from localhost (100.50.158.77.rev.sfr.net [77.158.50.100]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 95B842064A; Sun, 27 Oct 2019 21:09:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1572210593; bh=exaceR17ZpyK+a9qmuDwq+GnWzptwDX/tjY8NgAgLjs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=DWnq3DVPx3m/Bw3ifHDIpfWMqdo+9tIEVMdFxnaAhvQ7HxsIzVRpUBcKwZz9byhXx BT8dxbIN7HE5PNcEvXsshNQp+2/VGzZk5aSF6EtsfWZraOQYyGoSWuiO24erxH9dX3 LHyoJxamRfVpnNH7ypmd/ewqoo0RBJk0sSwL+Jzw= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Will Deacon , Catalin Marinas , Ard Biesheuvel Subject: [PATCH 4.14 069/119] arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3 Date: Sun, 27 Oct 2019 22:00:46 +0100 Message-Id: <20191027203335.061613590@linuxfoundation.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20191027203259.948006506@linuxfoundation.org> References: <20191027203259.948006506@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Sender: stable-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Will Deacon [ Upstream commit 8f04e8e6e29c93421a95b61cad62e3918425eac7 ] On CPUs with support for PSTATE.SSBS, the kernel can toggle the SSBD state without needing to call into firmware. This patch hooks into the existing SSBD infrastructure so that SSBS is used on CPUs that support it, but it's all made horribly complicated by the very real possibility of big/little systems that don't uniformly provide the new capability. 
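[Illustration, not part of the patch: the per-task control stays behind the existing speculation prctl() interface that the diff below extends (ssbd_prctl_set). A minimal userspace sketch, assuming the PR_SPEC_* constants are visible via <sys/prctl.h>/<linux/prctl.h>:]

#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
	/* Ask the kernel to disable speculative store bypass for this task. */
	if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
		  PR_SPEC_DISABLE, 0, 0))
		perror("PR_SET_SPECULATION_CTRL");
	return 0;
}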
Signed-off-by: Will Deacon Signed-off-by: Catalin Marinas [ardb: add #include of asm/compat.h] Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman --- arch/arm64/include/asm/processor.h | 7 +++++ arch/arm64/include/asm/ptrace.h | 1 arch/arm64/include/asm/sysreg.h | 3 ++ arch/arm64/include/uapi/asm/ptrace.h | 1 arch/arm64/kernel/cpu_errata.c | 26 ++++++++++++++++++-- arch/arm64/kernel/cpufeature.c | 45 +++++++++++++++++++++++++++++++++++ arch/arm64/kernel/process.c | 4 +++ arch/arm64/kernel/ssbd.c | 22 +++++++++++++++++ 8 files changed, 107 insertions(+), 2 deletions(-) --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -153,6 +153,10 @@ static inline void start_thread(struct p { start_thread_common(regs, pc); regs->pstate = PSR_MODE_EL0t; + + if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE) + regs->pstate |= PSR_SSBS_BIT; + regs->sp = sp; } @@ -169,6 +173,9 @@ static inline void compat_start_thread(s regs->pstate |= COMPAT_PSR_E_BIT; #endif + if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE) + regs->pstate |= PSR_AA32_SSBS_BIT; + regs->compat_sp = sp; } #endif --- a/arch/arm64/include/asm/ptrace.h +++ b/arch/arm64/include/asm/ptrace.h @@ -50,6 +50,7 @@ #define PSR_AA32_I_BIT 0x00000080 #define PSR_AA32_A_BIT 0x00000100 #define PSR_AA32_E_BIT 0x00000200 +#define PSR_AA32_SSBS_BIT 0x00800000 #define PSR_AA32_DIT_BIT 0x01000000 #define PSR_AA32_Q_BIT 0x08000000 #define PSR_AA32_V_BIT 0x10000000 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -86,11 +86,14 @@ #define REG_PSTATE_PAN_IMM sys_reg(0, 0, 4, 0, 4) #define REG_PSTATE_UAO_IMM sys_reg(0, 0, 4, 0, 3) +#define REG_PSTATE_SSBS_IMM sys_reg(0, 3, 4, 0, 1) #define SET_PSTATE_PAN(x) __emit_inst(0xd5000000 | REG_PSTATE_PAN_IMM | \ (!!x)<<8 | 0x1f) #define SET_PSTATE_UAO(x) __emit_inst(0xd5000000 | REG_PSTATE_UAO_IMM | \ (!!x)<<8 | 0x1f) +#define SET_PSTATE_SSBS(x) __emit_inst(0xd5000000 | REG_PSTATE_SSBS_IMM | \ + (!!x)<<8 | 0x1f) #define SYS_DC_ISW sys_insn(1, 0, 7, 6, 2) #define SYS_DC_CSW sys_insn(1, 0, 7, 10, 2) --- a/arch/arm64/include/uapi/asm/ptrace.h +++ b/arch/arm64/include/uapi/asm/ptrace.h @@ -45,6 +45,7 @@ #define PSR_I_BIT 0x00000080 #define PSR_A_BIT 0x00000100 #define PSR_D_BIT 0x00000200 +#define PSR_SSBS_BIT 0x00001000 #define PSR_PAN_BIT 0x00400000 #define PSR_UAO_BIT 0x00800000 #define PSR_Q_BIT 0x08000000 --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -304,6 +304,14 @@ void __init arm64_enable_wa2_handling(st void arm64_set_ssbd_mitigation(bool state) { + if (this_cpu_has_cap(ARM64_SSBS)) { + if (state) + asm volatile(SET_PSTATE_SSBS(0)); + else + asm volatile(SET_PSTATE_SSBS(1)); + return; + } + switch (psci_ops.conduit) { case PSCI_CONDUIT_HVC: arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL); @@ -328,6 +336,11 @@ static bool has_ssbd_mitigation(const st WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible()); + if (this_cpu_has_cap(ARM64_SSBS)) { + required = false; + goto out_printmsg; + } + if (psci_ops.smccc_version == SMCCC_VERSION_1_0) { ssbd_state = ARM64_SSBD_UNKNOWN; return false; @@ -376,7 +389,6 @@ static bool has_ssbd_mitigation(const st switch (ssbd_state) { case ARM64_SSBD_FORCE_DISABLE: - pr_info_once("%s disabled from command-line\n", entry->desc); arm64_set_ssbd_mitigation(false); required = false; break; @@ -389,7 +401,6 @@ static bool has_ssbd_mitigation(const st break; case ARM64_SSBD_FORCE_ENABLE: - pr_info_once("%s forced from command-line\n", entry->desc); 
arm64_set_ssbd_mitigation(true); required = true; break; @@ -399,6 +410,17 @@ static bool has_ssbd_mitigation(const st break; } +out_printmsg: + switch (ssbd_state) { + case ARM64_SSBD_FORCE_DISABLE: + pr_info_once("%s disabled from command-line\n", entry->desc); + break; + + case ARM64_SSBD_FORCE_ENABLE: + pr_info_once("%s forced from command-line\n", entry->desc); + break; + } + return required; } #endif /* CONFIG_ARM64_SSBD */ --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -925,6 +925,48 @@ static void cpu_copy_el2regs(const struc write_sysreg(read_sysreg(tpidr_el1), tpidr_el2); } +#ifdef CONFIG_ARM64_SSBD +static int ssbs_emulation_handler(struct pt_regs *regs, u32 instr) +{ + if (user_mode(regs)) + return 1; + + if (instr & BIT(CRm_shift)) + regs->pstate |= PSR_SSBS_BIT; + else + regs->pstate &= ~PSR_SSBS_BIT; + + arm64_skip_faulting_instruction(regs, 4); + return 0; +} + +static struct undef_hook ssbs_emulation_hook = { + .instr_mask = ~(1U << CRm_shift), + .instr_val = 0xd500001f | REG_PSTATE_SSBS_IMM, + .fn = ssbs_emulation_handler, +}; + +static void cpu_enable_ssbs(const struct arm64_cpu_capabilities *__unused) +{ + static bool undef_hook_registered = false; + static DEFINE_SPINLOCK(hook_lock); + + spin_lock(&hook_lock); + if (!undef_hook_registered) { + register_undef_hook(&ssbs_emulation_hook); + undef_hook_registered = true; + } + spin_unlock(&hook_lock); + + if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE) { + sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_DSSBS); + arm64_set_ssbd_mitigation(false); + } else { + arm64_set_ssbd_mitigation(true); + } +} +#endif /* CONFIG_ARM64_SSBD */ + static const struct arm64_cpu_capabilities arm64_features[] = { { .desc = "GIC system register CPU interface", @@ -1049,6 +1091,7 @@ static const struct arm64_cpu_capabiliti .min_field_value = 1, }, #endif +#ifdef CONFIG_ARM64_SSBD { .desc = "Speculative Store Bypassing Safe (SSBS)", .capability = ARM64_SSBS, @@ -1058,7 +1101,9 @@ static const struct arm64_cpu_capabiliti .field_pos = ID_AA64PFR1_SSBS_SHIFT, .sign = FTR_UNSIGNED, .min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY, + .cpu_enable = cpu_enable_ssbs, }, +#endif {}, }; --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -296,6 +296,10 @@ int copy_thread(unsigned long clone_flag if (IS_ENABLED(CONFIG_ARM64_UAO) && cpus_have_const_cap(ARM64_HAS_UAO)) childregs->pstate |= PSR_UAO_BIT; + + if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE) + childregs->pstate |= PSR_SSBS_BIT; + p->thread.cpu_context.x19 = stack_start; p->thread.cpu_context.x20 = stk_sz; } --- a/arch/arm64/kernel/ssbd.c +++ b/arch/arm64/kernel/ssbd.c @@ -3,13 +3,32 @@ * Copyright (C) 2018 ARM Ltd, All Rights Reserved. */ +#include #include #include #include +#include #include +#include #include +static void ssbd_ssbs_enable(struct task_struct *task) +{ + u64 val = is_compat_thread(task_thread_info(task)) ? + PSR_AA32_SSBS_BIT : PSR_SSBS_BIT; + + task_pt_regs(task)->pstate |= val; +} + +static void ssbd_ssbs_disable(struct task_struct *task) +{ + u64 val = is_compat_thread(task_thread_info(task)) ? 
+ PSR_AA32_SSBS_BIT : PSR_SSBS_BIT; + + task_pt_regs(task)->pstate &= ~val; +} + /* * prctl interface for SSBD */ @@ -45,12 +64,14 @@ static int ssbd_prctl_set(struct task_st return -EPERM; task_clear_spec_ssb_disable(task); clear_tsk_thread_flag(task, TIF_SSBD); + ssbd_ssbs_enable(task); break; case PR_SPEC_DISABLE: if (state == ARM64_SSBD_FORCE_DISABLE) return -EPERM; task_set_spec_ssb_disable(task); set_tsk_thread_flag(task, TIF_SSBD); + ssbd_ssbs_disable(task); break; case PR_SPEC_FORCE_DISABLE: if (state == ARM64_SSBD_FORCE_DISABLE) @@ -58,6 +79,7 @@ static int ssbd_prctl_set(struct task_st task_set_spec_ssb_disable(task); task_set_spec_ssb_force_disable(task); set_tsk_thread_flag(task, TIF_SSBD); + ssbd_ssbs_disable(task); break; default: return -ERANGE; From patchwork Sun Oct 27 21:00:50 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 177849 Delivered-To: patch@linaro.org Received: by 2002:a92:409a:0:0:0:0:0 with SMTP id d26csp2532291ill; Sun, 27 Oct 2019 14:10:08 -0700 (PDT) X-Google-Smtp-Source: APXvYqwp0s9CnzJzRG9J2CMmx/eOMAShYQQOhWoQOz6zYlDpfmzX3j1cj2pWRhg+qKfAWm/ONCBJ X-Received: by 2002:aa7:d38e:: with SMTP id x14mr14169203edq.102.1572210607565; Sun, 27 Oct 2019 14:10:07 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1572210607; cv=none; d=google.com; s=arc-20160816; b=xoy0sLzQv7lTgt/TQ5rduhQWNK+oboSKVBK9s2a9I2rNpq3EPTreLMwULXPT1zr6RB t7VmUOHBwIUx4fQqR6sd/usInsLFGsG9NUBb2FaYgGzEJnCHYGqyWLVQB9tDyqR/HucZ vUpAeMxilFN67ZJ45Kj8v2JTlT/zmAhrYMbsAO52nWwBo4A58Go7kvEH0RqwOc0wuDZi U8k7Q5lNUDNEEpFYSgSFGWozuXsMxPPS9NDrB3K2iCN2YolD/FHct5eNT2XF2CiDErFR 0asgYQvjcU7lePHrzRAoO049ttgy2nhQfaao18CKgs2qpRCfIY/X+chTjtuRXacUsgDZ bC7A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=enErM8Zny1Vo6/BhhawBit6YifzGTnD4ykaVcL0H8u8=; b=NeTgQewBExgKFiizX61id0h6+qzaGgrrD19rnxMs9HDWKrmPaUhriqQs3YUHxfo0xW g6bqCak6We0a4WEpLY98M/oxH7SjXJu1bT1TNe37kUNuBY8MAoy6/hvHXjawbpKlP8G0 ugQD+idyc8O5mvwnHeG0R9CdXaHWWCyC9UiAYfoY+TWyaWNl/T9C/7XfxMu9dZ++4fXW yTzI6GPik9dSW7VV44pTcrOmJih7n+jXzLR+TxHsdpmqMm5dSJ3oGSdrpigdwEIEC6jP 5yVC7SsLF5LNI1jSj7YjM/MXAWipjJH5JycESBHA8tZDcUShkUCs/7n5+aXDm+qdJ/fY 8pSA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=ieDndb3K; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id g2si5556436edn.415.2019.10.27.14.10.07; Sun, 27 Oct 2019 14:10:07 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=ieDndb3K; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729627AbfJ0VKG (ORCPT + 14 others); Sun, 27 Oct 2019 17:10:06 -0400 Received: from mail.kernel.org ([198.145.29.99]:56492 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729606AbfJ0VKF (ORCPT ); Sun, 27 Oct 2019 17:10:05 -0400 Received: from localhost (100.50.158.77.rev.sfr.net [77.158.50.100]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 8E2732064A; Sun, 27 Oct 2019 21:10:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1572210605; bh=1oMNMrK0NmsSeFiB3o1cyKQqEZTPEu/Oc6jImObE9eI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ieDndb3KckUAVpViXEHWp9PwkPUutUmzoAf6wfSC/Gdr0p8Qg3+LIDL0wo8jzEKvp lwhc7ZU5qcc3Cwjpk8gRdKYywL+JpvXopYJKglCJW08Mgtv3vMy9sNCLfnto5fYIKU Ds0JS89nMiRhOQUhdsBZjspVYiTICtX8pe7OZh2Y= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Jeremy Linton , Suzuki K Poulose , Andre Przywara , Catalin Marinas , Stefan Wahren , Will Deacon , Ard Biesheuvel Subject: [PATCH 4.14 073/119] arm64: add sysfs vulnerability show for meltdown Date: Sun, 27 Oct 2019 22:00:50 +0100 Message-Id: <20191027203339.394733907@linuxfoundation.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20191027203259.948006506@linuxfoundation.org> References: <20191027203259.948006506@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Sender: stable-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Jeremy Linton [ Upstream commit 1b3ccf4be0e7be8c4bd8522066b6cbc92591e912 ] We implement page table isolation as a mitigation for meltdown. Report this to userspace via sysfs. 
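[Illustration, not part of the patch: a minimal userspace sketch of consuming the new entry. The path follows the generic cpu vulnerabilities directory that cpu_show_meltdown() hooks into, and the strings match what the handler in the diff below can emit.]

#include <stdio.h>

int main(void)
{
	char buf[64];
	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/meltdown", "r");

	/* Expected contents: "Not affected", "Mitigation: PTI" or "Vulnerable". */
	if (f && fgets(buf, sizeof(buf), f))
		printf("meltdown: %s", buf);
	if (f)
		fclose(f);
	return 0;
}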
Signed-off-by: Jeremy Linton Reviewed-by: Suzuki K Poulose Reviewed-by: Andre Przywara Reviewed-by: Catalin Marinas Tested-by: Stefan Wahren Signed-off-by: Will Deacon Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman --- arch/arm64/kernel/cpufeature.c | 58 +++++++++++++++++++++++++++++++---------- 1 file changed, 44 insertions(+), 14 deletions(-) --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -824,7 +824,7 @@ static bool has_no_fpsimd(const struct a ID_AA64PFR0_FP_SHIFT) < 0; } -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0 +static bool __meltdown_safe = true; static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry, @@ -842,6 +842,16 @@ static bool unmap_kernel_at_el0(const st MIDR_ALL_VERSIONS(MIDR_CORTEX_A73), }; char const *str = "command line option"; + bool meltdown_safe; + + meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list); + + /* Defer to CPU feature registers */ + if (has_cpuid_feature(entry, scope)) + meltdown_safe = true; + + if (!meltdown_safe) + __meltdown_safe = false; /* * For reasons that aren't entirely clear, enabling KPTI on Cavium @@ -853,6 +863,19 @@ static bool unmap_kernel_at_el0(const st __kpti_forced = -1; } + /* Useful for KASLR robustness */ + if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0) { + if (!__kpti_forced) { + str = "KASLR"; + __kpti_forced = 1; + } + } + + if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) { + pr_info_once("kernel page table isolation disabled by kernel configuration\n"); + return false; + } + /* Forced? */ if (__kpti_forced) { pr_info_once("kernel page table isolation forced %s by %s\n", @@ -860,18 +883,10 @@ static bool unmap_kernel_at_el0(const st return __kpti_forced > 0; } - /* Useful for KASLR robustness */ - if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) - return true; - - /* Don't force KPTI for CPUs that are not vulnerable */ - if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list)) - return false; - - /* Defer to CPU feature registers */ - return !has_cpuid_feature(entry, scope); + return !meltdown_safe; } +#ifdef CONFIG_UNMAP_KERNEL_AT_EL0 static void kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused) { @@ -896,6 +911,12 @@ kpti_install_ng_mappings(const struct ar return; } +#else +static void +kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused) +{ +} +#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */ static int __init parse_kpti(char *str) { @@ -909,7 +930,6 @@ static int __init parse_kpti(char *str) return 0; } early_param("kpti", parse_kpti); -#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */ static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused) { @@ -1056,7 +1076,6 @@ static const struct arm64_cpu_capabiliti .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = hyp_offset_low, }, -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0 { .desc = "Kernel page table isolation (KPTI)", .capability = ARM64_UNMAP_KERNEL_AT_EL0, @@ -1072,7 +1091,6 @@ static const struct arm64_cpu_capabiliti .matches = unmap_kernel_at_el0, .cpu_enable = kpti_install_ng_mappings, }, -#endif { /* FP/SIMD is not implemented */ .capability = ARM64_HAS_NO_FPSIMD, @@ -1629,3 +1647,15 @@ static int __init enable_mrs_emulation(v } core_initcall(enable_mrs_emulation); + +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, + char *buf) +{ + if (__meltdown_safe) + return sprintf(buf, "Not affected\n"); + + if (arm64_kernel_unmapped_at_el0()) + return sprintf(buf, 
"Mitigation: PTI\n"); + + return sprintf(buf, "Vulnerable\n"); +} From patchwork Sun Oct 27 21:00:53 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 177851 Delivered-To: patch@linaro.org Received: by 2002:a92:409a:0:0:0:0:0 with SMTP id d26csp2532453ill; Sun, 27 Oct 2019 14:10:19 -0700 (PDT) X-Google-Smtp-Source: APXvYqyctxv/6WbStz1BW1ZCmoTLWKhNl0GOGMU+1ET9hrjI78z7APZOaTWvcyVa5ucj8DRc+IIE X-Received: by 2002:a05:6402:13c5:: with SMTP id a5mr15827940edx.106.1572210619466; Sun, 27 Oct 2019 14:10:19 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1572210619; cv=none; d=google.com; s=arc-20160816; b=bOZQMvB/6FQno3uQ7x7ABVFOrWW2lYb9iEQu5bzYHBVPkO9EnCkm15CaX81kB+l3vA jy4FcBFswSQtS307Uwb/QAKEKoHJ+fSuBMtGptvFPPNEduzDQoqP+tHWJqavbC/FIuXq WbvOeo8VEhAPNKpFuiuTCogYNMs2GF1kxteIXfg2YW902WJisuRIEKPGEL4Z/C5P1PDW +60bwHH13EuLTdQSE/5Q7q/Am5gCzJTJ11ql1iy7aoYMpW4w5Ne0GloCtgoQ+t2FDdBy jpwj5q/zYIqrFR7L5NFU/79oykZg4AJuHBi2lmcW2jpNg2UQIcyll+QL0PomfBtnsu/D XhUg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=dUkQCJ3aHiMYN4EkDBSpmbWyXptiS3ca+kX38SJsPXs=; b=kgJTrKubyvjSJT+1gPOuAu2YmQMw/ovuvFaUjiHcfCsmvRz6G/PU3DlwM86IGfiV6d fyNPV1fyzVheHHzqZFJR/9L5BATy+K/745To9WKcL09sGcyeaT0FX56gTtkIZ3I7ej+q aQXAeuiISCjDXOLpBjDFR3ccVjw1k55hk68AgsF67yQFt+7cGnGwaFy+UzBl/2QnpZy/ f1P1mjgxqwA8MbWMPSrFUMQm4IVudtjawlYhHM40Ll/rQCE4VBcUkQYz7bUw/uc+xF0o QoQkOZSiC0lIBaxwBldSz2bhGfu+DKYDKz0YXO71rpYyZjunK7f/hlzr0isfqiQaHw2C 8oVg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=AqRnr2WP; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id k19si6202812ede.293.2019.10.27.14.10.19; Sun, 27 Oct 2019 14:10:19 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=AqRnr2WP; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729667AbfJ0VKP (ORCPT + 14 others); Sun, 27 Oct 2019 17:10:15 -0400 Received: from mail.kernel.org ([198.145.29.99]:56622 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729659AbfJ0VKO (ORCPT ); Sun, 27 Oct 2019 17:10:14 -0400 Received: from localhost (100.50.158.77.rev.sfr.net [77.158.50.100]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id DB7D22064A; Sun, 27 Oct 2019 21:10:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1572210613; bh=0gr3NY6N99eU/D68NrZCCTp+jlsGqPArsdEQMYjHC6w=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=AqRnr2WPYjLD38gxjv5lmmo5l+VP08YHmi8HgaFH4FOkC4wNlZCMBWTAZu8VZPcQB XWUWJO5czh4oTbrsQYaPPeQSp6MuAdoETIAghvxSO0Hn6dvQzZuMomrU1boD+XG9Tz 90KrFRTcx5y24nnO12Zx8g/t5d597kWmkW9GKHW8= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Jeremy Linton , Suzuki K Poulose , Andre Przywara , Catalin Marinas , Stefan Wahren , Jonathan Corbet , linux-doc@vger.kernel.org, Will Deacon , Ard Biesheuvel Subject: [PATCH 4.14 076/119] arm64: Provide a command line to disable spectre_v2 mitigation Date: Sun, 27 Oct 2019 22:00:53 +0100 Message-Id: <20191027203342.881904911@linuxfoundation.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20191027203259.948006506@linuxfoundation.org> References: <20191027203259.948006506@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Sender: stable-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Jeremy Linton [ Upstream commit e5ce5e7267ddcbe13ab9ead2542524e1b7993e5a ] There are various reasons, such as benchmarking, to disable spectrev2 mitigation on a machine. Provide a command-line option to do so. Signed-off-by: Jeremy Linton Reviewed-by: Suzuki K Poulose Reviewed-by: Andre Przywara Reviewed-by: Catalin Marinas Tested-by: Stefan Wahren Cc: Jonathan Corbet Cc: linux-doc@vger.kernel.org Signed-off-by: Will Deacon Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman --- Documentation/admin-guide/kernel-parameters.txt | 8 ++++---- arch/arm64/kernel/cpu_errata.c | 13 +++++++++++++ 2 files changed, 17 insertions(+), 4 deletions(-) --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -2745,10 +2745,10 @@ (bounds check bypass). With this option data leaks are possible in the system. - nospectre_v2 [X86,PPC_FSL_BOOK3E] Disable all mitigations for the Spectre variant 2 - (indirect branch prediction) vulnerability. System may - allow data leaks with this option, which is equivalent - to spectre_v2=off. + nospectre_v2 [X86,PPC_FSL_BOOK3E,ARM64] Disable all mitigations for + the Spectre variant 2 (indirect branch prediction) + vulnerability. 
System may allow data leaks with this + option. nospec_store_bypass_disable [HW] Disable all mitigations for the Speculative Store Bypass vulnerability --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -181,6 +181,14 @@ static void qcom_link_stack_sanitization : "=&r" (tmp)); } +static bool __nospectre_v2; +static int __init parse_nospectre_v2(char *str) +{ + __nospectre_v2 = true; + return 0; +} +early_param("nospectre_v2", parse_nospectre_v2); + static void enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry) { @@ -192,6 +200,11 @@ enable_smccc_arch_workaround_1(const str if (!entry->matches(entry, SCOPE_LOCAL_CPU)) return; + if (__nospectre_v2) { + pr_info_once("spectrev2 mitigation disabled by command line option\n"); + return; + } + if (psci_ops.smccc_version == SMCCC_VERSION_1_0) return; From patchwork Sun Oct 27 21:00:59 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 177875 Delivered-To: patch@linaro.org Received: by 2002:a92:409a:0:0:0:0:0 with SMTP id d26csp2551462ill; Sun, 27 Oct 2019 14:34:39 -0700 (PDT) X-Google-Smtp-Source: APXvYqxk65R0yCu44YhqyQUwja3xalgnzNAN8A/TIqt8fSl5hLIfzlU1V1NicfMkd38S8qIMxpbm X-Received: by 2002:a17:906:2ada:: with SMTP id m26mr13790337eje.87.1572212079202; Sun, 27 Oct 2019 14:34:39 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1572212079; cv=none; d=google.com; s=arc-20160816; b=w4S0gVsl12Le1l3kfpPTguxsUbCKgPhAyijQQU7/13gLH3vhLbLMCtS/uEBkB2CnO6 dzMnmgun+QeEQoQ58tpqiH1KYL4XjptRiA+U2MdskmdCPdx4As4sd9C9X2ccx6/xfJsU zf9BB93borImsQD567wDk2dP7Jh1+INpyG6qGA/j3ZuxMFSzGIhCA0J+SJNowOPCS/nl tcV8PJHzAH6SlS75hNvrScMhIqvTF7AtC195y7OtooDLkHzUYh6VGFGK8rkLOMTiKeTU f7k19T4JYeM/zm3sIucUFgJeNI9sTgZakTyNIYRNDYD/TRdM7asuHOVTrV1vTwNWz3bv glnA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=3ZKBe0t3NGedkkS9Kt9la+QDCAYVXUco8Hv/E1/8Mzs=; b=C1aKK8tfYv1R3yQx+NugFWK7kSphSSIFy3eBc9G+ytSmGCVcaZazWfPHCdIzIkz9K/ mIMEOMeso9QGeI7F46MhAG+tbiaiblCl41qlgqHWxtE+7wGog/S/dDT6Oe1tOl6ukbHQ e/MlsXaDK4BgOoQJ/cDYiYuFbcLcnC5orJLUQ7ZJxx/S/pB7ag51tVBxUFhR0lzwbXk1 pf8KtW2CucFslcUeuewjQXUtgsZHEQPZvDF961iG0V64QTTSmln9vckQNuwfXH8YkRLk nCFBpAbtpJ8Ef/L3e7yhyA+eI08mdmgdWauiZWE98gNoEU6CWsjYOpoWTf5UypxMsqP8 KpyQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=wmaSuoRx; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id q24si4808598ejr.128.2019.10.27.14.34.39; Sun, 27 Oct 2019 14:34:39 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=default header.b=wmaSuoRx; spf=pass (google.com: best guess record for domain of stable-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=stable-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728820AbfJ0Veh (ORCPT + 14 others); Sun, 27 Oct 2019 17:34:37 -0400 Received: from mail.kernel.org ([198.145.29.99]:57016 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728768AbfJ0VKf (ORCPT ); Sun, 27 Oct 2019 17:10:35 -0400 Received: from localhost (100.50.158.77.rev.sfr.net [77.158.50.100]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id B6015208C0; Sun, 27 Oct 2019 21:10:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1572210634; bh=6u/I12k282DOBG2MRmZ69eO1I+TUxQ9KbrQ2nM3A04A=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=wmaSuoRxphQ/N1E5qgVtCIjfEGP62HbCXBwk75POoLT3nzBBSNcCdL2B/teMJcRv+ PY8iN2dO7ZO6qkpf44dJEZgqHYVwlRQZ6uhjCTMF2o3OxBDVT1dAsqNxauG9VCrXBW n6mDQMdV4tGkuTFVkKSvsNbSjs5SbzGsW/YH9JhI= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Marc Zyngier , Will Deacon , Ard Biesheuvel Subject: [PATCH 4.14 082/119] arm64: Force SSBS on context switch Date: Sun, 27 Oct 2019 22:00:59 +0100 Message-Id: <20191027203345.704836456@linuxfoundation.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20191027203259.948006506@linuxfoundation.org> References: <20191027203259.948006506@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Sender: stable-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Marc Zyngier [ Upstream commit cbdf8a189a66001c36007bf0f5c975d0376c5c3a ] On a CPU that doesn't support SSBS, PSTATE[12] is RES0. In a system where only some of the CPUs implement SSBS, we end-up losing track of the SSBS bit across task migration. To address this issue, let's force the SSBS bit on context switch. 
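[Illustration, not part of the patch: a hedged sketch of the failure mode being fixed. On a CPU without SSBS, PSTATE bit 12 is RES0, so the value saved for a migrated task reads back with the bit cleared; the PSR_SSBS_BIT value below is the one defined earlier in this series.]

#define PSR_SSBS_BIT	0x00001000UL	/* PSTATE[12] */

/*
 * Sketch: what effectively happens to a task's saved pstate after it has
 * run on a non-SSBS CPU, silently re-enabling the mitigation once the
 * task lands on an SSBS-capable CPU again.
 */
static unsigned long pstate_after_running_on_non_ssbs_cpu(unsigned long pstate)
{
	return pstate & ~PSR_SSBS_BIT;	/* RES0: the bit is not preserved */
}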
Fixes: 8f04e8e6e29c ("arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3")
Signed-off-by: Marc Zyngier
[will: inverted logic and added comments]
Signed-off-by: Will Deacon
Signed-off-by: Ard Biesheuvel
Signed-off-by: Greg Kroah-Hartman
---
 arch/arm64/include/asm/processor.h |   14 ++++++++++++--
 arch/arm64/kernel/process.c        |   29 ++++++++++++++++++++++++++++-
 2 files changed, 40 insertions(+), 3 deletions(-)

--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -148,6 +148,16 @@ static inline void start_thread_common(s
 	regs->pc = pc;
 }
 
+static inline void set_ssbs_bit(struct pt_regs *regs)
+{
+	regs->pstate |= PSR_SSBS_BIT;
+}
+
+static inline void set_compat_ssbs_bit(struct pt_regs *regs)
+{
+	regs->pstate |= PSR_AA32_SSBS_BIT;
+}
+
 static inline void start_thread(struct pt_regs *regs, unsigned long pc,
 				unsigned long sp)
 {
@@ -155,7 +165,7 @@ static inline void start_thread(struct p
 	regs->pstate = PSR_MODE_EL0t;
 
 	if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
-		regs->pstate |= PSR_SSBS_BIT;
+		set_ssbs_bit(regs);
 
 	regs->sp = sp;
 }
@@ -174,7 +184,7 @@ static inline void compat_start_thread(s
 #endif
 
 	if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
-		regs->pstate |= PSR_AA32_SSBS_BIT;
+		set_compat_ssbs_bit(regs);
 
 	regs->compat_sp = sp;
 }
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -298,7 +298,7 @@ int copy_thread(unsigned long clone_flag
 			childregs->pstate |= PSR_UAO_BIT;
 
 		if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
-			childregs->pstate |= PSR_SSBS_BIT;
+			set_ssbs_bit(childregs);
 
 		p->thread.cpu_context.x19 = stack_start;
 		p->thread.cpu_context.x20 = stk_sz;
@@ -340,6 +340,32 @@ void uao_thread_switch(struct task_struc
 }
 
 /*
+ * Force SSBS state on context-switch, since it may be lost after migrating
+ * from a CPU which treats the bit as RES0 in a heterogeneous system.
+ */
+static void ssbs_thread_switch(struct task_struct *next)
+{
+	struct pt_regs *regs = task_pt_regs(next);
+
+	/*
+	 * Nothing to do for kernel threads, but 'regs' may be junk
+	 * (e.g. idle task) so check the flags and bail early.
+	 */
+	if (unlikely(next->flags & PF_KTHREAD))
+		return;
+
+	/* If the mitigation is enabled, then we leave SSBS clear. */
+	if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
+	    test_tsk_thread_flag(next, TIF_SSBD))
+		return;
+
+	if (compat_user_mode(regs))
+		set_compat_ssbs_bit(regs);
+	else if (user_mode(regs))
+		set_ssbs_bit(regs);
+}
+
+/*
  * We store our current task in sp_el0, which is clobbered by userspace. Keep a
  * shadow copy so that we can restore this upon entry from userspace.
  *
@@ -367,6 +393,7 @@ __notrace_funcgraph struct task_struct *
 	contextidr_thread_switch(next);
 	entry_task_switch(next);
 	uao_thread_switch(next);
+	ssbs_thread_switch(next);
 
 	/*
 	 * Complete any pending TLB or cache maintenance on this CPU in case
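[Editor's aside, not part of the series: the per-task state that ssbs_thread_switch() checks through TIF_SSBD is the same state user space can inspect or request with the speculation-control prctl(). A rough user-space sketch follows; it assumes the PR_SPEC_* constants are present in the installed uapi headers, and main() is mine, not from the patch.]

#include <stdio.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

int main(void)
{
	/* Query the Speculative Store Bypass state for this task. */
	int state = prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
			  0, 0, 0);

	if (state < 0)
		perror("PR_GET_SPECULATION_CTRL");
	else if (state == PR_SPEC_NOT_AFFECTED)
		puts("CPU not affected by Speculative Store Bypass");
	else if (state & PR_SPEC_DISABLE)
		puts("store bypass disabled (mitigation on) for this task");
	else
		puts("store bypass enabled for this task");

	return 0;
}

[A task that asks for PR_SPEC_DISABLE is the kind of task for which the hunk above leaves SSBS clear on context switch.]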
From patchwork Sun Oct 27 21:01:00 2019
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 177861

From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg Kroah-Hartman, Marc Zyngier, Jeremy Linton, Andre Przywara, Catalin Marinas, Stefan Wahren, Will Deacon, Ard Biesheuvel
Subject: [PATCH 4.14 083/119] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2
Date: Sun, 27 Oct 2019 22:01:00 +0100
Message-Id: <20191027203345.935698895@linuxfoundation.org>
In-Reply-To: <20191027203259.948006506@linuxfoundation.org>
References: <20191027203259.948006506@linuxfoundation.org>

From: Marc Zyngier

[ Upstream commit 517953c2c47f9c00a002f588ac856a5bc70cede3 ]

The SMCCC ARCH_WORKAROUND_1 service can indicate that although the
firmware knows about the Spectre-v2 mitigation, this particular CPU is
not vulnerable, and it is thus not necessary to call the firmware on
this CPU.

Let's use this information to our benefit.
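[Editor's aside: the ARCH_FEATURES query for ARCH_WORKAROUND_1 now has a three-way answer rather than a yes/no one. The fragment below merely restates the convention that the hunks after this message implement; the function name is made up and it is not code from the patch.]

#include <linux/arm-smccc.h>

/* Sketch: interpret the SMCCC 1.1 feature query for WORKAROUND_1 (HVC conduit). */
static int spectre_v2_fw_answer_hvc(void)
{
	struct arm_smccc_res res;

	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
			  ARM_SMCCC_ARCH_WORKAROUND_1, &res);

	switch ((int)res.a0) {
	case 1:			/* implemented, but this CPU is not affected */
		return 0;	/* no mitigation needed here */
	case 0:			/* implemented and required on this CPU */
		return 1;	/* caller should install the workaround */
	default:		/* negative: firmware offers no workaround */
		return -1;
	}
}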
Signed-off-by: Marc Zyngier
Signed-off-by: Jeremy Linton
Reviewed-by: Andre Przywara
Reviewed-by: Catalin Marinas
Tested-by: Stefan Wahren
Signed-off-by: Will Deacon
Signed-off-by: Ard Biesheuvel
Signed-off-by: Greg Kroah-Hartman
---
 arch/arm64/kernel/cpu_errata.c |   32 +++++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 9 deletions(-)

--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -190,22 +190,36 @@ static int detect_harden_bp_fw(void)
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		switch ((int)res.a0) {
+		case 1:
+			/* Firmware says we're just fine */
+			return 0;
+		case 0:
+			cb = call_hvc_arch_workaround_1;
+			/* This is a guest, no need to patch KVM vectors */
+			smccc_start = NULL;
+			smccc_end = NULL;
+			break;
+		default:
 			return -1;
-		cb = call_hvc_arch_workaround_1;
-		/* This is a guest, no need to patch KVM vectors */
-		smccc_start = NULL;
-		smccc_end = NULL;
+		}
 		break;
 
 	case PSCI_CONDUIT_SMC:
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		switch ((int)res.a0) {
+		case 1:
+			/* Firmware says we're just fine */
+			return 0;
+		case 0:
+			cb = call_smc_arch_workaround_1;
+			smccc_start = __smccc_workaround_1_smc_start;
+			smccc_end = __smccc_workaround_1_smc_end;
+			break;
+		default:
 			return -1;
-		cb = call_smc_arch_workaround_1;
-		smccc_start = __smccc_workaround_1_smc_start;
-		smccc_end = __smccc_workaround_1_smc_end;
+		}
 		break;
 
 	default: