From patchwork Thu Feb 23 16:22:55 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mark Rutland <mark.rutland@arm.com>
X-Patchwork-Id: 94387
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: ard.biesheuvel@linaro.org, catalin.marinas@arm.com,
    jean-philippe.brucker@arm.com, mark.rutland@arm.com,
    stable@vger.kernel.org, Will.Deacon@arm.com
Subject: [PATCH] Revert "arm64: mm: set the contiguous bit for kernel mappings where appropriate"
Date: Thu, 23 Feb 2017 16:22:55 +0000
Message-Id: <1487866978-25863-1-git-send-email-mark.rutland@arm.com>
X-Mailer: git-send-email 1.9.1
X-Mailing-List: stable@vger.kernel.org

This reverts commit 0bfc445dec9dd8130d22c9f4476eed7598524129.

When we change the permissions of regions mapped using contiguous
entries, the architecture requires us to follow a Break-Before-Make
strategy, breaking *all* associated entries before we can change any of
the following properties of those entries:

 - presence of the contiguous bit
 - output address
 - attributes
 - permissions

Failure to do so can result in a number of problems (e.g. TLB conflict
aborts and/or erroneous results from TLB lookups). See ARM DDI
0487A.k_iss10775, "Misprogramming of the Contiguous bit", page D4-1762.

We do not take this into account when altering the permissions of
kernel segments in mark_rodata_ro(), where we change the permissions of
live contiguous entries one-by-one, leaving them transiently
inconsistent. This has been observed to result in failures on some fast
model configurations.

Unfortunately, we cannot follow Break-Before-Make here, as we'd have to
unmap the kernel text and data used to perform the sequence.

For the time being, revert commit 0bfc445dec9dd813 so as to avoid
issues resulting from this misuse of the contiguous bit.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <Will.Deacon@arm.com>
Cc: stable@vger.kernel.org # v4.10
---
 arch/arm64/mm/mmu.c | 34 ++++------------------------------
 1 file changed, 4 insertions(+), 30 deletions(-)

I'm aware that our hugetlbpage code has a similar issue. I'm looking
into that now, and will address that with separate patches. It should
be possible to use Break-Before-Make there, as it's a userspace mapping.

Mark.

-- 
1.9.1

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 17243e4..c391b1f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -108,10 +108,8 @@ static bool pgattr_change_is_safe(u64 old, u64 new)
 static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
                                   unsigned long end, unsigned long pfn,
                                   pgprot_t prot,
-                                  phys_addr_t (*pgtable_alloc)(void),
-                                  bool page_mappings_only)
+                                  phys_addr_t (*pgtable_alloc)(void))
 {
-        pgprot_t __prot = prot;
         pte_t *pte;
 
         BUG_ON(pmd_sect(*pmd));
@@ -129,18 +127,7 @@ static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
         do {
                 pte_t old_pte = *pte;
 
-                /*
-                 * Set the contiguous bit for the subsequent group of PTEs if
-                 * its size and alignment are appropriate.
-                 */
-                if (((addr | PFN_PHYS(pfn)) & ~CONT_PTE_MASK) == 0) {
-                        if (end - addr >= CONT_PTE_SIZE && !page_mappings_only)
-                                __prot = __pgprot(pgprot_val(prot) | PTE_CONT);
-                        else
-                                __prot = prot;
-                }
-
-                set_pte(pte, pfn_pte(pfn, __prot));
+                set_pte(pte, pfn_pte(pfn, prot));
                 pfn++;
 
                 /*
@@ -159,7 +146,6 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
                                   phys_addr_t (*pgtable_alloc)(void),
                                   bool page_mappings_only)
 {
-        pgprot_t __prot = prot;
         pmd_t *pmd;
         unsigned long next;
 
@@ -186,18 +172,7 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
                 /* try section mapping first */
                 if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
                       !page_mappings_only) {
-                        /*
-                         * Set the contiguous bit for the subsequent group of
-                         * PMDs if its size and alignment are appropriate.
-                         */
-                        if (((addr | phys) & ~CONT_PMD_MASK) == 0) {
-                                if (end - addr >= CONT_PMD_SIZE)
-                                        __prot = __pgprot(pgprot_val(prot) |
-                                                          PTE_CONT);
-                                else
-                                        __prot = prot;
-                        }
-                        pmd_set_huge(pmd, phys, __prot);
+                        pmd_set_huge(pmd, phys, prot);
 
                         /*
                          * After the PMD entry has been populated once, we
@@ -207,8 +182,7 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
                                       pmd_val(*pmd)));
                 } else {
                         alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys),
-                                       prot, pgtable_alloc,
-                                       page_mappings_only);
+                                       prot, pgtable_alloc);
 
                         BUG_ON(pmd_val(old_pmd) != 0 &&
                                pmd_val(old_pmd) != pmd_val(*pmd));
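
Not part of the patch, but for illustration of the Break-Before-Make sequence
the commit message refers to: a minimal sketch of how one contiguous group of
kernel PTEs would have to be updated. The helper name bbm_update_cont_ptes is
hypothetical, and locking, barriers and error handling are ignored; the point
is only the required ordering (break all entries, invalidate the TLB, then
write the new entries), which is exactly what we cannot do for the kernel's
own text/data mappings, hence the revert.

#include <linux/mm.h>
#include <asm/pgtable.h>
#include <asm/tlbflush.h>

/*
 * Illustrative sketch only (not part of this patch): Break-Before-Make for
 * one contiguous group of kernel PTEs. The "break" step unmaps the whole
 * group, which is why this cannot be applied to the mappings covering the
 * code and data performing the update.
 */
static void bbm_update_cont_ptes(pte_t *ptep, unsigned long addr,
                                 unsigned long pfn, pgprot_t newprot)
{
        int i;

        /* Break: clear every entry in the contiguous group. */
        for (i = 0; i < CONT_PTES; i++)
                pte_clear(&init_mm, addr + i * PAGE_SIZE, ptep + i);

        /* Invalidate any TLB entries (contiguous or otherwise) for the range. */
        flush_tlb_kernel_range(addr, addr + CONT_PTES * PAGE_SIZE);

        /* Make: install the new entries with the new attributes. */
        for (i = 0; i < CONT_PTES; i++)
                set_pte(ptep + i, pfn_pte(pfn + i, newprot));
}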