From patchwork Wed Oct 12 11:23:41 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 77548
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org, mark.rutland@arm.com,
 catalin.marinas@arm.com, will.deacon@arm.com
Subject: [PATCH v3 1/5] arm64: mm: BUG on unsupported manipulations of live
 kernel mappings
Date: Wed, 12 Oct 2016 12:23:41 +0100
Message-Id: <1476271425-19401-2-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1476271425-19401-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1476271425-19401-1-git-send-email-ard.biesheuvel@linaro.org>
Cc: steve.capper@linaro.org, jeremy.linton@arm.com, Ard Biesheuvel

Now that we take care not to manipulate the live kernel page tables in a
way that may lead to TLB conflicts, the case where a table mapping is
replaced by a block mapping can no longer occur. So remove the handling
of this at the PUD and PMD levels, and instead, BUG() on any occurrence
of live kernel page table manipulations that modify anything other than
the permission bits.

Since mark_rodata_ro() is the only caller where the kernel mappings that
are being manipulated are actually live, drop the various conditional
flush_tlb_all() invocations, and add a single flush_tlb_all() call to
mark_rodata_ro() instead.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/mm/mmu.c | 68 ++++++++++++--------
 1 file changed, 41 insertions(+), 27 deletions(-)

-- 
2.7.4

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 05615a3fdc6f..e1c34e5a1d7d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -28,8 +28,6 @@
 #include 
 #include 
 #include 
-#include 
-#include 
 
 #include 
 #include 
@@ -95,6 +93,12 @@ static phys_addr_t __init early_pgtable_alloc(void)
 	return phys;
 }
 
+/*
+ * The following mapping attributes may be updated in live
+ * kernel mappings without the need for break-before-make.
+ */
+static const pteval_t modifiable_attr_mask = PTE_PXN | PTE_RDONLY | PTE_WRITE;
+
 static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
 			   unsigned long end, unsigned long pfn,
 			   pgprot_t prot,
@@ -115,8 +119,18 @@ static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
 
 	pte = pte_set_fixmap_offset(pmd, addr);
 	do {
+		pte_t old_pte = *pte;
+
 		set_pte(pte, pfn_pte(pfn, prot));
 		pfn++;
+
+		/*
+		 * After the PTE entry has been populated once, we
+		 * only allow updates to the permission attributes.
+		 */
+		BUG_ON(pte_val(old_pte) != 0 &&
+		       ((pte_val(old_pte) ^ pte_val(*pte)) &
+			~modifiable_attr_mask) != 0);
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 
 	pte_clear_fixmap();
@@ -146,27 +160,28 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
 
 	pmd = pmd_set_fixmap_offset(pud, addr);
 	do {
+		pmd_t old_pmd = *pmd;
+
 		next = pmd_addr_end(addr, end);
+
 		/* try section mapping first */
 		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
 		      allow_block_mappings) {
-			pmd_t old_pmd =*pmd;
 			pmd_set_huge(pmd, phys, prot);
+
 			/*
-			 * Check for previous table entries created during
-			 * boot (__create_page_tables) and flush them.
+			 * After the PMD entry has been populated once, we
+			 * only allow updates to the permission attributes.
 			 */
-			if (!pmd_none(old_pmd)) {
-				flush_tlb_all();
-				if (pmd_table(old_pmd)) {
-					phys_addr_t table = pmd_page_paddr(old_pmd);
-					if (!WARN_ON_ONCE(slab_is_available()))
-						memblock_free(table, PAGE_SIZE);
-				}
-			}
+			BUG_ON(pmd_val(old_pmd) != 0 &&
+			       ((pmd_val(old_pmd) ^ pmd_val(*pmd)) &
+				~modifiable_attr_mask) != 0);
 		} else {
 			alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys),
 				       prot, pgtable_alloc);
+
+			BUG_ON(pmd_val(old_pmd) != 0 &&
+			       pmd_val(old_pmd) != pmd_val(*pmd));
 		}
 		phys += next - addr;
 	} while (pmd++, addr = next, addr != end);
@@ -204,33 +219,29 @@ static void alloc_init_pud(pgd_t *pgd, unsigned long addr, unsigned long end,
 
 	pud = pud_set_fixmap_offset(pgd, addr);
 	do {
+		pud_t old_pud = *pud;
+
 		next = pud_addr_end(addr, end);
 
 		/*
 		 * For 4K granule only, attempt to put down a 1GB block
 		 */
 		if (use_1G_block(addr, next, phys) && allow_block_mappings) {
-			pud_t old_pud = *pud;
 			pud_set_huge(pud, phys, prot);
 
 			/*
-			 * If we have an old value for a pud, it will
-			 * be pointing to a pmd table that we no longer
-			 * need (from swapper_pg_dir).
-			 *
-			 * Look up the old pmd table and free it.
+			 * After the PUD entry has been populated once, we
+			 * only allow updates to the permission attributes.
 			 */
-			if (!pud_none(old_pud)) {
-				flush_tlb_all();
-				if (pud_table(old_pud)) {
-					phys_addr_t table = pud_page_paddr(old_pud);
-					if (!WARN_ON_ONCE(slab_is_available()))
-						memblock_free(table, PAGE_SIZE);
-				}
-			}
+			BUG_ON(pud_val(old_pud) != 0 &&
+			       ((pud_val(old_pud) ^ pud_val(*pud)) &
+				~modifiable_attr_mask) != 0);
 		} else {
 			alloc_init_pmd(pud, addr, next, phys, prot,
 				       pgtable_alloc, allow_block_mappings);
+
+			BUG_ON(pud_val(old_pud) != 0 &&
+			       pud_val(old_pud) != pud_val(*pud));
 		}
 		phys += next - addr;
 	} while (pud++, addr = next, addr != end);
@@ -396,6 +407,9 @@ void mark_rodata_ro(void)
 	section_size = (unsigned long)__init_begin - (unsigned long)__start_rodata;
 	create_mapping_late(__pa(__start_rodata), (unsigned long)__start_rodata,
 			    section_size, PAGE_KERNEL_RO);
+
+	/* flush the TLBs after updating live kernel mappings */
+	flush_tlb_all();
 }
 
 static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
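
For readers who want to see the check in isolation, below is a minimal
user-space sketch (not part of the patch) of the comparison the new
BUG_ON()s assert: once a descriptor has been populated, its old and new
values may only differ in the bits covered by modifiable_attr_mask, and
any other change would require break-before-make. The PTE_PXN, PTE_RDONLY
and PTE_WRITE bit positions used here are placeholders for illustration,
not the kernel's definitions.

/*
 * Minimal sketch of the "permission bits only" update check.
 * The PTE_* bit positions are placeholders, not the arm64 definitions.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t pteval_t;

#define PTE_RDONLY	((pteval_t)1 << 7)	/* placeholder bit */
#define PTE_PXN		((pteval_t)1 << 53)	/* placeholder bit */
#define PTE_WRITE	((pteval_t)1 << 55)	/* placeholder bit */

/* attributes that may change in a live mapping without break-before-make */
static const pteval_t modifiable_attr_mask = PTE_PXN | PTE_RDONLY | PTE_WRITE;

/*
 * An empty entry may be populated freely; a populated entry may only
 * differ from its new value in the permission bits above.
 */
static bool live_update_is_safe(pteval_t old, pteval_t new)
{
	if (old == 0)
		return true;
	return ((old ^ new) & ~modifiable_attr_mask) == 0;
}

int main(void)
{
	pteval_t old = 0x40000000 | PTE_WRITE;	/* fake output address, writable */

	/* dropping write access (what mark_rodata_ro() does) is allowed */
	printf("make read-only: %s\n",
	       live_update_is_safe(old, (old & ~PTE_WRITE) | PTE_RDONLY) ?
	       "safe" : "needs break-before-make");

	/* changing the output address is not a pure permission update */
	printf("change address: %s\n",
	       live_update_is_safe(old, old + 0x1000) ?
	       "safe" : "needs break-before-make");

	return 0;
}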