From patchwork Mon Sep 12 14:16:24 2016
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 76004
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, mark.rutland@arm.com
Subject: [PATCH] arm64: mm: move zero page from .bss to right before swapper_pg_dir
Date: Mon, 12 Sep 2016 15:16:24 +0100
Message-Id: <1473689784-29745-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.7.4
Cc: Ard Biesheuvel

Move the statically allocated zero page from the .bss section to right
before swapper_pg_dir. This allows us to refer to its physical address
by simply reading TTBR1_EL1 (which always points to swapper_pg_dir and
always has its ASID field cleared), and subtracting PAGE_SIZE.

To protect the zero page from inadvertent modification, carve out a
segment that covers it as well as idmap_pg_dir[], and mark it read-only
in both the primary and the linear mappings of the kernel.

Signed-off-by: Ard Biesheuvel
---
v2: - make empty_zero_page[] read-only
    - make idmap_pg_dir[] read-only as well
    - fix issue in v1 with cpu_set_reserved_ttbr0()

This is perhaps becoming a bit unwieldy, but I agree with Mark that
having a read-only zero page is a significant improvement.
 arch/arm64/include/asm/mmu_context.h | 19 +++----
 arch/arm64/include/asm/sections.h    |  1 +
 arch/arm64/kernel/vmlinux.lds.S      | 14 ++++-
 arch/arm64/mm/mmu.c                  | 56 ++++++++++++--------
 4 files changed, 57 insertions(+), 33 deletions(-)

-- 
2.7.4

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index b1892a0dbcb0..1fe4c4422f0a 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -49,13 +49,12 @@ static inline void contextidr_thread_switch(struct task_struct *next)
  */
 static inline void cpu_set_reserved_ttbr0(void)
 {
-	unsigned long ttbr = virt_to_phys(empty_zero_page);
-
-	asm(
-	"	msr	ttbr0_el1, %0		// set TTBR0\n"
-	"	isb"
-	:
-	: "r" (ttbr));
+	/*
+	 * The zero page is located right before swapper_pg_dir, whose
+	 * physical address we can easily fetch from TTBR1_EL1.
+	 */
+	write_sysreg(read_sysreg(ttbr1_el1) - PAGE_SIZE, ttbr0_el1);
+	isb();
 }
 
 /*
@@ -109,7 +108,8 @@ static inline void cpu_uninstall_idmap(void)
 {
 	struct mm_struct *mm = current->active_mm;
 
-	cpu_set_reserved_ttbr0();
+	write_sysreg(virt_to_phys(empty_zero_page), ttbr0_el1);
+	isb();
 	local_flush_tlb_all();
 	cpu_set_default_tcr_t0sz();
 
@@ -119,7 +119,8 @@ static inline void cpu_uninstall_idmap(void)
 
 static inline void cpu_install_idmap(void)
 {
-	cpu_set_reserved_ttbr0();
+	write_sysreg(virt_to_phys(empty_zero_page), ttbr0_el1);
+	isb();
 	local_flush_tlb_all();
 	cpu_set_idmap_tcr_t0sz();
diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
index 4e7e7067afdb..44e94e234ba0 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -26,5 +26,6 @@ extern char __hyp_text_start[], __hyp_text_end[];
 extern char __idmap_text_start[], __idmap_text_end[];
 extern char __irqentry_text_start[], __irqentry_text_end[];
 extern char __mmuoff_data_start[], __mmuoff_data_end[];
+extern char __robss_start[], __robss_end[];
 
 #endif /* __ASM_SECTIONS_H */
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 5ce9b2929e0d..eae5036dc725 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -209,9 +209,19 @@ SECTIONS
 
 	BSS_SECTION(0, 0, 0)
 
-	. = ALIGN(PAGE_SIZE);
+	. = ALIGN(SEGMENT_ALIGN);
+	__robss_start = .;
 	idmap_pg_dir = .;
-	. += IDMAP_DIR_SIZE;
+	. = ALIGN(. + IDMAP_DIR_SIZE + PAGE_SIZE, SEGMENT_ALIGN);
+	__robss_end = .;
+
+	/*
+	 * Put the zero page right before swapper_pg_dir so we can easily
+	 * obtain its physical address by subtracting PAGE_SIZE from the
+	 * contents of TTBR1_EL1.
+	 */
+	empty_zero_page = __robss_end - PAGE_SIZE;
+
 	swapper_pg_dir = .;
 	. += SWAPPER_DIR_SIZE;
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index e634a0f6d62b..adb00035a6a4 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -54,7 +54,6 @@ EXPORT_SYMBOL(kimage_voffset);
  * Empty_zero_page is a special page that is used for zero-initialized data
  * and COW.
  */
-unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
 EXPORT_SYMBOL(empty_zero_page);
 
 static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;
@@ -321,16 +320,18 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
 static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end)
 {
-	unsigned long kernel_start = __pa(_text);
-	unsigned long kernel_end = __pa(__init_begin);
+	unsigned long text_start = __pa(_text);
+	unsigned long text_end = __pa(__init_begin);
+	unsigned long robss_start = __pa(__robss_start);
+	unsigned long robss_end = __pa(__robss_end);
 
 	/*
 	 * Take care not to create a writable alias for the
-	 * read-only text and rodata sections of the kernel image.
+	 * read-only text/rodata/robss sections of the kernel image.
 	 */
 
-	/* No overlap with the kernel text/rodata */
-	if (end < kernel_start || start >= kernel_end) {
+	/* No overlap with the kernel text/rodata/robss */
+	if (end < text_start || start >= robss_end) {
 		__create_pgd_mapping(pgd, start, __phys_to_virt(start),
 				     end - start, PAGE_KERNEL,
 				     early_pgtable_alloc,
@@ -342,27 +343,32 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
 	 * This block overlaps the kernel text/rodata mappings.
 	 * Map the portion(s) which don't overlap.
 	 */
-	if (start < kernel_start)
-		__create_pgd_mapping(pgd, start,
-				     __phys_to_virt(start),
-				     kernel_start - start, PAGE_KERNEL,
+	if (start < text_start)
+		__create_pgd_mapping(pgd, start, __phys_to_virt(start),
+				     text_start - start, PAGE_KERNEL,
 				     early_pgtable_alloc,
 				     !debug_pagealloc_enabled());
-	if (kernel_end < end)
-		__create_pgd_mapping(pgd, kernel_end,
-				     __phys_to_virt(kernel_end),
-				     end - kernel_end, PAGE_KERNEL,
+	if (robss_end < end)
+		__create_pgd_mapping(pgd, robss_end, __phys_to_virt(robss_end),
+				     end - robss_end, PAGE_KERNEL,
 				     early_pgtable_alloc,
 				     !debug_pagealloc_enabled());
 
 	/*
-	 * Map the linear alias of the [_text, __init_begin) interval as
-	 * read-only/non-executable. This makes the contents of the
-	 * region accessible to subsystems such as hibernate, but
-	 * protects it from inadvertent modification or execution.
+	 * Map the linear alias of the intervals [_text, __init_begin) and
+	 * [robss_start, robss_end) as read-only/non-executable. This makes
+	 * the contents of these regions accessible to subsystems such
+	 * as hibernate, but protects them from inadvertent modification or
+	 * execution.
 	 */
-	__create_pgd_mapping(pgd, kernel_start, __phys_to_virt(kernel_start),
-			     kernel_end - kernel_start, PAGE_KERNEL_RO,
+	__create_pgd_mapping(pgd, text_start, __phys_to_virt(text_start),
+			     text_end - text_start, PAGE_KERNEL_RO,
+			     early_pgtable_alloc, !debug_pagealloc_enabled());
+	__create_pgd_mapping(pgd, text_end, __phys_to_virt(text_end),
+			     robss_start - text_end, PAGE_KERNEL,
+			     early_pgtable_alloc, !debug_pagealloc_enabled());
+	__create_pgd_mapping(pgd, robss_start, __phys_to_virt(robss_start),
+			     robss_end - robss_start, PAGE_KERNEL_RO,
 			     early_pgtable_alloc, !debug_pagealloc_enabled());
 }
 
@@ -436,13 +442,19 @@ static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
  */
 static void __init map_kernel(pgd_t *pgd)
 {
-	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_init, vmlinux_data;
+	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_init,
+				vmlinux_data, vmlinux_robss, vmlinux_tail;
 
 	map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_EXEC, &vmlinux_text);
 	map_kernel_segment(pgd, __start_rodata, __init_begin, PAGE_KERNEL,
 			   &vmlinux_rodata);
 	map_kernel_segment(pgd, __init_begin, __init_end, PAGE_KERNEL_EXEC,
 			   &vmlinux_init);
-	map_kernel_segment(pgd, _data, _end, PAGE_KERNEL, &vmlinux_data);
+	map_kernel_segment(pgd, _data, __robss_start, PAGE_KERNEL,
+			   &vmlinux_data);
+	map_kernel_segment(pgd, __robss_start, __robss_end, PAGE_KERNEL_RO,
+			   &vmlinux_robss);
+	map_kernel_segment(pgd, __robss_end, _end, PAGE_KERNEL,
+			   &vmlinux_tail);
 
 	if (!pgd_val(*pgd_offset_raw(pgd, FIXADDR_START))) {
 		/*