From patchwork Fri Aug 12 15:27:42 2016
X-Patchwork-Submitter: Catalin Marinas
X-Patchwork-Id: 73851
From: Catalin Marinas
To: linux-arm-kernel@lists.infradead.org
Cc: James Morse, Will Deacon, Kees Cook, kernel-hardening@lists.openwall.com
Subject: [PATCH 3/7] arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1
Date: Fri, 12 Aug 2016 16:27:42 +0100
Message-Id: <1471015666-23125-4-git-send-email-catalin.marinas@arm.com>
In-Reply-To: <1471015666-23125-1-git-send-email-catalin.marinas@arm.com>
References: <1471015666-23125-1-git-send-email-catalin.marinas@arm.com>
X-Mailer: git-send-email 2.1.4

This patch adds the uaccess macros/functions to disable access to user
space by setting TTBR0_EL1 to a reserved zeroed page. Since the value
written to TTBR0_EL1 must be a physical address, for simplicity this
patch introduces a reserved_ttbr0 page at a constant offset from
swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value
adjusted by the reserved_ttbr0 offset.

Enabling access to user space is done by restoring TTBR0_EL1 with the
value from the saved_ttbr0_el1 per-CPU variable. Interrupts are
disabled during the uaccess_enable code to ensure the atomicity of the
saved_ttbr0_el1 read and the TTBR0_EL1 write.

Cc: Will Deacon
Cc: James Morse
Cc: Kees Cook
Signed-off-by: Catalin Marinas
---
 arch/arm64/include/asm/assembler.h      | 63 +++++++++++++++++++++++++++++++--
 arch/arm64/include/asm/cpufeature.h     |  6 ++++
 arch/arm64/include/asm/kernel-pgtable.h |  7 ++++
 arch/arm64/include/asm/uaccess.h        | 38 +++++++++++++++++---
 arch/arm64/kernel/cpufeature.c          |  1 +
 arch/arm64/kernel/head.S                |  6 ++--
 arch/arm64/kernel/vmlinux.lds.S         |  5 +++
 arch/arm64/mm/context.c                 | 13 +++++++
 8 files changed, 130 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 039db634a693..45545393f605 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -42,6 +43,15 @@
 	msr	daifclr, #2
 	.endm

+	.macro	save_and_disable_irq, flags
+	mrs	\flags, daif
+	msr	daifset, #2
+	.endm
+
+	.macro	restore_irq, flags
+	msr	daif, \flags
+	.endm
+
 /*
  * Enable and disable debug exceptions.
  */
@@ -195,7 +205,7 @@ lr	.req	x30		// link register

 /*
  * @sym: The name of the per-cpu variable
- * @reg: Result of per_cpu(sym, smp_processor_id())
+ * @reg: Result of this_cpu_ptr(sym)
  * @tmp: scratch register
  */
 	.macro	this_cpu_ptr, sym, reg, tmp
@@ -204,6 +214,17 @@ lr	.req	x30		// link register
 	add	\reg, \reg, \tmp
 	.endm

+/*
+ * @sym: The name of the per-cpu variable
+ * @reg: Result of this_cpu_read(sym)
+ * @tmp: scratch register
+ */
+	.macro	this_cpu_read, sym, reg, tmp
+	adr_l	\reg, \sym
+	mrs	\tmp, tpidr_el1
+	ldr	\reg, [\reg, \tmp]
+	.endm
+
 /*
  * vma_vm_mm - get mm pointer from vma pointer (vma->vm_mm)
  */
@@ -379,7 +400,28 @@ alternative_endif
 /*
  * User access enabling/disabling macros.
  */
+	.macro	uaccess_ttbr0_disable, tmp1
+	mrs	\tmp1, ttbr1_el1		// swapper_pg_dir
+	add	\tmp1, \tmp1, #SWAPPER_DIR_SIZE	// reserved_ttbr0 at the end of swapper_pg_dir
+	cpu_set_ttbr0	\tmp1			// set reserved TTBR0_EL1
+	.endm
+
+	.macro	uaccess_ttbr0_enable, tmp1, tmp2, errata = 0
+	this_cpu_read	saved_ttbr0_el1 \tmp1, \tmp2
+	cpu_set_ttbr0	\tmp1, errata = \errata
+	.endm
+
 	.macro	uaccess_disable, tmp1
+#ifdef CONFIG_ARM64_TTBR0_PAN
+alternative_if_not ARM64_HAS_PAN
+	uaccess_ttbr0_disable \tmp1
+alternative_else
+	nop
+	nop
+	nop
+	nop
+alternative_endif
+#endif
 alternative_if_not ARM64_ALT_PAN_NOT_UAO
 	nop
 alternative_else
@@ -387,7 +429,24 @@ alternative_else
 alternative_endif
 	.endm

-	.macro	uaccess_enable, tmp1, tmp2, flags, errata = 0
+	.macro	uaccess_enable, tmp1, tmp2, tmp3
+#ifdef CONFIG_ARM64_TTBR0_PAN
+alternative_if_not ARM64_HAS_PAN
+	save_and_disable_irq	\tmp3		// avoid preemption
+	uaccess_ttbr0_enable	\tmp1, \tmp2
+	restore_irq	\tmp3
+alternative_else
+	nop
+	nop
+	nop
+	nop
+	nop
+	nop
+	nop
+	nop
+	nop
+alternative_endif
+#endif
 alternative_if_not ARM64_ALT_PAN_NOT_UAO
 	nop
 alternative_else
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 7099f26e3702..023066d9bf7f 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -216,6 +216,12 @@ static inline bool system_supports_mixed_endian_el0(void)
 	return id_aa64mmfr0_mixed_endian_el0(read_system_reg(SYS_ID_AA64MMFR0_EL1));
 }

+static inline bool system_supports_ttbr0_pan(void)
+{
+	return IS_ENABLED(CONFIG_ARM64_TTBR0_PAN) &&
+		!cpus_have_cap(ARM64_HAS_PAN);
+}
+
 #endif /* __ASSEMBLY__ */

 #endif
diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 7e51d1b57c0c..f825ffda9c52 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -19,6 +19,7 @@
 #ifndef __ASM_KERNEL_PGTABLE_H
 #define __ASM_KERNEL_PGTABLE_H

+#include
 #include

 /*
@@ -54,6 +55,12 @@
 #define SWAPPER_DIR_SIZE	(SWAPPER_PGTABLE_LEVELS * PAGE_SIZE)
 #define IDMAP_DIR_SIZE		(IDMAP_PGTABLE_LEVELS * PAGE_SIZE)

+#ifdef CONFIG_ARM64_TTBR0_PAN
+#define RESERVED_TTBR0_SIZE	(PAGE_SIZE)
+#else
+#define RESERVED_TTBR0_SIZE	(0)
+#endif
+
 /* Initial memory map size */
 #if ARM64_SWAPPER_USES_SECTION_MAPS
 #define SWAPPER_BLOCK_SHIFT	SECTION_SHIFT
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index f04869630207..e0eccdfd2427 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -22,11 +22,13 @@
  * User space memory access functions
  */
 #include
+#include
 #include
 #include

 #include
 #include
+#include
 #include
 #include
 #include
@@ -114,16 +116,44 @@ static inline void set_fs(mm_segment_t fs)

 /*
  * User access enabling/disabling.
  */
+DECLARE_PER_CPU(u64, saved_ttbr0_el1);
+
+static inline void uaccess_ttbr0_disable(void)
+{
+	unsigned long ttbr;
+
+	ttbr = read_sysreg(ttbr1_el1) + SWAPPER_DIR_SIZE;
+	write_sysreg(ttbr, ttbr0_el1);
+	isb();
+}
+
+static inline void uaccess_ttbr0_enable(void)
+{
+	unsigned long ttbr, flags;
+
+	local_irq_save(flags);
+	ttbr = per_cpu(saved_ttbr0_el1, smp_processor_id());
+	write_sysreg(ttbr, ttbr0_el1);
+	isb();
+	local_irq_restore(flags);
+}
+
 #define uaccess_disable(alt)						\
 do {									\
-	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), alt,			\
-			CONFIG_ARM64_PAN));				\
+	if (system_supports_ttbr0_pan())				\
+		uaccess_ttbr0_disable();				\
+	else								\
+		asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), alt,		\
+				CONFIG_ARM64_PAN));			\
 } while (0)

 #define uaccess_enable(alt)						\
 do {									\
-	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), alt,			\
-			CONFIG_ARM64_PAN));				\
+	if (system_supports_ttbr0_pan())				\
+		uaccess_ttbr0_enable();					\
+	else								\
+		asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), alt,		\
+				CONFIG_ARM64_PAN));			\
 } while (0)

 /*
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 62272eac1352..fd0971afd142 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -45,6 +45,7 @@ unsigned int compat_elf_hwcap2 __read_mostly;
 #endif

 DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
+EXPORT_SYMBOL(cpu_hwcaps);

 #define __ARM64_FTR_BITS(SIGNED, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL)	\
 	{						\
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index b77f58355da1..57ae28e4d8de 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -320,14 +320,14 @@ __create_page_tables:
	 * dirty cache lines being evicted.
	 */
	mov	x0, x25
-	add	x1, x26, #SWAPPER_DIR_SIZE
+	add	x1, x26, #SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE
	bl	__inval_cache_range

	/*
	 * Clear the idmap and swapper page tables.
	 */
	mov	x0, x25
-	add	x6, x26, #SWAPPER_DIR_SIZE
+	add	x6, x26, #SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE
 1:	stp	xzr, xzr, [x0], #16
	stp	xzr, xzr, [x0], #16
	stp	xzr, xzr, [x0], #16
@@ -406,7 +406,7 @@ __create_page_tables:
	 * tables again to remove any speculatively loaded cache lines.
	 */
	mov	x0, x25
-	add	x1, x26, #SWAPPER_DIR_SIZE
+	add	x1, x26, #SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE
	dmb	sy
	bl	__inval_cache_range

diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 659963d40bb4..fe393ccf9352 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -196,6 +196,11 @@ SECTIONS
	swapper_pg_dir = .;
	. += SWAPPER_DIR_SIZE;

+#ifdef CONFIG_ARM64_TTBR0_PAN
+	reserved_ttbr0 = .;
+	. += PAGE_SIZE;
+#endif
+
	_end = .;

	STABS_DEBUG
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index efcf1f7ef1e4..f4bdee285774 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -37,6 +37,11 @@ static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
 static cpumask_t tlb_flush_pending;

+#ifdef CONFIG_ARM64_TTBR0_PAN
+DEFINE_PER_CPU(u64, saved_ttbr0_el1);
+EXPORT_PER_CPU_SYMBOL(saved_ttbr0_el1);
+#endif
+
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
 #define ASID_FIRST_VERSION	(1UL << asid_bits)
 #define NUM_USER_ASIDS		ASID_FIRST_VERSION
@@ -226,6 +231,8 @@ switch_mm_fastpath:

 static int asids_init(void)
 {
+	unsigned int cpu __maybe_unused;
+
	asid_bits = get_cpu_asid_bits();
	/*
	 * Expect allocation after rollover to fail if we don't have at least
@@ -239,6 +246,12 @@ static int asids_init(void)
		panic("Failed to allocate bitmap for %lu ASIDs\n",
		      NUM_USER_ASIDS);

+#ifdef CONFIG_ARM64_TTBR0_PAN
+	/* Initialise saved_ttbr0_el1 to the reserved TTBR0 and ASID */
+	for_each_possible_cpu(cpu)
+		per_cpu(saved_ttbr0_el1, cpu) = virt_to_phys(empty_zero_page);
+#endif
+
	pr_info("ASID allocator initialised with %lu entries\n", NUM_USER_ASIDS);
	return 0;
 }