From patchwork Mon Aug 7 18:36:03 2017
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 109592
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: ard.biesheuvel@linaro.org, catalin.marinas@arm.com, james.morse@arm.com,
	labbott@redhat.com, linux-kernel@vger.kernel.org, luto@amacapital.net,
	mark.rutland@arm.com, matt@codeblueprint.co.uk, will.deacon@arm.com,
	kernel-hardening@lists.openwall.com, keescook@chromium.org
Subject: [PATCH 12/14] arm64: add basic VMAP_STACK support
Date: Mon, 7 Aug 2017 19:36:03 +0100
Message-Id: <1502130965-18710-13-git-send-email-mark.rutland@arm.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1502130965-18710-1-git-send-email-mark.rutland@arm.com>
References: <1502130965-18710-1-git-send-email-mark.rutland@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

This patch enables arm64 to be built with vmap'd task and IRQ stacks.

As vmap'd stacks are mapped at page granularity, stacks must be a
multiple of PAGE_SIZE. This means that a 64K page kernel must use stacks
of at least 64K in size.

To minimize the increase in Image size, IRQ stacks are dynamically
allocated at boot time, rather than embedding the boot CPU's IRQ stack
in the kernel image.

This patch was co-authored by Ard Biesheuvel and Mark Rutland.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/Kconfig              |  1 +
 arch/arm64/include/asm/efi.h    |  7 ++++++-
 arch/arm64/include/asm/memory.h | 23 ++++++++++++++++++++++-
 arch/arm64/kernel/irq.c         | 30 ++++++++++++++++++++++++++++--
 arch/arm64/kernel/vmlinux.lds.S |  2 +-
 5 files changed, 58 insertions(+), 5 deletions(-)

-- 
1.9.1

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index dfd9086..d66f9db 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -75,6 +75,7 @@ config ARM64
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
+	select HAVE_ARCH_VMAP_STACK
 	select HAVE_ARM_SMCCC
 	select HAVE_EBPF_JIT
 	select HAVE_C_RECORDMCOUNT
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 0e8cc3b..2b1e5de 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -49,7 +49,12 @@
  */
 #define EFI_FDT_ALIGN	SZ_2M   /* used by allocate_new_fdt_and_exit_boot() */
 
-#define EFI_KIMG_ALIGN	SEGMENT_ALIGN
+/*
+ * In some configurations (e.g. VMAP_STACK && 64K pages), stacks built into the
+ * kernel need greater alignment than we require the segments to be padded to.
+ */
+#define EFI_KIMG_ALIGN \
+	(SEGMENT_ALIGN > THREAD_ALIGN ? SEGMENT_ALIGN : THREAD_ALIGN)
 
 /* on arm64, the FDT may be located anywhere in system RAM */
 static inline unsigned long efi_get_max_fdt_addr(unsigned long dram_base)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 7fa6ad4..c5cd2c5 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -102,7 +102,17 @@
 #define KASAN_SHADOW_SIZE	(0)
 #endif
 
-#define THREAD_SHIFT		14
+#define MIN_THREAD_SHIFT	14
+
+/*
+ * VMAP'd stacks are allocated at page granularity, so we must ensure that such
+ * stacks are a multiple of page size.
+ */
+#if defined(CONFIG_VMAP_STACK) && (MIN_THREAD_SHIFT < PAGE_SHIFT)
+#define THREAD_SHIFT		PAGE_SHIFT
+#else
+#define THREAD_SHIFT		MIN_THREAD_SHIFT
+#endif
 
 #if THREAD_SHIFT >= PAGE_SHIFT
 #define THREAD_SIZE_ORDER	(THREAD_SHIFT - PAGE_SHIFT)
@@ -110,6 +120,17 @@
 
 #define THREAD_SIZE		(UL(1) << THREAD_SHIFT)
 
+/*
+ * By aligning VMAP'd stacks to 2 * THREAD_SIZE, we can detect overflow by
+ * checking sp & (1 << THREAD_SHIFT), which we can do cheaply in the entry
+ * assembly.
+ */
+#ifdef CONFIG_VMAP_STACK
+#define THREAD_ALIGN		(2 * THREAD_SIZE)
+#else
+#define THREAD_ALIGN		THREAD_SIZE
+#endif
+
 #define IRQ_STACK_SIZE		THREAD_SIZE
 
 /*
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index 5141282..713561e 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -23,15 +23,15 @@
 
 #include <linux/kernel_stat.h>
 #include <linux/irq.h>
+#include <linux/memory.h>
 #include <linux/smp.h>
 #include <linux/init.h>
 #include <linux/irqchip.h>
 #include <linux/seq_file.h>
+#include <linux/vmalloc.h>
 
 unsigned long irq_err_count;
 
-/* irq stack only needs to be 16 byte aligned - not IRQ_STACK_SIZE aligned. */
-DEFINE_PER_CPU(unsigned long [IRQ_STACK_SIZE/sizeof(long)], irq_stack) __aligned(16);
 DEFINE_PER_CPU(unsigned long *, irq_stack_ptr);
 
 int arch_show_interrupts(struct seq_file *p, int prec)
@@ -51,6 +51,31 @@ void __init set_handle_irq(void (*handle_irq)(struct pt_regs *))
 	handle_arch_irq = handle_irq;
 }
 
+#ifdef CONFIG_VMAP_STACK
+static void init_irq_stacks(void)
+{
+	int cpu;
+	unsigned long *p;
+
+	for_each_possible_cpu(cpu) {
+		/*
+		 * To ensure that VMAP'd stack overflow detection works
+		 * correctly, the IRQ stacks need to have the same
+		 * alignment as other stacks.
+		 */
+		p = __vmalloc_node_range(IRQ_STACK_SIZE, THREAD_ALIGN,
+					 VMALLOC_START, VMALLOC_END,
+					 THREADINFO_GFP, PAGE_KERNEL,
+					 0, cpu_to_node(cpu),
+					 __builtin_return_address(0));
+
+		per_cpu(irq_stack_ptr, cpu) = p;
+	}
+}
+#else
+/* irq stack only needs to be 16 byte aligned - not IRQ_STACK_SIZE aligned. */
+DEFINE_PER_CPU_ALIGNED(unsigned long [IRQ_STACK_SIZE/sizeof(long)], irq_stack);
+
 static void init_irq_stacks(void)
 {
 	int cpu;
@@ -58,6 +83,7 @@ static void init_irq_stacks(void)
 	for_each_possible_cpu(cpu)
 		per_cpu(irq_stack_ptr, cpu) = per_cpu(irq_stack, cpu);
 }
+#endif
 
 void __init init_IRQ(void)
 {
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 7156538..fe56c26 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -176,7 +176,7 @@ SECTIONS
 
 	_data = .;
 	_sdata = .;
-	RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+	RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)
 
 	/*
 	 * Data written with the MMU off but read with the MMU on requires