From patchwork Fri Nov 17 18:21:43 2017
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 119193
From: Will Deacon
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com,
    ard.biesheuvel@linaro.org, sboyd@codeaurora.org, dave.hansen@linux.intel.com,
    keescook@chromium.org, Will Deacon
Subject: [PATCH 00/18] arm64: Unmap the kernel whilst running in userspace (KAISER)
Date: Fri, 17 Nov 2017 18:21:43 +0000
Message-Id: <1510942921-12564-1-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
X-Mailing-List: linux-kernel@vger.kernel.org

Hi all,

This patch series implements
something along the lines of KAISER for arm64:

  https://gruss.cc/files/kaiser.pdf

although I wrote this from scratch because the paper has some funny
assumptions about how the architecture works. There is a patch series
in review for x86, which follows a similar approach:

  http://lkml.kernel.org/r/<20171110193058.BECA7D88@viggo.jf.intel.com>

and the topic was recently covered by LWN (currently subscriber-only):

  https://lwn.net/Articles/738975/

The basic idea is that transitions to and from userspace are proxied
through a trampoline page which is mapped into a separate page table and
can switch the full kernel mapping in and out on exception entry and
exit respectively. This is a valuable defence against various KASLR and
timing attacks, particularly as the trampoline page is at a fixed virtual
address and therefore the kernel text can be randomized independently.

The major consequences of the trampoline are:

  * We can no longer make use of global mappings for kernel space, so
    each task is assigned two ASIDs: one for user mappings and one for
    kernel mappings.

  * Our ASID moves into TTBR1 so that we can quickly switch between the
    trampoline and kernel page tables.

  * Switching TTBR0 always requires use of the zero page, so we can
    dispense with some of our errata workaround code.

  * entry.S gets more complicated to read.

The performance hit from this series isn't as bad as I feared: things
like cyclictest and kernbench seem to be largely unaffected, although
syscall micro-benchmarks appear to show that syscall overhead is roughly
doubled, and this has an impact on things like hackbench, which exhibits
a ~10% hit due to its heavy context-switching.
Patches based on 4.14 and also pushed here:

  git://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git kaiser

Feedback welcome,

Will

--->8

Will Deacon (18):
  arm64: mm: Use non-global mappings for kernel space
  arm64: mm: Temporarily disable ARM64_SW_TTBR0_PAN
  arm64: mm: Move ASID from TTBR0 to TTBR1
  arm64: mm: Remove pre_ttbr0_update_workaround for Falkor erratum #E1003
  arm64: mm: Rename post_ttbr0_update_workaround
  arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN
  arm64: mm: Allocate ASIDs in pairs
  arm64: mm: Add arm64_kernel_mapped_at_el0 helper using static key
  arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI
  arm64: entry: Add exception trampoline page for exceptions from EL0
  arm64: mm: Map entry trampoline into trampoline and kernel page tables
  arm64: entry: Explicitly pass exception level to kernel_ventry macro
  arm64: entry: Hook up entry trampoline to exception vectors
  arm64: erratum: Work around Falkor erratum #E1003 in trampoline code
  arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks
  arm64: entry: Add fake CPU feature for mapping the kernel at EL0
  arm64: makefile: Ensure TEXT_OFFSET doesn't overlap with trampoline
  arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0

 arch/arm64/Kconfig                      |  30 +++--
 arch/arm64/Makefile                     |  18 ++-
 arch/arm64/include/asm/asm-uaccess.h    |  25 ++--
 arch/arm64/include/asm/assembler.h      |  27 +----
 arch/arm64/include/asm/cpucaps.h        |   3 +-
 arch/arm64/include/asm/kernel-pgtable.h |  12 +-
 arch/arm64/include/asm/memory.h         |   1 +
 arch/arm64/include/asm/mmu.h            |  12 ++
 arch/arm64/include/asm/mmu_context.h    |   9 +-
 arch/arm64/include/asm/pgtable-hwdef.h  |   1 +
 arch/arm64/include/asm/pgtable-prot.h   |  21 +++-
 arch/arm64/include/asm/pgtable.h        |   1 +
 arch/arm64/include/asm/proc-fns.h       |   6 -
 arch/arm64/include/asm/tlbflush.h       |  16 ++-
 arch/arm64/include/asm/uaccess.h        |  21 +++-
 arch/arm64/kernel/cpufeature.c          |  11 ++
 arch/arm64/kernel/entry.S               | 195 ++++++++++++++++++++++++++------
 arch/arm64/kernel/process.c             |  12 +-
 arch/arm64/kernel/vmlinux.lds.S         |  17 +++
 arch/arm64/lib/clear_user.S             |   2 +-
 arch/arm64/lib/copy_from_user.S         |   2 +-
 arch/arm64/lib/copy_in_user.S           |   2 +-
 arch/arm64/lib/copy_to_user.S           |   2 +-
 arch/arm64/mm/cache.S                   |   2 +-
 arch/arm64/mm/context.c                 |  36 +++---
 arch/arm64/mm/mmu.c                     |  60 ++++++++++
 arch/arm64/mm/proc.S                    |  12 +-
 arch/arm64/xen/hypercall.S              |   2 +-
 28 files changed, 418 insertions(+), 140 deletions(-)

-- 
2.1.4