From patchwork Wed Sep 23 14:24:10 2015
X-Patchwork-Submitter: Catalin Marinas
X-Patchwork-Id: 54055
From: Catalin Marinas <catalin.marinas@arm.com>
To: Russell King - ARM Linux
Cc: linux-arm-kernel@lists.infradead.org
Subject: [PATCH 4/4] arm: Implement privileged no-access using TTBR0 page table walks disabling
Date: Wed, 23 Sep 2015 15:24:10 +0100
Message-Id: <1443018250-22893-5-git-send-email-catalin.marinas@arm.com>
In-Reply-To: <1443018250-22893-1-git-send-email-catalin.marinas@arm.com>
References: <1443018250-22893-1-git-send-email-catalin.marinas@arm.com>

With LPAE enabled, privileged no-access cannot be enforced using CPU
domains because this feature is not available. This patch implements PAN
by disabling TTBR0 page table walks while in kernel mode.

The ARM architecture allows page table walks to be split between TTBR0
and TTBR1. With LPAE enabled, the split is defined by a combination of
the TTBCR T0SZ and T1SZ bits.
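To make the T0SZ arithmetic concrete: TTBR0 covers the bottom
2^(32 - T0SZ) bytes of the input address space and, with T1SZ = 0,
addresses above that range are walked via TTBR1, so raising T0SZ shrinks
the range translated through TTBR0. A standalone sketch of the
arithmetic (illustrative plain C, not kernel code; ttbr0_range_bytes is
a made-up helper for this example):

#include <stdio.h>

/* Input address range covered by TTBR0 for a given TTBCR.T0SZ value. */
static unsigned long long ttbr0_range_bytes(unsigned int t0sz)
{
	return 1ULL << (32 - t0sz);
}

int main(void)
{
	/* T0SZ = 0: TTBR0 covers the full 4GB input address range. */
	printf("T0SZ=0: %llu MB\n", ttbr0_range_bytes(0) >> 20);
	/* T0SZ = 7 (the maximum): 2^(32-7) = 32MB, as used by this patch. */
	printf("T0SZ=7: %llu MB\n", ttbr0_range_bytes(7) >> 20);
	return 0;
}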
Currently, an LPAE-enabled kernel uses TTBR0 for user addresses and
TTBR1 for kernel addresses with the VMSPLIT_2G and VMSPLIT_3G
configurations. The main advantage of the 3:1 split is that TTBR1 walks
are reduced to 2 levels, so TLB refills are potentially faster (though
the first-level entries are usually already cached in the TLB).

The PAN support on LPAE-enabled kernels uses TTBR0 when running in user
space or in kernel space during the user access routines (TTBCR T0SZ and
T1SZ are both 0). When user accesses are disabled in kernel mode, TTBR0
page table walks are disabled by setting TTBCR.EPD0. TTBR1 is used for
kernel accesses (including loadable modules; anything covered by
swapper_pg_dir) by raising TTBCR.T0SZ to its maximum (7), which shrinks
the TTBR0-covered range to the minimum of 2^(32-7) = 32MB. To avoid user
accesses potentially hitting stale TLB entries, the ASID is switched to
0 (reserved) by setting TTBCR.A1 and using the ASID value in TTBR1. The
difference from a non-PAN kernel is that, with the 3:1 memory split,
TTBR1 always uses 3 levels of page tables.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/Kconfig                            | 22 ++++++++++++---
 arch/arm/include/asm/assembler.h            | 42 +++++++++++++++++++++++++++++
 arch/arm/include/asm/pgtable-3level-hwdef.h |  9 +++++++
 arch/arm/include/asm/uaccess.h              | 34 ++++++++++++++++++++---
 arch/arm/lib/csumpartialcopyuser.S          | 14 ++++++++++
 arch/arm/mm/fault.c                         | 10 +++++++
 6 files changed, 124 insertions(+), 7 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 72ad724c67ae..bcfe80c1036a 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1704,9 +1704,9 @@ config HIGHPTE
 	  consumed by page tables.  Setting this option will allow
 	  user-space 2nd level page tables to reside in high memory.
 
-config CPU_SW_DOMAIN_PAN
-	bool "Enable use of CPU domains to implement privileged no-access"
-	depends on MMU && !ARM_LPAE
+config ARM_PAN
+	bool "Enable privileged no-access"
+	depends on MMU
 	default y
 	help
 	  Increase kernel security by ensuring that normal kernel accesses
@@ -1715,10 +1715,26 @@
 	  by ensuring that magic values (such as LIST_POISON) will always
 	  fault when dereferenced.
 
+	  The implementation uses CPU domains when !CONFIG_ARM_LPAE and
+	  disabling of TTBR0 page table walks with CONFIG_ARM_LPAE.
+
+config CPU_SW_DOMAIN_PAN
+	def_bool y
+	depends on ARM_PAN && !ARM_LPAE
+	help
+	  Enable use of CPU domains to implement privileged no-access.
+
 	  CPUs with low-vector mappings use a best-efforts implementation.
 	  Their lower 1MB needs to remain accessible for the vectors, but
 	  the remainder of userspace will become appropriately inaccessible.
 
+config CPU_TTBR0_PAN
+	def_bool y
+	depends on ARM_PAN && ARM_LPAE
+	help
+	  Enable privileged no-access by disabling TTBR0 page table walks when
+	  running in kernel mode.
+
 config HW_PERF_EVENTS
 	def_bool y
 	depends on ARM_PMU
diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 26b4c697c857..8dccd8916172 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #define IOMEM(x)	(x)
@@ -485,6 +486,47 @@ THUMB(	orr	\reg , \reg , #PSR_T_BIT	)
 	mcr	p15, 0, r0, c3, c0, 0
 	.endm
 
+#elif defined(CONFIG_CPU_TTBR0_PAN)
+
+	.macro	uaccess_disable, tmp, isb=1
+	/*
+	 * Disable TTBR0 page table walks (EPD0 = 1), use the reserved ASID
+	 * from TTBR1 (A1 = 1) and enable TTBR1 page table walks for kernel
+	 * addresses by reducing the TTBR0 range to 32MB (T0SZ = 7).
+	 */
+	mrc	p15, 0, \tmp, c2, c0, 2
+	orr	\tmp, \tmp, #TTBCR_EPD0 | TTBCR_T0SZ_MASK
+	orr	\tmp, \tmp, #TTBCR_A1
+	mcr	p15, 0, \tmp, c2, c0, 2
+	.if	\isb
+	instr_sync
+	.endif
+	.endm
+
+	.macro	uaccess_enable, tmp, isb=1
+	/*
+	 * Enable TTBR0 page table walks (T0SZ = 0, EPD0 = 0) and ASID from
+	 * TTBR0 (A1 = 0).
+	 */
+	mrc	p15, 0, \tmp, c2, c0, 2
+	bic	\tmp, \tmp, #TTBCR_EPD0 | TTBCR_T0SZ_MASK
+	bic	\tmp, \tmp, #TTBCR_A1
+	mcr	p15, 0, \tmp, c2, c0, 2
+	.if	\isb
+	instr_sync
+	.endif
+	.endm
+
+	.macro	uaccess_save, tmp
+	mrc	p15, 0, \tmp, c2, c0, 2
+	str	\tmp, [sp, #S_FRAME_SIZE]
+	.endm
+
+	.macro	uaccess_restore
+	ldr	r0, [sp, #S_FRAME_SIZE]
+	mcr	p15, 0, r0, c2, c0, 2
+	.endm
+
 #else
 
 	.macro	uaccess_disable, tmp, isb=1
diff --git a/arch/arm/include/asm/pgtable-3level-hwdef.h b/arch/arm/include/asm/pgtable-3level-hwdef.h
index 3ed7965106e3..92fee5f79e0f 100644
--- a/arch/arm/include/asm/pgtable-3level-hwdef.h
+++ b/arch/arm/include/asm/pgtable-3level-hwdef.h
@@ -85,6 +85,7 @@
 #define PHYS_MASK_SHIFT		(40)
 #define PHYS_MASK		((1ULL << PHYS_MASK_SHIFT) - 1)
 
+#ifndef CONFIG_CPU_TTBR0_PAN
 /*
  * TTBR0/TTBR1 split (PAGE_OFFSET):
  *   0x40000000: T0SZ = 2, T1SZ = 0 (not used)
@@ -104,6 +105,14 @@
 #endif
 #define TTBR1_SIZE	(((PAGE_OFFSET >> 30) - 1) << 16)
+#else
+/*
+ * With CONFIG_CPU_TTBR0_PAN enabled, TTBR1 is only used during uaccess
+ * disabled regions when TTBR0 is disabled.
+ */
+#define TTBR1_OFFSET	0	/* pointing to swapper_pg_dir */
+#define TTBR1_SIZE	0	/* TTBR1 size controlled via TTBCR.T0SZ */
+#endif
 
 /*
  * TTBCR register bits.
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 711c9877787b..bbc4e97c1951 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -16,6 +16,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 
@@ -74,21 +76,45 @@ static inline void uaccess_restore(unsigned int flags)
 	set_domain(flags);
 }
-
-#else
+#elif defined(CONFIG_CPU_TTBR0_PAN)
 
 static inline unsigned int uaccess_save_and_enable(void)
 {
-	return 0;
+	unsigned int old_ttbcr = cpu_get_ttbcr();
+
+	/*
+	 * Enable TTBR0 page table walks (T0SZ = 0, EPD0 = 0) and ASID from
+	 * TTBR0 (A1 = 0).
+	 */
+	cpu_set_ttbcr(old_ttbcr & ~(TTBCR_A1 | TTBCR_EPD0 | TTBCR_T0SZ_MASK));
+	isb();
+
+	return old_ttbcr;
 }
 
 static inline void uaccess_restore(unsigned int flags)
 {
+	cpu_set_ttbcr(flags);
+	isb();
 }
 
 static inline bool uaccess_disabled(struct pt_regs *regs)
 {
-	return false;
+	/* uaccess state saved above pt_regs on SVC exception entry */
+	unsigned int ttbcr = *(unsigned int *)(regs + 1);
+
+	return ttbcr & TTBCR_EPD0;
+}
+
+#else
+
+static inline unsigned int uaccess_save_and_enable(void)
+{
+	return 0;
+}
+
+static inline void uaccess_restore(unsigned int flags)
+{
 }
 
 #endif
 
diff --git a/arch/arm/lib/csumpartialcopyuser.S b/arch/arm/lib/csumpartialcopyuser.S
index d50fe3c07615..4ef2515f051a 100644
--- a/arch/arm/lib/csumpartialcopyuser.S
+++ b/arch/arm/lib/csumpartialcopyuser.S
@@ -31,6 +31,20 @@
 		ret	lr
 		.endm
 
+#elif defined(CONFIG_CPU_TTBR0_PAN)
+
+		.macro	save_regs
+		mrc	p15, 0, ip, c2, c0, 2
+		stmfd	sp!, {r1, r2, r4 - r8, ip, lr}
+		uaccess_enable ip
+		.endm
+
+		.macro	load_regs
+		ldmfd	sp!, {r1, r2, r4 - r8, ip, lr}
+		mcr	p15, 0, ip, c2, c0, 2
+		ret	lr
+		.endm
+
 #else
 
 		.macro	save_regs
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 0d629b8f973f..a16de0635de2 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -284,6 +284,16 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	if (fsr & FSR_WRITE)
 		flags |= FAULT_FLAG_WRITE;
 
+#ifdef CONFIG_CPU_TTBR0_PAN
+	/*
+	 * Privileged access aborts with CONFIG_CPU_TTBR0_PAN enabled are
+	 * routed via the translation fault mechanism. Check whether uaccess
+	 * is disabled while in kernel mode.
+	 */
+	if (!user_mode(regs) && uaccess_disabled(regs))
+		goto no_context;
+#endif
+
 	/*
 	 * As per x86, we may deadlock here.  However, since the kernel only
 	 * validly references user space from well defined areas of the code,
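For context on how the helpers above are meant to be used: each uaccess
routine brackets the actual user access with uaccess_save_and_enable()
and uaccess_restore(), so TTBR0 page table walks are only re-enabled for
the duration of the access. A rough sketch of that pattern, not part of
this patch (sketch_read_user_word is a hypothetical helper, and a real
accessor would use the exception-table-protected accessors rather than a
plain dereference):

/* Hypothetical illustration only; simplified from the uaccess pattern. */
static inline unsigned int sketch_read_user_word(const unsigned int __user *src)
{
	/* Clear TTBCR EPD0/T0SZ/A1: TTBR0 (user) walks are back on. */
	unsigned int ttbcr = uaccess_save_and_enable();
	/* Simplified raw load; real code uses the protected accessors. */
	unsigned int val = *(const volatile unsigned int __force *)src;
	/* Restore the saved TTBCR value: TTBR0 walks disabled again. */
	uaccess_restore(ttbcr);
	return val;
}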