From patchwork Mon Jul 21 14:47:16 2014
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 33971
From: Daniel Thompson
To: Russell King, Thomas Gleixner, Jason Cooper
Cc: Marek Vasut, Harro Haan, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, patches@linaro.org, linaro-kernel@lists.linaro.org, John Stultz, Daniel Thompson
Subject: [PATCH RFC 5/9] ARM: Add L1 PTE non-secure mapping
Date: Mon, 21 Jul 2014 15:47:16 +0100
Message-Id: <1405954040-30399-6-git-send-email-daniel.thompson@linaro.org>
X-Mailer: git-send-email 1.9.3
In-Reply-To: <1405954040-30399-1-git-send-email-daniel.thompson@linaro.org>
References: <1405954040-30399-1-git-send-email-daniel.thompson@linaro.org>

From: Marek Vasut

Add a new device type, MT_DEVICE_NS. This type sets the NS bit in the
L1 PTE [1]. Accesses to a memory region mapped this way generate
non-secure accesses to that memory area.

Care must be taken here, since the NS bit is available only in the L1
PTE. When creating the mapping, the region must therefore be at least
1 MiB in size and aligned to 1 MiB. If that condition is not met, the
kernel will instead use a regular L2 page mapping for the area and
setting the NS bit will have no effect.
[1] See DDI0406B, Section B3 "Virtual Memory System Architecture
    (VMSA)", Subsection B3.3.1 "Translation table entry formats",
    paragraph "First-level descriptors", Table B3-1 and the associated
    description of the NS bit in the "Section" table entry.

Signed-off-by: Marek Vasut
Signed-off-by: Daniel Thompson
---
 arch/arm/include/asm/io.h                   |  5 ++++-
 arch/arm/include/asm/mach/map.h             |  4 ++--
 arch/arm/include/asm/pgtable-2level-hwdef.h |  1 +
 arch/arm/mm/mmu.c                           | 13 ++++++++++++-
 4 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h
index 3d23418..22765e0 100644
--- a/arch/arm/include/asm/io.h
+++ b/arch/arm/include/asm/io.h
@@ -125,8 +125,10 @@ static inline u32 __raw_readl(const volatile void __iomem *addr)
 #define MT_DEVICE_NONSHARED	1
 #define MT_DEVICE_CACHED	2
 #define MT_DEVICE_WC		3
+#define MT_DEVICE_NS		4
+
 /*
- * types 4 onwards can be found in asm/mach/map.h and are undefined
+ * types 5 onwards can be found in asm/mach/map.h and are undefined
  * for ioremap
  */
@@ -343,6 +345,7 @@ extern void _memset_io(volatile void __iomem *, int, size_t);
 #define ioremap_nocache(cookie,size)	__arm_ioremap((cookie), (size), MT_DEVICE)
 #define ioremap_cache(cookie,size)	__arm_ioremap((cookie), (size), MT_DEVICE_CACHED)
 #define ioremap_wc(cookie,size)		__arm_ioremap((cookie), (size), MT_DEVICE_WC)
+#define ioremap_ns(cookie,size)		__arm_ioremap((cookie), (size), MT_DEVICE_NS)
 #define iounmap				__arm_iounmap
 
 /*
diff --git a/arch/arm/include/asm/mach/map.h b/arch/arm/include/asm/mach/map.h
index f98c7f3..42be265 100644
--- a/arch/arm/include/asm/mach/map.h
+++ b/arch/arm/include/asm/mach/map.h
@@ -21,9 +21,9 @@ struct map_desc {
 	unsigned int type;
 };
 
-/* types 0-3 are defined in asm/io.h */
+/* types 0-4 are defined in asm/io.h */
 enum {
-	MT_UNCACHED = 4,
+	MT_UNCACHED = 5,
 	MT_CACHECLEAN,
 	MT_MINICLEAN,
 	MT_LOW_VECTORS,
diff --git a/arch/arm/include/asm/pgtable-2level-hwdef.h b/arch/arm/include/asm/pgtable-2level-hwdef.h
index 5cfba15..d24e7ea 100644
--- a/arch/arm/include/asm/pgtable-2level-hwdef.h
+++ b/arch/arm/include/asm/pgtable-2level-hwdef.h
@@ -36,6 +36,7 @@
 #define PMD_SECT_S		(_AT(pmdval_t, 1) << 16)	/* v6 */
 #define PMD_SECT_nG		(_AT(pmdval_t, 1) << 17)	/* v6 */
 #define PMD_SECT_SUPER		(_AT(pmdval_t, 1) << 18)	/* v6 */
+#define PMD_SECT_NS		(_AT(pmdval_t, 1) << 19)	/* v6 */
 #define PMD_SECT_AF		(_AT(pmdval_t, 0))
 
 #define PMD_SECT_UNCACHED	(_AT(pmdval_t, 0))
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index ab14b79..9baf1cb 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -268,6 +268,13 @@ static struct mem_type mem_types[] = {
 		.prot_sect	= PROT_SECT_DEVICE,
 		.domain		= DOMAIN_IO,
 	},
+	[MT_DEVICE_NS] = {	  /* Non-secure accesses from secure mode */
+		.prot_pte	= PROT_PTE_DEVICE | L_PTE_MT_DEV_SHARED |
+				  L_PTE_SHARED,
+		.prot_l1	= PMD_TYPE_TABLE,
+		.prot_sect	= PROT_SECT_DEVICE | PMD_SECT_S | PMD_SECT_NS,
+		.domain		= DOMAIN_IO,
+	},
 	[MT_UNCACHED] = {
 		.prot_pte	= PROT_PTE_DEVICE,
 		.prot_l1	= PMD_TYPE_TABLE,
@@ -474,6 +481,7 @@ static void __init build_mem_type_table(void)
 		mem_types[MT_DEVICE_NONSHARED].prot_sect |= PMD_SECT_XN;
 		mem_types[MT_DEVICE_CACHED].prot_sect |= PMD_SECT_XN;
 		mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_XN;
+		mem_types[MT_DEVICE_NS].prot_sect |= PMD_SECT_XN;
 
 		/* Also setup NX memory mapping */
 		mem_types[MT_MEMORY_RW].prot_sect |= PMD_SECT_XN;
@@ -489,6 +497,7 @@ static void __init build_mem_type_table(void)
 			mem_types[MT_DEVICE].prot_sect |= PMD_SECT_TEX(1);
 			mem_types[MT_DEVICE_NONSHARED].prot_sect |= PMD_SECT_TEX(1);
 			mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_BUFFERABLE;
+			mem_types[MT_DEVICE_NS].prot_sect |= PMD_SECT_TEX(1);
 		} else if (cpu_is_xsc3()) {
 			/*
 			 * For Xscale3,
@@ -500,6 +509,7 @@ static void __init build_mem_type_table(void)
 			mem_types[MT_DEVICE].prot_sect |= PMD_SECT_TEX(1) | PMD_SECT_BUFFERED;
 			mem_types[MT_DEVICE_NONSHARED].prot_sect |= PMD_SECT_TEX(2);
 			mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_TEX(1);
+			mem_types[MT_DEVICE_NS].prot_sect |= PMD_SECT_TEX(1) | PMD_SECT_BUFFERED;
 		} else {
 			/*
 			 * For ARMv6 and ARMv7 without TEX remapping,
@@ -511,6 +521,7 @@ static void __init build_mem_type_table(void)
 			mem_types[MT_DEVICE].prot_sect |= PMD_SECT_BUFFERED;
 			mem_types[MT_DEVICE_NONSHARED].prot_sect |= PMD_SECT_TEX(2);
 			mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_TEX(1);
+			mem_types[MT_DEVICE_NS].prot_sect |= PMD_SECT_BUFFERED;
 		}
 	} else {
 		/*
@@ -856,7 +867,7 @@ static void __init create_mapping(struct map_desc *md)
 		return;
 	}
 
-	if ((md->type == MT_DEVICE || md->type == MT_ROM) &&
+	if ((md->type == MT_DEVICE || md->type == MT_DEVICE_NS || md->type == MT_ROM) &&
 	    md->virtual >= PAGE_OFFSET &&
 	    (md->virtual < VMALLOC_START || md->virtual >= VMALLOC_END)) {
 		printk(KERN_WARNING "BUG: mapping for 0x%08llx"