From patchwork Mon Mar 23 15:36:55 2015
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 46206
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-arm-kernel@lists.infradead.org, mark.rutland@arm.com,
	will.deacon@arm.com, catalin.marinas@arm.com
Subject: [PATCH 3/4] arm64: move kernel text below PAGE_OFFSET
Date: Mon, 23 Mar 2015 16:36:55 +0100
Message-Id: <1427125016-3873-4-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1427125016-3873-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1427125016-3873-1-git-send-email-ard.biesheuvel@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
This moves the virtual mapping of the kernel Image down into the lower
half of the kernel virtual memory range, moving it out of the linear
mapping.

An exception is made for the statically allocated translation tables:
these are so entangled with the translation regime that they need to be
accessed via the linear mapping exclusively.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/memory.h |  9 +++++++--
 arch/arm64/kernel/head.S        | 39 +++++++++++++++++++++++++++++----------
 arch/arm64/kernel/vmlinux.lds.S |  4 ++--
 arch/arm64/mm/mmu.c             |  2 ++
 4 files changed, 40 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 7dfe1b0c9c01..2b2d2fccfee3 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -38,6 +38,8 @@
  */
 #define PCI_IO_SIZE		SZ_16M
 
+#define KIMAGE_OFFSET		SZ_64M
+
 /*
  * PAGE_OFFSET - the virtual address of the start of the kernel image (top
  *		 (VA_BITS - 1))
@@ -49,7 +51,8 @@
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) << (VA_BITS - 1))
-#define MODULES_END		(PAGE_OFFSET)
+#define KIMAGE_VADDR		(PAGE_OFFSET - KIMAGE_OFFSET)
+#define MODULES_END		KIMAGE_VADDR
 #define MODULES_VADDR		(MODULES_END - SZ_64M)
 #define PCI_IO_END		(MODULES_VADDR - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
@@ -117,6 +120,8 @@ extern phys_addr_t memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
 #define PHYS_OFFSET		({ memstart_addr; })
 
+extern u64 image_offset;
+
 /*
  * PFNs are used to describe any physical page; this means
  * PFN 0 == physical address 0.
@@ -151,7 +156,7 @@ static inline void *phys_to_virt(phys_addr_t x)
  */
 static inline phys_addr_t __text_to_phys(unsigned long x)
 {
-	return __virt_to_phys(__VIRT(x));
+	return __virt_to_phys(__VIRT(x)) + image_offset;
 }
 
 /*
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 504423422e20..16134608eecf 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -235,7 +235,10 @@ section_table:
 ENTRY(stext)
 	bl	preserve_boot_args
 	bl	el2_setup			// Drop to EL1, w20=cpu_boot_mode
+	adrp	x24, __PHYS_OFFSET
+	mov	x23, #KIMAGE_OFFSET
+
 	bl	set_cpu_boot_mode_flag
 	bl	__create_page_tables		// x25=TTBR0, x26=TTBR1
 	/*
@@ -279,13 +282,15 @@ ENDPROC(preserve_boot_args)
  * Corrupts:	tmp1, tmp2
  * Returns:	tbl -> next level table page address
  */
-	.macro	create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2
+	.macro	create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2, off=0
 	lsr	\tmp1, \virt, #\shift
+	.if	\off
+	add	\tmp1, \tmp1, #\off
+	.endif
 	and	\tmp1, \tmp1, #\ptrs - 1	// table index
-	add	\tmp2, \tbl, #PAGE_SIZE
+	add	\tmp2, \tbl, #(\off + 1) * PAGE_SIZE
 	orr	\tmp2, \tmp2, #PMD_TYPE_TABLE	// address of next table and entry type
 	str	\tmp2, [\tbl, \tmp1, lsl #3]
-	add	\tbl, \tbl, #PAGE_SIZE		// next level table page
 	.endm
 
 /*
@@ -298,8 +303,13 @@ ENDPROC(preserve_boot_args)
 	.macro	create_pgd_entry, tbl, virt, tmp1, tmp2
 	create_table_entry \tbl, \virt, PGDIR_SHIFT, PTRS_PER_PGD, \tmp1, \tmp2
 #if SWAPPER_PGTABLE_LEVELS == 3
+	add	\tbl, \tbl, #PAGE_SIZE		// next level table page
 	create_table_entry \tbl, \virt, TABLE_SHIFT, PTRS_PER_PTE, \tmp1, \tmp2
+	create_table_entry \tbl, \virt, TABLE_SHIFT, PTRS_PER_PTE, \tmp1, \tmp2, 1
+#else
+	create_table_entry \tbl, \virt, PGDIR_SHIFT, PTRS_PER_PGD, \tmp1, \tmp2, 1
 #endif
+	add	\tbl, \tbl, #PAGE_SIZE		// next level table page
 	.endm
 
 /*
@@ -312,15 +322,15 @@ ENDPROC(preserve_boot_args)
 	.macro	create_block_map, tbl, flags, phys, start, end
 	lsr	\phys, \phys, #BLOCK_SHIFT
 	lsr	\start, \start, #BLOCK_SHIFT
-	and	\start, \start, #PTRS_PER_PTE - 1	// table index
 	orr	\phys, \flags, \phys, lsl #BLOCK_SHIFT	// table entry
 	lsr	\end, \end, #BLOCK_SHIFT
-	and	\end, \end, #PTRS_PER_PTE - 1	// table end index
+	sub	\end, \end, \start
+	and	\start, \start, #PTRS_PER_PTE - 1	// table index
 9999:	str	\phys, [\tbl, \start, lsl #3]		// store the entry
 	add	\start, \start, #1			// next entry
 	add	\phys, \phys, #BLOCK_SIZE		// next block
-	cmp	\start, \end
-	b.ls	9999b
+	subs	\end, \end, #1
+	b.pl	9999b
 	.endm
 
 /*
@@ -371,10 +381,18 @@ __create_page_tables:
 	 * Map the kernel image (starting with PHYS_OFFSET).
 	 */
 	mov	x0, x26				// swapper_pg_dir
-	mov	x5, #PAGE_OFFSET
+	ldr	x5, =KERNEL_START		// VA of __PHYS_OFFSET
 	create_pgd_entry x0, x5, x3, x6
-	ldr	x6, =KERNEL_END			// __va(KERNEL_END)
-	mov	x3, x24				// phys offset
+	ldr	x6, =__pgdir_start		// VA of KERNEL_END
+	adrp	x3, KERNEL_START		// phys offset
+	create_block_map x0, x7, x3, x5, x6
+
+	ldr	x5, =__pgdir_start
+	add	x5, x5, x23
+	adrp	x3, idmap_pg_dir
+	add	x0, x0, #PAGE_SIZE
+	ldr	x6, =__pgdir_stop
+	add	x6, x6, x23
 	create_block_map x0, x7, x3, x5, x6
 
 	/*
@@ -406,6 +424,7 @@ __mmap_switched:
 2:	adr_l	sp, initial_sp, x4
 	str_l	x21, __fdt_pointer, x5		// Save FDT pointer
+	str_l	x23, image_offset, x5
 	str_l	x24, memstart_addr, x6		// Save PHYS_OFFSET
 	mov	x29, #0
 	b	start_kernel
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index d3885043f0b7..9f514a0a38ad 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -74,7 +74,7 @@ SECTIONS
 		*(.discard.*)
 	}
 
-	. = __TEXT(PAGE_OFFSET) + TEXT_OFFSET;
+	. = __TEXT(KIMAGE_VADDR) + TEXT_OFFSET;
 
 	.head.text : {
 		_text = .;
@@ -176,4 +176,4 @@ ASSERT(((__hyp_idmap_text_start + PAGE_SIZE) > __hyp_idmap_text_end),
 /*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
-ASSERT(_text == (__TEXT(PAGE_OFFSET) + TEXT_OFFSET), "HEAD is misaligned")
+ASSERT(_text == (__TEXT(KIMAGE_VADDR) + TEXT_OFFSET), "HEAD is misaligned")
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index bb3ce41130f3..14ba1dd80932 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -47,6 +47,8 @@
 struct page *empty_zero_page;
 EXPORT_SYMBOL(empty_zero_page);
 
+u64 image_offset;
+
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 			      unsigned long size, pgprot_t vma_prot)
 {