From patchwork Fri Apr 10 13:53:51 2015
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 47049
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: mark.rutland@arm.com, catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel
Subject: [PATCH v3 07/11] arm64: fixmap: allow init before linear mapping is set up
Date: Fri, 10 Apr 2015 15:53:51 +0200
Message-Id: <1428674035-26603-8-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1428674035-26603-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1428674035-26603-1-git-send-email-ard.biesheuvel@linaro.org>
This reworks early_fixmap_init() so it populates the various levels of
translation tables while taking the following into account:

- be prepared for any of the levels to have been populated already, as
  this may be the case once we move the kernel text mapping out of the
  linear mapping;

- don't rely on __va() to translate the physical address in a page table
  entry to a virtual address, since this produces linear mapping
  addresses; instead, use the fact that at any level, we know exactly
  which page in swapper_pg_dir an entry could be pointing to if it points
  anywhere.

This allows us to defer the initialization of the linear mapping until
after we have figured out where our RAM resides in the physical address
space.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/compiler.h |   2 +
 arch/arm64/kernel/vmlinux.lds.S   |  14 +++--
 arch/arm64/mm/mmu.c               | 117 +++++++++++++++++++++++++-------------
 3 files changed, 90 insertions(+), 43 deletions(-)

diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
index ee35fd0f2236..dd342af63673 100644
--- a/arch/arm64/include/asm/compiler.h
+++ b/arch/arm64/include/asm/compiler.h
@@ -27,4 +27,6 @@
  */
 #define __asmeq(x, y)  ".ifnc " x "," y " ; .err ; .endif\n\t"
 
+#define __pgdir	__attribute__((section(".pgdir"),aligned(PAGE_SIZE)))
+
 #endif	/* __ASM_COMPILER_H */
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 98073332e2d0..604f285d3832 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -160,11 +160,15 @@ SECTIONS
 
 	BSS_SECTION(0, 0, 0)
 
-	. = ALIGN(PAGE_SIZE);
-	idmap_pg_dir = .;
-	. += IDMAP_DIR_SIZE;
-	swapper_pg_dir = .;
-	. += SWAPPER_DIR_SIZE;
+	.pgdir (NOLOAD) : {
+		. = ALIGN(PAGE_SIZE);
+		idmap_pg_dir = .;
+		. += IDMAP_DIR_SIZE;
+		swapper_pg_dir = .;
+		__swapper_bs_region = . + PAGE_SIZE;
+		. += SWAPPER_DIR_SIZE;
+		*(.pgdir)
+	}
 
 	_end = .;
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 60be58a160a2..c0427b5c90c7 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -341,6 +341,70 @@ static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
 }
 #endif
 
+struct mem_bootstrap_region {
+#if CONFIG_ARM64_PGTABLE_LEVELS > 3
+	pud_t	pud[PTRS_PER_PUD];
+#endif
+#if CONFIG_ARM64_PGTABLE_LEVELS > 2
+	pmd_t	pmd[PTRS_PER_PMD];
+#endif
+	pte_t	pte[PTRS_PER_PTE];
+};
+
+static void __init bootstrap_mem_region(unsigned long addr,
+					struct mem_bootstrap_region *reg,
+					pmd_t **ppmd, pte_t **ppte)
+{
+	/*
+	 * Avoid using the linear phys-to-virt translation __va() so that we
+	 * can use this code before the linear mapping is set up. Note that
+	 * any populated entries at any level can only point into swapper_pg_dir
+	 * since no other translation table pages have been allocated yet.
+	 * So at each level, we either need to populate it, or it has already
+	 * been populated by a swapper_pg_dir table at the same level, in which
+	 * case we can figure out its virtual address without applying __va()
+	 * on the contents of the entry, using the following struct.
+	 */
+	extern struct mem_bootstrap_region __swapper_bs_region;
+
+	pgd_t *pgd = pgd_offset_k(addr);
+	pud_t *pud = (pud_t *)pgd;
+	pmd_t *pmd = (pmd_t *)pud;
+
+#if CONFIG_ARM64_PGTABLE_LEVELS > 3
+	if (pgd_none(*pgd)) {
+		clear_page(reg->pud);
+		pgd_populate(&init_mm, pgd, reg->pud);
+		pud = reg->pud;
+	} else {
+		pud = __swapper_bs_region.pud;
+	}
+	pud += pud_index(addr);
+#endif
+
+#if CONFIG_ARM64_PGTABLE_LEVELS > 2
+	if (pud_none(*pud)) {
+		clear_page(reg->pmd);
+		pud_populate(&init_mm, pud, reg->pmd);
+		*ppmd = reg->pmd;
+	} else {
+		*ppmd = __swapper_bs_region.pmd;
+	}
+	pmd = *ppmd + pmd_index(addr);
+#endif
+
+	if (!ppte)
+		return;
+
+	if (pmd_none(*pmd)) {
+		clear_page(reg->pte);
+		pmd_populate_kernel(&init_mm, pmd, reg->pte);
+		*ppte = reg->pte;
+	} else {
+		*ppte = __swapper_bs_region.pte;
+	}
+}
+
 static void __init map_mem(void)
 {
 	struct memblock_region *reg;
@@ -554,58 +618,35 @@ void vmemmap_free(unsigned long start, unsigned long end)
 }
 #endif	/* CONFIG_SPARSEMEM_VMEMMAP */
 
-static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;
-#if CONFIG_ARM64_PGTABLE_LEVELS > 2
-static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss;
-#endif
-#if CONFIG_ARM64_PGTABLE_LEVELS > 3
-static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss;
-#endif
-
-static inline pud_t * fixmap_pud(unsigned long addr)
-{
-	pgd_t *pgd = pgd_offset_k(addr);
-
-	BUG_ON(pgd_none(*pgd) || pgd_bad(*pgd));
-
-	return pud_offset(pgd, addr);
-}
+static pmd_t *fixmap_pmd_dir __initdata = (pmd_t *)swapper_pg_dir;
+static pte_t *fixmap_pte_dir;
 
-static inline pmd_t * fixmap_pmd(unsigned long addr)
+static __always_inline pmd_t *fixmap_pmd(unsigned long addr)
 {
-	pud_t *pud = fixmap_pud(addr);
-
-	BUG_ON(pud_none(*pud) || pud_bad(*pud));
-
-	return pmd_offset(pud, addr);
+#if CONFIG_ARM64_PGTABLE_LEVELS > 2
+	return fixmap_pmd_dir + pmd_index(addr);
+#else
+	return fixmap_pmd_dir + pgd_index(addr);
+#endif
 }
 
-static inline pte_t * fixmap_pte(unsigned long addr)
+static inline pte_t *fixmap_pte(unsigned long addr)
 {
-	pmd_t *pmd = fixmap_pmd(addr);
-
-	BUG_ON(pmd_none(*pmd) || pmd_bad(*pmd));
-
-	return pte_offset_kernel(pmd, addr);
+	return fixmap_pte_dir + pte_index(addr);
 }
 
 void __init early_fixmap_init(void)
 {
-	pgd_t *pgd;
-	pud_t *pud;
+	static struct mem_bootstrap_region fixmap_bs_region __pgdir;
 	pmd_t *pmd;
-	unsigned long addr = FIXADDR_START;
 
-	pgd = pgd_offset_k(addr);
-	pgd_populate(&init_mm, pgd, bm_pud);
-	pud = pud_offset(pgd, addr);
-	pud_populate(&init_mm, pud, bm_pmd);
-	pmd = pmd_offset(pud, addr);
-	pmd_populate_kernel(&init_mm, pmd, bm_pte);
+	bootstrap_mem_region(FIXADDR_START, &fixmap_bs_region, &fixmap_pmd_dir,
+			     &fixmap_pte_dir);
+	pmd = fixmap_pmd(FIXADDR_START);
 
 	/*
 	 * The boot-ioremap range spans multiple pmds, for which
-	 * we are not preparted:
+	 * we are not prepared:
 	 */
 	BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
 		     != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));