From patchwork Tue Jun 25 17:46:41 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Leif Lindholm
X-Patchwork-Id: 18117
From: Leif Lindholm <leif.lindholm@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 patches@linaro.org, nico@linaro.org, Leif Lindholm <leif.lindholm@linaro.org>
Subject: [PATCH 2/2] arm: add early_ioremap support
Date: Tue, 25 Jun 2013 18:46:41 +0100
Message-Id: <1372182401-11029-3-git-send-email-leif.lindholm@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1372182401-11029-1-git-send-email-leif.lindholm@linaro.org>
References: <1372182401-11029-1-git-send-email-leif.lindholm@linaro.org>

This patch adds support for early_ioremap, based on the existing
mechanism in x86. Up to 7 regions of up to 128KB each can be
temporarily mapped in before paging_init(), regardless of later
highmem status.

Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
---
 arch/arm/Kconfig              |   7 ++
 arch/arm/include/asm/fixmap.h |  31 ++++-
 arch/arm/include/asm/io.h     |  13 ++
 arch/arm/kernel/setup.c       |   3 +
 arch/arm/mm/Makefile          |   1 +
 arch/arm/mm/early_ioremap.c   | 273 +++++++++++++++++++++++++++++++++++++++++
 arch/arm/mm/mmu.c             |   2 +
 7 files changed, 328 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm/mm/early_ioremap.c

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 49d993c..bf8e55d 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1756,6 +1756,13 @@ config UACCESS_WITH_MEMCPY
 	  However, if the CPU data cache is using a write-allocate mode,
 	  this option is unlikely to provide any performance gain.
 
+config EARLY_IOREMAP
+	depends on MMU
+	bool "Provide early_ioremap() support for kernel initialization"
+	help
+	  Provides a mechanism for kernel initialisation code to temporarily
+	  map memory pages, in a highmem-agnostic way, before paging_init().
+
 config SECCOMP
 	bool
 	prompt "Enable seccomp to safely compute untrusted bytecode"

diff --git a/arch/arm/include/asm/fixmap.h b/arch/arm/include/asm/fixmap.h
index bbae919..a2a5f50 100644
--- a/arch/arm/include/asm/fixmap.h
+++ b/arch/arm/include/asm/fixmap.h
@@ -1,6 +1,8 @@
 #ifndef _ASM_FIXMAP_H
 #define _ASM_FIXMAP_H
 
+#include <linux/compiler.h>
+
 /*
  * Nothing too fancy for now.
  *
@@ -20,13 +22,38 @@
 #define FIX_KMAP_BEGIN		0
 #define FIX_KMAP_END		(FIXADDR_SIZE >> PAGE_SHIFT)
 
+/*
+ * 224 temporary boot-time mappings, used by early_ioremap(),
+ * before ioremap() is functional.
+ *
+ * Re-using the FIXADDR region, which is used for highmem
+ * later on, and statically aligned to 1MB.
+ */
+#define NR_FIX_BTMAPS		32
+#define FIX_BTMAPS_SLOTS	7
+#define TOTAL_FIX_BTMAPS	(NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)
+#define FIX_BTMAP_BEGIN		FIX_KMAP_BEGIN
+#define FIX_BTMAP_END		(FIX_KMAP_END - 1)
+
+#define clear_fixmap(idx) \
+	__set_fixmap(idx, 0, __pgprot(0))
+
 #define __fix_to_virt(x)	(FIXADDR_START + ((x) << PAGE_SHIFT))
 #define __virt_to_fix(x)	(((x) - FIXADDR_START) >> PAGE_SHIFT)
 
 extern void __this_fixmap_does_not_exist(void);
 
-static inline unsigned long fix_to_virt(const unsigned int idx)
+static __always_inline unsigned long fix_to_virt(const unsigned int idx)
 {
+	/*
+	 * This branch gets completely eliminated after inlining, except
+	 * when someone tries to use fixaddr indices in an illegal way
+	 * (such as mixing up address types or using out-of-range indices).
+	 *
+	 * If it doesn't get removed, the linker will complain loudly with
+	 * a reasonably clear error message.
+	 */
 	if (idx >= FIX_KMAP_END)
 		__this_fixmap_does_not_exist();
 	return __fix_to_virt(idx);
@@ -38,4 +65,4 @@ static inline unsigned int virt_to_fix(const unsigned long vaddr)
 	return __virt_to_fix(vaddr);
 }
 
-#endif
+#endif /* _ASM_FIXMAP_H */

diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h
index 652b560..c8866e3 100644
--- a/arch/arm/include/asm/io.h
+++ b/arch/arm/include/asm/io.h
@@ -397,5 +397,18 @@ extern int devmem_is_allowed(unsigned long pfn);
 extern void register_isa_ports(unsigned int mmio, unsigned int io,
 			       unsigned int io_shift);
 
+/*
+ * early_ioremap() and early_iounmap() are for temporary early boot-time
+ * mappings, before the real ioremap() is functional.
+ * A boot-time mapping is currently limited to at most 32 pages (128KB).
+ *
+ * This is all squashed by paging_init().
+ */
+extern void early_ioremap_init(void);
+extern void early_ioremap_reset(void);
+extern void __iomem *early_ioremap(resource_size_t phys_addr,
+				   unsigned long size);
+extern void early_iounmap(void __iomem *addr, unsigned long size);
+
 #endif /* __KERNEL__ */
 #endif /* __ASM_ARM_IO_H */

diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 1522c7a..290c561 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -36,6 +36,7 @@
 #include
 #include
 #include
+#include <asm/io.h>
 #include
 #include
 #include
@@ -783,6 +784,8 @@ void __init setup_arch(char **cmdline_p)
 
 	parse_early_param();
 
+	early_ioremap_init();
+
 	sort(&meminfo.bank, meminfo.nr_banks, sizeof(meminfo.bank[0]), meminfo_cmp, NULL);
 	sanity_check_meminfo();
 	arm_memblock_init(&meminfo, mdesc);

diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 9e51be9..ae2c477 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -15,6 +15,7 @@ endif
 
 obj-$(CONFIG_MODULES)		+= proc-syms.o
 obj-$(CONFIG_ALIGNMENT_TRAP)	+= alignment.o
+obj-$(CONFIG_EARLY_IOREMAP)	+= early_ioremap.o
 obj-$(CONFIG_HIGHMEM)		+= highmem.o
 obj-$(CONFIG_CPU_ABRT_NOMMU)	+= abort-nommu.o

diff --git a/arch/arm/mm/early_ioremap.c b/arch/arm/mm/early_ioremap.c
new file mode 100644
index 0000000..b14f58b
--- /dev/null
+++ b/arch/arm/mm/early_ioremap.c
@@ -0,0 +1,273 @@
+/*
+ * early_ioremap() support for ARM
+ *
+ * Based on existing support in arch/x86/mm/ioremap.c
+ *
+ * Restrictions: currently only functional before paging_init()
+ */
+
+#include <linux/init.h>
+#include <linux/io.h>
+
+#include <asm/fixmap.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/tlbflush.h>
+
+#include <asm/mach/map.h>
+
+static int __initdata early_ioremap_debug;
+
+static int __init early_ioremap_debug_setup(char *str)
+{
+	early_ioremap_debug = 1;
+
+	return 0;
+}
+early_param("early_ioremap_debug", early_ioremap_debug_setup);
+
+static pte_t __initdata
+	bm_pte[PTRS_PER_PTE] __aligned(PTRS_PER_PTE * sizeof(pte_t));
+static __initdata int after_paging_init;
+
+static inline pmd_t * __init early_ioremap_pmd(unsigned long addr)
+{
+	unsigned int index = pgd_index(addr);
+	pgd_t *pgd = cpu_get_pgd() + index;
+	pud_t *pud = pud_offset(pgd, addr);
+	pmd_t *pmd = pmd_offset(pud, addr);
+
+	return pmd;
+}
+
+static inline pte_t * __init early_ioremap_pte(unsigned long addr)
+{
+	return &bm_pte[pte_index(addr)];
+}
+
+static unsigned long slot_virt[FIX_BTMAPS_SLOTS] __initdata;
+
+void __init early_ioremap_init(void)
+{
+	pmd_t *pmd;
+	int i;
+	u64 desc;
+
+	if (early_ioremap_debug)
+		pr_info("early_ioremap_init()\n");
+
+	for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
+		slot_virt[i] = __fix_to_virt(FIX_BTMAP_BEGIN + NR_FIX_BTMAPS*i);
+		if (early_ioremap_debug)
+			pr_info("  %lu byte slot @ 0x%08x\n",
+				NR_FIX_BTMAPS * PAGE_SIZE, (u32)slot_virt[i]);
+	}
+
+	pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
+	desc = *pmd;
+	memset(bm_pte, 0, sizeof(bm_pte));
+
+	pmd_populate_kernel(NULL, pmd, bm_pte);
+	desc = *pmd;
+
+	BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
+		     != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
+
+	if (pmd != early_ioremap_pmd(fix_to_virt(FIX_BTMAP_END))) {
+		WARN_ON(1);
+		pr_warn("pmd %p != %p\n",
+			pmd, early_ioremap_pmd(fix_to_virt(FIX_BTMAP_END)));
+		pr_warn("fix_to_virt(FIX_BTMAP_BEGIN): %08lx\n",
+			fix_to_virt(FIX_BTMAP_BEGIN));
+		pr_warn("fix_to_virt(FIX_BTMAP_END):   %08lx\n",
+			fix_to_virt(FIX_BTMAP_END));
+		pr_warn("FIX_BTMAP_END:   %lu\n", FIX_BTMAP_END);
+		pr_warn("FIX_BTMAP_BEGIN: %d\n", FIX_BTMAP_BEGIN);
+	}
+}
+
+void __init early_ioremap_reset(void)
+{
+	after_paging_init = 1;
+}
+
+static void __init __early_set_fixmap(unsigned long idx,
+				      phys_addr_t phys, pgprot_t flags)
+{
+	unsigned long addr = __fix_to_virt(idx);
+	pte_t *pte;
+	u64 desc;
+
+	if (idx >= FIX_KMAP_END) {
+		BUG();
+		return;
+	}
+	pte = early_ioremap_pte(addr);
+
+	if (pgprot_val(flags))
+		set_pte_at(NULL, 0xfff00000, pte,
+			   pfn_pte(phys >> PAGE_SHIFT, flags));
+	else
+		pte_clear(NULL, addr, pte);
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+	desc = *pte;
+}
+
+static inline void __init early_set_fixmap(unsigned long idx,
+					   phys_addr_t phys, pgprot_t prot)
+{
+	__early_set_fixmap(idx, phys, prot);
+}
+
+static inline void __init early_clear_fixmap(unsigned long idx)
+{
+	__early_set_fixmap(idx, 0, __pgprot(0));
+}
+
+static void __iomem *prev_map[FIX_BTMAPS_SLOTS] __initdata;
+static unsigned long prev_size[FIX_BTMAPS_SLOTS] __initdata;
+
+static void __init __iomem *
+__early_ioremap(resource_size_t phys_addr, unsigned long size, pgprot_t prot)
+{
+	unsigned long offset;
+	resource_size_t last_addr;
+	unsigned int nrpages;
+	unsigned long idx;
+	int i, slot;
+
+	slot = -1;
+	for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
+		if (!prev_map[i]) {
+			slot = i;
+			break;
+		}
+	}
+
+	if (slot < 0) {
+		pr_info("early_ioremap(%08llx, %08lx): no free slot\n",
+			(u64)phys_addr, size);
+		WARN_ON(1);
+		return NULL;
+	}
+
+	if (early_ioremap_debug) {
+		pr_info("early_ioremap(%08llx, %08lx) [%d] => ",
+			(u64)phys_addr, size, slot);
+	}
+
+	/* Don't allow wraparound or zero size */
+	last_addr = phys_addr + size - 1;
+	if (!size || last_addr < phys_addr) {
+		WARN_ON(1);
+		return NULL;
+	}
+
+	prev_size[slot] = size;
+	/*
+	 * Mappings have to be page-aligned
+	 */
+	offset = phys_addr & ~PAGE_MASK;
+	phys_addr &= PAGE_MASK;
+	size = PAGE_ALIGN(last_addr + 1) - phys_addr;
+
+	/*
+	 * Mappings have to fit in the FIX_BTMAP area.
+	 */
+	nrpages = size >> PAGE_SHIFT;
+	if (nrpages > NR_FIX_BTMAPS) {
+		WARN_ON(1);
+		return NULL;
+	}
+
+	/*
+	 * OK, go for it.
+	 */
+	idx = FIX_BTMAP_BEGIN + slot * NR_FIX_BTMAPS;
+	while (nrpages > 0) {
+		early_set_fixmap(idx, phys_addr, prot);
+		phys_addr += PAGE_SIZE;
+		idx++;
+		--nrpages;
+	}
+	if (early_ioremap_debug)
+		pr_cont("%08lx + %08lx\n", offset, slot_virt[slot]);
+
+	prev_map[slot] = (void __iomem *)(offset + slot_virt[slot]);
+	return prev_map[slot];
+}
+
+/* Remap an IO device */
+void __init __iomem *
+early_ioremap(resource_size_t phys_addr, unsigned long size)
+{
+	unsigned long prot;
+
+	if (after_paging_init) {
+		WARN_ON(1);
+		return NULL;
+	}
+
+	/*
+	 * PAGE_KERNEL depends on not-yet-initialised variables.
+	 * We don't care about coherency or executability of early_ioremap
+	 * pages anyway.
+	 */
+	prot = L_PTE_YOUNG | L_PTE_PRESENT | L_PTE_MT_DEV_NONSHARED;
+	return __early_ioremap(phys_addr, size, __pgprot(prot));
+}
+
+void __init early_iounmap(void __iomem *addr, unsigned long size)
+{
+	unsigned long virt_addr;
+	unsigned long offset;
+	unsigned int nrpages;
+	unsigned long idx;
+	int i, slot;
+
+	if (after_paging_init) {
+		WARN_ON(1);
+		return;
+	}
+
+	slot = -1;
+	for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
+		if (prev_map[i] == addr) {
+			slot = i;
+			break;
+		}
+	}
+
+	if (slot < 0) {
+		pr_info("early_iounmap(%p, %08lx): no matching slot\n",
+			addr, size);
+		WARN_ON(1);
+		return;
+	}
+
+	if (prev_size[slot] != size) {
+		pr_info("early_iounmap(%p, %08lx) [%d]: size does not match mapped size %08lx\n",
+			addr, size, slot, prev_size[slot]);
+		WARN_ON(1);
+		return;
+	}
+
+	if (early_ioremap_debug)
+		pr_info("early_iounmap(%p, %08lx) [%d]\n", addr, size, slot);
+
+	virt_addr = (unsigned long)addr;
+	if (virt_addr < fix_to_virt(FIX_BTMAP_BEGIN)) {
+		WARN_ON(1);
+		return;
+	}
+	offset = virt_addr & ~PAGE_MASK;
+	nrpages = PAGE_ALIGN(offset + size) >> PAGE_SHIFT;
+
+	idx = FIX_BTMAP_BEGIN + slot * NR_FIX_BTMAPS;
+	while (nrpages > 0) {
+		early_clear_fixmap(idx);
+		idx++;
+		--nrpages;
+	}
+	prev_map[slot] = NULL;
+}

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index e0d8565..c953b20 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include <asm/io.h>
 #include
 #include
@@ -1306,4 +1307,5 @@ void __init paging_init(struct machine_desc *mdesc)
 
 	empty_zero_page = virt_to_page(zero_page);
 	__flush_dcache_page(NULL, empty_zero_page);
+	early_ioremap_reset();
 }
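
For readers unfamiliar with the API: typical usage bookends an early device
access between early_ioremap() and early_iounmap(). Below is a minimal
sketch (not part of the patch) of what a caller in early platform code
could look like; EXAMPLE_DBG_UART_PHYS, its size and the 0x20 register
offset are made-up placeholder values, not a real device.

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/io.h>	/* early_ioremap()/early_iounmap() via asm/io.h */
    #include <linux/kernel.h>

    #define EXAMPLE_DBG_UART_PHYS	0x10009000UL	/* placeholder address */
    #define EXAMPLE_DBG_UART_SIZE	0x1000UL	/* one page */

    static int __init example_early_probe(void)
    {
    	void __iomem *base;
    	u32 id;

    	/*
    	 * Only valid before paging_init(); at most 32 pages per mapping,
    	 * at most 7 mappings live at once.
    	 */
    	base = early_ioremap(EXAMPLE_DBG_UART_PHYS, EXAMPLE_DBG_UART_SIZE);
    	if (!base)
    		return -ENOMEM;

    	id = readl(base + 0x20);	/* placeholder register offset */
    	pr_info("example device ID: 0x%08x\n", id);

    	/* The size must exactly match the matching early_ioremap() call. */
    	early_iounmap(base, EXAMPLE_DBG_UART_SIZE);

    	return 0;
    }

Booting with early_ioremap_debug on the kernel command line makes each
map/unmap print its slot and addresses, which helps verify placement.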
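
The slot arithmetic is easy to sanity-check outside the kernel. The
following stand-alone C mirrors __fix_to_virt() and the slot_virt[]
computation; FIXADDR_START is hard-coded here to 0xfff00000 (the value the
patch itself uses in set_pte_at()), and the remaining constants come from
the fixmap.h hunk above.

    #include <stdio.h>

    /* Constants mirrored from the patch; FIXADDR_START assumed 0xfff00000. */
    #define FIXADDR_START	 0xfff00000UL
    #define PAGE_SHIFT	 12
    #define NR_FIX_BTMAPS	 32	/* pages per slot: 32 * 4KB = 128KB */
    #define FIX_BTMAPS_SLOTS 7	/* 7 * 32 = 224 boot-time page mappings */

    /* __fix_to_virt(x) = FIXADDR_START + (x << PAGE_SHIFT) */
    static unsigned long fix_to_virt(unsigned long idx)
    {
    	return FIXADDR_START + (idx << PAGE_SHIFT);
    }

    int main(void)
    {
    	int slot;

    	for (slot = 0; slot < FIX_BTMAPS_SLOTS; slot++)
    		printf("slot %d -> va 0x%08lx\n", slot,
    		       fix_to_virt((unsigned long)slot * NR_FIX_BTMAPS));
    	return 0;
    }

All seven slot bases land on 128KB boundaries within the 896KB fixmap
window, consistent with the "224 temporary boot-time mappings" comment.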