Message ID | 1426870974-4801-1-git-send-email-ard.biesheuvel@linaro.org
State      | New
On 20 March 2015 at 18:02, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> Hi all,
>
> This is another possible approach to fixing the current, flawed
> EFI Image placement logic: instead of fixing that logic and the
> documentation, why not relax the requirement that the kernel
> Image not be placed across a 512 MB boundary?
>

Hmm, it seems this still needs a pinch of refinement, but we could at
least discuss the general idea.
On 20 March 2015 at 18:49, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> On 20 March 2015 at 18:02, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>> Hi all,
>>
>> This is another possible approach to fixing the current, flawed
>> EFI Image placement logic: instead of fixing that logic and the
>> documentation, why not relax the requirement that the kernel
>> Image not be placed across a 512 MB boundary?
>>
>
> Hmm, it seems this still needs a pinch of refinement, but we could at
> least discuss the general idea.
>

Replying to self again: it appears that, in order to relax the restriction
that no 512 MB alignment boundary may be crossed, it would be sufficient
to shrink the ID mapping to a single page, containing only the couple of
instructions we execute between enabling the MMU and jumping into the
virtually remapped kernel text. Since mapping a single page can never
cross such a boundary, no extended translation tables are needed.
So as it turns out, the 512 MB alignment boundary restriction appears to
have been introduced by accident when increasing the ID map to cover the
entire kernel Image.

So this reverts that change, by reducing the ID map to something that can
never cross a 512 MB boundary by construction.

Patch #1 removes some functions that are unused, so that I don't have to
worry about them in patch #2.

Patch #2 introduces the reduced ID map, using a separate linker section
that contains the code that manipulates the state of the MMU.

Patch #3 removes the sleep_idmap_phys global, which always points to the
ID map anyway.

Ard Biesheuvel (3):
  arm64: remove soft_restart() and friends
  arm64: reduce ID map to a single page
  arm64: drop sleep_idmap_phys

 arch/arm64/include/asm/mmu.h         |  1 -
 arch/arm64/include/asm/proc-fns.h    |  3 ---
 arch/arm64/include/asm/system_misc.h |  1 -
 arch/arm64/kernel/head.S             | 13 +++++++------
 arch/arm64/kernel/process.c          | 12 +-----------
 arch/arm64/kernel/sleep.S            |  9 ++++-----
 arch/arm64/kernel/suspend.c          |  3 ---
 arch/arm64/kernel/vmlinux.lds.S      | 11 ++++++++++-
 arch/arm64/mm/mmu.c                  | 11 -----------
 arch/arm64/mm/proc.S                 | 33 ---------------------------------
 10 files changed, 22 insertions(+), 75 deletions(-)
On 23 March 2015 at 11:45, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> wrote:
> On Mon, Mar 23, 2015 at 09:07:23AM +0000, Ard Biesheuvel wrote:
>> So as it turns out, the 512 MB alignment boundary restriction appears
>> to have been introduced by accident when increasing the ID map to cover
>> the entire kernel Image.
>>
>> So this reverts that change, by reducing the ID map to something that
>> can never cross a 512 MB boundary by construction.
>>
>> Patch #1 removes some functions that are unused, so that I don't have
>> to worry about them in patch #2
>>
>> Patch #2 introduces the reduced ID map, using a separate linker section
>> that contains code the manipulates the state of the MMU.
>>
>> Patch #3 removes the sleep_idmap_phys global which always points to
>> the ID map anyway
>
> Patch 1 and 2 do not apply for me, what tree/commit are they based against ?
>
> Please let me know so that I can give patch 3 a go.
>

They should apply onto for-next/core, but I pushed them to a public branch
here:

https://git.linaro.org/people/ard.biesheuvel/linux-arm.git/shortlog/refs/heads/arm64-idmap

Regards,
Ard.

>> Ard Biesheuvel (3):
>>   arm64: remove soft_restart() and friends
>>   arm64: reduce ID map to a single page
>>   arm64: drop sleep_idmap_phys
>>
>>  arch/arm64/include/asm/mmu.h         |  1 -
>>  arch/arm64/include/asm/proc-fns.h    |  3 ---
>>  arch/arm64/include/asm/system_misc.h |  1 -
>>  arch/arm64/kernel/head.S             | 13 +++++++------
>>  arch/arm64/kernel/process.c          | 12 +-----------
>>  arch/arm64/kernel/sleep.S            |  9 ++++-----
>>  arch/arm64/kernel/suspend.c          |  3 ---
>>  arch/arm64/kernel/vmlinux.lds.S      | 11 ++++++++++-
>>  arch/arm64/mm/mmu.c                  | 11 -----------
>>  arch/arm64/mm/proc.S                 | 33 ---------------------------------
>>  10 files changed, 22 insertions(+), 75 deletions(-)
>>
>> --
>> 1.8.3.2
>>
>>
>> _______________________________________________
>> linux-arm-kernel mailing list
>> linux-arm-kernel@lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
>>
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 3d02b1869eb8..d2189c359364 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -43,8 +43,8 @@
 #define SWAPPER_PGTABLE_LEVELS	(CONFIG_ARM64_PGTABLE_LEVELS - 1)
 #endif

-#define SWAPPER_DIR_SIZE	(SWAPPER_PGTABLE_LEVELS * PAGE_SIZE)
-#define IDMAP_DIR_SIZE		(3 * PAGE_SIZE)
+#define SWAPPER_DIR_SIZE	((SWAPPER_PGTABLE_LEVELS + 1) * PAGE_SIZE)
+#define IDMAP_DIR_SIZE		(4 * PAGE_SIZE)

 #ifndef __ASSEMBLY__
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 0dbdb4f3634f..9c3f95f30421 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -274,18 +274,21 @@ ENDPROC(preserve_boot_args)
  * virt:	virtual address
  * shift:	#imm page table shift
  * ptrs:	#imm pointers per table page
+ * offset:	#imm offset into the lowest translation level, in pages
  *
  * Preserves:	virt
  * Corrupts:	tmp1, tmp2
  * Returns:	tbl -> next level table page address
  */
-	.macro	create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2
+	.macro	create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2, offset=0
 	lsr	\tmp1, \virt, #\shift
+	.if	\offset
+	add	\tmp1, \tmp1, #\offset
+	.endif
 	and	\tmp1, \tmp1, #\ptrs - 1	// table index
-	add	\tmp2, \tbl, #PAGE_SIZE
+	add	\tmp2, \tbl, #(\offset + 1) * PAGE_SIZE
 	orr	\tmp2, \tmp2, #PMD_TYPE_TABLE	// address of next table and entry type
 	str	\tmp2, [\tbl, \tmp1, lsl #3]
-	add	\tbl, \tbl, #PAGE_SIZE		// next level table page
 	.endm

 /*
@@ -297,9 +300,14 @@ ENDPROC(preserve_boot_args)
  */
 	.macro	create_pgd_entry, tbl, virt, tmp1, tmp2
 	create_table_entry \tbl, \virt, PGDIR_SHIFT, PTRS_PER_PGD, \tmp1, \tmp2
-#if SWAPPER_PGTABLE_LEVELS == 3
+#if SWAPPER_PGTABLE_LEVELS != 3
+	create_table_entry \tbl, \virt, PGDIR_SHIFT, PTRS_PER_PGD, \tmp1, \tmp2, 1
+#else
+	add	\tbl, \tbl, #PAGE_SIZE		// next level table page
 	create_table_entry \tbl, \virt, TABLE_SHIFT, PTRS_PER_PTE, \tmp1, \tmp2
+	create_table_entry \tbl, \virt, TABLE_SHIFT, PTRS_PER_PTE, \tmp1, \tmp2, 1
 #endif
+	add	\tbl, \tbl, #PAGE_SIZE		// next level table page
 	.endm

 /*
@@ -396,6 +404,7 @@ __create_page_tables:
 	str_l	x5, idmap_t0sz, x6

 	create_table_entry x0, x3, EXTRA_SHIFT, EXTRA_PTRS, x5, x6
+	add	x0, x0, #PAGE_SIZE		// next level table page
 1:
 #endif
Hi all,

This is another possible approach to fixing the current, flawed EFI Image
placement logic: instead of fixing that logic and the documentation, why
not relax the requirement that the kernel Image not be placed across a
512 MB boundary?

---------------->8-----------------

This patch changes the early page table code so that two adjacent entries
are used to map two pages' worth of block entries at the lowest level,
both for idmap_pg_dir and swapper_pg_dir.

The purpose is to allow the kernel Image to cross a 512 MB or 1 GB
alignment boundary (depending on page size), which is something that is
not specifically banned by the current wording of the boot protocol.

(The boot protocol stipulates that the kernel must be placed within 512 MB
of the beginning of RAM. However, the beginning of RAM is not necessarily
aligned to 512 MB.)

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/page.h |  4 ++--
 arch/arm64/kernel/head.S      | 17 +++++++++++++----
 2 files changed, 15 insertions(+), 6 deletions(-)