Message ID | fae59721001d43db9a0ad2c9c09947284f1ecaa1.1662459668.git.baskov@ispras.ru |
---|---|
State | Superseded |
Series | x86_64: Improvements at compressed kernel stage |
On Tue, 6 Sept 2022 at 12:41, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> After every implicit mapping is removed, this code is no longer needed.
>
> Remove memory mapping from page fault handler to ensure that there are
> no hidden invalid memory accesses.
>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>

I don't grok this 100% but to me, it seems not having to rely on a page
fault handler to ensure that the 1:1 mapping has sufficient coverage is
a win so

Acked-by: Ard Biesheuvel <ardb@kernel.org>

> ---
>  arch/x86/boot/compressed/ident_map_64.c | 26 ++++++++++----------------
>  1 file changed, 10 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
> index 880e08293023..c20cd31e665f 100644
> --- a/arch/x86/boot/compressed/ident_map_64.c
> +++ b/arch/x86/boot/compressed/ident_map_64.c
> @@ -385,27 +385,21 @@ void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
>  {
>  	unsigned long address = native_read_cr2();
>  	unsigned long end;
> -	bool ghcb_fault;
> +	char *msg;
>
> -	ghcb_fault = sev_es_check_ghcb_fault(address);
> +	if (sev_es_check_ghcb_fault(address))
> +		msg = "Page-fault on GHCB page:";
> +	else
> +		msg = "Unexpected page-fault:";
>
>  	address &= PMD_MASK;
>  	end = address + PMD_SIZE;
>
>  	/*
> -	 * Check for unexpected error codes. Unexpected are:
> -	 *	- Faults on present pages
> -	 *	- User faults
> -	 *	- Reserved bits set
> -	 */
> -	if (error_code & (X86_PF_PROT | X86_PF_USER | X86_PF_RSVD))
> -		do_pf_error("Unexpected page-fault:", error_code, address, regs->ip);
> -	else if (ghcb_fault)
> -		do_pf_error("Page-fault on GHCB page:", error_code, address, regs->ip);
> -
> -	/*
> -	 * Error code is sane - now identity map the 2M region around
> -	 * the faulting address.
> +	 * Since all memory allocations are made explicit
> +	 * now, every page fault at this stage is an
> +	 * error and the error handler is there only
> +	 * for debug purposes.
>  	 */
> -	kernel_add_identity_map(address, end, MAP_WRITE);
> +	do_pf_error(msg, error_code, address, regs->ip);
>  }
> --
> 2.35.1
>
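To make the reviewer's point concrete: with the on-demand mapping gone, decompressor code has to cover any memory it touches in the identity page tables itself. Below is a minimal sketch of such a caller, assuming the kernel_add_identity_map(start, end, flags) helper and MAP_WRITE flag used elsewhere in this series; the function and buffer are hypothetical and not part of the patch.

```c
/*
 * Hypothetical caller in the decompressor: the region must be
 * identity-mapped explicitly before it is touched, because the
 * page fault handler no longer maps memory on demand.
 */
static void init_scratch_buffer(unsigned long buf, unsigned long size)
{
	/* Cover the buffer in the 1:1 page tables, writable. */
	kernel_add_identity_map(buf, buf + size, MAP_WRITE);

	/* Only now is it safe to access it; a fault here would be fatal. */
	memset((void *)buf, 0, size);
}
```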
diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index 880e08293023..c20cd31e665f 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -385,27 +385,21 @@ void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
 {
 	unsigned long address = native_read_cr2();
 	unsigned long end;
-	bool ghcb_fault;
+	char *msg;
 
-	ghcb_fault = sev_es_check_ghcb_fault(address);
+	if (sev_es_check_ghcb_fault(address))
+		msg = "Page-fault on GHCB page:";
+	else
+		msg = "Unexpected page-fault:";
 
 	address &= PMD_MASK;
 	end = address + PMD_SIZE;
 
 	/*
-	 * Check for unexpected error codes. Unexpected are:
-	 *	- Faults on present pages
-	 *	- User faults
-	 *	- Reserved bits set
-	 */
-	if (error_code & (X86_PF_PROT | X86_PF_USER | X86_PF_RSVD))
-		do_pf_error("Unexpected page-fault:", error_code, address, regs->ip);
-	else if (ghcb_fault)
-		do_pf_error("Page-fault on GHCB page:", error_code, address, regs->ip);
-
-	/*
-	 * Error code is sane - now identity map the 2M region around
-	 * the faulting address.
+	 * Since all memory allocations are made explicit
+	 * now, every page fault at this stage is an
+	 * error and the error handler is there only
+	 * for debug purposes.
 	 */
-	kernel_add_identity_map(address, end, MAP_WRITE);
+	do_pf_error(msg, error_code, address, regs->ip);
 }
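Read without the diff markers, the handler after this patch reduces to a pure error reporter. The following is a reconstruction from the hunk above, not a copy from the tree; note that end is still computed even though nothing maps the region any more.

```c
void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
{
	unsigned long address = native_read_cr2();
	unsigned long end;
	char *msg;

	/* Pick the message depending on whether the GHCB page faulted. */
	if (sev_es_check_ghcb_fault(address))
		msg = "Page-fault on GHCB page:";
	else
		msg = "Unexpected page-fault:";

	/* Align to the 2M region around the fault; end is now unused. */
	address &= PMD_MASK;
	end = address + PMD_SIZE;

	/*
	 * Since all memory allocations are made explicit now, every page
	 * fault at this stage is an error and the handler only reports it.
	 */
	do_pf_error(msg, error_code, address, regs->ip);
}
```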
After every implicit mapping is removed, this code is no longer needed.

Remove memory mapping from page fault handler to ensure that there are
no hidden invalid memory accesses.

Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/boot/compressed/ident_map_64.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)
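For contrast, this is the "no longer needed" fallback the commit message refers to, taken from the removed lines of the hunk: when the error code looked sane, the old handler would quietly extend the 1:1 mapping instead of reporting an error, which is exactly what could hide an invalid access.

```c
	/*
	 * Removed by this patch: the old handler identity-mapped the
	 * 2M region around the faulting address and let execution
	 * continue as if nothing had happened.
	 */
	kernel_add_identity_map(address, end, MAP_WRITE);
```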