Message ID: 20221018110441.3855148-1-ardb@kernel.org
Series: arm64: efi: leave MMU and caches on at boot
Hi Ard,

On Tue, Oct 18, 2022 at 01:04:35PM +0200, Ard Biesheuvel wrote:
> The purpose of this series is to remove any explicit cache maintenance
> for coherency during early boot that becomes unnecessary if we simply
> retain the cacheable 1:1 mapping of all of system RAM provided by EFI,
> and use it to populate the ID map page tables. After setting up this
> preliminary ID map, we disable the MMU, drop to EL1, reprogram the MAIR,
> TCR and SCTLR registers as before, and proceed as usual, avoiding the
> need for any manipulations of memory while the MMU and caches are off.
>
> The only properties of the firmware provided 1:1 map we rely on is that
> it does not require any explicit cache maintenance for coherency, and
> that it covers the entire memory footprint of the image, including the
> BSS and padding at the end - all else is under control of the kernel
> itself, as before.
>
> Changes since v3:
> - drop EFI_LOADER_CODE memory type patch that has been queued in the
>   mean time
> - rebased onto [partial] series that moves efi-entry.S into the libstub/
>   source directory [0]
> - fixed a correctness issue in patch #2

I really like this series, but I'm also very nervous about supporting
booting the kernel with the MMU enabled outside of EFI. The booting
documentation prohibits this, but we don't appear to take any steps to
prevent this case with your series. Perhaps we shouldn't, but I do think
it would be worth trying to warn+taint if we detect it so that we don't
spend too much time debugging strange memory issues on platforms that
try to use such a configuration.

What do you think?

Cheers,

Will
On Mon, 7 Nov 2022 at 17:12, Will Deacon <will@kernel.org> wrote:
>
> Hi Ard,
>
> On Tue, Oct 18, 2022 at 01:04:35PM +0200, Ard Biesheuvel wrote:
> > The purpose of this series is to remove any explicit cache maintenance
> > for coherency during early boot that becomes unnecessary if we simply
> > retain the cacheable 1:1 mapping of all of system RAM provided by EFI,
> > and use it to populate the ID map page tables. After setting up this
> > preliminary ID map, we disable the MMU, drop to EL1, reprogram the MAIR,
> > TCR and SCTLR registers as before, and proceed as usual, avoiding the
> > need for any manipulations of memory while the MMU and caches are off.
> >
> > The only properties of the firmware provided 1:1 map we rely on is that
> > it does not require any explicit cache maintenance for coherency, and
> > that it covers the entire memory footprint of the image, including the
> > BSS and padding at the end - all else is under control of the kernel
> > itself, as before.
> >
> > Changes since v3:
> > - drop EFI_LOADER_CODE memory type patch that has been queued in the
> >   mean time
> > - rebased onto [partial] series that moves efi-entry.S into the libstub/
> >   source directory [0]
> > - fixed a correctness issue in patch #2
>
> I really like this series, but I'm also very nervous about supporting
> booting the kernel with the MMU enabled outside of EFI. The booting
> documentation prohibits this, but we don't appear to take any steps to
> prevent this case with your series. Perhaps we shouldn't, but I do think
> it would be worth trying to warn+taint if we detect it so that we don't
> spend too much time debugging strange memory issues on platforms that
> try to use such a configuration.
>
> What do you think?
>

I share your concern, and capturing the value of SCTLR at boot and
warning about it later should be trivial to do. In fact, we already do
something similar for the alignment, where only EFI is permitted to
deviate from the 2 MiB alignment requirement of the image's placement
in memory. I'll add something in the same spot.

Note that I need to respin this in any case - the EL2 startup code needs
to be cleaned to the PoC as well, given that it will also execute with
MMU and caches off at EL2 when finalise_el2() is called.

I was about to get back to this so I should have a v5 tomorrow.
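For readers following the thread, a minimal sketch of the kind of check being discussed might look like the following. It is not code from the series: the `boot_sctlr` variable, the function name, and the call site are assumptions, and in practice the SCTLR value would have to be stashed by the early assembly entry path before the kernel reprograms the register.

```c
/*
 * Hypothetical sketch only -- not taken from the series. Assumes the early
 * boot assembly has saved the SCTLR_EL1 value the kernel was entered with
 * into boot_sctlr before the kernel itself reprograms the register.
 */
#include <linux/bug.h>
#include <linux/efi.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <asm/sysreg.h>

u64 boot_sctlr;		/* hypothetical: written by the early entry code */

/* Could be called from setup_arch() once the console is up. */
void __init check_boot_mmu_state(void)
{
	/*
	 * Only the EFI stub is permitted to enter the kernel with the MMU
	 * on; warn and taint if any other loader did so.
	 */
	if ((boot_sctlr & SCTLR_ELx_M) && !efi_enabled(EFI_BOOT))
		WARN_TAINT(1, TAINT_FIRMWARE_WORKAROUND,
			   "kernel was entered with the MMU enabled by a non-EFI loader\n");
}
```

Similarly, the PoC maintenance Ard mentions for the EL2 startup code would amount to something along these lines, using the kernel's existing `dcache_clean_poc()` helper; the section symbols here are made up purely for illustration and do not appear in the series.

```c
#include <asm/cacheflush.h>

/* Hypothetical section markers around the EL2 startup text. */
extern char __el2_startup_text_start[], __el2_startup_text_end[];

/*
 * Clean the EL2 startup text to the Point of Coherency so it is visible
 * to fetches performed with the MMU and caches off at EL2.
 */
void clean_el2_startup_text(void)
{
	dcache_clean_poc((unsigned long)__el2_startup_text_start,
			 (unsigned long)__el2_startup_text_end);
}
```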