Message ID: 20211218085843.212497-1-cuigaosheng1@huawei.com
Series: replace open coded VA->PA calculation
On Mon, Dec 20, 2021 at 04:39:43PM +0100, Arnd Bergmann wrote:
> On Sat, Dec 18, 2021 at 9:58 AM Gaosheng Cui <cuigaosheng1@huawei.com> wrote:
> >
> > These patches replace an open coded calculation to obtain the physical
> > address of a far symbol with a call to the new ldr_l etc macro, and they
> > belong to the kaslr patch set of arm32.
> >
> > Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=arm-kaslr-latest
> >
> > Ard Biesheuvel (3):
> >   arm-soc: exynos: replace open coded VA->PA conversions
> >   arm-soc: mvebu: replace open coded VA->PA conversion
> >   arm-soc: various: replace open coded VA->PA calculation
>
> Usually these patches should go through the respective platform
> maintainer trees, and from there into the soc tree, but time is a
> little short here.
>
> I could apply them directly with the maintainer Acks

Sorry, but this is too low level for me to understand what is going on,
so I don't feel confident actually giving an ACK for the mvebu change.

Should the resulting assembly be exactly the same? Has the submitter
disassembled the object code and shown there is no actual difference in
the assembler output?

	Andrew
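For readers outside the thread: the open-coded pattern being replaced exploits the fact that, before the MMU is enabled, early boot code runs from its physical load address while symbols carry link-time (virtual) addresses, so the runtime-vs-link-time delta converts any far symbol's VA to its PA. A minimal Python model of that arithmetic — all addresses below are made-up illustrative values, not real kernel layout:

```python
# Illustrative model of the open-coded VA->PA trick the series replaces:
# before the MMU is on, code executes from its physical load address, so
# the delta between runtime and link-time addresses converts a symbol's
# virtual (link) address into its physical address.

LINK_ADDR = 0xC0008000   # virtual address the kernel is linked at (hypothetical)
LOAD_ADDR = 0x40008000   # physical address it was loaded at (hypothetical)

def symbol_pa(sym_va: int) -> int:
    """Physical address of a far symbol, via the runtime/link-time delta."""
    delta = LOAD_ADDR - LINK_ADDR   # what the open-coded adr/ldr/add sequence computes
    return sym_va + delta

if __name__ == "__main__":
    some_symbol_va = 0xC0123456     # hypothetical far symbol
    print(hex(symbol_pa(some_symbol_va)))   # -> 0x40123456
```

The ldr_l/adr_l macros in Ard's branch wrap this same PC-relative computation behind one assembler macro so each call site no longer open-codes it.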
> I could apply them directly with the maintainer Acks, but I don't understand
> the significance of you sending them now. Is something broken without the
> three patches? Are these the only ones missing from Ard's original series,
> or is this preparation? Would you expect the patches to get backported to
> stable kernels?

Thanks for your reply.

This is preparation work for arm32 KASLR, and I want to continue to
improve the solution based on the work of Ard. These patches are
relatively independent, so I am submitting them first.

Gaosheng.

On 2021/12/20 23:39, Arnd Bergmann wrote:
> On Sat, Dec 18, 2021 at 9:58 AM Gaosheng Cui <cuigaosheng1@huawei.com> wrote:
>> These patches replace an open coded calculation to obtain the physical
>> address of a far symbol with a call to the new ldr_l etc macro, and they
>> belong to the kaslr patch set of arm32.
>>
>> Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=arm-kaslr-latest
>>
>> Ard Biesheuvel (3):
>>   arm-soc: exynos: replace open coded VA->PA conversions
>>   arm-soc: mvebu: replace open coded VA->PA conversion
>>   arm-soc: various: replace open coded VA->PA calculation
>
> Usually these patches should go through the respective platform
> maintainer trees, and from there into the soc tree, but time is a
> little short here.
>
> I could apply them directly with the maintainer Acks, but I don't understand
> the significance of you sending them now. Is something broken without the
> three patches? Are these the only ones missing from Ard's original series,
> or is this preparation? Would you expect the patches to get backported to
> stable kernels?
>
>        Arnd
On Tue, Dec 21, 2021 at 2:41 AM cuigaosheng <cuigaosheng1@huawei.com> wrote:
>
> > I could apply them directly with the maintainer Acks, but I don't understand
> > the significance of you sending them now. Is something broken without the
> > three patches? Are these the only ones missing from Ard's original series,
> > or is this preparation? Would you expect the patches to get backported to
> > stable kernels?
>
> Thanks for your reply.
>
> This is preparation work for arm32 kaslr, and I want to continue to improve
> the solution based on the work of Ard. These patches are relatively
> independent, so I submit these patches first.

The approach of merging support incrementally is good in principle, but
in this case I think we first need to agree on the overall direction.

How far have you come in rebasing Ard's patches — do you have KASLR
working yet? This is information that should go into the [PATCH 0/3]
cover letter. Do you have a particular target platform in mind?

I think for CPUs that can use LPAE, we want to eventually move to the
4G:4G memory model, which in turn depends on having the kernel in
vmalloc space, as implemented by Linus Walleij in
https://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-integrator.git/log/?h=kernel-in-vmalloc-v5.14-rc1

With this work, the randomization will look quite different: on the one
hand it leaves less room for relocating the kernel within the smaller
256MB vmalloc space, while on the other hand it opens the possibility of
complete randomization by scrambling the virt-to-phys mapping in vmalloc
space, using linear virtual addresses to refer to a randomized set of
physical addresses. (This is just a wild idea that one could implement;
nobody has actual plans for it at the moment, and it comes with
additional runtime overhead.)

        Arnd
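Arnd's point about "less room for relocating the kernel" in a 256MB vmalloc window can be made concrete with a back-of-the-envelope entropy estimate. The window size comes from the discussion above; the image size and alignment below are hypothetical placeholders, chosen only to show the shape of the calculation:

```python
# Rough KASLR entropy estimate for a kernel randomized inside a 256MB
# vmalloc window. Image size and placement alignment are made-up values.
import math

VMALLOC_SIZE = 256 * 1024 * 1024   # the 256MB window mentioned in the thread
KERNEL_SIZE  = 16 * 1024 * 1024    # hypothetical kernel image size
ALIGN        = 2 * 1024 * 1024     # hypothetical section-mapping alignment

def kaslr_slots(window: int, image: int, align: int) -> int:
    """Number of distinct aligned base addresses the image fits at."""
    return (window - image) // align + 1

slots = kaslr_slots(VMALLOC_SIZE, KERNEL_SIZE, ALIGN)
print("%d slots, ~%.1f bits of entropy" % (slots, math.log2(slots)))
# -> 121 slots, ~6.9 bits of entropy
```

With these placeholder numbers the window yields only about 7 bits of entropy, which illustrates why scrambling the virt-to-phys mapping instead (as Arnd speculates) is attractive: it is not limited by the size of the virtual window.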
On Tue, Dec 21, 2021 at 10:16 AM Arnd Bergmann <arnd@arndb.de> wrote:

> I think for CPUs that can use LPAE, we want to eventually move to the 4G:4G
> memory model, which in turn depends on having the kernel in vmalloc space, as
> implemented by Linus Walleij in
> https://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-integrator.git/log/?h=kernel-in-vmalloc-v5.14-rc1

Yeah, I'm still working on this series. The 4/4 split works pretty well,
but it breaks KASan, and I was in the process of fixing that when I left
for parental leave.

The reason it breaks would be related to KASan not expecting executable
code in the vmalloc area: since the shadowing algorithm is a simple
pointer offset, and since we go to great lengths to chisel shadow memory
out of lowmem at a fixed offset from TEXT_OFFSET, we have a problem.

The patch "KASAN horror" shows what I am trying to do to fix it; it's
"just" some hard work that is missing.

Yours,
Linus Walleij
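The "simple pointer offset" Linus mentions is generic KASAN's shadow mapping: each 8-byte granule of memory maps to one shadow byte at `(addr >> 3) + KASAN_SHADOW_OFFSET`. If the shadow region carved out of lowmem only covers the linear map, instrumented code running from vmalloc space computes shadow addresses that nothing backs. A sketch of that failure mode — all addresses are hypothetical, just to show the arithmetic:

```python
# Generic KASAN maps each 8-byte granule to one shadow byte:
#   shadow(addr) = (addr >> 3) + KASAN_SHADOW_OFFSET
# If the backed shadow region covers only the linear map (lowmem), an
# address in the vmalloc area lands outside it. Hypothetical layout:

KASAN_SHADOW_OFFSET = 0x1F000000                       # hypothetical
LOWMEM_START, LOWMEM_END   = 0xC0000000, 0xF0000000    # hypothetical linear map
VMALLOC_START, VMALLOC_END = 0xF0000000, 0xFF000000    # hypothetical vmalloc area

def shadow(addr: int) -> int:
    return (addr >> 3) + KASAN_SHADOW_OFFSET

# Shadow range actually backed with memory (sized for lowmem only):
SHADOW_LO, SHADOW_HI = shadow(LOWMEM_START), shadow(LOWMEM_END)

def shadow_backed(addr: int) -> bool:
    return SHADOW_LO <= shadow(addr) < SHADOW_HI

print(shadow_backed(0xC1000000))  # lowmem address  -> True
print(shadow_backed(0xF1000000))  # vmalloc address -> False: the problem
```

Any instrumented load/store (or stack access) at the vmalloc address would dereference an unbacked shadow address and fault, which is the breakage described above.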
On Wed, Dec 22, 2021 at 10:30 AM Ard Biesheuvel <ardb@kernel.org> wrote:
> On Wed, 22 Dec 2021 at 03:31, Linus Walleij <linus.walleij@linaro.org> wrote:
> > On Tue, Dec 21, 2021 at 10:16 AM Arnd Bergmann <arnd@arndb.de> wrote:
> > >
> > > I think for CPUs that can use LPAE, we want to eventually move to the 4G:4G
> > > memory model, which in turn depends on having the kernel in vmalloc space, as
> > > implemented by Linus Walleij in
> > > https://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-integrator.git/log/?h=kernel-in-vmalloc-v5.14-rc1
> >
> > Yeah I'm still working on this series. The 4/4 split works pretty well,

Actually, it's just kernel-in-vmalloc; 4/4 comes after that.

> > but it breaks KASan and I was in the process of fixing that when I left
> > for parental leave.
> >
> > The reason it breaks would be related to KASan not expecting
> > executable code in the vmalloc area, and since the shadowing
> > algorithm is a simple pointer offset, and since we go to lengths
> > to chisel shadow memory out of lowmem at a fixed offset from
> > TEXT_OFFSET, we have a problem.
>
> Vmap'ed stacks actually have a similar problem, which is why they are
> disabled when KASan is enabled. But this can be fixed by enabling arch
> support for KASAN_VMALLOC, and I suspect it may address the vmap'ed
> kernel as well.

Yep, after seeing the other conversation on the topic I realized that
this is indeed the same issue I'm seeing. I can't disable KASAN just
because the kernel is in vmalloc space, though, so I suppose when I
finally get back to this I will have to fix KASAN_VMALLOC too, if no one
beats me to it. (It'd be great if someone could beat me to it...)

Yours,
Linus Walleij
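The KASAN_VMALLOC approach Ard points at replaces the single statically carved shadow region with shadow pages populated on demand whenever an area of vmalloc space is mapped. A toy model of that idea — this is not the kernel API, just an illustration of on-demand shadow backing, with a made-up page size and offset:

```python
# Toy model of the KASAN_VMALLOC idea: instead of one statically carved
# shadow region, shadow pages are populated on demand when a vmalloc
# area is mapped. Page size and shadow offset are illustrative values.

PAGE = 4096
KASAN_SHADOW_OFFSET = 0x1F000000   # hypothetical
shadow_pages: set = set()           # stands in for real page-table population

def shadow(addr: int) -> int:
    return (addr >> 3) + KASAN_SHADOW_OFFSET

def kasan_populate_vmalloc(start: int, size: int) -> None:
    """Back the shadow for [start, start+size) with pages, on demand."""
    lo = shadow(start) // PAGE
    hi = shadow(start + size - 1) // PAGE
    for page in range(lo, hi + 1):
        shadow_pages.add(page)

def shadow_backed(addr: int) -> bool:
    return shadow(addr) // PAGE in shadow_pages

area = 0xF1000000                   # hypothetical vmalloc allocation
assert not shadow_backed(area)      # no shadow until the area is mapped
kasan_populate_vmalloc(area, 64 * 1024)
assert shadow_backed(area)          # now accesses to the area can be checked
```

Because shadow backing follows the mapping rather than a fixed lowmem carve-out, the same mechanism plausibly covers vmap'ed stacks and a vmap'ed kernel image alike, which is Ard's suggestion above.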