Message ID | 20210707181506.30489-22-brijesh.singh@amd.com
---|---
State | New |
Series | Add AMD Secure Nested Paging (SEV-SNP) Guest Support
On Wed, Jul 07, 2021 at 01:14:51PM -0500, Brijesh Singh wrote:
> From: Michael Roth <michael.roth@amd.com>
>
> As of commit 103a4908ad4d ("x86/head/64: Disable stack protection for
> head$(BITS).o") kernel/head64.c is compiled with -fno-stack-protector
> to allow a call to set_bringup_idt_handler(), which would otherwise
> have stack protection enabled with CONFIG_STACKPROTECTOR_STRONG. While
> sufficient for that case, this will still cause issues if we attempt to

Who's "we"?

Please use passive voice in your text: no "we" or "I", etc. Personal
pronouns are ambiguous in text, especially with so many
parties/companies/etc developing the kernel so let's avoid them please.

> call out to any external functions that were compiled with stack
> protection enabled that in-turn make stack-protected calls, or if the
> exception handlers set up by set_bringup_idt_handler() make calls to
> stack-protected functions.
>
> Subsequent patches for SEV-SNP CPUID validation support will introduce
> both such cases. Attempting to disable stack protection for everything
> in scope to address that is prohibitive since much of the code, like
> SEV-ES #VC handler, is shared code that remains in use after boot and
> could benefit from having stack protection enabled. Attempting to inline
> calls is brittle and can quickly balloon out to library/helper code
> where that's not really an option.
>
> Instead, set up %gs to point a buffer that stack protector can use for
> canary values when needed.
>
> In doing so, it's likely we can stop using -no-stack-protector for
> head64.c, but that hasn't been tested yet, and head32.c would need a
> similar solution to be safe, so that is left as a potential follow-up.

Well, then fix it properly pls. Remove the -no-stack-protector, test it
and send it out, even separately if easier to handle.

This version looks half-baked, just so that it gets you what you need
for the SNP stuff but we don't do half-baked, sorry.

> Signed-off-by: Michael Roth <michael.roth@amd.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/kernel/head64.c | 18 +++++++++++++++++-
>  1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
> index f4c3e632345a..8615418f98f1 100644
> --- a/arch/x86/kernel/head64.c
> +++ b/arch/x86/kernel/head64.c
> @@ -74,6 +74,9 @@ static struct desc_struct startup_gdt[GDT_ENTRIES] = {
>  	[GDT_ENTRY_KERNEL_DS] = GDT_ENTRY_INIT(0xc093, 0, 0xfffff),
>  };
>
> +/* For use by stack protector code before switching to virtual addresses */
> +static char startup_gs_area[64];

That needs some CONFIG_STACKPROTECTOR ifdeffery around it, below too.

> +
>  /*
>   * Address needs to be set at runtime because it references the startup_gdt
>   * while the kernel still uses a direct mapping.
> @@ -598,6 +601,8 @@ void early_setup_idt(void)
>   */
>  void __head startup_64_setup_env(unsigned long physbase)
>  {
> +	u64 gs_area = (u64)fixup_pointer(startup_gs_area, physbase);
> +
>  	/* Load GDT */
>  	startup_gdt_descr.address = (unsigned long)fixup_pointer(startup_gdt, physbase);
>  	native_load_gdt(&startup_gdt_descr);
> @@ -605,7 +610,18 @@ void __head startup_64_setup_env(unsigned long physbase)
>  	/* New GDT is live - reload data segment registers */
>  	asm volatile("movl %%eax, %%ds\n"
>  		     "movl %%eax, %%ss\n"
> -		     "movl %%eax, %%es\n" : : "a"(__KERNEL_DS) : "memory");
> +		     "movl %%eax, %%es\n"
> +		     "movl %%eax, %%gs\n" : : "a"(__KERNEL_DS) : "memory");
> +
> +	/*
> +	 * GCC stack protection needs a place to store canary values. The
> +	 * default is %gs:0x28, which is what the kernel currently uses.
> +	 * Point GS base to a buffer that can be used for this purpose.
> +	 * Note that newer GCCs now allow this location to be configured,
> +	 * so if we change from the default in the future we need to ensure
> +	 * that this buffer overlaps whatever address ends up being used.
> +	 */
> +	native_wrmsr(MSR_GS_BASE, gs_area, gs_area >> 32);
>
>  	startup_64_load_idt(physbase);
>  }
> --
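The CONFIG_STACKPROTECTOR guards requested above are not part of the patch as
posted. A minimal sketch of what they might look like in head64.c follows; the
helper name startup_setup_gs() and its placement are assumptions for
illustration, not code from the series:

	#ifdef CONFIG_STACKPROTECTOR
	/* For use by stack protector code before switching to virtual addresses */
	static char startup_gs_area[64];

	/* Hypothetical helper: point GS base at the temporary canary buffer. */
	static void __head startup_setup_gs(unsigned long physbase)
	{
		u64 gs_area = (u64)fixup_pointer(startup_gs_area, physbase);

		native_wrmsr(MSR_GS_BASE, gs_area, gs_area >> 32);
	}
	#else
	static inline void startup_setup_gs(unsigned long physbase) { }
	#endif

startup_64_setup_env() would then call startup_setup_gs(physbase) after the
segment register reload, with the !CONFIG_STACKPROTECTOR stub compiling away
to nothing.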
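Separately, the link between the 64-byte startup_gs_area and the %gs:0x28
guard slot described in the patch comment is only implicit: the canary is read
from byte offset 0x28 (40) of whatever MSR_GS_BASE points at, so the buffer
must cover at least offsets 40..47. A hypothetical compile-time check, not
part of the posted patch (STARTUP_CANARY_OFFSET is an invented name), could
make that assumption explicit:

	#include <linux/build_bug.h>

	/* GCC's default stack-protector guard location with -mcmodel=kernel. */
	#define STARTUP_CANARY_OFFSET	0x28

	/* The buffer added by the patch. */
	static char startup_gs_area[64];

	/* The canary slot read via %gs:0x28 must land inside the buffer. */
	static_assert(STARTUP_CANARY_OFFSET + sizeof(unsigned long) <=
		      sizeof(startup_gs_area));

If the guard location were ever moved away from the default, as the comment
about newer GCCs allowing it to be configured suggests, keeping such a
constant in sync with the build flags would turn silent early-boot corruption
into a build failure.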