From patchwork Tue Apr 8 08:52:57 2025
Date: Tue, 8 Apr 2025 10:52:57 +0200
In-Reply-To: <20250408085254.836788-9-ardb+git@google.com>
Message-ID: <20250408085254.836788-11-ardb+git@google.com>
Subject: [PATCH v3 2/7] x86/asm: Make rip_rel_ptr() usable from fPIC code
From: Ard Biesheuvel
To: linux-efi@vger.kernel.org
Cc: x86@kernel.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
 Ard Biesheuvel, Tom Lendacky, Dionna Amalie Glaze, Kevin Loughlin

RIP_REL_REF() is used in non-PIC C code that is called very early,
before the kernel virtual mapping is up, which is the mapping that the
linker expects. It is currently used in two different ways:

- to refer to the value of a global variable, including as an lvalue in
  assignments;

- to take the address of a global variable via the mapping that the
  code currently executes at.

The former case is only needed in non-PIC code, as PIC code will never
use absolute symbol references when the address of the symbol is not
being used. But taking the address of a variable in PIC code may still
require extra care, as a stack allocated struct assignment may be
emitted as a memcpy() from a statically allocated copy in .rodata.

For instance, this

  void startup_64_setup_gdt_idt(void)
  {
          struct desc_ptr startup_gdt_descr = {
                  .address = (__force unsigned long)gdt_page.gdt,
                  .size    = GDT_SIZE - 1,
          };

may result in an absolute symbol reference in PIC code, even though the
struct is allocated on the stack and populated at runtime.

To address this case, make rip_rel_ptr() accessible in PIC code, and
update any existing uses where the address of a global variable is
taken using RIP_REL_REF().

Once all code of this nature has been moved into arch/x86/boot/startup
and built with -fPIC, RIP_REL_REF() can be retired, and only
rip_rel_ptr() will remain.
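
(Illustrative sketch, not part of the patch: the usage pattern this
change enables in -fPIC objects. The variable and function names here
are hypothetical.)

  /* Hypothetical early code that runs via the 1:1 mapping */
  static unsigned int early_flag __initdata;

  void __head early_example(void)
  {
          /*
           * &early_flag is resolved by the linker to a kernel virtual
           * address, which is not mapped yet at this point.
           * rip_rel_ptr() forces a RIP-relative LEA instead, so the
           * resulting pointer is valid via whichever mapping the code
           * is currently executing from.
           */
          unsigned int *flag = rip_rel_ptr(&early_flag);

          *flag = 1;
  }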

Signed-off-by: Ard Biesheuvel
---
 arch/x86/coco/sev/core.c           |  2 +-
 arch/x86/coco/sev/shared.c         |  4 ++--
 arch/x86/include/asm/asm.h         |  2 +-
 arch/x86/kernel/head64.c           | 23 ++++++++++----------
 arch/x86/mm/mem_encrypt_identity.c |  6 ++---
 5 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
index b0c1a7a57497..832f7a7b10b2 100644
--- a/arch/x86/coco/sev/core.c
+++ b/arch/x86/coco/sev/core.c
@@ -2400,7 +2400,7 @@ static __head void svsm_setup(struct cc_blob_sev_info *cc_info)
	 * kernel was loaded (physbase), so the get the CA address using
	 * RIP-relative addressing.
	 */
-	pa = (u64)&RIP_REL_REF(boot_svsm_ca_page);
+	pa = (u64)rip_rel_ptr(&boot_svsm_ca_page);

	/*
	 * Switch over to the boot SVSM CA while the current CA is still
diff --git a/arch/x86/coco/sev/shared.c b/arch/x86/coco/sev/shared.c
index 2e4122f8aa6b..04982d356803 100644
--- a/arch/x86/coco/sev/shared.c
+++ b/arch/x86/coco/sev/shared.c
@@ -475,7 +475,7 @@ static int sev_cpuid_hv(struct ghcb *ghcb, struct es_em_ctxt *ctxt, struct cpuid
  */
 static const struct snp_cpuid_table *snp_cpuid_get_table(void)
 {
-	return &RIP_REL_REF(cpuid_table_copy);
+	return rip_rel_ptr(&cpuid_table_copy);
 }

 /*
@@ -1681,7 +1681,7 @@ static bool __head svsm_setup_ca(const struct cc_blob_sev_info *cc_info)
	 * routine is running identity mapped when called, both by the decompressor
	 * code and the early kernel code.
	 */
-	if (!rmpadjust((unsigned long)&RIP_REL_REF(boot_ghcb_page), RMP_PG_SIZE_4K, 1))
+	if (!rmpadjust((unsigned long)rip_rel_ptr(&boot_ghcb_page), RMP_PG_SIZE_4K, 1))
		return false;

	/*
diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index cc2881576c2c..a9f07799e337 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -114,13 +114,13 @@
 #endif

 #ifndef __ASSEMBLER__
-#ifndef __pic__
 static __always_inline __pure void *rip_rel_ptr(void *p)
 {
	asm("leaq %c1(%%rip), %0" : "=r"(p) : "i"(p));

	return p;
 }
+#ifndef __pic__
 #define RIP_REL_REF(var)	(*(typeof(&(var)))rip_rel_ptr(&(var)))
 #else
 #define RIP_REL_REF(var)	(var)
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index fa9b6339975f..3fb23d805cef 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -106,8 +106,8 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp,
	 * attribute.
	 */
	if (sme_get_me_mask()) {
-		paddr = (unsigned long)&RIP_REL_REF(__start_bss_decrypted);
-		paddr_end = (unsigned long)&RIP_REL_REF(__end_bss_decrypted);
+		paddr = (unsigned long)rip_rel_ptr(__start_bss_decrypted);
+		paddr_end = (unsigned long)rip_rel_ptr(__end_bss_decrypted);

		for (; paddr < paddr_end; paddr += PMD_SIZE) {
			/*
@@ -144,8 +144,8 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp,
 unsigned long __head __startup_64(unsigned long p2v_offset,
				  struct boot_params *bp)
 {
-	pmd_t (*early_pgts)[PTRS_PER_PMD] = RIP_REL_REF(early_dynamic_pgts);
-	unsigned long physaddr = (unsigned long)&RIP_REL_REF(_text);
+	pmd_t (*early_pgts)[PTRS_PER_PMD] = rip_rel_ptr(early_dynamic_pgts);
+	unsigned long physaddr = (unsigned long)rip_rel_ptr(_text);
	unsigned long va_text, va_end;
	unsigned long pgtable_flags;
	unsigned long load_delta;
@@ -174,18 +174,18 @@ unsigned long __head __startup_64(unsigned long p2v_offset,
		for (;;);

	va_text = physaddr - p2v_offset;
-	va_end = (unsigned long)&RIP_REL_REF(_end) - p2v_offset;
+	va_end = (unsigned long)rip_rel_ptr(_end) - p2v_offset;

	/* Include the SME encryption mask in the fixup value */
	load_delta += sme_get_me_mask();

	/* Fixup the physical addresses in the page table */

-	pgd = &RIP_REL_REF(early_top_pgt)->pgd;
+	pgd = rip_rel_ptr(early_top_pgt);
	pgd[pgd_index(__START_KERNEL_map)] += load_delta;

	if (IS_ENABLED(CONFIG_X86_5LEVEL) && la57) {
-		p4d = (p4dval_t *)&RIP_REL_REF(level4_kernel_pgt);
+		p4d = (p4dval_t *)rip_rel_ptr(level4_kernel_pgt);
		p4d[MAX_PTRS_PER_P4D - 1] += load_delta;

		pgd[pgd_index(__START_KERNEL_map)] = (pgdval_t)p4d | _PAGE_TABLE;
@@ -258,7 +258,7 @@ unsigned long __head __startup_64(unsigned long p2v_offset,
	 * error, causing the BIOS to halt the system.
	 */

-	pmd = &RIP_REL_REF(level2_kernel_pgt)->pmd;
+	pmd = rip_rel_ptr(level2_kernel_pgt);

	/* invalidate pages before the kernel image */
	for (i = 0; i < pmd_index(va_text); i++)
@@ -531,7 +531,7 @@ static gate_desc bringup_idt_table[NUM_EXCEPTION_VECTORS] __page_aligned_data;
 static void __head startup_64_load_idt(void *vc_handler)
 {
	struct desc_ptr desc = {
-		.address = (unsigned long)&RIP_REL_REF(bringup_idt_table),
+		.address = (unsigned long)rip_rel_ptr(bringup_idt_table),
		.size = sizeof(bringup_idt_table) - 1,
	};
	struct idt_data data;
@@ -565,11 +565,10 @@ void early_setup_idt(void)
  */
 void __head startup_64_setup_gdt_idt(void)
 {
-	struct desc_struct *gdt = (void *)(__force unsigned long)gdt_page.gdt;
	void *handler = NULL;

	struct desc_ptr startup_gdt_descr = {
-		.address = (unsigned long)&RIP_REL_REF(*gdt),
+		.address = (unsigned long)rip_rel_ptr((__force void *)&gdt_page),
		.size = GDT_SIZE - 1,
	};

@@ -582,7 +581,7 @@ void __head startup_64_setup_gdt_idt(void)
		     "movl %%eax, %%es\n" : : "a"(__KERNEL_DS) : "memory");

	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT))
-		handler = &RIP_REL_REF(vc_no_ghcb);
+		handler = rip_rel_ptr(vc_no_ghcb);

	startup_64_load_idt(handler);
 }
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index 5eecdd92da10..e7fb3779b35f 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -318,8 +318,8 @@ void __head sme_encrypt_kernel(struct boot_params *bp)
	 * memory from being cached.
	 */
-	kernel_start = (unsigned long)RIP_REL_REF(_text);
-	kernel_end = ALIGN((unsigned long)RIP_REL_REF(_end), PMD_SIZE);
+	kernel_start = (unsigned long)rip_rel_ptr(_text);
+	kernel_end = ALIGN((unsigned long)rip_rel_ptr(_end), PMD_SIZE);
	kernel_len = kernel_end - kernel_start;

	initrd_start = 0;
@@ -345,7 +345,7 @@ void __head sme_encrypt_kernel(struct boot_params *bp)
	 *   pagetable structures for the encryption of the kernel
	 *   pagetable structures for workarea (in case not currently mapped)
	 */
-	execute_start = workarea_start = (unsigned long)RIP_REL_REF(sme_workarea);
+	execute_start = workarea_start = (unsigned long)rip_rel_ptr(sme_workarea);
	execute_end = execute_start + (PAGE_SIZE * 2) + PMD_SIZE;
	execute_len = execute_end - execute_start;

From patchwork Tue Apr 8 08:52:59 2025
Date: Tue, 8 Apr 2025 10:52:59 +0200
In-Reply-To: <20250408085254.836788-9-ardb+git@google.com>
Message-ID: <20250408085254.836788-13-ardb+git@google.com>
Subject: [PATCH v3 4/7] x86/boot: Move early kernel mapping code into startup/
From: Ard Biesheuvel
To: linux-efi@vger.kernel.org
Cc: x86@kernel.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
 Ard Biesheuvel, Tom Lendacky, Dionna Amalie Glaze, Kevin Loughlin

The startup code that constructs the kernel virtual mapping runs from
the 1:1 mapping of memory itself, and therefore cannot use absolute
symbol references. Before making changes in subsequent patches, move
this code into a separate source file under arch/x86/boot/startup/,
where all such code will be kept from now on.
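
(As context -- an illustrative recap, not code added by this patch: the
address arithmetic that the moved code performs, using the names that
appear in __startup_64() below.)

  physaddr   = (unsigned long)rip_rel_ptr(_text); /* where the image actually runs (1:1 map) */
  va_text    = physaddr - p2v_offset;             /* where the image is linked to run */
  load_delta = __START_KERNEL_map + p2v_offset;   /* fixup added to page table entries */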

Signed-off-by: Ard Biesheuvel
---
 arch/x86/boot/startup/Makefile     |   2 +-
 arch/x86/boot/startup/map_kernel.c | 224 ++++++++++++++++++++
 arch/x86/kernel/head64.c           | 211 +-----------------
 3 files changed, 226 insertions(+), 211 deletions(-)

diff --git a/arch/x86/boot/startup/Makefile b/arch/x86/boot/startup/Makefile
index 1beb5de30735..10319aee666b 100644
--- a/arch/x86/boot/startup/Makefile
+++ b/arch/x86/boot/startup/Makefile
@@ -15,7 +15,7 @@ KMSAN_SANITIZE	:= n
 UBSAN_SANITIZE	:= n
 KCOV_INSTRUMENT	:= n

-obj-$(CONFIG_X86_64)		+= gdt_idt.o
+obj-$(CONFIG_X86_64)		+= gdt_idt.o map_kernel.o
 lib-$(CONFIG_X86_64)		+= la57toggle.o
 lib-$(CONFIG_EFI_MIXED)		+= efi-mixed.o

diff --git a/arch/x86/boot/startup/map_kernel.c b/arch/x86/boot/startup/map_kernel.c
new file mode 100644
index 000000000000..5f1b7e0ba26e
--- /dev/null
+++ b/arch/x86/boot/startup/map_kernel.c
@@ -0,0 +1,224 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+extern pmd_t early_dynamic_pgts[EARLY_DYNAMIC_PAGE_TABLES][PTRS_PER_PMD];
+extern unsigned int next_early_pgt;
+
+static inline bool check_la57_support(void)
+{
+	if (!IS_ENABLED(CONFIG_X86_5LEVEL))
+		return false;
+
+	/*
+	 * 5-level paging is detected and enabled at kernel decompression
+	 * stage. Only check if it has been enabled there.
+	 */
+	if (!(native_read_cr4() & X86_CR4_LA57))
+		return false;
+
+	RIP_REL_REF(__pgtable_l5_enabled) = 1;
+	RIP_REL_REF(pgdir_shift) = 48;
+	RIP_REL_REF(ptrs_per_p4d) = 512;
+	RIP_REL_REF(page_offset_base) = __PAGE_OFFSET_BASE_L5;
+	RIP_REL_REF(vmalloc_base) = __VMALLOC_BASE_L5;
+	RIP_REL_REF(vmemmap_base) = __VMEMMAP_BASE_L5;
+
+	return true;
+}
+
+static unsigned long __head sme_postprocess_startup(struct boot_params *bp,
+						    pmdval_t *pmd,
+						    unsigned long p2v_offset)
+{
+	unsigned long paddr, paddr_end;
+	int i;
+
+	/* Encrypt the kernel and related (if SME is active) */
+	sme_encrypt_kernel(bp);
+
+	/*
+	 * Clear the memory encryption mask from the .bss..decrypted section.
+	 * The bss section will be memset to zero later in the initialization so
+	 * there is no need to zero it after changing the memory encryption
+	 * attribute.
+	 */
+	if (sme_get_me_mask()) {
+		paddr = (unsigned long)rip_rel_ptr(__start_bss_decrypted);
+		paddr_end = (unsigned long)rip_rel_ptr(__end_bss_decrypted);
+
+		for (; paddr < paddr_end; paddr += PMD_SIZE) {
+			/*
+			 * On SNP, transition the page to shared in the RMP table so that
+			 * it is consistent with the page table attribute change.
+			 *
+			 * __start_bss_decrypted has a virtual address in the high range
+			 * mapping (kernel .text). PVALIDATE, by way of
+			 * early_snp_set_memory_shared(), requires a valid virtual
+			 * address but the kernel is currently running off of the identity
+			 * mapping so use the PA to get a *currently* valid virtual address.
+			 */
+			early_snp_set_memory_shared(paddr, paddr, PTRS_PER_PMD);
+
+			i = pmd_index(paddr - p2v_offset);
+			pmd[i] -= sme_get_me_mask();
+		}
+	}
+
+	/*
+	 * Return the SME encryption mask (if SME is active) to be used as a
+	 * modifier for the initial pgdir entry programmed into CR3.
+	 */
+	return sme_get_me_mask();
+}
+
+/* Code in __startup_64() can be relocated during execution, but the compiler
+ * doesn't have to generate PC-relative relocations when accessing globals from
+ * that function. Clang actually does not generate them, which leads to
+ * boot-time crashes. To work around this problem, every global pointer must
+ * be accessed using RIP_REL_REF(). Kernel virtual addresses can be determined
+ * by subtracting p2v_offset from the RIP-relative address.
+ */
+unsigned long __head __startup_64(unsigned long p2v_offset,
+				  struct boot_params *bp)
+{
+	pmd_t (*early_pgts)[PTRS_PER_PMD] = rip_rel_ptr(early_dynamic_pgts);
+	unsigned long physaddr = (unsigned long)rip_rel_ptr(_text);
+	unsigned long va_text, va_end;
+	unsigned long pgtable_flags;
+	unsigned long load_delta;
+	pgdval_t *pgd;
+	p4dval_t *p4d;
+	pudval_t *pud;
+	pmdval_t *pmd, pmd_entry;
+	bool la57;
+	int i;
+
+	la57 = check_la57_support();
+
+	/* Is the address too large? */
+	if (physaddr >> MAX_PHYSMEM_BITS)
+		for (;;);
+
+	/*
+	 * Compute the delta between the address I am compiled to run at
+	 * and the address I am actually running at.
+	 */
+	load_delta = __START_KERNEL_map + p2v_offset;
+	RIP_REL_REF(phys_base) = load_delta;
+
+	/* Is the address not 2M aligned? */
+	if (load_delta & ~PMD_MASK)
+		for (;;);
+
+	va_text = physaddr - p2v_offset;
+	va_end = (unsigned long)rip_rel_ptr(_end) - p2v_offset;
+
+	/* Include the SME encryption mask in the fixup value */
+	load_delta += sme_get_me_mask();
+
+	/* Fixup the physical addresses in the page table */
+
+	pgd = rip_rel_ptr(early_top_pgt);
+	pgd[pgd_index(__START_KERNEL_map)] += load_delta;
+
+	if (IS_ENABLED(CONFIG_X86_5LEVEL) && la57) {
+		p4d = (p4dval_t *)rip_rel_ptr(level4_kernel_pgt);
+		p4d[MAX_PTRS_PER_P4D - 1] += load_delta;
+
+		pgd[pgd_index(__START_KERNEL_map)] = (pgdval_t)p4d | _PAGE_TABLE;
+	}
+
+	RIP_REL_REF(level3_kernel_pgt)[PTRS_PER_PUD - 2].pud += load_delta;
+	RIP_REL_REF(level3_kernel_pgt)[PTRS_PER_PUD - 1].pud += load_delta;
+
+	for (i = FIXMAP_PMD_TOP; i > FIXMAP_PMD_TOP - FIXMAP_PMD_NUM; i--)
+		RIP_REL_REF(level2_fixmap_pgt)[i].pmd += load_delta;
+
+	/*
+	 * Set up the identity mapping for the switchover. These
+	 * entries should *NOT* have the global bit set! This also
+	 * creates a bunch of nonsense entries but that is fine --
+	 * it avoids problems around wraparound.
+	 */
+
+	pud = &early_pgts[0]->pmd;
+	pmd = &early_pgts[1]->pmd;
+	RIP_REL_REF(next_early_pgt) = 2;
+
+	pgtable_flags = _KERNPG_TABLE_NOENC + sme_get_me_mask();
+
+	if (la57) {
+		p4d = &early_pgts[RIP_REL_REF(next_early_pgt)++]->pmd;
+
+		i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD;
+		pgd[i + 0] = (pgdval_t)p4d + pgtable_flags;
+		pgd[i + 1] = (pgdval_t)p4d + pgtable_flags;
+
+		i = physaddr >> P4D_SHIFT;
+		p4d[(i + 0) % PTRS_PER_P4D] = (pgdval_t)pud + pgtable_flags;
+		p4d[(i + 1) % PTRS_PER_P4D] = (pgdval_t)pud + pgtable_flags;
+	} else {
+		i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD;
+		pgd[i + 0] = (pgdval_t)pud + pgtable_flags;
+		pgd[i + 1] = (pgdval_t)pud + pgtable_flags;
+	}
+
+	i = physaddr >> PUD_SHIFT;
+	pud[(i + 0) % PTRS_PER_PUD] = (pudval_t)pmd + pgtable_flags;
+	pud[(i + 1) % PTRS_PER_PUD] = (pudval_t)pmd + pgtable_flags;
+
+	pmd_entry = __PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL;
+	/* Filter out unsupported __PAGE_KERNEL_* bits: */
+	pmd_entry &= RIP_REL_REF(__supported_pte_mask);
+	pmd_entry += sme_get_me_mask();
+	pmd_entry += physaddr;
+
+	for (i = 0; i < DIV_ROUND_UP(va_end - va_text, PMD_SIZE); i++) {
+		int idx = i + (physaddr >> PMD_SHIFT);
+
+		pmd[idx % PTRS_PER_PMD] = pmd_entry + i * PMD_SIZE;
+	}
+
+	/*
+	 * Fixup the kernel text+data virtual addresses. Note that
+	 * we might write invalid pmds, when the kernel is relocated
+	 * cleanup_highmap() fixes this up along with the mappings
+	 * beyond _end.
+	 *
+	 * Only the region occupied by the kernel image has so far
+	 * been checked against the table of usable memory regions
+	 * provided by the firmware, so invalidate pages outside that
+	 * region. A page table entry that maps to a reserved area of
+	 * memory would allow processor speculation into that area,
+	 * and on some hardware (particularly the UV platform) even
+	 * speculative access to some reserved areas is caught as an
+	 * error, causing the BIOS to halt the system.
+	 */
+
+	pmd = rip_rel_ptr(level2_kernel_pgt);
+
+	/* invalidate pages before the kernel image */
+	for (i = 0; i < pmd_index(va_text); i++)
+		pmd[i] &= ~_PAGE_PRESENT;
+
+	/* fixup pages that are part of the kernel image */
+	for (; i <= pmd_index(va_end); i++)
+		if (pmd[i] & _PAGE_PRESENT)
+			pmd[i] += load_delta;
+
+	/* invalidate pages after the kernel image */
+	for (; i < PTRS_PER_PMD; i++)
+		pmd[i] &= ~_PAGE_PRESENT;
+
+	return sme_postprocess_startup(bp, pmd, p2v_offset);
+}
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 9b2ffec4bbad..6b68a206fa7f 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -47,7 +47,7 @@
  * Manage page tables very early on.
  */
 extern pmd_t early_dynamic_pgts[EARLY_DYNAMIC_PAGE_TABLES][PTRS_PER_PMD];
-static unsigned int __initdata next_early_pgt;
+unsigned int __initdata next_early_pgt;
 pmdval_t early_pmd_flags = __PAGE_KERNEL_LARGE & ~(_PAGE_GLOBAL | _PAGE_NX);

 #ifdef CONFIG_X86_5LEVEL
@@ -67,215 +67,6 @@ unsigned long vmemmap_base __ro_after_init = __VMEMMAP_BASE_L4;
 EXPORT_SYMBOL(vmemmap_base);
 #endif

-static inline bool check_la57_support(void)
-{
-	if (!IS_ENABLED(CONFIG_X86_5LEVEL))
-		return false;
-
-	/*
-	 * 5-level paging is detected and enabled at kernel decompression
-	 * stage. Only check if it has been enabled there.
-	 */
-	if (!(native_read_cr4() & X86_CR4_LA57))
-		return false;
-
-	RIP_REL_REF(__pgtable_l5_enabled) = 1;
-	RIP_REL_REF(pgdir_shift) = 48;
-	RIP_REL_REF(ptrs_per_p4d) = 512;
-	RIP_REL_REF(page_offset_base) = __PAGE_OFFSET_BASE_L5;
-	RIP_REL_REF(vmalloc_base) = __VMALLOC_BASE_L5;
-	RIP_REL_REF(vmemmap_base) = __VMEMMAP_BASE_L5;
-
-	return true;
-}
-
-static unsigned long __head sme_postprocess_startup(struct boot_params *bp,
-						    pmdval_t *pmd,
-						    unsigned long p2v_offset)
-{
-	unsigned long paddr, paddr_end;
-	int i;
-
-	/* Encrypt the kernel and related (if SME is active) */
-	sme_encrypt_kernel(bp);
-
-	/*
-	 * Clear the memory encryption mask from the .bss..decrypted section.
-	 * The bss section will be memset to zero later in the initialization so
-	 * there is no need to zero it after changing the memory encryption
-	 * attribute.
-	 */
-	if (sme_get_me_mask()) {
-		paddr = (unsigned long)rip_rel_ptr(__start_bss_decrypted);
-		paddr_end = (unsigned long)rip_rel_ptr(__end_bss_decrypted);
-
-		for (; paddr < paddr_end; paddr += PMD_SIZE) {
-			/*
-			 * On SNP, transition the page to shared in the RMP table so that
-			 * it is consistent with the page table attribute change.
-			 *
-			 * __start_bss_decrypted has a virtual address in the high range
-			 * mapping (kernel .text). PVALIDATE, by way of
-			 * early_snp_set_memory_shared(), requires a valid virtual
-			 * address but the kernel is currently running off of the identity
-			 * mapping so use the PA to get a *currently* valid virtual address.
-			 */
-			early_snp_set_memory_shared(paddr, paddr, PTRS_PER_PMD);
-
-			i = pmd_index(paddr - p2v_offset);
-			pmd[i] -= sme_get_me_mask();
-		}
-	}
-
-	/*
-	 * Return the SME encryption mask (if SME is active) to be used as a
-	 * modifier for the initial pgdir entry programmed into CR3.
-	 */
-	return sme_get_me_mask();
-}
-
-/* Code in __startup_64() can be relocated during execution, but the compiler
- * doesn't have to generate PC-relative relocations when accessing globals from
- * that function. Clang actually does not generate them, which leads to
- * boot-time crashes. To work around this problem, every global pointer must
- * be accessed using RIP_REL_REF(). Kernel virtual addresses can be determined
- * by subtracting p2v_offset from the RIP-relative address.
- */
-unsigned long __head __startup_64(unsigned long p2v_offset,
-				  struct boot_params *bp)
-{
-	pmd_t (*early_pgts)[PTRS_PER_PMD] = rip_rel_ptr(early_dynamic_pgts);
-	unsigned long physaddr = (unsigned long)rip_rel_ptr(_text);
-	unsigned long va_text, va_end;
-	unsigned long pgtable_flags;
-	unsigned long load_delta;
-	pgdval_t *pgd;
-	p4dval_t *p4d;
-	pudval_t *pud;
-	pmdval_t *pmd, pmd_entry;
-	bool la57;
-	int i;
-
-	la57 = check_la57_support();
-
-	/* Is the address too large? */
-	if (physaddr >> MAX_PHYSMEM_BITS)
-		for (;;);
-
-	/*
-	 * Compute the delta between the address I am compiled to run at
-	 * and the address I am actually running at.
-	 */
-	load_delta = __START_KERNEL_map + p2v_offset;
-	RIP_REL_REF(phys_base) = load_delta;
-
-	/* Is the address not 2M aligned? */
-	if (load_delta & ~PMD_MASK)
-		for (;;);
-
-	va_text = physaddr - p2v_offset;
-	va_end = (unsigned long)rip_rel_ptr(_end) - p2v_offset;
-
-	/* Include the SME encryption mask in the fixup value */
-	load_delta += sme_get_me_mask();
-
-	/* Fixup the physical addresses in the page table */
-
-	pgd = rip_rel_ptr(early_top_pgt);
-	pgd[pgd_index(__START_KERNEL_map)] += load_delta;
-
-	if (IS_ENABLED(CONFIG_X86_5LEVEL) && la57) {
-		p4d = (p4dval_t *)rip_rel_ptr(level4_kernel_pgt);
-		p4d[MAX_PTRS_PER_P4D - 1] += load_delta;
-
-		pgd[pgd_index(__START_KERNEL_map)] = (pgdval_t)p4d | _PAGE_TABLE;
-	}
-
-	RIP_REL_REF(level3_kernel_pgt)[PTRS_PER_PUD - 2].pud += load_delta;
-	RIP_REL_REF(level3_kernel_pgt)[PTRS_PER_PUD - 1].pud += load_delta;
-
-	for (i = FIXMAP_PMD_TOP; i > FIXMAP_PMD_TOP - FIXMAP_PMD_NUM; i--)
-		RIP_REL_REF(level2_fixmap_pgt)[i].pmd += load_delta;
-
-	/*
-	 * Set up the identity mapping for the switchover. These
-	 * entries should *NOT* have the global bit set! This also
-	 * creates a bunch of nonsense entries but that is fine --
-	 * it avoids problems around wraparound.
-	 */
-
-	pud = &early_pgts[0]->pmd;
-	pmd = &early_pgts[1]->pmd;
-	RIP_REL_REF(next_early_pgt) = 2;
-
-	pgtable_flags = _KERNPG_TABLE_NOENC + sme_get_me_mask();
-
-	if (la57) {
-		p4d = &early_pgts[RIP_REL_REF(next_early_pgt)++]->pmd;
-
-		i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD;
-		pgd[i + 0] = (pgdval_t)p4d + pgtable_flags;
-		pgd[i + 1] = (pgdval_t)p4d + pgtable_flags;
-
-		i = physaddr >> P4D_SHIFT;
-		p4d[(i + 0) % PTRS_PER_P4D] = (pgdval_t)pud + pgtable_flags;
-		p4d[(i + 1) % PTRS_PER_P4D] = (pgdval_t)pud + pgtable_flags;
-	} else {
-		i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD;
-		pgd[i + 0] = (pgdval_t)pud + pgtable_flags;
-		pgd[i + 1] = (pgdval_t)pud + pgtable_flags;
-	}
-
-	i = physaddr >> PUD_SHIFT;
-	pud[(i + 0) % PTRS_PER_PUD] = (pudval_t)pmd + pgtable_flags;
-	pud[(i + 1) % PTRS_PER_PUD] = (pudval_t)pmd + pgtable_flags;
-
-	pmd_entry = __PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL;
-	/* Filter out unsupported __PAGE_KERNEL_* bits: */
-	pmd_entry &= RIP_REL_REF(__supported_pte_mask);
-	pmd_entry += sme_get_me_mask();
-	pmd_entry += physaddr;
-
-	for (i = 0; i < DIV_ROUND_UP(va_end - va_text, PMD_SIZE); i++) {
-		int idx = i + (physaddr >> PMD_SHIFT);
-
-		pmd[idx % PTRS_PER_PMD] = pmd_entry + i * PMD_SIZE;
-	}
-
-	/*
-	 * Fixup the kernel text+data virtual addresses. Note that
-	 * we might write invalid pmds, when the kernel is relocated
-	 * cleanup_highmap() fixes this up along with the mappings
-	 * beyond _end.
-	 *
-	 * Only the region occupied by the kernel image has so far
-	 * been checked against the table of usable memory regions
-	 * provided by the firmware, so invalidate pages outside that
-	 * region. A page table entry that maps to a reserved area of
-	 * memory would allow processor speculation into that area,
-	 * and on some hardware (particularly the UV platform) even
-	 * speculative access to some reserved areas is caught as an
-	 * error, causing the BIOS to halt the system.
-	 */
-
-	pmd = rip_rel_ptr(level2_kernel_pgt);
-
-	/* invalidate pages before the kernel image */
-	for (i = 0; i < pmd_index(va_text); i++)
-		pmd[i] &= ~_PAGE_PRESENT;
-
-	/* fixup pages that are part of the kernel image */
-	for (; i <= pmd_index(va_end); i++)
-		if (pmd[i] & _PAGE_PRESENT)
-			pmd[i] += load_delta;
-
-	/* invalidate pages after the kernel image */
-	for (; i < PTRS_PER_PMD; i++)
-		pmd[i] &= ~_PAGE_PRESENT;
-
-	return sme_postprocess_startup(bp, pmd, p2v_offset);
-}
-
 /* Wipe all early page tables except for the kernel symbol map */
 static void __init reset_early_page_tables(void)
 {

From patchwork Tue Apr 8 08:53:01 2025
Date: Tue, 8 Apr 2025 10:53:01 +0200
In-Reply-To: <20250408085254.836788-9-ardb+git@google.com>
Message-ID: <20250408085254.836788-15-ardb+git@google.com>
Subject: [PATCH v3 6/7] x86/boot: Move early SME init code into startup/
From: Ard Biesheuvel
To: linux-efi@vger.kernel.org
Cc: x86@kernel.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
 Ard Biesheuvel, Tom Lendacky, Dionna Amalie Glaze, Kevin Loughlin

Move the SME initialization code, which runs from the 1:1 mapping of
memory as it operates on the kernel virtual mapping, into the new
sub-directory arch/x86/boot/startup/, where all startup code that needs
to tolerate executing from the 1:1 mapping will reside.
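
(For context, not part of this patch: the moved code already takes all
symbol addresses RIP-relatively, as in this fragment of
sme_encrypt_kernel() updated earlier in the series, which is what lets
it run from the 1:1 mapping.)

  kernel_start = (unsigned long)rip_rel_ptr(_text);
  kernel_end   = ALIGN((unsigned long)rip_rel_ptr(_end), PMD_SIZE);
  kernel_len   = kernel_end - kernel_start;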

Signed-off-by: Ard Biesheuvel
---
 arch/x86/boot/startup/Makefile                             | 1 +
 arch/x86/{mm/mem_encrypt_identity.c => boot/startup/sme.c} | 2 --
 arch/x86/mm/Makefile                                       | 6 ------
 3 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/arch/x86/boot/startup/Makefile b/arch/x86/boot/startup/Makefile
index 10319aee666b..ccdfc42a4d59 100644
--- a/arch/x86/boot/startup/Makefile
+++ b/arch/x86/boot/startup/Makefile
@@ -16,6 +16,7 @@ UBSAN_SANITIZE	:= n
 KCOV_INSTRUMENT	:= n

 obj-$(CONFIG_X86_64)		+= gdt_idt.o map_kernel.o
+obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= sme.o
 lib-$(CONFIG_X86_64)		+= la57toggle.o
 lib-$(CONFIG_EFI_MIXED)		+= efi-mixed.o

diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/boot/startup/sme.c
similarity index 99%
rename from arch/x86/mm/mem_encrypt_identity.c
rename to arch/x86/boot/startup/sme.c
index e7fb3779b35f..23d10cda5b58 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/boot/startup/sme.c
@@ -45,8 +45,6 @@
 #include
 #include

-#include "mm_internal.h"
-
 #define PGD_FLAGS		_KERNPG_TABLE_NOENC
 #define P4D_FLAGS		_KERNPG_TABLE_NOENC
 #define PUD_FLAGS		_KERNPG_TABLE_NOENC
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 32035d5be5a0..3faa60f13a61 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -3,12 +3,10 @@ KCOV_INSTRUMENT_tlb.o			:= n
 KCOV_INSTRUMENT_mem_encrypt.o		:= n
 KCOV_INSTRUMENT_mem_encrypt_amd.o	:= n
-KCOV_INSTRUMENT_mem_encrypt_identity.o	:= n
 KCOV_INSTRUMENT_pgprot.o		:= n

 KASAN_SANITIZE_mem_encrypt.o		:= n
 KASAN_SANITIZE_mem_encrypt_amd.o	:= n
-KASAN_SANITIZE_mem_encrypt_identity.o	:= n
 KASAN_SANITIZE_pgprot.o			:= n

 # Disable KCSAN entirely, because otherwise we get warnings that some functions
@@ -16,12 +14,10 @@ KASAN_SANITIZE_pgprot.o			:= n
 KCSAN_SANITIZE := n
 # Avoid recursion by not calling KMSAN hooks for CEA code.
 KMSAN_SANITIZE_cpu_entry_area.o		:= n
-KMSAN_SANITIZE_mem_encrypt_identity.o	:= n

 ifdef CONFIG_FUNCTION_TRACER
 CFLAGS_REMOVE_mem_encrypt.o		= -pg
 CFLAGS_REMOVE_mem_encrypt_amd.o		= -pg
-CFLAGS_REMOVE_mem_encrypt_identity.o	= -pg
 CFLAGS_REMOVE_pgprot.o			= -pg
 endif

@@ -32,7 +28,6 @@ obj-y				+= pat/

 # Make sure __phys_addr has no stackprotector
 CFLAGS_physaddr.o		:= -fno-stack-protector
-CFLAGS_mem_encrypt_identity.o	:= -fno-stack-protector

 CFLAGS_fault.o := -I $(src)/../include/asm/trace

@@ -63,5 +58,4 @@ obj-$(CONFIG_MITIGATION_PAGE_TABLE_ISOLATION)	+= pti.o
 obj-$(CONFIG_X86_MEM_ENCRYPT)	+= mem_encrypt.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_amd.o
-obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_identity.o

 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_boot.o