From patchwork Tue Jun 7 16:06:16 2016
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 69528
From: Julien Grall
To: xen-devel@lists.xen.org
Date: Tue, 7 Jun 2016 17:06:16 +0100
Message-Id: <1465315583-1278-10-git-send-email-julien.grall@arm.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1465315583-1278-1-git-send-email-julien.grall@arm.com>
References: <1465315583-1278-1-git-send-email-julien.grall@arm.com>
Cc: andre.przywara@arm.com, Julien Grall, sstabellini@kernel.org,
    wei.chen@arm.com, steve.capper@arm.com
Subject: [Xen-devel] [PATCH v3 09/16] xen/arm: Introduce alternative runtime patching
List-Id: Xen developer discussion
Some processor errata require modifying code sequences. As those
modifications may impact performance, they should only be enabled on
affected cores. Furthermore, Xen may also want to take advantage of new
hardware features coming with ARMv8.1 and ARMv8.2.

This patch adds an infrastructure to patch Xen at boot time depending on
the "features" available on the platform. This code is based on the file
arch/arm64/kernel/alternative.c in Linux 4.6-rc3. Any references to
arm64 have been dropped to make the code as generic as possible.

When Xen creates the page tables, all the executable sections (.text and
.init.text) are marked read-only, which is then enforced by setting
SCTLR.WXN. Whilst it might be possible to mark those sections read-only
only after Xen has been patched, extra care would be needed to avoid
possible TLB conflicts (see D4-1732 in ARM DDI 0487A.i) as all physical
CPUs will be running by then.

All the physical CPUs have to be brought up before patching Xen because
each core may have different errata/features which require code
patching. The only way to find them is to probe system registers on
each CPU.

To avoid extra complexity, a temporary writeable mapping is created with
vmap. This mapping is used to write the new instructions.

Lastly, runtime patching is currently not necessary for ARM32, so the
code is only enabled for ARM64.

Signed-off-by: Julien Grall

---
Changes in v2:
    - Use hard tabs in Kconfig
    - Update the copyright year
    - Update the commit message to explain the extra mapping

Changes in v3:
    - .altinstr_replacement should live in init.text
    - Add a comment to explain when apply_alternatives_all should be
      called.
---
 xen/arch/arm/Kconfig              |   5 +
 xen/arch/arm/Makefile             |   1 +
 xen/arch/arm/alternative.c        | 221 ++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/setup.c              |   7 ++
 xen/arch/arm/xen.lds.S            |  10 ++
 xen/include/asm-arm/alternative.h | 165 ++++++++++++++++++++++++++++
 xen/include/asm-arm/arm64/insn.h  |  16 +++
 7 files changed, 425 insertions(+)
 create mode 100644 xen/arch/arm/alternative.c
 create mode 100644 xen/include/asm-arm/alternative.h

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 6231cd5..0141bd9 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -14,6 +14,7 @@ config ARM_64
 	def_bool y
 	depends on 64BIT
 	select HAS_GICV3
+	select ALTERNATIVE
 
 config ARM
 	def_bool y
@@ -46,6 +47,10 @@ config ACPI
 config HAS_GICV3
 	bool
 
+# Select ALTERNATIVE if the architecture supports runtime patching
+config ALTERNATIVE
+	bool
+
 endmenu
 
 source "common/Kconfig"

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index b95d924..67ccc43 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -4,6 +4,7 @@ subdir-y += platforms
 subdir-$(CONFIG_ARM_64) += efi
 subdir-$(CONFIG_ACPI) += acpi
 
+obj-$(CONFIG_ALTERNATIVE) += alternative.o
 obj-y += bootfdt.o
 obj-y += cpu.o
 obj-y += cpufeature.o

diff --git a/xen/arch/arm/alternative.c b/xen/arch/arm/alternative.c
new file mode 100644
index 0000000..d00f98e
--- /dev/null
+++ b/xen/arch/arm/alternative.c
@@ -0,0 +1,221 @@
+/*
+ * alternative runtime patching
+ * inspired by the x86 version
+ *
+ * Copyright (C) 2014-2016 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see .
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define __ALT_PTR(a,f)      (u32 *)((void *)&(a)->f + (a)->f)
+#define ALT_ORIG_PTR(a)     __ALT_PTR(a, orig_offset)
+#define ALT_REPL_PTR(a)     __ALT_PTR(a, alt_offset)
+
+extern const struct alt_instr __alt_instructions[], __alt_instructions_end[];
+
+struct alt_region {
+    const struct alt_instr *begin;
+    const struct alt_instr *end;
+};
+
+/*
+ * Check if the target PC is within an alternative block.
+ */
+static bool_t branch_insn_requires_update(const struct alt_instr *alt,
+                                          unsigned long pc)
+{
+    unsigned long replptr;
+
+    if ( is_active_kernel_text(pc) )
+        return 1;
+
+    replptr = (unsigned long)ALT_REPL_PTR(alt);
+    if ( pc >= replptr && pc <= (replptr + alt->alt_len) )
+        return 0;
+
+    /*
+     * Branching into *another* alternate sequence is doomed, and
+     * we're not even trying to fix it up.
+     */
+    BUG();
+}
+
+static u32 get_alt_insn(const struct alt_instr *alt,
+                        const u32 *insnptr, const u32 *altinsnptr)
+{
+    u32 insn;
+
+    insn = le32_to_cpu(*altinsnptr);
+
+    if ( insn_is_branch_imm(insn) )
+    {
+        s32 offset = insn_get_branch_offset(insn);
+        unsigned long target;
+
+        target = (unsigned long)altinsnptr + offset;
+
+        /*
+         * If we're branching inside the alternate sequence,
+         * do not rewrite the instruction, as it is already
+         * correct. Otherwise, generate the new instruction.
+         */
+        if ( branch_insn_requires_update(alt, target) )
+        {
+            offset = target - (unsigned long)insnptr;
+            insn = insn_set_branch_offset(insn, offset);
+        }
+    }
+
+    return insn;
+}
+
+static int __apply_alternatives(const struct alt_region *region)
+{
+    const struct alt_instr *alt;
+    const u32 *origptr, *replptr;
+    u32 *writeptr, *writemap;
+    mfn_t text_mfn = _mfn(virt_to_mfn(_stext));
+    unsigned int text_order = get_order_from_bytes(_end - _start);
+
+    printk("Patching kernel code\n");
+
+    /*
+     * The text section is read-only. So re-map Xen to be able to patch
+     * the code.
+     */
+    writemap = __vmap(&text_mfn, 1 << text_order, 1, 1, PAGE_HYPERVISOR,
+                      VMAP_DEFAULT);
+    if ( !writemap )
+    {
+        printk("alternatives: Unable to map the text section\n");
+        return -ENOMEM;
+    }
+
+    for ( alt = region->begin; alt < region->end; alt++ )
+    {
+        u32 insn;
+        int i, nr_inst;
+
+        if ( !cpus_have_cap(alt->cpufeature) )
+            continue;
+
+        BUG_ON(alt->alt_len != alt->orig_len);
+
+        origptr = ALT_ORIG_PTR(alt);
+        writeptr = origptr - (u32 *)_start + writemap;
+        replptr = ALT_REPL_PTR(alt);
+
+        nr_inst = alt->alt_len / sizeof(insn);
+
+        for ( i = 0; i < nr_inst; i++ )
+        {
+            insn = get_alt_insn(alt, origptr + i, replptr + i);
+            *(writeptr + i) = cpu_to_le32(insn);
+        }
+
+        /* Ensure the new instructions reached memory */
+        clean_and_invalidate_dcache_va_range(writeptr,
+                                             (sizeof (*writeptr) * nr_inst));
+    }
+
+    /* Nuke the instruction cache */
+    invalidate_icache();
+
+    vunmap(writemap);
+
+    return 0;
+}
+
+/*
+ * We might be patching the stop_machine state machine, so implement a
+ * really simple polling protocol here.
+ */
+static int __apply_alternatives_multi_stop(void *unused)
+{
+    static int patched = 0;
+    struct alt_region region = {
+        .begin = __alt_instructions,
+        .end = __alt_instructions_end,
+    };
+
+    /* We always have a CPU 0 at this point (__init) */
+    if ( smp_processor_id() )
+    {
+        while ( !read_atomic(&patched) )
+            cpu_relax();
+        isb();
+    }
+    else
+    {
+        int ret;
+
+        BUG_ON(patched);
+        ret = __apply_alternatives(&region);
+        /* The patching is not expected to fail during boot. */
+        BUG_ON(ret != 0);
+
+        /* Barriers provided by the cache flushing */
+        write_atomic(&patched, 1);
+    }
+
+    return 0;
+}
+
+/*
+ * This function should only be called during boot and before CPU0 jumps
+ * into the idle_loop.
+ */
+void __init apply_alternatives_all(void)
+{
+    int ret;
+
+    /* better not try code patching on a live SMP system */
+    ret = stop_machine_run(__apply_alternatives_multi_stop, NULL, NR_CPUS);
+
+    /* stop_machine_run should never fail at this stage of the boot */
+    BUG_ON(ret);
+}
+
+int apply_alternatives(void *start, size_t length)
+{
+    struct alt_region region = {
+        .begin = start,
+        .end = start + length,
+    };
+
+    return __apply_alternatives(&region);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 9bc11c4..01e2900 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -835,6 +836,12 @@
 
     do_initcalls();
 
+    /*
+     * It needs to be called after do_initcalls to be able to use
+     * stop_machine (tasklets initialized via an initcall).
+     */
+    apply_alternatives_all();
+
     /* Create initial domain 0.
      */
 
     /* The vGIC for DOM0 is exactly emulating the hardware GIC */
     config.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;

diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index 1f010bd..495f9d8 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -129,6 +129,9 @@ SECTIONS
        _sinittext = .;
        *(.init.text)
        _einittext = .;
+#ifdef CONFIG_ALTERNATIVE
+       *(.altinstr_replacement)
+#endif
   } :text
   . = ALIGN(PAGE_SIZE);
   .init.data : {
@@ -167,6 +170,13 @@ SECTIONS
        *(.xsm_initcall.init)
        __xsm_initcall_end = .;
   } :text
+#ifdef CONFIG_ALTERNATIVE
+  .init.alternatives : {
+       __alt_instructions = .;
+       *(.altinstructions)
+       __alt_instructions_end = .;
+  }
+#endif
   __init_end_efi = .;
   . = ALIGN(STACK_SIZE);
   __init_end = .;

diff --git a/xen/include/asm-arm/alternative.h b/xen/include/asm-arm/alternative.h
new file mode 100644
index 0000000..7e3a610
--- /dev/null
+++ b/xen/include/asm-arm/alternative.h
@@ -0,0 +1,165 @@
+#ifndef __ASM_ALTERNATIVE_H
+#define __ASM_ALTERNATIVE_H
+
+#include
+#include
+#include
+
+#ifdef CONFIG_ALTERNATIVE
+
+#ifndef __ASSEMBLY__
+
+#include
+#include
+#include
+
+struct alt_instr {
+    s32 orig_offset;    /* offset to original instruction */
+    s32 alt_offset;     /* offset to replacement instruction */
+    u16 cpufeature;     /* cpufeature bit set for replacement */
+    u8  orig_len;       /* size of original instruction(s) */
+    u8  alt_len;        /* size of new instruction(s), <= orig_len */
+};
+
+void __init apply_alternatives_all(void);
+int apply_alternatives(void *start, size_t length);
+
+#define ALTINSTR_ENTRY(feature)                                       \
+	" .word 661b - .\n"                  /* label */              \
+	" .word 663f - .\n"                  /* new instruction */    \
+	" .hword " __stringify(feature) "\n" /* feature bit */        \
+	" .byte 662b-661b\n"                 /* source len */         \
+	" .byte 664f-663f\n"                 /* replacement len */
+
+/*
+ * alternative assembly primitive:
+ *
+ * If any of these .org directives fail, it means that insn1 and insn2
+ * don't have the same length.
+ * This used to be written as
+ *
+ *     .if ((664b-663b) != (662b-661b))
+ *         .error "Alternatives instruction length mismatch"
+ *     .endif
+ *
+ * but most assemblers die if insn1 or insn2 have a .inst. This should
+ * be fixed in a binutils release posterior to 2.25.51.0.2 (anything
+ * containing commit 4e4d08cf7399b606 or c1baaddf8861).
+ */
+#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled)   \
+	".if "__stringify(cfg_enabled)" == 1\n"                       \
+	"661:\n\t"                                                    \
+	oldinstr "\n"                                                 \
+	"662:\n"                                                      \
+	".pushsection .altinstructions,\"a\"\n"                       \
+	ALTINSTR_ENTRY(feature)                                       \
+	".popsection\n"                                               \
+	".pushsection .altinstr_replacement, \"a\"\n"                 \
+	"663:\n\t"                                                    \
+	newinstr "\n"                                                 \
+	"664:\n\t"                                                    \
+	".popsection\n\t"                                             \
+	".org . - (664b-663b) + (662b-661b)\n\t"                      \
+	".org . - (662b-661b) + (664b-663b)\n"                        \
+	".endif\n"
+
+#define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...)       \
+	__ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg))
+
+#else
+
+#include
+
+.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
+	.word \orig_offset - .
+	.word \alt_offset - .
+	.hword \feature
+	.byte \orig_len
+	.byte \alt_len
+.endm
+
+.macro alternative_insn insn1, insn2, cap, enable = 1
+	.if \enable
+661:	\insn1
+662:	.pushsection .altinstructions, "a"
+	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f
+	.popsection
+	.pushsection .altinstr_replacement, "ax"
+663:	\insn2
+664:	.popsection
+	.org . - (664b-663b) + (662b-661b)
+	.org . - (662b-661b) + (664b-663b)
+	.endif
+.endm
+
+/*
+ * Begin an alternative code sequence.
+ *
+ * The code that follows this macro will be assembled and linked as
+ * normal. There are no restrictions on this code.
+ */
+.macro alternative_if_not cap, enable = 1
+	.if \enable
+	.pushsection .altinstructions, "a"
+	altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f
+	.popsection
+661:
+	.endif
+.endm
+
+/*
+ * Provide the alternative code sequence.
+ *
+ * The code that follows this macro is assembled into a special
+ * section to be used for dynamic patching. Code that follows this
+ * macro must:
+ *
+ * 1. Be exactly the same length (in bytes) as the default code
+ *    sequence.
+ *
+ * 2. Not contain a branch target that is used outside of the
+ *    alternative sequence it is defined in (branches into an
+ *    alternative sequence are not fixed up).
+ */
+.macro alternative_else
+662:	.pushsection .altinstr_replacement, "ax"
+663:
+.endm
+
+/*
+ * Complete an alternative code sequence.
+ */
+.macro alternative_endif
+664:	.popsection
+	.org . - (664b-663b) + (662b-661b)
+	.org . - (662b-661b) + (664b-663b)
+.endm
+
+#define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...)                 \
+	alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
+
+#endif  /* __ASSEMBLY__ */
+
+/*
+ * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature));
+ *
+ * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature, CONFIG_FOO));
+ * N.B. If CONFIG_FOO is specified, but not selected, the whole block
+ * will be omitted, including oldinstr.
+ */
+#define ALTERNATIVE(oldinstr, newinstr, ...)                          \
+	_ALTERNATIVE_CFG(oldinstr, newinstr, __VA_ARGS__, 1)
+
+#else /* !CONFIG_ALTERNATIVE */
+
+static inline void apply_alternatives_all(void)
+{
+}
+
+static inline int apply_alternatives(void *start, size_t length)
+{
+    return 0;
+}
+
+#endif
+
+#endif /* __ASM_ALTERNATIVE_H */
diff --git a/xen/include/asm-arm/arm64/insn.h b/xen/include/asm-arm/arm64/insn.h
index cfcdbe9..6ce37be 100644
--- a/xen/include/asm-arm/arm64/insn.h
+++ b/xen/include/asm-arm/arm64/insn.h
@@ -61,6 +61,22 @@ u32 aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
 s32 aarch64_get_branch_offset(u32 insn);
 u32 aarch64_set_branch_offset(u32 insn, s32 offset);
 
+/* Wrapper for common code */
+static inline bool insn_is_branch_imm(u32 insn)
+{
+    return aarch64_insn_is_branch_imm(insn);
+}
+
+static inline s32 insn_get_branch_offset(u32 insn)
+{
+    return aarch64_get_branch_offset(insn);
+}
+
+static inline u32 insn_set_branch_offset(u32 insn, s32 offset)
+{
+    return aarch64_set_branch_offset(insn, offset);
+}
+
 #endif /* !__ARCH_ARM_ARM64_INSN */
 /*
  * Local variables: