Message ID | 20191011134744.2477-5-richard.henderson@linaro.org |
---|---|
State | New |
Series | [v5,01/22] target/arm: Add MTE_ACTIVE to tb_flags |
On Fri, 11 Oct 2019 at 14:49, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Implements the rules of "PE generation of Checked and Unchecked
> accesses" which aren't already implied by TB_FLAGS_MTE_ACTIVE.
> Implements the rules of "PE handling of Tag Check Failure".
>
> Does not implement tag physical address space, so all operations
> reduce to unchecked so far.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> v2: Fix TFSR update.
> v3: Split helper_mte_check per {1,2} IAs; take tbi data from translate.
> v5: Split helper_mte_check3, the only one that needs a runtime check for tbi.
> ---
>  target/arm/helper-a64.h    |   4 +
>  target/arm/mte_helper.c    | 167 +++++++++++++++++++++++++++++++++++++
>  target/arm/translate-a64.c |  15 +++-
>  target/arm/Makefile.objs   |   1 +
>  4 files changed, 186 insertions(+), 1 deletion(-)
>  create mode 100644 target/arm/mte_helper.c
>
> diff --git a/target/arm/helper-a64.h b/target/arm/helper-a64.h
> index a915c1247f..a82e21f15a 100644
> --- a/target/arm/helper-a64.h
> +++ b/target/arm/helper-a64.h
> @@ -102,3 +102,7 @@ DEF_HELPER_FLAGS_3(autda, TCG_CALL_NO_WG, i64, env, i64, i64)
>  DEF_HELPER_FLAGS_3(autdb, TCG_CALL_NO_WG, i64, env, i64, i64)
>  DEF_HELPER_FLAGS_2(xpaci, TCG_CALL_NO_RWG_SE, i64, env, i64)
>  DEF_HELPER_FLAGS_2(xpacd, TCG_CALL_NO_RWG_SE, i64, env, i64)
> +
> +DEF_HELPER_FLAGS_2(mte_check1, TCG_CALL_NO_WG, i64, env, i64)
> +DEF_HELPER_FLAGS_2(mte_check2, TCG_CALL_NO_WG, i64, env, i64)
> +DEF_HELPER_FLAGS_3(mte_check3, TCG_CALL_NO_WG, i64, env, i64, i32)
> diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
> new file mode 100644
> index 0000000000..bbb90cbe86
> --- /dev/null
> +++ b/target/arm/mte_helper.c
> @@ -0,0 +1,167 @@
> +/*
> + * ARM v8.5-MemTag Operations
> + *
> + * Copyright (c) 2019 Linaro, Ltd.
> + *
> + * This library is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * This library is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; if not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "cpu.h"
> +#include "internals.h"
> +#include "exec/exec-all.h"
> +#include "exec/cpu_ldst.h"
> +#include "exec/helper-proto.h"
> +
> +
> +static int get_allocation_tag(CPUARMState *env, uint64_t ptr, uintptr_t ra)
> +{
> +    /* Tag storage not implemented. */
> +    return -1;
> +}
> +
> +static int allocation_tag_from_addr(uint64_t ptr)
> +{
> +    ptr += 1ULL << 55;  /* carry ptr[55] into ptr[59:56]. */
> +    return extract64(ptr, 56, 4);

What's the carry-bit-55 logic for? The pseudocode
AArch64.AllocationTagFromAddress just returns bits [59:56].

> +}
> +
> +/*
> + * Perform a checked access for MTE.
> + * On arrival, TBI is known to enabled, as is allocation_tag_access_enabled.

"to be"

> + */
> +static uint64_t do_mte_check(CPUARMState *env, uint64_t dirty_ptr,
> +                             uint64_t clean_ptr, uint32_t select,
> +                             uintptr_t ra)
> +{
> +    ARMMMUIdx stage1 = arm_stage1_mmu_idx(env);
> +    int ptr_tag, mem_tag;
> +
> +    /*
> +     * If TCMA is enabled, then physical tag 0 is unchecked.
> +     * Note the rules in D6.8.1 are written with logical tags, where
> +     * the corresponding physical tag rule is simpler: equal to 0.
> +     * We will need the physical tag below anyway.
> +     */

This reads a bit oddly, because (in the final version of the spec)
physical and logical tags are identical (AArch64.PhysicalTag()
just returns bits [59:56] of the vaddr).

> +    ptr_tag = allocation_tag_from_addr(dirty_ptr);
> +    if (ptr_tag == 0) {
> +        ARMVAParameters p = aa64_va_parameters(env, dirty_ptr, stage1, true);
> +        if (p.tcma) {
> +            return clean_ptr;
> +        }
> +    }

I don't think this logic gets the "regime has two address ranges"
case correct. For a two-address-range translation regime (where
TCR_ELx has TCMA0 and TCMA1 bits, rather than just a single TCMA bit),
then the 'select' argument to this function needs to be involved,
because we should do a tag-unchecked access if:
 * addr[59:55]==0b00000 (ie select == 0 and ptr_tag == 0)
   and TCR_ELx.TCMA0 is set
 * addr[59:55]==0b11111 (ie select == 1 and ptr_tag == 0xf)
   and TCR_ELx.TCMA1 is set
(the pseudocode for this is in AArch64.AccessTagIsChecked(),
and the TCR_EL1.TCMA[01] bit definitions agree; the text in
D6.8.1 appears to be confused.)

> +
> +    /*
> +     * If an access is made to an address that does not provide tag
> +     * storage, the result is IMPLEMENTATION DEFINED.  We choose to
> +     * treat the access as unchecked.
> +     * This is similar to MemAttr != Tagged, which are also unchecked.
> +     */
> +    mem_tag = get_allocation_tag(env, clean_ptr, ra);
> +    if (mem_tag < 0) {
> +        return clean_ptr;
> +    }
> +
> +    /* If the tags do not match, the tag check operation fails. */
> +    if (unlikely(ptr_tag != mem_tag)) {
> +        int el, regime_el, tcf;
> +        uint64_t sctlr;
> +
> +        el = arm_current_el(env);
> +        regime_el = (el ? el : 1);  /* TODO: ARMv8.1-VHE EL2&0 regime */

We could write this as "regime_el(env, stage1)" if that function
wasn't local to helper.c, right ?

> +        sctlr = env->cp15.sctlr_el[regime_el];
> +        if (el == 0) {
> +            tcf = extract64(sctlr, 38, 2);
> +        } else {
> +            tcf = extract64(sctlr, 40, 2);
> +        }
> +
> +        switch (tcf) {
> +        case 1:
> +            /*
> +             * Tag check fail causes a synchronous exception.
> +             *
> +             * In restore_state_to_opc, we set the exception syndrome
> +             * for the load or store operation.  Do that first so we
> +             * may overwrite that with the syndrome for the tag check.
> +             */
> +            cpu_restore_state(env_cpu(env), ra, true);
> +            env->exception.vaddress = dirty_ptr;
> +            raise_exception(env, EXCP_DATA_ABORT,
> +                            syn_data_abort_no_iss(el != 0, 0, 0, 0, 0, 0x11),
> +                            exception_target_el(env));
> +            /* noreturn; fall through to assert anyway */

hopefully this fallthrough comment syntax doesn't confuse any
of our compilers/static analyzers...

> +
> +        case 0:
> +            /*
> +             * Tag check fail does not affect the PE.
> +             * We eliminate this case by not setting MTE_ACTIVE
> +             * in tb_flags, so that we never make this runtime call.
> +             */
> +            g_assert_not_reached();
> +
> +        case 2:
> +            /* Tag check fail causes asynchronous flag set. */
> +            env->cp15.tfsr_el[regime_el] |= 1 << select;

Won't this incorrectly accumulate tagfails for EL0 into
TFSR_EL1 rather than TFSRE0_EL1 ? I think you want "[el]".

> +            break;
> +
> +        default:
> +            /* Case 3: Reserved. */
> +            qemu_log_mask(LOG_GUEST_ERROR,
> +                          "Tag check failure with SCTLR_EL%d.TCF "
> +                          "set to reserved value %d\n", regime_el, tcf);

Technically this message is going to be wrong for the
case of el==0 (where it's SCTLR_EL1.TCF0, not .TCF, that's
been mis-set).

> +            break;
> +        }
> +    }
> +
> +    return clean_ptr;
> +}
> +
> +/*
> + * Perform check in translation regime w/single IA range.
> + * It is known that TBI is enabled on entry.
> + */
> +uint64_t HELPER(mte_check1)(CPUARMState *env, uint64_t dirty_ptr)
> +{
> +    uint64_t clean_ptr = extract64(dirty_ptr, 0, 56);
> +    return do_mte_check(env, dirty_ptr, clean_ptr, 0, GETPC());
> +}
> +
> +/*
> + * Perform check in translation regime w/two IA ranges.
> + * It is known that TBI is enabled on entry.
> + */
> +uint64_t HELPER(mte_check2)(CPUARMState *env, uint64_t dirty_ptr)
> +{
> +    uint32_t select = extract64(dirty_ptr, 55, 1);
> +    uint64_t clean_ptr = sextract64(dirty_ptr, 0, 56);
> +    return do_mte_check(env, dirty_ptr, clean_ptr, select, GETPC());
> +}
> +
> +/*
> + * Perform check in translation regime w/two IA ranges.
> + * The TBI argument is the concatenation of TBI1:TBI0.
> + */
> +uint64_t HELPER(mte_check3)(CPUARMState *env, uint64_t dirty_ptr, uint32_t tbi)
> +{
> +    uint32_t select = extract64(dirty_ptr, 55, 1);
> +    uint64_t clean_ptr = sextract64(dirty_ptr, 0, 56);
> +
> +    if ((tbi >> select) & 1) {
> +        return do_mte_check(env, dirty_ptr, clean_ptr, select, GETPC());
> +    } else {
> +        /* TBI is disabled; the access is unchecked. */
> +        return dirty_ptr;
> +    }
> +}
> diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
> index 62bdf50796..8e4fea6b4c 100644
> --- a/target/arm/translate-a64.c
> +++ b/target/arm/translate-a64.c
> @@ -214,7 +214,20 @@ static void gen_a64_set_pc(DisasContext *s, TCGv_i64 src)
>  static TCGv_i64 clean_data_tbi(DisasContext *s, TCGv_i64 addr)
>  {
>      TCGv_i64 clean = new_tmp_a64(s);
> -    gen_top_byte_ignore(s, clean, addr, s->tbid);
> +
> +    /* Note that s->mte_active already includes a check for s->tbid != 0. */
> +    if (!s->mte_active) {
> +        gen_top_byte_ignore(s, clean, addr, s->tbid);
> +    } else if (!regime_has_2_ranges(s->mmu_idx)) {
> +        gen_helper_mte_check1(clean, cpu_env, addr);
> +    } else if (s->tbid == 3) {
> +        /* Both TBI1:TBI0 are set; no need to check at runtime. */
> +        gen_helper_mte_check2(clean, cpu_env, addr);
> +    } else {
> +        TCGv_i32 tbi = tcg_const_i32(s->tbid);
> +        gen_helper_mte_check3(clean, cpu_env, addr, tbi);
> +        tcg_temp_free_i32(tbi);
> +    }
>      return clean;
>  }
>
> diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
> index cf26c16f5f..8fd7d086c8 100644
> --- a/target/arm/Makefile.objs
> +++ b/target/arm/Makefile.objs
> @@ -67,3 +67,4 @@ obj-$(CONFIG_SOFTMMU) += psci.o
>  obj-$(TARGET_AARCH64) += translate-a64.o helper-a64.o
>  obj-$(TARGET_AARCH64) += translate-sve.o sve_helper.o
>  obj-$(TARGET_AARCH64) += pauth_helper.o
> +obj-$(TARGET_AARCH64) += mte_helper.o
> --
> 2.17.1

thanks
-- PMM
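To make the three-helper split above easier to follow, here is a minimal Python model of the TBI handling that mte_check3 performs at runtime (function and constant names here are illustrative, not QEMU API; the real helpers operate on CPUARMState and use extract64/sextract64):

```python
MASK56 = (1 << 56) - 1
MASK64 = (1 << 64) - 1

def clean_address(ptr: int, tbi: int) -> int:
    """Model of mte_check3's TBI dispatch: tbi is the two-bit
    concatenation TBI1:TBI0, and bit 55 of the pointer selects
    which of the two bits applies to this address."""
    select = (ptr >> 55) & 1
    if (tbi >> select) & 1:
        # TBI enabled for this half of the address space: strip the
        # tag byte.  Upper-half addresses sign-extend from bit 55,
        # mirroring sextract64(ptr, 0, 56) in the helper.
        clean = ptr & MASK56
        if select:
            clean |= ~MASK56 & MASK64
        return clean
    # TBI disabled for this half: the access is unchecked and the
    # dirty pointer is used as-is.
    return ptr

# Kernel-half pointer with a tag byte, TBI1 set: tag stripped,
# address sign-extended back to the upper half.
assert clean_address(0xF0FF_8000_0000_1000, 0b10) == 0xFFFF_8000_0000_1000
# User-half pointer with a tag byte, TBI0 set: tag stripped to zeros.
assert clean_address(0x5A00_0000_0000_1000, 0b01) == 0x0000_0000_0000_1000
# User-half pointer but only TBI1 set: pointer passes through untouched.
assert clean_address(0x5A00_0000_0000_1000, 0b10) == 0x5A00_0000_0000_1000
```

This also shows why mte_check2 needs no tbi argument: when both TBI bits are known set at translate time, the `(tbi >> select) & 1` test is always true and can be elided.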
On 12/3/19 1:42 PM, Peter Maydell wrote:
>> +static int allocation_tag_from_addr(uint64_t ptr)
>> +{
>> +    ptr += 1ULL << 55;  /* carry ptr[55] into ptr[59:56]. */
>> +    return extract64(ptr, 56, 4);
>
> What's the carry-bit-55 logic for? The pseudocode
> AArch64.AllocationTagFromAddress just returns bits [59:56].

This was the old physical tag extraction.

>> +static uint64_t do_mte_check(CPUARMState *env, uint64_t dirty_ptr,
>> +                             uint64_t clean_ptr, uint32_t select,
>> +                             uintptr_t ra)
>> +{
>> +    ARMMMUIdx stage1 = arm_stage1_mmu_idx(env);
>> +    int ptr_tag, mem_tag;
>> +
>> +    /*
>> +     * If TCMA is enabled, then physical tag 0 is unchecked.
>> +     * Note the rules in D6.8.1 are written with logical tags, where
>> +     * the corresponding physical tag rule is simpler: equal to 0.
>> +     * We will need the physical tag below anyway.
>> +     */
>
> This reads a bit oddly, because (in the final version of the spec)
> physical and logical tags are identical (AArch64.PhysicalTag()
> just returns bits [59:56] of the vaddr).

I missed that change between draft and final.

Wow, that's really annoying.  If they were going to drop physical vs
logical tags, why did they keep the language?  Frankly, it made a *lot*
of sense as a way to handle addresses in TTBR1, which now have
asymmetric special cases.

In particular, ADDG will, as I read it now, with allocation tag access
disabled, munge a TTBR1 address to <59:56> = 0.  Which is fine so long
as access is disabled, but when re-enabled (e.g. via PSTATE.TCO) the
address will no longer pass the TCMA test.

Is this really intentional?

r~
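The carry-bit-55 trick being discussed can be demonstrated in a few lines of Python (a sketch of the draft-spec "physical tag" extraction, not of current QEMU code): adding 1 at bit 55 ripples a carry into bits [59:56] exactly when bit 55 is set, so an all-ones upper-half tag collapses to 0.

```python
def allocation_tag_from_addr(ptr: int) -> int:
    # Carry ptr[55] into ptr[59:56], then extract bits [59:56],
    # matching the helper's "ptr += 1ULL << 55" followed by
    # extract64(ptr, 56, 4).  Mask to model 64-bit wraparound.
    ptr = (ptr + (1 << 55)) & ((1 << 64) - 1)
    return (ptr >> 56) & 0xF

# TTBR0 (user-half) address, bit 55 clear: no carry, tag unchanged.
assert allocation_tag_from_addr(0x0300_0000_0000_1000) == 0x3
# TTBR1 (kernel-half) address, bit 55 set, logical tag 0xF:
# the carry ripples all the way through, so the "physical" tag is 0
# and a single tag==0 test covers TCMA for both address halves.
assert allocation_tag_from_addr(0xFF80_0000_0000_1000) == 0x0
```

This is why the comment in do_mte_check claims the physical-tag TCMA rule is "simpler: equal to 0"; as the thread notes, the final spec dropped this logical/physical distinction, making the carry unnecessary.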
Oh, to finish up on the replies...

On 12/3/19 1:42 PM, Peter Maydell wrote:
>> +    ptr_tag = allocation_tag_from_addr(dirty_ptr);
>> +    if (ptr_tag == 0) {
>> +        ARMVAParameters p = aa64_va_parameters(env, dirty_ptr, stage1, true);
>> +        if (p.tcma) {
>> +            return clean_ptr;
>> +        }
>> +    }
>
> I don't think this logic gets the "regime has two address ranges"
> case correct. For a two-address-range translation regime (where
> TCR_ELx has TCMA0 and TCMA1 bits, rather than just a single TCMA bit),
> then the 'select' argument to this function needs to be involved,
> because we should do a tag-unchecked access if:
>  * addr[59:55]==0b00000 (ie select == 0 and ptr_tag == 0)
>    and TCR_ELx.TCMA0 is set
>  * addr[59:55]==0b11111 (ie select == 1 and ptr_tag == 0xf)
>    and TCR_ELx.TCMA1 is set
> (the pseudocode for this is in AArch64.AccessTagIsChecked(),
> and the TCR_EL1.TCMA[01] bit definitions agree; the text in
> D6.8.1 appears to be confused.)

It used to be correct.  That was the lovely bit about physical vs
logical tags.  Add 1 at bit 55, let the carry ripple up, so that the
physical tag check for TCMA was always against 0.

That seems to be broken now in the final spec.

>> +        el = arm_current_el(env);
>> +        regime_el = (el ? el : 1);  /* TODO: ARMv8.1-VHE EL2&0 regime */
>
> We could write this as "regime_el(env, stage1)" if that function
> wasn't local to helper.c, right ?

Yes.

>> +            /* noreturn; fall through to assert anyway */
>
> hopefully this fallthrough comment syntax doesn't confuse any
> of our compilers/static analyzers...

It shouldn't...

>> +            /* Tag check fail causes asynchronous flag set. */
>> +            env->cp15.tfsr_el[regime_el] |= 1 << select;
>
> Won't this incorrectly accumulate tagfails for EL0 into
> TFSR_EL1 rather than TFSRE0_EL1 ? I think you want "[el]".

Yep.

>> +            /* Case 3: Reserved. */
>> +            qemu_log_mask(LOG_GUEST_ERROR,
>> +                          "Tag check failure with SCTLR_EL%d.TCF "
>> +                          "set to reserved value %d\n", regime_el, tcf);
>
> Technically this message is going to be wrong for the
> case of el==0 (where it's SCTLR_EL1.TCF0, not .TCF, that's
> been mis-set).

Yep.

r~
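Peter's two tag-unchecked conditions for a two-address-range regime can be written out as a short sketch (illustrative names, following the shape of AArch64.AccessTagIsChecked rather than any actual QEMU code):

```python
def access_is_tag_unchecked(ptr: int, tcma0: bool, tcma1: bool) -> bool:
    """Two-address-range TCMA rule as described in the review:
    the access is tag-unchecked when addr[59:55] == 0b00000 and
    TCR_ELx.TCMA0 is set, or addr[59:55] == 0b11111 and
    TCR_ELx.TCMA1 is set."""
    select = (ptr >> 55) & 1   # which address range (TTBR0 vs TTBR1)
    tag = (ptr >> 56) & 0xF    # logical allocation tag, addr[59:56]
    if select == 0:
        return tag == 0x0 and tcma0
    else:
        return tag == 0xF and tcma1

# User-half pointer, tag 0, TCMA0 set: unchecked.
assert access_is_tag_unchecked(0x0000_0000_0000_1000, True, False)
# Kernel-half pointer, tag 0xF, TCMA1 set: unchecked.
assert access_is_tag_unchecked(0xFF80_0000_0000_1000, False, True)
# User-half pointer with a nonzero tag: always checked.
assert not access_is_tag_unchecked(0x0100_0000_0000_1000, True, True)
```

Under the draft spec's carry-bit-55 physical tags, both branches collapsed to a single `physical_tag == 0` test, which is what the patch's `ptr_tag == 0` check was relying on.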
On Tue, 3 Dec 2019 at 16:06, Richard Henderson
<richard.henderson@linaro.org> wrote:
> On 12/3/19 1:42 PM, Peter Maydell wrote:
> > This reads a bit oddly, because (in the final version of the spec)
> > physical and logical tags are identical (AArch64.PhysicalTag()
> > just returns bits [59:56] of the vaddr).
>
> I missed that change between draft and final.
>
> Wow, that's really annoying.  If they were going to drop physical vs
> logical tags, why did they keep the language?

It leaves space for a potential future architecture making the
mapping something other than 1:1.

> Is this really intentional?

Yes.

thanks
-- PMM