Message ID | 20190808202616.13782-8-richard.henderson@linaro.org |
---|---|
State | New |
Series | target/arm: Misc cleanups |
On Thu, 8 Aug 2019 at 21:26, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Separate shift + extract low will result in one extra insn
> for hosts like RISC-V, MIPS, and Sparc.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  target/arm/translate.c | 18 ++++++------------
>  1 file changed, 6 insertions(+), 12 deletions(-)
>
> diff --git a/target/arm/translate.c b/target/arm/translate.c
> index 77154be743..9e103e4fad 100644
> --- a/target/arm/translate.c
> +++ b/target/arm/translate.c
> @@ -1761,8 +1761,7 @@ static int disas_iwmmxt_insn(DisasContext *s, uint32_t insn)
>          if (insn & ARM_CP_RW_BIT) {                     /* TMRRC */
>              iwmmxt_load_reg(cpu_V0, wrd);
>              tcg_gen_extrl_i64_i32(cpu_R[rdlo], cpu_V0);
> -            tcg_gen_shri_i64(cpu_V0, cpu_V0, 32);
> -            tcg_gen_extrl_i64_i32(cpu_R[rdhi], cpu_V0);
> +            tcg_gen_extrh_i64_i32(cpu_R[rdhi], cpu_V0);
>          } else {                                        /* TMCRR */
>              tcg_gen_concat_i32_i64(cpu_V0, cpu_R[rdlo], cpu_R[rdhi]);
>              iwmmxt_store_reg(cpu_V0, wrd);

This patch is fine, but I noticed while reviewing it that tcg/README
labels both the extrl_i64_i32 and extrh_i64_i32 operations as
"for 64-bit hosts only". Presumably that's a documentation error,
since we're not guarding the existing uses of the extrl_i64_i32
here with any kind of ifdeffery to restrict them to 64-bit hosts?

thanks
-- PMM
> This patch is fine, but I noticed while reviewing it that tcg/README
> labels both the extrl_i64_i32 and extrh_i64_i32 operations as
> "for 64-bit hosts only". Presumably that's a documentation error,
> since we're not guarding the existing uses of the extrl_i64_i32
> here with any kind of ifdeffery to restrict them to 64-bit hosts?

It's a documentation unclarity, in that the opcodes are for 64-bit hosts
only. The tcg_gen_* functions are always available, and expand to
INDEX_op_mov_i32 on 32-bit hosts.

r~
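For readers less familiar with TCG internals, here is a minimal sketch of that expansion, paraphrased from the tcg/tcg-op.c expander of this era rather than quoted verbatim (TCGV_HIGH, TCG_TARGET_HAS_extrh_i64_i32 and the tcgv_*_arg helpers are TCG-internal names):

```c
/* Sketch of the tcg_gen_extrh_i64_i32 expander; a paraphrase of the
 * tcg/tcg-op.c logic of this period, not a verbatim copy. */
void tcg_gen_extrh_i64_i32(TCGv_i32 ret, TCGv_i64 arg)
{
    if (TCG_TARGET_REG_BITS == 32) {
        /* On a 32-bit host a TCGv_i64 is a register pair, so "extract
         * the high half" is just a 32-bit move of the high register:
         * the INDEX_op_mov_i32 mentioned above. */
        tcg_gen_mov_i32(ret, TCGV_HIGH(arg));
    } else if (TCG_TARGET_HAS_extrh_i64_i32) {
        /* 64-bit host whose backend implements the opcode natively. */
        tcg_gen_op2(INDEX_op_extrh_i64_i32,
                    tcgv_i32_arg(ret), tcgv_i64_arg(arg));
    } else {
        /* 64-bit host without the opcode: fall back to a shift plus a
         * truncating move, essentially the sequence the patch removes
         * from the front end. */
        TCGv_i64 t = tcg_temp_new_i64();
        tcg_gen_shri_i64(t, arg, 32);
        tcg_gen_mov_i32(ret, (TCGv_i32)t);
        tcg_temp_free_i64(t);
    }
}
```

The 32-bit-host branch is why no ifdeffery is needed in front ends such as target/arm.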
On Thu, 15 Aug 2019 at 12:56, Richard Henderson
<richard.henderson@linaro.org> wrote:
> > This patch is fine, but I noticed while reviewing it that tcg/README
> > labels both the extrl_i64_i32 and extrh_i64_i32 operations as
> > "for 64-bit hosts only". Presumably that's a documentation error,
> > since we're not guarding the existing uses of the extrl_i64_i32
> > here with any kind of ifdeffery to restrict them to 64-bit hosts?
>
> It's a documentation unclarity, in that the opcodes are for 64-bit hosts
> only. The tcg_gen_* functions are always available, and expand to
> INDEX_op_mov_i32 on 32-bit hosts.

Oh, I see. We should probably split that document out properly into a
primary "what you need to know to generate TCG code as a target" (which
is the main audience) and "what you need to implement for a TCG backend"
(which I guess is relevant to fewer people).

thanks
-- PMM
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 77154be743..9e103e4fad 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -1761,8 +1761,7 @@ static int disas_iwmmxt_insn(DisasContext *s, uint32_t insn)
         if (insn & ARM_CP_RW_BIT) {                     /* TMRRC */
             iwmmxt_load_reg(cpu_V0, wrd);
             tcg_gen_extrl_i64_i32(cpu_R[rdlo], cpu_V0);
-            tcg_gen_shri_i64(cpu_V0, cpu_V0, 32);
-            tcg_gen_extrl_i64_i32(cpu_R[rdhi], cpu_V0);
+            tcg_gen_extrh_i64_i32(cpu_R[rdhi], cpu_V0);
         } else {                                        /* TMCRR */
             tcg_gen_concat_i32_i64(cpu_V0, cpu_R[rdlo], cpu_R[rdhi]);
             iwmmxt_store_reg(cpu_V0, wrd);
@@ -2807,8 +2806,7 @@ static int disas_dsp_insn(DisasContext *s, uint32_t insn)
         if (insn & ARM_CP_RW_BIT) {                     /* MRA */
             iwmmxt_load_reg(cpu_V0, acc);
             tcg_gen_extrl_i64_i32(cpu_R[rdlo], cpu_V0);
-            tcg_gen_shri_i64(cpu_V0, cpu_V0, 32);
-            tcg_gen_extrl_i64_i32(cpu_R[rdhi], cpu_V0);
+            tcg_gen_extrh_i64_i32(cpu_R[rdhi], cpu_V0);
             tcg_gen_andi_i32(cpu_R[rdhi], cpu_R[rdhi], (1 << (40 - 32)) - 1);
         } else {                                        /* MAR */
             tcg_gen_concat_i32_i64(cpu_V0, cpu_R[rdlo], cpu_R[rdhi]);
@@ -6005,8 +6003,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     gen_helper_neon_narrow_high_u16(tmp, cpu_V0);
                     break;
                 case 2:
-                    tcg_gen_shri_i64(cpu_V0, cpu_V0, 32);
-                    tcg_gen_extrl_i64_i32(tmp, cpu_V0);
+                    tcg_gen_extrh_i64_i32(tmp, cpu_V0);
                     break;
                 default: abort();
                 }
@@ -6020,8 +6017,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     break;
                 case 2:
                     tcg_gen_addi_i64(cpu_V0, cpu_V0, 1u << 31);
-                    tcg_gen_shri_i64(cpu_V0, cpu_V0, 32);
-                    tcg_gen_extrl_i64_i32(tmp, cpu_V0);
+                    tcg_gen_extrh_i64_i32(tmp, cpu_V0);
                     break;
                 default: abort();
                 }
@@ -7254,9 +7250,8 @@ static int disas_coproc_insn(DisasContext *s, uint32_t insn)
                 tmp = tcg_temp_new_i32();
                 tcg_gen_extrl_i64_i32(tmp, tmp64);
                 store_reg(s, rt, tmp);
-                tcg_gen_shri_i64(tmp64, tmp64, 32);
                 tmp = tcg_temp_new_i32();
-                tcg_gen_extrl_i64_i32(tmp, tmp64);
+                tcg_gen_extrh_i64_i32(tmp, tmp64);
                 tcg_temp_free_i64(tmp64);
                 store_reg(s, rt2, tmp);
             } else {
@@ -7365,8 +7360,7 @@ static void gen_storeq_reg(DisasContext *s, int rlow, int rhigh, TCGv_i64 val)
     tcg_gen_extrl_i64_i32(tmp, val);
     store_reg(s, rlow, tmp);
     tmp = tcg_temp_new_i32();
-    tcg_gen_shri_i64(val, val, 32);
-    tcg_gen_extrl_i64_i32(tmp, val);
+    tcg_gen_extrh_i64_i32(tmp, val);
     store_reg(s, rhigh, tmp);
 }
Separate shift + extract low will result in one extra insn
for hosts like RISC-V, MIPS, and Sparc.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/translate.c | 18 ++++++------------
 1 file changed, 6 insertions(+), 12 deletions(-)

--
2.17.1
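To make the commit message's cost argument concrete, the change at each converted site has the shape below; the host-instruction remarks in the comments are an illustration of the claim for a host such as RISC-V (where 32-bit values are kept sign-extended in 64-bit registers), not measured backend output:

```c
/* Before: the high half is produced by an explicit 64-bit shift
 * followed by a truncating move.  On RISC-V-like hosts the extrl
 * typically needs its own sign-extending move, so the pair tends to
 * cost two host insns (and it also clobbers cpu_V0). */
tcg_gen_shri_i64(cpu_V0, cpu_V0, 32);
tcg_gen_extrl_i64_i32(cpu_R[rdhi], cpu_V0);

/* After: a single extract-high op, which a 64-bit backend can emit as
 * one arithmetic shift right by 32, and which a 32-bit host lowers to
 * a plain register move of the high half. */
tcg_gen_extrh_i64_i32(cpu_R[rdhi], cpu_V0);
```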