From patchwork Tue Nov 21 21:25:25 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 119427
From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Tue, 21 Nov 2017 22:25:25 +0100
Message-Id: <20171121212534.5177-18-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20171121212534.5177-1-richard.henderson@linaro.org>
References: <20171121212534.5177-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PATCH v6 17/26] target/arm: Use vector infrastructure for aa64 constant shifts

Signed-off-by: Richard Henderson
---
 target/arm/translate-a64.c | 386 ++++++++++++++++++++++++++++++++++++++-------
 tcg/tcg-op-gvec.c          |  18 ++-
 tcg/tcg-op-vec.c           |   9 +-
 3 files changed, 351 insertions(+), 62 deletions(-)

-- 
2.13.6

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 8769b4505a..c47faa5633 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -6432,17 +6432,6 @@ static void handle_shri_with_rndacc(TCGv_i64 tcg_res, TCGv_i64 tcg_src,
     }
 }
 
-/* Common SHL/SLI - Shift left with an optional insert */
-static void handle_shli_with_ins(TCGv_i64 tcg_res, TCGv_i64 tcg_src,
-                                 bool insert, int shift)
-{
-    if (insert) { /* SLI */
-        tcg_gen_deposit_i64(tcg_res, tcg_res, tcg_src, shift, 64 - shift);
-    } else { /* SHL */
-        tcg_gen_shli_i64(tcg_res, tcg_src, shift);
-    }
-}
-
 /* SRI: shift right with insert */
 static void handle_shri_with_ins(TCGv_i64 tcg_res, TCGv_i64 tcg_src,
                                  int size, int shift)
@@ -6546,7 +6535,11 @@ static void handle_scalar_simd_shli(DisasContext *s, bool insert,
     tcg_rn = read_fp_dreg(s, rn);
     tcg_rd = insert ? read_fp_dreg(s, rd) : tcg_temp_new_i64();
 
-    handle_shli_with_ins(tcg_rd, tcg_rn, insert, shift);
+    if (insert) {
+        tcg_gen_deposit_i64(tcg_rd, tcg_rd, tcg_rn, shift, 64 - shift);
+    } else {
+        tcg_gen_shli_i64(tcg_rd, tcg_rn, shift);
+    }
 
     write_fp_dreg(s, rd, tcg_rd);
 
@@ -8283,16 +8276,195 @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
     }
 }
 
+static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, unsigned shift)
+{
+    tcg_gen_vec_sar8i_i64(a, a, shift);
+    tcg_gen_vec_add8_i64(d, d, a);
+}
+
+static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, unsigned shift)
+{
+    tcg_gen_vec_sar16i_i64(a, a, shift);
+    tcg_gen_vec_add16_i64(d, d, a);
+}
+
+static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, unsigned shift)
+{
+    tcg_gen_sari_i32(a, a, shift);
+    tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, unsigned shift)
+{
+    tcg_gen_sari_i64(a, a, shift);
+    tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, unsigned sh)
+{
+    tcg_gen_sari_vec(vece, a, a, sh);
+    tcg_gen_add_vec(vece, d, d, a);
+}
+
+static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, unsigned shift)
+{
+    tcg_gen_vec_shr8i_i64(a, a, shift);
+    tcg_gen_vec_add8_i64(d, d, a);
+}
+
+static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, unsigned shift)
+{
+    tcg_gen_vec_shr16i_i64(a, a, shift);
+    tcg_gen_vec_add16_i64(d, d, a);
+}
+
+static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, unsigned shift)
+{
+    tcg_gen_shri_i32(a, a, shift);
+    tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, unsigned shift)
+{
+    tcg_gen_shri_i64(a, a, shift);
+    tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, unsigned sh)
+{
+    tcg_gen_shri_vec(vece, a, a, sh);
+    tcg_gen_add_vec(vece, d, d, a);
+}
+
+static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, unsigned shift)
+{
+    uint64_t mask = (0xff >> shift) * (-1ull / 0xff);
+    TCGv_i64 t = tcg_temp_new_i64();
+
+    tcg_gen_shri_i64(t, a, shift);
+    tcg_gen_andi_i64(t, t, mask);
+    tcg_gen_andi_i64(d, d, ~mask);
+    tcg_gen_or_i64(d, d, t);
+    tcg_temp_free_i64(t);
+}
+
+static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, unsigned shift)
+{
+    uint64_t mask = (0xffff >> shift) * (-1ull / 0xffff);
+    TCGv_i64 t = tcg_temp_new_i64();
+
+    tcg_gen_shri_i64(t, a, shift);
+    tcg_gen_andi_i64(t, t, mask);
+    tcg_gen_andi_i64(d, d, ~mask);
+    tcg_gen_or_i64(d, d, t);
+    tcg_temp_free_i64(t);
+}
+
+static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, unsigned shift)
+{
+    tcg_gen_shri_i32(a, a, shift);
+    tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
+}
+
+static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, unsigned shift)
+{
+    tcg_gen_shri_i64(a, a, shift);
+    tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
+}
+
+static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, unsigned sh)
+{
+    uint64_t mask = (2ull << ((8 << vece) - 1)) - 1;
+    TCGv_vec t = tcg_temp_new_vec_matching(d);
+    TCGv_vec m = tcg_temp_new_vec_matching(d);
+
+    tcg_gen_dupi_vec(vece, m, mask ^ (mask >> sh));
+    tcg_gen_shri_vec(vece, t, a, sh);
+    tcg_gen_and_vec(vece, d, d, m);
+    tcg_gen_or_vec(vece, d, d, t);
+
+    tcg_temp_free_vec(t);
+    tcg_temp_free_vec(m);
+}
+
 /* SSHR[RA]/USHR[RA] - Vector shift right (optional rounding/accumulate) */
 static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
                                  int immh, int immb, int opcode, int rn, int rd)
 {
+    static const GVecGen2i ssra_op[4] = {
+        { .fni8 = gen_ssra8_i64,
+          .fniv = gen_ssra_vec,
+          .load_dest = true,
+          .opc = INDEX_op_sari_vec,
+          .vece = MO_8 },
+        { .fni8 = gen_ssra16_i64,
+          .fniv = gen_ssra_vec,
+          .load_dest = true,
+          .opc = INDEX_op_sari_vec,
+          .vece = MO_16 },
+        { .fni4 = gen_ssra32_i32,
+          .fniv = gen_ssra_vec,
+          .load_dest = true,
+          .opc = INDEX_op_sari_vec,
+          .vece = MO_32 },
+        { .fni8 = gen_ssra64_i64,
+          .fniv = gen_ssra_vec,
+          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+          .load_dest = true,
+          .opc = INDEX_op_sari_vec,
+          .vece = MO_64 },
+    };
+    static const GVecGen2i usra_op[4] = {
+        { .fni8 = gen_usra8_i64,
+          .fniv = gen_usra_vec,
+          .load_dest = true,
+          .opc = INDEX_op_shri_vec,
+          .vece = MO_8, },
+        { .fni8 = gen_usra16_i64,
+          .fniv = gen_usra_vec,
+          .load_dest = true,
+          .opc = INDEX_op_shri_vec,
+          .vece = MO_16, },
+        { .fni4 = gen_usra32_i32,
+          .fniv = gen_usra_vec,
+          .load_dest = true,
+          .opc = INDEX_op_shri_vec,
+          .vece = MO_32, },
+        { .fni8 = gen_usra64_i64,
+          .fniv = gen_usra_vec,
+          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+          .load_dest = true,
+          .opc = INDEX_op_shri_vec,
+          .vece = MO_64, },
+    };
+    static const GVecGen2i sri_op[4] = {
+        { .fni8 = gen_shr8_ins_i64,
+          .fniv = gen_shr_ins_vec,
+          .load_dest = true,
+          .opc = INDEX_op_shri_vec,
+          .vece = MO_8 },
+        { .fni8 = gen_shr16_ins_i64,
+          .fniv = gen_shr_ins_vec,
+          .load_dest = true,
+          .opc = INDEX_op_shri_vec,
+          .vece = MO_16 },
+        { .fni4 = gen_shr32_ins_i32,
+          .fniv = gen_shr_ins_vec,
+          .load_dest = true,
+          .opc = INDEX_op_shri_vec,
+          .vece = MO_32 },
+        { .fni8 = gen_shr64_ins_i64,
+          .fniv = gen_shr_ins_vec,
+          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+          .load_dest = true,
+          .opc = INDEX_op_shri_vec,
+          .vece = MO_64 },
+    };
+
     int size = 32 - clz32(immh) - 1;
     int immhb = immh << 3 | immb;
     int shift = 2 * (8 << size) - immhb;
     bool accumulate = false;
-    bool round = false;
-    bool insert = false;
     int dsize = is_q ? 128 : 64;
     int esize = 8 << size;
     int elements = dsize/esize;
@@ -8300,6 +8472,8 @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
     TCGv_i64 tcg_rn = new_tmp_a64(s);
     TCGv_i64 tcg_rd = new_tmp_a64(s);
     TCGv_i64 tcg_round;
+    uint64_t round_const;
+    const GVecGen2i *gvec_op;
     int i;
 
     if (extract32(immh, 3, 1) && !is_q) {
@@ -8318,64 +8492,141 @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
 
     switch (opcode) {
     case 0x02: /* SSRA / USRA (accumulate) */
-        accumulate = true;
-        break;
+        if (is_u) {
+            /* Shift count same as element size produces zero to add.  */
+            if (shift == 8 << size) {
+                goto done;
+            }
+            gvec_op = &usra_op[size];
+        } else {
+            /* Shift count same as element size produces all sign to add.  */
+            if (shift == 8 << size) {
+                shift -= 1;
+            }
+            gvec_op = &ssra_op[size];
+        }
+        goto do_gvec;
+    case 0x08: /* SRI */
+        /* Shift count same as element size is valid but does nothing.  */
+        if (shift == 8 << size) {
+            goto done;
+        }
+        gvec_op = &sri_op[size];
+    do_gvec:
+        tcg_gen_gvec_2i(vec_full_reg_offset(s, rd),
+                        vec_full_reg_offset(s, rn), is_q ? 16 : 8,
+                        vec_full_reg_size(s), shift, gvec_op);
+        return;
+
+    case 0x00: /* SSHR / USHR */
+        if (is_u) {
+            if (shift == 8 << size) {
+                /* Shift count the same size as element size produces zero.  */
+                tcg_gen_gvec_dup8i(vec_full_reg_offset(s, rd),
+                                   is_q ? 16 : 8, vec_full_reg_size(s), 0);
+            } else {
+                tcg_gen_gvec_shri(size, vec_full_reg_offset(s, rd),
+                                  vec_full_reg_offset(s, rn), is_q ? 16 : 8,
+                                  vec_full_reg_size(s), shift);
+            }
+        } else {
+            /* Shift count the same size as element size produces all sign.  */
+            if (shift == 8 << size) {
+                shift -= 1;
+            }
+            tcg_gen_gvec_sari(size, vec_full_reg_offset(s, rd),
+                              vec_full_reg_offset(s, rn), is_q ? 16 : 8,
+                              vec_full_reg_size(s), shift);
+        }
+        return;
+
     case 0x04: /* SRSHR / URSHR (rounding) */
-        round = true;
         break;
     case 0x06: /* SRSRA / URSRA (accum + rounding) */
-        accumulate = round = true;
-        break;
-    case 0x08: /* SRI */
-        insert = true;
+        accumulate = true;
         break;
+    default:
+        g_assert_not_reached();
     }
 
-    if (round) {
-        uint64_t round_const = 1ULL << (shift - 1);
-        tcg_round = tcg_const_i64(round_const);
-    } else {
-        tcg_round = NULL;
-    }
+    round_const = 1ULL << (shift - 1);
+    tcg_round = tcg_const_i64(round_const);
 
     for (i = 0; i < elements; i++) {
         read_vec_element(s, tcg_rn, rn, i, memop);
-        if (accumulate || insert) {
+        if (accumulate) {
             read_vec_element(s, tcg_rd, rd, i, memop);
         }
 
-        if (insert) {
-            handle_shri_with_ins(tcg_rd, tcg_rn, size, shift);
-        } else {
-            handle_shri_with_rndacc(tcg_rd, tcg_rn, tcg_round,
-                                    accumulate, is_u, size, shift);
-        }
+        handle_shri_with_rndacc(tcg_rd, tcg_rn, tcg_round,
+                                accumulate, is_u, size, shift);
 
         write_vec_element(s, tcg_rd, rd, i, size);
     }
+    tcg_temp_free_i64(tcg_round);
 
+ done:
    if (!is_q) {
        clear_vec_high(s, rd);
    }
+}
 
-    if (round) {
-        tcg_temp_free_i64(tcg_round);
-    }
+static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, unsigned shift)
+{
+    uint64_t mask = ((0xff << shift) & 0xff) * (-1ull / 0xff);
+    TCGv_i64 t = tcg_temp_new_i64();
+
+    tcg_gen_shli_i64(t, a, shift);
+    tcg_gen_andi_i64(t, t, mask);
+    tcg_gen_andi_i64(d, d, ~mask);
+    tcg_gen_or_i64(d, d, t);
+    tcg_temp_free_i64(t);
+}
+
+static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, unsigned shift)
+{
+    uint64_t mask = ((0xffff << shift) & 0xffff) * (-1ull / 0xffff);
+    TCGv_i64 t = tcg_temp_new_i64();
+
+    tcg_gen_shli_i64(t, a, shift);
+    tcg_gen_andi_i64(t, t, mask);
+    tcg_gen_andi_i64(d, d, ~mask);
+    tcg_gen_or_i64(d, d, t);
+    tcg_temp_free_i64(t);
+}
+
+static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, unsigned shift)
+{
+    tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
+}
+
+static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, unsigned shift)
+{
+    tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
+}
+
+static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, unsigned sh)
+{
+    uint64_t mask = (1ull << sh) - 1;
+    TCGv_vec t = tcg_temp_new_vec_matching(d);
+    TCGv_vec m = tcg_temp_new_vec_matching(d);
+
+    tcg_gen_dupi_vec(vece, m, mask);
+    tcg_gen_shli_vec(vece, t, a, sh);
+    tcg_gen_and_vec(vece, d, d, m);
+    tcg_gen_or_vec(vece, d, d, t);
+
+    tcg_temp_free_vec(t);
+    tcg_temp_free_vec(m);
 }
 
 /* SHL/SLI - Vector shift left */
 static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
-                                int immh, int immb, int opcode, int rn, int rd)
+                                 int immh, int immb, int opcode, int rn, int rd)
 {
     int size = 32 - clz32(immh) - 1;
     int immhb = immh << 3 | immb;
     int shift = immhb - (8 << size);
-    int dsize = is_q ? 128 : 64;
-    int esize = 8 << size;
-    int elements = dsize/esize;
-    TCGv_i64 tcg_rn = new_tmp_a64(s);
-    TCGv_i64 tcg_rd = new_tmp_a64(s);
-    int i;
 
     if (extract32(immh, 3, 1) && !is_q) {
         unallocated_encoding(s);
@@ -8391,19 +8642,40 @@ static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
         return;
     }
 
-    for (i = 0; i < elements; i++) {
-        read_vec_element(s, tcg_rn, rn, i, size);
-        if (insert) {
-            read_vec_element(s, tcg_rd, rd, i, size);
-        }
-
-        handle_shli_with_ins(tcg_rd, tcg_rn, insert, shift);
-
-        write_vec_element(s, tcg_rd, rd, i, size);
-    }
-
-    if (!is_q) {
-        clear_vec_high(s, rd);
+    if (insert) {
+        static const GVecGen2i shi_op[4] = {
+            { .fni8 = gen_shl8_ins_i64,
+              .fniv = gen_shl_ins_vec,
+              .opc = INDEX_op_shli_vec,
+              .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+              .load_dest = true,
+              .vece = MO_8 },
+            { .fni8 = gen_shl16_ins_i64,
+              .fniv = gen_shl_ins_vec,
+              .opc = INDEX_op_shli_vec,
+              .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+              .load_dest = true,
+              .vece = MO_16 },
+            { .fni4 = gen_shl32_ins_i32,
+              .fniv = gen_shl_ins_vec,
+              .opc = INDEX_op_shli_vec,
+              .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+              .load_dest = true,
+              .vece = MO_32 },
+            { .fni8 = gen_shl64_ins_i64,
+              .fniv = gen_shl_ins_vec,
+              .opc = INDEX_op_shli_vec,
+              .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+              .load_dest = true,
+              .vece = MO_64 },
+        };
+        tcg_gen_gvec_2i(vec_full_reg_offset(s, rd),
+                        vec_full_reg_offset(s, rn), is_q ? 16 : 8,
+                        vec_full_reg_size(s), shift, &shi_op[size]);
+    } else {
+        tcg_gen_gvec_shli(size, vec_full_reg_offset(s, rd),
+                          vec_full_reg_offset(s, rn), is_q ? 16 : 8,
+                          vec_full_reg_size(s), shift);
     }
 }
 
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index f8ccb137eb..d91b424dfc 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -1208,7 +1208,11 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, uint32_t aofs,
     };
 
     tcg_debug_assert(vece <= MO_64);
-    tcg_gen_gvec_2i(dofs, aofs, opsz, clsz, shift, &g[vece]);
+    if (shift == 0) {
+        tcg_gen_gvec_mov(vece, dofs, aofs, opsz, clsz);
+    } else {
+        tcg_gen_gvec_2i(dofs, aofs, opsz, clsz, shift, &g[vece]);
+    }
 }
 
 void tcg_gen_vec_shr8i_i64(TCGv_i64 d, TCGv_i64 a, unsigned c)
@@ -1253,7 +1257,11 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,
     };
 
     tcg_debug_assert(vece <= MO_64);
-    tcg_gen_gvec_2i(dofs, aofs, opsz, clsz, shift, &g[vece]);
+    if (shift == 0) {
+        tcg_gen_gvec_mov(vece, dofs, aofs, opsz, clsz);
+    } else {
+        tcg_gen_gvec_2i(dofs, aofs, opsz, clsz, shift, &g[vece]);
+    }
 }
 
 void tcg_gen_vec_sar8i_i64(TCGv_i64 d, TCGv_i64 a, unsigned c)
@@ -1312,7 +1320,11 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, uint32_t aofs,
     };
 
     tcg_debug_assert(vece <= MO_64);
-    tcg_gen_gvec_2i(dofs, aofs, opsz, clsz, shift, &g[vece]);
+    if (shift == 0) {
+        tcg_gen_gvec_mov(vece, dofs, aofs, opsz, clsz);
+    } else {
+        tcg_gen_gvec_2i(dofs, aofs, opsz, clsz, shift, &g[vece]);
+    }
 }
 
 static void do_zip(unsigned vece, uint32_t dofs, uint32_t aofs,
diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c
index a441193b8e..502c5ba891 100644
--- a/tcg/tcg-op-vec.c
+++ b/tcg/tcg-op-vec.c
@@ -389,11 +389,16 @@ static void do_shifti(TCGOpcode opc, unsigned vece,
     TCGArg ri = temp_arg(rt);
     TCGArg ai = temp_arg(at);
     TCGType type = rt->base_type;
-    unsigned vecl = type - TCG_TYPE_V64;
     int can;
 
     tcg_debug_assert(at->base_type == type);
     tcg_debug_assert(i < (8 << vece));
+
+    if (i == 0) {
+        tcg_gen_mov_vec(r, a);
+        return;
+    }
+
     can = tcg_can_emit_vec_op(opc, type, vece);
     if (can > 0) {
         vec_gen_3(opc, type, vece, ri, ai, i);
@@ -402,7 +407,7 @@
            to the target.  Often, but not always, dupi can feed a vector shift
            easier than a scalar.  */
         tcg_debug_assert(can < 0);
-        tcg_expand_vec_op(opc, vecl, vece, ri, ai, i);
+        tcg_expand_vec_op(opc, type, vece, ri, ai, i);
     }
 }
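
For readers unfamiliar with the mask construction used by the *_ins_i64
helpers in this patch, here is a stand-alone sketch in plain C (illustrative
only, not part of the patch): -1ull / 0xff is 0x0101010101010101, so
multiplying the per-element mask by it broadcasts that mask into every byte
lane, and a single 64-bit shift plus that mask then behaves like a per-byte
shift-and-insert.

  /* Illustrative sketch only: the computation gen_shr8_ins_i64() emits,
   * done directly on host integers. */
  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  static uint64_t shr8_ins(uint64_t d, uint64_t a, unsigned shift)
  {
      /* (0xff >> shift) replicated into all eight byte lanes. */
      uint64_t mask = (0xff >> shift) * (-1ull / 0xff);
      /* Drop bits shifted in from the neighbouring (higher) byte. */
      uint64_t t = (a >> shift) & mask;
      /* SRI: insert below the bits kept from the destination. */
      return (d & ~mask) | t;
  }

  int main(void)
  {
      /* Each byte of d keeps its top 4 bits (0xf0); the low 4 bits come
       * from the matching byte of a shifted right by 4, so every byte
       * becomes 0xff.  Prints ffffffffffffffff. */
      printf("%016" PRIx64 "\n",
             shr8_ins(0xf0f0f0f0f0f0f0f0ull, 0xffffffffffffffffull, 4));
      return 0;
  }

Note that a shift equal to the element size would make the mask zero, which
is why handle_vec_simd_shri() branches to done for that case before sri_op
is ever used.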