From patchwork Thu Feb 20 11:17:08 2014
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 25024
From: Peter Maydell <peter.maydell@linaro.org>
To: Anthony Liguori
Date: Thu, 20 Feb 2014 11:17:08 +0000
Message-Id: <1392895054-13232-5-git-send-email-peter.maydell@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392895054-13232-1-git-send-email-peter.maydell@linaro.org>
References: <1392895054-13232-1-git-send-email-peter.maydell@linaro.org>
Cc: Blue Swirl, qemu-devel@nongnu.org, Aurelien Jarno
Subject: [Qemu-devel] [PULL 04/30] target-arm: A64: Implement SIMD scalar indexed instructions

Implement the SIMD scalar indexed instructions. The encoding here is
nearly identical to the vector indexed grouping, so we combine the two.
Signed-off-by: Peter Maydell
Reviewed-by: Richard Henderson
---
 target-arm/translate-a64.c | 115 ++++++++++++++++++++++++++++++++-------------
 1 file changed, 82 insertions(+), 33 deletions(-)

diff --git a/target-arm/translate-a64.c b/target-arm/translate-a64.c
index f1cd08a..a52a3e7 100644
--- a/target-arm/translate-a64.c
+++ b/target-arm/translate-a64.c
@@ -6322,17 +6322,6 @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
     }
 }
 
-/* C3.6.13 AdvSIMD scalar x indexed element
- *  31 30  29 28       24 23  22 21  20  19  16 15 12  11  10 9    5 4    0
- * +-----+---+-----------+------+---+---+------+-----+---+---+------+------+
- * | 0 1 | U | 1 1 1 1 1 | size | L | M |  Rm  | opc | H | 0 |  Rn  |  Rd  |
- * +-----+---+-----------+------+---+---+------+-----+---+---+------+------+
- */
-static void disas_simd_scalar_indexed(DisasContext *s, uint32_t insn)
-{
-    unsupported_encoding(s, insn);
-}
-
 /* SSHR[RA]/USHR[RA] - Vector shift right (optional rounding/accumulate) */
 static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
                                  int immh, int immb, int opcode, int rn, int rd)
@@ -7805,13 +7794,18 @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
     }
 }
 
-/* C3.6.18 AdvSIMD vector x indexed element
+/* C3.6.13 AdvSIMD scalar x indexed element
+ *  31 30  29 28       24 23  22 21  20  19  16 15 12  11  10 9    5 4    0
+ * +-----+---+-----------+------+---+---+------+-----+---+---+------+------+
+ * | 0 1 | U | 1 1 1 1 1 | size | L | M |  Rm  | opc | H | 0 |  Rn  |  Rd  |
+ * +-----+---+-----------+------+---+---+------+-----+---+---+------+------+
+ * C3.6.18 AdvSIMD vector x indexed element
  *   31  30  29 28       24 23  22 21  20  19  16 15 12  11  10 9    5 4    0
  * +---+---+---+-----------+------+---+---+------+-----+---+---+------+------+
  * | 0 | Q | U | 0 1 1 1 1 | size | L | M |  Rm  | opc | H | 0 |  Rn  |  Rd  |
  * +---+---+---+-----------+------+---+---+------+-----+---+---+------+------+
  */
-static void disas_simd_indexed_vector(DisasContext *s, uint32_t insn)
+static void disas_simd_indexed(DisasContext *s, uint32_t insn)
 {
     /* This encoding has two kinds of instruction:
      *  normal, where we perform elt x idxelt => elt for each
@@ -7820,6 +7814,7 @@ static void disas_simd_indexed_vector(DisasContext *s, uint32_t insn)
      *     double the width of the input element
      * The long ops have a 'part' specifier (ie come in INSN, INSN2 pairs).
      */
+    bool is_scalar = extract32(insn, 28, 1);
     bool is_q = extract32(insn, 30, 1);
     bool u = extract32(insn, 29, 1);
     int size = extract32(insn, 22, 2);
@@ -7839,7 +7834,7 @@ static void disas_simd_indexed_vector(DisasContext *s, uint32_t insn)
     switch (opcode) {
     case 0x0: /* MLA */
     case 0x4: /* MLS */
-        if (!u) {
+        if (!u || is_scalar) {
             unallocated_encoding(s);
             return;
         }
@@ -7847,6 +7842,10 @@ static void disas_simd_indexed_vector(DisasContext *s, uint32_t insn)
     case 0x2: /* SMLAL, SMLAL2, UMLAL, UMLAL2 */
     case 0x6: /* SMLSL, SMLSL2, UMLSL, UMLSL2 */
     case 0xa: /* SMULL, SMULL2, UMULL, UMULL2 */
+        if (is_scalar) {
+            unallocated_encoding(s);
+            return;
+        }
         is_long = true;
         break;
     case 0x3: /* SQDMLAL, SQDMLAL2 */
@@ -7856,12 +7855,17 @@ static void disas_simd_indexed_vector(DisasContext *s, uint32_t insn)
         /* fall through */
     case 0xc: /* SQDMULH */
     case 0xd: /* SQRDMULH */
-    case 0x8: /* MUL */
         if (u) {
             unallocated_encoding(s);
             return;
         }
         break;
+    case 0x8: /* MUL */
+        if (u || is_scalar) {
+            unallocated_encoding(s);
+            return;
+        }
+        break;
     case 0x1: /* FMLA */
     case 0x5: /* FMLS */
         if (u) {
@@ -7923,7 +7927,7 @@ static void disas_simd_indexed_vector(DisasContext *s, uint32_t insn)
 
         read_vec_element(s, tcg_idx, rm, index, MO_64);
 
-        for (pass = 0; pass < 2; pass++) {
+        for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
             TCGv_i64 tcg_op = tcg_temp_new_i64();
             TCGv_i64 tcg_res = tcg_temp_new_i64();
 
@@ -7954,15 +7958,28 @@ static void disas_simd_indexed_vector(DisasContext *s, uint32_t insn)
             tcg_temp_free_i64(tcg_res);
         }
 
+        if (is_scalar) {
+            clear_vec_high(s, rd);
+        }
+
         tcg_temp_free_i64(tcg_idx);
     } else if (!is_long) {
-        /* 32 bit floating point, or 16 or 32 bit integer */
+        /* 32 bit floating point, or 16 or 32 bit integer.
+         * For the 16 bit scalar case we use the usual Neon helpers and
+         * rely on the fact that 0 op 0 == 0 with no side effects.
+         */
         TCGv_i32 tcg_idx = tcg_temp_new_i32();
-        int pass;
+        int pass, maxpasses;
+
+        if (is_scalar) {
+            maxpasses = 1;
+        } else {
+            maxpasses = is_q ? 4 : 2;
+        }
 
         read_vec_element_i32(s, tcg_idx, rm, index, size);
 
-        if (size == 1) {
+        if (size == 1 && !is_scalar) {
             /* The simplest way to handle the 16x16 indexed ops is to duplicate
              * the index into both halves of the 32 bit tcg_idx and then use
              * the usual Neon helpers.
@@ -7970,11 +7987,11 @@ static void disas_simd_indexed_vector(DisasContext *s, uint32_t insn)
             tcg_gen_deposit_i32(tcg_idx, tcg_idx, tcg_idx, 16, 16);
         }
 
-        for (pass = 0; pass < (is_q ? 4 : 2); pass++) {
+        for (pass = 0; pass < maxpasses; pass++) {
             TCGv_i32 tcg_op = tcg_temp_new_i32();
             TCGv_i32 tcg_res = tcg_temp_new_i32();
 
-            read_vec_element_i32(s, tcg_op, rn, pass, MO_32);
+            read_vec_element_i32(s, tcg_op, rn, pass, is_scalar ? size : MO_32);
 
             switch (opcode) {
             case 0x0: /* MLA */
@@ -8038,7 +8055,12 @@ static void disas_simd_indexed_vector(DisasContext *s, uint32_t insn)
                 g_assert_not_reached();
             }
 
-            write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+            if (is_scalar) {
+                write_fp_sreg(s, rd, tcg_res);
+            } else {
+                write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+            }
+
             tcg_temp_free_i32(tcg_op);
             tcg_temp_free_i32(tcg_res);
         }
@@ -8064,11 +8086,18 @@ static void disas_simd_indexed_vector(DisasContext *s, uint32_t insn)
 
             read_vec_element(s, tcg_idx, rm, index, memop);
 
-            for (pass = 0; pass < 2; pass++) {
+            for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
                 TCGv_i64 tcg_op = tcg_temp_new_i64();
                 TCGv_i64 tcg_passres;
+                int passelt;
 
-                read_vec_element(s, tcg_op, rn, pass + (is_q * 2), memop);
+                if (is_scalar) {
+                    passelt = 0;
+                } else {
+                    passelt = pass + (is_q * 2);
+                }
+
+                read_vec_element(s, tcg_op, rn, passelt, memop);
 
                 tcg_res[pass] = tcg_temp_new_i64();
 
@@ -8116,23 +8145,35 @@ static void disas_simd_indexed_vector(DisasContext *s, uint32_t insn)
                 tcg_temp_free_i64(tcg_passres);
             }
             tcg_temp_free_i64(tcg_idx);
+
+            if (is_scalar) {
+                clear_vec_high(s, rd);
+            }
         } else {
             TCGv_i32 tcg_idx = tcg_temp_new_i32();
 
             assert(size == 1);
             read_vec_element_i32(s, tcg_idx, rm, index, size);
 
-            /* The simplest way to handle the 16x16 indexed ops is to duplicate
-             * the index into both halves of the 32 bit tcg_idx and then use
-             * the usual Neon helpers.
-             */
-            tcg_gen_deposit_i32(tcg_idx, tcg_idx, tcg_idx, 16, 16);
+            if (!is_scalar) {
+                /* The simplest way to handle the 16x16 indexed ops is to
+                 * duplicate the index into both halves of the 32 bit tcg_idx
+                 * and then use the usual Neon helpers.
+                 */
+                tcg_gen_deposit_i32(tcg_idx, tcg_idx, tcg_idx, 16, 16);
+            }
 
-            for (pass = 0; pass < 2; pass++) {
+            for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
                 TCGv_i32 tcg_op = tcg_temp_new_i32();
                 TCGv_i64 tcg_passres;
 
-                read_vec_element_i32(s, tcg_op, rn, pass + (is_q * 2), MO_32);
+                if (is_scalar) {
+                    read_vec_element_i32(s, tcg_op, rn, pass, size);
+                } else {
+                    read_vec_element_i32(s, tcg_op, rn,
+                                         pass + (is_q * 2), MO_32);
+                }
+
                 tcg_res[pass] = tcg_temp_new_i64();
 
                 if (opcode == 0xa || opcode == 0xb) {
@@ -8183,6 +8224,14 @@ static void disas_simd_indexed_vector(DisasContext *s, uint32_t insn)
                 tcg_temp_free_i64(tcg_passres);
             }
             tcg_temp_free_i32(tcg_idx);
+
+            if (is_scalar) {
+                tcg_gen_ext32u_i64(tcg_res[0], tcg_res[0]);
+            }
+        }
+
+        if (is_scalar) {
+            tcg_res[1] = tcg_const_i64(0);
         }
 
         for (pass = 0; pass < 2; pass++) {
@@ -8241,7 +8290,7 @@ static const AArch64DecodeTable data_proc_simd[] = {
     { 0x0e200800, 0x9f3e0c00, disas_simd_two_reg_misc },
     { 0x0e300800, 0x9f3e0c00, disas_simd_across_lanes },
    { 0x0e000400, 0x9fe08400, disas_simd_copy },
-    { 0x0f000000, 0x9f000400, disas_simd_indexed_vector },
+    { 0x0f000000, 0x9f000400, disas_simd_indexed }, /* vector indexed */
     /* simd_mod_imm decode is a subset of simd_shift_imm, so must precede it */
     { 0x0f000400, 0x9ff80400, disas_simd_mod_imm },
     { 0x0f000400, 0x9f800400, disas_simd_shift_imm },
@@ -8253,7 +8302,7 @@ static const AArch64DecodeTable data_proc_simd[] = {
     { 0x5e200800, 0xdf3e0c00, disas_simd_scalar_two_reg_misc },
     { 0x5e300800, 0xdf3e0c00, disas_simd_scalar_pairwise },
     { 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
-    { 0x5f000000, 0xdf000400, disas_simd_scalar_indexed },
+    { 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
     { 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
     { 0x4e280800, 0xff3e0c00, disas_crypto_aes },
     { 0x5e000000, 0xff208c00, disas_crypto_three_reg_sha },
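(Not part of the patch: below is a minimal standalone C sketch of the decode
idea the commit message describes, for readers who don't know this file. The
scalar and vector "x indexed element" classes differ in bit 28 of the
instruction word, so a single decoder can extract an is_scalar flag and fall
back to a single pass over one element for the scalar form. All names are
illustrative; extract_bits() only mirrors the semantics of QEMU's extract32()
helper, and the pass counts shown are the ones used for the 32-bit path.)

#include <stdint.h>
#include <stdio.h>

/* Return 'len' bits of 'value' starting at bit position 'start'. */
static uint32_t extract_bits(uint32_t value, int start, int len)
{
    return (value >> start) & ((1u << len) - 1);
}

static void decode_indexed(uint32_t insn)
{
    int is_scalar = extract_bits(insn, 28, 1); /* 1: scalar class, 0: vector class */
    int is_q      = extract_bits(insn, 30, 1); /* vector class only: 128-bit regs  */
    int passes;

    if (is_scalar) {
        passes = 1;             /* a scalar op touches a single element */
    } else {
        passes = is_q ? 4 : 2;  /* a vector op iterates over the 32-bit lanes */
    }

    printf("insn 0x%08x: %s form, %d pass(es)\n",
           (unsigned)insn, is_scalar ? "scalar" : "vector", passes);
}

int main(void)
{
    decode_indexed(0x5f000000); /* base pattern of the scalar indexed group */
    decode_indexed(0x4f000000); /* vector indexed group pattern with Q == 1 */
    return 0;
}

Compiled as C99, the two calls report a scalar decode with one pass and a
vector decode with four passes, mirroring the is_scalar/maxpasses handling
in the patch above.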