From patchwork Tue Oct 21 13:02:56 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Christophe Lyon
X-Patchwork-Id: 39123
From: Christophe Lyon <christophe.lyon@linaro.org>
To: gcc-patches@gcc.gnu.org
Subject: [Patch ARM-AArch64/testsuite v3 04/21] Add comparison operators: vceq, vcge, vcgt, vcle and vclt.
Date: Tue, 21 Oct 2014 15:02:56 +0200
Message-Id: <1413896593-26607-5-git-send-email-christophe.lyon@linaro.org>
In-Reply-To: <1413896593-26607-1-git-send-email-christophe.lyon@linaro.org>
References: <1413896593-26607-1-git-send-email-christophe.lyon@linaro.org>

2014-10-21  Christophe Lyon  <christophe.lyon@linaro.org>

	* gcc.target/aarch64/advsimd-intrinsics/cmp_op.inc: New file.
	* gcc.target/aarch64/advsimd-intrinsics/vceq.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vcge.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vcgt.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vcle.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vclt.c: Likewise.
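For context only (this snippet is not part of the patch): every operator under test returns an unsigned vector whose lanes are all-ones where the predicate holds and all-zeros where it does not, which is what the expected_* tables in the new files encode. A minimal, hedged sketch assuming an ARM/AArch64 AdvSIMD target, with illustrative lane values rather than the harness buffers:

#include <arm_neon.h>
#include <stdio.h>

int main (void)
{
  /* Illustrative inputs: one vector of increasing lanes, one splat.  */
  const int8_t init[8] = { -16, -15, -14, -13, -12, -11, -10, -9 };
  int8x8_t a = vld1_s8 (init);
  int8x8_t b = vdup_n_s8 (-10);

  uint8x8_t eq = vceq_s8 (a, b);   /* lane == -10 ? 0xff : 0x00 */
  uint8x8_t ge = vcge_s8 (a, b);   /* lane >= -10 ? 0xff : 0x00 */

  uint8_t eq_res[8], ge_res[8];
  vst1_u8 (eq_res, eq);
  vst1_u8 (ge_res, ge);

  for (int i = 0; i < 8; i++)
    printf ("lane %d: vceq=0x%02x vcge=0x%02x\n", i, eq_res[i], ge_res[i]);
  return 0;
}

On 32-bit ARM this needs NEON enabled (e.g. -mfpu=neon); on AArch64 AdvSIMD is always available.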
diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/cmp_op.inc b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/cmp_op.inc
new file mode 100644
index 0000000..a09c5f5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/cmp_op.inc
@@ -0,0 +1,224 @@
+#include <arm_neon.h>
+#include "arm-neon-ref.h"
+#include "compute-ref-data.h"
+#include <math.h>
+
+/* Additional expected results declarations; they are initialized in
+   each test file.  */
+extern ARRAY(expected_uint, uint, 8, 8);
+extern ARRAY(expected_uint, uint, 16, 4);
+extern ARRAY(expected_uint, uint, 32, 2);
+extern ARRAY(expected_q_uint, uint, 8, 16);
+extern ARRAY(expected_q_uint, uint, 16, 8);
+extern ARRAY(expected_q_uint, uint, 32, 4);
+extern ARRAY(expected_float, uint, 32, 2);
+extern ARRAY(expected_q_float, uint, 32, 4);
+extern ARRAY(expected_uint2, uint, 32, 2);
+extern ARRAY(expected_uint3, uint, 32, 2);
+extern ARRAY(expected_uint4, uint, 32, 2);
+extern ARRAY(expected_nan, uint, 32, 2);
+extern ARRAY(expected_mnan, uint, 32, 2);
+extern ARRAY(expected_nan2, uint, 32, 2);
+extern ARRAY(expected_inf, uint, 32, 2);
+extern ARRAY(expected_minf, uint, 32, 2);
+extern ARRAY(expected_inf2, uint, 32, 2);
+extern ARRAY(expected_mzero, uint, 32, 2);
+extern ARRAY(expected_p8, uint, 8, 8);
+extern ARRAY(expected_q_p8, uint, 8, 16);
+
+#define FNNAME1(NAME) exec_ ## NAME
+#define FNNAME(NAME) FNNAME1(NAME)
+
+void FNNAME (INSN_NAME) (void)
+{
+  /* Basic test: y=vcomp(x1,x2), then store the result.  */
+#define TEST_VCOMP1(INSN, Q, T1, T2, T3, W, N)				\
+  VECT_VAR(vector_res, T3, W, N) =					\
+    INSN##Q##_##T2##W(VECT_VAR(vector, T1, W, N),			\
+                      VECT_VAR(vector2, T1, W, N));			\
+  vst1##Q##_u##W(VECT_VAR(result, T3, W, N), VECT_VAR(vector_res, T3, W, N))
+
+#define TEST_VCOMP(INSN, Q, T1, T2, T3, W, N)				\
+  TEST_VCOMP1(INSN, Q, T1, T2, T3, W, N)
+
+  /* No need for 64-bit elements.  */
+  DECL_VARIABLE(vector, int, 8, 8);
+  DECL_VARIABLE(vector, int, 16, 4);
+  DECL_VARIABLE(vector, int, 32, 2);
+  DECL_VARIABLE(vector, uint, 8, 8);
+  DECL_VARIABLE(vector, uint, 16, 4);
+  DECL_VARIABLE(vector, uint, 32, 2);
+  DECL_VARIABLE(vector, float, 32, 2);
+  DECL_VARIABLE(vector, int, 8, 16);
+  DECL_VARIABLE(vector, int, 16, 8);
+  DECL_VARIABLE(vector, int, 32, 4);
+  DECL_VARIABLE(vector, uint, 8, 16);
+  DECL_VARIABLE(vector, uint, 16, 8);
+  DECL_VARIABLE(vector, uint, 32, 4);
+  DECL_VARIABLE(vector, float, 32, 4);
+
+  DECL_VARIABLE(vector2, int, 8, 8);
+  DECL_VARIABLE(vector2, int, 16, 4);
+  DECL_VARIABLE(vector2, int, 32, 2);
+  DECL_VARIABLE(vector2, uint, 8, 8);
+  DECL_VARIABLE(vector2, uint, 16, 4);
+  DECL_VARIABLE(vector2, uint, 32, 2);
+  DECL_VARIABLE(vector2, float, 32, 2);
+  DECL_VARIABLE(vector2, int, 8, 16);
+  DECL_VARIABLE(vector2, int, 16, 8);
+  DECL_VARIABLE(vector2, int, 32, 4);
+  DECL_VARIABLE(vector2, uint, 8, 16);
+  DECL_VARIABLE(vector2, uint, 16, 8);
+  DECL_VARIABLE(vector2, uint, 32, 4);
+  DECL_VARIABLE(vector2, float, 32, 4);
+
+  DECL_VARIABLE(vector_res, uint, 8, 8);
+  DECL_VARIABLE(vector_res, uint, 16, 4);
+  DECL_VARIABLE(vector_res, uint, 32, 2);
+  DECL_VARIABLE(vector_res, uint, 8, 16);
+  DECL_VARIABLE(vector_res, uint, 16, 8);
+  DECL_VARIABLE(vector_res, uint, 32, 4);
+
+  clean_results ();
+
+  /* There is no 64-bit variant, so don't use the generic initializer.  */
+  VLOAD(vector, buffer, , int, s, 8, 8);
+  VLOAD(vector, buffer, , int, s, 16, 4);
+  VLOAD(vector, buffer, , int, s, 32, 2);
+  VLOAD(vector, buffer, , uint, u, 8, 8);
+  VLOAD(vector, buffer, , uint, u, 16, 4);
+  VLOAD(vector, buffer, , uint, u, 32, 2);
+  VLOAD(vector, buffer, , float, f, 32, 2);
+
+  VLOAD(vector, buffer, q, int, s, 8, 16);
+  VLOAD(vector, buffer, q, int, s, 16, 8);
+  VLOAD(vector, buffer, q, int, s, 32, 4);
+  VLOAD(vector, buffer, q, uint, u, 8, 16);
+  VLOAD(vector, buffer, q, uint, u, 16, 8);
+  VLOAD(vector, buffer, q, uint, u, 32, 4);
+  VLOAD(vector, buffer, q, float, f, 32, 4);
+
+  /* Choose init value arbitrarily; it will be used for vector
+     comparison.  */
+  VDUP(vector2, , int, s, 8, 8, -10);
+  VDUP(vector2, , int, s, 16, 4, -14);
+  VDUP(vector2, , int, s, 32, 2, -16);
+  VDUP(vector2, , uint, u, 8, 8, 0xF3);
+  VDUP(vector2, , uint, u, 16, 4, 0xFFF2);
+  VDUP(vector2, , uint, u, 32, 2, 0xFFFFFFF1);
+  VDUP(vector2, , float, f, 32, 2, -15.0f);
+
+  VDUP(vector2, q, int, s, 8, 16, -4);
+  VDUP(vector2, q, int, s, 16, 8, -10);
+  VDUP(vector2, q, int, s, 32, 4, -14);
+  VDUP(vector2, q, uint, u, 8, 16, 0xF4);
+  VDUP(vector2, q, uint, u, 16, 8, 0xFFF6);
+  VDUP(vector2, q, uint, u, 32, 4, 0xFFFFFFF2);
+  VDUP(vector2, q, float, f, 32, 4, -14.0f);
+
+  /* The comparison operators produce only unsigned results, which
+     means that our tests with uint* inputs write their results in the
+     same vectors as the int* variants.  As a consequence, we have to
+     execute and check the int* variants first, then the uint* ones.
+     The same applies to float and poly8.
+  */
+
+  /* Apply the operator named INSN_NAME.  */
+  TEST_VCOMP(INSN_NAME, , int, s, uint, 8, 8);
+  TEST_VCOMP(INSN_NAME, , int, s, uint, 16, 4);
+  TEST_VCOMP(INSN_NAME, , int, s, uint, 32, 2);
+  TEST_VCOMP(INSN_NAME, q, int, s, uint, 8, 16);
+  TEST_VCOMP(INSN_NAME, q, int, s, uint, 16, 8);
+  TEST_VCOMP(INSN_NAME, q, int, s, uint, 32, 4);
+
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected, "");
+  CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected, "");
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected, "");
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected, "");
+  CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected, "");
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected, "");
+
+  /* Now the uint* variants.  */
+  TEST_VCOMP(INSN_NAME, , uint, u, uint, 8, 8);
+  TEST_VCOMP(INSN_NAME, , uint, u, uint, 16, 4);
+  TEST_VCOMP(INSN_NAME, , uint, u, uint, 32, 2);
+  TEST_VCOMP(INSN_NAME, q, uint, u, uint, 8, 16);
+  TEST_VCOMP(INSN_NAME, q, uint, u, uint, 16, 8);
+  TEST_VCOMP(INSN_NAME, q, uint, u, uint, 32, 4);
+
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_uint, "");
+  CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected_uint, "");
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_uint, "");
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_q_uint, "");
+  CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected_q_uint, "");
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected_q_uint, "");
+
+  /* The float variants.  */
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_float, "");
+
+  TEST_VCOMP(INSN_NAME, q, float, f, uint, 32, 4);
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected_q_float, "");
+
+  /* Some "special" input values to test some corner cases.  */
+  /* Extra tests to have 100% coverage on all the variants.  */
+  VDUP(vector2, , uint, u, 32, 2, 0xFFFFFFF0);
+  TEST_VCOMP(INSN_NAME, , uint, u, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_uint2, "uint 0xfffffff0");
+
+  VDUP(vector2, , int, s, 32, 2, -15);
+  TEST_VCOMP(INSN_NAME, , int, s, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_uint3, "int -15");
+
+  VDUP(vector2, , float, f, 32, 2, -16.0f);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_uint4, "float -16.0f");
+
+
+  /* Extra FP tests with special values (NaN, ...).
+  */
+  VDUP(vector, , float, f, 32, 2, 1.0);
+  VDUP(vector2, , float, f, 32, 2, NAN);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_nan, "FP special (NaN)");
+
+  VDUP(vector, , float, f, 32, 2, 1.0);
+  VDUP(vector2, , float, f, 32, 2, -NAN);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_mnan, " FP special (-NaN)");
+
+  VDUP(vector, , float, f, 32, 2, NAN);
+  VDUP(vector2, , float, f, 32, 2, 1.0);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_nan2, " FP special (NaN)");
+
+  VDUP(vector, , float, f, 32, 2, 1.0);
+  VDUP(vector2, , float, f, 32, 2, HUGE_VALF);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_inf, " FP special (inf)");
+
+  VDUP(vector, , float, f, 32, 2, 1.0);
+  VDUP(vector2, , float, f, 32, 2, -HUGE_VALF);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_minf, " FP special (-inf)");
+
+  VDUP(vector, , float, f, 32, 2, HUGE_VALF);
+  VDUP(vector2, , float, f, 32, 2, 1.0);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_inf2, " FP special (inf)");
+
+  VDUP(vector, , float, f, 32, 2, -0.0);
+  VDUP(vector2, , float, f, 32, 2, 0.0);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_mzero, " FP special (-0.0)");
+
+#ifdef EXTRA_TESTS
+  EXTRA_TESTS();
+#endif
+}
+
+int main (void)
+{
+  FNNAME (INSN_NAME) ();
+
+  return 0;
+}
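To make the macro plumbing above concrete, here is roughly what a single TEST_VCOMP invocation boils down to once the token pasting is done; the wrapper function and parameter names below are illustrative only, since the harness really operates on its VECT_VAR globals and result buffers:

#include <arm_neon.h>

/* Sketch of TEST_VCOMP(vceq, , uint, u, uint, 32, 2):
   INSN##Q##_##T2##W pastes to vceq_u32 and vst1##Q##_u##W to vst1_u32
   (a q invocation would paste to vceqq_u32 and vst1q_u32 instead).  */
void
test_vcomp_vceq_u32_sketch (uint32x2_t vector, uint32x2_t vector2,
                            uint32_t *result)
{
  /* Compare lane-wise, then store the unsigned mask for CHECK to read.  */
  uint32x2_t vector_res = vceq_u32 (vector, vector2);
  vst1_u32 (result, vector_res);
}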
diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vceq.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vceq.c
new file mode 100644
index 0000000..aa095df
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vceq.c
@@ -0,0 +1,113 @@
+#define INSN_NAME vceq
+#define TEST_MSG "VCEQ/VCEQQ"
+
+/* Extra tests for _p8 variants, which exist only for vceq.  */
+void exec_vceq_p8(void);
+#define EXTRA_TESTS exec_vceq_p8
+
+#include "cmp_op.inc"
+
+/* Expected results.  */
+VECT_VAR_DECL(expected,int,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                       0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,8) [] = { 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff, 0x0 };
+VECT_VAR_DECL(expected,uint,16,4) [] = { 0x0, 0x0, 0xffff, 0x0 };
+VECT_VAR_DECL(expected,uint,32,2) [] = { 0xffffffff, 0x0 };
+VECT_VAR_DECL(expected,uint,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                        0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,4) [] = { 0x33333333, 0x33333333,
+                                        0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,2) [] = { 0x3333333333333333,
+                                        0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0x0, 0x0,
+                                         0xff, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0xffff, 0x0 };
+VECT_VAR_DECL(expected,uint,32,4) [] = { 0x0, 0x0, 0xffffffff, 0x0 };
+VECT_VAR_DECL(expected,uint,64,2) [] = { 0x3333333333333333,
+                                         0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                         0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0x33333333, 0x33333333,
+                                           0x33333333, 0x33333333 };
+
+VECT_VAR_DECL(expected_uint,uint,8,8) [] = { 0x0, 0x0, 0x0, 0xff,
+                                             0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_uint,uint,16,4) [] = { 0x0, 0x0, 0xffff, 0x0 };
+VECT_VAR_DECL(expected_uint,uint,32,2) [] = { 0x0, 0xffffffff };
+
+VECT_VAR_DECL(expected_q_uint,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                0xff, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_q_uint,uint,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0xffff, 0x0 };
+VECT_VAR_DECL(expected_q_uint,uint,32,4) [] = { 0x0, 0x0, 0xffffffff, 0x0 };
+
+VECT_VAR_DECL(expected_float,uint,32,2) [] = { 0x0, 0xffffffff };
+VECT_VAR_DECL(expected_q_float,uint,32,4) [] = { 0x0, 0x0, 0xffffffff, 0x0 };
+
+VECT_VAR_DECL(expected_uint2,uint,32,2) [] = { 0xffffffff, 0x0 };
+VECT_VAR_DECL(expected_uint3,uint,32,2) [] = { 0x0, 0xffffffff };
+VECT_VAR_DECL(expected_uint4,uint,32,2) [] = { 0xffffffff, 0x0 };
+
+VECT_VAR_DECL(expected_nan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_mnan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_nan2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_inf,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_minf,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_inf2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_mzero,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+
+VECT_VAR_DECL(expected_p8,uint,8,8) [] = { 0x0, 0x0, 0x0, 0xff,
+                                           0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_q_p8,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                              0xff, 0x0, 0x0, 0x0,
+                                              0x0, 0x0, 0x0, 0x0,
+                                              0x0, 0x0, 0x0, 0x0 };
+
+void exec_vceq_p8(void)
+{
+  DECL_VARIABLE(vector, poly, 8, 8);
+  DECL_VARIABLE(vector, poly, 8, 16);
+
+  DECL_VARIABLE(vector2, poly, 8, 8);
+  DECL_VARIABLE(vector2, poly, 8, 16);
+
+  DECL_VARIABLE(vector_res, uint, 8, 8);
+  DECL_VARIABLE(vector_res, uint, 8, 16);
+
+  clean_results ();
+
+  VLOAD(vector, buffer, , poly, p, 8, 8);
+  VLOAD(vector, buffer, q, poly, p, 8, 16);
+
+  VDUP(vector2, , poly, p, 8, 8, 0xF3);
+  VDUP(vector2, q, poly, p, 8, 16, 0xF4);
+
+  TEST_VCOMP(INSN_NAME, , poly, p, uint, 8, 8);
+  TEST_VCOMP(INSN_NAME, q, poly, p, uint, 8, 16);
+
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_p8, "p8");
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_q_p8, "p8");
+}
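A note on the _p8 tests above: polynomial elements carry no meaningful ordering, only bit-pattern equality, so vceq is the only operator in this series with poly8 variants (hence the EXTRA_TESTS hook). A standalone, hedged sketch with illustrative values, not part of the patch:

#include <arm_neon.h>
#include <stdio.h>

int main (void)
{
  poly8x8_t x = vdup_n_p8 (0xf3);
  poly8x8_t y = vdup_n_p8 (0xf4);

  uint8x8_t same = vceq_p8 (x, x);   /* all lanes 0xff */
  uint8x8_t diff = vceq_p8 (x, y);   /* all lanes 0x00 */

  uint8_t s[8], d[8];
  vst1_u8 (s, same);
  vst1_u8 (d, diff);
  printf ("equal: 0x%02x  different: 0x%02x\n", s[0], d[0]);
  return 0;
}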
diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vcge.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vcge.c
new file mode 100644
index 0000000..236fd82
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vcge.c
@@ -0,0 +1,76 @@
+#define INSN_NAME vcge
+#define TEST_MSG "VCGE/VCGEQ"
+
+#include "cmp_op.inc"
+
+/* Expected results.  */
+VECT_VAR_DECL(expected,int,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                       0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,8) [] = { 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff, 0xff };
+VECT_VAR_DECL(expected,uint,16,4) [] = { 0x0, 0x0, 0xffff, 0xffff };
+VECT_VAR_DECL(expected,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected,uint,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                        0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,4) [] = { 0x33333333, 0x33333333,
+                                        0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,2) [] = { 0x3333333333333333,
+                                        0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0x0, 0x0,
+                                         0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected,uint,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0xffff, 0xffff };
+VECT_VAR_DECL(expected,uint,32,4) [] = { 0x0, 0x0, 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected,uint,64,2) [] = { 0x3333333333333333,
+                                         0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                         0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0x33333333, 0x33333333,
+                                           0x33333333, 0x33333333 };
+
+VECT_VAR_DECL(expected_uint,uint,8,8) [] = { 0x0, 0x0, 0x0, 0xff,
+                                             0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected_uint,uint,16,4) [] = { 0x0, 0x0, 0xffff, 0xffff };
+VECT_VAR_DECL(expected_uint,uint,32,2) [] = { 0x0, 0xffffffff };
+
+VECT_VAR_DECL(expected_q_uint,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                0xff, 0xff, 0xff, 0xff,
+                                                0xff, 0xff, 0xff, 0xff,
+                                                0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected_q_uint,uint,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0xffff, 0xffff };
+VECT_VAR_DECL(expected_q_uint,uint,32,4) [] = { 0x0, 0x0, 0xffffffff, 0xffffffff };
+
+VECT_VAR_DECL(expected_float,uint,32,2) [] = { 0x0, 0xffffffff };
+VECT_VAR_DECL(expected_q_float,uint,32,4) [] = { 0x0, 0x0, 0xffffffff, 0xffffffff };
+
+VECT_VAR_DECL(expected_uint2,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected_uint3,uint,32,2) [] = { 0x0, 0xffffffff };
+VECT_VAR_DECL(expected_uint4,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+
+VECT_VAR_DECL(expected_nan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_mnan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_nan2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_inf,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_minf,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected_inf2,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+
+VECT_VAR_DECL(expected_mzero,uint,32,2) [] = { 0xffffffff, 0xffffffff };
diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vcgt.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vcgt.c
new file mode 100644
index 0000000..23aaa01
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vcgt.c
@@ -0,0 +1,76 @@
+#define INSN_NAME vcgt
+#define TEST_MSG "VCGT/VCGTQ"
+
+#include "cmp_op.inc"
+
+/* Expected results.  */
+VECT_VAR_DECL(expected,int,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                       0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,8) [] = { 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff };
+VECT_VAR_DECL(expected,uint,16,4) [] = { 0x0, 0x0, 0x0, 0xffff };
+VECT_VAR_DECL(expected,uint,32,2) [] = { 0x0, 0xffffffff };
+VECT_VAR_DECL(expected,uint,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                        0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,4) [] = { 0x33333333, 0x33333333,
+                                        0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,2) [] = { 0x3333333333333333,
+                                        0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected,uint,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0x0, 0xffff };
+VECT_VAR_DECL(expected,uint,32,4) [] = { 0x0, 0x0, 0x0, 0xffffffff };
+VECT_VAR_DECL(expected,uint,64,2) [] = { 0x3333333333333333,
+                                         0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                         0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0x33333333, 0x33333333,
+                                           0x33333333, 0x33333333 };
+
+VECT_VAR_DECL(expected_uint,uint,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                             0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected_uint,uint,16,4) [] = { 0x0, 0x0, 0x0, 0xffff };
+VECT_VAR_DECL(expected_uint,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_q_uint,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0xff, 0xff, 0xff,
+                                                0xff, 0xff, 0xff, 0xff,
+                                                0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected_q_uint,uint,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0xffff };
+VECT_VAR_DECL(expected_q_uint,uint,32,4) [] = { 0x0, 0x0, 0x0, 0xffffffff };
+
+VECT_VAR_DECL(expected_float,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_q_float,uint,32,4) [] = { 0x0, 0x0, 0x0, 0xffffffff };
+
+VECT_VAR_DECL(expected_uint2,uint,32,2) [] = { 0x0, 0xffffffff };
+VECT_VAR_DECL(expected_uint3,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_uint4,uint,32,2) [] = { 0x0, 0xffffffff };
+
+VECT_VAR_DECL(expected_nan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_mnan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_nan2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_inf,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_minf,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected_inf2,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+
+VECT_VAR_DECL(expected_mzero,uint,32,2) [] = { 0x0, 0x0 };
diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vcle.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vcle.c
new file mode 100644
index 0000000..e4cad0c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vcle.c
@@ -0,0 +1,80 @@
+#define INSN_NAME vcle
+#define TEST_MSG "VCLE/VCLEQ"
+
+#include "cmp_op.inc"
+
+/* Expected results.  */
+VECT_VAR_DECL(expected,int,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                       0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,8) [] = { 0xff, 0xff, 0xff, 0xff,
+                                        0xff, 0xff, 0xff, 0x0 };
+VECT_VAR_DECL(expected,uint,16,4) [] = { 0xffff, 0xffff, 0xffff, 0x0 };
+VECT_VAR_DECL(expected,uint,32,2) [] = { 0xffffffff, 0x0 };
+VECT_VAR_DECL(expected,uint,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                        0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,4) [] = { 0x33333333, 0x33333333,
+                                        0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,2) [] = { 0x3333333333333333,
+                                        0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,16) [] = { 0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,16,8) [] = { 0xffff, 0xffff, 0xffff, 0xffff,
+                                         0xffff, 0xffff, 0xffff, 0x0 };
+VECT_VAR_DECL(expected,uint,32,4) [] = { 0xffffffff, 0xffffffff,
+                                         0xffffffff, 0x0 };
+VECT_VAR_DECL(expected,uint,64,2) [] = { 0x3333333333333333,
+                                         0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                         0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0x33333333, 0x33333333,
+                                           0x33333333, 0x33333333 };
+
+VECT_VAR_DECL(expected_uint,uint,8,8) [] = { 0xff, 0xff, 0xff, 0xff,
+                                             0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_uint,uint,16,4) [] = { 0xffff, 0xffff, 0xffff, 0x0 };
+VECT_VAR_DECL(expected_uint,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+
+VECT_VAR_DECL(expected_q_uint,uint,8,16) [] = { 0xff, 0xff, 0xff, 0xff,
+                                                0xff, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_q_uint,uint,16,8) [] = { 0xffff, 0xffff, 0xffff, 0xffff,
+                                                0xffff, 0xffff, 0xffff, 0x0 };
+VECT_VAR_DECL(expected_q_uint,uint,32,4) [] = { 0xffffffff, 0xffffffff,
+                                                0xffffffff, 0x0 };
+
+VECT_VAR_DECL(expected_float,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected_q_float,uint,32,4) [] = { 0xffffffff, 0xffffffff,
+                                                 0xffffffff, 0x0 };
+
+VECT_VAR_DECL(expected_uint2,uint,32,2) [] = { 0xffffffff, 0x0 };
+VECT_VAR_DECL(expected_uint3,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected_uint4,uint,32,2) [] = { 0xffffffff, 0x0 };
+
+VECT_VAR_DECL(expected_nan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_mnan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_nan2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_inf,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected_minf,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_inf2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_mzero,uint,32,2) [] = { 0xffffffff, 0xffffffff };
diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vclt.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vclt.c
new file mode 100644
index 0000000..d437eae
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vclt.c
@@ -0,0 +1,79 @@
+#define INSN_NAME vclt
+#define TEST_MSG "VCLT/VCLTQ"
+
+#include "cmp_op.inc"
+
+/* Expected results.  */
+VECT_VAR_DECL(expected,int,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                       0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,8) [] = { 0xff, 0xff, 0xff, 0xff,
+                                        0xff, 0xff, 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,16,4) [] = { 0xffff, 0xffff, 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                        0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,4) [] = { 0x33333333, 0x33333333,
+                                        0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,2) [] = { 0x3333333333333333,
+                                        0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,16) [] = { 0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0xff, 0xff, 0xff,
+                                         0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,16,8) [] = { 0xffff, 0xffff, 0xffff, 0xffff,
+                                         0xffff, 0xffff, 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,32,4) [] = { 0xffffffff, 0xffffffff, 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,64,2) [] = { 0x3333333333333333,
+                                         0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                         0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0x33333333, 0x33333333,
+                                           0x33333333, 0x33333333 };
+
+VECT_VAR_DECL(expected_uint,uint,8,8) [] = { 0xff, 0xff, 0xff, 0x0,
+                                             0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_uint,uint,16,4) [] = { 0xffff, 0xffff, 0x0, 0x0 };
+VECT_VAR_DECL(expected_uint,uint,32,2) [] = { 0xffffffff, 0x0 };
+
+VECT_VAR_DECL(expected_q_uint,uint,8,16) [] = { 0xff, 0xff, 0xff, 0xff,
+                                                0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_q_uint,uint,16,8) [] = { 0xffff, 0xffff, 0xffff, 0xffff,
+                                                0xffff, 0xffff, 0x0, 0x0 };
+VECT_VAR_DECL(expected_q_uint,uint,32,4) [] = { 0xffffffff, 0xffffffff,
+                                                0x0, 0x0 };
+
+VECT_VAR_DECL(expected_float,uint,32,2) [] = { 0xffffffff, 0x0 };
+VECT_VAR_DECL(expected_q_float,uint,32,4) [] = { 0xffffffff, 0xffffffff,
+                                                 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_uint2,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_uint3,uint,32,2) [] = { 0xffffffff, 0x0 };
+VECT_VAR_DECL(expected_uint4,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_nan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_mnan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_nan2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_inf,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected_minf,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_inf2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_mzero,uint,32,2) [] = { 0x0, 0x0 };
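The expected_nan, expected_inf and expected_mzero tables in the five files follow the usual IEEE 754 rules: vceq, vcge, vcgt, vcle and vclt all yield false (lane 0x0) when either operand is a NaN, while -0.0 compares equal to +0.0. A small, hedged sketch of the behaviour those vectors encode (not part of the patch; AdvSIMD target assumed, values illustrative):

#include <arm_neon.h>
#include <inttypes.h>
#include <math.h>
#include <stdio.h>

static void
show (const char *msg, uint32x2_t mask)
{
  uint32_t r[2];
  vst1_u32 (r, mask);
  printf ("%s: 0x%08" PRIx32 " 0x%08" PRIx32 "\n", msg, r[0], r[1]);
}

int main (void)
{
  float32x2_t one  = vdup_n_f32 (1.0f);
  float32x2_t qnan = vdup_n_f32 (NAN);
  float32x2_t mz   = vdup_n_f32 (-0.0f);
  float32x2_t pz   = vdup_n_f32 (0.0f);

  show ("vceq(1.0, NaN) ", vceq_f32 (one, qnan));  /* 0x0, 0x0 */
  show ("vcge(1.0, NaN) ", vcge_f32 (one, qnan));  /* 0x0, 0x0 */
  show ("vceq(-0.0, 0.0)", vceq_f32 (mz, pz));     /* all-ones lanes */
  show ("vcgt(-0.0, 0.0)", vcgt_f32 (mz, pz));     /* 0x0, 0x0 */
  return 0;
}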