From patchwork Tue Jul 1 10:05:57 2014
X-Patchwork-Submitter: Christophe Lyon
X-Patchwork-Id: 32870
From: Christophe Lyon <christophe.lyon@linaro.org>
To: gcc-patches@gcc.gnu.org
Subject: [Patch ARM-AArch64/testsuite v2 04/21] Add comparison operators: vceq, vcge, vcgt, vcle and vclt.
Date: Tue, 1 Jul 2014 12:05:57 +0200
Message-Id: <1404209174-25364-5-git-send-email-christophe.lyon@linaro.org>
In-Reply-To: <1404209174-25364-1-git-send-email-christophe.lyon@linaro.org>
References: <1404209174-25364-1-git-send-email-christophe.lyon@linaro.org>

diff --git a/gcc/testsuite/ChangeLog b/gcc/testsuite/ChangeLog
index 73709c6..7af7fd0 100644
--- a/gcc/testsuite/ChangeLog
+++ b/gcc/testsuite/ChangeLog
@@ -1,5 +1,14 @@
 2014-06-30  Christophe Lyon  <christophe.lyon@linaro.org>
 
+	* gcc.target/aarch64/neon-intrinsics/cmp_op.inc: New file.
+	* gcc.target/aarch64/neon-intrinsics/vceq.c: Likewise.
+	* gcc.target/aarch64/neon-intrinsics/vcge.c: Likewise.
+	* gcc.target/aarch64/neon-intrinsics/vcgt.c: Likewise.
+	* gcc.target/aarch64/neon-intrinsics/vcle.c: Likewise.
+	* gcc.target/aarch64/neon-intrinsics/vclt.c: Likewise.
+
+2014-06-30  Christophe Lyon  <christophe.lyon@linaro.org>
+
 	* gcc.target/aarch64/neon-intrinsics/binary_op.inc: New file.
 	* gcc.target/aarch64/neon-intrinsics/vadd.c: Likewise.
 	* gcc.target/aarch64/neon-intrinsics/vand.c: Likewise.
diff --git a/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/cmp_op.inc b/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/cmp_op.inc
new file mode 100644
index 0000000..a09c5f5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/cmp_op.inc
@@ -0,0 +1,224 @@
+#include <arm_neon.h>
+#include "arm-neon-ref.h"
+#include "compute-ref-data.h"
+#include <math.h>
+
+/* Additional expected results declarations, they are initialized in
+   each test file.  */
+extern ARRAY(expected_uint, uint, 8, 8);
+extern ARRAY(expected_uint, uint, 16, 4);
+extern ARRAY(expected_uint, uint, 32, 2);
+extern ARRAY(expected_q_uint, uint, 8, 16);
+extern ARRAY(expected_q_uint, uint, 16, 8);
+extern ARRAY(expected_q_uint, uint, 32, 4);
+extern ARRAY(expected_float, uint, 32, 2);
+extern ARRAY(expected_q_float, uint, 32, 4);
+extern ARRAY(expected_uint2, uint, 32, 2);
+extern ARRAY(expected_uint3, uint, 32, 2);
+extern ARRAY(expected_uint4, uint, 32, 2);
+extern ARRAY(expected_nan, uint, 32, 2);
+extern ARRAY(expected_mnan, uint, 32, 2);
+extern ARRAY(expected_nan2, uint, 32, 2);
+extern ARRAY(expected_inf, uint, 32, 2);
+extern ARRAY(expected_minf, uint, 32, 2);
+extern ARRAY(expected_inf2, uint, 32, 2);
+extern ARRAY(expected_mzero, uint, 32, 2);
+extern ARRAY(expected_p8, uint, 8, 8);
+extern ARRAY(expected_q_p8, uint, 8, 16);
+
+#define FNNAME1(NAME) exec_ ## NAME
+#define FNNAME(NAME) FNNAME1(NAME)
+
+void FNNAME (INSN_NAME) (void)
+{
+  /* Basic test: y=vcomp(x1,x2), then store the result.  */
+#define TEST_VCOMP1(INSN, Q, T1, T2, T3, W, N)			\
+  VECT_VAR(vector_res, T3, W, N) =				\
+    INSN##Q##_##T2##W(VECT_VAR(vector, T1, W, N),		\
+                      VECT_VAR(vector2, T1, W, N));		\
+  vst1##Q##_u##W(VECT_VAR(result, T3, W, N), VECT_VAR(vector_res, T3, W, N))
+
+#define TEST_VCOMP(INSN, Q, T1, T2, T3, W, N)			\
+  TEST_VCOMP1(INSN, Q, T1, T2, T3, W, N)
+
+  /* No need for 64-bit elements.  */
+  DECL_VARIABLE(vector, int, 8, 8);
+  DECL_VARIABLE(vector, int, 16, 4);
+  DECL_VARIABLE(vector, int, 32, 2);
+  DECL_VARIABLE(vector, uint, 8, 8);
+  DECL_VARIABLE(vector, uint, 16, 4);
+  DECL_VARIABLE(vector, uint, 32, 2);
+  DECL_VARIABLE(vector, float, 32, 2);
+  DECL_VARIABLE(vector, int, 8, 16);
+  DECL_VARIABLE(vector, int, 16, 8);
+  DECL_VARIABLE(vector, int, 32, 4);
+  DECL_VARIABLE(vector, uint, 8, 16);
+  DECL_VARIABLE(vector, uint, 16, 8);
+  DECL_VARIABLE(vector, uint, 32, 4);
+  DECL_VARIABLE(vector, float, 32, 4);
+
+  DECL_VARIABLE(vector2, int, 8, 8);
+  DECL_VARIABLE(vector2, int, 16, 4);
+  DECL_VARIABLE(vector2, int, 32, 2);
+  DECL_VARIABLE(vector2, uint, 8, 8);
+  DECL_VARIABLE(vector2, uint, 16, 4);
+  DECL_VARIABLE(vector2, uint, 32, 2);
+  DECL_VARIABLE(vector2, float, 32, 2);
+  DECL_VARIABLE(vector2, int, 8, 16);
+  DECL_VARIABLE(vector2, int, 16, 8);
+  DECL_VARIABLE(vector2, int, 32, 4);
+  DECL_VARIABLE(vector2, uint, 8, 16);
+  DECL_VARIABLE(vector2, uint, 16, 8);
+  DECL_VARIABLE(vector2, uint, 32, 4);
+  DECL_VARIABLE(vector2, float, 32, 4);
+
+  DECL_VARIABLE(vector_res, uint, 8, 8);
+  DECL_VARIABLE(vector_res, uint, 16, 4);
+  DECL_VARIABLE(vector_res, uint, 32, 2);
+  DECL_VARIABLE(vector_res, uint, 8, 16);
+  DECL_VARIABLE(vector_res, uint, 16, 8);
+  DECL_VARIABLE(vector_res, uint, 32, 4);
+
+  clean_results ();
+
+  /* There is no 64-bit variant, don't use the generic initializer.  */
+  VLOAD(vector, buffer, , int, s, 8, 8);
+  VLOAD(vector, buffer, , int, s, 16, 4);
+  VLOAD(vector, buffer, , int, s, 32, 2);
+  VLOAD(vector, buffer, , uint, u, 8, 8);
+  VLOAD(vector, buffer, , uint, u, 16, 4);
+  VLOAD(vector, buffer, , uint, u, 32, 2);
+  VLOAD(vector, buffer, , float, f, 32, 2);
+
+  VLOAD(vector, buffer, q, int, s, 8, 16);
+  VLOAD(vector, buffer, q, int, s, 16, 8);
+  VLOAD(vector, buffer, q, int, s, 32, 4);
+  VLOAD(vector, buffer, q, uint, u, 8, 16);
+  VLOAD(vector, buffer, q, uint, u, 16, 8);
+  VLOAD(vector, buffer, q, uint, u, 32, 4);
+  VLOAD(vector, buffer, q, float, f, 32, 4);
+
+  /* Choose init value arbitrarily, will be used for vector
+     comparison.  */
+  VDUP(vector2, , int, s, 8, 8, -10);
+  VDUP(vector2, , int, s, 16, 4, -14);
+  VDUP(vector2, , int, s, 32, 2, -16);
+  VDUP(vector2, , uint, u, 8, 8, 0xF3);
+  VDUP(vector2, , uint, u, 16, 4, 0xFFF2);
+  VDUP(vector2, , uint, u, 32, 2, 0xFFFFFFF1);
+  VDUP(vector2, , float, f, 32, 2, -15.0f);
+
+  VDUP(vector2, q, int, s, 8, 16, -4);
+  VDUP(vector2, q, int, s, 16, 8, -10);
+  VDUP(vector2, q, int, s, 32, 4, -14);
+  VDUP(vector2, q, uint, u, 8, 16, 0xF4);
+  VDUP(vector2, q, uint, u, 16, 8, 0xFFF6);
+  VDUP(vector2, q, uint, u, 32, 4, 0xFFFFFFF2);
+  VDUP(vector2, q, float, f, 32, 4, -14.0f);
+
+  /* The comparison operators produce only unsigned results, which
+     means that our tests with uint* inputs write their results in the
+     same vectors as the int* variants.  As a consequence, we have to
+     execute and test the int* first, then the uint* ones.
+     Same thing for float and poly8.
+  */
+
+  /* Apply operator named INSN_NAME.  */
+  TEST_VCOMP(INSN_NAME, , int, s, uint, 8, 8);
+  TEST_VCOMP(INSN_NAME, , int, s, uint, 16, 4);
+  TEST_VCOMP(INSN_NAME, , int, s, uint, 32, 2);
+  TEST_VCOMP(INSN_NAME, q, int, s, uint, 8, 16);
+  TEST_VCOMP(INSN_NAME, q, int, s, uint, 16, 8);
+  TEST_VCOMP(INSN_NAME, q, int, s, uint, 32, 4);
+
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected, "");
+  CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected, "");
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected, "");
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected, "");
+  CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected, "");
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected, "");
+
+  /* Now the uint* variants.  */
+  TEST_VCOMP(INSN_NAME, , uint, u, uint, 8, 8);
+  TEST_VCOMP(INSN_NAME, , uint, u, uint, 16, 4);
+  TEST_VCOMP(INSN_NAME, , uint, u, uint, 32, 2);
+  TEST_VCOMP(INSN_NAME, q, uint, u, uint, 8, 16);
+  TEST_VCOMP(INSN_NAME, q, uint, u, uint, 16, 8);
+  TEST_VCOMP(INSN_NAME, q, uint, u, uint, 32, 4);
+
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_uint, "");
+  CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected_uint, "");
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_uint, "");
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_q_uint, "");
+  CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected_q_uint, "");
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected_q_uint, "");
+
+  /* The float variants.  */
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_float, "");
+
+  TEST_VCOMP(INSN_NAME, q, float, f, uint, 32, 4);
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected_q_float, "");
+
+  /* Some "special" input values to test some corner cases.  */
+  /* Extra tests to have 100% coverage on all the variants.  */
+  VDUP(vector2, , uint, u, 32, 2, 0xFFFFFFF0);
+  TEST_VCOMP(INSN_NAME, , uint, u, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_uint2, "uint 0xfffffff0");
+
+  VDUP(vector2, , int, s, 32, 2, -15);
+  TEST_VCOMP(INSN_NAME, , int, s, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_uint3, "int -15");
+
+  VDUP(vector2, , float, f, 32, 2, -16.0f);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_uint4, "float -16.0f");
+
+
+  /* Extra FP tests with special values (NaN, ....).  */
+  VDUP(vector, , float, f, 32, 2, 1.0);
+  VDUP(vector2, , float, f, 32, 2, NAN);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_nan, "FP special (NaN)");
+
+  VDUP(vector, , float, f, 32, 2, 1.0);
+  VDUP(vector2, , float, f, 32, 2, -NAN);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_mnan, "FP special (-NaN)");
+
+  VDUP(vector, , float, f, 32, 2, NAN);
+  VDUP(vector2, , float, f, 32, 2, 1.0);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_nan2, "FP special (NaN)");
+
+  VDUP(vector, , float, f, 32, 2, 1.0);
+  VDUP(vector2, , float, f, 32, 2, HUGE_VALF);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_inf, "FP special (inf)");
+
+  VDUP(vector, , float, f, 32, 2, 1.0);
+  VDUP(vector2, , float, f, 32, 2, -HUGE_VALF);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_minf, "FP special (-inf)");
+
+  VDUP(vector, , float, f, 32, 2, HUGE_VALF);
+  VDUP(vector2, , float, f, 32, 2, 1.0);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_inf2, "FP special (inf)");
+
+  VDUP(vector, , float, f, 32, 2, -0.0);
+  VDUP(vector2, , float, f, 32, 2, 0.0);
+  TEST_VCOMP(INSN_NAME, , float, f, uint, 32, 2);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_mzero, "FP special (-0.0)");
+
+#ifdef EXTRA_TESTS
+  EXTRA_TESTS();
+#endif
+}
+
+int main (void)
+{
+  FNNAME (INSN_NAME) ();
+
+  return 0;
+}
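
Note for readers unfamiliar with the harness macros (this is commentary, not part
of the patch): each TEST_VCOMP line is just "compare two preloaded vectors, store
the all-ones/all-zeros lane mask into the result buffer", and CHECK then compares
that buffer against the expected_* arrays declared in the per-operator test files.
Assuming VECT_VAR glues its arguments into names such as vector_int32x2 and
result_uint32x2, as the other neon-intrinsics helpers do, the non-q int32x2
variant expands to roughly:

  /* Approximate expansion of TEST_VCOMP(vceq, , int, s, uint, 32, 2);
     the variable names below are illustrative only.  */
  vector_res_uint32x2 = vceq_s32 (vector_int32x2, vector2_int32x2);
  vst1_u32 (result_uint32x2, vector_res_uint32x2);
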
diff --git a/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vceq.c b/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vceq.c
new file mode 100644
index 0000000..aa095df
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vceq.c
@@ -0,0 +1,113 @@
+#define INSN_NAME vceq
+#define TEST_MSG "VCEQ/VCEQQ"
+
+/* Extra tests for _p8 variants, which exist only for vceq.  */
+void exec_vceq_p8(void);
+#define EXTRA_TESTS exec_vceq_p8
+
+#include "cmp_op.inc"
+
+/* Expected results.  */
+VECT_VAR_DECL(expected,int,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                       0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,8) [] = { 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff, 0x0 };
+VECT_VAR_DECL(expected,uint,16,4) [] = { 0x0, 0x0, 0xffff, 0x0 };
+VECT_VAR_DECL(expected,uint,32,2) [] = { 0xffffffff, 0x0 };
+VECT_VAR_DECL(expected,uint,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                        0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,4) [] = { 0x33333333, 0x33333333,
+                                        0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,2) [] = { 0x3333333333333333,
+                                        0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0x0, 0x0,
+                                         0xff, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0xffff, 0x0 };
+VECT_VAR_DECL(expected,uint,32,4) [] = { 0x0, 0x0, 0xffffffff, 0x0 };
+VECT_VAR_DECL(expected,uint,64,2) [] = { 0x3333333333333333,
+                                         0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                         0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0x33333333, 0x33333333,
+                                           0x33333333, 0x33333333 };
+
+VECT_VAR_DECL(expected_uint,uint,8,8) [] = { 0x0, 0x0, 0x0, 0xff,
+                                             0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_uint,uint,16,4) [] = { 0x0, 0x0, 0xffff, 0x0 };
+VECT_VAR_DECL(expected_uint,uint,32,2) [] = { 0x0, 0xffffffff };
+
+VECT_VAR_DECL(expected_q_uint,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                0xff, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_q_uint,uint,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0xffff, 0x0 };
+VECT_VAR_DECL(expected_q_uint,uint,32,4) [] = { 0x0, 0x0, 0xffffffff, 0x0 };
+
+VECT_VAR_DECL(expected_float,uint,32,2) [] = { 0x0, 0xffffffff };
+VECT_VAR_DECL(expected_q_float,uint,32,4) [] = { 0x0, 0x0, 0xffffffff, 0x0 };
+
+VECT_VAR_DECL(expected_uint2,uint,32,2) [] = { 0xffffffff, 0x0 };
+VECT_VAR_DECL(expected_uint3,uint,32,2) [] = { 0x0, 0xffffffff };
+VECT_VAR_DECL(expected_uint4,uint,32,2) [] = { 0xffffffff, 0x0 };
+
+VECT_VAR_DECL(expected_nan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_mnan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_nan2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_inf,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_minf,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_inf2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_mzero,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+
+VECT_VAR_DECL(expected_p8,uint,8,8) [] = { 0x0, 0x0, 0x0, 0xff,
+                                           0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_q_p8,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                              0xff, 0x0, 0x0, 0x0,
+                                              0x0, 0x0, 0x0, 0x0,
+                                              0x0, 0x0, 0x0, 0x0 };
+
+void exec_vceq_p8(void)
+{
+  DECL_VARIABLE(vector, poly, 8, 8);
+  DECL_VARIABLE(vector, poly, 8, 16);
+
+  DECL_VARIABLE(vector2, poly, 8, 8);
+  DECL_VARIABLE(vector2, poly, 8, 16);
+
+  DECL_VARIABLE(vector_res, uint, 8, 8);
+  DECL_VARIABLE(vector_res, uint, 8, 16);
+
+  clean_results ();
+
+  VLOAD(vector, buffer, , poly, p, 8, 8);
+  VLOAD(vector, buffer, q, poly, p, 8, 16);
+
+  VDUP(vector2, , poly, p, 8, 8, 0xF3);
+  VDUP(vector2, q, poly, p, 8, 16, 0xF4);
+
+  TEST_VCOMP(INSN_NAME, , poly, p, uint, 8, 8);
+  TEST_VCOMP(INSN_NAME, q, poly, p, uint, 8, 16);
+
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_p8, "p8");
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_q_p8, "p8");
+}
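
As a standalone illustration of what the p8 path above exercises (not part of the
patch; the input values are made up, and it assumes a compiler with NEON support
and arm_neon.h), vceq_p8 compares two poly8x8_t vectors lane by lane and returns
a uint8x8_t mask:

  #include <arm_neon.h>
  #include <stdio.h>

  int main (void)
  {
    /* Same idea as exec_vceq_p8: compare a loaded vector against a vector
       with every lane set to 0xf3; only matching lanes become 0xff.  */
    uint8_t in[8] = { 0xf0, 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7 };
    poly8x8_t a = vreinterpret_p8_u8 (vld1_u8 (in));
    poly8x8_t b = vdup_n_p8 (0xf3);
    uint8x8_t res = vceq_p8 (a, b);

    uint8_t out[8];
    vst1_u8 (out, res);
    for (int i = 0; i < 8; i++)
      printf ("0x%x ", out[i]);
    printf ("\n");  /* Expect: 0x0 0x0 0x0 0xff 0x0 0x0 0x0 0x0.  */
    return 0;
  }
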
diff --git a/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vcge.c b/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vcge.c
new file mode 100644
index 0000000..236fd82
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vcge.c
@@ -0,0 +1,76 @@
+#define INSN_NAME vcge
+#define TEST_MSG "VCGE/VCGEQ"
+
+#include "cmp_op.inc"
+
+/* Expected results.  */
+VECT_VAR_DECL(expected,int,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                       0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,8) [] = { 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff, 0xff };
+VECT_VAR_DECL(expected,uint,16,4) [] = { 0x0, 0x0, 0xffff, 0xffff };
+VECT_VAR_DECL(expected,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected,uint,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                        0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,4) [] = { 0x33333333, 0x33333333,
+                                        0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,2) [] = { 0x3333333333333333,
+                                        0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0x0, 0x0,
+                                         0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected,uint,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0xffff, 0xffff };
+VECT_VAR_DECL(expected,uint,32,4) [] = { 0x0, 0x0, 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected,uint,64,2) [] = { 0x3333333333333333,
+                                         0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                         0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0x33333333, 0x33333333,
+                                           0x33333333, 0x33333333 };
+
+VECT_VAR_DECL(expected_uint,uint,8,8) [] = { 0x0, 0x0, 0x0, 0xff,
+                                             0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected_uint,uint,16,4) [] = { 0x0, 0x0, 0xffff, 0xffff };
+VECT_VAR_DECL(expected_uint,uint,32,2) [] = { 0x0, 0xffffffff };
+
+VECT_VAR_DECL(expected_q_uint,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                0xff, 0xff, 0xff, 0xff,
+                                                0xff, 0xff, 0xff, 0xff,
+                                                0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected_q_uint,uint,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0xffff, 0xffff };
+VECT_VAR_DECL(expected_q_uint,uint,32,4) [] = { 0x0, 0x0, 0xffffffff, 0xffffffff };
+
+VECT_VAR_DECL(expected_float,uint,32,2) [] = { 0x0, 0xffffffff };
+VECT_VAR_DECL(expected_q_float,uint,32,4) [] = { 0x0, 0x0, 0xffffffff, 0xffffffff };
+
+VECT_VAR_DECL(expected_uint2,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected_uint3,uint,32,2) [] = { 0x0, 0xffffffff };
+VECT_VAR_DECL(expected_uint4,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+
+VECT_VAR_DECL(expected_nan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_mnan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_nan2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_inf,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_minf,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected_inf2,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+
+VECT_VAR_DECL(expected_mzero,uint,32,2) [] = { 0xffffffff, 0xffffffff };
diff --git a/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vcgt.c b/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vcgt.c
new file mode 100644
index 0000000..23aaa01
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vcgt.c
@@ -0,0 +1,76 @@
+#define INSN_NAME vcgt
+#define TEST_MSG "VCGT/VCGTQ"
+
+#include "cmp_op.inc"
+
+/* Expected results.  */
+VECT_VAR_DECL(expected,int,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                       0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,8) [] = { 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff };
+VECT_VAR_DECL(expected,uint,16,4) [] = { 0x0, 0x0, 0x0, 0xffff };
+VECT_VAR_DECL(expected,uint,32,2) [] = { 0x0, 0xffffffff };
+VECT_VAR_DECL(expected,uint,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                        0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,4) [] = { 0x33333333, 0x33333333,
+                                        0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,2) [] = { 0x3333333333333333,
+                                        0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected,uint,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                         0x0, 0x0, 0x0, 0xffff };
+VECT_VAR_DECL(expected,uint,32,4) [] = { 0x0, 0x0, 0x0, 0xffffffff };
+VECT_VAR_DECL(expected,uint,64,2) [] = { 0x3333333333333333,
+                                         0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                         0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0x33333333, 0x33333333,
+                                           0x33333333, 0x33333333 };
+
+VECT_VAR_DECL(expected_uint,uint,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                             0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected_uint,uint,16,4) [] = { 0x0, 0x0, 0x0, 0xffff };
+VECT_VAR_DECL(expected_uint,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_q_uint,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0xff, 0xff, 0xff,
+                                                0xff, 0xff, 0xff, 0xff,
+                                                0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected_q_uint,uint,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0xffff };
+VECT_VAR_DECL(expected_q_uint,uint,32,4) [] = { 0x0, 0x0, 0x0, 0xffffffff };
+
+VECT_VAR_DECL(expected_float,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_q_float,uint,32,4) [] = { 0x0, 0x0, 0x0, 0xffffffff };
+
+VECT_VAR_DECL(expected_uint2,uint,32,2) [] = { 0x0, 0xffffffff };
+VECT_VAR_DECL(expected_uint3,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_uint4,uint,32,2) [] = { 0x0, 0xffffffff };
+
+VECT_VAR_DECL(expected_nan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_mnan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_nan2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_inf,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_minf,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected_inf2,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+
+VECT_VAR_DECL(expected_mzero,uint,32,2) [] = { 0x0, 0x0 };
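
The expected_nan/expected_mnan/expected_nan2 rows are all zero for every operator
because any ordered comparison involving NaN is false, while +/-inf behave like
ordinary ordered values (for vcge and vcgt, 1.0 >= -inf is true, hence the
all-ones expected_minf rows). A minimal standalone check of those two corner
cases (not part of the patch; sample values chosen here, NEON toolchain with
arm_neon.h assumed) could look like:

  #include <arm_neon.h>
  #include <math.h>
  #include <stdio.h>

  int main (void)
  {
    float32x2_t one  = vdup_n_f32 (1.0f);
    float32x2_t qnan = vdup_n_f32 (NAN);
    float32x2_t minf = vdup_n_f32 (-HUGE_VALF);

    /* Any comparison with NaN is false: the lane mask is 0.  */
    printf ("vcge(1, NaN)  lane0 = 0x%x\n",
            vget_lane_u32 (vcge_f32 (one, qnan), 0));
    /* 1.0 >= -inf is true: the lane mask is all-ones.  */
    printf ("vcge(1, -inf) lane0 = 0x%x\n",
            vget_lane_u32 (vcge_f32 (one, minf), 0));
    return 0;
  }
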
diff --git a/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vcle.c b/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vcle.c
new file mode 100644
index 0000000..e4cad0c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vcle.c
@@ -0,0 +1,80 @@
+#define INSN_NAME vcle
+#define TEST_MSG "VCLE/VCLEQ"
+
+#include "cmp_op.inc"
+
+/* Expected results.  */
+VECT_VAR_DECL(expected,int,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                       0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,8) [] = { 0xff, 0xff, 0xff, 0xff,
+                                        0xff, 0xff, 0xff, 0x0 };
+VECT_VAR_DECL(expected,uint,16,4) [] = { 0xffff, 0xffff, 0xffff, 0x0 };
+VECT_VAR_DECL(expected,uint,32,2) [] = { 0xffffffff, 0x0 };
+VECT_VAR_DECL(expected,uint,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                        0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,4) [] = { 0x33333333, 0x33333333,
+                                        0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,2) [] = { 0x3333333333333333,
+                                        0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,16) [] = { 0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,16,8) [] = { 0xffff, 0xffff, 0xffff, 0xffff,
+                                         0xffff, 0xffff, 0xffff, 0x0 };
+VECT_VAR_DECL(expected,uint,32,4) [] = { 0xffffffff, 0xffffffff,
+                                         0xffffffff, 0x0 };
+VECT_VAR_DECL(expected,uint,64,2) [] = { 0x3333333333333333,
+                                         0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                         0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0x33333333, 0x33333333,
+                                           0x33333333, 0x33333333 };
+
+VECT_VAR_DECL(expected_uint,uint,8,8) [] = { 0xff, 0xff, 0xff, 0xff,
+                                             0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_uint,uint,16,4) [] = { 0xffff, 0xffff, 0xffff, 0x0 };
+VECT_VAR_DECL(expected_uint,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+
+VECT_VAR_DECL(expected_q_uint,uint,8,16) [] = { 0xff, 0xff, 0xff, 0xff,
+                                                0xff, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_q_uint,uint,16,8) [] = { 0xffff, 0xffff, 0xffff, 0xffff,
+                                                0xffff, 0xffff, 0xffff, 0x0 };
+VECT_VAR_DECL(expected_q_uint,uint,32,4) [] = { 0xffffffff, 0xffffffff,
+                                                0xffffffff, 0x0 };
+
+VECT_VAR_DECL(expected_float,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected_q_float,uint,32,4) [] = { 0xffffffff, 0xffffffff,
+                                                 0xffffffff, 0x0 };
+
+VECT_VAR_DECL(expected_uint2,uint,32,2) [] = { 0xffffffff, 0x0 };
+VECT_VAR_DECL(expected_uint3,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected_uint4,uint,32,2) [] = { 0xffffffff, 0x0 };
+
+VECT_VAR_DECL(expected_nan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_mnan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_nan2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_inf,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected_minf,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_inf2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_mzero,uint,32,2) [] = { 0xffffffff, 0xffffffff };
diff --git a/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vclt.c b/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vclt.c
new file mode 100644
index 0000000..d437eae
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vclt.c
@@ -0,0 +1,79 @@
+#define INSN_NAME vclt
+#define TEST_MSG "VCLT/VCLTQ"
+
+#include "cmp_op.inc"
+
+/* Expected results.  */
+VECT_VAR_DECL(expected,int,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                       0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,8) [] = { 0xff, 0xff, 0xff, 0xff,
+                                        0xff, 0xff, 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,16,4) [] = { 0xffff, 0xffff, 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,64,1) [] = { 0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,int,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                        0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,int,32,4) [] = { 0x33333333, 0x33333333,
+                                        0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,64,2) [] = { 0x3333333333333333,
+                                        0x3333333333333333 };
+VECT_VAR_DECL(expected,uint,8,16) [] = { 0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0xff, 0xff, 0xff,
+                                         0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,16,8) [] = { 0xffff, 0xffff, 0xffff, 0xffff,
+                                         0xffff, 0xffff, 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,32,4) [] = { 0xffffffff, 0xffffffff, 0x0, 0x0 };
+VECT_VAR_DECL(expected,uint,64,2) [] = { 0x3333333333333333,
+                                         0x3333333333333333 };
+VECT_VAR_DECL(expected,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                         0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0x33333333, 0x33333333,
+                                           0x33333333, 0x33333333 };
+
+VECT_VAR_DECL(expected_uint,uint,8,8) [] = { 0xff, 0xff, 0xff, 0x0,
+                                             0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_uint,uint,16,4) [] = { 0xffff, 0xffff, 0x0, 0x0 };
+VECT_VAR_DECL(expected_uint,uint,32,2) [] = { 0xffffffff, 0x0 };
+
+VECT_VAR_DECL(expected_q_uint,uint,8,16) [] = { 0xff, 0xff, 0xff, 0xff,
+                                                0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_q_uint,uint,16,8) [] = { 0xffff, 0xffff, 0xffff, 0xffff,
+                                                0xffff, 0xffff, 0x0, 0x0 };
+VECT_VAR_DECL(expected_q_uint,uint,32,4) [] = { 0xffffffff, 0xffffffff,
+                                                0x0, 0x0 };
+
+VECT_VAR_DECL(expected_float,uint,32,2) [] = { 0xffffffff, 0x0 };
+VECT_VAR_DECL(expected_q_float,uint,32,4) [] = { 0xffffffff, 0xffffffff,
+                                                 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_uint2,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_uint3,uint,32,2) [] = { 0xffffffff, 0x0 };
+VECT_VAR_DECL(expected_uint4,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_nan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_mnan,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_nan2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_inf,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected_minf,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_inf2,uint,32,2) [] = { 0x0, 0x0 };
+
+VECT_VAR_DECL(expected_mzero,uint,32,2) [] = { 0x0, 0x0 };
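
On the expected_mzero rows: IEEE 754 treats -0.0 as equal to +0.0, so the
(-0.0, 0.0) corner case yields all-ones for vceq, vcge and vcle but zero for
vcgt and vclt, which is what the tables above encode. A small standalone
illustration (not part of the patch; sample code assuming a NEON-capable
compiler with arm_neon.h):

  #include <arm_neon.h>
  #include <stdio.h>

  int main (void)
  {
    float32x2_t mzero = vdup_n_f32 (-0.0f);
    float32x2_t zero  = vdup_n_f32 (0.0f);

    /* -0.0 compares equal to +0.0, so "<=" is true and "<" is false.  */
    printf ("vcle(-0, +0) lane0 = 0x%x\n",
            vget_lane_u32 (vcle_f32 (mzero, zero), 0));
    printf ("vclt(-0, +0) lane0 = 0x%x\n",
            vget_lane_u32 (vclt_f32 (mzero, zero), 0));
    return 0;
  }
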