From patchwork Tue Oct 21 13:02:59 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Christophe Lyon
X-Patchwork-Id: 39125
From: Christophe Lyon
To: gcc-patches@gcc.gnu.org
Subject: [Patch ARM-AArch64/testsuite v3 07/21] Add binary saturating operators: vqadd, vqsub.
Date: Tue, 21 Oct 2014 15:02:59 +0200
Message-Id: <1413896593-26607-8-git-send-email-christophe.lyon@linaro.org>
In-Reply-To: <1413896593-26607-1-git-send-email-christophe.lyon@linaro.org>
References: <1413896593-26607-1-git-send-email-christophe.lyon@linaro.org>

2014-10-21  Christophe Lyon  <christophe.lyon@linaro.org>

	* gcc.target/aarch64/advsimd-intrinsics/binary_sat_op.inc: New file.
	* gcc.target/aarch64/advsimd-intrinsics/vqadd.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vqsub.c: Likewise.

diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/binary_sat_op.inc b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/binary_sat_op.inc
new file mode 100644
index 0000000..35d7701
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/binary_sat_op.inc
@@ -0,0 +1,91 @@
+/* Template file for saturating binary operator validation.
+
+   This file is meant to be included by the relevant test files, which
+   have to define the intrinsic family to test.  If a given intrinsic
+   supports variants which are not supported by all the other
+   saturating binary operators, these can be tested by providing a
+   definition for EXTRA_TESTS.  */
+
+#include <arm_neon.h>
+#include "arm-neon-ref.h"
+#include "compute-ref-data.h"
+
+#define FNNAME1(NAME) exec_ ## NAME
+#define FNNAME(NAME) FNNAME1(NAME)
+
+void FNNAME (INSN_NAME) (void)
+{
+  /* vector_res = OP(vector1,vector2), then store the result.
+     */
+#define TEST_BINARY_SAT_OP1(INSN, Q, T1, T2, W, N, EXPECTED_CUMULATIVE_SAT, CMT) \
+  Set_Neon_Cumulative_Sat(0);                                           \
+  VECT_VAR(vector_res, T1, W, N) =                                      \
+    INSN##Q##_##T2##W(VECT_VAR(vector1, T1, W, N),                      \
+                      VECT_VAR(vector2, T1, W, N));                     \
+  vst1##Q##_##T2##W(VECT_VAR(result, T1, W, N),                         \
+                    VECT_VAR(vector_res, T1, W, N));                    \
+  CHECK_CUMULATIVE_SAT(TEST_MSG, T1, W, N, EXPECTED_CUMULATIVE_SAT, CMT)
+
+#define TEST_BINARY_SAT_OP(INSN, Q, T1, T2, W, N, EXPECTED_CUMULATIVE_SAT, CMT) \
+  TEST_BINARY_SAT_OP1(INSN, Q, T1, T2, W, N, EXPECTED_CUMULATIVE_SAT, CMT)
+
+  DECL_VARIABLE_ALL_VARIANTS(vector1);
+  DECL_VARIABLE_ALL_VARIANTS(vector2);
+  DECL_VARIABLE_ALL_VARIANTS(vector_res);
+
+  clean_results ();
+
+  /* Initialize input "vector1" from "buffer".  */
+  TEST_MACRO_ALL_VARIANTS_2_5(VLOAD, vector1, buffer);
+
+  /* Choose arbitrary initialization values.  */
+  VDUP(vector2, , int, s, 8, 8, 0x11);
+  VDUP(vector2, , int, s, 16, 4, 0x22);
+  VDUP(vector2, , int, s, 32, 2, 0x33);
+  VDUP(vector2, , int, s, 64, 1, 0x44);
+  VDUP(vector2, , uint, u, 8, 8, 0x55);
+  VDUP(vector2, , uint, u, 16, 4, 0x66);
+  VDUP(vector2, , uint, u, 32, 2, 0x77);
+  VDUP(vector2, , uint, u, 64, 1, 0x88);
+
+  VDUP(vector2, q, int, s, 8, 16, 0x11);
+  VDUP(vector2, q, int, s, 16, 8, 0x22);
+  VDUP(vector2, q, int, s, 32, 4, 0x33);
+  VDUP(vector2, q, int, s, 64, 2, 0x44);
+  VDUP(vector2, q, uint, u, 8, 16, 0x55);
+  VDUP(vector2, q, uint, u, 16, 8, 0x66);
+  VDUP(vector2, q, uint, u, 32, 4, 0x77);
+  VDUP(vector2, q, uint, u, 64, 2, 0x88);
+
+  /* Apply a saturating binary operator named INSN_NAME.
+     */
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 8, 8, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 16, 4, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 32, 2, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 64, 1, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 8, 8, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 16, 4, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 32, 2, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 64, 1, expected_cumulative_sat, "");
+
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 8, 16, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 16, 8, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 32, 4, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 64, 2, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 8, 16, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 16, 8, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 32, 4, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 64, 2, expected_cumulative_sat, "");
+
+  CHECK_RESULTS (TEST_MSG, "");
+
+#ifdef EXTRA_TESTS
+  EXTRA_TESTS();
+#endif
+}
+
+int main (void)
+{
+  FNNAME (INSN_NAME) ();
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vqadd.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vqadd.c
new file mode 100644
index 0000000..c07f5ff
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vqadd.c
@@ -0,0 +1,278 @@
+#define INSN_NAME vqadd
+#define TEST_MSG "VQADD/VQADDQ"
+
+/* Extra tests for special cases:
+   - some requiring intermediate types larger than 64 bits to
+     compute saturation flag.
+   - corner case saturations with types smaller than 64 bits.
+*/
+void vqadd_extras(void);
+#define EXTRA_TESTS vqadd_extras
+
+#include "binary_sat_op.inc"
+
+/* Expected values of cumulative_saturation flag.  */
+int VECT_VAR(expected_cumulative_sat,int,8,8) = 0;
+int VECT_VAR(expected_cumulative_sat,int,16,4) = 0;
+int VECT_VAR(expected_cumulative_sat,int,32,2) = 0;
+int VECT_VAR(expected_cumulative_sat,int,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,8,8) = 1;
+int VECT_VAR(expected_cumulative_sat,uint,16,4) = 1;
+int VECT_VAR(expected_cumulative_sat,uint,32,2) = 1;
+int VECT_VAR(expected_cumulative_sat,uint,64,1) = 1;
+int VECT_VAR(expected_cumulative_sat,int,8,16) = 0;
+int VECT_VAR(expected_cumulative_sat,int,16,8) = 0;
+int VECT_VAR(expected_cumulative_sat,int,32,4) = 0;
+int VECT_VAR(expected_cumulative_sat,int,64,2) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,8,16) = 1;
+int VECT_VAR(expected_cumulative_sat,uint,16,8) = 1;
+int VECT_VAR(expected_cumulative_sat,uint,32,4) = 1;
+int VECT_VAR(expected_cumulative_sat,uint,64,2) = 1;
+
+/* Expected results.
+   */
+VECT_VAR_DECL(expected,int,8,8) [] = { 0x1, 0x2, 0x3, 0x4,
+                                       0x5, 0x6, 0x7, 0x8 };
+VECT_VAR_DECL(expected,int,16,4) [] = { 0x12, 0x13, 0x14, 0x15 };
+VECT_VAR_DECL(expected,int,32,2) [] = { 0x23, 0x24 };
+VECT_VAR_DECL(expected,int,64,1) [] = { 0x34 };
+VECT_VAR_DECL(expected,uint,8,8) [] = { 0xff, 0xff, 0xff, 0xff,
+                                        0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected,uint,16,4) [] = { 0xffff, 0xffff, 0xffff, 0xffff };
+VECT_VAR_DECL(expected,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected,uint,64,1) [] = { 0xffffffffffffffff };
+VECT_VAR_DECL(expected,poly,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,8,16) [] = { 0x1, 0x2, 0x3, 0x4,
+                                        0x5, 0x6, 0x7, 0x8,
+                                        0x9, 0xa, 0xb, 0xc,
+                                        0xd, 0xe, 0xf, 0x10 };
+VECT_VAR_DECL(expected,int,16,8) [] = { 0x12, 0x13, 0x14, 0x15,
+                                        0x16, 0x17, 0x18, 0x19 };
+VECT_VAR_DECL(expected,int,32,4) [] = { 0x23, 0x24, 0x25, 0x26 };
+VECT_VAR_DECL(expected,int,64,2) [] = { 0x34, 0x35 };
+VECT_VAR_DECL(expected,uint,8,16) [] = { 0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected,uint,16,8) [] = { 0xffff, 0xffff, 0xffff, 0xffff,
+                                         0xffff, 0xffff, 0xffff, 0xffff };
+VECT_VAR_DECL(expected,uint,32,4) [] = { 0xffffffff, 0xffffffff,
+                                         0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected,uint,64,2) [] = { 0xffffffffffffffff,
+                                         0xffffffffffffffff };
+VECT_VAR_DECL(expected,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                         0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0x33333333, 0x33333333,
+                                           0x33333333, 0x33333333 };
+
+
+/* 64-bits types, with 0 as second input.
+   */
+int VECT_VAR(expected_cumulative_sat_64,int,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat_64,uint,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat_64,int,64,2) = 0;
+int VECT_VAR(expected_cumulative_sat_64,uint,64,2) = 0;
+VECT_VAR_DECL(expected_64,int,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected_64,uint,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected_64,int,64,2) [] = { 0xfffffffffffffff0,
+                                           0xfffffffffffffff1 };
+VECT_VAR_DECL(expected_64,uint,64,2) [] = { 0xfffffffffffffff0,
+                                            0xfffffffffffffff1 };
+
+/* 64-bits types, some cases causing cumulative saturation.  */
+int VECT_VAR(expected_cumulative_sat_64_2,int,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat_64_2,uint,64,1) = 1;
+int VECT_VAR(expected_cumulative_sat_64_2,int,64,2) = 0;
+int VECT_VAR(expected_cumulative_sat_64_2,uint,64,2) = 1;
+VECT_VAR_DECL(expected_64_2,int,64,1) [] = { 0x34 };
+VECT_VAR_DECL(expected_64_2,uint,64,1) [] = { 0xffffffffffffffff };
+VECT_VAR_DECL(expected_64_2,int,64,2) [] = { 0x34, 0x35 };
+VECT_VAR_DECL(expected_64_2,uint,64,2) [] = { 0xffffffffffffffff,
+                                              0xffffffffffffffff };
+
+/* 64-bits types, all causing cumulative saturation.  */
+int VECT_VAR(expected_cumulative_sat_64_3,int,64,1) = 1;
+int VECT_VAR(expected_cumulative_sat_64_3,uint,64,1) = 1;
+int VECT_VAR(expected_cumulative_sat_64_3,int,64,2) = 1;
+int VECT_VAR(expected_cumulative_sat_64_3,uint,64,2) = 1;
+VECT_VAR_DECL(expected_64_3,int,64,1) [] = { 0x8000000000000000 };
+VECT_VAR_DECL(expected_64_3,uint,64,1) [] = { 0xffffffffffffffff };
+VECT_VAR_DECL(expected_64_3,int,64,2) [] = { 0x7fffffffffffffff,
+                                             0x7fffffffffffffff };
+VECT_VAR_DECL(expected_64_3,uint,64,2) [] = { 0xffffffffffffffff,
+                                              0xffffffffffffffff };
+
+/* smaller types, corner cases causing cumulative saturation.
+   (1)  */
+int VECT_VAR(expected_csat_lt_64_1,int,8,8) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,16,4) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,32,2) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,8,16) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,16,8) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,32,4) = 1;
+VECT_VAR_DECL(expected_lt_64_1,int,8,8) [] = { 0x80, 0x80, 0x80, 0x80,
+                                               0x80, 0x80, 0x80, 0x80 };
+VECT_VAR_DECL(expected_lt_64_1,int,16,4) [] = { 0x8000, 0x8000,
+                                                0x8000, 0x8000 };
+VECT_VAR_DECL(expected_lt_64_1,int,32,2) [] = { 0x80000000, 0x80000000 };
+VECT_VAR_DECL(expected_lt_64_1,int,8,16) [] = { 0x80, 0x80, 0x80, 0x80,
+                                                0x80, 0x80, 0x80, 0x80,
+                                                0x80, 0x80, 0x80, 0x80,
+                                                0x80, 0x80, 0x80, 0x80 };
+VECT_VAR_DECL(expected_lt_64_1,int,16,8) [] = { 0x8000, 0x8000,
+                                                0x8000, 0x8000,
+                                                0x8000, 0x8000,
+                                                0x8000, 0x8000 };
+VECT_VAR_DECL(expected_lt_64_1,int,32,4) [] = { 0x80000000, 0x80000000,
+                                                0x80000000, 0x80000000 };
+
+/* smaller types, corner cases causing cumulative saturation.
+   (2)  */
+int VECT_VAR(expected_csat_lt_64_2,uint,8,8) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,16,4) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,32,2) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,8,16) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,16,8) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,32,4) = 1;
+VECT_VAR_DECL(expected_lt_64_2,uint,8,8) [] = { 0xff, 0xff, 0xff, 0xff,
+                                                0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected_lt_64_2,uint,16,4) [] = { 0xffff, 0xffff,
+                                                 0xffff, 0xffff };
+VECT_VAR_DECL(expected_lt_64_2,uint,32,2) [] = { 0xffffffff,
+                                                 0xffffffff };
+VECT_VAR_DECL(expected_lt_64_2,uint,8,16) [] = { 0xff, 0xff, 0xff, 0xff,
+                                                 0xff, 0xff, 0xff, 0xff,
+                                                 0xff, 0xff, 0xff, 0xff,
+                                                 0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected_lt_64_2,uint,16,8) [] = { 0xffff, 0xffff,
+                                                 0xffff, 0xffff,
+                                                 0xffff, 0xffff,
+                                                 0xffff, 0xffff };
+VECT_VAR_DECL(expected_lt_64_2,uint,32,4) [] = { 0xffffffff, 0xffffffff,
+                                                 0xffffffff, 0xffffffff };
+
+void vqadd_extras(void)
+{
+  DECL_VARIABLE_ALL_VARIANTS(vector1);
+  DECL_VARIABLE_ALL_VARIANTS(vector2);
+  DECL_VARIABLE_ALL_VARIANTS(vector_res);
+
+  /* Initialize input "vector1" from "buffer".  */
+  TEST_MACRO_ALL_VARIANTS_2_5(VLOAD, vector1, buffer);
+
+  /* Use a second vector full of 0.
+     */
+  VDUP(vector2, , int, s, 64, 1, 0);
+  VDUP(vector2, , uint, u, 64, 1, 0);
+  VDUP(vector2, q, int, s, 64, 2, 0);
+  VDUP(vector2, q, uint, u, 64, 2, 0);
+
+#define MSG "64 bits saturation adding zero"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 64, 1, expected_cumulative_sat_64, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 64, 1, expected_cumulative_sat_64, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 64, 2, expected_cumulative_sat_64, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 64, 2, expected_cumulative_sat_64, MSG);
+
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_64, MSG);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_64, MSG);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_64, MSG);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_64, MSG);
+
+  /* Another set of tests with non-zero values, some chosen to create
+     overflow.  */
+  VDUP(vector2, , int, s, 64, 1, 0x44);
+  VDUP(vector2, , uint, u, 64, 1, 0x88);
+  VDUP(vector2, q, int, s, 64, 2, 0x44);
+  VDUP(vector2, q, uint, u, 64, 2, 0x88);
+
+#undef MSG
+#define MSG "64 bits saturation cumulative_sat (2)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 64, 1, expected_cumulative_sat_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 64, 1, expected_cumulative_sat_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 64, 2, expected_cumulative_sat_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 64, 2, expected_cumulative_sat_64_2, MSG);
+
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_64_2, MSG);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_64_2, MSG);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_64_2, MSG);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_64_2, MSG);
+
+  /* Another set of tests, with input values chosen to set
+     cumulative_sat in all cases.  */
+  VDUP(vector2, , int, s, 64, 1, 0x8000000000000003LL);
+  VDUP(vector2, , uint, u, 64, 1, 0x88);
+  /* To check positive saturation, we need to write a positive value
+     in vector1.
+     */
+  VDUP(vector1, q, int, s, 64, 2, 0x4000000000000000LL);
+  VDUP(vector2, q, int, s, 64, 2, 0x4000000000000000LL);
+  VDUP(vector2, q, uint, u, 64, 2, 0x22);
+
+#undef MSG
+#define MSG "64 bits saturation cumulative_sat (3)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 64, 1, expected_cumulative_sat_64_3, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 64, 1, expected_cumulative_sat_64_3, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 64, 2, expected_cumulative_sat_64_3, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 64, 2, expected_cumulative_sat_64_3, MSG);
+
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_64_3, MSG);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_64_3, MSG);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_64_3, MSG);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_64_3, MSG);
+
+  /* To improve coverage, check saturation with less than 64 bits
+     too.  */
+  VDUP(vector2, , int, s, 8, 8, 0x81);
+  VDUP(vector2, , int, s, 16, 4, 0x8001);
+  VDUP(vector2, , int, s, 32, 2, 0x80000001);
+  VDUP(vector2, q, int, s, 8, 16, 0x81);
+  VDUP(vector2, q, int, s, 16, 8, 0x8001);
+  VDUP(vector2, q, int, s, 32, 4, 0x80000001);
+
+#undef MSG
+#define MSG "less than 64 bits saturation cumulative_sat (1)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 8, 8, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 16, 4, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 32, 2, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 8, 16, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 16, 8, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 32, 4, expected_csat_lt_64_1, MSG);
+
+  CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 16, 4, PRIx16, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 32, 2, PRIx32, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 8, 16, PRIx8, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_lt_64_1,
+        MSG);
+  CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_lt_64_1, MSG);
+
+  /* Another set of tests with large vector1 values.  */
+  VDUP(vector1, , uint, u, 8, 8, 0xF0);
+  VDUP(vector1, , uint, u, 16, 4, 0xFFF0);
+  VDUP(vector1, , uint, u, 32, 2, 0xFFFFFFF0);
+  VDUP(vector1, q, uint, u, 8, 16, 0xF0);
+  VDUP(vector1, q, uint, u, 16, 8, 0xFFF0);
+  VDUP(vector1, q, uint, u, 32, 4, 0xFFFFFFF0);
+
+  VDUP(vector2, , uint, u, 8, 8, 0x20);
+  VDUP(vector2, , uint, u, 16, 4, 0x20);
+  VDUP(vector2, , uint, u, 32, 2, 0x20);
+  VDUP(vector2, q, uint, u, 8, 16, 0x20);
+  VDUP(vector2, q, uint, u, 16, 8, 0x20);
+  VDUP(vector2, q, uint, u, 32, 4, 0x20);
+
+#undef MSG
+#define MSG "less than 64 bits saturation cumulative_sat (2)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 8, 8, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 16, 4, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 32, 2, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 8, 16, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 16, 8, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 32, 4, expected_csat_lt_64_2, MSG);
+
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected_lt_64_2, MSG);
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vqsub.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vqsub.c
new file mode 100644
index 0000000..04df5fe
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vqsub.c
@@ -0,0 +1,278 @@
+#define INSN_NAME vqsub
+#define TEST_MSG "VQSUB/VQSUBQ"
+
+/* Extra tests for special cases:
+   - some requiring intermediate types larger than 64
+     bits to compute saturation flag.
+   - corner case saturations with types smaller than 64 bits.
+*/
+void vqsub_extras(void);
+#define EXTRA_TESTS vqsub_extras
+
+#include "binary_sat_op.inc"
+
+
+/* Expected results.  */
+VECT_VAR_DECL(expected,int,8,8) [] = { 0xdf, 0xe0, 0xe1, 0xe2,
+                                       0xe3, 0xe4, 0xe5, 0xe6 };
+VECT_VAR_DECL(expected,int,16,4) [] = { 0xffce, 0xffcf,
+                                        0xffd0, 0xffd1 };
+VECT_VAR_DECL(expected,int,32,2) [] = { 0xffffffbd, 0xffffffbe };
+VECT_VAR_DECL(expected,int,64,1) [] = { 0xffffffffffffffac };
+VECT_VAR_DECL(expected,uint,8,8) [] = { 0x9b, 0x9c, 0x9d, 0x9e,
+                                        0x9f, 0xa0, 0xa1, 0xa2 };
+VECT_VAR_DECL(expected,uint,16,4) [] = { 0xff8a, 0xff8b,
+                                         0xff8c, 0xff8d };
+VECT_VAR_DECL(expected,uint,32,2) [] = { 0xffffff79, 0xffffff7a };
+VECT_VAR_DECL(expected,uint,64,1) [] = { 0xffffffffffffff68 };
+VECT_VAR_DECL(expected,poly,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,8,16) [] = { 0xdf, 0xe0, 0xe1, 0xe2,
+                                        0xe3, 0xe4, 0xe5, 0xe6,
+                                        0xe7, 0xe8, 0xe9, 0xea,
+                                        0xeb, 0xec, 0xed, 0xee };
+VECT_VAR_DECL(expected,int,16,8) [] = { 0xffce, 0xffcf, 0xffd0, 0xffd1,
+                                        0xffd2, 0xffd3, 0xffd4, 0xffd5 };
+VECT_VAR_DECL(expected,int,32,4) [] = { 0xffffffbd, 0xffffffbe,
+                                        0xffffffbf, 0xffffffc0 };
+VECT_VAR_DECL(expected,int,64,2) [] = { 0xffffffffffffffac,
+                                        0xffffffffffffffad };
+VECT_VAR_DECL(expected,uint,8,16) [] = { 0x9b, 0x9c, 0x9d, 0x9e,
+                                         0x9f, 0xa0, 0xa1, 0xa2,
+                                         0xa3, 0xa4, 0xa5, 0xa6,
+                                         0xa7, 0xa8, 0xa9, 0xaa };
+VECT_VAR_DECL(expected,uint,16,8) [] = { 0xff8a, 0xff8b, 0xff8c, 0xff8d,
+                                         0xff8e, 0xff8f, 0xff90, 0xff91 };
+VECT_VAR_DECL(expected,uint,32,4) [] = { 0xffffff79, 0xffffff7a,
+                                         0xffffff7b, 0xffffff7c };
+VECT_VAR_DECL(expected,uint,64,2) [] = { 0xffffffffffffff68,
+                                         0xffffffffffffff69 };
+VECT_VAR_DECL(expected,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                         0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0x33333333, 0x33333333,
+                                           0x33333333, 0x33333333 };
+
+/* Expected values of cumulative saturation flag.  */
+int VECT_VAR(expected_cumulative_sat,int,8,8) = 0;
+int VECT_VAR(expected_cumulative_sat,int,16,4) = 0;
+int VECT_VAR(expected_cumulative_sat,int,32,2) = 0;
+int VECT_VAR(expected_cumulative_sat,int,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,8,8) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,16,4) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,32,2) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat,int,8,16) = 0;
+int VECT_VAR(expected_cumulative_sat,int,16,8) = 0;
+int VECT_VAR(expected_cumulative_sat,int,32,4) = 0;
+int VECT_VAR(expected_cumulative_sat,int,64,2) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,8,16) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,16,8) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,32,4) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,64,2) = 0;
+
+/* 64-bits types, with 0 as second input.  */
+VECT_VAR_DECL(expected_64,int,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected_64,uint,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected_64,int,64,2) [] = { 0xfffffffffffffff0,
+                                           0xfffffffffffffff1 };
+VECT_VAR_DECL(expected_64,uint,64,2) [] = { 0xfffffffffffffff0,
+                                            0xfffffffffffffff1 };
+int VECT_VAR(expected_cumulative_sat_64,int,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat_64,uint,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat_64,int,64,2) = 0;
+int VECT_VAR(expected_cumulative_sat_64,uint,64,2) = 0;
+
+/* 64-bits types, other cases.
+   */
+VECT_VAR_DECL(expected_64_2,int,64,1) [] = { 0xffffffffffffffac };
+VECT_VAR_DECL(expected_64_2,uint,64,1) [] = { 0xffffffffffffff68 };
+VECT_VAR_DECL(expected_64_2,int,64,2) [] = { 0xffffffffffffffac,
+                                             0xffffffffffffffad };
+VECT_VAR_DECL(expected_64_2,uint,64,2) [] = { 0xffffffffffffff68,
+                                              0xffffffffffffff69 };
+int VECT_VAR(expected_cumulative_sat_64_2,int,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat_64_2,uint,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat_64_2,int,64,2) = 0;
+int VECT_VAR(expected_cumulative_sat_64_2,uint,64,2) = 0;
+
+/* 64-bits types, all causing cumulative saturation.  */
+VECT_VAR_DECL(expected_64_3,int,64,1) [] = { 0x8000000000000000 };
+VECT_VAR_DECL(expected_64_3,uint,64,1) [] = { 0x0 };
+VECT_VAR_DECL(expected_64_3,int,64,2) [] = { 0x7fffffffffffffff,
+                                             0x7fffffffffffffff };
+VECT_VAR_DECL(expected_64_3,uint,64,2) [] = { 0x0, 0x0 };
+int VECT_VAR(expected_cumulative_sat_64_3,int,64,1) = 1;
+int VECT_VAR(expected_cumulative_sat_64_3,uint,64,1) = 1;
+int VECT_VAR(expected_cumulative_sat_64_3,int,64,2) = 1;
+int VECT_VAR(expected_cumulative_sat_64_3,uint,64,2) = 1;
+
+/* smaller types, corner cases causing cumulative saturation.
+   (1)  */
+VECT_VAR_DECL(expected_lt_64_1,int,8,8) [] = { 0x80, 0x80, 0x80, 0x80,
+                                               0x80, 0x80, 0x80, 0x80 };
+VECT_VAR_DECL(expected_lt_64_1,int,16,4) [] = { 0x8000, 0x8000,
+                                                0x8000, 0x8000 };
+VECT_VAR_DECL(expected_lt_64_1,int,32,2) [] = { 0x80000000, 0x80000000 };
+VECT_VAR_DECL(expected_lt_64_1,int,8,16) [] = { 0x80, 0x80, 0x80, 0x80,
+                                                0x80, 0x80, 0x80, 0x80,
+                                                0x80, 0x80, 0x80, 0x80,
+                                                0x80, 0x80, 0x80, 0x80 };
+VECT_VAR_DECL(expected_lt_64_1,int,16,8) [] = { 0x8000, 0x8000,
+                                                0x8000, 0x8000,
+                                                0x8000, 0x8000,
+                                                0x8000, 0x8000 };
+VECT_VAR_DECL(expected_lt_64_1,int,32,4) [] = { 0x80000000, 0x80000000,
+                                                0x80000000, 0x80000000 };
+int VECT_VAR(expected_csat_lt_64_1,int,8,8) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,16,4) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,32,2) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,8,16) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,16,8) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,32,4) = 1;
+
+/* smaller types, corner cases causing cumulative saturation.
+   (2)  */
+VECT_VAR_DECL(expected_lt_64_2,uint,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_lt_64_2,uint,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_lt_64_2,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_lt_64_2,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                 0x0, 0x0, 0x0, 0x0,
+                                                 0x0, 0x0, 0x0, 0x0,
+                                                 0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_lt_64_2,uint,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                 0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_lt_64_2,uint,32,4) [] = { 0x0, 0x0, 0x0, 0x0 };
+int VECT_VAR(expected_csat_lt_64_2,uint,8,8) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,16,4) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,32,2) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,8,16) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,16,8) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,32,4) = 1;
+
+void vqsub_extras(void)
+{
+  DECL_VARIABLE_ALL_VARIANTS(vector1);
+  DECL_VARIABLE_ALL_VARIANTS(vector2);
+  DECL_VARIABLE_ALL_VARIANTS(vector_res);
+
+  /* Initialize input "vector1" from "buffer".  */
+  TEST_MACRO_ALL_VARIANTS_2_5(VLOAD, vector1, buffer);
+
+  /* Use a second vector full of 0.  */
+  VDUP(vector2, , int, s, 64, 1, 0x0);
+  VDUP(vector2, , uint, u, 64, 1, 0x0);
+  VDUP(vector2, q, int, s, 64, 2, 0x0);
+  VDUP(vector2, q, uint, u, 64, 2, 0x0);
+
+#define MSG "64 bits saturation when adding zero"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 64, 1, expected_cumulative_sat_64, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 64, 1, expected_cumulative_sat_64, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 64, 2, expected_cumulative_sat_64, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 64, 2, expected_cumulative_sat_64, MSG);
+
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_64, MSG);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_64, MSG);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_64, MSG);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_64, MSG);
+
+  /* Another set of tests with non-zero values.
+     */
+  VDUP(vector2, , int, s, 64, 1, 0x44);
+  VDUP(vector2, , uint, u, 64, 1, 0x88);
+  VDUP(vector2, q, int, s, 64, 2, 0x44);
+  VDUP(vector2, q, uint, u, 64, 2, 0x88);
+
+#undef MSG
+#define MSG "64 bits saturation cumulative_sat (2)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 64, 1, expected_cumulative_sat_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 64, 1, expected_cumulative_sat_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 64, 2, expected_cumulative_sat_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 64, 2, expected_cumulative_sat_64_2, MSG);
+
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_64_2, MSG);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_64_2, MSG);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_64_2, MSG);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_64_2, MSG);
+
+  /* Another set of tests, with input values chosen to set
+     cumulative_sat in all cases.  */
+  VDUP(vector2, , int, s, 64, 1, 0x7fffffffffffffffLL);
+  VDUP(vector2, , uint, u, 64, 1, 0xffffffffffffffffULL);
+  /* To check positive saturation, we need to write a positive value
+     in vector1.
+     */
+  VDUP(vector1, q, int, s, 64, 2, 0x3fffffffffffffffLL);
+  VDUP(vector2, q, int, s, 64, 2, 0x8000000000000000LL);
+  VDUP(vector2, q, uint, u, 64, 2, 0xffffffffffffffffULL);
+
+#undef MSG
+#define MSG "64 bits saturation cumulative_sat (3)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 64, 1, expected_cumulative_sat_64_3, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 64, 1, expected_cumulative_sat_64_3, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 64, 2, expected_cumulative_sat_64_3, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 64, 2, expected_cumulative_sat_64_3, MSG);
+
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_64_3, MSG);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_64_3, MSG);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_64_3, MSG);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_64_3, MSG);
+
+  /* To improve coverage, check saturation with less than 64 bits
+     too.  */
+  VDUP(vector2, , int, s, 8, 8, 0x7F);
+  VDUP(vector2, , int, s, 16, 4, 0x7FFF);
+  VDUP(vector2, , int, s, 32, 2, 0x7FFFFFFF);
+  VDUP(vector2, q, int, s, 8, 16, 0x7F);
+  VDUP(vector2, q, int, s, 16, 8, 0x7FFF);
+  VDUP(vector2, q, int, s, 32, 4, 0x7FFFFFFF);
+
+#undef MSG
+#define MSG "less than 64 bits saturation cumulative_sat (1)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 8, 8, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 16, 4, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 32, 2, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 8, 16, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 16, 8, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 32, 4, expected_csat_lt_64_1, MSG);
+
+  CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 16, 4, PRIx16, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 32, 2, PRIx32, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 8, 16, PRIx8, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 16, 8, PRIx16,
+        expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_lt_64_1, MSG);
+
+  /* Another set of tests with vector1 values smaller than
+     vector2.  */
+  VDUP(vector1, , uint, u, 8, 8, 0x10);
+  VDUP(vector1, , uint, u, 16, 4, 0x10);
+  VDUP(vector1, , uint, u, 32, 2, 0x10);
+  VDUP(vector1, q, uint, u, 8, 16, 0x10);
+  VDUP(vector1, q, uint, u, 16, 8, 0x10);
+  VDUP(vector1, q, uint, u, 32, 4, 0x10);
+
+  VDUP(vector2, , uint, u, 8, 8, 0x20);
+  VDUP(vector2, , uint, u, 16, 4, 0x20);
+  VDUP(vector2, , uint, u, 32, 2, 0x20);
+  VDUP(vector2, q, uint, u, 8, 16, 0x20);
+  VDUP(vector2, q, uint, u, 16, 8, 0x20);
+  VDUP(vector2, q, uint, u, 32, 4, 0x20);
+
+#undef MSG
+#define MSG "less than 64 bits saturation cumulative_sat (2)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 8, 8, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 16, 4, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 32, 2, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 8, 16, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 16, 8, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 32, 4, expected_csat_lt_64_2, MSG);
+
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected_lt_64_2, MSG);
+}