From patchwork Tue Jul 1 10:06:12 2014
X-Patchwork-Submitter: Christophe Lyon
X-Patchwork-Id: 32884
From: Christophe Lyon <christophe.lyon@linaro.org>
To: gcc-patches@gcc.gnu.org
Subject: [Patch ARM-AArch64/testsuite v2 19/21] Add vld2_lane, vld3_lane and vld4_lane
Date: Tue, 1 Jul 2014 12:06:12 +0200
Message-Id: <1404209174-25364-20-git-send-email-christophe.lyon@linaro.org>
In-Reply-To: <1404209174-25364-1-git-send-email-christophe.lyon@linaro.org>
References: <1404209174-25364-1-git-send-email-christophe.lyon@linaro.org>

diff --git a/gcc/testsuite/ChangeLog b/gcc/testsuite/ChangeLog
index 2a359e0..b923c53 100644
--- a/gcc/testsuite/ChangeLog
+++ b/gcc/testsuite/ChangeLog
@@ -1,5 +1,9 @@
 2014-06-30  Christophe Lyon  <christophe.lyon@linaro.org>
 
+	* gcc.target/aarch64/neon-intrinsics/vldX_lane.c: New file.
+
+2014-06-30  Christophe Lyon  <christophe.lyon@linaro.org>
+
 	* gcc.target/aarch64/neon-intrinsics/vldX.c: New file.
 
 2014-06-30  Christophe Lyon  <christophe.lyon@linaro.org>
diff --git a/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vldX_lane.c b/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vldX_lane.c
new file mode 100644
index 0000000..8887c3e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/neon-intrinsics/vldX_lane.c
@@ -0,0 +1,679 @@
+#include <arm_neon.h>
+#include "arm-neon-ref.h"
+#include "compute-ref-data.h"
+
+/* Expected results.  */
+
+/* vld2/chunk 0.
*/ +VECT_VAR_DECL(expected_vld2_0,int,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld2_0,int,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld2_0,int,32,2) [] = { 0xfffffff0, 0xfffffff1 }; +VECT_VAR_DECL(expected_vld2_0,int,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld2_0,uint,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld2_0,uint,16,4) [] = { 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld2_0,uint,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld2_0,uint,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld2_0,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld2_0,poly,16,4) [] = { 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld2_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 }; +VECT_VAR_DECL(expected_vld2_0,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld2_0,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld2_0,int,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld2_0,int,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld2_0,uint,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld2_0,uint,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld2_0,uint,32,4) [] = { 0xfffffff0, 0xfffffff1, + 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld2_0,uint,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld2_0,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld2_0,poly,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld2_0,hfloat,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; + +/* vld2/chunk 1. 
*/ +VECT_VAR_DECL(expected_vld2_1,int,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xf0, 0xf1 }; +VECT_VAR_DECL(expected_vld2_1,int,16,4) [] = { 0xfff0, 0xfff1, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld2_1,int,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld2_1,int,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld2_1,uint,8,8) [] = { 0xf0, 0xf1, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld2_1,uint,16,4) [] = { 0xaaaa, 0xaaaa, 0xfff0, 0xfff1 }; +VECT_VAR_DECL(expected_vld2_1,uint,32,2) [] = { 0xfffffff0, 0xfffffff1 }; +VECT_VAR_DECL(expected_vld2_1,uint,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld2_1,poly,8,8) [] = { 0xf0, 0xf1, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld2_1,poly,16,4) [] = { 0xaaaa, 0xaaaa, 0xfff0, 0xfff1 }; +VECT_VAR_DECL(expected_vld2_1,hfloat,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld2_1,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld2_1,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xfff0, 0xfff1, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld2_1,int,32,4) [] = { 0xfffffff0, 0xfffffff1, + 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld2_1,int,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld2_1,uint,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld2_1,uint,16,8) [] = { 0xaaaa, 0xaaaa, 0xfff0, 0xfff1, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld2_1,uint,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld2_1,uint,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld2_1,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld2_1,poly,16,8) [] = { 0xaaaa, 0xaaaa, 0xfff0, 0xfff1, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld2_1,hfloat,32,4) [] = { 0xc1800000, 0xc1700000, + 0xaaaaaaaa, 0xaaaaaaaa }; + +/* vld3/chunk 0. 
*/ +VECT_VAR_DECL(expected_vld3_0,int,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld3_0,int,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld3_0,int,32,2) [] = { 0xfffffff0, 0xfffffff1 }; +VECT_VAR_DECL(expected_vld3_0,int,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld3_0,uint,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld3_0,uint,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld3_0,uint,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld3_0,uint,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld3_0,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld3_0,poly,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld3_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 }; +VECT_VAR_DECL(expected_vld3_0,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld3_0,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld3_0,int,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld3_0,int,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld3_0,uint,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld3_0,uint,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld3_0,uint,32,4) [] = { 0xfffffff0, 0xfffffff1, + 0xfffffff2, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld3_0,uint,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld3_0,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld3_0,poly,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld3_0,hfloat,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; + +/* vld3/chunk 1. 
*/ +VECT_VAR_DECL(expected_vld3_1,int,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld3_1,int,16,4) [] = { 0xaaaa, 0xaaaa, 0xfff0, 0xfff1 }; +VECT_VAR_DECL(expected_vld3_1,int,32,2) [] = { 0xfffffff2, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld3_1,int,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld3_1,uint,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xf0, 0xf1, 0xf2, 0xaa }; +VECT_VAR_DECL(expected_vld3_1,uint,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld3_1,uint,32,2) [] = { 0xaaaaaaaa, 0xfffffff0 }; +VECT_VAR_DECL(expected_vld3_1,uint,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld3_1,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xf0, 0xf1, 0xf2, 0xaa }; +VECT_VAR_DECL(expected_vld3_1,poly,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld3_1,hfloat,32,2) [] = { 0xc1600000, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld3_1,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld3_1,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld3_1,int,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xfffffff0, 0xfffffff1 }; +VECT_VAR_DECL(expected_vld3_1,int,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld3_1,uint,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld3_1,uint,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xfff0 }; +VECT_VAR_DECL(expected_vld3_1,uint,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld3_1,uint,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld3_1,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld3_1,poly,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xfff0 }; +VECT_VAR_DECL(expected_vld3_1,hfloat,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xc1800000, 0xc1700000 }; + +/* vld3/chunk 2. 
*/ +VECT_VAR_DECL(expected_vld3_2,int,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xf0, 0xf1, 0xf2 }; +VECT_VAR_DECL(expected_vld3_2,int,16,4) [] = { 0xfff2, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld3_2,int,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld3_2,int,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld3_2,uint,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld3_2,uint,16,4) [] = { 0xaaaa, 0xfff0, 0xfff1, 0xfff2 }; +VECT_VAR_DECL(expected_vld3_2,uint,32,2) [] = { 0xfffffff1, 0xfffffff2 }; +VECT_VAR_DECL(expected_vld3_2,uint,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld3_2,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld3_2,poly,16,4) [] = { 0xaaaa, 0xfff0, 0xfff1, 0xfff2 }; +VECT_VAR_DECL(expected_vld3_2,hfloat,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld3_2,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld3_2,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xfff0, 0xfff1, + 0xfff2, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld3_2,int,32,4) [] = { 0xfffffff2, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld3_2,int,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld3_2,uint,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld3_2,uint,16,8) [] = { 0xfff1, 0xfff2, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld3_2,uint,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld3_2,uint,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld3_2,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld3_2,poly,16,8) [] = { 0xfff1, 0xfff2, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld3_2,hfloat,32,4) [] = { 0xc1600000, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; + +/* vld4/chunk 0. 
*/ +VECT_VAR_DECL(expected_vld4_0,int,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld4_0,int,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_0,int,32,2) [] = { 0xfffffff0, 0xfffffff1 }; +VECT_VAR_DECL(expected_vld4_0,int,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_0,uint,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld4_0,uint,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_0,uint,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld4_0,uint,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_0,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld4_0,poly,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 }; +VECT_VAR_DECL(expected_vld4_0,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld4_0,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_0,int,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld4_0,int,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_0,uint,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld4_0,uint,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_0,uint,32,4) [] = { 0xfffffff0, 0xfffffff1, + 0xfffffff2, 0xfffffff3 }; +VECT_VAR_DECL(expected_vld4_0,uint,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_0,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld4_0,poly,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_0,hfloat,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; + +/* vld4/chunk 1. 
*/ +VECT_VAR_DECL(expected_vld4_1,int,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld4_1,int,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_1,int,32,2) [] = { 0xfffffff2, 0xfffffff3 }; +VECT_VAR_DECL(expected_vld4_1,int,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_1,uint,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld4_1,uint,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_1,uint,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld4_1,uint,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_1,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld4_1,poly,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_1,hfloat,32,2) [] = { 0xc1600000, 0xc1500000 }; +VECT_VAR_DECL(expected_vld4_1,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld4_1,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_1,int,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld4_1,int,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_1,uint,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld4_1,uint,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_1,uint,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld4_1,uint,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_1,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld4_1,poly,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_1,hfloat,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; + +/* vld4/chunk 2. 
*/ +VECT_VAR_DECL(expected_vld4_2,int,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld4_2,int,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 }; +VECT_VAR_DECL(expected_vld4_2,int,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld4_2,int,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_2,uint,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld4_2,uint,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_2,uint,32,2) [] = { 0xfffffff0, 0xfffffff1 }; +VECT_VAR_DECL(expected_vld4_2,uint,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_2,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld4_2,poly,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_2,hfloat,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld4_2,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld4_2,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_2,int,32,4) [] = { 0xfffffff0, 0xfffffff1, + 0xfffffff2, 0xfffffff3 }; +VECT_VAR_DECL(expected_vld4_2,int,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_2,uint,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld4_2,uint,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xfff0, 0xfff1, 0xfff2, 0xfff3 }; +VECT_VAR_DECL(expected_vld4_2,uint,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld4_2,uint,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_2,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld4_2,poly,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xfff0, 0xfff1, 0xfff2, 0xfff3 }; +VECT_VAR_DECL(expected_vld4_2,hfloat,32,4) [] = { 0xc1800000, 0xc1700000, + 0xc1600000, 0xc1500000 }; + +/* vld4/chunk 3. 
*/ +VECT_VAR_DECL(expected_vld4_3,int,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xf0, 0xf1, 0xf2, 0xf3 }; +VECT_VAR_DECL(expected_vld4_3,int,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_3,int,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld4_3,int,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_3,uint,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld4_3,uint,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 }; +VECT_VAR_DECL(expected_vld4_3,uint,32,2) [] = { 0xfffffff2, 0xfffffff3 }; +VECT_VAR_DECL(expected_vld4_3,uint,64,1) [] = { 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_3,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa, + 0xaa, 0xaa, 0xaa, 0xaa }; +VECT_VAR_DECL(expected_vld4_3,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 }; +VECT_VAR_DECL(expected_vld4_3,hfloat,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld4_3,int,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld4_3,int,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_3,int,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld4_3,int,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_3,uint,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld4_3,uint,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_3,uint,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; +VECT_VAR_DECL(expected_vld4_3,uint,64,2) [] = { 0x3333333333333333, + 0x3333333333333333 }; +VECT_VAR_DECL(expected_vld4_3,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33, + 0x33, 0x33, 0x33, 0x33 }; +VECT_VAR_DECL(expected_vld4_3,poly,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa, + 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa }; +VECT_VAR_DECL(expected_vld4_3,hfloat,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa, + 0xaaaaaaaa, 0xaaaaaaaa }; + +/* Declare additional input buffers as needed. 
*/
+/* Input buffers for vld2_lane */
+VECT_VAR_DECL_INIT(buffer_vld2_lane, int, 8, 2);
+VECT_VAR_DECL_INIT(buffer_vld2_lane, int, 16, 2);
+VECT_VAR_DECL_INIT(buffer_vld2_lane, int, 32, 2);
+VECT_VAR_DECL_INIT(buffer_vld2_lane, int, 64, 2);
+VECT_VAR_DECL_INIT(buffer_vld2_lane, uint, 8, 2);
+VECT_VAR_DECL_INIT(buffer_vld2_lane, uint, 16, 2);
+VECT_VAR_DECL_INIT(buffer_vld2_lane, uint, 32, 2);
+VECT_VAR_DECL_INIT(buffer_vld2_lane, uint, 64, 2);
+VECT_VAR_DECL_INIT(buffer_vld2_lane, poly, 8, 2);
+VECT_VAR_DECL_INIT(buffer_vld2_lane, poly, 16, 2);
+VECT_VAR_DECL_INIT(buffer_vld2_lane, float, 32, 2);
+#if __ARM_NEON_FP16_INTRINSICS
+VECT_VAR_DECL(buffer_vld2_lane, float, 16, 2) [] = {0xcc00 /* -16 */,
+						     0xcb80 /* -15 */};
+#endif
+
+/* Input buffers for vld3_lane */
+VECT_VAR_DECL_INIT(buffer_vld3_lane, int, 8, 3);
+VECT_VAR_DECL_INIT(buffer_vld3_lane, int, 16, 3);
+VECT_VAR_DECL_INIT(buffer_vld3_lane, int, 32, 3);
+VECT_VAR_DECL_INIT(buffer_vld3_lane, int, 64, 3);
+VECT_VAR_DECL_INIT(buffer_vld3_lane, uint, 8, 3);
+VECT_VAR_DECL_INIT(buffer_vld3_lane, uint, 16, 3);
+VECT_VAR_DECL_INIT(buffer_vld3_lane, uint, 32, 3);
+VECT_VAR_DECL_INIT(buffer_vld3_lane, uint, 64, 3);
+VECT_VAR_DECL_INIT(buffer_vld3_lane, poly, 8, 3);
+VECT_VAR_DECL_INIT(buffer_vld3_lane, poly, 16, 3);
+VECT_VAR_DECL_INIT(buffer_vld3_lane, float, 32, 3);
+#if __ARM_NEON_FP16_INTRINSICS
+VECT_VAR_DECL(buffer_vld3_lane, float, 16, 3) [] = {0xcc00 /* -16 */,
+						     0xcb80 /* -15 */,
+						     0xcb00 /* -14 */};
+#endif
+
+/* Input buffers for vld4_lane */
+VECT_VAR_DECL_INIT(buffer_vld4_lane, int, 8, 4);
+VECT_VAR_DECL_INIT(buffer_vld4_lane, int, 16, 4);
+VECT_VAR_DECL_INIT(buffer_vld4_lane, int, 32, 4);
+VECT_VAR_DECL_INIT(buffer_vld4_lane, int, 64, 4);
+VECT_VAR_DECL_INIT(buffer_vld4_lane, uint, 8, 4);
+VECT_VAR_DECL_INIT(buffer_vld4_lane, uint, 16, 4);
+VECT_VAR_DECL_INIT(buffer_vld4_lane, uint, 32, 4);
+VECT_VAR_DECL_INIT(buffer_vld4_lane, uint, 64, 4);
+VECT_VAR_DECL_INIT(buffer_vld4_lane, poly, 8, 4);
+VECT_VAR_DECL_INIT(buffer_vld4_lane, poly, 16, 4);
+VECT_VAR_DECL_INIT(buffer_vld4_lane, float, 32, 4);
+#if __ARM_NEON_FP16_INTRINSICS
+VECT_VAR_DECL(buffer_vld4_lane, float, 16, 4) [] = {0xcc00 /* -16 */,
+						     0xcb80 /* -15 */,
+						     0xcb00 /* -14 */,
+						     0xca80 /* -13 */};
+#endif
+
+void exec_vldX_lane (void)
+{
+  /* In this case, input variables are arrays of vectors.  */
+#define DECL_VLDX_LANE(T1, W, N, X) \
+  VECT_ARRAY_TYPE(T1, W, N, X) VECT_ARRAY_VAR(vector, T1, W, N, X); \
+  VECT_ARRAY_TYPE(T1, W, N, X) VECT_ARRAY_VAR(vector_src, T1, W, N, X); \
+  VECT_VAR_DECL(result_bis_##X, T1, W, N)[X * N]
+
+  /* We need to use a temporary result buffer (result_bis) because
+     the one used for the other tests is not large enough.  A subset
+     of the result data is moved from result_bis to result, and it is
+     this subset which is used to check the actual behaviour.  The
+     next macro allows us to move another chunk of data from
+     result_bis to result.  */
+  /* We also use an additional input buffer (buffer_src), which we
+     fill with 0xAA, and which is used to load a vector from which we
+     read a given lane.  */
+#define TEST_VLDX_LANE(Q, T1, T2, W, N, X, L) \
+  memset (VECT_VAR(buffer_src, T1, W, N), 0xAA, \
+	  sizeof(VECT_VAR(buffer_src, T1, W, N))); \
+ \
+  VECT_ARRAY_VAR(vector_src, T1, W, N, X) = \
+    vld##X##Q##_##T2##W(VECT_VAR(buffer_src, T1, W, N)); \
+ \
+  VECT_ARRAY_VAR(vector, T1, W, N, X) = \
+    /* Use dedicated init buffer, of size.
X */ \ + vld##X##Q##_lane_##T2##W(VECT_VAR(buffer_vld##X##_lane, T1, W, X), \ + VECT_ARRAY_VAR(vector_src, T1, W, N, X), \ + L); \ + vst##X##Q##_##T2##W(VECT_VAR(result_bis_##X, T1, W, N), \ + VECT_ARRAY_VAR(vector, T1, W, N, X)); \ + memcpy(VECT_VAR(result, T1, W, N), VECT_VAR(result_bis_##X, T1, W, N), \ + sizeof(VECT_VAR(result, T1, W, N))) + + /* Overwrite "result" with the contents of "result_bis"[Y]. */ +#define TEST_EXTRA_CHUNK(T1, W, N, X, Y) \ + memcpy(VECT_VAR(result, T1, W, N), \ + &(VECT_VAR(result_bis_##X, T1, W, N)[Y*N]), \ + sizeof(VECT_VAR(result, T1, W, N))); + + /* We need all variants in 64 bits, but there is no 64x2 variant. */ +#define DECL_ALL_VLDX_LANE(X) \ + DECL_VLDX_LANE(int, 8, 8, X); \ + DECL_VLDX_LANE(int, 16, 4, X); \ + DECL_VLDX_LANE(int, 32, 2, X); \ + DECL_VLDX_LANE(uint, 8, 8, X); \ + DECL_VLDX_LANE(uint, 16, 4, X); \ + DECL_VLDX_LANE(uint, 32, 2, X); \ + DECL_VLDX_LANE(poly, 8, 8, X); \ + DECL_VLDX_LANE(poly, 16, 4, X); \ + DECL_VLDX_LANE(int, 16, 8, X); \ + DECL_VLDX_LANE(int, 32, 4, X); \ + DECL_VLDX_LANE(uint, 16, 8, X); \ + DECL_VLDX_LANE(uint, 32, 4, X); \ + DECL_VLDX_LANE(poly, 16, 8, X); \ + DECL_VLDX_LANE(float, 32, 2, X); \ + DECL_VLDX_LANE(float, 32, 4, X) + +#if __ARM_NEON_FP16_INTRINSICS +#define DECL_ALL_VLDX_LANE_FP16(X) \ + DECL_VLDX_LANE(float, 16, 4, X); \ + DECL_VLDX_LANE(float, 16, 8, X) +#endif + + /* Add some padding to try to catch out of bound accesses. */ +#define ARRAY1(V, T, W, N) VECT_VAR_DECL(V,T,W,N)[1]={42} +#define DUMMY_ARRAY(V, T, W, N, L) \ + VECT_VAR_DECL(V,T,W,N)[N*L]={0}; \ + ARRAY1(V##_pad,T,W,N) + + /* Use the same lanes regardless of the size of the array (X), for + simplicity. */ +#define TEST_ALL_VLDX_LANE(X) \ + TEST_VLDX_LANE(, int, s, 8, 8, X, 7); \ + TEST_VLDX_LANE(, int, s, 16, 4, X, 2); \ + TEST_VLDX_LANE(, int, s, 32, 2, X, 0); \ + TEST_VLDX_LANE(, uint, u, 8, 8, X, 4); \ + TEST_VLDX_LANE(, uint, u, 16, 4, X, 3); \ + TEST_VLDX_LANE(, uint, u, 32, 2, X, 1); \ + TEST_VLDX_LANE(, poly, p, 8, 8, X, 4); \ + TEST_VLDX_LANE(, poly, p, 16, 4, X, 3); \ + TEST_VLDX_LANE(q, int, s, 16, 8, X, 6); \ + TEST_VLDX_LANE(q, int, s, 32, 4, X, 2); \ + TEST_VLDX_LANE(q, uint, u, 16, 8, X, 5); \ + TEST_VLDX_LANE(q, uint, u, 32, 4, X, 0); \ + TEST_VLDX_LANE(q, poly, p, 16, 8, X, 5); \ + TEST_VLDX_LANE(, float, f, 32, 2, X, 0); \ + TEST_VLDX_LANE(q, float, f, 32, 4, X, 2) + +#if __ARM_NEON_FP16_INTRINSICS +#define TEST_ALL_VLDX_LANE_FP16(X) \ + TEST_VLDX_LANE(, float, f, 16, 4, X, 0); \ + TEST_VLDX_LANE(q, float, f, 16, 8, X, 2) +#endif + +#define TEST_ALL_EXTRA_CHUNKS(X, Y) \ + TEST_EXTRA_CHUNK(int, 8, 8, X, Y); \ + TEST_EXTRA_CHUNK(int, 16, 4, X, Y); \ + TEST_EXTRA_CHUNK(int, 32, 2, X, Y); \ + TEST_EXTRA_CHUNK(uint, 8, 8, X, Y); \ + TEST_EXTRA_CHUNK(uint, 16, 4, X, Y); \ + TEST_EXTRA_CHUNK(uint, 32, 2, X, Y); \ + TEST_EXTRA_CHUNK(poly, 8, 8, X, Y); \ + TEST_EXTRA_CHUNK(poly, 16, 4, X, Y); \ + TEST_EXTRA_CHUNK(int, 16, 8, X, Y); \ + TEST_EXTRA_CHUNK(int, 32, 4, X, Y); \ + TEST_EXTRA_CHUNK(uint, 16, 8, X, Y); \ + TEST_EXTRA_CHUNK(uint, 32, 4, X, Y); \ + TEST_EXTRA_CHUNK(poly, 16, 8, X, Y); \ + TEST_EXTRA_CHUNK(float, 32, 2, X, Y); \ + TEST_EXTRA_CHUNK(float, 32, 4, X, Y) + +#if __ARM_NEON_FP16_INTRINSICS +#define TEST_ALL_EXTRA_CHUNKS_FP16(X, Y) \ + TEST_EXTRA_CHUNK(float, 16, 4, X, Y); \ + TEST_EXTRA_CHUNK(float, 16, 8, X, Y) +#endif + + /* Declare the temporary buffers / variables. 
*/ + DECL_ALL_VLDX_LANE(2); + DECL_ALL_VLDX_LANE(3); + DECL_ALL_VLDX_LANE(4); +#if __ARM_NEON_FP16_INTRINSICS + DECL_ALL_VLDX_LANE_FP16(2); + DECL_ALL_VLDX_LANE_FP16(3); + DECL_ALL_VLDX_LANE_FP16(4); +#endif + + /* Define dummy input arrays, large enough for x4 vectors. */ + DUMMY_ARRAY(buffer_src, int, 8, 8, 4); + DUMMY_ARRAY(buffer_src, int, 16, 4, 4); + DUMMY_ARRAY(buffer_src, int, 32, 2, 4); + DUMMY_ARRAY(buffer_src, uint, 8, 8, 4); + DUMMY_ARRAY(buffer_src, uint, 16, 4, 4); + DUMMY_ARRAY(buffer_src, uint, 32, 2, 4); + DUMMY_ARRAY(buffer_src, poly, 8, 8, 4); + DUMMY_ARRAY(buffer_src, poly, 16, 4, 4); + DUMMY_ARRAY(buffer_src, int, 16, 8, 4); + DUMMY_ARRAY(buffer_src, int, 32, 4, 4); + DUMMY_ARRAY(buffer_src, uint, 16, 8, 4); + DUMMY_ARRAY(buffer_src, uint, 32, 4, 4); + DUMMY_ARRAY(buffer_src, poly, 16, 8, 4); + DUMMY_ARRAY(buffer_src, float, 32, 2, 4); + DUMMY_ARRAY(buffer_src, float, 32, 4, 4); +#if __ARM_NEON_FP16_INTRINSICS + DUMMY_ARRAY(buffer_src, float, 16, 4, 4); + DUMMY_ARRAY(buffer_src, float, 16, 8, 4); +#endif + + /* Check vld2_lane/vld2q_lane. */ + clean_results (); +#define TEST_MSG "VLD2_LANE/VLD2Q_LANE" + TEST_ALL_VLDX_LANE(2); +#if __ARM_NEON_FP16_INTRINSICS + TEST_ALL_VLDX_LANE_FP16(2); +#endif + CHECK_RESULTS_NAMED (TEST_MSG, expected_vld2_0, " chunk 0"); + + TEST_ALL_EXTRA_CHUNKS(2, 1); +#if __ARM_NEON_FP16_INTRINSICS + TEST_ALL_EXTRA_CHUNKS_FP16(2, 1); +#endif + CHECK_RESULTS_NAMED (TEST_MSG, expected_vld2_1, " chunk 1"); + + /* Check vld3_lane/vld3q_lane. */ + clean_results (); +#undef TEST_MSG +#define TEST_MSG "VLD3_LANE/VLD3Q_LANE" + TEST_ALL_VLDX_LANE(3); +#if __ARM_NEON_FP16_INTRINSICS + TEST_ALL_VLDX_LANE_FP16(3); +#endif + CHECK_RESULTS_NAMED (TEST_MSG, expected_vld3_0, " chunk 0"); + + TEST_ALL_EXTRA_CHUNKS(3, 1); +#if __ARM_NEON_FP16_INTRINSICS + TEST_ALL_EXTRA_CHUNKS_FP16(3, 1); +#endif + CHECK_RESULTS_NAMED (TEST_MSG, expected_vld3_1, " chunk 1"); + + TEST_ALL_EXTRA_CHUNKS(3, 2); +#if __ARM_NEON_FP16_INTRINSICS + TEST_ALL_EXTRA_CHUNKS_FP16(3, 2); +#endif + CHECK_RESULTS_NAMED (TEST_MSG, expected_vld3_2, " chunk 2"); + + /* Check vld4_lane/vld4q_lane. */ + clean_results (); +#undef TEST_MSG +#define TEST_MSG "VLD4_LANE/VLD4Q_LANE" + TEST_ALL_VLDX_LANE(4); +#if __ARM_NEON_FP16_INTRINSICS + TEST_ALL_VLDX_LANE_FP16(4); +#endif + CHECK_RESULTS_NAMED (TEST_MSG, expected_vld4_0, " chunk 0"); + + TEST_ALL_EXTRA_CHUNKS(4, 1); +#if __ARM_NEON_FP16_INTRINSICS + TEST_ALL_EXTRA_CHUNKS_FP16(4, 1); +#endif + CHECK_RESULTS_NAMED (TEST_MSG, expected_vld4_1, " chunk 1"); + TEST_ALL_EXTRA_CHUNKS(4, 2); +#if __ARM_NEON_FP16_INTRINSICS + TEST_ALL_EXTRA_CHUNKS_FP16(4, 2); +#endif + + CHECK_RESULTS_NAMED (TEST_MSG, expected_vld4_2, " chunk 2"); + + TEST_ALL_EXTRA_CHUNKS(4, 3); +#if __ARM_NEON_FP16_INTRINSICS + TEST_ALL_EXTRA_CHUNKS_FP16(4, 3); +#endif + CHECK_RESULTS_NAMED (TEST_MSG, expected_vld4_3, " chunk 3"); +} + +int main (void) +{ + exec_vldX_lane (); + return 0; +}
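
---
Note for reviewers (not part of the patch): each TEST_VLDX_LANE instance above loads one lane of every vector of an X-tuple from the dedicated buffer_vldX_lane, while the remaining lanes keep the 0xAA pattern loaded from buffer_src; the vstX store then writes the whole tuple back so the result can be compared against the expected_vldX_Y arrays. The following minimal standalone sketch shows the same semantics for one case, vld2_lane_u32 with lane 1, without the arm-neon-ref.h harness; the variable names (mem, src, out) are made up for illustration. Build with NEON enabled (e.g. -mfpu=neon on 32-bit ARM; AdvSIMD is on by default for AArch64).

#include <arm_neon.h>
#include <stdio.h>

int main (void)
{
  /* Two consecutive elements to be loaded into lane 1 of each vector.  */
  uint32_t mem[2] = { 0xfffffff0u, 0xfffffff1u };

  /* Pre-existing tuple contents, mirroring the 0xAA fill of buffer_src.  */
  uint32x2x2_t src;
  src.val[0] = vdup_n_u32 (0xaaaaaaaau);
  src.val[1] = vdup_n_u32 (0xaaaaaaaau);

  /* Load mem[0] into lane 1 of src.val[0] and mem[1] into lane 1 of
     src.val[1]; lane 0 of both vectors keeps its previous value.  */
  uint32x2x2_t res = vld2_lane_u32 (mem, src, 1);

  /* vst2 stores the tuple interleaved: val[0][0], val[1][0], val[0][1],
     val[1][1].  */
  uint32_t out[4];
  vst2_u32 (out, res);

  printf ("%#x %#x %#x %#x\n",
	  (unsigned) out[0], (unsigned) out[1],
	  (unsigned) out[2], (unsigned) out[3]);
  /* Prints: 0xaaaaaaaa 0xaaaaaaaa 0xfffffff0 0xfffffff1 */
  return 0;
}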