From patchwork Wed Jul 13 17:36:54 2022
X-Patchwork-Submitter: Adhemerval Zanella Netto
X-Patchwork-Id: 590072
To: libc-alpha@sourceware.org, Florian Weimer
Subject: [PATCH v9 6/9] x86: Add AVX2 optimized chacha20
Date: Wed, 13 Jul 2022 14:36:54 -0300
Message-Id: <20220713173657.516725-7-adhemerval.zanella@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220713173657.516725-1-adhemerval.zanella@linaro.org>
References: <20220713173657.516725-1-adhemerval.zanella@linaro.org>
From: Adhemerval Zanella Netto

This patch adds a vectorized ChaCha20 implementation based on libgcrypt
cipher/chacha20-amd64-avx2.S.  It is used only if AVX2 is supported and
enabled by the architecture.

As in the generic implementation, the last step that XORs the keystream
with the input is omitted.  The final state register clearing is also
omitted.

On a Ryzen 9 5900X it shows the following improvements (using formatted
bench-arc4random data):

SSE                                        MB/s
-----------------------------------------------
arc4random [single-thread]               704.25
arc4random_buf(16) [single-thread]      1018.17
arc4random_buf(32) [single-thread]      1315.27
arc4random_buf(48) [single-thread]      1449.36
arc4random_buf(64) [single-thread]      1511.16
arc4random_buf(80) [single-thread]      1539.48
arc4random_buf(96) [single-thread]      1571.06
arc4random_buf(112) [single-thread]     1596.16
arc4random_buf(128) [single-thread]     1613.48
-----------------------------------------------

AVX2                                       MB/s
-----------------------------------------------
arc4random [single-thread]               922.61
arc4random_buf(16) [single-thread]      1478.70
arc4random_buf(32) [single-thread]      2241.80
arc4random_buf(48) [single-thread]      2681.28
arc4random_buf(64) [single-thread]      2913.43
arc4random_buf(80) [single-thread]      3009.73
arc4random_buf(96) [single-thread]      3141.16
arc4random_buf(112) [single-thread]     3254.46
arc4random_buf(128) [single-thread]     3305.02
-----------------------------------------------

Checked on x86_64-linux-gnu.
---
 LICENSES                             |   5 +-
 sysdeps/x86_64/Makefile              |   1 +
 sysdeps/x86_64/chacha20-amd64-avx2.S | 328 +++++++++++++++++++++++++++
 sysdeps/x86_64/chacha20_arch.h       |  22 +-
 4 files changed, 348 insertions(+), 8 deletions(-)
 create mode 100644 sysdeps/x86_64/chacha20-amd64-avx2.S
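For reference, the scalar computation that each of the eight AVX2 lanes
performs is sketched below.  This is an illustrative aside, not part of
the patch: the names ROTL32, QR and chacha20_block_keystream are made up
here, and glibc's real entry point is __chacha20_avx2_blocks8.  It also
shows concretely what "the last step that XORs with the input is omitted"
means: arc4random only needs raw keystream, so the block output is stored
directly instead of being XORed into a source buffer.

  /* Hypothetical scalar model of one ChaCha20 block (keystream only).  */
  #include <stdint.h>
  #include <string.h>

  #define ROTL32(v, c) (((v) << (c)) | ((v) >> (32 - (c))))

  /* One quarter round; QUARTERROUND2 in chacha20-amd64-avx2.S runs two of
     these at a time, each lane of a %ymm register holding the same state
     word from a different block.  */
  #define QR(a, b, c, d)                          \
    do                                            \
      {                                           \
        a += b; d ^= a; d = ROTL32 (d, 16);       \
        c += d; b ^= c; b = ROTL32 (b, 12);       \
        a += b; d ^= a; d = ROTL32 (d, 8);        \
        c += d; b ^= c; b = ROTL32 (b, 7);        \
      }                                           \
    while (0)

  static void
  chacha20_block_keystream (uint32_t state[16], uint8_t dst[64])
  {
    uint32_t x[16];
    memcpy (x, state, sizeof x);
    for (int i = 0; i < 20; i += 2)      /* 20 rounds, two per iteration.  */
      {
        QR (x[0], x[4], x[8],  x[12]);   /* Column rounds.  */
        QR (x[1], x[5], x[9],  x[13]);
        QR (x[2], x[6], x[10], x[14]);
        QR (x[3], x[7], x[11], x[15]);
        QR (x[0], x[5], x[10], x[15]);   /* Diagonal rounds.  */
        QR (x[1], x[6], x[11], x[12]);
        QR (x[2], x[7], x[8],  x[13]);
        QR (x[3], x[4], x[9],  x[14]);
      }
    for (int i = 0; i < 16; i++)
      {
        uint32_t v = x[i] + state[i];    /* Add the input state back.  */
        memcpy (dst + i * 4, &v, 4);     /* Little-endian store on x86.  */
      }
    if (++state[12] == 0)                /* 64-bit block counter held in
                                            words 12/13, as in this file.  */
      state[13]++;
  }

Eight such blocks are produced per iteration of L(loop8) below, which is
why chacha20_arch.h now asserts that CHACHA20_BUFSIZE is a multiple of 8
blocks.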
diff --git a/LICENSES b/LICENSES
index 47e9cd8e31..1617648813 100644
--- a/LICENSES
+++ b/LICENSES
@@ -390,8 +390,9 @@ Copyright 2001 by Stephen L. Moshier
 License along with this library; if not, see
 <https://www.gnu.org/licenses/>.  */
 
-sysdeps/aarch64/chacha20-aarch64.S and sysdeps/x86_64/chacha20-amd64-sse2.S
-imports code from libgcrypt, with the following notices:
+sysdeps/aarch64/chacha20-aarch64.S, sysdeps/x86_64/chacha20-amd64-sse2.S,
+and sysdeps/x86_64/chacha20-amd64-avx2.S imports code from libgcrypt,
+with the following notices:
 
 Copyright (C) 2017-2019 Jussi Kivilinna
 
diff --git a/sysdeps/x86_64/Makefile b/sysdeps/x86_64/Makefile
index a2e5af3ca9..a02fb9a114 100644
--- a/sysdeps/x86_64/Makefile
+++ b/sysdeps/x86_64/Makefile
@@ -8,6 +8,7 @@ endif
 ifeq ($(subdir),stdlib)
 sysdep_routines += \
   chacha20-amd64-sse2 \
+  chacha20-amd64-avx2 \
   # sysdep_routines
 endif
 
diff --git a/sysdeps/x86_64/chacha20-amd64-avx2.S b/sysdeps/x86_64/chacha20-amd64-avx2.S
new file mode 100644
index 0000000000..eb07b99f48
--- /dev/null
+++ b/sysdeps/x86_64/chacha20-amd64-avx2.S
@@ -0,0 +1,328 @@
+/* Optimized AVX2 implementation of ChaCha20 cipher.
+   Copyright (C) 2022 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+/* chacha20-amd64-avx2.S - AVX2 implementation of ChaCha20 cipher
+
+   Copyright (C) 2017-2019 Jussi Kivilinna
+
+   This file is part of Libgcrypt.
+
+   Libgcrypt is free software; you can redistribute it and/or modify
+   it under the terms of the GNU Lesser General Public License as
+   published by the Free Software Foundation; either version 2.1 of
+   the License, or (at your option) any later version.
+
+   Libgcrypt is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with this program; if not, see <https://www.gnu.org/licenses/>.
+*/
+
+/* Based on D. J. Bernstein reference implementation at
+   http://cr.yp.to/chacha.html:
+
+   chacha-regs.c version 20080118
+   D. J. Bernstein
+   Public domain.
+*/
+
+#include <sysdep.h>
+
+#ifdef PIC
+# define rRIP (%rip)
+#else
+# define rRIP
+#endif
+
+/* register macros */
+#define INPUT %rdi
+#define DST %rsi
+#define SRC %rdx
+#define NBLKS %rcx
+#define ROUND %eax
+
+/* stack structure */
+#define STACK_VEC_X12 (32)
+#define STACK_VEC_X13 (32 + STACK_VEC_X12)
+#define STACK_TMP (32 + STACK_VEC_X13)
+#define STACK_TMP1 (32 + STACK_TMP)
+
+#define STACK_MAX (32 + STACK_TMP1)
+
+/* vector registers */
+#define X0 %ymm0
+#define X1 %ymm1
+#define X2 %ymm2
+#define X3 %ymm3
+#define X4 %ymm4
+#define X5 %ymm5
+#define X6 %ymm6
+#define X7 %ymm7
+#define X8 %ymm8
+#define X9 %ymm9
+#define X10 %ymm10
+#define X11 %ymm11
+#define X12 %ymm12
+#define X13 %ymm13
+#define X14 %ymm14
+#define X15 %ymm15
+
+#define X0h %xmm0
+#define X1h %xmm1
+#define X2h %xmm2
+#define X3h %xmm3
+#define X4h %xmm4
+#define X5h %xmm5
+#define X6h %xmm6
+#define X7h %xmm7
+#define X8h %xmm8
+#define X9h %xmm9
+#define X10h %xmm10
+#define X11h %xmm11
+#define X12h %xmm12
+#define X13h %xmm13
+#define X14h %xmm14
+#define X15h %xmm15
+
+/**********************************************************************
+  helper macros
+ **********************************************************************/
+
+/* 4x4 32-bit integer matrix transpose */
+#define transpose_4x4(x0,x1,x2,x3,t1,t2) \
+	vpunpckhdq x1, x0, t2; \
+	vpunpckldq x1, x0, x0; \
+	\
+	vpunpckldq x3, x2, t1; \
+	vpunpckhdq x3, x2, x2; \
+	\
+	vpunpckhqdq t1, x0, x1; \
+	vpunpcklqdq t1, x0, x0; \
+	\
+	vpunpckhqdq x2, t2, x3; \
+	vpunpcklqdq x2, t2, x2;
+
+/* 2x2 128-bit matrix transpose */
+#define transpose_16byte_2x2(x0,x1,t1) \
+	vmovdqa x0, t1; \
+	vperm2i128 $0x20, x1, x0, x0; \
+	vperm2i128 $0x31, x1, t1, x1;
+
+/**********************************************************************
+  8-way chacha20
+ **********************************************************************/
+
+#define ROTATE2(v1,v2,c,tmp) \
+	vpsrld $(32 - (c)), v1, tmp; \
+	vpslld $(c), v1, v1; \
+	vpaddb tmp, v1, v1; \
+	vpsrld $(32 - (c)), v2, tmp; \
+	vpslld $(c), v2, v2; \
+	vpaddb tmp, v2, v2;
+
+#define ROTATE_SHUF_2(v1,v2,shuf) \
+	vpshufb shuf, v1, v1; \
+	vpshufb shuf, v2, v2;
+
+#define XOR(ds,s) \
+	vpxor s, ds, ds;
+
+#define PLUS(ds,s) \
+	vpaddd s, ds, ds;
+
+#define QUARTERROUND2(a1,b1,c1,d1,a2,b2,c2,d2,ign,tmp1,\
+		      interleave_op1,interleave_op2,\
+		      interleave_op3,interleave_op4) \
+	vbroadcasti128 .Lshuf_rol16 rRIP, tmp1; \
+		interleave_op1; \
+	PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2); \
+	    ROTATE_SHUF_2(d1, d2, tmp1); \
+		interleave_op2; \
+	PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2); \
+	    ROTATE2(b1, b2, 12, tmp1); \
+	vbroadcasti128 .Lshuf_rol8 rRIP, tmp1; \
+		interleave_op3; \
+	PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2); \
+	    ROTATE_SHUF_2(d1, d2, tmp1); \
+		interleave_op4; \
+	PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2); \
+	    ROTATE2(b1, b2, 7, tmp1);
+
+	.section .text.avx2, "ax", @progbits
+	.align 32
+chacha20_data:
+L(shuf_rol16):
+	.byte 2,3,0,1,6,7,4,5,10,11,8,9,14,15,12,13
+L(shuf_rol8):
+	.byte 3,0,1,2,7,4,5,6,11,8,9,10,15,12,13,14
+L(inc_counter):
+	.byte 0,1,2,3,4,5,6,7
+L(unsigned_cmp):
+	.long 0x80000000
+
+	.hidden __chacha20_avx2_blocks8
+ENTRY (__chacha20_avx2_blocks8)
+	/* input:
+	 *	%rdi: input
+	 *	%rsi: dst
+	 *	%rdx: src
+	 *	%rcx: nblks (multiple of 8)
+	 */
+	vzeroupper;
+
+	pushq %rbp;
+	cfi_adjust_cfa_offset(8);
+	cfi_rel_offset(rbp, 0)
+	movq %rsp, %rbp;
+	cfi_def_cfa_register(rbp);
+
+	subq $STACK_MAX, %rsp;
+	andq $~31, %rsp;
+
+L(loop8):
+	mov $20, ROUND;
+
+	/* Construct counter vectors X12 and X13 */
+	vpmovzxbd L(inc_counter) rRIP, X0;
+	vpbroadcastd L(unsigned_cmp) rRIP, X2;
+	vpbroadcastd (12 * 4)(INPUT), X12;
+	vpbroadcastd (13 * 4)(INPUT), X13;
+	vpaddd X0, X12, X12;
+	vpxor X2, X0, X0;
+	vpxor X2, X12, X1;
+	vpcmpgtd X1, X0, X0;
+	vpsubd X0, X13, X13;
+	vmovdqa X12, (STACK_VEC_X12)(%rsp);
+	vmovdqa X13, (STACK_VEC_X13)(%rsp);
+
+	/* Load vectors */
+	vpbroadcastd (0 * 4)(INPUT), X0;
+	vpbroadcastd (1 * 4)(INPUT), X1;
+	vpbroadcastd (2 * 4)(INPUT), X2;
+	vpbroadcastd (3 * 4)(INPUT), X3;
+	vpbroadcastd (4 * 4)(INPUT), X4;
+	vpbroadcastd (5 * 4)(INPUT), X5;
+	vpbroadcastd (6 * 4)(INPUT), X6;
+	vpbroadcastd (7 * 4)(INPUT), X7;
+	vpbroadcastd (8 * 4)(INPUT), X8;
+	vpbroadcastd (9 * 4)(INPUT), X9;
+	vpbroadcastd (10 * 4)(INPUT), X10;
+	vpbroadcastd (11 * 4)(INPUT), X11;
+	vpbroadcastd (14 * 4)(INPUT), X14;
+	vpbroadcastd (15 * 4)(INPUT), X15;
+	vmovdqa X15, (STACK_TMP)(%rsp);
+
+L(round2):
+	QUARTERROUND2(X0, X4,  X8, X12,   X1, X5,  X9, X13, tmp:=,X15,,,,)
+	vmovdqa (STACK_TMP)(%rsp), X15;
+	vmovdqa X8, (STACK_TMP)(%rsp);
+	QUARTERROUND2(X2, X6, X10, X14,   X3, X7, X11, X15, tmp:=,X8,,,,)
+	QUARTERROUND2(X0, X5, X10, X15,   X1, X6, X11, X12, tmp:=,X8,,,,)
+	vmovdqa (STACK_TMP)(%rsp), X8;
+	vmovdqa X15, (STACK_TMP)(%rsp);
+	QUARTERROUND2(X2, X7,  X8, X13,   X3, X4,  X9, X14, tmp:=,X15,,,,)
+	sub $2, ROUND;
+	jnz L(round2);
+
+	vmovdqa X8, (STACK_TMP1)(%rsp);
+
+	/* tmp := X15 */
+	vpbroadcastd (0 * 4)(INPUT), X15;
+	PLUS(X0, X15);
+	vpbroadcastd (1 * 4)(INPUT), X15;
+	PLUS(X1, X15);
+	vpbroadcastd (2 * 4)(INPUT), X15;
+	PLUS(X2, X15);
+	vpbroadcastd (3 * 4)(INPUT), X15;
+	PLUS(X3, X15);
+	vpbroadcastd (4 * 4)(INPUT), X15;
+	PLUS(X4, X15);
+	vpbroadcastd (5 * 4)(INPUT), X15;
+	PLUS(X5, X15);
+	vpbroadcastd (6 * 4)(INPUT), X15;
+	PLUS(X6, X15);
+	vpbroadcastd (7 * 4)(INPUT), X15;
+	PLUS(X7, X15);
+	transpose_4x4(X0, X1, X2, X3, X8, X15);
+	transpose_4x4(X4, X5, X6, X7, X8, X15);
+	vmovdqa (STACK_TMP1)(%rsp), X8;
+	transpose_16byte_2x2(X0, X4, X15);
+	transpose_16byte_2x2(X1, X5, X15);
+	transpose_16byte_2x2(X2, X6, X15);
+	transpose_16byte_2x2(X3, X7, X15);
+	vmovdqa (STACK_TMP)(%rsp), X15;
+	vmovdqu X0, (64 * 0 + 16 * 0)(DST)
+	vmovdqu X1, (64 * 1 + 16 * 0)(DST)
+	vpbroadcastd (8 * 4)(INPUT), X0;
+	PLUS(X8, X0);
+	vpbroadcastd (9 * 4)(INPUT), X0;
+	PLUS(X9, X0);
+	vpbroadcastd (10 * 4)(INPUT), X0;
+	PLUS(X10, X0);
+	vpbroadcastd (11 * 4)(INPUT), X0;
+	PLUS(X11, X0);
+	vmovdqa (STACK_VEC_X12)(%rsp), X0;
+	PLUS(X12, X0);
+	vmovdqa (STACK_VEC_X13)(%rsp), X0;
+	PLUS(X13, X0);
+	vpbroadcastd (14 * 4)(INPUT), X0;
+	PLUS(X14, X0);
+	vpbroadcastd (15 * 4)(INPUT), X0;
+	PLUS(X15, X0);
+	vmovdqu X2, (64 * 2 + 16 * 0)(DST)
+	vmovdqu X3, (64 * 3 + 16 * 0)(DST)
+
+	/* Update counter */
+	addq $8, (12 * 4)(INPUT);
+
+	transpose_4x4(X8, X9, X10, X11, X0, X1);
+	transpose_4x4(X12, X13, X14, X15, X0, X1);
+	vmovdqu X4, (64 * 4 + 16 * 0)(DST)
+	vmovdqu X5, (64 * 5 + 16 * 0)(DST)
+	transpose_16byte_2x2(X8, X12, X0);
+	transpose_16byte_2x2(X9, X13, X0);
+	transpose_16byte_2x2(X10, X14, X0);
+	transpose_16byte_2x2(X11, X15, X0);
+	vmovdqu X6, (64 * 6 + 16 * 0)(DST)
+	vmovdqu X7, (64 * 7 + 16 * 0)(DST)
+	vmovdqu X8, (64 * 0 + 16 * 2)(DST)
+	vmovdqu X9, (64 * 1 + 16 * 2)(DST)
+	vmovdqu X10, (64 * 2 + 16 * 2)(DST)
+	vmovdqu X11, (64 * 3 + 16 * 2)(DST)
+	vmovdqu X12, (64 * 4 + 16 * 2)(DST)
+	vmovdqu X13, (64 * 5 + 16 * 2)(DST)
+	vmovdqu X14, (64 * 6 + 16 * 2)(DST)
+	vmovdqu X15, (64 * 7 + 16 * 2)(DST)
+
+	sub $8, NBLKS;
+	lea (8 * 64)(DST), DST;
+	lea (8 * 64)(SRC), SRC;
+	jnz L(loop8);
+
+	vzeroupper;
+
+	/* eax zeroed by round loop. */
+	leave;
+	cfi_adjust_cfa_offset(-8)
+	cfi_def_cfa_register(%rsp);
+	ret;
+	int3;
+END(__chacha20_avx2_blocks8)
diff --git a/sysdeps/x86_64/chacha20_arch.h b/sysdeps/x86_64/chacha20_arch.h
index 5738c840a9..bfdc6c0a36 100644
--- a/sysdeps/x86_64/chacha20_arch.h
+++ b/sysdeps/x86_64/chacha20_arch.h
@@ -23,16 +23,26 @@
 unsigned int __chacha20_sse2_blocks4 (uint32_t *state, uint8_t *dst,
 				      const uint8_t *src, size_t nblks)
      attribute_hidden;
+unsigned int __chacha20_avx2_blocks8 (uint32_t *state, uint8_t *dst,
+				      const uint8_t *src, size_t nblks)
+     attribute_hidden;
 
 static inline void
 chacha20_crypt (uint32_t *state, uint8_t *dst, const uint8_t *src,
 		size_t bytes)
 {
-  _Static_assert (CHACHA20_BUFSIZE % 4 == 0,
-		  "CHACHA20_BUFSIZE not multiple of 4");
-  _Static_assert (CHACHA20_BUFSIZE >= CHACHA20_BLOCK_SIZE * 4,
-		  "CHACHA20_BUFSIZE <= CHACHA20_BLOCK_SIZE * 4");
+  _Static_assert (CHACHA20_BUFSIZE % 4 == 0 && CHACHA20_BUFSIZE % 8 == 0,
+		  "CHACHA20_BUFSIZE not multiple of 4 or 8");
+  _Static_assert (CHACHA20_BUFSIZE >= CHACHA20_BLOCK_SIZE * 8,
+		  "CHACHA20_BUFSIZE < CHACHA20_BLOCK_SIZE * 8");
+  const struct cpu_features* cpu_features = __get_cpu_features ();
 
-  __chacha20_sse2_blocks4 (state, dst, src,
-			   CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
+  /* AVX2 version uses vzeroupper, so disable it if RTM is enabled.  */
+  if (CPU_FEATURE_USABLE_P (cpu_features, AVX2)
+      && !CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER))
+    __chacha20_avx2_blocks8 (state, dst, src,
+			     CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
+  else
+    __chacha20_sse2_blocks4 (state, dst, src,
+			     CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
 }
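One detail of the counter setup at the top of L(loop8) in
chacha20-amd64-avx2.S that is easy to miss: the block counter lives in
state words 12 and 13, and a carry from the low word has to be propagated
into the high word for each of the eight per-lane counters.  AVX2 has no
unsigned doubleword compare, so both operands are biased with 0x80000000
(L(unsigned_cmp)) to make the signed vpcmpgtd behave as an unsigned
comparison, and the resulting 0/-1 mask is subtracted from X13, adding 1
in the lanes where the low word wrapped.  A hypothetical scalar model of
that computation, with invented names and not taken from the patch, is:

  #include <stdint.h>

  /* Scalar sketch of the X12/X13 construction: lane i holds counter+i in
     the low word, and the high word is bumped in lanes that wrapped.  */
  static void
  build_counter_vectors (const uint32_t state[16],
                         uint32_t x12[8], uint32_t x13[8])
  {
    for (int i = 0; i < 8; i++)
      {
        uint32_t inc = (uint32_t) i;             /* L(inc_counter) lane.  */
        uint32_t lo = state[12] + inc;           /* vpaddd X0, X12, X12   */
        /* Signed compare of biased values == unsigned "inc > lo", i.e.
           the 32-bit addition wrapped.  */
        int wrapped = ((int32_t) (inc ^ 0x80000000u)
                       > (int32_t) (lo ^ 0x80000000u));
        x12[i] = lo;                             /* Spilled to STACK_VEC_X12.  */
        x13[i] = state[13] + (uint32_t) wrapped; /* vpsubd of the 0/-1 mask.  */
      }
  }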