From patchwork Thu Mar 26 08:01:00 2020
X-Patchwork-Submitter: Masahiro Yamada
X-Patchwork-Id: 197932
From: Masahiro Yamada
To: linux-kbuild@vger.kernel.org
Cc: Thomas Gleixner, Nick Desaulniers, Borislav Petkov, Peter Zijlstra,
    "H. Peter Anvin", x86@kernel.org, "Jason A. Donenfeld",
    clang-built-linux@googlegroups.com, Masahiro Yamada, "David S. Miller",
    Herbert Xu, Ingo Molnar, linux-crypto@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 12/16] crypto: x86 - rework configuration based on Kconfig
Date: Thu, 26 Mar 2020 17:01:00 +0900
Message-Id: <20200326080104.27286-13-masahiroy@kernel.org>
In-Reply-To: <20200326080104.27286-1-masahiroy@kernel.org>
References: <20200326080104.27286-1-masahiroy@kernel.org>
X-Mailing-List: linux-crypto@vger.kernel.org

From: "Jason A. Donenfeld"

Now that assembler capabilities are probed inside Kconfig, we can set up
proper Kconfig-based dependencies. We also take this opportunity to reorder
the Makefile so that items are grouped logically by primitive.

Signed-off-by: Jason A.
Donenfeld Acked-by: Herbert Xu Signed-off-by: Masahiro Yamada --- Changes in v2: None arch/x86/crypto/Makefile | 152 +++++++++++++++++---------------------- crypto/Kconfig | 8 +-- 2 files changed, 69 insertions(+), 91 deletions(-) diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile index 1a044908d42d..2f23f08fdd4b 100644 --- a/arch/x86/crypto/Makefile +++ b/arch/x86/crypto/Makefile @@ -1,122 +1,100 @@ # SPDX-License-Identifier: GPL-2.0 # -# Arch-specific CryptoAPI modules. -# +# x86 crypto algorithms OBJECT_FILES_NON_STANDARD := y -avx2_supported := $(call as-instr,vpgatherdd %ymm0$(comma)(%eax$(comma)%ymm1\ - $(comma)4)$(comma)%ymm2,yes,no) -avx512_supported :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,yes,no) -sha1_ni_supported :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,yes,no) -sha256_ni_supported :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,yes,no) -adx_supported := $(call as-instr,adox %r10$(comma)%r10,yes,no) - obj-$(CONFIG_CRYPTO_GLUE_HELPER_X86) += glue_helper.o obj-$(CONFIG_CRYPTO_TWOFISH_586) += twofish-i586.o +twofish-i586-y := twofish-i586-asm_32.o twofish_glue.o +obj-$(CONFIG_CRYPTO_TWOFISH_X86_64) += twofish-x86_64.o +twofish-x86_64-y := twofish-x86_64-asm_64.o twofish_glue.o +obj-$(CONFIG_CRYPTO_TWOFISH_X86_64_3WAY) += twofish-x86_64-3way.o +twofish-x86_64-3way-y := twofish-x86_64-asm_64-3way.o twofish_glue_3way.o +obj-$(CONFIG_CRYPTO_TWOFISH_AVX_X86_64) += twofish-avx-x86_64.o +twofish-avx-x86_64-y := twofish-avx-x86_64-asm_64.o twofish_avx_glue.o + obj-$(CONFIG_CRYPTO_SERPENT_SSE2_586) += serpent-sse2-i586.o +serpent-sse2-i586-y := serpent-sse2-i586-asm_32.o serpent_sse2_glue.o +obj-$(CONFIG_CRYPTO_SERPENT_SSE2_X86_64) += serpent-sse2-x86_64.o +serpent-sse2-x86_64-y := serpent-sse2-x86_64-asm_64.o serpent_sse2_glue.o +obj-$(CONFIG_CRYPTO_SERPENT_AVX_X86_64) += serpent-avx-x86_64.o +serpent-avx-x86_64-y := serpent-avx-x86_64-asm_64.o serpent_avx_glue.o +obj-$(CONFIG_CRYPTO_SERPENT_AVX2_X86_64) += serpent-avx2.o +serpent-avx2-y := serpent-avx2-asm_64.o serpent_avx2_glue.o obj-$(CONFIG_CRYPTO_DES3_EDE_X86_64) += des3_ede-x86_64.o +des3_ede-x86_64-y := des3_ede-asm_64.o des3_ede_glue.o + obj-$(CONFIG_CRYPTO_CAMELLIA_X86_64) += camellia-x86_64.o +camellia-x86_64-y := camellia-x86_64-asm_64.o camellia_glue.o +obj-$(CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64) += camellia-aesni-avx-x86_64.o +camellia-aesni-avx-x86_64-y := camellia-aesni-avx-asm_64.o camellia_aesni_avx_glue.o +obj-$(CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64) += camellia-aesni-avx2.o +camellia-aesni-avx2-y := camellia-aesni-avx2-asm_64.o camellia_aesni_avx2_glue.o + obj-$(CONFIG_CRYPTO_BLOWFISH_X86_64) += blowfish-x86_64.o -obj-$(CONFIG_CRYPTO_TWOFISH_X86_64) += twofish-x86_64.o -obj-$(CONFIG_CRYPTO_TWOFISH_X86_64_3WAY) += twofish-x86_64-3way.o +blowfish-x86_64-y := blowfish-x86_64-asm_64.o blowfish_glue.o + +obj-$(CONFIG_CRYPTO_CAST5_AVX_X86_64) += cast5-avx-x86_64.o +cast5-avx-x86_64-y := cast5-avx-x86_64-asm_64.o cast5_avx_glue.o + +obj-$(CONFIG_CRYPTO_CAST6_AVX_X86_64) += cast6-avx-x86_64.o +cast6-avx-x86_64-y := cast6-avx-x86_64-asm_64.o cast6_avx_glue.o + +obj-$(CONFIG_CRYPTO_AEGIS128_AESNI_SSE2) += aegis128-aesni.o +aegis128-aesni-y := aegis128-aesni-asm.o aegis128-aesni-glue.o + obj-$(CONFIG_CRYPTO_CHACHA20_X86_64) += chacha-x86_64.o -obj-$(CONFIG_CRYPTO_SERPENT_SSE2_X86_64) += serpent-sse2-x86_64.o +chacha-x86_64-y := chacha-ssse3-x86_64.o chacha_glue.o +chacha-x86_64-$(CONFIG_AS_AVX2) += chacha-avx2-x86_64.o +chacha-x86_64-$(CONFIG_AS_AVX512) += chacha-avx512vl-x86_64.o + 
obj-$(CONFIG_CRYPTO_AES_NI_INTEL) += aesni-intel.o -obj-$(CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL) += ghash-clmulni-intel.o +aesni-intel-y := aesni-intel_asm.o aesni-intel_glue.o +aesni-intel-$(CONFIG_64BIT) += aesni-intel_avx-x86_64.o aes_ctrby8_avx-x86_64.o -obj-$(CONFIG_CRYPTO_CRC32C_INTEL) += crc32c-intel.o obj-$(CONFIG_CRYPTO_SHA1_SSSE3) += sha1-ssse3.o -obj-$(CONFIG_CRYPTO_CRC32_PCLMUL) += crc32-pclmul.o -obj-$(CONFIG_CRYPTO_SHA256_SSSE3) += sha256-ssse3.o -obj-$(CONFIG_CRYPTO_SHA512_SSSE3) += sha512-ssse3.o -obj-$(CONFIG_CRYPTO_CRCT10DIF_PCLMUL) += crct10dif-pclmul.o -obj-$(CONFIG_CRYPTO_POLY1305_X86_64) += poly1305-x86_64.o - -obj-$(CONFIG_CRYPTO_AEGIS128_AESNI_SSE2) += aegis128-aesni.o +sha1-ssse3-y := sha1_ssse3_asm.o sha1_ssse3_glue.o +sha1-ssse3-$(CONFIG_AS_AVX2) += sha1_avx2_x86_64_asm.o +sha1-ssse3-$(CONFIG_AS_SHA1_NI) += sha1_ni_asm.o -obj-$(CONFIG_CRYPTO_NHPOLY1305_SSE2) += nhpoly1305-sse2.o -obj-$(CONFIG_CRYPTO_NHPOLY1305_AVX2) += nhpoly1305-avx2.o +obj-$(CONFIG_CRYPTO_SHA256_SSSE3) += sha256-ssse3.o +sha256-ssse3-y := sha256-ssse3-asm.o sha256-avx-asm.o sha256-avx2-asm.o sha256_ssse3_glue.o +sha256-ssse3-$(CONFIG_AS_SHA256_NI) += sha256_ni_asm.o -# These modules require the assembler to support ADX. -ifeq ($(adx_supported),yes) - obj-$(CONFIG_CRYPTO_CURVE25519_X86) += curve25519-x86_64.o -endif +obj-$(CONFIG_CRYPTO_SHA512_SSSE3) += sha512-ssse3.o +sha512-ssse3-y := sha512-ssse3-asm.o sha512-avx-asm.o sha512-avx2-asm.o sha512_ssse3_glue.o -# These modules require assembler to support AVX. -obj-$(CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64) += camellia-aesni-avx-x86_64.o -obj-$(CONFIG_CRYPTO_CAST5_AVX_X86_64) += cast5-avx-x86_64.o -obj-$(CONFIG_CRYPTO_CAST6_AVX_X86_64) += cast6-avx-x86_64.o -obj-$(CONFIG_CRYPTO_TWOFISH_AVX_X86_64) += twofish-avx-x86_64.o -obj-$(CONFIG_CRYPTO_SERPENT_AVX_X86_64) += serpent-avx-x86_64.o obj-$(CONFIG_CRYPTO_BLAKE2S_X86) += blake2s-x86_64.o +blake2s-x86_64-y := blake2s-core.o blake2s-glue.o -# These modules require assembler to support AVX2. 
-ifeq ($(avx2_supported),yes) - obj-$(CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64) += camellia-aesni-avx2.o - obj-$(CONFIG_CRYPTO_SERPENT_AVX2_X86_64) += serpent-avx2.o -endif +obj-$(CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL) += ghash-clmulni-intel.o +ghash-clmulni-intel-y := ghash-clmulni-intel_asm.o ghash-clmulni-intel_glue.o -twofish-i586-y := twofish-i586-asm_32.o twofish_glue.o -serpent-sse2-i586-y := serpent-sse2-i586-asm_32.o serpent_sse2_glue.o +obj-$(CONFIG_CRYPTO_CRC32C_INTEL) += crc32c-intel.o +crc32c-intel-y := crc32c-intel_glue.o +crc32c-intel-$(CONFIG_64BIT) += crc32c-pcl-intel-asm_64.o -des3_ede-x86_64-y := des3_ede-asm_64.o des3_ede_glue.o -camellia-x86_64-y := camellia-x86_64-asm_64.o camellia_glue.o -blowfish-x86_64-y := blowfish-x86_64-asm_64.o blowfish_glue.o -twofish-x86_64-y := twofish-x86_64-asm_64.o twofish_glue.o -twofish-x86_64-3way-y := twofish-x86_64-asm_64-3way.o twofish_glue_3way.o -chacha-x86_64-y := chacha-ssse3-x86_64.o chacha_glue.o -serpent-sse2-x86_64-y := serpent-sse2-x86_64-asm_64.o serpent_sse2_glue.o +obj-$(CONFIG_CRYPTO_CRC32_PCLMUL) += crc32-pclmul.o +crc32-pclmul-y := crc32-pclmul_asm.o crc32-pclmul_glue.o -aegis128-aesni-y := aegis128-aesni-asm.o aegis128-aesni-glue.o +obj-$(CONFIG_CRYPTO_CRCT10DIF_PCLMUL) += crct10dif-pclmul.o +crct10dif-pclmul-y := crct10dif-pcl-asm_64.o crct10dif-pclmul_glue.o -nhpoly1305-sse2-y := nh-sse2-x86_64.o nhpoly1305-sse2-glue.o -blake2s-x86_64-y := blake2s-core.o blake2s-glue.o +obj-$(CONFIG_CRYPTO_POLY1305_X86_64) += poly1305-x86_64.o poly1305-x86_64-y := poly1305-x86_64-cryptogams.o poly1305_glue.o ifneq ($(CONFIG_CRYPTO_POLY1305_X86_64),) targets += poly1305-x86_64-cryptogams.S endif -camellia-aesni-avx-x86_64-y := camellia-aesni-avx-asm_64.o \ - camellia_aesni_avx_glue.o -cast5-avx-x86_64-y := cast5-avx-x86_64-asm_64.o cast5_avx_glue.o -cast6-avx-x86_64-y := cast6-avx-x86_64-asm_64.o cast6_avx_glue.o -twofish-avx-x86_64-y := twofish-avx-x86_64-asm_64.o twofish_avx_glue.o -serpent-avx-x86_64-y := serpent-avx-x86_64-asm_64.o serpent_avx_glue.o - -ifeq ($(avx2_supported),yes) - camellia-aesni-avx2-y := camellia-aesni-avx2-asm_64.o camellia_aesni_avx2_glue.o - chacha-x86_64-y += chacha-avx2-x86_64.o - serpent-avx2-y := serpent-avx2-asm_64.o serpent_avx2_glue.o - - nhpoly1305-avx2-y := nh-avx2-x86_64.o nhpoly1305-avx2-glue.o -endif - -ifeq ($(avx512_supported),yes) - chacha-x86_64-y += chacha-avx512vl-x86_64.o -endif +obj-$(CONFIG_CRYPTO_NHPOLY1305_SSE2) += nhpoly1305-sse2.o +nhpoly1305-sse2-y := nh-sse2-x86_64.o nhpoly1305-sse2-glue.o +obj-$(CONFIG_CRYPTO_NHPOLY1305_AVX2) += nhpoly1305-avx2.o +nhpoly1305-avx2-y := nh-avx2-x86_64.o nhpoly1305-avx2-glue.o -aesni-intel-y := aesni-intel_asm.o aesni-intel_glue.o -aesni-intel-$(CONFIG_64BIT) += aesni-intel_avx-x86_64.o aes_ctrby8_avx-x86_64.o -ghash-clmulni-intel-y := ghash-clmulni-intel_asm.o ghash-clmulni-intel_glue.o -sha1-ssse3-y := sha1_ssse3_asm.o sha1_ssse3_glue.o -ifeq ($(avx2_supported),yes) -sha1-ssse3-y += sha1_avx2_x86_64_asm.o -endif -ifeq ($(sha1_ni_supported),yes) -sha1-ssse3-y += sha1_ni_asm.o -endif -crc32c-intel-y := crc32c-intel_glue.o -crc32c-intel-$(CONFIG_64BIT) += crc32c-pcl-intel-asm_64.o -crc32-pclmul-y := crc32-pclmul_asm.o crc32-pclmul_glue.o -sha256-ssse3-y := sha256-ssse3-asm.o sha256-avx-asm.o sha256-avx2-asm.o sha256_ssse3_glue.o -ifeq ($(sha256_ni_supported),yes) -sha256-ssse3-y += sha256_ni_asm.o -endif -sha512-ssse3-y := sha512-ssse3-asm.o sha512-avx-asm.o sha512-avx2-asm.o sha512_ssse3_glue.o -crct10dif-pclmul-y := crct10dif-pcl-asm_64.o 
crct10dif-pclmul_glue.o
+obj-$(CONFIG_CRYPTO_CURVE25519_X86) += curve25519-x86_64.o

 quiet_cmd_perlasm = PERLASM $@
       cmd_perlasm = $(PERL) $< > $@

diff --git a/crypto/Kconfig b/crypto/Kconfig
index c24a47406f8f..49aae167e75c 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -267,7 +267,7 @@ config CRYPTO_CURVE25519

 config CRYPTO_CURVE25519_X86
        tristate "x86_64 accelerated Curve25519 scalar multiplication library"
-       depends on X86 && 64BIT
+       depends on X86 && 64BIT && AS_ADX
        select CRYPTO_LIB_CURVE25519_GENERIC
        select CRYPTO_ARCH_HAVE_LIB_CURVE25519

@@ -465,7 +465,7 @@ config CRYPTO_NHPOLY1305_SSE2

 config CRYPTO_NHPOLY1305_AVX2
        tristate "NHPoly1305 hash function (x86_64 AVX2 implementation)"
-       depends on X86 && 64BIT
+       depends on X86 && 64BIT && AS_AVX2
        select CRYPTO_NHPOLY1305
        help
          AVX2 optimized implementation of the hash function used by the

@@ -1303,7 +1303,7 @@ config CRYPTO_CAMELLIA_AESNI_AVX_X86_64

 config CRYPTO_CAMELLIA_AESNI_AVX2_X86_64
        tristate "Camellia cipher algorithm (x86_64/AES-NI/AVX2)"
-       depends on X86 && 64BIT
+       depends on X86 && 64BIT && AS_AVX2
        depends on CRYPTO
        select CRYPTO_CAMELLIA_AESNI_AVX_X86_64
        help

@@ -1573,7 +1573,7 @@ config CRYPTO_SERPENT_AVX_X86_64

 config CRYPTO_SERPENT_AVX2_X86_64
        tristate "Serpent cipher algorithm (x86_64/AVX2)"
-       depends on X86 && 64BIT
+       depends on X86 && 64BIT && AS_AVX2
        select CRYPTO_SERPENT_AVX_X86_64
        help
          Serpent cipher algorithm, by Anderson, Biham & Knudsen.
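The pattern this patch builds on deserves one concrete illustration. A
one-line as-instr probe in arch/x86/Kconfig.assembler turns an assembler
capability into an ordinary Kconfig symbol; Kconfig dependencies and
Makefile rules can then test that symbol instead of re-running
$(call as-instr,...) on every build. The sketch below is illustrative only:
AS_FOO, CRYPTO_BAR_X86_64 and the object names are placeholders, not hunks
from this patch.

# arch/x86/Kconfig.assembler (sketch): probe the assembler once, at
# configuration time, and record the result as a Kconfig symbol.
config AS_FOO
        def_bool $(as-instr,foo %xmm0$(comma)%xmm1)

# crypto/Kconfig (sketch): an algorithm that needs the instruction can now
# simply depend on the probe result.
config CRYPTO_BAR_X86_64
        tristate "Bar cipher algorithm (x86_64/FOO)"
        depends on X86 && 64BIT && AS_FOO

# arch/x86/crypto/Makefile (sketch): optional objects are keyed off the same
# symbol, replacing the old foo_supported := $(call as-instr,...) checks.
obj-$(CONFIG_CRYPTO_BAR_X86_64) += bar-x86_64.o
bar-x86_64-y := bar-asm_64.o bar_glue.o
bar-x86_64-$(CONFIG_AS_FOO) += bar-foo-asm_64.o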
From patchwork Thu Mar 26 08:01:03 2020
X-Patchwork-Submitter: Masahiro Yamada
X-Patchwork-Id: 197933
From: Masahiro Yamada
To: linux-kbuild@vger.kernel.org
Cc: Thomas Gleixner, Nick Desaulniers, Borislav Petkov, Peter Zijlstra,
    "H. Peter Anvin", x86@kernel.org, "Jason A. Donenfeld",
    clang-built-linux@googlegroups.com, Masahiro Yamada, "David S. Miller",
    Herbert Xu, Ingo Molnar, linux-crypto@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 15/16] x86: update AS_* macros to binutils >=2.23, supporting ADX and AVX2
Date: Thu, 26 Mar 2020 17:01:03 +0900
Message-Id: <20200326080104.27286-16-masahiroy@kernel.org>
In-Reply-To: <20200326080104.27286-1-masahiroy@kernel.org>
References: <20200326080104.27286-1-masahiroy@kernel.org>
X-Mailing-List: linux-crypto@vger.kernel.org

From: "Jason A. Donenfeld"

Now that the kernel specifies binutils 2.23 as the minimum version, we can
remove the ifdefs for AVX2 and ADX throughout.

Signed-off-by: Jason A. Donenfeld
Signed-off-by: Masahiro Yamada
---
Changes in v2: None

 arch/x86/Kconfig.assembler                    | 10 ----------
 arch/x86/crypto/Makefile                      |  6 ++----
 arch/x86/crypto/aesni-intel_avx-x86_64.S      |  3 ---
 arch/x86/crypto/aesni-intel_glue.c            |  7 -------
 arch/x86/crypto/chacha_glue.c                 |  6 ++----
 arch/x86/crypto/poly1305-x86_64-cryptogams.pl |  8 --------
 arch/x86/crypto/poly1305_glue.c               |  5 ++---
 arch/x86/crypto/sha1_ssse3_glue.c             |  6 ------
 arch/x86/crypto/sha256-avx2-asm.S             |  3 ---
 arch/x86/crypto/sha256_ssse3_glue.c           |  6 ------
 arch/x86/crypto/sha512-avx2-asm.S             |  3 ---
 arch/x86/crypto/sha512_ssse3_glue.c           |  5 -----
 crypto/Kconfig                                |  8 ++++----
 lib/raid6/algos.c                             |  6 ------
 lib/raid6/avx2.c                              |  4 ----
 lib/raid6/recov_avx2.c                        |  6 ------
 lib/raid6/test/Makefile                       |  3 ---
 17 files changed, 10 insertions(+), 85 deletions(-)

diff --git a/arch/x86/Kconfig.assembler b/arch/x86/Kconfig.assembler
index a5a1d2766b3a..13de0db38d4e 100644
--- a/arch/x86/Kconfig.assembler
+++ b/arch/x86/Kconfig.assembler
@@ -1,11 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 # Copyright (C) 2020 Jason A. Donenfeld . All Rights Reserved.

-config AS_AVX2 - def_bool $(as-instr,vpbroadcastb %xmm0$(comma)%ymm1) - help - Supported by binutils >= 2.22 and LLVM integrated assembler - config AS_AVX512 def_bool $(as-instr,vpmovm2b %k1$(comma)%zmm5) help @@ -20,8 +15,3 @@ config AS_SHA256_NI def_bool $(as-instr,sha256msg1 %xmm0$(comma)%xmm1) help Supported by binutils >= 2.24 and LLVM integrated assembler - -config AS_ADX - def_bool $(as-instr,adox %eax$(comma)%eax) - help - Supported by binutils >= 2.23 and LLVM integrated assembler diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile index 2f23f08fdd4b..928aad453c72 100644 --- a/arch/x86/crypto/Makefile +++ b/arch/x86/crypto/Makefile @@ -47,8 +47,7 @@ obj-$(CONFIG_CRYPTO_AEGIS128_AESNI_SSE2) += aegis128-aesni.o aegis128-aesni-y := aegis128-aesni-asm.o aegis128-aesni-glue.o obj-$(CONFIG_CRYPTO_CHACHA20_X86_64) += chacha-x86_64.o -chacha-x86_64-y := chacha-ssse3-x86_64.o chacha_glue.o -chacha-x86_64-$(CONFIG_AS_AVX2) += chacha-avx2-x86_64.o +chacha-x86_64-y := chacha-avx2-x86_64.o chacha-ssse3-x86_64.o chacha_glue.o chacha-x86_64-$(CONFIG_AS_AVX512) += chacha-avx512vl-x86_64.o obj-$(CONFIG_CRYPTO_AES_NI_INTEL) += aesni-intel.o @@ -56,8 +55,7 @@ aesni-intel-y := aesni-intel_asm.o aesni-intel_glue.o aesni-intel-$(CONFIG_64BIT) += aesni-intel_avx-x86_64.o aes_ctrby8_avx-x86_64.o obj-$(CONFIG_CRYPTO_SHA1_SSSE3) += sha1-ssse3.o -sha1-ssse3-y := sha1_ssse3_asm.o sha1_ssse3_glue.o -sha1-ssse3-$(CONFIG_AS_AVX2) += sha1_avx2_x86_64_asm.o +sha1-ssse3-y := sha1_avx2_x86_64_asm.o sha1_ssse3_asm.o sha1_ssse3_glue.o sha1-ssse3-$(CONFIG_AS_SHA1_NI) += sha1_ni_asm.o obj-$(CONFIG_CRYPTO_SHA256_SSSE3) += sha256-ssse3.o diff --git a/arch/x86/crypto/aesni-intel_avx-x86_64.S b/arch/x86/crypto/aesni-intel_avx-x86_64.S index cc56ee43238b..0cea33295287 100644 --- a/arch/x86/crypto/aesni-intel_avx-x86_64.S +++ b/arch/x86/crypto/aesni-intel_avx-x86_64.S @@ -1868,7 +1868,6 @@ key_256_finalize: ret SYM_FUNC_END(aesni_gcm_finalize_avx_gen2) -#ifdef CONFIG_AS_AVX2 ############################################################################### # GHASH_MUL MACRO to implement: Data*HashKey mod (128,127,126,121,0) # Input: A and B (128-bits each, bit-reflected) @@ -2836,5 +2835,3 @@ key_256_finalize4: FUNC_RESTORE ret SYM_FUNC_END(aesni_gcm_finalize_avx_gen4) - -#endif /* CONFIG_AS_AVX2 */ diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index e0f54e00edfd..c7f8c4d7f670 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -233,7 +233,6 @@ static const struct aesni_gcm_tfm_s aesni_gcm_tfm_avx_gen2 = { .finalize = &aesni_gcm_finalize_avx_gen2, }; -#ifdef CONFIG_AS_AVX2 /* * asmlinkage void aesni_gcm_init_avx_gen4() * gcm_data *my_ctx_data, context data @@ -276,8 +275,6 @@ static const struct aesni_gcm_tfm_s aesni_gcm_tfm_avx_gen4 = { .finalize = &aesni_gcm_finalize_avx_gen4, }; -#endif - static inline struct aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm) { @@ -706,10 +703,8 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, if (!enc) left -= auth_tag_len; -#ifdef CONFIG_AS_AVX2 if (left < AVX_GEN4_OPTSIZE && gcm_tfm == &aesni_gcm_tfm_avx_gen4) gcm_tfm = &aesni_gcm_tfm_avx_gen2; -#endif if (left < AVX_GEN2_OPTSIZE && gcm_tfm == &aesni_gcm_tfm_avx_gen2) gcm_tfm = &aesni_gcm_tfm_sse; @@ -1069,12 +1064,10 @@ static int __init aesni_init(void) if (!x86_match_cpu(aesni_cpu_id)) return -ENODEV; #ifdef CONFIG_X86_64 -#ifdef CONFIG_AS_AVX2 if (boot_cpu_has(X86_FEATURE_AVX2)) { pr_info("AVX2 
version of gcm_enc/dec engaged.\n"); aesni_gcm_tfm = &aesni_gcm_tfm_avx_gen4; } else -#endif if (boot_cpu_has(X86_FEATURE_AVX)) { pr_info("AVX version of gcm_enc/dec engaged.\n"); aesni_gcm_tfm = &aesni_gcm_tfm_avx_gen2; diff --git a/arch/x86/crypto/chacha_glue.c b/arch/x86/crypto/chacha_glue.c index 68a74953efaf..b412c21ee06e 100644 --- a/arch/x86/crypto/chacha_glue.c +++ b/arch/x86/crypto/chacha_glue.c @@ -79,8 +79,7 @@ static void chacha_dosimd(u32 *state, u8 *dst, const u8 *src, } } - if (IS_ENABLED(CONFIG_AS_AVX2) && - static_branch_likely(&chacha_use_avx2)) { + if (static_branch_likely(&chacha_use_avx2)) { while (bytes >= CHACHA_BLOCK_SIZE * 8) { chacha_8block_xor_avx2(state, dst, src, bytes, nrounds); bytes -= CHACHA_BLOCK_SIZE * 8; @@ -288,8 +287,7 @@ static int __init chacha_simd_mod_init(void) static_branch_enable(&chacha_use_simd); - if (IS_ENABLED(CONFIG_AS_AVX2) && - boot_cpu_has(X86_FEATURE_AVX) && + if (boot_cpu_has(X86_FEATURE_AVX) && boot_cpu_has(X86_FEATURE_AVX2) && cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { static_branch_enable(&chacha_use_avx2); diff --git a/arch/x86/crypto/poly1305-x86_64-cryptogams.pl b/arch/x86/crypto/poly1305-x86_64-cryptogams.pl index 5bac2d533104..137edcf038cb 100644 --- a/arch/x86/crypto/poly1305-x86_64-cryptogams.pl +++ b/arch/x86/crypto/poly1305-x86_64-cryptogams.pl @@ -1514,10 +1514,6 @@ ___ if ($avx>1) { -if ($kernel) { - $code .= "#ifdef CONFIG_AS_AVX2\n"; -} - my ($H0,$H1,$H2,$H3,$H4, $MASK, $T4,$T0,$T1,$T2,$T3, $D0,$D1,$D2,$D3,$D4) = map("%ymm$_",(0..15)); my $S4=$MASK; @@ -2808,10 +2804,6 @@ ___ poly1305_blocks_avxN(0); &end_function("poly1305_blocks_avx2"); -if($kernel) { - $code .= "#endif\n"; -} - ####################################################################### if ($avx>2) { # On entry we have input length divisible by 64. 
But since inner loop diff --git a/arch/x86/crypto/poly1305_glue.c b/arch/x86/crypto/poly1305_glue.c index 4a6226e1d15e..6dfec19f7d57 100644 --- a/arch/x86/crypto/poly1305_glue.c +++ b/arch/x86/crypto/poly1305_glue.c @@ -108,7 +108,7 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len, kernel_fpu_begin(); if (IS_ENABLED(CONFIG_AS_AVX512) && static_branch_likely(&poly1305_use_avx512)) poly1305_blocks_avx512(ctx, inp, bytes, padbit); - else if (IS_ENABLED(CONFIG_AS_AVX2) && static_branch_likely(&poly1305_use_avx2)) + else if (static_branch_likely(&poly1305_use_avx2)) poly1305_blocks_avx2(ctx, inp, bytes, padbit); else poly1305_blocks_avx(ctx, inp, bytes, padbit); @@ -264,8 +264,7 @@ static int __init poly1305_simd_mod_init(void) if (boot_cpu_has(X86_FEATURE_AVX) && cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) static_branch_enable(&poly1305_use_avx); - if (IS_ENABLED(CONFIG_AS_AVX2) && boot_cpu_has(X86_FEATURE_AVX) && - boot_cpu_has(X86_FEATURE_AVX2) && + if (boot_cpu_has(X86_FEATURE_AVX) && boot_cpu_has(X86_FEATURE_AVX2) && cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) static_branch_enable(&poly1305_use_avx2); if (IS_ENABLED(CONFIG_AS_AVX512) && boot_cpu_has(X86_FEATURE_AVX) && diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c index 275b65dd30c9..a801ffc10cbb 100644 --- a/arch/x86/crypto/sha1_ssse3_glue.c +++ b/arch/x86/crypto/sha1_ssse3_glue.c @@ -174,7 +174,6 @@ static void unregister_sha1_avx(void) crypto_unregister_shash(&sha1_avx_alg); } -#if defined(CONFIG_AS_AVX2) #define SHA1_AVX2_BLOCK_OPTSIZE 4 /* optimal 4*64 bytes of SHA1 blocks */ asmlinkage void sha1_transform_avx2(struct sha1_state *state, @@ -246,11 +245,6 @@ static void unregister_sha1_avx2(void) crypto_unregister_shash(&sha1_avx2_alg); } -#else -static inline int register_sha1_avx2(void) { return 0; } -static inline void unregister_sha1_avx2(void) { } -#endif - #ifdef CONFIG_AS_SHA1_NI asmlinkage void sha1_ni_transform(struct sha1_state *digest, const u8 *data, int rounds); diff --git a/arch/x86/crypto/sha256-avx2-asm.S b/arch/x86/crypto/sha256-avx2-asm.S index 499d9ec129de..11ff60c29c8b 100644 --- a/arch/x86/crypto/sha256-avx2-asm.S +++ b/arch/x86/crypto/sha256-avx2-asm.S @@ -48,7 +48,6 @@ # This code schedules 2 blocks at a time, with 4 lanes per block ######################################################################## -#ifdef CONFIG_AS_AVX2 #include ## assume buffers not aligned @@ -767,5 +766,3 @@ _SHUF_00BA: .align 32 _SHUF_DC00: .octa 0x0b0a090803020100FFFFFFFFFFFFFFFF,0x0b0a090803020100FFFFFFFFFFFFFFFF - -#endif diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c index 8bdc3be31f64..6394b5fe8db6 100644 --- a/arch/x86/crypto/sha256_ssse3_glue.c +++ b/arch/x86/crypto/sha256_ssse3_glue.c @@ -220,7 +220,6 @@ static void unregister_sha256_avx(void) ARRAY_SIZE(sha256_avx_algs)); } -#if defined(CONFIG_AS_AVX2) asmlinkage void sha256_transform_rorx(struct sha256_state *state, const u8 *data, int blocks); @@ -295,11 +294,6 @@ static void unregister_sha256_avx2(void) ARRAY_SIZE(sha256_avx2_algs)); } -#else -static inline int register_sha256_avx2(void) { return 0; } -static inline void unregister_sha256_avx2(void) { } -#endif - #ifdef CONFIG_AS_SHA256_NI asmlinkage void sha256_ni_transform(struct sha256_state *digest, const u8 *data, int rounds); diff --git a/arch/x86/crypto/sha512-avx2-asm.S b/arch/x86/crypto/sha512-avx2-asm.S index 3dd886b14e7d..3a44bdcfd583 100644 --- 
a/arch/x86/crypto/sha512-avx2-asm.S +++ b/arch/x86/crypto/sha512-avx2-asm.S @@ -49,7 +49,6 @@ # This code schedules 1 blocks at a time, with 4 lanes per block ######################################################################## -#ifdef CONFIG_AS_AVX2 #include .text @@ -749,5 +748,3 @@ PSHUFFLE_BYTE_FLIP_MASK: MASK_YMM_LO: .octa 0x00000000000000000000000000000000 .octa 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF - -#endif diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c index 75214982a633..82cc1b3ced1d 100644 --- a/arch/x86/crypto/sha512_ssse3_glue.c +++ b/arch/x86/crypto/sha512_ssse3_glue.c @@ -218,7 +218,6 @@ static void unregister_sha512_avx(void) ARRAY_SIZE(sha512_avx_algs)); } -#if defined(CONFIG_AS_AVX2) asmlinkage void sha512_transform_rorx(struct sha512_state *state, const u8 *data, int blocks); @@ -293,10 +292,6 @@ static void unregister_sha512_avx2(void) crypto_unregister_shashes(sha512_avx2_algs, ARRAY_SIZE(sha512_avx2_algs)); } -#else -static inline int register_sha512_avx2(void) { return 0; } -static inline void unregister_sha512_avx2(void) { } -#endif static int __init sha512_ssse3_mod_init(void) { diff --git a/crypto/Kconfig b/crypto/Kconfig index 49aae167e75c..c24a47406f8f 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -267,7 +267,7 @@ config CRYPTO_CURVE25519 config CRYPTO_CURVE25519_X86 tristate "x86_64 accelerated Curve25519 scalar multiplication library" - depends on X86 && 64BIT && AS_ADX + depends on X86 && 64BIT select CRYPTO_LIB_CURVE25519_GENERIC select CRYPTO_ARCH_HAVE_LIB_CURVE25519 @@ -465,7 +465,7 @@ config CRYPTO_NHPOLY1305_SSE2 config CRYPTO_NHPOLY1305_AVX2 tristate "NHPoly1305 hash function (x86_64 AVX2 implementation)" - depends on X86 && 64BIT && AS_AVX2 + depends on X86 && 64BIT select CRYPTO_NHPOLY1305 help AVX2 optimized implementation of the hash function used by the @@ -1303,7 +1303,7 @@ config CRYPTO_CAMELLIA_AESNI_AVX_X86_64 config CRYPTO_CAMELLIA_AESNI_AVX2_X86_64 tristate "Camellia cipher algorithm (x86_64/AES-NI/AVX2)" - depends on X86 && 64BIT && AS_AVX2 + depends on X86 && 64BIT depends on CRYPTO select CRYPTO_CAMELLIA_AESNI_AVX_X86_64 help @@ -1573,7 +1573,7 @@ config CRYPTO_SERPENT_AVX_X86_64 config CRYPTO_SERPENT_AVX2_X86_64 tristate "Serpent cipher algorithm (x86_64/AVX2)" - depends on X86 && 64BIT && AS_AVX2 + depends on X86 && 64BIT select CRYPTO_SERPENT_AVX_X86_64 help Serpent cipher algorithm, by Anderson, Biham & Knudsen. 
diff --git a/lib/raid6/algos.c b/lib/raid6/algos.c index b5a02326cfb7..2dc010be793e 100644 --- a/lib/raid6/algos.c +++ b/lib/raid6/algos.c @@ -34,10 +34,8 @@ const struct raid6_calls * const raid6_algos[] = { &raid6_avx512x2, &raid6_avx512x1, #endif -#ifdef CONFIG_AS_AVX2 &raid6_avx2x2, &raid6_avx2x1, -#endif &raid6_sse2x2, &raid6_sse2x1, &raid6_sse1x2, @@ -51,11 +49,9 @@ const struct raid6_calls * const raid6_algos[] = { &raid6_avx512x2, &raid6_avx512x1, #endif -#ifdef CONFIG_AS_AVX2 &raid6_avx2x4, &raid6_avx2x2, &raid6_avx2x1, -#endif &raid6_sse2x4, &raid6_sse2x2, &raid6_sse2x1, @@ -101,9 +97,7 @@ const struct raid6_recov_calls *const raid6_recov_algos[] = { #ifdef CONFIG_AS_AVX512 &raid6_recov_avx512, #endif -#ifdef CONFIG_AS_AVX2 &raid6_recov_avx2, -#endif &raid6_recov_ssse3, #endif #ifdef CONFIG_S390 diff --git a/lib/raid6/avx2.c b/lib/raid6/avx2.c index 87184b6da28a..f299476e1d76 100644 --- a/lib/raid6/avx2.c +++ b/lib/raid6/avx2.c @@ -13,8 +13,6 @@ * */ -#ifdef CONFIG_AS_AVX2 - #include #include "x86.h" @@ -470,5 +468,3 @@ const struct raid6_calls raid6_avx2x4 = { 1 /* Has cache hints */ }; #endif - -#endif /* CONFIG_AS_AVX2 */ diff --git a/lib/raid6/recov_avx2.c b/lib/raid6/recov_avx2.c index 7a3b5e7f66ee..4e8095403ee2 100644 --- a/lib/raid6/recov_avx2.c +++ b/lib/raid6/recov_avx2.c @@ -4,8 +4,6 @@ * Author: Jim Kukunas */ -#ifdef CONFIG_AS_AVX2 - #include #include "x86.h" @@ -313,7 +311,3 @@ const struct raid6_recov_calls raid6_recov_avx2 = { #endif .priority = 2, }; - -#else -#warning "your version of binutils lacks AVX2 support" -#endif diff --git a/lib/raid6/test/Makefile b/lib/raid6/test/Makefile index 60021319ac78..a4c7cd74cff5 100644 --- a/lib/raid6/test/Makefile +++ b/lib/raid6/test/Makefile @@ -35,9 +35,6 @@ endif ifeq ($(IS_X86),yes) OBJS += mmx.o sse1.o sse2.o avx2.o recov_ssse3.o recov_avx2.o avx512.o recov_avx512.o CFLAGS += -DCONFIG_X86 - CFLAGS += $(shell echo "vpbroadcastb %xmm0, %ymm1" | \ - gcc -c -x assembler - >/dev/null 2>&1 && \ - rm ./-.o && echo -DCONFIG_AS_AVX2=1) CFLAGS += $(shell echo "vpmovm2b %k1, %zmm5" | \ gcc -c -x assembler - >/dev/null 2>&1 && \ rm ./-.o && echo -DCONFIG_AS_AVX512=1)
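The C side of the series follows the same logic: with CONFIG_AS_AVX2 and
CONFIG_AS_ADX now guaranteed by the raised binutils baseline, the
compile-time #ifdef/IS_ENABLED() guards around AVX2 code paths carry no
information and only the runtime CPU-feature checks remain, while AVX-512
keeps both its build-time and runtime gates. Below is a minimal,
self-contained sketch of that dispatch shape; the blocks_*() and use_*
names are placeholders, not code from this patch.

/*
 * Illustrative sketch only; blocks_*() and use_* are placeholders.
 * Assembler support for AVX2 is now unconditional, so only the CPU
 * check gates it at runtime; AVX-512 keeps its IS_ENABLED() gate.
 */
#include <linux/module.h>
#include <linux/jump_label.h>
#include <linux/linkage.h>
#include <linux/types.h>
#include <asm/cpufeature.h>
#include <asm/fpu/api.h>

static DEFINE_STATIC_KEY_FALSE(use_avx2);
static DEFINE_STATIC_KEY_FALSE(use_avx512);

asmlinkage void blocks_sse(void *ctx, const u8 *in, unsigned int len);
asmlinkage void blocks_avx2(void *ctx, const u8 *in, unsigned int len);
asmlinkage void blocks_avx512(void *ctx, const u8 *in, unsigned int len);

static void blocks_simd(void *ctx, const u8 *in, unsigned int len)
{
        kernel_fpu_begin();
        if (IS_ENABLED(CONFIG_AS_AVX512) && static_branch_likely(&use_avx512))
                blocks_avx512(ctx, in, len);    /* still build-time gated */
        else if (static_branch_likely(&use_avx2))
                blocks_avx2(ctx, in, len);      /* no IS_ENABLED(CONFIG_AS_AVX2) left */
        else
                blocks_sse(ctx, in, len);
        kernel_fpu_end();
}

static int __init simd_mod_init(void)
{
        if (boot_cpu_has(X86_FEATURE_AVX2))
                static_branch_enable(&use_avx2);
        if (IS_ENABLED(CONFIG_AS_AVX512) && boot_cpu_has(X86_FEATURE_AVX512F))
                static_branch_enable(&use_avx512);
        return 0;
}
module_init(simd_mod_init);
MODULE_LICENSE("GPL");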