From patchwork Tue Sep 15 09:46:19 2020
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 253038
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, Ard Biesheuvel, Nick Desaulniers,
 Stefan Agner, Peter Smith
Subject: [PATCH] crypto: arm/sha256-neon - avoid ADRL pseudo instruction
Date: Tue, 15 Sep 2020 12:46:19 +0300
Message-Id: <20200915094619.32548-1-ardb@kernel.org>
X-Mailing-List: linux-crypto@vger.kernel.org

The ADRL pseudo instruction is not an architectural construct, but a
convenience macro that was supported by the proprietary ARM assembler
and adopted by binutils GAS as well, but only when assembling in 32-bit
ARM mode. It can therefore only be used in assembler code that is known
to be assembled in ARM mode only. As it turns out, the Clang assembler
does not implement ADRL at all, so it is better to get rid of it
entirely.

So replace the ADRL instruction with an ADR instruction that refers to
a nearer symbol, and apply the delta explicitly using an additional
instruction.

Cc: Nick Desaulniers
Cc: Stefan Agner
Cc: Peter Smith
Signed-off-by: Ard Biesheuvel
---
I will leave it to the Clang folks to decide whether this needs to be
backported and how far, but a Cc to stable seems reasonable here.
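For reviewers less familiar with the idiom, here is a minimal standalone
sketch of the before/after pattern (not part of the patch; the labels
func, .Lanchor and Ktable, and the armv7-a target, are invented for
illustration). The only requirements are that the anchor label is within
ADR's limited immediate range and that the delta is an assembly-time
constant the assembler can encode:

	.syntax	unified
	.arch	armv7-a
	.text

	.align	5
func:
.Lanchor:			@ anchor at a known distance from Ktable
	@ Old, ARM-mode-only idiom; Clang's integrated assembler
	@ rejects it:
	@	adrl	r3, Ktable
	@ Portable replacement: ADR to a label within reach, then
	@ apply the assembly-time constant delta with a plain ADD:
	adr	r3, .Lanchor
	add	r3, r3, #Ktable-.Lanchor
	bx	lr

	.align	5
Ktable:				@ constant table used by func
	.word	0x428a2f98, 0x71374491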
 arch/arm/crypto/sha256-armv4.pl       | 4 ++--
 arch/arm/crypto/sha256-core.S_shipped | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm/crypto/sha256-armv4.pl b/arch/arm/crypto/sha256-armv4.pl
index 9f96ff48e4a8..8aeb2e82f915 100644
--- a/arch/arm/crypto/sha256-armv4.pl
+++ b/arch/arm/crypto/sha256-armv4.pl
@@ -175,7 +175,6 @@ $code=<<___;
 #else
 .syntax unified
 # ifdef __thumb2__
-# define adrl adr
 .thumb
 # else
 .code	32
@@ -471,7 +470,8 @@ sha256_block_data_order_neon:
 	stmdb	sp!,{r4-r12,lr}
 
 	sub	$H,sp,#16*4+16
-	adrl	$Ktbl,K256
+	adr	$Ktbl,.Lsha256_block_data_order
+	add	$Ktbl,$Ktbl,#K256-.Lsha256_block_data_order
 	bic	$H,$H,#15		@ align for 128-bit stores
 	mov	$t2,sp
 	mov	sp,$H		@ alloca
diff --git a/arch/arm/crypto/sha256-core.S_shipped b/arch/arm/crypto/sha256-core.S_shipped
index ea04b2ab0c33..1861c4e8a5ba 100644
--- a/arch/arm/crypto/sha256-core.S_shipped
+++ b/arch/arm/crypto/sha256-core.S_shipped
@@ -56,7 +56,6 @@
 #else
 .syntax unified
 # ifdef __thumb2__
-# define adrl adr
 .thumb
 # else
 .code	32
@@ -1885,7 +1884,8 @@ sha256_block_data_order_neon:
 	stmdb	sp!,{r4-r12,lr}
 
 	sub	r11,sp,#16*4+16
-	adrl	r14,K256
+	adr	r14,.Lsha256_block_data_order
+	add	r14,r14,#K256-.Lsha256_block_data_order
 	bic	r11,r11,#15		@ align for 128-bit stores
 	mov	r12,sp
 	mov	sp,r11		@ alloca