From patchwork Wed Jun 12 12:48:19 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 166548
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel
Subject: [RFC PATCH 01/20] crypto: arm/aes-ce - cosmetic/whitespace cleanup
Date: Wed, 12 Jun 2019 14:48:19 +0200
Message-Id: <20190612124838.2492-2-ard.biesheuvel@linaro.org>
In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org>
References: <20190612124838.2492-1-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

Rearrange the aes_algs[] array for legibility.
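The change is purely one of initializer style; both forms below are standard C99
designated initializers and produce identical objects. A minimal standalone sketch
with hypothetical struct types (not the kernel's crypto structs), nested style
first, then the flattened style this patch switches to:

struct base { const char *name; int priority; };
struct alg  { struct base base; int keysize; };

/* Before: the embedded struct gets its own nested initializer block. */
struct alg old_style = {
	.base = {
		.name     = "example",
		.priority = 300,
	},
	.keysize = 16,
};

/* After: flat member designators keep each entry to one line per field. */
struct alg new_style = {
	.base.name     = "example",
	.base.priority = 300,

	.keysize = 16,
};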
Signed-off-by: Ard Biesheuvel --- arch/arm/crypto/aes-ce-glue.c | 116 ++++++++++---------- 1 file changed, 56 insertions(+), 60 deletions(-) -- 2.20.1 diff --git a/arch/arm/crypto/aes-ce-glue.c b/arch/arm/crypto/aes-ce-glue.c index 5affb8482379..04ba66903674 100644 --- a/arch/arm/crypto/aes-ce-glue.c +++ b/arch/arm/crypto/aes-ce-glue.c @@ -337,69 +337,65 @@ static int xts_decrypt(struct skcipher_request *req) } static struct skcipher_alg aes_algs[] = { { - .base = { - .cra_name = "__ecb(aes)", - .cra_driver_name = "__ecb-aes-ce", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_INTERNAL, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct crypto_aes_ctx), - .cra_module = THIS_MODULE, - }, - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = ce_aes_setkey, - .encrypt = ecb_encrypt, - .decrypt = ecb_decrypt, + .base.cra_name = "__ecb(aes)", + .base.cra_driver_name = "__ecb-aes-ce", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_INTERNAL, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct crypto_aes_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = ce_aes_setkey, + .encrypt = ecb_encrypt, + .decrypt = ecb_decrypt, }, { - .base = { - .cra_name = "__cbc(aes)", - .cra_driver_name = "__cbc-aes-ce", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_INTERNAL, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct crypto_aes_ctx), - .cra_module = THIS_MODULE, - }, - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = ce_aes_setkey, - .encrypt = cbc_encrypt, - .decrypt = cbc_decrypt, + .base.cra_name = "__cbc(aes)", + .base.cra_driver_name = "__cbc-aes-ce", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_INTERNAL, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct crypto_aes_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = ce_aes_setkey, + .encrypt = cbc_encrypt, + .decrypt = cbc_decrypt, }, { - .base = { - .cra_name = "__ctr(aes)", - .cra_driver_name = "__ctr-aes-ce", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_INTERNAL, - .cra_blocksize = 1, - .cra_ctxsize = sizeof(struct crypto_aes_ctx), - .cra_module = THIS_MODULE, - }, - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .chunksize = AES_BLOCK_SIZE, - .setkey = ce_aes_setkey, - .encrypt = ctr_encrypt, - .decrypt = ctr_encrypt, + .base.cra_name = "__ctr(aes)", + .base.cra_driver_name = "__ctr-aes-ce", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_INTERNAL, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct crypto_aes_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .chunksize = AES_BLOCK_SIZE, + .setkey = ce_aes_setkey, + .encrypt = ctr_encrypt, + .decrypt = ctr_encrypt, }, { - .base = { - .cra_name = "__xts(aes)", - .cra_driver_name = "__xts-aes-ce", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_INTERNAL, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct crypto_aes_xts_ctx), - .cra_module = THIS_MODULE, - }, - .min_keysize = 2 * AES_MIN_KEY_SIZE, - .max_keysize = 2 * AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = xts_set_key, - .encrypt = xts_encrypt, - .decrypt = xts_decrypt, + .base.cra_name = 
"__xts(aes)", + .base.cra_driver_name = "__xts-aes-ce", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_INTERNAL, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct crypto_aes_xts_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = 2 * AES_MIN_KEY_SIZE, + .max_keysize = 2 * AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = xts_set_key, + .encrypt = xts_encrypt, + .decrypt = xts_decrypt, } }; static struct simd_skcipher_alg *aes_simd_algs[ARRAY_SIZE(aes_algs)]; From patchwork Wed Jun 12 12:48:20 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166549 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3640604ilk; Wed, 12 Jun 2019 05:48:52 -0700 (PDT) X-Google-Smtp-Source: APXvYqybbwYp+mI1KVBWsTqbWU1DjelqHveOHmYW9AcmsbAMLPPqDj3AiuFGrn6nw8Y4OCjjCdwA X-Received: by 2002:a17:902:21:: with SMTP id 30mr80511778pla.302.1560343732921; Wed, 12 Jun 2019 05:48:52 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343732; cv=none; d=google.com; s=arc-20160816; b=nxWPQ/7PZNcX+bAhnUuQsgG/vbV1wb6bVqhlUviqZEYSLO16x8GNkyFoFyWknMCzq2 VkTEhZBL2xZGc2xPhwgX//mZksgW3JpxZVJ6FzThHu8BQOiOnoitMwDVk6MlMmLvrmOK DCJhov23VtBYEtiYwlvJ53WqjSKok+sF1Y6WNi7rrGWYrMxZj6tGcjLUKJGQcK0WgOLc c/V/fCMom+5kpmaYD7OFg49JBpKFO8LTAynj+C9f2u1atDB30lAqrs22cMVFCTld+mLl pHfQ+7EewFnR46cpmPXCDfPfpGJchJn5R+bpanN6TBy+ExGVHo1w0igCtZw7hCTQ3bxJ +gHA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=kh+1icf/YKEvTTRJagBGMOB+f2u4C9zDf6SaxozYbCI=; b=KwAR/vEkRX+dn/t3FTCGvUrlCoubP5/uUKQn1U2odrhu6p5wNcoJA6b5lAUu2vhbz9 YEholYgZumDdlWWyOimuCsUL3vbsQAW6x9GpFiEZExGteogGp7yOnFg5nyZxpX76A37Q SV2aI4s6v4IRdiwiFXiBIu+W66ft7/UYWjuWv1nBv8vXUDR9rqBuAwN5RadPDkGmu0xZ tIlWMuOjRGvIrI5oBsCMWi/+PDS5ozCxgneP3hvoiuOF0lYwmGnrJvF2CFXH/C4IbKP2 LEbsRwp4DwTtBAyv8eDt/1nx9O+QP4tC0OqPMi+LzKF4nIf1R1fmLomHJwTRzAZKiEJU 4nNg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=bm10Ep11; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.48.52; Wed, 12 Jun 2019 05:48:52 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=bm10Ep11; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2409156AbfFLMsw (ORCPT + 3 others); Wed, 12 Jun 2019 08:48:52 -0400 Received: from mail-wr1-f67.google.com ([209.85.221.67]:42808 "EHLO mail-wr1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2406812AbfFLMsv (ORCPT ); Wed, 12 Jun 2019 08:48:51 -0400 Received: by mail-wr1-f67.google.com with SMTP id x17so1484707wrl.9 for ; Wed, 12 Jun 2019 05:48:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=kh+1icf/YKEvTTRJagBGMOB+f2u4C9zDf6SaxozYbCI=; b=bm10Ep11t57nCbm56g4vpVdy+Yaj8iHbmgP0FQq92PpqW6tSTKYLwLb9Xfe4PTu7c/ 5WDx8zgtCB2XRC8FbCA+JkKv/3S6EdFvXpC9hRWEwXMrxLmaFh//p5Nicq87n4EsPMNd k95tlbbrkmsBlVWG+v9r/soXrIgwaCjpTDz7066MHcjFV2FGoiAN5FvwSk3S4Q2o3vVD M4D/AYJg36iCrCnxCRYEkv8BB/xsIMEvNaWrWEXbSxT2UQjpBs/YvbzxJoHemS5LTDwL yK4MLTkbFAfzojdtDVwGHH/+LFeLkVbScOB6nefOJ+KMOCQ0yOkkCNoZl88Sy/SO2IvV 8nmQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=kh+1icf/YKEvTTRJagBGMOB+f2u4C9zDf6SaxozYbCI=; b=QnpCEj9Bvm1Lgwaq92ovYpGSu8aQvRqVFzbmxVCVv9IPfXM1S4eD0GPamPM9e6Hg5D hivy/UL/0IMmC/MVE4vpuUHoeDlzlH6bHMlrtWUxYu99u7odkN3zIEIMZYPigbtk5buZ 9SqZeRLI7IAHwCebrtHUlOJtIrbaktJhRKrRpSTC8GYUKV5lClKkAeVVfU4+EIuvtdcU cfKwNQkRixvajNPCarosAng9mXiD1W2Kaj5nVtUxwhEKtPfTiP5ZWBasxXs0jbZvWQsB RIg/NASvgutYKaEJXv0CrA3suY1RIYTY6+ZYByu84f1lvN67gVfHb0P9vFfuSiO6p71K hu8w== X-Gm-Message-State: APjAAAVes539iGHQ1RBma74CwHIP5XGRiF7ol0TryvtKcNAbYgpXjDPc DLR5Vw57oik07NC9ipRsHdAcYIWoxpn0rg== X-Received: by 2002:adf:e6ca:: with SMTP id y10mr40142759wrm.3.1560343729058; Wed, 12 Jun 2019 05:48:49 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.48 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:48:48 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 02/20] crypto: arm/aes - rename local routines to prevent future clashes Date: Wed, 12 Jun 2019 14:48:20 +0200 Message-Id: <20190612124838.2492-3-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Rename some local AES encrypt/decrypt routines so they don't clash with the names we are about to introduce for the routines exposes by the generic AES library. 
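The clash in question is the ordinary C rule that one translation unit cannot
declare the same name twice with different signatures (nor redeclare an external
symbol as static). A minimal sketch with hypothetical stand-in types, purely to
illustrate why the local helpers get an arch prefix:

typedef unsigned char u8;

struct lib_ctx { int rounds; };	/* stand-in for struct crypto_aes_ctx */
struct tfm { void *priv; };	/* stand-in for struct crypto_tfm */

/* Prototype that a shared header starts providing later in this series. */
void aes_encrypt(const struct lib_ctx *ctx, u8 *out, const u8 *in);

/*
 * A file-local helper cannot reuse that name with a different signature in
 * the same translation unit, so it carries a prefix instead.
 */
static void aes_arm_encrypt_demo(struct tfm *tfm, u8 *out, const u8 *in)
{
	(void)tfm;
	out[0] = in[0];		/* placeholder body */
}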
Signed-off-by: Ard Biesheuvel --- arch/arm/crypto/aes-cipher-glue.c | 8 ++++---- arch/arm64/crypto/aes-cipher-glue.c | 8 ++++---- crypto/aes_generic.c | 8 ++++---- 3 files changed, 12 insertions(+), 12 deletions(-) -- 2.20.1 diff --git a/arch/arm/crypto/aes-cipher-glue.c b/arch/arm/crypto/aes-cipher-glue.c index c222f6e072ad..f6c07867b8ff 100644 --- a/arch/arm/crypto/aes-cipher-glue.c +++ b/arch/arm/crypto/aes-cipher-glue.c @@ -19,7 +19,7 @@ EXPORT_SYMBOL(__aes_arm_encrypt); asmlinkage void __aes_arm_decrypt(u32 *rk, int rounds, const u8 *in, u8 *out); EXPORT_SYMBOL(__aes_arm_decrypt); -static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) +static void aes_arm_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) { struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); int rounds = 6 + ctx->key_length / 4; @@ -27,7 +27,7 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) __aes_arm_encrypt(ctx->key_enc, rounds, in, out); } -static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) +static void aes_arm_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) { struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); int rounds = 6 + ctx->key_length / 4; @@ -47,8 +47,8 @@ static struct crypto_alg aes_alg = { .cra_cipher.cia_min_keysize = AES_MIN_KEY_SIZE, .cra_cipher.cia_max_keysize = AES_MAX_KEY_SIZE, .cra_cipher.cia_setkey = crypto_aes_set_key, - .cra_cipher.cia_encrypt = aes_encrypt, - .cra_cipher.cia_decrypt = aes_decrypt, + .cra_cipher.cia_encrypt = aes_arm_encrypt, + .cra_cipher.cia_decrypt = aes_arm_decrypt, #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS .cra_alignmask = 3, diff --git a/arch/arm64/crypto/aes-cipher-glue.c b/arch/arm64/crypto/aes-cipher-glue.c index 7288e7cbebff..0e90b06ebcec 100644 --- a/arch/arm64/crypto/aes-cipher-glue.c +++ b/arch/arm64/crypto/aes-cipher-glue.c @@ -18,7 +18,7 @@ EXPORT_SYMBOL(__aes_arm64_encrypt); asmlinkage void __aes_arm64_decrypt(u32 *rk, u8 *out, const u8 *in, int rounds); EXPORT_SYMBOL(__aes_arm64_decrypt); -static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) +static void aes_arm64_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) { struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); int rounds = 6 + ctx->key_length / 4; @@ -26,7 +26,7 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) __aes_arm64_encrypt(ctx->key_enc, out, in, rounds); } -static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) +static void aes_arm64_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) { struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); int rounds = 6 + ctx->key_length / 4; @@ -46,8 +46,8 @@ static struct crypto_alg aes_alg = { .cra_cipher.cia_min_keysize = AES_MIN_KEY_SIZE, .cra_cipher.cia_max_keysize = AES_MAX_KEY_SIZE, .cra_cipher.cia_setkey = crypto_aes_set_key, - .cra_cipher.cia_encrypt = aes_encrypt, - .cra_cipher.cia_decrypt = aes_decrypt + .cra_cipher.cia_encrypt = aes_arm64_encrypt, + .cra_cipher.cia_decrypt = aes_arm64_decrypt }; static int __init aes_init(void) diff --git a/crypto/aes_generic.c b/crypto/aes_generic.c index f217568917e4..3aa4a715c216 100644 --- a/crypto/aes_generic.c +++ b/crypto/aes_generic.c @@ -1332,7 +1332,7 @@ EXPORT_SYMBOL_GPL(crypto_aes_set_key); f_rl(bo, bi, 3, k); \ } while (0) -static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) +static void crypto_aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) { const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); u32 b0[4], b1[4]; @@ -1402,7 +1402,7 @@ 
static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 		i_rl(bo, bi, 3, k);	\
 } while (0)
 
-static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void crypto_aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	u32 b0[4], b1[4];
@@ -1454,8 +1454,8 @@ static struct crypto_alg aes_alg = {
 			.cia_min_keysize	= AES_MIN_KEY_SIZE,
 			.cia_max_keysize	= AES_MAX_KEY_SIZE,
 			.cia_setkey		= crypto_aes_set_key,
-			.cia_encrypt		= aes_encrypt,
-			.cia_decrypt		= aes_decrypt
+			.cia_encrypt		= crypto_aes_encrypt,
+			.cia_decrypt		= crypto_aes_decrypt
 		}
 	}
 };

From patchwork Wed Jun 12 12:48:21 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 166550
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel
Subject: [RFC PATCH 03/20] crypto: aes/fixed-time - align key schedule with other implementations
Date: Wed, 12 Jun 2019 14:48:21 +0200
Message-Id: <20190612124838.2492-4-ard.biesheuvel@linaro.org>
In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org>
References: <20190612124838.2492-1-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

The fixed time AES code mangles the key schedule so that xoring the first
round key with values at fixed offsets across the Sbox produces the correct
value.
This primes the D-cache with the entire Sbox before any data dependent lookups are done, making it more difficult to infer key bits from timing variances when the plaintext is known. The downside of this approach is that it renders the key schedule incompatible with other implementations of AES in the kernel, which makes it cumbersome to use this implementation as a fallback for SIMD based AES in contexts where this is not allowed. So let's tweak the fixed Sbox indexes so that they add up to zero under the xor operation. While at it, increase the granularity to 16 bytes so we cover the entire Sbox even on systems with 16 byte cachelines. Signed-off-by: Ard Biesheuvel --- crypto/aes_ti.c | 52 ++++++++------------ 1 file changed, 21 insertions(+), 31 deletions(-) -- 2.20.1 diff --git a/crypto/aes_ti.c b/crypto/aes_ti.c index 1ff9785b30f5..fd70dc322634 100644 --- a/crypto/aes_ti.c +++ b/crypto/aes_ti.c @@ -237,30 +237,8 @@ static int aesti_set_key(struct crypto_tfm *tfm, const u8 *in_key, unsigned int key_len) { struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); - int err; - err = aesti_expand_key(ctx, in_key, key_len); - if (err) - return err; - - /* - * In order to force the compiler to emit data independent Sbox lookups - * at the start of each block, xor the first round key with values at - * fixed indexes in the Sbox. This will need to be repeated each time - * the key is used, which will pull the entire Sbox into the D-cache - * before any data dependent Sbox lookups are performed. - */ - ctx->key_enc[0] ^= __aesti_sbox[ 0] ^ __aesti_sbox[128]; - ctx->key_enc[1] ^= __aesti_sbox[32] ^ __aesti_sbox[160]; - ctx->key_enc[2] ^= __aesti_sbox[64] ^ __aesti_sbox[192]; - ctx->key_enc[3] ^= __aesti_sbox[96] ^ __aesti_sbox[224]; - - ctx->key_dec[0] ^= __aesti_inv_sbox[ 0] ^ __aesti_inv_sbox[128]; - ctx->key_dec[1] ^= __aesti_inv_sbox[32] ^ __aesti_inv_sbox[160]; - ctx->key_dec[2] ^= __aesti_inv_sbox[64] ^ __aesti_inv_sbox[192]; - ctx->key_dec[3] ^= __aesti_inv_sbox[96] ^ __aesti_inv_sbox[224]; - - return 0; + return aesti_expand_key(ctx, in_key, key_len); } static void aesti_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) @@ -283,10 +261,16 @@ static void aesti_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) */ local_irq_save(flags); - st0[0] ^= __aesti_sbox[ 0] ^ __aesti_sbox[128]; - st0[1] ^= __aesti_sbox[32] ^ __aesti_sbox[160]; - st0[2] ^= __aesti_sbox[64] ^ __aesti_sbox[192]; - st0[3] ^= __aesti_sbox[96] ^ __aesti_sbox[224]; + /* + * Force the compiler to emit data independent Sbox references, + * by xoring the input with Sbox values that are known to add up + * to zero. This pulls the entire Sbox into the D-cache before any + * data dependent lookups are done. 
+	 */
+	st0[0] ^= __aesti_sbox[ 0] ^ __aesti_sbox[ 64] ^ __aesti_sbox[134] ^ __aesti_sbox[195];
+	st0[1] ^= __aesti_sbox[16] ^ __aesti_sbox[ 82] ^ __aesti_sbox[158] ^ __aesti_sbox[221];
+	st0[2] ^= __aesti_sbox[32] ^ __aesti_sbox[ 96] ^ __aesti_sbox[160] ^ __aesti_sbox[234];
+	st0[3] ^= __aesti_sbox[48] ^ __aesti_sbox[112] ^ __aesti_sbox[186] ^ __aesti_sbox[241];
 
 	for (round = 0;; round += 2, rkp += 8) {
 		st1[0] = mix_columns(subshift(st0, 0)) ^ rkp[0];
@@ -331,10 +315,16 @@ static void aesti_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	 */
 	local_irq_save(flags);
 
-	st0[0] ^= __aesti_inv_sbox[ 0] ^ __aesti_inv_sbox[128];
-	st0[1] ^= __aesti_inv_sbox[32] ^ __aesti_inv_sbox[160];
-	st0[2] ^= __aesti_inv_sbox[64] ^ __aesti_inv_sbox[192];
-	st0[3] ^= __aesti_inv_sbox[96] ^ __aesti_inv_sbox[224];
+	/*
+	 * Force the compiler to emit data independent Sbox references,
+	 * by xoring the input with Sbox values that are known to add up
+	 * to zero. This pulls the entire Sbox into the D-cache before any
+	 * data dependent lookups are done.
+	 */
+	st0[0] ^= __aesti_inv_sbox[ 0] ^ __aesti_inv_sbox[ 64] ^ __aesti_inv_sbox[129] ^ __aesti_inv_sbox[200];
+	st0[1] ^= __aesti_inv_sbox[16] ^ __aesti_inv_sbox[ 83] ^ __aesti_inv_sbox[150] ^ __aesti_inv_sbox[212];
+	st0[2] ^= __aesti_inv_sbox[32] ^ __aesti_inv_sbox[ 96] ^ __aesti_inv_sbox[160] ^ __aesti_inv_sbox[236];
+	st0[3] ^= __aesti_inv_sbox[48] ^ __aesti_inv_sbox[112] ^ __aesti_inv_sbox[187] ^ __aesti_inv_sbox[247];
 
 	for (round = 0;; round += 2, rkp += 8) {
 		st1[0] = inv_mix_columns(inv_subshift(st0, 0)) ^ rkp[0];

From patchwork Wed Jun 12 12:48:22 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 166553
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel
Subject: [RFC PATCH 04/20] crypto: aes - create AES library based on the fixed time AES code
Date: Wed, 12 Jun 2019 14:48:22 +0200
Message-Id: <20190612124838.2492-5-ard.biesheuvel@linaro.org>
In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org>
References: <20190612124838.2492-1-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

Take the existing small footprint and mostly time invariant C code and turn
it into an AES library that
can be used for non-performance critical, casual use of AES, and as a fallback for, e.g., SIMD code that needs a secondary path that can be taken in contexts where the SIMD unit is off limits (e.g., in hard interrupts taken from kernel context) Signed-off-by: Ard Biesheuvel --- crypto/Kconfig | 4 + crypto/aes_ti.c | 325 +---------------- include/crypto/aes.h | 34 ++ lib/crypto/Makefile | 3 + lib/crypto/aes.c | 368 ++++++++++++++++++++ 5 files changed, 413 insertions(+), 321 deletions(-) -- 2.20.1 diff --git a/crypto/Kconfig b/crypto/Kconfig index 5114b35ef3b4..dc6f93ef3ead 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1059,6 +1059,9 @@ config CRYPTO_GHASH_CLMUL_NI_INTEL comment "Ciphers" +config CRYPTO_LIB_AES + tristate + config CRYPTO_AES tristate "AES cipher algorithms" select CRYPTO_ALGAPI @@ -1082,6 +1085,7 @@ config CRYPTO_AES config CRYPTO_AES_TI tristate "Fixed time AES cipher" select CRYPTO_ALGAPI + select CRYPTO_LIB_AES help This is a generic implementation of AES that attempts to eliminate data dependent latencies as much as possible without affecting diff --git a/crypto/aes_ti.c b/crypto/aes_ti.c index fd70dc322634..30d73b587acc 100644 --- a/crypto/aes_ti.c +++ b/crypto/aes_ti.c @@ -1,352 +1,35 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Scalar fixed time AES core transform * * Copyright (C) 2017 Linaro Ltd - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. */ #include #include #include -#include - -/* - * Emit the sbox as volatile const to prevent the compiler from doing - * constant folding on sbox references involving fixed indexes. - */ -static volatile const u8 __cacheline_aligned __aesti_sbox[] = { - 0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5, - 0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76, - 0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0, - 0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4, 0x72, 0xc0, - 0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc, - 0x34, 0xa5, 0xe5, 0xf1, 0x71, 0xd8, 0x31, 0x15, - 0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a, - 0x07, 0x12, 0x80, 0xe2, 0xeb, 0x27, 0xb2, 0x75, - 0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0, - 0x52, 0x3b, 0xd6, 0xb3, 0x29, 0xe3, 0x2f, 0x84, - 0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b, - 0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf, - 0xd0, 0xef, 0xaa, 0xfb, 0x43, 0x4d, 0x33, 0x85, - 0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c, 0x9f, 0xa8, - 0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5, - 0xbc, 0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2, - 0xcd, 0x0c, 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17, - 0xc4, 0xa7, 0x7e, 0x3d, 0x64, 0x5d, 0x19, 0x73, - 0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88, - 0x46, 0xee, 0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb, - 0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c, - 0xc2, 0xd3, 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79, - 0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9, - 0x6c, 0x56, 0xf4, 0xea, 0x65, 0x7a, 0xae, 0x08, - 0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6, - 0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a, - 0x70, 0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e, - 0x61, 0x35, 0x57, 0xb9, 0x86, 0xc1, 0x1d, 0x9e, - 0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9, 0x8e, 0x94, - 0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf, - 0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68, - 0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16, -}; - -static volatile const u8 __cacheline_aligned __aesti_inv_sbox[] = { - 0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, - 0xbf, 0x40, 0xa3, 0x9e, 0x81, 
0xf3, 0xd7, 0xfb, - 0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87, - 0x34, 0x8e, 0x43, 0x44, 0xc4, 0xde, 0xe9, 0xcb, - 0x54, 0x7b, 0x94, 0x32, 0xa6, 0xc2, 0x23, 0x3d, - 0xee, 0x4c, 0x95, 0x0b, 0x42, 0xfa, 0xc3, 0x4e, - 0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24, 0xb2, - 0x76, 0x5b, 0xa2, 0x49, 0x6d, 0x8b, 0xd1, 0x25, - 0x72, 0xf8, 0xf6, 0x64, 0x86, 0x68, 0x98, 0x16, - 0xd4, 0xa4, 0x5c, 0xcc, 0x5d, 0x65, 0xb6, 0x92, - 0x6c, 0x70, 0x48, 0x50, 0xfd, 0xed, 0xb9, 0xda, - 0x5e, 0x15, 0x46, 0x57, 0xa7, 0x8d, 0x9d, 0x84, - 0x90, 0xd8, 0xab, 0x00, 0x8c, 0xbc, 0xd3, 0x0a, - 0xf7, 0xe4, 0x58, 0x05, 0xb8, 0xb3, 0x45, 0x06, - 0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02, - 0xc1, 0xaf, 0xbd, 0x03, 0x01, 0x13, 0x8a, 0x6b, - 0x3a, 0x91, 0x11, 0x41, 0x4f, 0x67, 0xdc, 0xea, - 0x97, 0xf2, 0xcf, 0xce, 0xf0, 0xb4, 0xe6, 0x73, - 0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85, - 0xe2, 0xf9, 0x37, 0xe8, 0x1c, 0x75, 0xdf, 0x6e, - 0x47, 0xf1, 0x1a, 0x71, 0x1d, 0x29, 0xc5, 0x89, - 0x6f, 0xb7, 0x62, 0x0e, 0xaa, 0x18, 0xbe, 0x1b, - 0xfc, 0x56, 0x3e, 0x4b, 0xc6, 0xd2, 0x79, 0x20, - 0x9a, 0xdb, 0xc0, 0xfe, 0x78, 0xcd, 0x5a, 0xf4, - 0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07, 0xc7, 0x31, - 0xb1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xec, 0x5f, - 0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d, - 0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef, - 0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0, - 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61, - 0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26, - 0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d, -}; - -static u32 mul_by_x(u32 w) -{ - u32 x = w & 0x7f7f7f7f; - u32 y = w & 0x80808080; - - /* multiply by polynomial 'x' (0b10) in GF(2^8) */ - return (x << 1) ^ (y >> 7) * 0x1b; -} - -static u32 mul_by_x2(u32 w) -{ - u32 x = w & 0x3f3f3f3f; - u32 y = w & 0x80808080; - u32 z = w & 0x40404040; - - /* multiply by polynomial 'x^2' (0b100) in GF(2^8) */ - return (x << 2) ^ (y >> 7) * 0x36 ^ (z >> 6) * 0x1b; -} - -static u32 mix_columns(u32 x) -{ - /* - * Perform the following matrix multiplication in GF(2^8) - * - * | 0x2 0x3 0x1 0x1 | | x[0] | - * | 0x1 0x2 0x3 0x1 | | x[1] | - * | 0x1 0x1 0x2 0x3 | x | x[2] | - * | 0x3 0x1 0x1 0x2 | | x[3] | - */ - u32 y = mul_by_x(x) ^ ror32(x, 16); - - return y ^ ror32(x ^ y, 8); -} - -static u32 inv_mix_columns(u32 x) -{ - /* - * Perform the following matrix multiplication in GF(2^8) - * - * | 0xe 0xb 0xd 0x9 | | x[0] | - * | 0x9 0xe 0xb 0xd | | x[1] | - * | 0xd 0x9 0xe 0xb | x | x[2] | - * | 0xb 0xd 0x9 0xe | | x[3] | - * - * which can conveniently be reduced to - * - * | 0x2 0x3 0x1 0x1 | | 0x5 0x0 0x4 0x0 | | x[0] | - * | 0x1 0x2 0x3 0x1 | | 0x0 0x5 0x0 0x4 | | x[1] | - * | 0x1 0x1 0x2 0x3 | x | 0x4 0x0 0x5 0x0 | x | x[2] | - * | 0x3 0x1 0x1 0x2 | | 0x0 0x4 0x0 0x5 | | x[3] | - */ - u32 y = mul_by_x2(x); - - return mix_columns(x ^ y ^ ror32(y, 16)); -} - -static __always_inline u32 subshift(u32 in[], int pos) -{ - return (__aesti_sbox[in[pos] & 0xff]) ^ - (__aesti_sbox[(in[(pos + 1) % 4] >> 8) & 0xff] << 8) ^ - (__aesti_sbox[(in[(pos + 2) % 4] >> 16) & 0xff] << 16) ^ - (__aesti_sbox[(in[(pos + 3) % 4] >> 24) & 0xff] << 24); -} - -static __always_inline u32 inv_subshift(u32 in[], int pos) -{ - return (__aesti_inv_sbox[in[pos] & 0xff]) ^ - (__aesti_inv_sbox[(in[(pos + 3) % 4] >> 8) & 0xff] << 8) ^ - (__aesti_inv_sbox[(in[(pos + 2) % 4] >> 16) & 0xff] << 16) ^ - (__aesti_inv_sbox[(in[(pos + 1) % 4] >> 24) & 0xff] << 24); -} -static u32 subw(u32 in) -{ - return (__aesti_sbox[in & 0xff]) ^ - (__aesti_sbox[(in >> 8) & 0xff] << 8) ^ - (__aesti_sbox[(in >> 16) & 0xff] 
<< 16) ^ - (__aesti_sbox[(in >> 24) & 0xff] << 24); -} - -static int aesti_expand_key(struct crypto_aes_ctx *ctx, const u8 *in_key, - unsigned int key_len) -{ - u32 kwords = key_len / sizeof(u32); - u32 rc, i, j; - - if (key_len != AES_KEYSIZE_128 && - key_len != AES_KEYSIZE_192 && - key_len != AES_KEYSIZE_256) - return -EINVAL; - - ctx->key_length = key_len; - - for (i = 0; i < kwords; i++) - ctx->key_enc[i] = get_unaligned_le32(in_key + i * sizeof(u32)); - - for (i = 0, rc = 1; i < 10; i++, rc = mul_by_x(rc)) { - u32 *rki = ctx->key_enc + (i * kwords); - u32 *rko = rki + kwords; - - rko[0] = ror32(subw(rki[kwords - 1]), 8) ^ rc ^ rki[0]; - rko[1] = rko[0] ^ rki[1]; - rko[2] = rko[1] ^ rki[2]; - rko[3] = rko[2] ^ rki[3]; - - if (key_len == 24) { - if (i >= 7) - break; - rko[4] = rko[3] ^ rki[4]; - rko[5] = rko[4] ^ rki[5]; - } else if (key_len == 32) { - if (i >= 6) - break; - rko[4] = subw(rko[3]) ^ rki[4]; - rko[5] = rko[4] ^ rki[5]; - rko[6] = rko[5] ^ rki[6]; - rko[7] = rko[6] ^ rki[7]; - } - } - - /* - * Generate the decryption keys for the Equivalent Inverse Cipher. - * This involves reversing the order of the round keys, and applying - * the Inverse Mix Columns transformation to all but the first and - * the last one. - */ - ctx->key_dec[0] = ctx->key_enc[key_len + 24]; - ctx->key_dec[1] = ctx->key_enc[key_len + 25]; - ctx->key_dec[2] = ctx->key_enc[key_len + 26]; - ctx->key_dec[3] = ctx->key_enc[key_len + 27]; - - for (i = 4, j = key_len + 20; j > 0; i += 4, j -= 4) { - ctx->key_dec[i] = inv_mix_columns(ctx->key_enc[j]); - ctx->key_dec[i + 1] = inv_mix_columns(ctx->key_enc[j + 1]); - ctx->key_dec[i + 2] = inv_mix_columns(ctx->key_enc[j + 2]); - ctx->key_dec[i + 3] = inv_mix_columns(ctx->key_enc[j + 3]); - } - - ctx->key_dec[i] = ctx->key_enc[0]; - ctx->key_dec[i + 1] = ctx->key_enc[1]; - ctx->key_dec[i + 2] = ctx->key_enc[2]; - ctx->key_dec[i + 3] = ctx->key_enc[3]; - - return 0; -} static int aesti_set_key(struct crypto_tfm *tfm, const u8 *in_key, unsigned int key_len) { struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); - return aesti_expand_key(ctx, in_key, key_len); + return aes_expandkey(ctx, in_key, key_len); } static void aesti_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) { const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); - const u32 *rkp = ctx->key_enc + 4; - int rounds = 6 + ctx->key_length / 4; - u32 st0[4], st1[4]; - unsigned long flags; - int round; - st0[0] = ctx->key_enc[0] ^ get_unaligned_le32(in); - st0[1] = ctx->key_enc[1] ^ get_unaligned_le32(in + 4); - st0[2] = ctx->key_enc[2] ^ get_unaligned_le32(in + 8); - st0[3] = ctx->key_enc[3] ^ get_unaligned_le32(in + 12); - - /* - * Temporarily disable interrupts to avoid races where cachelines are - * evicted when the CPU is interrupted to do something else. - */ - local_irq_save(flags); - - /* - * Force the compiler to emit data independent Sbox references, - * by xoring the input with Sbox values that are known to add up - * to zero. This pulls the entire Sbox into the D-cache before any - * data dependent lookups are done. 
- */ - st0[0] ^= __aesti_sbox[ 0] ^ __aesti_sbox[ 64] ^ __aesti_sbox[134] ^ __aesti_sbox[195]; - st0[1] ^= __aesti_sbox[16] ^ __aesti_sbox[ 82] ^ __aesti_sbox[158] ^ __aesti_sbox[221]; - st0[2] ^= __aesti_sbox[32] ^ __aesti_sbox[ 96] ^ __aesti_sbox[160] ^ __aesti_sbox[234]; - st0[3] ^= __aesti_sbox[48] ^ __aesti_sbox[112] ^ __aesti_sbox[186] ^ __aesti_sbox[241]; - - for (round = 0;; round += 2, rkp += 8) { - st1[0] = mix_columns(subshift(st0, 0)) ^ rkp[0]; - st1[1] = mix_columns(subshift(st0, 1)) ^ rkp[1]; - st1[2] = mix_columns(subshift(st0, 2)) ^ rkp[2]; - st1[3] = mix_columns(subshift(st0, 3)) ^ rkp[3]; - - if (round == rounds - 2) - break; - - st0[0] = mix_columns(subshift(st1, 0)) ^ rkp[4]; - st0[1] = mix_columns(subshift(st1, 1)) ^ rkp[5]; - st0[2] = mix_columns(subshift(st1, 2)) ^ rkp[6]; - st0[3] = mix_columns(subshift(st1, 3)) ^ rkp[7]; - } - - put_unaligned_le32(subshift(st1, 0) ^ rkp[4], out); - put_unaligned_le32(subshift(st1, 1) ^ rkp[5], out + 4); - put_unaligned_le32(subshift(st1, 2) ^ rkp[6], out + 8); - put_unaligned_le32(subshift(st1, 3) ^ rkp[7], out + 12); - - local_irq_restore(flags); + aes_encrypt(ctx, out, in); } static void aesti_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) { const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); - const u32 *rkp = ctx->key_dec + 4; - int rounds = 6 + ctx->key_length / 4; - u32 st0[4], st1[4]; - unsigned long flags; - int round; - - st0[0] = ctx->key_dec[0] ^ get_unaligned_le32(in); - st0[1] = ctx->key_dec[1] ^ get_unaligned_le32(in + 4); - st0[2] = ctx->key_dec[2] ^ get_unaligned_le32(in + 8); - st0[3] = ctx->key_dec[3] ^ get_unaligned_le32(in + 12); - - /* - * Temporarily disable interrupts to avoid races where cachelines are - * evicted when the CPU is interrupted to do something else. - */ - local_irq_save(flags); - - /* - * Force the compiler to emit data independent Sbox references, - * by xoring the input with Sbox values that are known to add up - * to zero. This pulls the entire Sbox into the D-cache before any - * data dependent lookups are done. 
- */ - st0[0] ^= __aesti_inv_sbox[ 0] ^ __aesti_inv_sbox[ 64] ^ __aesti_inv_sbox[129] ^ __aesti_inv_sbox[200]; - st0[1] ^= __aesti_inv_sbox[16] ^ __aesti_inv_sbox[ 83] ^ __aesti_inv_sbox[150] ^ __aesti_inv_sbox[212]; - st0[2] ^= __aesti_inv_sbox[32] ^ __aesti_inv_sbox[ 96] ^ __aesti_inv_sbox[160] ^ __aesti_inv_sbox[236]; - st0[3] ^= __aesti_inv_sbox[48] ^ __aesti_inv_sbox[112] ^ __aesti_inv_sbox[187] ^ __aesti_inv_sbox[247]; - - for (round = 0;; round += 2, rkp += 8) { - st1[0] = inv_mix_columns(inv_subshift(st0, 0)) ^ rkp[0]; - st1[1] = inv_mix_columns(inv_subshift(st0, 1)) ^ rkp[1]; - st1[2] = inv_mix_columns(inv_subshift(st0, 2)) ^ rkp[2]; - st1[3] = inv_mix_columns(inv_subshift(st0, 3)) ^ rkp[3]; - - if (round == rounds - 2) - break; - - st0[0] = inv_mix_columns(inv_subshift(st1, 0)) ^ rkp[4]; - st0[1] = inv_mix_columns(inv_subshift(st1, 1)) ^ rkp[5]; - st0[2] = inv_mix_columns(inv_subshift(st1, 2)) ^ rkp[6]; - st0[3] = inv_mix_columns(inv_subshift(st1, 3)) ^ rkp[7]; - } - - put_unaligned_le32(inv_subshift(st1, 0) ^ rkp[4], out); - put_unaligned_le32(inv_subshift(st1, 1) ^ rkp[5], out + 4); - put_unaligned_le32(inv_subshift(st1, 2) ^ rkp[6], out + 8); - put_unaligned_le32(inv_subshift(st1, 3) ^ rkp[7], out + 12); - local_irq_restore(flags); + aes_decrypt(ctx, out, in); } static struct crypto_alg aes_alg = { diff --git a/include/crypto/aes.h b/include/crypto/aes.h index 0fdb542c70cd..72ead82d3f98 100644 --- a/include/crypto/aes.h +++ b/include/crypto/aes.h @@ -37,4 +37,38 @@ int crypto_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key, unsigned int key_len); int crypto_aes_expand_key(struct crypto_aes_ctx *ctx, const u8 *in_key, unsigned int key_len); + +/** + * aes_expandkey - Expands the AES key as described in FIPS-197 + * @ctx: The location where the computed key will be stored. + * @in_key: The supplied key. + * @key_len: The length of the supplied key. + * + * Returns 0 on success. The function fails only if an invalid key size (or + * pointer) is supplied. + * The expanded key size is 240 bytes (max of 14 rounds with a unique 16 bytes + * key schedule plus a 16 bytes key which is used before the first round). + * The decryption key is prepared for the "Equivalent Inverse Cipher" as + * described in FIPS-197. The first slot (16 bytes) of each key (enc or dec) is + * for the initial combination, the second slot for the first round and so on. 
+ */ +int aes_expandkey(struct crypto_aes_ctx *ctx, const u8 *in_key, + unsigned int key_len); + +/** + * aes_encrypt - Encrypt a single AES block + * @ctx: Context struct containing the key schedule + * @out: Buffer to store the ciphertext + * @in: Buffer containing the plaintext + */ +void aes_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in); + +/** + * aes_decrypt - Encrypt a single AES block + * @ctx: Context struct containing the key schedule + * @out: Buffer to store the plaintext + * @in: Buffer containing the ciphertext + */ +void aes_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in); + #endif diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile index 88195c34932d..42a91c62d96d 100644 --- a/lib/crypto/Makefile +++ b/lib/crypto/Makefile @@ -1,4 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 +obj-$(CONFIG_CRYPTO_LIB_AES) += libaes.o +libaes-y := aes.o + obj-$(CONFIG_CRYPTO_LIB_ARC4) += libarc4.o libarc4-y := arc4.o diff --git a/lib/crypto/aes.c b/lib/crypto/aes.c new file mode 100644 index 000000000000..57596148b010 --- /dev/null +++ b/lib/crypto/aes.c @@ -0,0 +1,368 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2017-2019 Linaro Ltd + */ + +#include +#include +#include +#include + +/* + * Emit the sbox as volatile const to prevent the compiler from doing + * constant folding on sbox references involving fixed indexes. + */ +static volatile const u8 __cacheline_aligned aes_sbox[] = { + 0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5, + 0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76, + 0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0, + 0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4, 0x72, 0xc0, + 0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc, + 0x34, 0xa5, 0xe5, 0xf1, 0x71, 0xd8, 0x31, 0x15, + 0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a, + 0x07, 0x12, 0x80, 0xe2, 0xeb, 0x27, 0xb2, 0x75, + 0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0, + 0x52, 0x3b, 0xd6, 0xb3, 0x29, 0xe3, 0x2f, 0x84, + 0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b, + 0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf, + 0xd0, 0xef, 0xaa, 0xfb, 0x43, 0x4d, 0x33, 0x85, + 0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c, 0x9f, 0xa8, + 0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5, + 0xbc, 0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2, + 0xcd, 0x0c, 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17, + 0xc4, 0xa7, 0x7e, 0x3d, 0x64, 0x5d, 0x19, 0x73, + 0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88, + 0x46, 0xee, 0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb, + 0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c, + 0xc2, 0xd3, 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79, + 0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9, + 0x6c, 0x56, 0xf4, 0xea, 0x65, 0x7a, 0xae, 0x08, + 0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6, + 0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a, + 0x70, 0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e, + 0x61, 0x35, 0x57, 0xb9, 0x86, 0xc1, 0x1d, 0x9e, + 0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9, 0x8e, 0x94, + 0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf, + 0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68, + 0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16, +}; + +static volatile const u8 __cacheline_aligned aes_inv_sbox[] = { + 0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, + 0xbf, 0x40, 0xa3, 0x9e, 0x81, 0xf3, 0xd7, 0xfb, + 0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87, + 0x34, 0x8e, 0x43, 0x44, 0xc4, 0xde, 0xe9, 0xcb, + 0x54, 0x7b, 0x94, 0x32, 0xa6, 0xc2, 0x23, 0x3d, + 0xee, 0x4c, 0x95, 0x0b, 0x42, 0xfa, 0xc3, 0x4e, + 0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24, 0xb2, + 0x76, 0x5b, 0xa2, 0x49, 0x6d, 0x8b, 0xd1, 0x25, + 0x72, 0xf8, 0xf6, 0x64, 
0x86, 0x68, 0x98, 0x16, + 0xd4, 0xa4, 0x5c, 0xcc, 0x5d, 0x65, 0xb6, 0x92, + 0x6c, 0x70, 0x48, 0x50, 0xfd, 0xed, 0xb9, 0xda, + 0x5e, 0x15, 0x46, 0x57, 0xa7, 0x8d, 0x9d, 0x84, + 0x90, 0xd8, 0xab, 0x00, 0x8c, 0xbc, 0xd3, 0x0a, + 0xf7, 0xe4, 0x58, 0x05, 0xb8, 0xb3, 0x45, 0x06, + 0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02, + 0xc1, 0xaf, 0xbd, 0x03, 0x01, 0x13, 0x8a, 0x6b, + 0x3a, 0x91, 0x11, 0x41, 0x4f, 0x67, 0xdc, 0xea, + 0x97, 0xf2, 0xcf, 0xce, 0xf0, 0xb4, 0xe6, 0x73, + 0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85, + 0xe2, 0xf9, 0x37, 0xe8, 0x1c, 0x75, 0xdf, 0x6e, + 0x47, 0xf1, 0x1a, 0x71, 0x1d, 0x29, 0xc5, 0x89, + 0x6f, 0xb7, 0x62, 0x0e, 0xaa, 0x18, 0xbe, 0x1b, + 0xfc, 0x56, 0x3e, 0x4b, 0xc6, 0xd2, 0x79, 0x20, + 0x9a, 0xdb, 0xc0, 0xfe, 0x78, 0xcd, 0x5a, 0xf4, + 0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07, 0xc7, 0x31, + 0xb1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xec, 0x5f, + 0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d, + 0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef, + 0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0, + 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61, + 0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26, + 0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d, +}; + +static u32 mul_by_x(u32 w) +{ + u32 x = w & 0x7f7f7f7f; + u32 y = w & 0x80808080; + + /* multiply by polynomial 'x' (0b10) in GF(2^8) */ + return (x << 1) ^ (y >> 7) * 0x1b; +} + +static u32 mul_by_x2(u32 w) +{ + u32 x = w & 0x3f3f3f3f; + u32 y = w & 0x80808080; + u32 z = w & 0x40404040; + + /* multiply by polynomial 'x^2' (0b100) in GF(2^8) */ + return (x << 2) ^ (y >> 7) * 0x36 ^ (z >> 6) * 0x1b; +} + +static u32 mix_columns(u32 x) +{ + /* + * Perform the following matrix multiplication in GF(2^8) + * + * | 0x2 0x3 0x1 0x1 | | x[0] | + * | 0x1 0x2 0x3 0x1 | | x[1] | + * | 0x1 0x1 0x2 0x3 | x | x[2] | + * | 0x3 0x1 0x1 0x2 | | x[3] | + */ + u32 y = mul_by_x(x) ^ ror32(x, 16); + + return y ^ ror32(x ^ y, 8); +} + +static u32 inv_mix_columns(u32 x) +{ + /* + * Perform the following matrix multiplication in GF(2^8) + * + * | 0xe 0xb 0xd 0x9 | | x[0] | + * | 0x9 0xe 0xb 0xd | | x[1] | + * | 0xd 0x9 0xe 0xb | x | x[2] | + * | 0xb 0xd 0x9 0xe | | x[3] | + * + * which can conveniently be reduced to + * + * | 0x2 0x3 0x1 0x1 | | 0x5 0x0 0x4 0x0 | | x[0] | + * | 0x1 0x2 0x3 0x1 | | 0x0 0x5 0x0 0x4 | | x[1] | + * | 0x1 0x1 0x2 0x3 | x | 0x4 0x0 0x5 0x0 | x | x[2] | + * | 0x3 0x1 0x1 0x2 | | 0x0 0x4 0x0 0x5 | | x[3] | + */ + u32 y = mul_by_x2(x); + + return mix_columns(x ^ y ^ ror32(y, 16)); +} + +static __always_inline u32 subshift(u32 in[], int pos) +{ + return (aes_sbox[in[pos] & 0xff]) ^ + (aes_sbox[(in[(pos + 1) % 4] >> 8) & 0xff] << 8) ^ + (aes_sbox[(in[(pos + 2) % 4] >> 16) & 0xff] << 16) ^ + (aes_sbox[(in[(pos + 3) % 4] >> 24) & 0xff] << 24); +} + +static __always_inline u32 inv_subshift(u32 in[], int pos) +{ + return (aes_inv_sbox[in[pos] & 0xff]) ^ + (aes_inv_sbox[(in[(pos + 3) % 4] >> 8) & 0xff] << 8) ^ + (aes_inv_sbox[(in[(pos + 2) % 4] >> 16) & 0xff] << 16) ^ + (aes_inv_sbox[(in[(pos + 1) % 4] >> 24) & 0xff] << 24); +} + +static u32 subw(u32 in) +{ + return (aes_sbox[in & 0xff]) ^ + (aes_sbox[(in >> 8) & 0xff] << 8) ^ + (aes_sbox[(in >> 16) & 0xff] << 16) ^ + (aes_sbox[(in >> 24) & 0xff] << 24); +} + +/** + * aes_expandkey - Expands the AES key as described in FIPS-197 + * @ctx: The location where the computed key will be stored. + * @in_key: The supplied key. + * @key_len: The length of the supplied key. + * + * Returns 0 on success. The function fails only if an invalid key size (or + * pointer) is supplied. 
+ * The expanded key size is 240 bytes (max of 14 rounds with a unique 16 bytes + * key schedule plus a 16 bytes key which is used before the first round). + * The decryption key is prepared for the "Equivalent Inverse Cipher" as + * described in FIPS-197. The first slot (16 bytes) of each key (enc or dec) is + * for the initial combination, the second slot for the first round and so on. + */ +int aes_expandkey(struct crypto_aes_ctx *ctx, const u8 *in_key, + unsigned int key_len) +{ + u32 kwords = key_len / sizeof(u32); + u32 rc, i, j; + + if (key_len != AES_KEYSIZE_128 && + key_len != AES_KEYSIZE_192 && + key_len != AES_KEYSIZE_256) + return -EINVAL; + + ctx->key_length = key_len; + + for (i = 0; i < kwords; i++) + ctx->key_enc[i] = get_unaligned_le32(in_key + i * sizeof(u32)); + + for (i = 0, rc = 1; i < 10; i++, rc = mul_by_x(rc)) { + u32 *rki = ctx->key_enc + (i * kwords); + u32 *rko = rki + kwords; + + rko[0] = ror32(subw(rki[kwords - 1]), 8) ^ rc ^ rki[0]; + rko[1] = rko[0] ^ rki[1]; + rko[2] = rko[1] ^ rki[2]; + rko[3] = rko[2] ^ rki[3]; + + if (key_len == 24) { + if (i >= 7) + break; + rko[4] = rko[3] ^ rki[4]; + rko[5] = rko[4] ^ rki[5]; + } else if (key_len == 32) { + if (i >= 6) + break; + rko[4] = subw(rko[3]) ^ rki[4]; + rko[5] = rko[4] ^ rki[5]; + rko[6] = rko[5] ^ rki[6]; + rko[7] = rko[6] ^ rki[7]; + } + } + + /* + * Generate the decryption keys for the Equivalent Inverse Cipher. + * This involves reversing the order of the round keys, and applying + * the Inverse Mix Columns transformation to all but the first and + * the last one. + */ + ctx->key_dec[0] = ctx->key_enc[key_len + 24]; + ctx->key_dec[1] = ctx->key_enc[key_len + 25]; + ctx->key_dec[2] = ctx->key_enc[key_len + 26]; + ctx->key_dec[3] = ctx->key_enc[key_len + 27]; + + for (i = 4, j = key_len + 20; j > 0; i += 4, j -= 4) { + ctx->key_dec[i] = inv_mix_columns(ctx->key_enc[j]); + ctx->key_dec[i + 1] = inv_mix_columns(ctx->key_enc[j + 1]); + ctx->key_dec[i + 2] = inv_mix_columns(ctx->key_enc[j + 2]); + ctx->key_dec[i + 3] = inv_mix_columns(ctx->key_enc[j + 3]); + } + + ctx->key_dec[i] = ctx->key_enc[0]; + ctx->key_dec[i + 1] = ctx->key_enc[1]; + ctx->key_dec[i + 2] = ctx->key_enc[2]; + ctx->key_dec[i + 3] = ctx->key_enc[3]; + + return 0; +} +EXPORT_SYMBOL(aes_expandkey); + +/** + * aes_encrypt - Encrypt a single AES block + * @ctx: Context struct containing the key schedule + * @out: Buffer to store the ciphertext + * @in: Buffer containing the plaintext + */ +void aes_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in) +{ + const u32 *rkp = ctx->key_enc + 4; + int rounds = 6 + ctx->key_length / 4; + u32 st0[4], st1[4]; + unsigned long flags; + int round; + + st0[0] = ctx->key_enc[0] ^ get_unaligned_le32(in); + st0[1] = ctx->key_enc[1] ^ get_unaligned_le32(in + 4); + st0[2] = ctx->key_enc[2] ^ get_unaligned_le32(in + 8); + st0[3] = ctx->key_enc[3] ^ get_unaligned_le32(in + 12); + + /* + * Temporarily disable interrupts to avoid races where cachelines are + * evicted when the CPU is interrupted to do something else. + */ + local_irq_save(flags); + + /* + * Force the compiler to emit data independent Sbox references, + * by xoring the input with Sbox values that are known to add up + * to zero. This pulls the entire Sbox into the D-cache before any + * data dependent lookups are done. 
+ */ + st0[0] ^= aes_sbox[ 0] ^ aes_sbox[ 64] ^ aes_sbox[134] ^ aes_sbox[195]; + st0[1] ^= aes_sbox[16] ^ aes_sbox[ 82] ^ aes_sbox[158] ^ aes_sbox[221]; + st0[2] ^= aes_sbox[32] ^ aes_sbox[ 96] ^ aes_sbox[160] ^ aes_sbox[234]; + st0[3] ^= aes_sbox[48] ^ aes_sbox[112] ^ aes_sbox[186] ^ aes_sbox[241]; + + for (round = 0;; round += 2, rkp += 8) { + st1[0] = mix_columns(subshift(st0, 0)) ^ rkp[0]; + st1[1] = mix_columns(subshift(st0, 1)) ^ rkp[1]; + st1[2] = mix_columns(subshift(st0, 2)) ^ rkp[2]; + st1[3] = mix_columns(subshift(st0, 3)) ^ rkp[3]; + + if (round == rounds - 2) + break; + + st0[0] = mix_columns(subshift(st1, 0)) ^ rkp[4]; + st0[1] = mix_columns(subshift(st1, 1)) ^ rkp[5]; + st0[2] = mix_columns(subshift(st1, 2)) ^ rkp[6]; + st0[3] = mix_columns(subshift(st1, 3)) ^ rkp[7]; + } + + put_unaligned_le32(subshift(st1, 0) ^ rkp[4], out); + put_unaligned_le32(subshift(st1, 1) ^ rkp[5], out + 4); + put_unaligned_le32(subshift(st1, 2) ^ rkp[6], out + 8); + put_unaligned_le32(subshift(st1, 3) ^ rkp[7], out + 12); + + local_irq_restore(flags); +} +EXPORT_SYMBOL(aes_encrypt); + +/** + * aes_decrypt - Decrypt a single AES block + * @ctx: Context struct containing the key schedule + * @out: Buffer to store the plaintext + * @in: Buffer containing the ciphertext + */ +void aes_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in) +{ + const u32 *rkp = ctx->key_dec + 4; + int rounds = 6 + ctx->key_length / 4; + u32 st0[4], st1[4]; + unsigned long flags; + int round; + + st0[0] = ctx->key_dec[0] ^ get_unaligned_le32(in); + st0[1] = ctx->key_dec[1] ^ get_unaligned_le32(in + 4); + st0[2] = ctx->key_dec[2] ^ get_unaligned_le32(in + 8); + st0[3] = ctx->key_dec[3] ^ get_unaligned_le32(in + 12); + + /* + * Temporarily disable interrupts to avoid races where cachelines are + * evicted when the CPU is interrupted to do something else. + */ + local_irq_save(flags); + + /* + * Force the compiler to emit data independent Sbox references, + * by xoring the input with Sbox values that are known to add up + * to zero. This pulls the entire Sbox into the D-cache before any + * data dependent lookups are done.
+ */ + st0[0] ^= aes_inv_sbox[ 0] ^ aes_inv_sbox[ 64] ^ aes_inv_sbox[129] ^ aes_inv_sbox[200]; + st0[1] ^= aes_inv_sbox[16] ^ aes_inv_sbox[ 83] ^ aes_inv_sbox[150] ^ aes_inv_sbox[212]; + st0[2] ^= aes_inv_sbox[32] ^ aes_inv_sbox[ 96] ^ aes_inv_sbox[160] ^ aes_inv_sbox[236]; + st0[3] ^= aes_inv_sbox[48] ^ aes_inv_sbox[112] ^ aes_inv_sbox[187] ^ aes_inv_sbox[247]; + + for (round = 0;; round += 2, rkp += 8) { + st1[0] = inv_mix_columns(inv_subshift(st0, 0)) ^ rkp[0]; + st1[1] = inv_mix_columns(inv_subshift(st0, 1)) ^ rkp[1]; + st1[2] = inv_mix_columns(inv_subshift(st0, 2)) ^ rkp[2]; + st1[3] = inv_mix_columns(inv_subshift(st0, 3)) ^ rkp[3]; + + if (round == rounds - 2) + break; + + st0[0] = inv_mix_columns(inv_subshift(st1, 0)) ^ rkp[4]; + st0[1] = inv_mix_columns(inv_subshift(st1, 1)) ^ rkp[5]; + st0[2] = inv_mix_columns(inv_subshift(st1, 2)) ^ rkp[6]; + st0[3] = inv_mix_columns(inv_subshift(st1, 3)) ^ rkp[7]; + } + + put_unaligned_le32(inv_subshift(st1, 0) ^ rkp[4], out); + put_unaligned_le32(inv_subshift(st1, 1) ^ rkp[5], out + 4); + put_unaligned_le32(inv_subshift(st1, 2) ^ rkp[6], out + 8); + put_unaligned_le32(inv_subshift(st1, 3) ^ rkp[7], out + 12); + + local_irq_restore(flags); +} +EXPORT_SYMBOL(aes_decrypt); + +MODULE_DESCRIPTION("Generic AES library"); +MODULE_AUTHOR("Ard Biesheuvel "); +MODULE_LICENSE("GPL v2"); From patchwork Wed Jun 12 12:48:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166551 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3640692ilk; Wed, 12 Jun 2019 05:48:56 -0700 (PDT) X-Google-Smtp-Source: APXvYqwDSmU3wG54L1PrrhqgHxrFzb24UlziKgwDjFqloyMVD/Cnp7x5DFtZClEmDgUVlYSD+RiS X-Received: by 2002:a63:91c4:: with SMTP id l187mr23469791pge.95.1560343736622; Wed, 12 Jun 2019 05:48:56 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343736; cv=none; d=google.com; s=arc-20160816; b=JuI+7C4iL/6qYZUGct07nSkNUD0DcalUw+4bzYBTVLBElWPWq2/Wh8TRqSkkS7g/pv yIvPnEJCx7NiY+arbvcOnkGqlBuY0WkIa5o1FQyfrLmoK2ZhOyhBk/a2ainsOD3UsjJn 1JBWLn6+gQ6HhTnSKzQGULSzOS9Tlx9dZexME9tg6k3OMirYAPXXn8lWsUBa3OVzRo2z qMT6hl32Q66G92M4GuabuyfCvpUQBjCApnn0L94VqYAp7fORbWQTZLdtQwRHInNCKBvm 0WMO5qI5BWHXUgESI8siMid5PiXVDvC1Td1ekJiQuJJLFy36VZna0LaG++4zyuW21Ipa 7+5w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=kYNNbe+mFt8Q+RY9DxwwFbhQEO3KEw5kT4vc21gIvDg=; b=RYt8VU+rxX/YnCeKLhgy8yfs4uj3snnHgzJ7YWWIE35hWEzRsdgYX9dTPLK0kSv43+ zV45dfvLrOdRX/7KnhPxAHrujD7sfbu0eb+GSTKAMoBdytaJLuxONB04RwZcRZsfiMSN LQhOwyZx9kMNsksCCydA9/JSELTkVnadIyU5kFyxLPxZ+l8N/XkqlskA/gmUkK1BIVzR kVEaYavM93E+mhV3af9f48h8Bw6kvbvJ88nbGJuBztUpEtTyMRew+u9Cf/369IZ3xnaz JpcnqUwQgo12sZvaEnwEWwFQT7i6fLONlXLA4hU+1hKHLlTP9fCNdRli6RdqXg2PrfDk X+lw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=M+2h4fD4; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.48.56; Wed, 12 Jun 2019 05:48:56 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=M+2h4fD4; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2409160AbfFLMsz (ORCPT + 3 others); Wed, 12 Jun 2019 08:48:55 -0400 Received: from mail-wm1-f67.google.com ([209.85.128.67]:40633 "EHLO mail-wm1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2409157AbfFLMsz (ORCPT ); Wed, 12 Jun 2019 08:48:55 -0400 Received: by mail-wm1-f67.google.com with SMTP id v19so6389255wmj.5 for ; Wed, 12 Jun 2019 05:48:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=kYNNbe+mFt8Q+RY9DxwwFbhQEO3KEw5kT4vc21gIvDg=; b=M+2h4fD46HUC4cJ4KOuokanGoNSZ7xsT1Q+jGhMcs61l2SOksGa3I5dHZJ786fc1qX hwWLnCna9N+1lAxU3P9SSJn8TKRYJxcyR9J6luCq5uqa+ygm7ntsNvI+jDsfVTh4wuO7 zLE28BI+hzPxUPqD4vUvwEXFBseb3e31yDKoID1mSHjLQa3Tl8iQZQHALHnCA0AjKWgX SAgnb6jlklnhZemcReGewK60rQ1ilAj72SyvQVQ35FgVVKPpAV+BYMtkHEOLnVCkUw0w C3t3tpAJqsEyHzbbjEelLLw/QJ8s+1v8JuGhRkVHs7D2rYPWmbLqi5m8rcPZLpUPDtUo 3jTA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=kYNNbe+mFt8Q+RY9DxwwFbhQEO3KEw5kT4vc21gIvDg=; b=cdxOuqOeL9ETOjOaO0ZvATtonhw8PB0vud7+W5Z20sXcXbL1tGDoWejy3/rPAUwMo8 Q2jui4WHjGZjUlti5RFc8YkZ9+iU51VbsiYcWurE4AHgkxxy50UdFpU6pMgE1k7yCsOD rS6NoprcIjAN6h92P+zTY96C91y7DujfT7UDWTyaXPRPYTj7YGp27FilyM5GqfJrlA2p Yk/26SK1C5xrbKNr4xwP9ETpyXrZ9kT4Plm5HfIkNTG7PDIBo1aFPq5JsI+fxZshBXci y/4A0kzZwGpv82Y2zwkUmjJhDU6Q1aZ2fH0FZr4ygL2FOhDmp5WUNb/hMV5S6hO6Erx+ J3wA== X-Gm-Message-State: APjAAAV9FD5I1Wafj+GfuvowXtJcuNJ2rt1Z4t1NUufXG0s3elcAY5B1 civF4xp9MkQP+1cYSQlPJVuHfY8rEoRzrA== X-Received: by 2002:a05:600c:2243:: with SMTP id a3mr21220424wmm.83.1560343732621; Wed, 12 Jun 2019 05:48:52 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.51 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:48:52 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 05/20] crypto: x86/aes-ni - switch to generic for fallback and key routines Date: Wed, 12 Jun 2019 14:48:23 +0200 Message-Id: <20190612124838.2492-6-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The AES-NI code contains fallbacks for invocations that occur from a context where the SIMD unit is unavailable, which really only occurs when running in 
softirq context that was entered from a hard IRQ that was taken while running kernel code that was already using the FPU. That means performance is not really a consideration, and we can just use the new library code for this use case, which has a smaller footprint and is believed to be time invariant. This will allow us to drop the non-SIMD asm routines in a subsequent patch. Signed-off-by: Ard Biesheuvel --- arch/x86/crypto/aesni-intel_glue.c | 15 +++++++-------- arch/x86/include/asm/crypto/aes.h | 12 ------------ crypto/Kconfig | 3 +-- 3 files changed, 8 insertions(+), 22 deletions(-) -- 2.20.1 diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index e9b866e87d48..9952bd312ddc 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -26,7 +26,6 @@ #include #include #include -#include #include #include #include @@ -329,7 +328,7 @@ static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx, } if (!crypto_simd_usable()) - err = crypto_aes_expand_key(ctx, in_key, key_len); + err = aes_expandkey(ctx, in_key, key_len); else { kernel_fpu_begin(); err = aesni_set_key(ctx, in_key, key_len); @@ -349,9 +348,9 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) { struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm)); - if (!crypto_simd_usable()) - crypto_aes_encrypt_x86(ctx, dst, src); - else { + if (!crypto_simd_usable()) { + aes_encrypt(ctx, dst, src); + } else { kernel_fpu_begin(); aesni_enc(ctx, dst, src); kernel_fpu_end(); @@ -362,9 +361,9 @@ static void aes_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) { struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm)); - if (!crypto_simd_usable()) - crypto_aes_decrypt_x86(ctx, dst, src); - else { + if (!crypto_simd_usable()) { + aes_decrypt(ctx, dst, src); + } else { kernel_fpu_begin(); aesni_dec(ctx, dst, src); kernel_fpu_end(); diff --git a/arch/x86/include/asm/crypto/aes.h b/arch/x86/include/asm/crypto/aes.h deleted file mode 100644 index c508521dd190..000000000000 --- a/arch/x86/include/asm/crypto/aes.h +++ /dev/null @@ -1,12 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef ASM_X86_AES_H -#define ASM_X86_AES_H - -#include -#include - -void crypto_aes_encrypt_x86(struct crypto_aes_ctx *ctx, u8 *dst, - const u8 *src); -void crypto_aes_decrypt_x86(struct crypto_aes_ctx *ctx, u8 *dst, - const u8 *src); -#endif diff --git a/crypto/Kconfig b/crypto/Kconfig index dc6f93ef3ead..0d80985016bf 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1149,8 +1149,7 @@ config CRYPTO_AES_NI_INTEL tristate "AES cipher algorithms (AES-NI)" depends on X86 select CRYPTO_AEAD - select CRYPTO_AES_X86_64 if 64BIT - select CRYPTO_AES_586 if !64BIT + select CRYPTO_LIB_AES select CRYPTO_ALGAPI select CRYPTO_BLKCIPHER select CRYPTO_GLUE_HELPER_X86 if 64BIT From patchwork Wed Jun 12 12:48:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166554 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3640744ilk; Wed, 12 Jun 2019 05:48:59 -0700 (PDT) X-Google-Smtp-Source: APXvYqw9PmXVt6li0aioq/T5Gtl5iRJMbhVBsyeETlk6VMl4OY/Px0fjcCb+XBNXdWrW9JmzKt4y X-Received: by 2002:a65:6284:: with SMTP id f4mr25436215pgv.14.1560343739494; Wed, 12 Jun 2019 05:48:59 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343739; cv=none; d=google.com; s=arc-20160816; b=RUzP+IFtGjB84Ukzxd/JkgxXJcKmaIZRTCzXuLumFacf1PYqkEg8mQErM9iSkbz1ty 
m5MCEARJIGuNTgXIGEmNkGToL20ufwhFEEfKuzD9OhQTFxot6Q9EhW36IWmVRTtTGPqZ 0mg1ElH8WT3qtEtDnW/RD/z8CPpvzdEqAn3FMoIfr+2iFiIoeTYdxGT8Xx3mxbUBpB0l 0h7g5FNvphgVLym5oqEKvJYq2oXMqMyOafPq8diuthGV9lexbRTWfAZo/87oGcbZB2NV WS8jJzQ9LXC/TMn2EIT5fyX8R81UFKUsb3rig6JC6tkkaf8F0DyCJa53rrXR2jzwGHll JmUw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=76xN9WyeYIhCcwBpwFZTvWrdI8ZQQS6OHwuIy2N/Qdg=; b=WL9Xen/NsA+afbeNRvhKRhdeKsCHQhAgCitGboTGqsknOqjBsfKsjUi6tEDEFM3hHJ 9kn6IoT24w6frJMRGadqJ+/vobJZGoxTj3B+MKRYtoKxQ5ayCfWr92/q4tXqT8pXcgQN JDw5s+N0/AMRn7rnL1XbkXrtYV/PmzAQqqBLT+5sZcmdYudVuQXfD0wTvzQNVMnWPhTB tb2PMQ2HDCV/dhGSoGfYhVPAWqZX+oJ4i4T2QMcK9nilzyie9CeuUyOUiXa4YWlT1T4s IGDsX3HpweWloPUrcQ52OQAplhtre44Flt6z/b7kzOszdHAZQgMdFnZgD3HcPB5m7JVt dE6w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=VvGobrd1; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.48.59; Wed, 12 Jun 2019 05:48:59 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=VvGobrd1; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2439225AbfFLMs6 (ORCPT + 3 others); Wed, 12 Jun 2019 08:48:58 -0400 Received: from mail-wr1-f67.google.com ([209.85.221.67]:42815 "EHLO mail-wr1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2409161AbfFLMs6 (ORCPT ); Wed, 12 Jun 2019 08:48:58 -0400 Received: by mail-wr1-f67.google.com with SMTP id x17so1485018wrl.9 for ; Wed, 12 Jun 2019 05:48:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=76xN9WyeYIhCcwBpwFZTvWrdI8ZQQS6OHwuIy2N/Qdg=; b=VvGobrd1ZlgnDxeVKQG7TfwNsT5TYcduKpX9ULX1FgJQ8hijRWZyTZ7TqjRUGAqCkd brbRRJuKg9Pl2NWiu//dnM9AQhquseWXwStWe9yYCYWJ/S6nkA45M4ar33v8Bn1zHnii lBIn0wNNxs6d22hZmBT70eSasLpTGgmrBVG5casRaCz0ouO+QdOmvl0PpKwTZWh9FN41 nNF1MiCKNm1gB54jvs9f/4lL7aKV/TDQ1ztD3vyNflO96GgddvvvL12qxAF5yrY5kqiI UPRJIF3XqtYpL50/ymgsTvSczERjH+tAd+Tjc0fmTMbfXxud6aC65BvEMewuzgZrrgQ7 8/Yw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=76xN9WyeYIhCcwBpwFZTvWrdI8ZQQS6OHwuIy2N/Qdg=; b=M6FHCG59PLAsBaNyoMztEMOP5RknOoR4wNApyNLFex0iPYZUgbjF1Xn4CIXZAF3DTu J766abPwpfky62tUyeeoA3fxar5W6d1LXd3x+LcaYAy/n1EL6H7YYAa0rLgGNz18DzOs JQkuoESsa9iUgAvLKurHsMopPf/HamDud0souK4nyrUffi964xufQayvVOKtqaif2uhU 
qO5gDEr1dr1cY+ABZ0nia/43KMp7JaNpT+0oPCXdz+WXq7J8MkzAy/QWbNsPxBl8BK+u t9MarzrV6Tdk2DyisIqDuojbHwpTCUWyiBOfcSKIVVBryfrJ2LTVKMcf/SjMaa39UFYp clmQ== X-Gm-Message-State: APjAAAUSSIf4ebd40VRPQwAsYIIs0fSDwUA4c1I2AJ2yNuq0qW6CQXdH XPbS508Ka2zyxjKtL5XRJHZymoHtzpdBsQ== X-Received: by 2002:a5d:43c9:: with SMTP id v9mr53672370wrr.70.1560343733768; Wed, 12 Jun 2019 05:48:53 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.52 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:48:53 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 06/20] crypto: x86/aes - drop scalar assembler implementations Date: Wed, 12 Jun 2019 14:48:24 +0200 Message-Id: <20190612124838.2492-7-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The AES assembler code for x86 isn't actually faster than code generated by the compiler from aes_generic.c, and considering the disproportionate maintenance burden of assembler code on x86, it is better just to drop it entirely. Modern x86 systems will use AES-NI anyway, and given that the modules being removed have a dependency on aes_generic already, we can remove them without running the risk of regressions. Signed-off-by: Ard Biesheuvel --- arch/x86/crypto/Makefile | 4 - arch/x86/crypto/aes-i586-asm_32.S | 362 -------------------- arch/x86/crypto/aes-x86_64-asm_64.S | 185 ---------- arch/x86/crypto/aes_glue.c | 71 ---- crypto/Kconfig | 44 --- 5 files changed, 666 deletions(-) -- 2.20.1 diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile index 45734e1cf967..b96a14e67ab0 100644 --- a/arch/x86/crypto/Makefile +++ b/arch/x86/crypto/Makefile @@ -14,11 +14,9 @@ sha256_ni_supported :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,yes,no) obj-$(CONFIG_CRYPTO_GLUE_HELPER_X86) += glue_helper.o -obj-$(CONFIG_CRYPTO_AES_586) += aes-i586.o obj-$(CONFIG_CRYPTO_TWOFISH_586) += twofish-i586.o obj-$(CONFIG_CRYPTO_SERPENT_SSE2_586) += serpent-sse2-i586.o -obj-$(CONFIG_CRYPTO_AES_X86_64) += aes-x86_64.o obj-$(CONFIG_CRYPTO_DES3_EDE_X86_64) += des3_ede-x86_64.o obj-$(CONFIG_CRYPTO_CAMELLIA_X86_64) += camellia-x86_64.o obj-$(CONFIG_CRYPTO_BLOWFISH_X86_64) += blowfish-x86_64.o @@ -68,11 +66,9 @@ ifeq ($(avx2_supported),yes) obj-$(CONFIG_CRYPTO_MORUS1280_AVX2) += morus1280-avx2.o endif -aes-i586-y := aes-i586-asm_32.o aes_glue.o twofish-i586-y := twofish-i586-asm_32.o twofish_glue.o serpent-sse2-i586-y := serpent-sse2-i586-asm_32.o serpent_sse2_glue.o -aes-x86_64-y := aes-x86_64-asm_64.o aes_glue.o des3_ede-x86_64-y := des3_ede-asm_64.o des3_ede_glue.o camellia-x86_64-y := camellia-x86_64-asm_64.o camellia_glue.o blowfish-x86_64-y := blowfish-x86_64-asm_64.o blowfish_glue.o diff --git a/arch/x86/crypto/aes-i586-asm_32.S b/arch/x86/crypto/aes-i586-asm_32.S deleted file mode 100644 index 2849dbc59e11..000000000000 --- a/arch/x86/crypto/aes-i586-asm_32.S +++ /dev/null @@ -1,362 +0,0 @@ -// ------------------------------------------------------------------------- -// Copyright (c) 2001, Dr Brian Gladman < >, Worcester, UK. -// All rights reserved. 
-// -// LICENSE TERMS -// -// The free distribution and use of this software in both source and binary -// form is allowed (with or without changes) provided that: -// -// 1. distributions of this source code include the above copyright -// notice, this list of conditions and the following disclaimer// -// -// 2. distributions in binary form include the above copyright -// notice, this list of conditions and the following disclaimer -// in the documentation and/or other associated materials// -// -// 3. the copyright holder's name is not used to endorse products -// built using this software without specific written permission. -// -// -// ALTERNATIVELY, provided that this notice is retained in full, this product -// may be distributed under the terms of the GNU General Public License (GPL), -// in which case the provisions of the GPL apply INSTEAD OF those given above. -// -// Copyright (c) 2004 Linus Torvalds -// Copyright (c) 2004 Red Hat, Inc., James Morris - -// DISCLAIMER -// -// This software is provided 'as is' with no explicit or implied warranties -// in respect of its properties including, but not limited to, correctness -// and fitness for purpose. -// ------------------------------------------------------------------------- -// Issue Date: 29/07/2002 - -.file "aes-i586-asm.S" -.text - -#include -#include - -#define tlen 1024 // length of each of 4 'xor' arrays (256 32-bit words) - -/* offsets to parameters with one register pushed onto stack */ -#define ctx 8 -#define out_blk 12 -#define in_blk 16 - -/* offsets in crypto_aes_ctx structure */ -#define klen (480) -#define ekey (0) -#define dkey (240) - -// register mapping for encrypt and decrypt subroutines - -#define r0 eax -#define r1 ebx -#define r2 ecx -#define r3 edx -#define r4 esi -#define r5 edi - -#define eaxl al -#define eaxh ah -#define ebxl bl -#define ebxh bh -#define ecxl cl -#define ecxh ch -#define edxl dl -#define edxh dh - -#define _h(reg) reg##h -#define h(reg) _h(reg) - -#define _l(reg) reg##l -#define l(reg) _l(reg) - -// This macro takes a 32-bit word representing a column and uses -// each of its four bytes to index into four tables of 256 32-bit -// words to obtain values that are then xored into the appropriate -// output registers r0, r1, r4 or r5. 
- -// Parameters: -// table table base address -// %1 out_state[0] -// %2 out_state[1] -// %3 out_state[2] -// %4 out_state[3] -// idx input register for the round (destroyed) -// tmp scratch register for the round -// sched key schedule - -#define do_col(table, a1,a2,a3,a4, idx, tmp) \ - movzx %l(idx),%tmp; \ - xor table(,%tmp,4),%a1; \ - movzx %h(idx),%tmp; \ - shr $16,%idx; \ - xor table+tlen(,%tmp,4),%a2; \ - movzx %l(idx),%tmp; \ - movzx %h(idx),%idx; \ - xor table+2*tlen(,%tmp,4),%a3; \ - xor table+3*tlen(,%idx,4),%a4; - -// initialise output registers from the key schedule -// NB1: original value of a3 is in idx on exit -// NB2: original values of a1,a2,a4 aren't used -#define do_fcol(table, a1,a2,a3,a4, idx, tmp, sched) \ - mov 0 sched,%a1; \ - movzx %l(idx),%tmp; \ - mov 12 sched,%a2; \ - xor table(,%tmp,4),%a1; \ - mov 4 sched,%a4; \ - movzx %h(idx),%tmp; \ - shr $16,%idx; \ - xor table+tlen(,%tmp,4),%a2; \ - movzx %l(idx),%tmp; \ - movzx %h(idx),%idx; \ - xor table+3*tlen(,%idx,4),%a4; \ - mov %a3,%idx; \ - mov 8 sched,%a3; \ - xor table+2*tlen(,%tmp,4),%a3; - -// initialise output registers from the key schedule -// NB1: original value of a3 is in idx on exit -// NB2: original values of a1,a2,a4 aren't used -#define do_icol(table, a1,a2,a3,a4, idx, tmp, sched) \ - mov 0 sched,%a1; \ - movzx %l(idx),%tmp; \ - mov 4 sched,%a2; \ - xor table(,%tmp,4),%a1; \ - mov 12 sched,%a4; \ - movzx %h(idx),%tmp; \ - shr $16,%idx; \ - xor table+tlen(,%tmp,4),%a2; \ - movzx %l(idx),%tmp; \ - movzx %h(idx),%idx; \ - xor table+3*tlen(,%idx,4),%a4; \ - mov %a3,%idx; \ - mov 8 sched,%a3; \ - xor table+2*tlen(,%tmp,4),%a3; - - -// original Gladman had conditional saves to MMX regs. -#define save(a1, a2) \ - mov %a2,4*a1(%esp) - -#define restore(a1, a2) \ - mov 4*a2(%esp),%a1 - -// These macros perform a forward encryption cycle. They are entered with -// the first previous round column values in r0,r1,r4,r5 and -// exit with the final values in the same registers, using stack -// for temporary storage. - -// round column values -// on entry: r0,r1,r4,r5 -// on exit: r2,r1,r4,r5 -#define fwd_rnd1(arg, table) \ - save (0,r1); \ - save (1,r5); \ - \ - /* compute new column values */ \ - do_fcol(table, r2,r5,r4,r1, r0,r3, arg); /* idx=r0 */ \ - do_col (table, r4,r1,r2,r5, r0,r3); /* idx=r4 */ \ - restore(r0,0); \ - do_col (table, r1,r2,r5,r4, r0,r3); /* idx=r1 */ \ - restore(r0,1); \ - do_col (table, r5,r4,r1,r2, r0,r3); /* idx=r5 */ - -// round column values -// on entry: r2,r1,r4,r5 -// on exit: r0,r1,r4,r5 -#define fwd_rnd2(arg, table) \ - save (0,r1); \ - save (1,r5); \ - \ - /* compute new column values */ \ - do_fcol(table, r0,r5,r4,r1, r2,r3, arg); /* idx=r2 */ \ - do_col (table, r4,r1,r0,r5, r2,r3); /* idx=r4 */ \ - restore(r2,0); \ - do_col (table, r1,r0,r5,r4, r2,r3); /* idx=r1 */ \ - restore(r2,1); \ - do_col (table, r5,r4,r1,r0, r2,r3); /* idx=r5 */ - -// These macros performs an inverse encryption cycle. 
They are entered with -// the first previous round column values in r0,r1,r4,r5 and -// exit with the final values in the same registers, using stack -// for temporary storage - -// round column values -// on entry: r0,r1,r4,r5 -// on exit: r2,r1,r4,r5 -#define inv_rnd1(arg, table) \ - save (0,r1); \ - save (1,r5); \ - \ - /* compute new column values */ \ - do_icol(table, r2,r1,r4,r5, r0,r3, arg); /* idx=r0 */ \ - do_col (table, r4,r5,r2,r1, r0,r3); /* idx=r4 */ \ - restore(r0,0); \ - do_col (table, r1,r4,r5,r2, r0,r3); /* idx=r1 */ \ - restore(r0,1); \ - do_col (table, r5,r2,r1,r4, r0,r3); /* idx=r5 */ - -// round column values -// on entry: r2,r1,r4,r5 -// on exit: r0,r1,r4,r5 -#define inv_rnd2(arg, table) \ - save (0,r1); \ - save (1,r5); \ - \ - /* compute new column values */ \ - do_icol(table, r0,r1,r4,r5, r2,r3, arg); /* idx=r2 */ \ - do_col (table, r4,r5,r0,r1, r2,r3); /* idx=r4 */ \ - restore(r2,0); \ - do_col (table, r1,r4,r5,r0, r2,r3); /* idx=r1 */ \ - restore(r2,1); \ - do_col (table, r5,r0,r1,r4, r2,r3); /* idx=r5 */ - -// AES (Rijndael) Encryption Subroutine -/* void aes_enc_blk(struct crypto_aes_ctx *ctx, u8 *out_blk, const u8 *in_blk) */ - -.extern crypto_ft_tab -.extern crypto_fl_tab - -ENTRY(aes_enc_blk) - push %ebp - mov ctx(%esp),%ebp - -// CAUTION: the order and the values used in these assigns -// rely on the register mappings - -1: push %ebx - mov in_blk+4(%esp),%r2 - push %esi - mov klen(%ebp),%r3 // key size - push %edi -#if ekey != 0 - lea ekey(%ebp),%ebp // key pointer -#endif - -// input four columns and xor in first round key - - mov (%r2),%r0 - mov 4(%r2),%r1 - mov 8(%r2),%r4 - mov 12(%r2),%r5 - xor (%ebp),%r0 - xor 4(%ebp),%r1 - xor 8(%ebp),%r4 - xor 12(%ebp),%r5 - - sub $8,%esp // space for register saves on stack - add $16,%ebp // increment to next round key - cmp $24,%r3 - jb 4f // 10 rounds for 128-bit key - lea 32(%ebp),%ebp - je 3f // 12 rounds for 192-bit key - lea 32(%ebp),%ebp - -2: fwd_rnd1( -64(%ebp), crypto_ft_tab) // 14 rounds for 256-bit key - fwd_rnd2( -48(%ebp), crypto_ft_tab) -3: fwd_rnd1( -32(%ebp), crypto_ft_tab) // 12 rounds for 192-bit key - fwd_rnd2( -16(%ebp), crypto_ft_tab) -4: fwd_rnd1( (%ebp), crypto_ft_tab) // 10 rounds for 128-bit key - fwd_rnd2( +16(%ebp), crypto_ft_tab) - fwd_rnd1( +32(%ebp), crypto_ft_tab) - fwd_rnd2( +48(%ebp), crypto_ft_tab) - fwd_rnd1( +64(%ebp), crypto_ft_tab) - fwd_rnd2( +80(%ebp), crypto_ft_tab) - fwd_rnd1( +96(%ebp), crypto_ft_tab) - fwd_rnd2(+112(%ebp), crypto_ft_tab) - fwd_rnd1(+128(%ebp), crypto_ft_tab) - fwd_rnd2(+144(%ebp), crypto_fl_tab) // last round uses a different table - -// move final values to the output array. 
CAUTION: the -// order of these assigns rely on the register mappings - - add $8,%esp - mov out_blk+12(%esp),%ebp - mov %r5,12(%ebp) - pop %edi - mov %r4,8(%ebp) - pop %esi - mov %r1,4(%ebp) - pop %ebx - mov %r0,(%ebp) - pop %ebp - ret -ENDPROC(aes_enc_blk) - -// AES (Rijndael) Decryption Subroutine -/* void aes_dec_blk(struct crypto_aes_ctx *ctx, u8 *out_blk, const u8 *in_blk) */ - -.extern crypto_it_tab -.extern crypto_il_tab - -ENTRY(aes_dec_blk) - push %ebp - mov ctx(%esp),%ebp - -// CAUTION: the order and the values used in these assigns -// rely on the register mappings - -1: push %ebx - mov in_blk+4(%esp),%r2 - push %esi - mov klen(%ebp),%r3 // key size - push %edi -#if dkey != 0 - lea dkey(%ebp),%ebp // key pointer -#endif - -// input four columns and xor in first round key - - mov (%r2),%r0 - mov 4(%r2),%r1 - mov 8(%r2),%r4 - mov 12(%r2),%r5 - xor (%ebp),%r0 - xor 4(%ebp),%r1 - xor 8(%ebp),%r4 - xor 12(%ebp),%r5 - - sub $8,%esp // space for register saves on stack - add $16,%ebp // increment to next round key - cmp $24,%r3 - jb 4f // 10 rounds for 128-bit key - lea 32(%ebp),%ebp - je 3f // 12 rounds for 192-bit key - lea 32(%ebp),%ebp - -2: inv_rnd1( -64(%ebp), crypto_it_tab) // 14 rounds for 256-bit key - inv_rnd2( -48(%ebp), crypto_it_tab) -3: inv_rnd1( -32(%ebp), crypto_it_tab) // 12 rounds for 192-bit key - inv_rnd2( -16(%ebp), crypto_it_tab) -4: inv_rnd1( (%ebp), crypto_it_tab) // 10 rounds for 128-bit key - inv_rnd2( +16(%ebp), crypto_it_tab) - inv_rnd1( +32(%ebp), crypto_it_tab) - inv_rnd2( +48(%ebp), crypto_it_tab) - inv_rnd1( +64(%ebp), crypto_it_tab) - inv_rnd2( +80(%ebp), crypto_it_tab) - inv_rnd1( +96(%ebp), crypto_it_tab) - inv_rnd2(+112(%ebp), crypto_it_tab) - inv_rnd1(+128(%ebp), crypto_it_tab) - inv_rnd2(+144(%ebp), crypto_il_tab) // last round uses a different table - -// move final values to the output array. CAUTION: the -// order of these assigns rely on the register mappings - - add $8,%esp - mov out_blk+12(%esp),%ebp - mov %r5,12(%ebp) - pop %edi - mov %r4,8(%ebp) - pop %esi - mov %r1,4(%ebp) - pop %ebx - mov %r0,(%ebp) - pop %ebp - ret -ENDPROC(aes_dec_blk) diff --git a/arch/x86/crypto/aes-x86_64-asm_64.S b/arch/x86/crypto/aes-x86_64-asm_64.S deleted file mode 100644 index 8739cf7795de..000000000000 --- a/arch/x86/crypto/aes-x86_64-asm_64.S +++ /dev/null @@ -1,185 +0,0 @@ -/* AES (Rijndael) implementation (FIPS PUB 197) for x86_64 - * - * Copyright (C) 2005 Andreas Steinmetz, - * - * License: - * This code can be distributed under the terms of the GNU General Public - * License (GPL) Version 2 provided that the above header down to and - * including this sentence is retained in full. 
- */ - -.extern crypto_ft_tab -.extern crypto_it_tab -.extern crypto_fl_tab -.extern crypto_il_tab - -.text - -#include -#include - -#define R1 %rax -#define R1E %eax -#define R1X %ax -#define R1H %ah -#define R1L %al -#define R2 %rbx -#define R2E %ebx -#define R2X %bx -#define R2H %bh -#define R2L %bl -#define R3 %rcx -#define R3E %ecx -#define R3X %cx -#define R3H %ch -#define R3L %cl -#define R4 %rdx -#define R4E %edx -#define R4X %dx -#define R4H %dh -#define R4L %dl -#define R5 %rsi -#define R5E %esi -#define R6 %rdi -#define R6E %edi -#define R7 %r9 /* don't use %rbp; it breaks stack traces */ -#define R7E %r9d -#define R8 %r8 -#define R10 %r10 -#define R11 %r11 - -#define prologue(FUNC,KEY,B128,B192,r1,r2,r5,r6,r7,r8,r9,r10,r11) \ - ENTRY(FUNC); \ - movq r1,r2; \ - leaq KEY+48(r8),r9; \ - movq r10,r11; \ - movl (r7),r5 ## E; \ - movl 4(r7),r1 ## E; \ - movl 8(r7),r6 ## E; \ - movl 12(r7),r7 ## E; \ - movl 480(r8),r10 ## E; \ - xorl -48(r9),r5 ## E; \ - xorl -44(r9),r1 ## E; \ - xorl -40(r9),r6 ## E; \ - xorl -36(r9),r7 ## E; \ - cmpl $24,r10 ## E; \ - jb B128; \ - leaq 32(r9),r9; \ - je B192; \ - leaq 32(r9),r9; - -#define epilogue(FUNC,r1,r2,r5,r6,r7,r8,r9) \ - movq r1,r2; \ - movl r5 ## E,(r9); \ - movl r6 ## E,4(r9); \ - movl r7 ## E,8(r9); \ - movl r8 ## E,12(r9); \ - ret; \ - ENDPROC(FUNC); - -#define round(TAB,OFFSET,r1,r2,r3,r4,r5,r6,r7,r8,ra,rb,rc,rd) \ - movzbl r2 ## H,r5 ## E; \ - movzbl r2 ## L,r6 ## E; \ - movl TAB+1024(,r5,4),r5 ## E;\ - movw r4 ## X,r2 ## X; \ - movl TAB(,r6,4),r6 ## E; \ - roll $16,r2 ## E; \ - shrl $16,r4 ## E; \ - movzbl r4 ## L,r7 ## E; \ - movzbl r4 ## H,r4 ## E; \ - xorl OFFSET(r8),ra ## E; \ - xorl OFFSET+4(r8),rb ## E; \ - xorl TAB+3072(,r4,4),r5 ## E;\ - xorl TAB+2048(,r7,4),r6 ## E;\ - movzbl r1 ## L,r7 ## E; \ - movzbl r1 ## H,r4 ## E; \ - movl TAB+1024(,r4,4),r4 ## E;\ - movw r3 ## X,r1 ## X; \ - roll $16,r1 ## E; \ - shrl $16,r3 ## E; \ - xorl TAB(,r7,4),r5 ## E; \ - movzbl r3 ## L,r7 ## E; \ - movzbl r3 ## H,r3 ## E; \ - xorl TAB+3072(,r3,4),r4 ## E;\ - xorl TAB+2048(,r7,4),r5 ## E;\ - movzbl r1 ## L,r7 ## E; \ - movzbl r1 ## H,r3 ## E; \ - shrl $16,r1 ## E; \ - xorl TAB+3072(,r3,4),r6 ## E;\ - movl TAB+2048(,r7,4),r3 ## E;\ - movzbl r1 ## L,r7 ## E; \ - movzbl r1 ## H,r1 ## E; \ - xorl TAB+1024(,r1,4),r6 ## E;\ - xorl TAB(,r7,4),r3 ## E; \ - movzbl r2 ## H,r1 ## E; \ - movzbl r2 ## L,r7 ## E; \ - shrl $16,r2 ## E; \ - xorl TAB+3072(,r1,4),r3 ## E;\ - xorl TAB+2048(,r7,4),r4 ## E;\ - movzbl r2 ## H,r1 ## E; \ - movzbl r2 ## L,r2 ## E; \ - xorl OFFSET+8(r8),rc ## E; \ - xorl OFFSET+12(r8),rd ## E; \ - xorl TAB+1024(,r1,4),r3 ## E;\ - xorl TAB(,r2,4),r4 ## E; - -#define move_regs(r1,r2,r3,r4) \ - movl r3 ## E,r1 ## E; \ - movl r4 ## E,r2 ## E; - -#define entry(FUNC,KEY,B128,B192) \ - prologue(FUNC,KEY,B128,B192,R2,R8,R1,R3,R4,R6,R10,R5,R11) - -#define return(FUNC) epilogue(FUNC,R8,R2,R5,R6,R3,R4,R11) - -#define encrypt_round(TAB,OFFSET) \ - round(TAB,OFFSET,R1,R2,R3,R4,R5,R6,R7,R10,R5,R6,R3,R4) \ - move_regs(R1,R2,R5,R6) - -#define encrypt_final(TAB,OFFSET) \ - round(TAB,OFFSET,R1,R2,R3,R4,R5,R6,R7,R10,R5,R6,R3,R4) - -#define decrypt_round(TAB,OFFSET) \ - round(TAB,OFFSET,R2,R1,R4,R3,R6,R5,R7,R10,R5,R6,R3,R4) \ - move_regs(R1,R2,R5,R6) - -#define decrypt_final(TAB,OFFSET) \ - round(TAB,OFFSET,R2,R1,R4,R3,R6,R5,R7,R10,R5,R6,R3,R4) - -/* void aes_enc_blk(stuct crypto_tfm *tfm, u8 *out, const u8 *in) */ - - entry(aes_enc_blk,0,.Le128,.Le192) - encrypt_round(crypto_ft_tab,-96) - encrypt_round(crypto_ft_tab,-80) -.Le192: 
encrypt_round(crypto_ft_tab,-64) - encrypt_round(crypto_ft_tab,-48) -.Le128: encrypt_round(crypto_ft_tab,-32) - encrypt_round(crypto_ft_tab,-16) - encrypt_round(crypto_ft_tab, 0) - encrypt_round(crypto_ft_tab, 16) - encrypt_round(crypto_ft_tab, 32) - encrypt_round(crypto_ft_tab, 48) - encrypt_round(crypto_ft_tab, 64) - encrypt_round(crypto_ft_tab, 80) - encrypt_round(crypto_ft_tab, 96) - encrypt_final(crypto_fl_tab,112) - return(aes_enc_blk) - -/* void aes_dec_blk(struct crypto_tfm *tfm, u8 *out, const u8 *in) */ - - entry(aes_dec_blk,240,.Ld128,.Ld192) - decrypt_round(crypto_it_tab,-96) - decrypt_round(crypto_it_tab,-80) -.Ld192: decrypt_round(crypto_it_tab,-64) - decrypt_round(crypto_it_tab,-48) -.Ld128: decrypt_round(crypto_it_tab,-32) - decrypt_round(crypto_it_tab,-16) - decrypt_round(crypto_it_tab, 0) - decrypt_round(crypto_it_tab, 16) - decrypt_round(crypto_it_tab, 32) - decrypt_round(crypto_it_tab, 48) - decrypt_round(crypto_it_tab, 64) - decrypt_round(crypto_it_tab, 80) - decrypt_round(crypto_it_tab, 96) - decrypt_final(crypto_il_tab,112) - return(aes_dec_blk) diff --git a/arch/x86/crypto/aes_glue.c b/arch/x86/crypto/aes_glue.c deleted file mode 100644 index 9e9d819e8bc3..000000000000 --- a/arch/x86/crypto/aes_glue.c +++ /dev/null @@ -1,71 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * Glue Code for the asm optimized version of the AES Cipher Algorithm - * - */ - -#include -#include -#include - -asmlinkage void aes_enc_blk(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in); -asmlinkage void aes_dec_blk(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in); - -void crypto_aes_encrypt_x86(struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src) -{ - aes_enc_blk(ctx, dst, src); -} -EXPORT_SYMBOL_GPL(crypto_aes_encrypt_x86); - -void crypto_aes_decrypt_x86(struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src) -{ - aes_dec_blk(ctx, dst, src); -} -EXPORT_SYMBOL_GPL(crypto_aes_decrypt_x86); - -static void aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) -{ - aes_enc_blk(crypto_tfm_ctx(tfm), dst, src); -} - -static void aes_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) -{ - aes_dec_blk(crypto_tfm_ctx(tfm), dst, src); -} - -static struct crypto_alg aes_alg = { - .cra_name = "aes", - .cra_driver_name = "aes-asm", - .cra_priority = 200, - .cra_flags = CRYPTO_ALG_TYPE_CIPHER, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct crypto_aes_ctx), - .cra_module = THIS_MODULE, - .cra_u = { - .cipher = { - .cia_min_keysize = AES_MIN_KEY_SIZE, - .cia_max_keysize = AES_MAX_KEY_SIZE, - .cia_setkey = crypto_aes_set_key, - .cia_encrypt = aes_encrypt, - .cia_decrypt = aes_decrypt - } - } -}; - -static int __init aes_init(void) -{ - return crypto_register_alg(&aes_alg); -} - -static void __exit aes_fini(void) -{ - crypto_unregister_alg(&aes_alg); -} - -module_init(aes_init); -module_exit(aes_fini); - -MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm, asm optimized"); -MODULE_LICENSE("GPL"); -MODULE_ALIAS_CRYPTO("aes"); -MODULE_ALIAS_CRYPTO("aes-asm"); diff --git a/crypto/Kconfig b/crypto/Kconfig index 0d80985016bf..2ed65185dde8 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1101,50 +1101,6 @@ config CRYPTO_AES_TI block. Interrupts are also disabled to avoid races where cachelines are evicted when the CPU is interrupted to do something else. -config CRYPTO_AES_586 - tristate "AES cipher algorithms (i586)" - depends on (X86 || UML_X86) && !64BIT - select CRYPTO_ALGAPI - select CRYPTO_AES - help - AES cipher algorithms (FIPS-197). AES uses the Rijndael - algorithm. 
- - Rijndael appears to be consistently a very good performer in - both hardware and software across a wide range of computing - environments regardless of its use in feedback or non-feedback - modes. Its key setup time is excellent, and its key agility is - good. Rijndael's very low memory requirements make it very well - suited for restricted-space environments, in which it also - demonstrates excellent performance. Rijndael's operations are - among the easiest to defend against power and timing attacks. - - The AES specifies three key sizes: 128, 192 and 256 bits - - See for more information. - -config CRYPTO_AES_X86_64 - tristate "AES cipher algorithms (x86_64)" - depends on (X86 || UML_X86) && 64BIT - select CRYPTO_ALGAPI - select CRYPTO_AES - help - AES cipher algorithms (FIPS-197). AES uses the Rijndael - algorithm. - - Rijndael appears to be consistently a very good performer in - both hardware and software across a wide range of computing - environments regardless of its use in feedback or non-feedback - modes. Its key setup time is excellent, and its key agility is - good. Rijndael's very low memory requirements make it very well - suited for restricted-space environments, in which it also - demonstrates excellent performance. Rijndael's operations are - among the easiest to defend against power and timing attacks. - - The AES specifies three key sizes: 128, 192 and 256 bits - - See for more information. - config CRYPTO_AES_NI_INTEL tristate "AES cipher algorithms (AES-NI)" depends on X86 From patchwork Wed Jun 12 12:48:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166552 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3640711ilk; Wed, 12 Jun 2019 05:48:57 -0700 (PDT) X-Google-Smtp-Source: APXvYqxE2hJZa6aXqrZbCjOzLwPYLrsTXha33xYKJL3n0Ke9VDZH2j98Rbf8kza+GH3QFgrXWO3j X-Received: by 2002:a17:90a:6544:: with SMTP id f4mr32946135pjs.17.1560343737625; Wed, 12 Jun 2019 05:48:57 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343737; cv=none; d=google.com; s=arc-20160816; b=CGpygvSiIMXb6Hh6pdBA/w9ON1DJeZKKKlwSxQBlmdp+ZlNEODxOfzoH+ZC0nKtKGm 8iCGgxzoRGlM6FrxMhcEPWUhHr9ucLsC+/vzSs8vum93euZ1hPWFFEzEg1/Vs4PtJjjY xwgnVtAh11JqsHFfRJ+1OVPpdzPEmz0plxOL+wdLMhMKjkscQPJAXPUusnVK5hCRNdpD Gr1+fOzHVLcuOKVksXw50k7cHdq6EBkqF3z1BFB1vYtykNV/Zk1ViUmLAsdPgrKTO3ea 6Da3Ypk7Il2XcRq9kDXybG3PpCJcaDiS4SxWhl65V+tP4U0bjWb0CUHNCq9bgm9af7EI fJ3A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=35aKMFTnfvApYT0bZnbnGPl1odG7lcC67cVgsN970Jc=; b=kmJJ6MXXN2GOsZ3Zk21PyEKPFwzxab1Pn8Ml3TO3oGTSt/avGhRlRgRDr9J+lvDYSc Ma606pJnoVf2la/S8iSCr89/uGN6eJi/qVRiZRF91ZNgvUieIbcR4KuCWT1elc9eK2RA DzkP+sAqmgymkZ6dYVB1pHTHwNue1uDa8y65CgF/QppYjbNVYZ1kueAcnUX4hc8jdBOc b7FBCnt//8/oBzOn9PdbgVYV/OyGG6U0Mr/f2d7Mh3fH4BSzSpQWkesxq7kp02BFw8OF um1jq8T80l3Ur7dU1kXKGN2cZbngQbSk9kQIQsoU0XKNh/4C9g5hYKlhDGjEdiIBv7IX eCdg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=uesOFonJ; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org 
(vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.48.57; Wed, 12 Jun 2019 05:48:57 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=uesOFonJ; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2409163AbfFLMs4 (ORCPT + 3 others); Wed, 12 Jun 2019 08:48:56 -0400 Received: from mail-wr1-f66.google.com ([209.85.221.66]:33312 "EHLO mail-wr1-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2409157AbfFLMs4 (ORCPT ); Wed, 12 Jun 2019 08:48:56 -0400 Received: by mail-wr1-f66.google.com with SMTP id n9so16819636wru.0 for ; Wed, 12 Jun 2019 05:48:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=35aKMFTnfvApYT0bZnbnGPl1odG7lcC67cVgsN970Jc=; b=uesOFonJATVd0Dhi+ZGYHzoBuGkkmkYtb0S7HQv1Hz59/RBe4i5p43ZVuPUMbAGNHx 9bpEHnn3XLn+RgRUFTon52E1BHildbm3m0MKNdS97AHqyTiqUkwWy1VHjpoZLou1P7AG r6I3q/xWtdUl+qLTdEO9jqMSgUoxL5QnSYzyV4l22NQ322Y1N09yiOP49WkSD8lVneJY 0rvI97kyanKet4adT23Xdj9kb9nzLJnzZK2itXiUChvrbGQiNl/foh5tvhZ3f4RwPhug ADKDGvtOpFHPbKSrQCbUKE88bbtYeKZWOeLH/fN+NcVVMwnOC3lgFXo5Fq3XhuRYhMMG R8Zg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=35aKMFTnfvApYT0bZnbnGPl1odG7lcC67cVgsN970Jc=; b=cDGHxiFEoxSvLyWHQUg/XPpm5gDSl3mtUzVmCPnnphlvWBcf+Mjh80DMUP/uBl9yVq 8R8UFSruv4CoP6AYsg1eFAz+aS+2dogvfayTw7hiewXMnkPItqBKhLIqZu2jsSK1Z3P9 sBHSA37k+vVCiUPZ3zIoUQ0c8+9Vi+mSQg1rDdMbYszYMRdQjWmTH0KCbqztq30E9RsH tVCTDOB6bbqtuj8TrH1GCu3X4en2MI7+sLwE2uqdOBVcM/0LSd4TA90cVvruXIkZRIl+ cYfl8ymN3Ta/RxIMb7zPxtFctObDf4enHW2+iFOF2IflrDHrxkXRfMVABBPNLAYlZ1/L TcfQ== X-Gm-Message-State: APjAAAUz0xJnydBh/kpIh/n5dPC71TUAzjLJ82UUwO5H1tq4+iGfj+3c vrhcT6jP/QVFg3epHLJ4jFMwSepBsPQxgQ== X-Received: by 2002:a5d:4a0b:: with SMTP id m11mr7629850wrq.251.1560343734764; Wed, 12 Jun 2019 05:48:54 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.53 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:48:54 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 07/20] crypto: padlock/aes - switch to library version of key expansion routine Date: Wed, 12 Jun 2019 14:48:25 +0200 Message-Id: <20190612124838.2492-8-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Switch to the new AES library that also provides an implementation of the AES key expansion routine. 
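The driver-side conversion in this patch and the two that follow boils down to the pattern sketched below. This is an illustrative sketch only, not code from the patch: example_setkey() is a made-up helper, and it assumes the aes_expandkey() prototype is available from <crypto/aes.h> as set up earlier in the series.

    #include <crypto/aes.h>    /* struct crypto_aes_ctx, aes_expandkey() */

    /*
     * Hypothetical setkey helper: expand the user-supplied AES key with
     * the library routine instead of crypto_aes_expand_key() from the
     * generic cipher. aes_expandkey() validates the key length itself and
     * returns -EINVAL for anything other than 16, 24 or 32 bytes.
     */
    static int example_setkey(struct crypto_aes_ctx *ctx,
                              const u8 *key, unsigned int keylen)
    {
        int ret;

        ret = aes_expandkey(ctx, key, keylen);
        if (ret)
            return ret;

        /* ctx->key_enc[] and ctx->key_dec[] now hold the round keys */
        return 0;
    }
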
This removes the dependency on the generic AES cipher, allowing it to be omitted entirely in the future. Signed-off-by: Ard Biesheuvel --- drivers/crypto/Kconfig | 2 +- drivers/crypto/padlock-aes.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) -- 2.20.1 diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index 0af08081e305..b7557eb69409 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -27,7 +27,7 @@ config CRYPTO_DEV_PADLOCK_AES tristate "PadLock driver for AES algorithm" depends on CRYPTO_DEV_PADLOCK select CRYPTO_BLKCIPHER - select CRYPTO_AES + select CRYPTO_LIB_AES help Use VIA PadLock for AES algorithm. diff --git a/drivers/crypto/padlock-aes.c b/drivers/crypto/padlock-aes.c index ad020133da19..e73eab9bc22a 100644 --- a/drivers/crypto/padlock-aes.c +++ b/drivers/crypto/padlock-aes.c @@ -145,7 +145,7 @@ static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key, ctx->cword.encrypt.keygen = 1; ctx->cword.decrypt.keygen = 1; - if (crypto_aes_expand_key(&gen_aes, in_key, key_len)) { + if (aes_expandkey(&gen_aes, in_key, key_len)) { *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; return -EINVAL; } From patchwork Wed Jun 12 12:48:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166565 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3640735ilk; Wed, 12 Jun 2019 05:48:59 -0700 (PDT) X-Google-Smtp-Source: APXvYqz59xBZfJ48+YOt6xcAVaDAExRoq7zEOLH5APZFrqSFP3B9ejhlku3TRf90EplvxRZXVt8E X-Received: by 2002:a62:2784:: with SMTP id n126mr19650642pfn.61.1560343739214; Wed, 12 Jun 2019 05:48:59 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343739; cv=none; d=google.com; s=arc-20160816; b=qY18GO5dz59+A+ZWEgBTS9rv7v5/BQYVtv/eR1VJqihiCTsH5JeqeaGFZ5wW8vHlsa AI5BLZ3ltTfhPmiyScRKKWpAQQbD5w/4mRFgYY2U+WF4d0fOu69sNEUSZ7Vosqi7d/wC XMEev/yISy9a9D+/BoA2OQovCe4j/69+jLEv7qcMUn7eFy4fm2gRl1ZtUdDJN51lWOFb TBl0NtefLYhM3liF1vJMC2oViZJOPypRAhDdUD6rhlu5DH0brhm8W3xUKRGXdaoIhzOU WSlo31AyEL2qyfFiFxwkp28oCIho2R1lkSlKjORxzW9dZY5ZEGYCckqwugSZU7qQ0HfV 5aAg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=1JPW+Khv2z/aWoRuZihhCURcfSA9tf7hrRP493vIL8U=; b=OnL589IiSPLBw7t8yp7PdL2Wr6VrxGXmXfAgc1p+R7pTmrPaSNkejUx/c6QU254kZd fUolwT+GoDTAsgVJunpVHBuoUsotrrxkAHNG0+0M1JiDVPdd6bQDjmZUsnzWHMdOzeus /UW9/EXNEMB9JqPUkPBWsOgIGfdNpZJd/QY7hRwzKzMH/ASy0/lRJiiBIpHvVzL3XpAm BoeIhv6FsRXN1lqPg3oKrwKvhpPk53Ld/YRaWpxvvin6J8c7mPgTXJU+m1tDqm7/oeKv rI65iirIpp3TKVX+8JliDxRzpqnC+MMs6F/65aDQ9TBxbjNOIQslmt0qDtvnufiKXt+B 7hXA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=L4vm5I00; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.48.59; Wed, 12 Jun 2019 05:48:59 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=L4vm5I00; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2439224AbfFLMs6 (ORCPT + 3 others); Wed, 12 Jun 2019 08:48:58 -0400 Received: from mail-wr1-f67.google.com ([209.85.221.67]:45956 "EHLO mail-wr1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2409162AbfFLMs6 (ORCPT ); Wed, 12 Jun 2019 08:48:58 -0400 Received: by mail-wr1-f67.google.com with SMTP id f9so16737687wre.12 for ; Wed, 12 Jun 2019 05:48:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=1JPW+Khv2z/aWoRuZihhCURcfSA9tf7hrRP493vIL8U=; b=L4vm5I00wk/ySgVsbGUIEtgHpepAypN9jVidRQ0lAW894f729XPrQ0KYdwsDyrslWT ncRcENt6WFZV7F+xqgrkMGCPqc7lGdRyzr8KXfatIm2TYMo0rHtSS0+Hkf/tOBC19zQ4 GZzHWR6pX8J2xJ6MdwaRyP3nJI8X8xJHhg8tnDal5qR8RjrXMsIFrsQ6TKEQnKhAEoht P0zp+nBE1+pTkNhHMxJyICEKBQ5TkTwSeV4oI1eg4bnJE1cpVNx/BWJTSdxgsbhtlZxE tAiuorRRi+S00qNTUOX9smGdKAeq09GawFffIHn2lVO6eqSqhclKZBvx1PSVDtRsoYsE nt7w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=1JPW+Khv2z/aWoRuZihhCURcfSA9tf7hrRP493vIL8U=; b=kE7KyRNPorsUT+VXuZr/3XxvFrL2O0Tw8cu5r92aoY59jm+r5JHOBCrCaYh0IJNmiB uePHFIumV80kLP28dz5HOurGP21fFjGWmouWKgxg4/W/EdkQEnIOQadEw8BhCbf17GSH xcZ85GmSQucs6OQLer/nnKzj2zPw7yEkCQtcu+bIC/R6DE+cQMAFwu/vWUXpx7gtFb6C LsYUwrr2nlZkQcztW5D4ptbOn0/RJnOPf6gSn+P4yuUl6k6SDHqS5hGbM8NVvI6RMVou tIzGMsCTSaSL1x1pr+lBtQ28ovnyWGzfXE4u/Oh1adJVqYwvJj+OnQBJnEFJkRxH7vbp MeYA== X-Gm-Message-State: APjAAAUur7QdbFLvWSKuoTVq3n9YhEZe3NKDG5+zNWCWjQpMCR55EAgH 5db+ZUS4JpSbD/Hr5iK907ZZJWVnHgEhoA== X-Received: by 2002:a05:6000:1285:: with SMTP id f5mr13986859wrx.85.1560343735715; Wed, 12 Jun 2019 05:48:55 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.54 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:48:55 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 08/20] crypto: cesa/aes - switch to library version of key expansion routine Date: Wed, 12 Jun 2019 14:48:26 +0200 Message-Id: <20190612124838.2492-9-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Switch to the new AES library that also provides an implementation of the AES key expansion routine. 
This removes the dependency on the generic AES cipher, allowing it to be omitted entirely in the future. Signed-off-by: Ard Biesheuvel --- drivers/crypto/Kconfig | 2 +- drivers/crypto/marvell/cipher.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) -- 2.20.1 diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index b7557eb69409..539592e1d6f1 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -214,7 +214,7 @@ config CRYPTO_CRC32_S390 config CRYPTO_DEV_MARVELL_CESA tristate "Marvell's Cryptographic Engine driver" depends on PLAT_ORION || ARCH_MVEBU - select CRYPTO_AES + select CRYPTO_LIB_AES select CRYPTO_DES select CRYPTO_BLKCIPHER select CRYPTO_HASH diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c index 2fd936b19c6d..debe7d9f00ae 100644 --- a/drivers/crypto/marvell/cipher.c +++ b/drivers/crypto/marvell/cipher.c @@ -257,7 +257,7 @@ static int mv_cesa_aes_setkey(struct crypto_skcipher *cipher, const u8 *key, int ret; int i; - ret = crypto_aes_expand_key(&ctx->aes, key, len); + ret = aes_expandkey(&ctx->aes, key, len); if (ret) { crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); return ret; From patchwork Wed Jun 12 12:48:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166566 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3640758ilk; Wed, 12 Jun 2019 05:49:00 -0700 (PDT) X-Google-Smtp-Source: APXvYqyB25aAzUNahKePMfAhjFcI2iy8VwE/BZGtORVh+FDAJ9EA4rRO8qYvxFoZEFNcK0P66eol X-Received: by 2002:a63:5d45:: with SMTP id o5mr25390209pgm.40.1560343740503; Wed, 12 Jun 2019 05:49:00 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343740; cv=none; d=google.com; s=arc-20160816; b=STL5G+jG0z3lZJ5T4mCSHyR+vwY6Rxf6KWR3EqH8w56Ls4v64IyUBiErGs9nONwuhb uHHgvWJdMERwR7iwLgEv9rxNlfQ4pTmOE37oP7Aghb93pimWZtvx9k7eJbZPZ50pGTvL rzFtqSY2gP0CFAluXtOzWIr/2Ut9737sPdMP7CI2KNmd04cO+PoPVvgHZx/3b4/LTW12 1SKMcKPOP2cfaR+ODosj4KadNSwlLJznIoKwe74Big78X4v56X883WmrYPozfRvsil3R ZJhxg7TjLIFYNCGkC1DorUg4YlmQRgPx5Eeotto5iZ0zMzulhJK0Cev+luS0vKR8FwUG bIUA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=Rp2SyHztXuSc5EPDjn72803h6AQOk7vXYYLU7TdxcTM=; b=NFijlbR2YJzQQZKxP0CYbbOlVa18ndR2iAIdjeS1BacCZCQwGaz9gIhhUYGgcvl+0r Juj+Xz2xCF02oEYZztJWbRIQVrIJtg5WrKUD3vHNUkJXlAiXM5eNWWEFMcB0RQgZW8wX 8gprmMu/uzuhUuxVJgzDhOEhRWpRN/yFPFAdzNOB7fIFRBMi7mltuIhT4URYNo+Hk9lm wfbjWCROu6ijg35/jYYBDwwA5gZTv3v5pSfJeYnuUoIasHLlOV26omnTmSSOgY++bKlr QJbhzo1smDFjlP3T9pEIGp1Z1DXzOezhNUrrGBX7WA/cmyepPo8hRs6/DwE05/3mLj80 G+Jg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=i589ezzq; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.49.00; Wed, 12 Jun 2019 05:49:00 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=i589ezzq; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2406812AbfFLMs7 (ORCPT + 3 others); Wed, 12 Jun 2019 08:48:59 -0400 Received: from mail-wr1-f66.google.com ([209.85.221.66]:43857 "EHLO mail-wr1-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2439221AbfFLMs7 (ORCPT ); Wed, 12 Jun 2019 08:48:59 -0400 Received: by mail-wr1-f66.google.com with SMTP id p13so6668261wru.10 for ; Wed, 12 Jun 2019 05:48:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Rp2SyHztXuSc5EPDjn72803h6AQOk7vXYYLU7TdxcTM=; b=i589ezzqAAjlFViP74UzWJqZJFiqtT1FTmC1a0q0ImhT7u9tc+tCz60xh9CJLqUg8c UNA2tMsx9Yh1HnZlM21Aw5fUZlhYeViGelWAyjbm6p++n52DTRJpXRcZzw65FVgIjJqV OjBCZYAp5n3o3nQr3y/iU0RTpzpqIXaiPOVeiHghbqoDr2wiSK57WSDk3WeZfHGfj/bK zsIRzC2I36pon57I3tT8KN3q2OnloiQuNjPfEnMDRpSJENgsjy6djRQjffwk+WgctCp/ lvuNIGWEjds6ZOpJKkedu/x9IYDIUhgSive3a62OmT6cPwWsNTLTnuRC7p7oViSpyU71 Vewg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Rp2SyHztXuSc5EPDjn72803h6AQOk7vXYYLU7TdxcTM=; b=DrqPr7GtZlf+G7L/zd8Qj4Um8v0vPG/rb7soZ/0IDYIdaab7ZqmUz+wbls8uN8u6f/ KRSgKwu6Bus16q+kUn2IkrVRsm3YL75ErN3cEcPxaPXh2RKNOduwjxkt/6/dOmJ4jLny c/RxIsegu3nP7TsQDWFdbV1zNRN5OJew/rdfxlmLLV/maByFVzPcmbtGCT4gB9dmsvmv pBGQrcQGeSd5O0km36rgGtOb47lB38Id14ifPNagwc9eTIILTqyhrTfqoi8UIE/XXwOZ nT/ELBrnR7fMt+YQOnDZoMJkYe894RzMGSx5jizko7hmiI4XdslBMA7Z01qjzs8JomwA 5p9A== X-Gm-Message-State: APjAAAWMZfLUvRRucTWPSvjzPLNsb9IJKT/4/VJ+LhPnKfd2Z1SDNLdB 1ui5/Dd+HVXjPMABkRwyffB3HtCe7jNYkA== X-Received: by 2002:adf:db81:: with SMTP id u1mr53014316wri.296.1560343736822; Wed, 12 Jun 2019 05:48:56 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.55 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:48:56 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 09/20] crypto: safexcel/aes - switch to library version of key expansion routine Date: Wed, 12 Jun 2019 14:48:27 +0200 Message-Id: <20190612124838.2492-10-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Switch to the new AES library that also provides an implementation of the AES key expansion routine. 
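The block-level entry points of the same library are what the AES-NI fallback earlier in the series and the arm64 GHASH fallback below end up calling. As a rough orientation, a single-block round trip through those calls looks like the sketch here; it is not taken from any of the patches, and example_roundtrip() is a made-up function.

    #include <crypto/aes.h>

    /*
     * Illustrative only: expand a 128-bit key, then run one block through
     * aes_encrypt()/aes_decrypt(). Both operate on a single 16-byte block
     * and use the schedule prepared by aes_expandkey().
     */
    static void example_roundtrip(const u8 key[AES_KEYSIZE_128],
                                  const u8 in[AES_BLOCK_SIZE],
                                  u8 out[AES_BLOCK_SIZE])
    {
        struct crypto_aes_ctx ctx;
        u8 tmp[AES_BLOCK_SIZE];

        if (aes_expandkey(&ctx, key, AES_KEYSIZE_128))
            return;                  /* cannot fail for a 16-byte key */

        aes_encrypt(&ctx, tmp, in);  /* tmp holds the ciphertext of 'in' */
        aes_decrypt(&ctx, out, tmp); /* out matches 'in' again */
    }
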
This removes the dependency on the generic AES cipher, allowing it to be omitted entirely in the future. Signed-off-by: Ard Biesheuvel --- drivers/crypto/Kconfig | 2 +- drivers/crypto/inside-secure/safexcel_cipher.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) -- 2.20.1 diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index 539592e1d6f1..a6067bb5a6a2 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -701,7 +701,7 @@ config CRYPTO_DEV_SAFEXCEL tristate "Inside Secure's SafeXcel cryptographic engine driver" depends on OF depends on (ARM64 && ARCH_MVEBU) || (COMPILE_TEST && 64BIT) - select CRYPTO_AES + select CRYPTO_LIB_AES select CRYPTO_AUTHENC select CRYPTO_BLKCIPHER select CRYPTO_DES diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c index de4be10b172f..483632546260 100644 --- a/drivers/crypto/inside-secure/safexcel_cipher.c +++ b/drivers/crypto/inside-secure/safexcel_cipher.c @@ -158,7 +158,7 @@ static int safexcel_skcipher_aes_setkey(struct crypto_skcipher *ctfm, struct crypto_aes_ctx aes; int ret, i; - ret = crypto_aes_expand_key(&aes, key, len); + ret = aes_expandkey(&aes, key, len); if (ret) { crypto_skcipher_set_flags(ctfm, CRYPTO_TFM_RES_BAD_KEY_LEN); return ret; From patchwork Wed Jun 12 12:48:28 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166555 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3640788ilk; Wed, 12 Jun 2019 05:49:02 -0700 (PDT) X-Google-Smtp-Source: APXvYqzbcE4J5HRSjFTdAArW6Vx/o/kIy1ZlrqtRqLPYIztTP07qmw304eLZ6RwFwEtIFqlULx0p X-Received: by 2002:a17:90a:8c18:: with SMTP id a24mr31766092pjo.111.1560343741937; Wed, 12 Jun 2019 05:49:01 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343741; cv=none; d=google.com; s=arc-20160816; b=Q+L1SsdTW7nDXG2Ff83z7s3te7yM50X/WpYKc5CC71Go0ElQo3lRCj1LKWY7L1DLgg zn97ojHeF4V+JVTBMWmGQZZ5Q2oytrkY7mA5K+j0v+6bCmH/Ntq6n0QsQJNG8FmWoRg7 fI2gQn9FEhshhr2J4Nx0TrzNoD50asm3kzc7txPgwd1j6xjqcWEU2+nKwYjNUutJdMnr wKXorGk6EE8ExG+duF+vmBVzcW5YfGPwE76WXcJtgpqk8TSpqE1+IrHMUWSC1HFKdaJ5 uPaXG/kZJgVgG4tIZB79mIbyP1qH9pFlaUjsvTr450mZIthgxrL1m782+okBHSZi2AF1 QYAQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=8rzDm2pxsxDof2+7Zzrqa2tkF2eXq+Yv+cQAP5RFetY=; b=Q5JdVcay01krCIWEiTQkbeIJRCr8UltA7WNgBYVFob7y6wNOPwCZ5+bxdv8MVq0sD+ FDtaTCPNLKMBGKXgpoG6+8Jng10eLzb71tu+hEBvyn8nOUczYasS3i91ce+3cQZ2/0U1 A0FhAr50OnPt3TjU0XhqesRo1rXp1Vdyoms1o08Be1Mc1CnM3waGJPk6m2xtM5+ifGaL v1qScT+TllJpeZsLfIvJuNhJk3Fc9qxL/HWOIGvns9Lg30Gk0Dx/r6NX0vPYJE+4xE9L GsM83djA/4x1tPrzI8SCYzqXmykCLdi8T9FSg7uZJTWGOxmOqkykvF5pq0yP38UXpGka WFmg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="Rfb/Q/DV"; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.49.01; Wed, 12 Jun 2019 05:49:01 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="Rfb/Q/DV"; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2409159AbfFLMtB (ORCPT + 3 others); Wed, 12 Jun 2019 08:49:01 -0400 Received: from mail-wr1-f67.google.com ([209.85.221.67]:38209 "EHLO mail-wr1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2439226AbfFLMtA (ORCPT ); Wed, 12 Jun 2019 08:49:00 -0400 Received: by mail-wr1-f67.google.com with SMTP id d18so16761116wrs.5 for ; Wed, 12 Jun 2019 05:48:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=8rzDm2pxsxDof2+7Zzrqa2tkF2eXq+Yv+cQAP5RFetY=; b=Rfb/Q/DVnFDyufHbo81DeLA5Mgu81RRzPJ8RgVQOhA92jjmdOkINg6u/Q2a+Djf8+V H7jl1IjuN6mJCuKDQmh4TufNwBNNkJU3Db5exPStu0jQfCbAYkTFikAFFRUw//WhEPIW 7zeCHsJnRdYMDSwwFOP3XJDPQMjX/4bdacNOPgGwnXAhtbuwuJTN9BKLuK769EHOH+NK 7bIFZURO5vC42W6Zw2A/4LMESvl5wfXIPEq0DgRx2TLANgOxDMbQYgNpdxXieFcd8DOl fIdzB92oSna3AebyUB6Vrhp4gPUf5Yw1Oq3A2DUI5MVTgaIsvYuXlLxgNz+Oyiw5mjDw SgNg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=8rzDm2pxsxDof2+7Zzrqa2tkF2eXq+Yv+cQAP5RFetY=; b=kr8X39MnFBQl3L0oDK5ySbLeoZqWACyNb3SY77Cl5Kq8YEZr8hWqPoaMECA9fNOH+v ZBZk057QPZHaU+eoCw/KQyUIb5NsVogKXEpuLIhQEW/+zHeDUQuiAkBLKwm0Uw4i3RG7 RPh+4S+O98mjQ5q3qgoTKTqemKsd1VSgh1i8SaBdW601jXW+fYDNh3HE5W/ImIYaRfOa o9pY/itf3Um/qBuAMXz+Wx86fSwlzBiuw4e0wLXSEKLygBkmVBfkRaJMR5GbIXPRqaai EEmJLTcMd3V/u064inbGXlS6QA5Kxs+iAdxi4NB303a2F0S8hnKkA4vrwRSuA1mXKhKB ViIQ== X-Gm-Message-State: APjAAAXUzA0uomDF4W2C+zylJVwwl5/rJrUjGrXrZ4Q5yFyPhUqLWpWh HyATVwRR998+VzsmTTorqiHL2gqW3XRfJw== X-Received: by 2002:a5d:67cd:: with SMTP id n13mr42203811wrw.138.1560343737744; Wed, 12 Jun 2019 05:48:57 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.56 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:48:57 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 10/20] crypto: arm64/ghash - switch to AES library Date: Wed, 12 Jun 2019 14:48:28 +0200 Message-Id: <20190612124838.2492-11-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The GHASH code uses the generic AES key expansion routines, and calls directly into the scalar table based AES cipher for arm64 from the fallback path, and since this implementation 
is known to be non-time invariant, doing so from a time invariant SIMD cipher is a bit nasty. So let's switch to the AES library - this makes the code more robust, and drops the dependency on the generic AES cipher, allowing us to omit it entirely in the future. Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/Kconfig | 3 +- arch/arm64/crypto/ghash-ce-glue.c | 30 +++++++------------- 2 files changed, 11 insertions(+), 22 deletions(-) -- 2.20.1 diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index d9a523ecdd83..1762055e7093 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -58,8 +58,7 @@ config CRYPTO_GHASH_ARM64_CE depends on KERNEL_MODE_NEON select CRYPTO_HASH select CRYPTO_GF128MUL - select CRYPTO_AES - select CRYPTO_AES_ARM64 + select CRYPTO_LIB_AES config CRYPTO_CRCT10DIF_ARM64_CE tristate "CRCT10DIF digest algorithm using PMULL instructions" diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c index b39ed99b06fb..90496765d22f 100644 --- a/arch/arm64/crypto/ghash-ce-glue.c +++ b/arch/arm64/crypto/ghash-ce-glue.c @@ -73,8 +73,6 @@ asmlinkage void pmull_gcm_decrypt(int blocks, u64 dg[], u8 dst[], asmlinkage void pmull_gcm_encrypt_block(u8 dst[], u8 const src[], u32 const rk[], int rounds); -asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds); - static int ghash_init(struct shash_desc *desc) { struct ghash_desc_ctx *ctx = shash_desc_ctx(desc); @@ -312,14 +310,13 @@ static int gcm_setkey(struct crypto_aead *tfm, const u8 *inkey, u8 key[GHASH_BLOCK_SIZE]; int ret; - ret = crypto_aes_expand_key(&ctx->aes_key, inkey, keylen); + ret = aes_expandkey(&ctx->aes_key, inkey, keylen); if (ret) { tfm->base.crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; return -EINVAL; } - __aes_arm64_encrypt(ctx->aes_key.key_enc, key, (u8[AES_BLOCK_SIZE]){}, - num_rounds(&ctx->aes_key)); + aes_encrypt(&ctx->aes_key, key, (u8[AES_BLOCK_SIZE]){}); return __ghash_setkey(&ctx->ghash_key, key, sizeof(be128)); } @@ -470,7 +467,7 @@ static int gcm_encrypt(struct aead_request *req) rk = ctx->aes_key.key_enc; } while (walk.nbytes >= 2 * AES_BLOCK_SIZE); } else { - __aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv, nrounds); + aes_encrypt(&ctx->aes_key, tag, iv); put_unaligned_be32(2, iv + GCM_IV_SIZE); while (walk.nbytes >= (2 * AES_BLOCK_SIZE)) { @@ -481,8 +478,7 @@ static int gcm_encrypt(struct aead_request *req) int remaining = blocks; do { - __aes_arm64_encrypt(ctx->aes_key.key_enc, - ks, iv, nrounds); + aes_encrypt(&ctx->aes_key, ks, iv); crypto_xor_cpy(dst, src, ks, AES_BLOCK_SIZE); crypto_inc(iv, AES_BLOCK_SIZE); @@ -498,13 +494,10 @@ static int gcm_encrypt(struct aead_request *req) walk.nbytes % (2 * AES_BLOCK_SIZE)); } if (walk.nbytes) { - __aes_arm64_encrypt(ctx->aes_key.key_enc, ks, iv, - nrounds); + aes_encrypt(&ctx->aes_key, ks, iv); if (walk.nbytes > AES_BLOCK_SIZE) { crypto_inc(iv, AES_BLOCK_SIZE); - __aes_arm64_encrypt(ctx->aes_key.key_enc, - ks + AES_BLOCK_SIZE, iv, - nrounds); + aes_encrypt(&ctx->aes_key, ks + AES_BLOCK_SIZE, iv); } } } @@ -608,7 +601,7 @@ static int gcm_decrypt(struct aead_request *req) rk = ctx->aes_key.key_enc; } while (walk.nbytes >= 2 * AES_BLOCK_SIZE); } else { - __aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv, nrounds); + aes_encrypt(&ctx->aes_key, tag, iv); put_unaligned_be32(2, iv + GCM_IV_SIZE); while (walk.nbytes >= (2 * AES_BLOCK_SIZE)) { @@ -621,8 +614,7 @@ static int gcm_decrypt(struct aead_request *req) pmull_ghash_update_p64); do { - __aes_arm64_encrypt(ctx->aes_key.key_enc, - 
buf, iv, nrounds); + aes_encrypt(&ctx->aes_key, buf, iv); crypto_xor_cpy(dst, src, buf, AES_BLOCK_SIZE); crypto_inc(iv, AES_BLOCK_SIZE); @@ -640,11 +632,9 @@ static int gcm_decrypt(struct aead_request *req) memcpy(iv2, iv, AES_BLOCK_SIZE); crypto_inc(iv2, AES_BLOCK_SIZE); - __aes_arm64_encrypt(ctx->aes_key.key_enc, iv2, - iv2, nrounds); + aes_encrypt(&ctx->aes_key, iv2, iv2); } - __aes_arm64_encrypt(ctx->aes_key.key_enc, iv, iv, - nrounds); + aes_encrypt(&ctx->aes_key, iv, iv); } } From patchwork Wed Jun 12 12:48:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166556 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3640805ilk; Wed, 12 Jun 2019 05:49:02 -0700 (PDT) X-Google-Smtp-Source: APXvYqwGpqR4PhnI9kN7oLvHERXkAuPhyLUwgML+DGTRSXnmCP3O5Pgs5Y/FYXclNHpzNb0+Vnts X-Received: by 2002:a62:e518:: with SMTP id n24mr30337436pff.102.1560343742539; Wed, 12 Jun 2019 05:49:02 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343742; cv=none; d=google.com; s=arc-20160816; b=REzEqphwSk3/yRtKRrwgj/DkZpzd8+GN6Iwp2qMCCSfG0brNcYwc3BFJkSk3tHdprk 9mPDSQgYlUq+vqkKmqC7FEqz2JIbi+fvgB0/ARapwPTfWpy/rqzSlrvJchcjelLq7B+0 64VkmfWhh02dvKRxiOWdfEJhuSjtTBPgWliQXBfn3xUNf3K6i++n/u0JOiK8ZF4FrtYC RopXBiHCv+C0nWWX0cYir/NnWB7eGlbiB2rvLWtdlM+zBQInwHee+fILPp3yjsY55pq1 i5oixxmVs6INOQjVeGU05dC+befbmAUTnMqMQKyjtXED1kIAGx2xrSbc4SYtOEbfMphN GTZQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=QXrlKaueW49CmOlW9q/RYTZvkffZ+6o0SuGgZq9akzA=; b=dcaOM+iy6E5iOL6HBRo2WGbnsRt4jAgv3Gv8tvWtJoIZZpln14FKjk3EoqmKhbmkm5 dkChHxas2ALXxTdYuojRtmbaROP80cWOBFT19i5zUtMmtSnFbGrh7qJRferP9TLXufBB YT9hZSrSW/aWiZSZ/GCA/tocUg4eV+7n/ZlvIPCL6pijMHxdoGaPv3U5NdnsKg3YY4ih 2s9+GaFiwGOAsKC8zJsrrBPhQB2ZsKzOhmkoqHswhneicRroDc8vhPrCA83N2TDaEFsZ ZuDPTW1Iip17cuSJDhfn9iWJw0gF+tM1ZHiybZ9pgfsZhioQKE2XA7C6A0Oh+RpnjP4d 7uHQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=Bc8YNuhg; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.49.02; Wed, 12 Jun 2019 05:49:02 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=Bc8YNuhg; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2439226AbfFLMtB (ORCPT + 3 others); Wed, 12 Jun 2019 08:49:01 -0400 Received: from mail-wm1-f67.google.com ([209.85.128.67]:55471 "EHLO mail-wm1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2409157AbfFLMtB (ORCPT ); Wed, 12 Jun 2019 08:49:01 -0400 Received: by mail-wm1-f67.google.com with SMTP id a15so6429725wmj.5 for ; Wed, 12 Jun 2019 05:48:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=QXrlKaueW49CmOlW9q/RYTZvkffZ+6o0SuGgZq9akzA=; b=Bc8YNuhgRlYAqFqT5jtNDi4QTQiRYHmu4iimah+AMG+XNBcM4k/qp0UjZj6sdDr7++ qDgYUaHtKrj+24P0Ta+7rPPz6QrRYKIg4I/0DL+NmM8+bL9KJ89p8E/Hxn0gCKlTppog 2i4GGHcLUygzvj0xSb37E6y2gtjOiHtpQIkGO6fwUBtS8KMPWnB62lO5Q60zYmwSK+9T 83x72iiwWV/iz/077WlhuVns9Z5y+NDBNjpVxKwVDQ30INPw3m/ATjYlvSiD6oN2LwNZ mC8z6DbH65y6ZcGCzPcEwvckeIfXZN/7Fv+pT1xNEt/5xNe7TGUa6TPuZ/Vq7mIFQY1i TOHA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=QXrlKaueW49CmOlW9q/RYTZvkffZ+6o0SuGgZq9akzA=; b=SAhXkaM4QR/poxvBAPPGsWpR2Y6HO4oEQRLEo4IBEiA9KX4IGtp9cgdMvVZ64eGnnj TFc4I+fVGTWjmgHPMjDOpW/fvMFkgb6a5PWyX0jTpzSdzpW3kgnvtohxn6GKP6DCiqA/ JP6YVjJvyGA8IkV4B1UogJ9iXbSxdBUoWb3g/LrlgzteT4o2NYHwaG0di/kq8OUuNtnE 6+AjudIoedvmMCXXxzQqbfOxsxX6qrsdhXKELIMctn1L38+/jpZTyjXFSvPVtQLZuwcE 5E87M41s034LmkdAIAwtSXdOKiteDCdEMU082DVWozwHalRjxEzvYvnEpZMAEjdGHZPH lBbQ== X-Gm-Message-State: APjAAAVnEmWONEqdvelDVutD9VHXUkUFJZQuo+KQUx2D5dmS4Jr73PLr Ksm/LMLGqV3F9cEz3toZ9pKG26iSr3JCSg== X-Received: by 2002:a1c:f415:: with SMTP id z21mr9228056wma.34.1560343738709; Wed, 12 Jun 2019 05:48:58 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.57 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:48:58 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 11/20] crypto: arm/aes-neonbs - switch to library version of key expansion routine Date: Wed, 12 Jun 2019 14:48:29 +0200 Message-Id: <20190612124838.2492-12-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Switch to the new AES library that also provides an implementation of the AES key expansion routine. 
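The bit-sliced NEON code only needs the generic key schedule transiently, so the glue code expands into a temporary crypto_aes_ctx on the stack and then converts it into its own layout. A minimal sketch of that idiom follows; the function name is invented, the repacking step is elided, and the memzero_explicit() wipe is an assumption about stack hygiene rather than a quote of the driver:

#include <linux/string.h>       /* memzero_explicit() */
#include <crypto/aes.h>         /* struct crypto_aes_ctx, aes_expandkey() */

static int example_expand_and_repack(const u8 *in_key, unsigned int key_len)
{
        struct crypto_aes_ctx rk;       /* transient schedule on the stack */
        int err;

        err = aes_expandkey(&rk, in_key, key_len);
        if (err)
                return err;

        /* ... repack rk.key_enc[] into the bit-sliced representation ... */

        /* Do not leave round keys behind on the stack. */
        memzero_explicit(&rk, sizeof(rk));
        return 0;
}

The real setkey goes on to convert the schedule into the bit-sliced key layout, which the sketch leaves as a placeholder comment.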
This removes the dependency on the generic AES cipher, allowing it to be omitted entirely in the future. Signed-off-by: Ard Biesheuvel --- arch/arm/crypto/Kconfig | 2 +- arch/arm/crypto/aes-neonbs-glue.c | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) -- 2.20.1 diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig index a95322b59799..b24df84a1d7a 100644 --- a/arch/arm/crypto/Kconfig +++ b/arch/arm/crypto/Kconfig @@ -82,8 +82,8 @@ config CRYPTO_AES_ARM_BS tristate "Bit sliced AES using NEON instructions" depends on KERNEL_MODE_NEON select CRYPTO_BLKCIPHER + select CRYPTO_LIB_AES select CRYPTO_SIMD - select CRYPTO_AES help Use a faster and more secure NEON based implementation of AES in CBC, CTR and XTS modes diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c index 617c2c99ebfb..f43c9365b6a9 100644 --- a/arch/arm/crypto/aes-neonbs-glue.c +++ b/arch/arm/crypto/aes-neonbs-glue.c @@ -64,7 +64,7 @@ static int aesbs_setkey(struct crypto_skcipher *tfm, const u8 *in_key, struct crypto_aes_ctx rk; int err; - err = crypto_aes_expand_key(&rk, in_key, key_len); + err = aes_expandkey(&rk, in_key, key_len); if (err) return err; @@ -123,7 +123,7 @@ static int aesbs_cbc_setkey(struct crypto_skcipher *tfm, const u8 *in_key, struct crypto_aes_ctx rk; int err; - err = crypto_aes_expand_key(&rk, in_key, key_len); + err = aes_expandkey(&rk, in_key, key_len); if (err) return err; From patchwork Wed Jun 12 12:48:30 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166557 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3640814ilk; Wed, 12 Jun 2019 05:49:03 -0700 (PDT) X-Google-Smtp-Source: APXvYqz9Ncgei4cQx0tgQCphJBBLnTG5QFQeqFaTyUl2WomS9ZtelNxcQAA4M1+A/HhNuFk+zQeA X-Received: by 2002:a63:c94f:: with SMTP id y15mr25316751pgg.159.1560343743109; Wed, 12 Jun 2019 05:49:03 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343743; cv=none; d=google.com; s=arc-20160816; b=KXJnq8whYEzlX0u9g+u4el2iTd2Ta20HJ18n2E8OhjQEhFohuhcZH1OqPxDGQkT/ml 1TIjC66HJQCQlAFICxNc95zLX+LwbaIPkOpk+grsEz3WXT0e4jpoBgySgSzv873nwdIi 8nCk+4qUkGS1mX6KsP5C1M5i+8UtbvSpWgup0bjqySGRSMyyLDfVH3LjkJnyszq6eInf E0wHvziQRKEZvRNqycusK4w5fyg3tyU0rP/yEgPLNhB2NbsXDoc7JQzQZ36O2ZYgwMjG xmYGY6coCtVdEaDNzMKGBnenPGy1V9iEGv8JTJEIHcIepobtPbEFn0JAwm8SJVjLvXML b9DQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=Pgzq94PpKRgLR2IqqS+QLXIksHPk3/jPtTuBsqGb9fM=; b=fN/ruXnNnzl5044AXMK6XzNCZ6D9ZY9CY+Pa1LahfralapbCI+kXhqPKju+cCgrUtW 2tVd9CW+bq8Xn356jL0FzpDsT37t7I2FQodDD0QjSGjbgp/mfzX/k3YK7ToF96mcZNwT 10BqJhPpt4tS/VVNMumYn1do4VMWL/AYt7NvpRr+Ojb5rjWXKTsZY5+pK8womnOjRrY3 tmrBezirO8lI6yIduOaJaFJXiZ3W1b6A0iMg+DrdGhTz04a1jOcZ/7JphwRik8EcKF0n Bjf5/+CK0USrkepECCyrNggDe+D+Y2RAICqwX9/Q5Vv14QCrC5utGv6BnKR5aC6cQfl/ KNMg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=pEI0k+6w; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.49.02; Wed, 12 Jun 2019 05:49:03 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=pEI0k+6w; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2439228AbfFLMtC (ORCPT + 3 others); Wed, 12 Jun 2019 08:49:02 -0400 Received: from mail-wm1-f66.google.com ([209.85.128.66]:39434 "EHLO mail-wm1-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2439221AbfFLMtC (ORCPT ); Wed, 12 Jun 2019 08:49:02 -0400 Received: by mail-wm1-f66.google.com with SMTP id z23so6394877wma.4 for ; Wed, 12 Jun 2019 05:49:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Pgzq94PpKRgLR2IqqS+QLXIksHPk3/jPtTuBsqGb9fM=; b=pEI0k+6wRBRQqYRce8OQ6w7+qzDTcY9VdLpprLUjALVpHrp3KXp8B7+l0jeIyQ8lQ7 9dp9dDJ4UejLYCht5Til3Ll+9/chqnWTJuUA8+xnk3oYaN7+z2uwhZb6FsJnU873/E29 N5bxyAaFyrbhvo6zscxxdXLBqf6XG498RaeA4Fe/ypTpVRzQWJl2I8CjFWrF3d9vfqDq s74i9Ga+4c5X+DU1wxy87mG9QKFSjkOokrrcup/UlK2XXVO+cP10OHmaMyR7c1Fufdai I6Vx47XAtXxMdhd0bcddGCJi4s6B/v4Yjtr7nPMkR90sRX+QdHxaARay5S8uSa/HegIF zQJg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Pgzq94PpKRgLR2IqqS+QLXIksHPk3/jPtTuBsqGb9fM=; b=IClUpRjNQRNaGoLk8qWXjDuLhKZHhZ5MpuBz1LIXQDK3MxoiGrzraObrQKjIBjxQ5u P5n6eOiHfTdg7yTK74s2POsh37K2V2R5maG7VlWrJSXFbo9ffccYBzqkNyhJXdzmJNd1 lXpEYpV3FO/psoEdQnEmO3IjyJ02pHoiCPFuMVTY3buUv1vVJvO1VWF5lQjKgL/LTWCW XdAQ18sck0L/L4Tb8YLa1JLpq/eoNXEtAfoUY11Ef4XCfZpKcWOqAWlsGYpaKRy6teKB E2nM9IQJuUEb814ms4YTYknKXsvrzF3btofwDTjyT0Dn6K/eunXJHrW9noeV05lbmUlS 6osw== X-Gm-Message-State: APjAAAUzQPR2GOEQ6BbUGuiOoQyjOKwY4jZhfuduGgK2YArgWfNtJmb0 tv6IuTU8/5cM75XTqbQsViLSpIdTkK9LWQ== X-Received: by 2002:a7b:c7d8:: with SMTP id z24mr22344472wmk.10.1560343740210; Wed, 12 Jun 2019 05:49:00 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.58 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:48:59 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 12/20] crypto: arm64/aes-ccm - switch to AES library Date: Wed, 12 Jun 2019 14:48:30 +0200 Message-Id: <20190612124838.2492-13-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The CCM code calls directly into the scalar table based AES cipher for arm64 from the fallback path, and since this implementation is known to be non-time invariant, doing so from a 
time invariant SIMD cipher is a bit nasty. So let's switch to the AES library - this makes the code more robust, and drops the dependency on the generic AES cipher, allowing us to omit it entirely in the future. Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/Kconfig | 2 +- arch/arm64/crypto/aes-ce-ccm-glue.c | 18 ++++++------------ 2 files changed, 7 insertions(+), 13 deletions(-) -- 2.20.1 diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index 1762055e7093..c6032bfb44fb 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -80,8 +80,8 @@ config CRYPTO_AES_ARM64_CE_CCM depends on ARM64 && KERNEL_MODE_NEON select CRYPTO_ALGAPI select CRYPTO_AES_ARM64_CE - select CRYPTO_AES_ARM64 select CRYPTO_AEAD + select CRYPTO_LIB_AES config CRYPTO_AES_ARM64_CE_BLK tristate "AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions" diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c index cb89c80800b5..b9b7cf4b5a8f 100644 --- a/arch/arm64/crypto/aes-ce-ccm-glue.c +++ b/arch/arm64/crypto/aes-ce-ccm-glue.c @@ -46,8 +46,6 @@ asmlinkage void ce_aes_ccm_decrypt(u8 out[], u8 const in[], u32 cbytes, asmlinkage void ce_aes_ccm_final(u8 mac[], u8 const ctr[], u32 const rk[], u32 rounds); -asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds); - static int ccm_setkey(struct crypto_aead *tfm, const u8 *in_key, unsigned int key_len) { @@ -127,8 +125,7 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[], } while (abytes >= AES_BLOCK_SIZE) { - __aes_arm64_encrypt(key->key_enc, mac, mac, - num_rounds(key)); + aes_encrypt(key, mac, mac); crypto_xor(mac, in, AES_BLOCK_SIZE); in += AES_BLOCK_SIZE; @@ -136,8 +133,7 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[], } if (abytes > 0) { - __aes_arm64_encrypt(key->key_enc, mac, mac, - num_rounds(key)); + aes_encrypt(key, mac, mac); crypto_xor(mac, in, abytes); *macp = abytes; } @@ -209,10 +205,8 @@ static int ccm_crypt_fallback(struct skcipher_walk *walk, u8 mac[], u8 iv0[], bsize = nbytes; crypto_inc(walk->iv, AES_BLOCK_SIZE); - __aes_arm64_encrypt(ctx->key_enc, buf, walk->iv, - num_rounds(ctx)); - __aes_arm64_encrypt(ctx->key_enc, mac, mac, - num_rounds(ctx)); + aes_encrypt(ctx, buf, walk->iv); + aes_encrypt(ctx, mac, mac); if (enc) crypto_xor(mac, src, bsize); crypto_xor_cpy(dst, src, buf, bsize); @@ -227,8 +221,8 @@ static int ccm_crypt_fallback(struct skcipher_walk *walk, u8 mac[], u8 iv0[], } if (!err) { - __aes_arm64_encrypt(ctx->key_enc, buf, iv0, num_rounds(ctx)); - __aes_arm64_encrypt(ctx->key_enc, mac, mac, num_rounds(ctx)); + aes_encrypt(ctx, buf, iv0); + aes_encrypt(ctx, mac, mac); crypto_xor(mac, buf, AES_BLOCK_SIZE); } return err; From patchwork Wed Jun 12 12:48:31 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166558 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3640850ilk; Wed, 12 Jun 2019 05:49:04 -0700 (PDT) X-Google-Smtp-Source: APXvYqxsGOs1xthY8ndAMfKb8EjVyNcPAYWl2pUUImCK69N/6wHzcxTkOoh4SazAATF6Y4VSWgOB X-Received: by 2002:a17:902:583:: with SMTP id f3mr18736688plf.137.1560343744806; Wed, 12 Jun 2019 05:49:04 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343744; cv=none; d=google.com; s=arc-20160816; b=j3XEmmY+Vcgta/7H4mvZSXwtlKoVBr6/bcqdEhQLeCiR9rZNlsyqa9PPQ6nRM3UbfX lBkk+amabXf3Od95Ga/qagk1YSjMH4HarfO2XERag+jVezJFOL4vKIL5UMrIXi1fC8SH 
Rd4Ynn/HV4KLAf3VlsCNyjT7S5gY/9dDBBWdqgaTG8Mxt/ZEE6QO5dAq+VGe9t9V25jW GQ/48o+A3DXRbO/D2YjHZyjXA8DHamp7dLR3uWCO5j5C1bqYee9M9fVcyPAO3yq1OMWy KblxSKGN5ZFr3Ni4Uow4tqg7M71ydsk9G1fsjWU915tlK/wStA9sKOLepGk3liXQBs8P N2nA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=0+abpCskFH64uUUzWz6MrJWQeWRPxd08F5mJmqOJSaY=; b=NUkRaPK03xgPUyE+fy1MF60BzZrq2Ns3JB1LYpyMQ/Kk3/PSalCGFUyzGV6D5wr+uH KY/XqMUxQPtK2Go0xd4wxs3sqqaWJLNTGPPewIJejJMffM86AP33FnVTY0OuptmnZDA3 S+9xBLGhtnQIgRW4Y3MX88hJUaiVsmCXEMkAZun6zlN6av5pDmwWSXY+ZyBLHBzwk7qN PiA5Nyz57t8T9Y6f9EsjhBis6AliafpehMtqcnTdVc/QLiIjctc4sonsUpS0woK+oArr pGshfWQb/u8epWSqn4d+8uK3cSUHYDloNzHpHGoR8U+BwPL+tlTbW5+WgsEEvuqPOnUw Vj5w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=QQVIHwG+; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.49.04; Wed, 12 Jun 2019 05:49:04 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=QQVIHwG+; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2439221AbfFLMtE (ORCPT + 3 others); Wed, 12 Jun 2019 08:49:04 -0400 Received: from mail-wr1-f67.google.com ([209.85.221.67]:33321 "EHLO mail-wr1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2439227AbfFLMtD (ORCPT ); Wed, 12 Jun 2019 08:49:03 -0400 Received: by mail-wr1-f67.google.com with SMTP id n9so16819986wru.0 for ; Wed, 12 Jun 2019 05:49:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=0+abpCskFH64uUUzWz6MrJWQeWRPxd08F5mJmqOJSaY=; b=QQVIHwG+58pKXs93jYTMJMczRFAiZL6Osuf0fx9Pjt7dZUy4dQChhWMUhYfFvg7pxs 9BGqIDU/W51lXSWRCmpOTUpeKjNKQTTqZwtZmBloToenu9UQCO8ntGIQ9KZ7q3Hu0BkT t6C3vaBcpqxkn6eYbJWda9Qj18jzYaVseMnPS0tB9NF1/BpN7NPRcsTk4Q4aAQEW4FaL 5npBgrH+mtyv5bDOOLYLRvy1pyE1FKdf340PRXY42yOqQFlTp/AfIpGmhZzURAVbtPXt rmHdOi2UWHK6jOcFKjhOdmXjErwtWLZGQkM5S0FqBxPsD7T1yIcnGpxZOu3/7iTOI8Qk W8JQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=0+abpCskFH64uUUzWz6MrJWQeWRPxd08F5mJmqOJSaY=; b=PloMp0R5RsmwBSnlAeypGMTCOFO29t4EDaWlZJ4URusc+WxpyXrNYyCgDVhniFUNHi AAWuOsvv4YHHMUjx1QNu9nPkaBEt8mhgXnzorWQM+YCaN7tHvP+v8ZfB6d763/WcDzOQ bYbcJpUUoL8n3HUc2CJJYWedVWPsOZGHL1DKrvHa9w+T7cUJ44upg3U/qToubBPM+7Vu kPv9zJDVPw51h+RY5ucF+0hu/ix5FKkzuGs3HH+QJYcQzsxeBGyaPRpeixnp3EMYsN/v 
V6inSFo4nMscLacLncOz0JtJCdj6K37xzWFLqkeXlxbuBr4CzN3oKMOaayRXqEnbdDWo PubQ== X-Gm-Message-State: APjAAAUtqGt+G3Gj0pOIj2Ti6+/Ji8XMuHdUR+gcH5tgNZdusO0uL/46 /1M/Vj+eY2Z/a+VgMm9/xHTvZaUtQ5INOQ== X-Received: by 2002:a5d:43c9:: with SMTP id v9mr53672758wrr.70.1560343741335; Wed, 12 Jun 2019 05:49:01 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.49.00 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:49:00 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 13/20] crypto: arm64/aes-neonbs - switch to library version of key expansion routine Date: Wed, 12 Jun 2019 14:48:31 +0200 Message-Id: <20190612124838.2492-14-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Switch to the new AES library that also provides an implementation of the AES key expansion routine. This removes the dependency on the generic AES cipher, allowing it to be omitted entirely in the future. Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/Kconfig | 1 + arch/arm64/crypto/aes-neonbs-glue.c | 8 ++++---- 2 files changed, 5 insertions(+), 4 deletions(-) -- 2.20.1 diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index c6032bfb44fb..17bf5dc10aad 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -116,6 +116,7 @@ config CRYPTO_AES_ARM64_BS select CRYPTO_BLKCIPHER select CRYPTO_AES_ARM64_NEON_BLK select CRYPTO_AES_ARM64 + select CRYPTO_LIB_AES select CRYPTO_SIMD endif diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c index 02b65d9eb947..cb8d90f795a0 100644 --- a/arch/arm64/crypto/aes-neonbs-glue.c +++ b/arch/arm64/crypto/aes-neonbs-glue.c @@ -77,7 +77,7 @@ static int aesbs_setkey(struct crypto_skcipher *tfm, const u8 *in_key, struct crypto_aes_ctx rk; int err; - err = crypto_aes_expand_key(&rk, in_key, key_len); + err = aes_expandkey(&rk, in_key, key_len); if (err) return err; @@ -136,7 +136,7 @@ static int aesbs_cbc_setkey(struct crypto_skcipher *tfm, const u8 *in_key, struct crypto_aes_ctx rk; int err; - err = crypto_aes_expand_key(&rk, in_key, key_len); + err = aes_expandkey(&rk, in_key, key_len); if (err) return err; @@ -208,7 +208,7 @@ static int aesbs_ctr_setkey_sync(struct crypto_skcipher *tfm, const u8 *in_key, struct aesbs_ctr_ctx *ctx = crypto_skcipher_ctx(tfm); int err; - err = crypto_aes_expand_key(&ctx->fallback, in_key, key_len); + err = aes_expandkey(&ctx->fallback, in_key, key_len); if (err) return err; @@ -274,7 +274,7 @@ static int aesbs_xts_setkey(struct crypto_skcipher *tfm, const u8 *in_key, return err; key_len /= 2; - err = crypto_aes_expand_key(&rk, in_key + key_len, key_len); + err = aes_expandkey(&rk, in_key + key_len, key_len); if (err) return err; From patchwork Wed Jun 12 12:48:32 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166559 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3640868ilk; Wed, 12 Jun 2019 05:49:05 -0700 (PDT) X-Google-Smtp-Source: 
APXvYqxNyvtyoWBEgDe0svuW7CbUe6Rkjf3bNiS29WP5wcm+jrjrrQumUSyyJyFJyfD/4TXg4Q+9 X-Received: by 2002:a17:902:70c4:: with SMTP id l4mr46042511plt.171.1560343745798; Wed, 12 Jun 2019 05:49:05 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343745; cv=none; d=google.com; s=arc-20160816; b=stY3FvNHcDvnGtrKYvsHzVdaeQcBKfRnl5gMdWVJv+4XZjGZPtRi3jGoE0itm4Ls7Q 4aESeDSNhslk4wmtWFG2Ksliu5x7GxR18AA53vjnYNF9qw3IK9w/nIHpFa3zYBtWNq0t wfXVceFNXBu3Gv7Fn8Y+l4G4BGXnNBTx4YVvmRv/KjcwXGGODuzOooKuPKoqdLg7var5 WXwUwfuZ4G23YbO86cEG6XkLB8QYeYL+5trKtpkeY3/wD1jclYrzE3RngJVzYrDUnyyf RG3462fpgQlv7TcUgXwGyuE2yE8ORfioGwKHTtMZ1jiLqkY10lEr84/J7e2PgdGiq5uW 2CgQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=mCCXO8E7dxoAfZNPGu9MWapyiMaDRiD0qXpOpKK9sPY=; b=T4CB8gVB1exh3jPsEnwzTV6WdMnOb2X0gegMlo7BEV040kFuet931CuCeON0Zg+GRt Jk+d3zbdsxkBbnjm3gpUEbtMxR2p7aHCDVYUpL7dnmTDkOPc0JFwAfuSg/7adPnoPxkV cFq1QsN0Tbmk8lj++pN1IDSblFLjpSzwd6jYOdamrKMne1CKO3PgscXC56NDl6ugmoYr f3zHUo5lIjWUSaSudqDIMX5JE2KsMXq1sLf+0Lep00xrBFIrhw3kuh6VP1G/XX/4RTZU MdON5t9cfG9I6X47TthC6kLTljC1hLgGA7Ai0l7qljP0YCcwaVi6otdICkU1oHkWNK/q SXaA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=fqMYY52G; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.49.05; Wed, 12 Jun 2019 05:49:05 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=fqMYY52G; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2439230AbfFLMtF (ORCPT + 3 others); Wed, 12 Jun 2019 08:49:05 -0400 Received: from mail-wm1-f68.google.com ([209.85.128.68]:36706 "EHLO mail-wm1-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2439229AbfFLMtE (ORCPT ); Wed, 12 Jun 2019 08:49:04 -0400 Received: by mail-wm1-f68.google.com with SMTP id u8so6408977wmm.1 for ; Wed, 12 Jun 2019 05:49:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=mCCXO8E7dxoAfZNPGu9MWapyiMaDRiD0qXpOpKK9sPY=; b=fqMYY52GJRSJB7nqoddTnyp+yHfTI4dqn1gGxnvbszw1xwXLpymiUOIvsEFUqtxu83 fQread8hii0ShQ9ZUt0SZGprEoruVA1ff2dNsbTw4RwhxHZd6421rgtPir31C+yCEkMA udlT8SXMf/1FVfEtypI30iKKpOj8TCOG7nsu3/mj4jRfYl8Gi00jCNYqTaACz20+9aRU YcV+79O433Gzyiz/XeFGnmEmCBtjzXd15OOEpRBtDc2nxOQ17Q9iz/Yc9rDEeZii3WPw SrfK0zS5RXjo7u3jCzWgrsG8IhdT1UJrnKaKjU9lXpwBZhvzBEWntV04Mhbm+keYn6CH PbHQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to 
:references:mime-version:content-transfer-encoding; bh=mCCXO8E7dxoAfZNPGu9MWapyiMaDRiD0qXpOpKK9sPY=; b=becOtEKy9cH4+/vK1lHSzhRlkD4FA8E87WqqYZiJKveUGtHhBBDhFmiZgqJ5quc+7w YYVMKmX4OJMS+YhVJgigJkfhI8XWgaL4R9mCFpMJZ+2yOE/UKU9XJWNAH0lZzoPrFvkG 4N9u5dfNugBCre6pTpnmWXbeVXaq73wHHbdXNl8f0nn0QAgAGIcdfymj2LJ5OW0t8mjp jLG2LlEDa9YWiNiTZ8tfA2uwAInJHsSQYOqtVjh4kwyqu/rLCgzuzehv0dfXtNlTqewt CYoCQ/E2NQWNh/y0Dz3yskyrIzF8vqrd2D3tC3bT3B3Kq6pcakPI6fo1TX/8bGobECpc HIww== X-Gm-Message-State: APjAAAX/rSS6jOFrxYEy6EuEdniiiJ3S08ay6/bYr20np/GE9KE1S2+p Lho/7KcgbpzMaW+Pxb1JhGo5dHBwNZCB1w== X-Received: by 2002:a1c:2c41:: with SMTP id s62mr22013608wms.8.1560343742297; Wed, 12 Jun 2019 05:49:02 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.49.01 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:49:01 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 14/20] crypto: arm64/aes-ce - switch to library version of key expansion routine Date: Wed, 12 Jun 2019 14:48:32 +0200 Message-Id: <20190612124838.2492-15-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Switch to the new AES library that also provides an implementation of the AES key expansion routine. This removes the dependency on the generic AES cipher, allowing it to be omitted entirely in the future. Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/Kconfig | 2 +- arch/arm64/crypto/aes-glue.c | 12 ++++++++---- 2 files changed, 9 insertions(+), 5 deletions(-) -- 2.20.1 diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index 17bf5dc10aad..66dea518221c 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -96,7 +96,7 @@ config CRYPTO_AES_ARM64_NEON_BLK depends on KERNEL_MODE_NEON select CRYPTO_BLKCIPHER select CRYPTO_AES_ARM64 - select CRYPTO_AES + select CRYPTO_LIB_AES select CRYPTO_SIMD config CRYPTO_CHACHA20_NEON diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c index f0ceb545bd1e..8fa17a764802 100644 --- a/arch/arm64/crypto/aes-glue.c +++ b/arch/arm64/crypto/aes-glue.c @@ -26,7 +26,6 @@ #ifdef USE_V8_CRYPTO_EXTENSIONS #define MODE "ce" #define PRIO 300 -#define aes_setkey ce_aes_setkey #define aes_expandkey ce_aes_expandkey #define aes_ecb_encrypt ce_aes_ecb_encrypt #define aes_ecb_decrypt ce_aes_ecb_decrypt @@ -42,8 +41,6 @@ MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 Crypto Extensions"); #else #define MODE "neon" #define PRIO 200 -#define aes_setkey crypto_aes_set_key -#define aes_expandkey crypto_aes_expand_key #define aes_ecb_encrypt neon_aes_ecb_encrypt #define aes_ecb_decrypt neon_aes_ecb_decrypt #define aes_cbc_encrypt neon_aes_cbc_encrypt @@ -121,7 +118,14 @@ struct mac_desc_ctx { static int skcipher_aes_setkey(struct crypto_skcipher *tfm, const u8 *in_key, unsigned int key_len) { - return aes_setkey(crypto_skcipher_tfm(tfm), in_key, key_len); + struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + int ret; + + ret = aes_expandkey(ctx, in_key, key_len); + if (ret) + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); + + return ret; } static int xts_set_key(struct crypto_skcipher 
*tfm, const u8 *in_key, From patchwork Wed Jun 12 12:48:33 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166561 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3640908ilk; Wed, 12 Jun 2019 05:49:08 -0700 (PDT) X-Google-Smtp-Source: APXvYqwBgMwJoddXJvrGSOIyOn2lTIv36svnJPGF/364WZQj2mP5H7Oa3dc2/MMeUk1pgqdqD7pp X-Received: by 2002:a17:902:24c:: with SMTP id 70mr81080710plc.2.1560343748138; Wed, 12 Jun 2019 05:49:08 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343748; cv=none; d=google.com; s=arc-20160816; b=SaSGIhW4ATDOS9wTgl4Lgs9w2zfUp4NDw7cMWFEO0Yu5bRmcDVx0YcsXc8PxzFwz0R MBTyfze73baG7/NEs24kHOlABebNDzvVjiy5O667EhjsN1Om2hZS+zU420CfT61/iSSZ fARzDeLXJP+pvGiOlanimERXHk8NsvA8JJuUPYSknQSVNSiV9iz0PbPVvZbgtUHyYVTn FEpC9Ff75e4qcsPwaKlvY3zwqvzAZU9eyUryFBlY1gLa0H9FL6QP6P5fbOsfmmWk8ygD vg92XfX5kCXaSCLfjJ21yHpHjG2YWPWECnQEx9+89u/VTEYGYTtuaqP5dQWCRMnoYYaB As9Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=Dlfj4kqVFhv44IOajcgw+06riYNUJAIzXTewbPPqMIc=; b=rmbyNGfn6mVMXgjGMDL7GP/PA7pNFXIQUJ506QYKf9GuDsKJ0w/XQ0bl0tufttKMQQ oWt/0CCs6xKYf4IWYnLch40dwkrZS0FQ6GYoQnliAPyiAyAgiz5d/qZjEKRRKDI7ZG3l kQsh/wzUyJxmDNb/H1dsYr06I8LZXFwPgUWAefQjYsTKKT4qmnI6k7O1uafA07M7/dwr s2JCRLQ6LPb34hWdQa2W3uXQI/KyoHmpwe2HhkhN3ZXGpNlJHMCma9aD7pCK4Rbbrpg2 GzZa+8wA1BAFkG/ZlhXEwoV0szycTDvFWc4CcDPJcPZx8z+yiFlDnbcjnhjnlTqHs13C 9pTA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=sjaYhXv0; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.49.07; Wed, 12 Jun 2019 05:49:08 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=sjaYhXv0; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2439229AbfFLMtH (ORCPT + 3 others); Wed, 12 Jun 2019 08:49:07 -0400 Received: from mail-wm1-f67.google.com ([209.85.128.67]:50650 "EHLO mail-wm1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2439227AbfFLMtH (ORCPT ); Wed, 12 Jun 2019 08:49:07 -0400 Received: by mail-wm1-f67.google.com with SMTP id c66so6457046wmf.0 for ; Wed, 12 Jun 2019 05:49:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Dlfj4kqVFhv44IOajcgw+06riYNUJAIzXTewbPPqMIc=; b=sjaYhXv0a6NKJmqvhDYeIVWz5nMWkTbteorU7HT48yNBwJPfxW6fVLxN11xDMz3hj6 kLn3cgCqGOrjCyzFqoPbetCQzxGaJCSJZHGNA2n7VkVrqi1eZEO81z7ngGWo/6HBNsMN korc6e/FR8NvFePkaQR25iLtBpdKfVI9W/dTvOCixkjEvbEDyC5vwcORJkKbX72xabRe 9pR0PNTzGeitqAx5dwNc2DmAnVZ76oGyeffiMe3++WVInp98V2iwh+MM6ooRRildX1lT Xssuejgi+hX/VJClXuYge961z8WjTdL0S0XduYeKhjE8DB9RnhKaU5ci8RL3I+68FzJi T0dw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Dlfj4kqVFhv44IOajcgw+06riYNUJAIzXTewbPPqMIc=; b=fliAOod3ytsoI/v5bGrMtDGN1eAvxuZX8UTM71hMVN6IwWMnQawcIZSIYo+XNTtAHB QaXdJatHWPLu55SHZFiMVzhlc8xUjQuRtDNs5ZbyW1dX+B3Z3kvea0wYD71FNa/k/Vpj wlLlSNA9LG2RIGjegb5U9o46r4gPfhOlfmSB7GNupvZV7jYolLu8xBBvlGwcwRcZD+mH RsSgmUNEW/d0oJL6lhd144mUpdoLO5qhfbdLIwuSGmn/IpXzBvf2E8d8XNa/8o/eKU2O 1UmiuzKNO0ZIDLarZH1CDODZ5mWhrgkB9izpdK1TGw5hVmE6H14jAkbVBLDm34qAK++w FYsQ== X-Gm-Message-State: APjAAAX9KeLYXmrtE8kotfB9BN1mjDc9oEzLWdCOnHoZXsmkJpqSBVdf QeUpmclYucVixzmowaImG1QW52CzMC9/ng== X-Received: by 2002:a1c:2907:: with SMTP id p7mr21675532wmp.100.1560343743431; Wed, 12 Jun 2019 05:49:03 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.49.02 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:49:02 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 15/20] crypto: generic/aes - drop key expansion routine in favor of library version Date: Wed, 12 Jun 2019 14:48:33 +0200 Message-Id: <20190612124838.2492-16-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Drop aes-generic's version of crypto_aes_expand_key(), and switch to key expansion routine provided by the AES library. 
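For reference, the sizes involved are fixed by FIPS-197; the following sketch spells out the arithmetic behind the 240-byte maximum quoted in the kernel-doc that is removed below (the helper names are invented for illustration):

/*
 * Round counts and encryption-schedule sizes per FIPS-197: AES-128/192/256
 * use 10/12/14 rounds, and the schedule holds one 16-byte round key per
 * round plus the initial whitening key.
 */
static unsigned int example_aes_rounds(unsigned int key_len)
{
        return 6 + key_len / 4;         /* 16 -> 10, 24 -> 12, 32 -> 14 */
}

static unsigned int example_aes_sched_bytes(unsigned int key_len)
{
        return (example_aes_rounds(key_len) + 1) * 16;  /* 176, 208 or 240 */
}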
AES key expansion is not performance critical, and it is better to have a single version shared by all AES implementations. Signed-off-by: Ard Biesheuvel --- crypto/Kconfig | 1 + crypto/aes_generic.c | 153 +------------------- include/crypto/aes.h | 2 - 3 files changed, 3 insertions(+), 153 deletions(-) -- 2.20.1 diff --git a/crypto/Kconfig b/crypto/Kconfig index 2ed65185dde8..3b08230fe3ba 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1065,6 +1065,7 @@ config CRYPTO_LIB_AES config CRYPTO_AES tristate "AES cipher algorithms" select CRYPTO_ALGAPI + select CRYPTO_LIB_AES help AES cipher algorithms (FIPS-197). AES uses the Rijndael algorithm. diff --git a/crypto/aes_generic.c b/crypto/aes_generic.c index 3aa4a715c216..426deb437f19 100644 --- a/crypto/aes_generic.c +++ b/crypto/aes_generic.c @@ -1125,155 +1125,6 @@ EXPORT_SYMBOL_GPL(crypto_fl_tab); EXPORT_SYMBOL_GPL(crypto_it_tab); EXPORT_SYMBOL_GPL(crypto_il_tab); -/* initialise the key schedule from the user supplied key */ - -#define star_x(x) (((x) & 0x7f7f7f7f) << 1) ^ ((((x) & 0x80808080) >> 7) * 0x1b) - -#define imix_col(y, x) do { \ - u = star_x(x); \ - v = star_x(u); \ - w = star_x(v); \ - t = w ^ (x); \ - (y) = u ^ v ^ w; \ - (y) ^= ror32(u ^ t, 8) ^ \ - ror32(v ^ t, 16) ^ \ - ror32(t, 24); \ -} while (0) - -#define ls_box(x) \ - crypto_fl_tab[0][byte(x, 0)] ^ \ - crypto_fl_tab[1][byte(x, 1)] ^ \ - crypto_fl_tab[2][byte(x, 2)] ^ \ - crypto_fl_tab[3][byte(x, 3)] - -#define loop4(i) do { \ - t = ror32(t, 8); \ - t = ls_box(t) ^ rco_tab[i]; \ - t ^= ctx->key_enc[4 * i]; \ - ctx->key_enc[4 * i + 4] = t; \ - t ^= ctx->key_enc[4 * i + 1]; \ - ctx->key_enc[4 * i + 5] = t; \ - t ^= ctx->key_enc[4 * i + 2]; \ - ctx->key_enc[4 * i + 6] = t; \ - t ^= ctx->key_enc[4 * i + 3]; \ - ctx->key_enc[4 * i + 7] = t; \ -} while (0) - -#define loop6(i) do { \ - t = ror32(t, 8); \ - t = ls_box(t) ^ rco_tab[i]; \ - t ^= ctx->key_enc[6 * i]; \ - ctx->key_enc[6 * i + 6] = t; \ - t ^= ctx->key_enc[6 * i + 1]; \ - ctx->key_enc[6 * i + 7] = t; \ - t ^= ctx->key_enc[6 * i + 2]; \ - ctx->key_enc[6 * i + 8] = t; \ - t ^= ctx->key_enc[6 * i + 3]; \ - ctx->key_enc[6 * i + 9] = t; \ - t ^= ctx->key_enc[6 * i + 4]; \ - ctx->key_enc[6 * i + 10] = t; \ - t ^= ctx->key_enc[6 * i + 5]; \ - ctx->key_enc[6 * i + 11] = t; \ -} while (0) - -#define loop8tophalf(i) do { \ - t = ror32(t, 8); \ - t = ls_box(t) ^ rco_tab[i]; \ - t ^= ctx->key_enc[8 * i]; \ - ctx->key_enc[8 * i + 8] = t; \ - t ^= ctx->key_enc[8 * i + 1]; \ - ctx->key_enc[8 * i + 9] = t; \ - t ^= ctx->key_enc[8 * i + 2]; \ - ctx->key_enc[8 * i + 10] = t; \ - t ^= ctx->key_enc[8 * i + 3]; \ - ctx->key_enc[8 * i + 11] = t; \ -} while (0) - -#define loop8(i) do { \ - loop8tophalf(i); \ - t = ctx->key_enc[8 * i + 4] ^ ls_box(t); \ - ctx->key_enc[8 * i + 12] = t; \ - t ^= ctx->key_enc[8 * i + 5]; \ - ctx->key_enc[8 * i + 13] = t; \ - t ^= ctx->key_enc[8 * i + 6]; \ - ctx->key_enc[8 * i + 14] = t; \ - t ^= ctx->key_enc[8 * i + 7]; \ - ctx->key_enc[8 * i + 15] = t; \ -} while (0) - -/** - * crypto_aes_expand_key - Expands the AES key as described in FIPS-197 - * @ctx: The location where the computed key will be stored. - * @in_key: The supplied key. - * @key_len: The length of the supplied key. - * - * Returns 0 on success. The function fails only if an invalid key size (or - * pointer) is supplied. - * The expanded key size is 240 bytes (max of 14 rounds with a unique 16 bytes - * key schedule plus a 16 bytes key which is used before the first round). 
- * The decryption key is prepared for the "Equivalent Inverse Cipher" as - * described in FIPS-197. The first slot (16 bytes) of each key (enc or dec) is - * for the initial combination, the second slot for the first round and so on. - */ -int crypto_aes_expand_key(struct crypto_aes_ctx *ctx, const u8 *in_key, - unsigned int key_len) -{ - u32 i, t, u, v, w, j; - - if (key_len != AES_KEYSIZE_128 && key_len != AES_KEYSIZE_192 && - key_len != AES_KEYSIZE_256) - return -EINVAL; - - ctx->key_length = key_len; - - ctx->key_enc[0] = get_unaligned_le32(in_key); - ctx->key_enc[1] = get_unaligned_le32(in_key + 4); - ctx->key_enc[2] = get_unaligned_le32(in_key + 8); - ctx->key_enc[3] = get_unaligned_le32(in_key + 12); - - ctx->key_dec[key_len + 24] = ctx->key_enc[0]; - ctx->key_dec[key_len + 25] = ctx->key_enc[1]; - ctx->key_dec[key_len + 26] = ctx->key_enc[2]; - ctx->key_dec[key_len + 27] = ctx->key_enc[3]; - - switch (key_len) { - case AES_KEYSIZE_128: - t = ctx->key_enc[3]; - for (i = 0; i < 10; ++i) - loop4(i); - break; - - case AES_KEYSIZE_192: - ctx->key_enc[4] = get_unaligned_le32(in_key + 16); - t = ctx->key_enc[5] = get_unaligned_le32(in_key + 20); - for (i = 0; i < 8; ++i) - loop6(i); - break; - - case AES_KEYSIZE_256: - ctx->key_enc[4] = get_unaligned_le32(in_key + 16); - ctx->key_enc[5] = get_unaligned_le32(in_key + 20); - ctx->key_enc[6] = get_unaligned_le32(in_key + 24); - t = ctx->key_enc[7] = get_unaligned_le32(in_key + 28); - for (i = 0; i < 6; ++i) - loop8(i); - loop8tophalf(i); - break; - } - - ctx->key_dec[0] = ctx->key_enc[key_len + 24]; - ctx->key_dec[1] = ctx->key_enc[key_len + 25]; - ctx->key_dec[2] = ctx->key_enc[key_len + 26]; - ctx->key_dec[3] = ctx->key_enc[key_len + 27]; - - for (i = 4; i < key_len + 24; ++i) { - j = key_len + 24 - (i & ~3) + (i & 3); - imix_col(ctx->key_dec[j], ctx->key_enc[i]); - } - return 0; -} -EXPORT_SYMBOL_GPL(crypto_aes_expand_key); - /** * crypto_aes_set_key - Set the AES key. * @tfm: The %crypto_tfm that is used in the context. @@ -1281,7 +1132,7 @@ EXPORT_SYMBOL_GPL(crypto_aes_expand_key); * @key_len: The size of the key. * * Returns 0 on success, on failure the %CRYPTO_TFM_RES_BAD_KEY_LEN flag in tfm - * is set. The function uses crypto_aes_expand_key() to expand the key. + * is set. The function uses aes_expand_key() to expand the key. * &crypto_aes_ctx _must_ be the private data embedded in @tfm which is * retrieved with crypto_tfm_ctx(). 
*/ @@ -1292,7 +1143,7 @@ int crypto_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key, u32 *flags = &tfm->crt_flags; int ret; - ret = crypto_aes_expand_key(ctx, in_key, key_len); + ret = aes_expandkey(ctx, in_key, key_len); if (!ret) return 0; diff --git a/include/crypto/aes.h b/include/crypto/aes.h index 72ead82d3f98..31ba40d803df 100644 --- a/include/crypto/aes.h +++ b/include/crypto/aes.h @@ -35,8 +35,6 @@ extern const u32 crypto_il_tab[4][256] ____cacheline_aligned; int crypto_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key, unsigned int key_len); -int crypto_aes_expand_key(struct crypto_aes_ctx *ctx, const u8 *in_key, - unsigned int key_len); /** * aes_expandkey - Expands the AES key as described in FIPS-197 From patchwork Wed Jun 12 12:48:34 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166560 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3640895ilk; Wed, 12 Jun 2019 05:49:07 -0700 (PDT) X-Google-Smtp-Source: APXvYqwxMle7OfqHBWszOnmlkk8Sagt1743IrN3lXgcdzfOkPcBme5/53IYShVjjxK4dv857crNl X-Received: by 2002:aa7:8ac9:: with SMTP id b9mr76163699pfd.260.1560343747727; Wed, 12 Jun 2019 05:49:07 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343747; cv=none; d=google.com; s=arc-20160816; b=erk2YfGHtbW0kZ9KrDHljBYA2kbXUiUz2XJqKH4K+bU86fdd6XK6Er83JyVmypiuxB r8YvLkvvy1iFwNUJY1QBMf1oKvQY8v5gXvAFoc0W2HVCZIFF0yGejTLx5cbfSkGkNdx3 QmNLjNGwZ9rjPwoIDUPVdZKWJMK0LZYHNjqcXKTiw9zZcrWfLtqikkgMDfTFR0a1b/Mk 9+QmNcPYI5hy/iExxJS/f7h0h2ppsG9VY1uIaeYqWFIT0i5vka6xIvsyq6B0PTGX1Hyy IfvcG04Er8o8w5ORJ60DSk7mZ9b9C1G31dvSwSVqNvwmA8u1Gid79A3htzhKVTVc+WOR u/fg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=Lm7oGps4teOEwcLPMuq0r39VG4DYBydnuDjdvE8qRsQ=; b=x49QzJzKlQyEz4TMPYqUhJImdq8eAAuK0b25clOd47CkV5jXD9TdbU5Yg9rBwCFvwk YeihhqyouKLbwnmnKFiXSuy8TkGIQNikOhb/2wBhS0QCppPnNtSONNNKINfPvOkPtEUY j8ibcc6DtriUkew0KTeaqg4CSkCjWGWEWDZyDErCivvht2I9M2ufC2S5ET4kxYoYOSfM rfvLZq76P11O4+vNWKCiKXGBQKWjXUf6UQIi4GhnO1yL+J3CIOsF9jiNTTwog0c5LB0M aD43zv9ilv2ZNsozDSLifDs8RWMYOajAQVbwHK0YNP4VpiNfmgW7PLNy00IcrPFar02E WbGw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=PhOA519R; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.49.07; Wed, 12 Jun 2019 05:49:07 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=PhOA519R; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2439232AbfFLMtH (ORCPT + 3 others); Wed, 12 Jun 2019 08:49:07 -0400 Received: from mail-wm1-f65.google.com ([209.85.128.65]:37905 "EHLO mail-wm1-f65.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2439229AbfFLMtG (ORCPT ); Wed, 12 Jun 2019 08:49:06 -0400 Received: by mail-wm1-f65.google.com with SMTP id s15so6411027wmj.3 for ; Wed, 12 Jun 2019 05:49:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Lm7oGps4teOEwcLPMuq0r39VG4DYBydnuDjdvE8qRsQ=; b=PhOA519RQ/KSm8Wq2yejvZbrXDftUtvpiVOr23t3wVMghc2GenkIWkRSLQWvCugDB1 fHLSI958DpSR+ppikmXzADwbVAEpbWu8fxOTcoTPM27vCiB0Vb6vvCNNp9FYtEQbtBOI fmSNGvDIcxoK+95wfBwL0ji7nw4Y3rZQBz15O32vV5o7WCkv2S7vxfoR2vLdx1trUutS tfiwpTr16f864dZ1gQ57VWdZ8oXo+aGb1SreN1vg7eVFbVITaAd+vLOcdsIDjh4WPPTz AJe4gMbJD1EpUW8DVBHTMvnwN7VR1Cw6wJYetUc9g5LnM2M+av50xnw/8QAG/CbWijEm GbFw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Lm7oGps4teOEwcLPMuq0r39VG4DYBydnuDjdvE8qRsQ=; b=lHkpNO9yXO0R/aT0ga5qd4UYiDgsWtBoOg3u/gP3Ac6JGxXFDX3ZHAPMYbR45COhNo aqgTMZAYA3f/byjJnWX5PBniPit5IbQsdTW2yB33hlbok9kvZIj40zkIKSZREvSML2qY omrcIveagADus1MbFDQkZgBogRURl2KwP0WTetz2CG0eQDXeltbAz9lWSGEVFKtGKe6j PRvAthYgmY814rmJitHh3sepqLaUw0NbmOFkeQE2iQhyxqlMJToAHJAx/RwWRQ8Ey+Ja FUaNOd/4BlfmJ+DU6xUjclPDjpGy3GcOuB5DH3bkqvwpXdTB593g5AZPxkmCcAGgdg1b Y65w== X-Gm-Message-State: APjAAAXjpPhHc/hpXhRdiqj+7d+CAj2YsPah7ojIHSGiw5159s+IKM7I ubkwlIPPo+aHoBYodAjXjD054ogr03EpGA== X-Received: by 2002:a05:600c:23d2:: with SMTP id p18mr21441697wmb.108.1560343744412; Wed, 12 Jun 2019 05:49:04 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.49.03 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:49:03 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 16/20] crypto: arm64/aes-ce-cipher - use AES library as fallback Date: Wed, 12 Jun 2019 14:48:34 +0200 Message-Id: <20190612124838.2492-17-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Instead of calling into the table based scalar AES code in situations where the SIMD unit may not be used, use the generic AES code, which is more appropriate since it 
is less likely to be susceptible to timing attacks. Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/Kconfig | 2 +- arch/arm64/crypto/aes-ce-glue.c | 7 ++----- arch/arm64/crypto/aes-cipher-glue.c | 3 --- 3 files changed, 3 insertions(+), 9 deletions(-) -- 2.20.1 diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index 66dea518221c..4922c4451e7c 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -73,7 +73,7 @@ config CRYPTO_AES_ARM64_CE tristate "AES core cipher using ARMv8 Crypto Extensions" depends on ARM64 && KERNEL_MODE_NEON select CRYPTO_ALGAPI - select CRYPTO_AES_ARM64 + select CRYPTO_LIB_AES config CRYPTO_AES_ARM64_CE_CCM tristate "AES in CCM mode using ARMv8 Crypto Extensions" diff --git a/arch/arm64/crypto/aes-ce-glue.c b/arch/arm64/crypto/aes-ce-glue.c index 3213843fcb46..6890e003b8f1 100644 --- a/arch/arm64/crypto/aes-ce-glue.c +++ b/arch/arm64/crypto/aes-ce-glue.c @@ -23,9 +23,6 @@ MODULE_DESCRIPTION("Synchronous AES cipher using ARMv8 Crypto Extensions"); MODULE_AUTHOR("Ard Biesheuvel "); MODULE_LICENSE("GPL v2"); -asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds); -asmlinkage void __aes_arm64_decrypt(u32 *rk, u8 *out, const u8 *in, int rounds); - struct aes_block { u8 b[AES_BLOCK_SIZE]; }; @@ -54,7 +51,7 @@ static void aes_cipher_encrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[]) struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); if (!crypto_simd_usable()) { - __aes_arm64_encrypt(ctx->key_enc, dst, src, num_rounds(ctx)); + aes_encrypt(ctx, dst, src); return; } @@ -68,7 +65,7 @@ static void aes_cipher_decrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[]) struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); if (!crypto_simd_usable()) { - __aes_arm64_decrypt(ctx->key_dec, dst, src, num_rounds(ctx)); + aes_decrypt(ctx, dst, src); return; } diff --git a/arch/arm64/crypto/aes-cipher-glue.c b/arch/arm64/crypto/aes-cipher-glue.c index 0e90b06ebcec..bf32cc6489e1 100644 --- a/arch/arm64/crypto/aes-cipher-glue.c +++ b/arch/arm64/crypto/aes-cipher-glue.c @@ -13,10 +13,7 @@ #include asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds); -EXPORT_SYMBOL(__aes_arm64_encrypt); - asmlinkage void __aes_arm64_decrypt(u32 *rk, u8 *out, const u8 *in, int rounds); -EXPORT_SYMBOL(__aes_arm64_decrypt); static void aes_arm64_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) { From patchwork Wed Jun 12 12:48:35 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166567 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3640956ilk; Wed, 12 Jun 2019 05:49:11 -0700 (PDT) X-Google-Smtp-Source: APXvYqwAN9QY+6WjVJ2GYbtGoxQMB/tvS8tTDBlUGImWhtKLCtmG+VM1jEvwRg73shRwcwtwkZNv X-Received: by 2002:a17:902:d916:: with SMTP id c22mr57063366plz.195.1560343751074; Wed, 12 Jun 2019 05:49:11 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343751; cv=none; d=google.com; s=arc-20160816; b=DQBHsPk8OrJxvoKOiEMh41Mic9WO/9ey5lkxTX7radsFRs/w527vdvFzb5aETUZqiv 9ZA6rgfanzfZ/u3BkByY2NLEB0Tilov/YvurOVycP/BWTj+PPu8oDJOx3i4f7xgoEIS9 frUn/qaCjeWWFBoy9o0hiPT/0t2aZxpM9vywLQ1jo9aOOyG4J93f3wCXOa/53qVifGzN E1y3qlZLZ5vHvfAJ/MBTutwu1vTalWTq7i2ePooIuj0tOdiVtOUgoO4r99nDQrjrJMPQ 2494ZYMb+7BHdCMB6ehFMmxJeQK8FB7qMNz1ZB/V8dU03w/BAJgx+9xXnvuUM8wes81k PHXw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; 
h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=FRmUfZcZrs5gwbshqgmWeJhm624aavbe4mR2Os9kVds=; b=MAXIZ6ZNiQbp48d/RuxjjRI+bNy7dG0bYCc5bmK7k+F8orC632jswsGVW/+3YhSXGL m9AD+tDTsiYNXOg9ikcs+9e7BAs47ndhBPPzBlmIVeeODURNuFNrMbIUcdCoThqXbys2 ckCIUHG/bqDOzaPEmJDySfZZL0j7twnn2n0q5X3MPZHk8T9FcGeoiy5Gash+MgsmybO0 Xhhc1C/hEs0MCk+XQE4UoKjt6y+w21y1PGZhp+tf1UaoNHKTsyJnVj8eWj//BSTuV9qv LgpBuN7rc2rCnubZCMzwqJXljWPyP4wDjkIPuB+BPb6S56UjR6IGLJKItBdVFnVxj4Xw cMUg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=ox4tpd6P; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.49.10; Wed, 12 Jun 2019 05:49:11 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=ox4tpd6P; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2439227AbfFLMtK (ORCPT + 3 others); Wed, 12 Jun 2019 08:49:10 -0400 Received: from mail-wm1-f66.google.com ([209.85.128.66]:55485 "EHLO mail-wm1-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2439231AbfFLMtJ (ORCPT ); Wed, 12 Jun 2019 08:49:09 -0400 Received: by mail-wm1-f66.google.com with SMTP id a15so6430130wmj.5 for ; Wed, 12 Jun 2019 05:49:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=FRmUfZcZrs5gwbshqgmWeJhm624aavbe4mR2Os9kVds=; b=ox4tpd6P6ar74xvJmGEDOZ3bY7LZM6d2Wf22j9f17GQSx52dugdszFmC8DA6E1ppeL gQnDg3vb90lRTgUxt1dIpIvaeVp20WKtg6C3KQN1Cc9axY+s8kCQ7GmeSnFRBb18zhL+ 0eys7sjuzou33znvRYpG8SuL8thxqv892ZnATw5UmNEQGnyr4scvk70CUlR1dsvxb0af yx8EjN45Zers8YamyBGPXvV8l9dwGjOhwYRmzUpJf73gu6nNQ9vGG9pq+of/yNJtXFhO HUv+iSin+Pgv/FHfJE/7Wj5g1bnNppQEizdaJirgMbCCPAZXCaGxXXILj/kzeSzd/Hw3 wJjA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=FRmUfZcZrs5gwbshqgmWeJhm624aavbe4mR2Os9kVds=; b=IpsS6RmvJk0Si7VnY8aoNTMXkqEbAIncKXpvNlNXlQJ/8kgAWPdiwXZFK7/EKAbwXV mxjqWZQ+ct+pelkUl+WLqnPJXhswk7Q+oszNuNOtkrDpxHnDEIFYqMI8usZ4bKzvXW7q mPzeAqn7M71IvzacVkT5JAFeAgu0NbeqAteX8PnhmvZ2B1WeKetc0JKWzgkRE7DPY0d3 3NNR3sx+NA9AGXUEOZuTCO0BifjKDDM51ukZnc4EKpICBpZRnIRbIgQBzeMpPRNmO4lK YrU8UwLHSPZxZV91a2COy63NlH9LrkKQDH9QyOZQnokIZBoht7dbOste31q5vfdrGIB6 WIRw== X-Gm-Message-State: APjAAAWmwONs2Cu5COb3pytgAFrBs1EW+6xK1M22/WYsZW9ljBCPGe2l 5M9CsjyYgNF04C26HGH4h/3DOJAYDKnikA== X-Received: by 2002:a7b:c74a:: with SMTP id w10mr20999150wmk.99.1560343745519; Wed, 12 Jun 2019 05:49:05 -0700 (PDT) Received: from sudo.home 
([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.49.04 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:49:04 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 17/20] crypto: aes - move ctr(aes) non-SIMD fallback to AES library Date: Wed, 12 Jun 2019 14:48:35 +0200 Message-Id: <20190612124838.2492-18-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org In preparation for duplicating the sync ctr(aes) functionality into modules under arch/arm, move the helper function from an inline .h file to the AES library, which is already depended upon by the drivers that use this fallback. Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/aes-ctr-fallback.h | 53 -------------------- arch/arm64/crypto/aes-glue.c | 17 ++++--- arch/arm64/crypto/aes-neonbs-glue.c | 12 +++-- crypto/Kconfig | 1 + include/crypto/aes.h | 11 ++++ lib/crypto/aes.c | 41 +++++++++++++++ 6 files changed, 72 insertions(+), 63 deletions(-) -- 2.20.1 diff --git a/arch/arm64/crypto/aes-ctr-fallback.h b/arch/arm64/crypto/aes-ctr-fallback.h deleted file mode 100644 index c9285717b6b5..000000000000 --- a/arch/arm64/crypto/aes-ctr-fallback.h +++ /dev/null @@ -1,53 +0,0 @@ -/* - * Fallback for sync aes(ctr) in contexts where kernel mode NEON - * is not allowed - * - * Copyright (C) 2017 Linaro Ltd - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation.
- */ - -#include -#include - -asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds); - -static inline int aes_ctr_encrypt_fallback(struct crypto_aes_ctx *ctx, - struct skcipher_request *req) -{ - struct skcipher_walk walk; - u8 buf[AES_BLOCK_SIZE]; - int err; - - err = skcipher_walk_virt(&walk, req, true); - - while (walk.nbytes > 0) { - u8 *dst = walk.dst.virt.addr; - u8 *src = walk.src.virt.addr; - int nbytes = walk.nbytes; - int tail = 0; - - if (nbytes < walk.total) { - nbytes = round_down(nbytes, AES_BLOCK_SIZE); - tail = walk.nbytes % AES_BLOCK_SIZE; - } - - do { - int bsize = min(nbytes, AES_BLOCK_SIZE); - - __aes_arm64_encrypt(ctx->key_enc, buf, walk.iv, - 6 + ctx->key_length / 4); - crypto_xor_cpy(dst, src, buf, bsize); - crypto_inc(walk.iv, AES_BLOCK_SIZE); - - dst += AES_BLOCK_SIZE; - src += AES_BLOCK_SIZE; - nbytes -= AES_BLOCK_SIZE; - } while (nbytes > 0); - - err = skcipher_walk_done(&walk, tail); - } - return err; -} diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c index 8fa17a764802..3d9cedbb91c9 100644 --- a/arch/arm64/crypto/aes-glue.c +++ b/arch/arm64/crypto/aes-glue.c @@ -21,7 +21,6 @@ #include #include "aes-ce-setkey.h" -#include "aes-ctr-fallback.h" #ifdef USE_V8_CRYPTO_EXTENSIONS #define MODE "ce" @@ -409,8 +408,15 @@ static int ctr_encrypt_sync(struct skcipher_request *req) struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); - if (!crypto_simd_usable()) - return aes_ctr_encrypt_fallback(ctx, req); + if (!crypto_simd_usable()) { + struct skcipher_walk walk; + int err; + + err = skcipher_walk_virt(&walk, req, true); + if (err) + return err; + return skcipher_encrypt_aes_ctr(&walk, ctx); + } return ctr_encrypt(req); } @@ -653,15 +659,14 @@ static void mac_do_update(struct crypto_aes_ctx *ctx, u8 const in[], int blocks, kernel_neon_end(); } else { if (enc_before) - __aes_arm64_encrypt(ctx->key_enc, dg, dg, rounds); + aes_encrypt(ctx, dg, dg); while (blocks--) { crypto_xor(dg, in, AES_BLOCK_SIZE); in += AES_BLOCK_SIZE; if (blocks || enc_after) - __aes_arm64_encrypt(ctx->key_enc, dg, dg, - rounds); + aes_encrypt(ctx, dg, dg); } } } diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c index cb8d90f795a0..02d46e97c1e1 100644 --- a/arch/arm64/crypto/aes-neonbs-glue.c +++ b/arch/arm64/crypto/aes-neonbs-glue.c @@ -16,8 +16,6 @@ #include #include -#include "aes-ctr-fallback.h" - MODULE_AUTHOR("Ard Biesheuvel "); MODULE_LICENSE("GPL v2"); @@ -288,9 +286,15 @@ static int ctr_encrypt_sync(struct skcipher_request *req) struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct aesbs_ctr_ctx *ctx = crypto_skcipher_ctx(tfm); - if (!crypto_simd_usable()) - return aes_ctr_encrypt_fallback(&ctx->fallback, req); + if (!crypto_simd_usable()) { + struct skcipher_walk walk; + int err; + err = skcipher_walk_virt(&walk, req, true); + if (err) + return err; + return skcipher_encrypt_aes_ctr(&walk, &ctx->fallback); + } return ctr_encrypt(req); } diff --git a/crypto/Kconfig b/crypto/Kconfig index 3b08230fe3ba..efeb307c0594 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1061,6 +1061,7 @@ comment "Ciphers" config CRYPTO_LIB_AES tristate + select CRYPTO_ALGAPI config CRYPTO_AES tristate "AES cipher algorithms" diff --git a/include/crypto/aes.h b/include/crypto/aes.h index 31ba40d803df..f67c38500746 100644 --- a/include/crypto/aes.h +++ b/include/crypto/aes.h @@ -8,6 +8,8 @@ #include #include +#include +#include #define AES_MIN_KEY_SIZE 16 
#define AES_MAX_KEY_SIZE 32 @@ -69,4 +71,13 @@ void aes_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in); */ void aes_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in); +/** + * skcipher_encrypt_aes_ctr - Process a aes(ctr) skcipher encryption request + * using the generic AES implementation. + * @walk: the skcipher walk data structure that describes the data to operate on + * @ctx: the AES key schedule + */ +int skcipher_encrypt_aes_ctr(struct skcipher_walk *walk, + const struct crypto_aes_ctx *ctx); + #endif diff --git a/lib/crypto/aes.c b/lib/crypto/aes.c index 57596148b010..f5ef29eaa714 100644 --- a/lib/crypto/aes.c +++ b/lib/crypto/aes.c @@ -363,6 +363,47 @@ void aes_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in) } EXPORT_SYMBOL(aes_decrypt); +/** + * skcipher_encrypt_aes_ctr - Process a aes(ctr) skcipher encryption request + * using the generic AES implementation. + * @walk: the skcipher walk data structure that describes the data to operate on + * @ctx: the AES key schedule + */ +int skcipher_encrypt_aes_ctr(struct skcipher_walk *walk, + const struct crypto_aes_ctx *ctx) +{ + u8 buf[AES_BLOCK_SIZE]; + int err = 0; + + while (walk->nbytes > 0) { + u8 *dst = walk->dst.virt.addr; + u8 *src = walk->src.virt.addr; + int nbytes = walk->nbytes; + int tail = 0; + + if (nbytes < walk->total) { + nbytes = round_down(nbytes, AES_BLOCK_SIZE); + tail = walk->nbytes % AES_BLOCK_SIZE; + } + + do { + int bsize = min(nbytes, AES_BLOCK_SIZE); + + aes_encrypt(ctx, buf, walk->iv); + crypto_xor_cpy(dst, src, buf, bsize); + crypto_inc(walk->iv, AES_BLOCK_SIZE); + + dst += AES_BLOCK_SIZE; + src += AES_BLOCK_SIZE; + nbytes -= AES_BLOCK_SIZE; + } while (nbytes > 0); + + err = skcipher_walk_done(walk, tail); + } + return err; +} +EXPORT_SYMBOL(skcipher_encrypt_aes_ctr); + MODULE_DESCRIPTION("Generic AES library"); MODULE_AUTHOR("Ard Biesheuvel "); MODULE_LICENSE("GPL v2"); From patchwork Wed Jun 12 12:48:36 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166562 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3640950ilk; Wed, 12 Jun 2019 05:49:10 -0700 (PDT) X-Google-Smtp-Source: APXvYqwha3EMnlrniNfg3o8D+re1KihB9ubGd0/GFD0X6nxbkOVDLeYwN5Cedk7QgyHgsQcKNg7b X-Received: by 2002:a63:c94f:: with SMTP id y15mr25317139pgg.159.1560343750743; Wed, 12 Jun 2019 05:49:10 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343750; cv=none; d=google.com; s=arc-20160816; b=H0KSrtsDOpq1T/AfqcsNDc/4fnPVXsg5NzKPcYDBj2WQ9rotFEcIL4jD7rmc3waZ+Q /zznpwsC3PjzcVqGkt1Tl0zaPz1nEEW8XkA42eiLR3/s3BzXwN2uyqrgIB1QJSPimn5j E/a9+kGnLBAPh6BH+O3Rf6XuzUUwQblnusUzl6fX+rAQEMy2Gc8AB72icfwNY0IUISKN wFzI9vIh0dcjdXeeaMc7Bt3GIuH4S1aXOomcKoP4vy5JhwTXX15dyu6d2euh7VkWuVtR cVV3uMrRT76tVxOYrjLIt09giSTFdMASKX8GHhe9fCx6ctO9RcW27ud3GUrnc3u/YyPz tUuw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=1xxctaVyY+STLE0T/Ia3mG1xDByiyGRaSaAm3A+Hnfc=; b=QtW1BB2J5/tytlyVsDtfoD3esbV72HL1WUumdzG+Y3B4u4um26Y7Ef8wHuXwBOgdnu w3lSpnWUYfCe4q55YTZytaBBmUqOOqds5wY2ld1qyOISqjd4f3VONbdlYRSVkFzg6bcv nAqOzjk1sAHebvOmImiAbcFrTVbbao5BXFmN80MdovOb37xv/zI1ZIXTHGAv3sP6lowN XbrfexrtCbRDwAGKVlis+PTSoE6a1f/OgFhbAa0uu5zcdvKjqwY+w/hh0iqVTTbIta5a BBQBhmtfTOEt4AMEmqIw6EoHVmxQNDGZblfb7bDaFPOFajBfJL64cYwaFVjI76QOKksH SXaA== 
ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=tz4lZUGm; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.49.10; Wed, 12 Jun 2019 05:49:10 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=tz4lZUGm; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2439234AbfFLMtJ (ORCPT + 3 others); Wed, 12 Jun 2019 08:49:09 -0400 Received: from mail-wr1-f67.google.com ([209.85.221.67]:45980 "EHLO mail-wr1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2439227AbfFLMtJ (ORCPT ); Wed, 12 Jun 2019 08:49:09 -0400 Received: by mail-wr1-f67.google.com with SMTP id f9so16738345wre.12 for ; Wed, 12 Jun 2019 05:49:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=1xxctaVyY+STLE0T/Ia3mG1xDByiyGRaSaAm3A+Hnfc=; b=tz4lZUGm9GNsN5QswSxBu7YXktply7N1ckplmsyGaNzKPqdMjhYQIq5MA5Za8gHjVv hbmHeCSyK9Dv1HCnMTpSoZuYGyLJcOccSyP8R9Wlt5gRPw7cEK4ArZ5pLJnKCI+QeEtT jn0ql/+fmwDUAt+1c2EgEFXX1XuxCczqDAGslmpsFUU2LCQZGEhBEjt6KKYQzuNZM+VI PZQNvwUOv0/D3yTH8JTNF3Oyl65MrIBMnLBLuMDcOYPabmhfCtaAvgK/9SzQCWq2D15L 8yi/Hml1UiDmGFEws/lyBKVDlHJFjsQyUEc8MhCwy90KY9ki2vxhabPaIPfReMKal4nV C12Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=1xxctaVyY+STLE0T/Ia3mG1xDByiyGRaSaAm3A+Hnfc=; b=iImMA1ahyb/c6nZ6b3P8COk8rbhMib4CSondHip2M+1iPVctbH6g0k+ryPVkiF4aXL EaviRYaAQuo31kXGrlsiJC7c+tHAoOL4rsGr7IJ9Wtj9Z17XIZXP8vKJoOEsezUlKpt+ yuTCNVxlsZYi3kt9xU/LWdEt0RFwX88DmOGFcPxJJhUOLBHk+bk++vYJbKiI1RbXLR9e ptAPVV8RUIVkAdBYLcZg6e8tlcBzAIN/RpI/3Cfxt0S0GtvNZhciv1plz5GV66RmOiBK 8TqhMBt9ELQT7POnZWw1ZqCrIjin8H7WzffO8vMDweX/QOTCIcdngPfWwb+uKjDp5iIL /ToQ== X-Gm-Message-State: APjAAAWkbL/U51IF/3X3Z4jMJa8FP3H1MnuYG9mSWbrNC/rADA5o0Y44 vr8flDz937CO89fU2zblX8/vp2ejuUrB+A== X-Received: by 2002:adf:9d81:: with SMTP id p1mr5078018wre.294.1560343746557; Wed, 12 Jun 2019 05:49:06 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.49.05 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:49:05 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 18/20] crypto: arm/aes-ce - provide a synchronous version of ctr(aes) Date: Wed, 12 Jun 2019 14:48:36 +0200 Message-Id: <20190612124838.2492-19-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 
In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org AES in CTR mode is used by modes such as GCM and CCM, which are often used in contexts where only synchronous ciphers are permitted. So provide a synchronous version of ctr(aes) based on the existing code. This requires a non-SIMD fallback to deal with invocations occurring from a context where SIMD instructions may not be used. We have a helper for this now in the AES library, so wire that up. Signed-off-by: Ard Biesheuvel --- arch/arm/crypto/aes-ce-glue.c | 36 ++++++++++++++++++++ 1 file changed, 36 insertions(+) -- 2.20.1 diff --git a/arch/arm/crypto/aes-ce-glue.c b/arch/arm/crypto/aes-ce-glue.c index 04ba66903674..cdcc4b09e7db 100644 --- a/arch/arm/crypto/aes-ce-glue.c +++ b/arch/arm/crypto/aes-ce-glue.c @@ -10,6 +10,7 @@ #include #include +#include #include #include #include @@ -292,6 +293,23 @@ static int ctr_encrypt(struct skcipher_request *req) return err; } +static int ctr_encrypt_sync(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + + if (!crypto_simd_usable()) { + struct skcipher_walk walk; + int err; + + err = skcipher_walk_virt(&walk, req, true); + if (err) + return err; + return skcipher_encrypt_aes_ctr(&walk, ctx); + } + return ctr_encrypt(req); +} + static int xts_encrypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); @@ -381,6 +399,21 @@ static struct skcipher_alg aes_algs[] = { { .setkey = ce_aes_setkey, .encrypt = ctr_encrypt, .decrypt = ctr_encrypt, +}, { + .base.cra_name = "ctr(aes)", + .base.cra_driver_name = "ctr-aes-ce-sync", + .base.cra_priority = 300 - 1, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct crypto_aes_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .chunksize = AES_BLOCK_SIZE, + .setkey = ce_aes_setkey, + .encrypt = ctr_encrypt_sync, + .decrypt = ctr_encrypt_sync, }, { .base.cra_name = "__xts(aes)", .base.cra_driver_name = "__xts-aes-ce", @@ -424,6 +457,9 @@ static int __init aes_init(void) return err; for (i = 0; i < ARRAY_SIZE(aes_algs); i++) { + if (!(aes_algs[i].base.cra_flags & CRYPTO_ALG_INTERNAL)) + continue; + algname = aes_algs[i].base.cra_name + 2; drvname = aes_algs[i].base.cra_driver_name + 2; basename = aes_algs[i].base.cra_driver_name; From patchwork Wed Jun 12 12:48:37 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166563 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3641003ilk; Wed, 12 Jun 2019 05:49:13 -0700 (PDT) X-Google-Smtp-Source: APXvYqx2hdGTbfFrCIq18yK9BnT09XPJk0ZP031ybx6Axc/gUAorU9aIPJHF+1s7uFZ0GXLMD4mu X-Received: by 2002:a17:90a:6544:: with SMTP id f4mr32947028pjs.17.1560343752956; Wed, 12 Jun 2019 05:49:12 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343752; cv=none; d=google.com; s=arc-20160816; b=Su0diL0IGSWE7snIGxOrv1ivAMFCdvtlzwF/Ea+tAciC6RWfQLEBjv4jUH925btEUP J9W0ok4EdgBp5jLXeJqhNAupxiBIOBtJknlkrsqDuZOIwYQyhfgNlnYEP3HzGxY+5XW1 +Lf9iXOfHDrcNx4RrvBfZyn69xrAtE5ulpOTraGTuQp6+AzBTu6qc6fHfAWgjQNYCGMS ejEdFP5v4QfmudpPCD4xpPKZFYLPn9jvYpgoFQpgRfoz8yzhoYIb8J8v2QigspN9hReQ 
OOo6lDm2PLAQ6Q+oc+klAgR4yYEI44z0CUwNx4ObnEw3Paw3geqlarvaZ5TOSLj8nRNQ fgXQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=80OaoKT+inVFqryUaesnban4gPiDdroPQoZlqPQnBMg=; b=wEWk0TtfEQwsMLUUGwV1iawvTYOAFh4VdsLUt/nngqIpXY0vIFErVJQVVu9Xk5h+GQ OoxwUG2VPaSoquCXlirBXjxzxmXJBxbRRttgCpkMKyvAmlsNyHT8xM8ghgdN3PWTYwyq 4lSPQo9fy5mZk/C6tUvIevvi9f6DyqAPPZnVtMIRYurGVcGDUsxGXyJ5/1K8nStw7rAc grultilLWwLziIB4/ERtyalPu6+yoj8GDJlIXrw8OuzxhRnR9bID5QVNYOqteN5ndVwh vNPfgEQVz7QpORhLIz/blzQgvfFur8t5jG+guXfkF+LM4v0D4j+4+b6YMEmF3k3UfCe8 pHzg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=s+wuB5UF; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.49.12; Wed, 12 Jun 2019 05:49:12 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=s+wuB5UF; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2439233AbfFLMtM (ORCPT + 3 others); Wed, 12 Jun 2019 08:49:12 -0400 Received: from mail-wm1-f67.google.com ([209.85.128.67]:55492 "EHLO mail-wm1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2439235AbfFLMtL (ORCPT ); Wed, 12 Jun 2019 08:49:11 -0400 Received: by mail-wm1-f67.google.com with SMTP id a15so6430333wmj.5 for ; Wed, 12 Jun 2019 05:49:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=80OaoKT+inVFqryUaesnban4gPiDdroPQoZlqPQnBMg=; b=s+wuB5UFnOhqXz+mQtE7yzhq5X2+mZ/YAF3O3DEM+El+csWqg8aIFgALkrOWVw6NyM C3ciGneUJ589CJvJkA8fbHUB/jnjUlNGD6zGRC/U4JMYv/nTMSDbxqi9HFb2n8xawAIJ CSFF/YZKiy8GnsLn/jWHO52iFat6DzOq6o5uf2Csa/OJl2dto1xVyzFG2mLb/qpPxD5m S0C4zfGXljSZCXymur9UAJYEYqFlDejE+JR/lM/WWBFsAXScS0WCPWntLGFJ47lN0009 TsbybyYXVSfwE0bybY0mYx842U/LDK1wUcOa5scCzb5DWsZWcIweYY/S1JhzIbEd7JAJ b+qA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=80OaoKT+inVFqryUaesnban4gPiDdroPQoZlqPQnBMg=; b=RAq2eBCM202bmgefGn2dq/IoeUjBSqTu5QuKPF3Hfm1N8jnE7m173+ukkibskD68o+ J3wtbnIajuOLD2fmNdJwC0kZQ+L4LyY03VAhHXnO9WavOzDIldcCDn9s5a68fhWZBptB Q/twMXnAO5tOvJD5aEPjXNh9JkIybUSyrrM5zcWiENBV+QTJ2N1RwxkgA09ylvlgU1cT A+2AIavqf51qFacNCizGt2/mmHXTBNx4hDlDZ03cYxTWQ6bKG5WinLKzKlmfsmU9lRPy I/LFUZq9FE2DQPibAb+6obBwwuXgLokzU/epxcxMfYVUvkCg1qe7spJ07PndWvxEyfNk am1g== X-Gm-Message-State: APjAAAWucycUKR3n7Uo7yxiHNes+gTp4T4iV7pC59r4umPAsRItCrvip 4GdHgDKiLa0Rvz/mBIL+D8Mqf18/iQcvxw== 
X-Received: by 2002:a1c:48c5:: with SMTP id v188mr21239967wma.175.1560343747741; Wed, 12 Jun 2019 05:49:07 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.49.06 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:49:06 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 19/20] crypto: arm/aes-neonbs - provide a synchronous version of ctr(aes) Date: Wed, 12 Jun 2019 14:48:37 +0200 Message-Id: <20190612124838.2492-20-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org AES in CTR mode is used by modes such as GCM and CCM, which are often used in contexts where only synchronous ciphers are permitted. So provide a synchronous version of ctr(aes) based on the existing code. This requires a non-SIMD fallback to deal with invocations occurring from a context where SIMD instructions may not be used. We have a helper for this now in the AES library, so wire that up. Signed-off-by: Ard Biesheuvel --- arch/arm/crypto/aes-neonbs-glue.c | 58 ++++++++++++++++++++ 1 file changed, 58 insertions(+) -- 2.20.1 diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c index f43c9365b6a9..62cadb92379b 100644 --- a/arch/arm/crypto/aes-neonbs-glue.c +++ b/arch/arm/crypto/aes-neonbs-glue.c @@ -9,6 +9,7 @@ */ #include +#include #include #include #include @@ -57,6 +58,11 @@ struct aesbs_xts_ctx { struct crypto_cipher *tweak_tfm; }; +struct aesbs_ctr_ctx { + struct aesbs_ctx key; /* must be first member */ + struct crypto_aes_ctx fallback; +}; + static int aesbs_setkey(struct crypto_skcipher *tfm, const u8 *in_key, unsigned int key_len) { @@ -192,6 +198,25 @@ static void cbc_exit(struct crypto_tfm *tfm) crypto_free_cipher(ctx->enc_tfm); } +static int aesbs_ctr_setkey_sync(struct crypto_skcipher *tfm, const u8 *in_key, + unsigned int key_len) +{ + struct aesbs_ctr_ctx *ctx = crypto_skcipher_ctx(tfm); + int err; + + err = aes_expandkey(&ctx->fallback, in_key, key_len); + if (err) + return err; + + ctx->key.rounds = 6 + key_len / 4; + + kernel_neon_begin(); + aesbs_convert_key(ctx->key.rk, ctx->fallback.key_enc, ctx->key.rounds); + kernel_neon_end(); + + return 0; +} + static int ctr_encrypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); @@ -234,6 +259,23 @@ static int ctr_encrypt(struct skcipher_request *req) return err; } +static int ctr_encrypt_sync(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct aesbs_ctr_ctx *ctx = crypto_skcipher_ctx(tfm); + + if (!crypto_simd_usable()) { + struct skcipher_walk walk; + int err; + + err = skcipher_walk_virt(&walk, req, true); + if (err) + return err; + return skcipher_encrypt_aes_ctr(&walk, &ctx->fallback); + } + return ctr_encrypt(req); +} + static int aesbs_xts_setkey(struct crypto_skcipher *tfm, const u8 *in_key, unsigned int key_len) { @@ -361,6 +403,22 @@ static struct skcipher_alg aes_algs[] = { { .setkey = aesbs_setkey, .encrypt = ctr_encrypt, .decrypt = ctr_encrypt, +}, { + .base.cra_name = "ctr(aes)", + .base.cra_driver_name = "ctr-aes-neonbs-sync", + 
.base.cra_priority = 250 - 1, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct aesbs_ctr_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .chunksize = AES_BLOCK_SIZE, + .walksize = 8 * AES_BLOCK_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = aesbs_ctr_setkey_sync, + .encrypt = ctr_encrypt_sync, + .decrypt = ctr_encrypt_sync, }, { .base.cra_name = "__xts(aes)", .base.cra_driver_name = "__xts-aes-neonbs", From patchwork Wed Jun 12 12:48:38 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 166564 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3641017ilk; Wed, 12 Jun 2019 05:49:14 -0700 (PDT) X-Google-Smtp-Source: APXvYqzL9NzPaOawAdNnS+B+cOd4mszj/fME+YUvJA4nOywveCKXTXi+36VUrGAAuJycg+YwOrZJ X-Received: by 2002:a17:902:24c:: with SMTP id 70mr81081088plc.2.1560343754081; Wed, 12 Jun 2019 05:49:14 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1560343754; cv=none; d=google.com; s=arc-20160816; b=ibnhKE0Uxb/u+FGMniuA/WifKXGB525e0eAsZyhSXF04AJ8EpkilYTqQnIzmKHUdfm jzO66JVXBP8Zsxb3NjrvCfMIOfnk6Az4Yp72Wbu6mPspJ4r1XnkVCIdJR1zjU08N7/2I uUNwidiOsEEbNjAxd3Kh7ICwsgf7kOwJbWweMI1+noZy6XHR0JXsIZWHLiuR9cYS/Q1h x4nSV8KvfCK2j3Jqw0766nHP0IiPyzCfQ9Xa3fA9HfJGs1ZcV90NRTrXl/UIbIOb+s1Q h124OoHx8Vm2YG40aCebQbqKW5QqBnY83GwfCZswvhWeZYMIl0sHMrq2tK85nPqmFIrJ acHA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=0vLqup8kXDSW4TBicl2CoUwy3k2oxTX/PfRrsOfcNio=; b=KyuyEUbSQjFVsi/8iuAo3dH43GuRGO7mzGze0Z7FUyEl/5YLKRWO1s3zx6Pz2ICw96 O6KUbtyLkFyahzywMrgc6lmskgjQmm4lXaFudEMCaZWHCsVn6os7WGoIlfQX9QJmJLF6 XNDDfEXhz65GCczGJRZF68+w0AzjoK0IaYRpDrmKfMHTWpLr09G8h+Q2QkxOfXVKS2/f I2idnqqb6ARtmFf3uPGsSsoYuZr6EMVqJeBpzArhhHy5lSUH3W9HRYI/988XKExBuD4d EqNajILb8pPARYFJ9GUww32AkR3aVo/v8uoeTHOgMgEyiHeWnvPsEVPcFdwNULrn8rBh ddzw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="J5DbI/pv"; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id b14si16011497pgk.423.2019.06.12.05.49.13; Wed, 12 Jun 2019 05:49:14 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="J5DbI/pv"; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2439236AbfFLMtN (ORCPT + 3 others); Wed, 12 Jun 2019 08:49:13 -0400 Received: from mail-wm1-f66.google.com ([209.85.128.66]:55494 "EHLO mail-wm1-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2439231AbfFLMtM (ORCPT ); Wed, 12 Jun 2019 08:49:12 -0400 Received: by mail-wm1-f66.google.com with SMTP id a15so6430396wmj.5 for ; Wed, 12 Jun 2019 05:49:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=0vLqup8kXDSW4TBicl2CoUwy3k2oxTX/PfRrsOfcNio=; b=J5DbI/pva0+dCmUWVpXPhHY0jfiBrCLM4KEPpXNN2Cu51rLrZtYdu4RhvzX3xwY8fn VbkNxZuqFCNMler5SD2SFukanJJuSv0ghBfgXYnMs3fmhHnW6qPyJuJO/vrn1qweZVzh 64WHUwVG7/HOfYwWQgxZ/sFb+t733e16TS/CLti/ZE9f6/RFp8MoZcYr7NZt+0AAfKdw IyW4irDY8mD3VgINCyjgrFXgaaMUoMHpRo8+uG33ewigb3TMrf1rN1P6X+9VaJv924UC 3/ku+qvDm0GtGUhhbIGFeV6JWZSBznWRrimuK37R49ZOwjy53rsJufqcAer7KyajAdgC ik6g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=0vLqup8kXDSW4TBicl2CoUwy3k2oxTX/PfRrsOfcNio=; b=N+JSSMn6xHqySztRCW4O9ln4yKPVRsOhk4ZlkuzabUdee8WQKp9VAn03/4IIFpDiFg NVUGoMDLqGkEmDWJvFeShGhBks6B2L2+8UUgwwC/9PK30J10KDnstnnUkDwzp03bBRSX HCv7b0R/3ibiMt6WM7fQOadox57zTAQ3Kl+pmpHGn2IbEsWo4j/CXEm5lR7Zzx7b4ZF9 gd7VAGXf8PVgVFqHeKy38bAGggngjpYjxTzzJSFOU+vRtMASh7511vvzJBtDriAqpMu7 6MKq8Yz/WH/McQbdUN+4SqRo/u1UOQPnsK8SVnB+tXOoyJUyNW6n5qY31iQIvOjTaKfG AX1w== X-Gm-Message-State: APjAAAU7LewYhLLH/T+4mYGTvTWlZvmU6iA00oj/hwyH/PLSkmNIcFYv 1yhAW8qHkNbYJzq7LHa+s01sQ4eLS22Now== X-Received: by 2002:a05:600c:23d2:: with SMTP id p18mr21442037wmb.108.1560343750560; Wed, 12 Jun 2019 05:49:10 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.49.07 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:49:09 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 20/20] crypto: arm/ghash - provide a synchronous version Date: Wed, 12 Jun 2019 14:48:38 +0200 Message-Id: <20190612124838.2492-21-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org GHASH is used by the GCM mode, which is often used in contexts where only synchronous ciphers are permitted. 
So provide a synchronous version of GHASH based on the existing code. This requires a non-SIMD fallback to deal with invocations occurring from a context where SIMD instructions may not be used. Signed-off-by: Ard Biesheuvel --- arch/arm/crypto/ghash-ce-glue.c | 78 +++++++++++++------- 1 file changed, 52 insertions(+), 26 deletions(-) -- 2.20.1 diff --git a/arch/arm/crypto/ghash-ce-glue.c b/arch/arm/crypto/ghash-ce-glue.c index 39d1ccec1aab..ebb237ca874b 100644 --- a/arch/arm/crypto/ghash-ce-glue.c +++ b/arch/arm/crypto/ghash-ce-glue.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include #include @@ -33,6 +34,8 @@ struct ghash_key { u64 h2[2]; u64 h3[2]; u64 h4[2]; + + be128 k; }; struct ghash_desc_ctx { @@ -65,6 +68,36 @@ static int ghash_init(struct shash_desc *desc) return 0; } +static void ghash_do_update(int blocks, u64 dg[], const char *src, + struct ghash_key *key, const char *head) +{ + if (likely(crypto_simd_usable())) { + kernel_neon_begin(); + pmull_ghash_update(blocks, dg, src, key, head); + kernel_neon_end(); + } else { + be128 dst = { cpu_to_be64(dg[1]), cpu_to_be64(dg[0]) }; + + do { + const u8 *in = src; + + if (head) { + in = head; + blocks++; + head = NULL; + } else { + src += GHASH_BLOCK_SIZE; + } + + crypto_xor((u8 *)&dst, in, GHASH_BLOCK_SIZE); + gf128mul_lle(&dst, &key->k); + } while (--blocks); + + dg[0] = be64_to_cpu(dst.b); + dg[1] = be64_to_cpu(dst.a); + } +} + static int ghash_update(struct shash_desc *desc, const u8 *src, unsigned int len) { @@ -88,10 +121,8 @@ static int ghash_update(struct shash_desc *desc, const u8 *src, blocks = len / GHASH_BLOCK_SIZE; len %= GHASH_BLOCK_SIZE; - kernel_neon_begin(); - pmull_ghash_update(blocks, ctx->digest, src, key, - partial ? ctx->buf : NULL); - kernel_neon_end(); + ghash_do_update(blocks, ctx->digest, src, key, + partial ? 
ctx->buf : NULL); src += blocks * GHASH_BLOCK_SIZE; partial = 0; } @@ -109,9 +140,7 @@ static int ghash_final(struct shash_desc *desc, u8 *dst) struct ghash_key *key = crypto_shash_ctx(desc->tfm); memset(ctx->buf + partial, 0, GHASH_BLOCK_SIZE - partial); - kernel_neon_begin(); - pmull_ghash_update(1, ctx->digest, ctx->buf, key, NULL); - kernel_neon_end(); + ghash_do_update(1, ctx->digest, ctx->buf, key, NULL); } put_unaligned_be64(ctx->digest[1], dst); put_unaligned_be64(ctx->digest[0], dst + 8); @@ -135,24 +164,25 @@ static int ghash_setkey(struct crypto_shash *tfm, const u8 *inkey, unsigned int keylen) { struct ghash_key *key = crypto_shash_ctx(tfm); - be128 h, k; + be128 h; if (keylen != GHASH_BLOCK_SIZE) { crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); return -EINVAL; } - memcpy(&k, inkey, GHASH_BLOCK_SIZE); - ghash_reflect(key->h, &k); + /* needed for the fallback */ + memcpy(&key->k, inkey, GHASH_BLOCK_SIZE); + ghash_reflect(key->h, &key->k); - h = k; - gf128mul_lle(&h, &k); + h = key->k; + gf128mul_lle(&h, &key->k); ghash_reflect(key->h2, &h); - gf128mul_lle(&h, &k); + gf128mul_lle(&h, &key->k); ghash_reflect(key->h3, &h); - gf128mul_lle(&h, &k); + gf128mul_lle(&h, &key->k); ghash_reflect(key->h4, &h); return 0; @@ -165,15 +195,13 @@ static struct shash_alg ghash_alg = { .final = ghash_final, .setkey = ghash_setkey, .descsize = sizeof(struct ghash_desc_ctx), - .base = { - .cra_name = "__ghash", - .cra_driver_name = "__driver-ghash-ce", - .cra_priority = 0, - .cra_flags = CRYPTO_ALG_INTERNAL, - .cra_blocksize = GHASH_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct ghash_key), - .cra_module = THIS_MODULE, - }, + + .base.cra_name = "ghash", + .base.cra_driver_name = "ghash-ce-sync", + .base.cra_priority = 300 - 1, + .base.cra_blocksize = GHASH_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct ghash_key), + .base.cra_module = THIS_MODULE, }; static int ghash_async_init(struct ahash_request *req) @@ -288,9 +316,7 @@ static int ghash_async_init_tfm(struct crypto_tfm *tfm) struct cryptd_ahash *cryptd_tfm; struct ghash_async_ctx *ctx = crypto_tfm_ctx(tfm); - cryptd_tfm = cryptd_alloc_ahash("__driver-ghash-ce", - CRYPTO_ALG_INTERNAL, - CRYPTO_ALG_INTERNAL); + cryptd_tfm = cryptd_alloc_ahash("ghash-ce-sync", 0, 0); if (IS_ERR(cryptd_tfm)) return PTR_ERR(cryptd_tfm); ctx->cryptd_tfm = cryptd_tfm;
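
The ctr(aes) fallback wired up in the patches above reduces to a simple per-block construction: encrypt the current counter block, XOR the resulting keystream into the data, and advance the counter as a 128-bit big-endian integer; in the kernel helper those steps are aes_encrypt(), crypto_xor_cpy() and crypto_inc() inside an skcipher walk. As a point of reference only, the same construction is sketched below in standalone C over a flat buffer; ctr_crypt_sketch, ctr_inc and block_encrypt_fn are made-up names used for illustration, not kernel API.

#include <stddef.h>
#include <stdint.h>

#define CTR_BLOCK_SIZE 16

/*
 * Illustrative sketch only -- ctr_crypt_sketch and block_encrypt_fn are
 * made-up names, not kernel API. The kernel helper does the same work
 * inside an skcipher walk, using aes_encrypt(), crypto_xor_cpy() and
 * crypto_inc().
 */
typedef void (*block_encrypt_fn)(const void *key,
				 uint8_t out[CTR_BLOCK_SIZE],
				 const uint8_t in[CTR_BLOCK_SIZE]);

/* 128-bit big-endian increment of the counter block (cf. crypto_inc()) */
static void ctr_inc(uint8_t ctr[CTR_BLOCK_SIZE])
{
	for (int i = CTR_BLOCK_SIZE - 1; i >= 0; i--)
		if (++ctr[i] != 0)
			break;
}

/* Encrypt or decrypt len bytes in CTR mode (the two are identical). */
static void ctr_crypt_sketch(block_encrypt_fn encrypt, const void *key,
			     uint8_t ctr[CTR_BLOCK_SIZE],
			     uint8_t *dst, const uint8_t *src, size_t len)
{
	uint8_t keystream[CTR_BLOCK_SIZE];

	while (len > 0) {
		size_t n = len < CTR_BLOCK_SIZE ? len : CTR_BLOCK_SIZE;

		encrypt(key, keystream, ctr);	/* E_K(counter) */
		for (size_t i = 0; i < n; i++)	/* cf. crypto_xor_cpy() */
			dst[i] = src[i] ^ keystream[i];
		ctr_inc(ctr);			/* cf. crypto_inc() */

		dst += n;
		src += n;
		len -= n;
	}
}

Because CTR mode only ever uses the block cipher in the encrypt direction, the same routine handles both directions, which is why the new sync algorithms register ctr_encrypt_sync() as both .encrypt and .decrypt.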
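
The GHASH fallback in the final patch computes, for each 16-byte block, digest = (digest XOR block) * H in GF(2^128), with gf128mul_lle() doing the field multiplication; the raw key is kept in key->k precisely so this non-SIMD path has something to multiply by. For reference, the multiplication as specified for GHASH in NIST SP 800-38D can be written in plain C as below; gf128_mul_sketch and ghash_step_sketch are illustrative names, not the kernel's gf128mul implementation.

#include <stdint.h>
#include <string.h>

/*
 * Bit-by-bit multiplication in GF(2^128) as specified for GHASH in
 * NIST SP 800-38D (reduction constant R = 0xe1 followed by 120 zero
 * bits, bit 0 of a block being the most significant bit of its first
 * byte). Illustrative sketch only -- the kernel fallback calls
 * gf128mul_lle() from the gf128mul library instead of this routine.
 */
static void gf128_mul_sketch(uint8_t x[16], const uint8_t h[16])
{
	uint8_t z[16] = { 0 };
	uint8_t v[16];

	memcpy(v, h, 16);

	for (int i = 0; i < 128; i++) {
		/* Z ^= V whenever bit i of X is set */
		if (x[i / 8] & (0x80 >> (i % 8))) {
			for (int j = 0; j < 16; j++)
				z[j] ^= v[j];
		}

		/* V = V * x, reduced modulo the GHASH polynomial */
		int carry = v[15] & 1;

		for (int j = 15; j > 0; j--)
			v[j] = (uint8_t)((v[j] >> 1) | (v[j - 1] << 7));
		v[0] >>= 1;
		if (carry)
			v[0] ^= 0xe1;
	}

	memcpy(x, z, 16);
}

/* One GHASH round, i.e. the math performed by the non-SIMD branch per
 * block: digest = (digest ^ block) * H
 */
static void ghash_step_sketch(uint8_t digest[16], const uint8_t block[16],
			      const uint8_t h[16])
{
	for (int i = 0; i < 16; i++)
		digest[i] ^= block[i];
	gf128_mul_sketch(digest, h);
}

A production implementation would use table-driven or carry-less multiply code for speed; the bit-by-bit version above is only meant to make explicit the per-block math behind the be128/gf128mul_lle fallback path.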