From patchwork Mon Oct 14 12:19:06 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 176237
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Miller" , Eric Biggers , linux-arm-kernel@lists.infradead.org, Heiko Stuebner Subject: [PATCH 21/25] crypto: rockchip - switch to skcipher API Date: Mon, 14 Oct 2019 14:19:06 +0200 Message-Id: <20191014121910.7264-22-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20191014121910.7264-1-ard.biesheuvel@linaro.org> References: <20191014121910.7264-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 august 2015 introduced the new skcipher API which is supposed to replace both blkcipher and ablkcipher. While all consumers of the API have been converted long ago, some producers of the ablkcipher remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API. So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future. Cc: Heiko Stuebner Signed-off-by: Ard Biesheuvel --- drivers/crypto/rockchip/Makefile | 2 +- drivers/crypto/rockchip/rk3288_crypto.c | 8 +- drivers/crypto/rockchip/rk3288_crypto.h | 3 +- drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c | 556 -------------------- drivers/crypto/rockchip/rk3288_crypto_skcipher.c | 538 +++++++++++++++++++ 5 files changed, 545 insertions(+), 562 deletions(-) -- 2.20.1 diff --git a/drivers/crypto/rockchip/Makefile b/drivers/crypto/rockchip/Makefile index 6e23764e6c8a..785277aca71e 100644 --- a/drivers/crypto/rockchip/Makefile +++ b/drivers/crypto/rockchip/Makefile @@ -1,5 +1,5 @@ # SPDX-License-Identifier: GPL-2.0-only obj-$(CONFIG_CRYPTO_DEV_ROCKCHIP) += rk_crypto.o rk_crypto-objs := rk3288_crypto.o \ - rk3288_crypto_ablkcipher.o \ + rk3288_crypto_skcipher.o \ rk3288_crypto_ahash.o diff --git a/drivers/crypto/rockchip/rk3288_crypto.c b/drivers/crypto/rockchip/rk3288_crypto.c index e5714ef24bf2..f385587f99af 100644 --- a/drivers/crypto/rockchip/rk3288_crypto.c +++ b/drivers/crypto/rockchip/rk3288_crypto.c @@ -264,8 +264,8 @@ static int rk_crypto_register(struct rk_crypto_info *crypto_info) for (i = 0; i < ARRAY_SIZE(rk_cipher_algs); i++) { rk_cipher_algs[i]->dev = crypto_info; if (rk_cipher_algs[i]->type == ALG_TYPE_CIPHER) - err = crypto_register_alg( - &rk_cipher_algs[i]->alg.crypto); + err = crypto_register_skcipher( + &rk_cipher_algs[i]->alg.skcipher); else err = crypto_register_ahash( &rk_cipher_algs[i]->alg.hash); @@ -277,7 +277,7 @@ static int rk_crypto_register(struct rk_crypto_info *crypto_info) err_cipher_algs: for (k = 0; k < i; k++) { if (rk_cipher_algs[i]->type == ALG_TYPE_CIPHER) - crypto_unregister_alg(&rk_cipher_algs[k]->alg.crypto); + crypto_unregister_skcipher(&rk_cipher_algs[k]->alg.skcipher); else crypto_unregister_ahash(&rk_cipher_algs[i]->alg.hash); } @@ -290,7 +290,7 @@ static void rk_crypto_unregister(void) for (i = 0; i < ARRAY_SIZE(rk_cipher_algs); i++) { if (rk_cipher_algs[i]->type == ALG_TYPE_CIPHER) - crypto_unregister_alg(&rk_cipher_algs[i]->alg.crypto); + crypto_unregister_skcipher(&rk_cipher_algs[i]->alg.skcipher); else crypto_unregister_ahash(&rk_cipher_algs[i]->alg.hash); } diff --git a/drivers/crypto/rockchip/rk3288_crypto.h b/drivers/crypto/rockchip/rk3288_crypto.h index 18e2b3f29336..2b49c677afdb 100644 --- a/drivers/crypto/rockchip/rk3288_crypto.h +++ b/drivers/crypto/rockchip/rk3288_crypto.h @@ -8,6 +8,7 @@ #include #include #include +#include #include #include @@ -256,7 
+257,7 @@ enum alg_type { struct rk_crypto_tmp { struct rk_crypto_info *dev; union { - struct crypto_alg crypto; + struct skcipher_alg skcipher; struct ahash_alg hash; } alg; enum alg_type type; diff --git a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c deleted file mode 100644 index d0f4b2d18059..000000000000 --- a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c +++ /dev/null @@ -1,556 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * Crypto acceleration support for Rockchip RK3288 - * - * Copyright (c) 2015, Fuzhou Rockchip Electronics Co., Ltd - * - * Author: Zain Wang - * - * Some ideas are from marvell-cesa.c and s5p-sss.c driver. - */ -#include "rk3288_crypto.h" - -#define RK_CRYPTO_DEC BIT(0) - -static void rk_crypto_complete(struct crypto_async_request *base, int err) -{ - if (base->complete) - base->complete(base, err); -} - -static int rk_handle_req(struct rk_crypto_info *dev, - struct ablkcipher_request *req) -{ - if (!IS_ALIGNED(req->nbytes, dev->align_size)) - return -EINVAL; - else - return dev->enqueue(dev, &req->base); -} - -static int rk_aes_setkey(struct crypto_ablkcipher *cipher, - const u8 *key, unsigned int keylen) -{ - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); - struct rk_cipher_ctx *ctx = crypto_tfm_ctx(tfm); - - if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 && - keylen != AES_KEYSIZE_256) { - crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); - return -EINVAL; - } - ctx->keylen = keylen; - memcpy_toio(ctx->dev->reg + RK_CRYPTO_AES_KEY_0, key, keylen); - return 0; -} - -static int rk_des_setkey(struct crypto_ablkcipher *cipher, - const u8 *key, unsigned int keylen) -{ - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(cipher); - int err; - - err = verify_ablkcipher_des_key(cipher, key); - if (err) - return err; - - ctx->keylen = keylen; - memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_KEY1_0, key, keylen); - return 0; -} - -static int rk_tdes_setkey(struct crypto_ablkcipher *cipher, - const u8 *key, unsigned int keylen) -{ - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(cipher); - int err; - - err = verify_ablkcipher_des3_key(cipher, key); - if (err) - return err; - - ctx->keylen = keylen; - memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_KEY1_0, key, keylen); - return 0; -} - -static int rk_aes_ecb_encrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_AES_ECB_MODE; - return rk_handle_req(dev, req); -} - -static int rk_aes_ecb_decrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_AES_ECB_MODE | RK_CRYPTO_DEC; - return rk_handle_req(dev, req); -} - -static int rk_aes_cbc_encrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_AES_CBC_MODE; - return rk_handle_req(dev, req); -} - -static int rk_aes_cbc_decrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = 
RK_CRYPTO_AES_CBC_MODE | RK_CRYPTO_DEC; - return rk_handle_req(dev, req); -} - -static int rk_des_ecb_encrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = 0; - return rk_handle_req(dev, req); -} - -static int rk_des_ecb_decrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_DEC; - return rk_handle_req(dev, req); -} - -static int rk_des_cbc_encrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC; - return rk_handle_req(dev, req); -} - -static int rk_des_cbc_decrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC | RK_CRYPTO_DEC; - return rk_handle_req(dev, req); -} - -static int rk_des3_ede_ecb_encrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_TDES_SELECT; - return rk_handle_req(dev, req); -} - -static int rk_des3_ede_ecb_decrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_DEC; - return rk_handle_req(dev, req); -} - -static int rk_des3_ede_cbc_encrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC; - return rk_handle_req(dev, req); -} - -static int rk_des3_ede_cbc_decrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC | - RK_CRYPTO_DEC; - return rk_handle_req(dev, req); -} - -static void rk_ablk_hw_init(struct rk_crypto_info *dev) -{ - struct ablkcipher_request *req = - ablkcipher_request_cast(dev->async_req); - struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(req); - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(cipher); - u32 ivsize, block, conf_reg = 0; - - block = crypto_tfm_alg_blocksize(tfm); - ivsize = crypto_ablkcipher_ivsize(cipher); - - if (block == DES_BLOCK_SIZE) { - ctx->mode |= RK_CRYPTO_TDES_FIFO_MODE | - RK_CRYPTO_TDES_BYTESWAP_KEY | - RK_CRYPTO_TDES_BYTESWAP_IV; - CRYPTO_WRITE(dev, RK_CRYPTO_TDES_CTRL, ctx->mode); - memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, req->info, ivsize); - conf_reg = RK_CRYPTO_DESSEL; - } else { - ctx->mode |= RK_CRYPTO_AES_FIFO_MODE | - RK_CRYPTO_AES_KEY_CHANGE | - RK_CRYPTO_AES_BYTESWAP_KEY | - RK_CRYPTO_AES_BYTESWAP_IV; - if 
(ctx->keylen == AES_KEYSIZE_192) - ctx->mode |= RK_CRYPTO_AES_192BIT_key; - else if (ctx->keylen == AES_KEYSIZE_256) - ctx->mode |= RK_CRYPTO_AES_256BIT_key; - CRYPTO_WRITE(dev, RK_CRYPTO_AES_CTRL, ctx->mode); - memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, req->info, ivsize); - } - conf_reg |= RK_CRYPTO_BYTESWAP_BTFIFO | - RK_CRYPTO_BYTESWAP_BRFIFO; - CRYPTO_WRITE(dev, RK_CRYPTO_CONF, conf_reg); - CRYPTO_WRITE(dev, RK_CRYPTO_INTENA, - RK_CRYPTO_BCDMA_ERR_ENA | RK_CRYPTO_BCDMA_DONE_ENA); -} - -static void crypto_dma_start(struct rk_crypto_info *dev) -{ - CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAS, dev->addr_in); - CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAL, dev->count / 4); - CRYPTO_WRITE(dev, RK_CRYPTO_BTDMAS, dev->addr_out); - CRYPTO_WRITE(dev, RK_CRYPTO_CTRL, RK_CRYPTO_BLOCK_START | - _SBF(RK_CRYPTO_BLOCK_START, 16)); -} - -static int rk_set_data_start(struct rk_crypto_info *dev) -{ - int err; - struct ablkcipher_request *req = - ablkcipher_request_cast(dev->async_req); - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - u32 ivsize = crypto_ablkcipher_ivsize(tfm); - u8 *src_last_blk = page_address(sg_page(dev->sg_src)) + - dev->sg_src->offset + dev->sg_src->length - ivsize; - - /* Store the iv that need to be updated in chain mode. - * And update the IV buffer to contain the next IV for decryption mode. - */ - if (ctx->mode & RK_CRYPTO_DEC) { - memcpy(ctx->iv, src_last_blk, ivsize); - sg_pcopy_to_buffer(dev->first, dev->src_nents, req->info, - ivsize, dev->total - ivsize); - } - - err = dev->load_data(dev, dev->sg_src, dev->sg_dst); - if (!err) - crypto_dma_start(dev); - return err; -} - -static int rk_ablk_start(struct rk_crypto_info *dev) -{ - struct ablkcipher_request *req = - ablkcipher_request_cast(dev->async_req); - unsigned long flags; - int err = 0; - - dev->left_bytes = req->nbytes; - dev->total = req->nbytes; - dev->sg_src = req->src; - dev->first = req->src; - dev->src_nents = sg_nents(req->src); - dev->sg_dst = req->dst; - dev->dst_nents = sg_nents(req->dst); - dev->aligned = 1; - - spin_lock_irqsave(&dev->lock, flags); - rk_ablk_hw_init(dev); - err = rk_set_data_start(dev); - spin_unlock_irqrestore(&dev->lock, flags); - return err; -} - -static void rk_iv_copyback(struct rk_crypto_info *dev) -{ - struct ablkcipher_request *req = - ablkcipher_request_cast(dev->async_req); - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - u32 ivsize = crypto_ablkcipher_ivsize(tfm); - - /* Update the IV buffer to contain the next IV for encryption mode. 
*/ - if (!(ctx->mode & RK_CRYPTO_DEC)) { - if (dev->aligned) { - memcpy(req->info, sg_virt(dev->sg_dst) + - dev->sg_dst->length - ivsize, ivsize); - } else { - memcpy(req->info, dev->addr_vir + - dev->count - ivsize, ivsize); - } - } -} - -static void rk_update_iv(struct rk_crypto_info *dev) -{ - struct ablkcipher_request *req = - ablkcipher_request_cast(dev->async_req); - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - u32 ivsize = crypto_ablkcipher_ivsize(tfm); - u8 *new_iv = NULL; - - if (ctx->mode & RK_CRYPTO_DEC) { - new_iv = ctx->iv; - } else { - new_iv = page_address(sg_page(dev->sg_dst)) + - dev->sg_dst->offset + dev->sg_dst->length - ivsize; - } - - if (ivsize == DES_BLOCK_SIZE) - memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, new_iv, ivsize); - else if (ivsize == AES_BLOCK_SIZE) - memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, new_iv, ivsize); -} - -/* return: - * true some err was occurred - * fault no err, continue - */ -static int rk_ablk_rx(struct rk_crypto_info *dev) -{ - int err = 0; - struct ablkcipher_request *req = - ablkcipher_request_cast(dev->async_req); - - dev->unload_data(dev); - if (!dev->aligned) { - if (!sg_pcopy_from_buffer(req->dst, dev->dst_nents, - dev->addr_vir, dev->count, - dev->total - dev->left_bytes - - dev->count)) { - err = -EINVAL; - goto out_rx; - } - } - if (dev->left_bytes) { - rk_update_iv(dev); - if (dev->aligned) { - if (sg_is_last(dev->sg_src)) { - dev_err(dev->dev, "[%s:%d] Lack of data\n", - __func__, __LINE__); - err = -ENOMEM; - goto out_rx; - } - dev->sg_src = sg_next(dev->sg_src); - dev->sg_dst = sg_next(dev->sg_dst); - } - err = rk_set_data_start(dev); - } else { - rk_iv_copyback(dev); - /* here show the calculation is over without any err */ - dev->complete(dev->async_req, 0); - tasklet_schedule(&dev->queue_task); - } -out_rx: - return err; -} - -static int rk_ablk_cra_init(struct crypto_tfm *tfm) -{ - struct rk_cipher_ctx *ctx = crypto_tfm_ctx(tfm); - struct crypto_alg *alg = tfm->__crt_alg; - struct rk_crypto_tmp *algt; - - algt = container_of(alg, struct rk_crypto_tmp, alg.crypto); - - ctx->dev = algt->dev; - ctx->dev->align_size = crypto_tfm_alg_alignmask(tfm) + 1; - ctx->dev->start = rk_ablk_start; - ctx->dev->update = rk_ablk_rx; - ctx->dev->complete = rk_crypto_complete; - ctx->dev->addr_vir = (char *)__get_free_page(GFP_KERNEL); - - return ctx->dev->addr_vir ? 
ctx->dev->enable_clk(ctx->dev) : -ENOMEM; -} - -static void rk_ablk_cra_exit(struct crypto_tfm *tfm) -{ - struct rk_cipher_ctx *ctx = crypto_tfm_ctx(tfm); - - free_page((unsigned long)ctx->dev->addr_vir); - ctx->dev->disable_clk(ctx->dev); -} - -struct rk_crypto_tmp rk_ecb_aes_alg = { - .type = ALG_TYPE_CIPHER, - .alg.crypto = { - .cra_name = "ecb(aes)", - .cra_driver_name = "ecb-aes-rk", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct rk_cipher_ctx), - .cra_alignmask = 0x0f, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = rk_ablk_cra_init, - .cra_exit = rk_ablk_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = rk_aes_setkey, - .encrypt = rk_aes_ecb_encrypt, - .decrypt = rk_aes_ecb_decrypt, - } - } -}; - -struct rk_crypto_tmp rk_cbc_aes_alg = { - .type = ALG_TYPE_CIPHER, - .alg.crypto = { - .cra_name = "cbc(aes)", - .cra_driver_name = "cbc-aes-rk", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct rk_cipher_ctx), - .cra_alignmask = 0x0f, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = rk_ablk_cra_init, - .cra_exit = rk_ablk_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = rk_aes_setkey, - .encrypt = rk_aes_cbc_encrypt, - .decrypt = rk_aes_cbc_decrypt, - } - } -}; - -struct rk_crypto_tmp rk_ecb_des_alg = { - .type = ALG_TYPE_CIPHER, - .alg.crypto = { - .cra_name = "ecb(des)", - .cra_driver_name = "ecb-des-rk", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct rk_cipher_ctx), - .cra_alignmask = 0x07, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = rk_ablk_cra_init, - .cra_exit = rk_ablk_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .setkey = rk_des_setkey, - .encrypt = rk_des_ecb_encrypt, - .decrypt = rk_des_ecb_decrypt, - } - } -}; - -struct rk_crypto_tmp rk_cbc_des_alg = { - .type = ALG_TYPE_CIPHER, - .alg.crypto = { - .cra_name = "cbc(des)", - .cra_driver_name = "cbc-des-rk", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct rk_cipher_ctx), - .cra_alignmask = 0x07, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = rk_ablk_cra_init, - .cra_exit = rk_ablk_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = rk_des_setkey, - .encrypt = rk_des_cbc_encrypt, - .decrypt = rk_des_cbc_decrypt, - } - } -}; - -struct rk_crypto_tmp rk_ecb_des3_ede_alg = { - .type = ALG_TYPE_CIPHER, - .alg.crypto = { - .cra_name = "ecb(des3_ede)", - .cra_driver_name = "ecb-des3-ede-rk", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct rk_cipher_ctx), - .cra_alignmask = 0x07, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = rk_ablk_cra_init, - .cra_exit = rk_ablk_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = 
DES3_EDE_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = rk_tdes_setkey, - .encrypt = rk_des3_ede_ecb_encrypt, - .decrypt = rk_des3_ede_ecb_decrypt, - } - } -}; - -struct rk_crypto_tmp rk_cbc_des3_ede_alg = { - .type = ALG_TYPE_CIPHER, - .alg.crypto = { - .cra_name = "cbc(des3_ede)", - .cra_driver_name = "cbc-des3-ede-rk", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct rk_cipher_ctx), - .cra_alignmask = 0x07, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = rk_ablk_cra_init, - .cra_exit = rk_ablk_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = rk_tdes_setkey, - .encrypt = rk_des3_ede_cbc_encrypt, - .decrypt = rk_des3_ede_cbc_decrypt, - } - } -}; diff --git a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c new file mode 100644 index 000000000000..ca4de4ddfe1f --- /dev/null +++ b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c @@ -0,0 +1,538 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Crypto acceleration support for Rockchip RK3288 + * + * Copyright (c) 2015, Fuzhou Rockchip Electronics Co., Ltd + * + * Author: Zain Wang + * + * Some ideas are from marvell-cesa.c and s5p-sss.c driver. + */ +#include "rk3288_crypto.h" + +#define RK_CRYPTO_DEC BIT(0) + +static void rk_crypto_complete(struct crypto_async_request *base, int err) +{ + if (base->complete) + base->complete(base, err); +} + +static int rk_handle_req(struct rk_crypto_info *dev, + struct skcipher_request *req) +{ + if (!IS_ALIGNED(req->cryptlen, dev->align_size)) + return -EINVAL; + else + return dev->enqueue(dev, &req->base); +} + +static int rk_aes_setkey(struct crypto_skcipher *cipher, + const u8 *key, unsigned int keylen) +{ + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); + struct rk_cipher_ctx *ctx = crypto_tfm_ctx(tfm); + + if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 && + keylen != AES_KEYSIZE_256) { + crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + return -EINVAL; + } + ctx->keylen = keylen; + memcpy_toio(ctx->dev->reg + RK_CRYPTO_AES_KEY_0, key, keylen); + return 0; +} + +static int rk_des_setkey(struct crypto_skcipher *cipher, + const u8 *key, unsigned int keylen) +{ + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(cipher); + int err; + + err = verify_skcipher_des_key(cipher, key); + if (err) + return err; + + ctx->keylen = keylen; + memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_KEY1_0, key, keylen); + return 0; +} + +static int rk_tdes_setkey(struct crypto_skcipher *cipher, + const u8 *key, unsigned int keylen) +{ + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(cipher); + int err; + + err = verify_skcipher_des3_key(cipher, key); + if (err) + return err; + + ctx->keylen = keylen; + memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_KEY1_0, key, keylen); + return 0; +} + +static int rk_aes_ecb_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_AES_ECB_MODE; + return rk_handle_req(dev, req); +} + +static int rk_aes_ecb_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = 
RK_CRYPTO_AES_ECB_MODE | RK_CRYPTO_DEC; + return rk_handle_req(dev, req); +} + +static int rk_aes_cbc_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_AES_CBC_MODE; + return rk_handle_req(dev, req); +} + +static int rk_aes_cbc_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_AES_CBC_MODE | RK_CRYPTO_DEC; + return rk_handle_req(dev, req); +} + +static int rk_des_ecb_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = 0; + return rk_handle_req(dev, req); +} + +static int rk_des_ecb_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_DEC; + return rk_handle_req(dev, req); +} + +static int rk_des_cbc_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC; + return rk_handle_req(dev, req); +} + +static int rk_des_cbc_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC | RK_CRYPTO_DEC; + return rk_handle_req(dev, req); +} + +static int rk_des3_ede_ecb_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_TDES_SELECT; + return rk_handle_req(dev, req); +} + +static int rk_des3_ede_ecb_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_DEC; + return rk_handle_req(dev, req); +} + +static int rk_des3_ede_cbc_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC; + return rk_handle_req(dev, req); +} + +static int rk_des3_ede_cbc_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC | + RK_CRYPTO_DEC; + return rk_handle_req(dev, req); +} + +static void rk_ablk_hw_init(struct rk_crypto_info *dev) +{ + struct skcipher_request *req = + skcipher_request_cast(dev->async_req); + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(req); + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(cipher); + u32 ivsize, block, 
conf_reg = 0; + + block = crypto_tfm_alg_blocksize(tfm); + ivsize = crypto_skcipher_ivsize(cipher); + + if (block == DES_BLOCK_SIZE) { + ctx->mode |= RK_CRYPTO_TDES_FIFO_MODE | + RK_CRYPTO_TDES_BYTESWAP_KEY | + RK_CRYPTO_TDES_BYTESWAP_IV; + CRYPTO_WRITE(dev, RK_CRYPTO_TDES_CTRL, ctx->mode); + memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, req->iv, ivsize); + conf_reg = RK_CRYPTO_DESSEL; + } else { + ctx->mode |= RK_CRYPTO_AES_FIFO_MODE | + RK_CRYPTO_AES_KEY_CHANGE | + RK_CRYPTO_AES_BYTESWAP_KEY | + RK_CRYPTO_AES_BYTESWAP_IV; + if (ctx->keylen == AES_KEYSIZE_192) + ctx->mode |= RK_CRYPTO_AES_192BIT_key; + else if (ctx->keylen == AES_KEYSIZE_256) + ctx->mode |= RK_CRYPTO_AES_256BIT_key; + CRYPTO_WRITE(dev, RK_CRYPTO_AES_CTRL, ctx->mode); + memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, req->iv, ivsize); + } + conf_reg |= RK_CRYPTO_BYTESWAP_BTFIFO | + RK_CRYPTO_BYTESWAP_BRFIFO; + CRYPTO_WRITE(dev, RK_CRYPTO_CONF, conf_reg); + CRYPTO_WRITE(dev, RK_CRYPTO_INTENA, + RK_CRYPTO_BCDMA_ERR_ENA | RK_CRYPTO_BCDMA_DONE_ENA); +} + +static void crypto_dma_start(struct rk_crypto_info *dev) +{ + CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAS, dev->addr_in); + CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAL, dev->count / 4); + CRYPTO_WRITE(dev, RK_CRYPTO_BTDMAS, dev->addr_out); + CRYPTO_WRITE(dev, RK_CRYPTO_CTRL, RK_CRYPTO_BLOCK_START | + _SBF(RK_CRYPTO_BLOCK_START, 16)); +} + +static int rk_set_data_start(struct rk_crypto_info *dev) +{ + int err; + struct skcipher_request *req = + skcipher_request_cast(dev->async_req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + u32 ivsize = crypto_skcipher_ivsize(tfm); + u8 *src_last_blk = page_address(sg_page(dev->sg_src)) + + dev->sg_src->offset + dev->sg_src->length - ivsize; + + /* Store the iv that need to be updated in chain mode. + * And update the IV buffer to contain the next IV for decryption mode. + */ + if (ctx->mode & RK_CRYPTO_DEC) { + memcpy(ctx->iv, src_last_blk, ivsize); + sg_pcopy_to_buffer(dev->first, dev->src_nents, req->iv, + ivsize, dev->total - ivsize); + } + + err = dev->load_data(dev, dev->sg_src, dev->sg_dst); + if (!err) + crypto_dma_start(dev); + return err; +} + +static int rk_ablk_start(struct rk_crypto_info *dev) +{ + struct skcipher_request *req = + skcipher_request_cast(dev->async_req); + unsigned long flags; + int err = 0; + + dev->left_bytes = req->cryptlen; + dev->total = req->cryptlen; + dev->sg_src = req->src; + dev->first = req->src; + dev->src_nents = sg_nents(req->src); + dev->sg_dst = req->dst; + dev->dst_nents = sg_nents(req->dst); + dev->aligned = 1; + + spin_lock_irqsave(&dev->lock, flags); + rk_ablk_hw_init(dev); + err = rk_set_data_start(dev); + spin_unlock_irqrestore(&dev->lock, flags); + return err; +} + +static void rk_iv_copyback(struct rk_crypto_info *dev) +{ + struct skcipher_request *req = + skcipher_request_cast(dev->async_req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + u32 ivsize = crypto_skcipher_ivsize(tfm); + + /* Update the IV buffer to contain the next IV for encryption mode. 
*/ + if (!(ctx->mode & RK_CRYPTO_DEC)) { + if (dev->aligned) { + memcpy(req->iv, sg_virt(dev->sg_dst) + + dev->sg_dst->length - ivsize, ivsize); + } else { + memcpy(req->iv, dev->addr_vir + + dev->count - ivsize, ivsize); + } + } +} + +static void rk_update_iv(struct rk_crypto_info *dev) +{ + struct skcipher_request *req = + skcipher_request_cast(dev->async_req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + u32 ivsize = crypto_skcipher_ivsize(tfm); + u8 *new_iv = NULL; + + if (ctx->mode & RK_CRYPTO_DEC) { + new_iv = ctx->iv; + } else { + new_iv = page_address(sg_page(dev->sg_dst)) + + dev->sg_dst->offset + dev->sg_dst->length - ivsize; + } + + if (ivsize == DES_BLOCK_SIZE) + memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, new_iv, ivsize); + else if (ivsize == AES_BLOCK_SIZE) + memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, new_iv, ivsize); +} + +/* return: + * true some err was occurred + * fault no err, continue + */ +static int rk_ablk_rx(struct rk_crypto_info *dev) +{ + int err = 0; + struct skcipher_request *req = + skcipher_request_cast(dev->async_req); + + dev->unload_data(dev); + if (!dev->aligned) { + if (!sg_pcopy_from_buffer(req->dst, dev->dst_nents, + dev->addr_vir, dev->count, + dev->total - dev->left_bytes - + dev->count)) { + err = -EINVAL; + goto out_rx; + } + } + if (dev->left_bytes) { + rk_update_iv(dev); + if (dev->aligned) { + if (sg_is_last(dev->sg_src)) { + dev_err(dev->dev, "[%s:%d] Lack of data\n", + __func__, __LINE__); + err = -ENOMEM; + goto out_rx; + } + dev->sg_src = sg_next(dev->sg_src); + dev->sg_dst = sg_next(dev->sg_dst); + } + err = rk_set_data_start(dev); + } else { + rk_iv_copyback(dev); + /* here show the calculation is over without any err */ + dev->complete(dev->async_req, 0); + tasklet_schedule(&dev->queue_task); + } +out_rx: + return err; +} + +static int rk_ablk_init_tfm(struct crypto_skcipher *tfm) +{ + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); + struct rk_crypto_tmp *algt; + + algt = container_of(alg, struct rk_crypto_tmp, alg.skcipher); + + ctx->dev = algt->dev; + ctx->dev->align_size = crypto_tfm_alg_alignmask(crypto_skcipher_tfm(tfm)) + 1; + ctx->dev->start = rk_ablk_start; + ctx->dev->update = rk_ablk_rx; + ctx->dev->complete = rk_crypto_complete; + ctx->dev->addr_vir = (char *)__get_free_page(GFP_KERNEL); + + return ctx->dev->addr_vir ? 
ctx->dev->enable_clk(ctx->dev) : -ENOMEM; +} + +static void rk_ablk_exit_tfm(struct crypto_skcipher *tfm) +{ + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + + free_page((unsigned long)ctx->dev->addr_vir); + ctx->dev->disable_clk(ctx->dev); +} + +struct rk_crypto_tmp rk_ecb_aes_alg = { + .type = ALG_TYPE_CIPHER, + .alg.skcipher = { + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "ecb-aes-rk", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx), + .base.cra_alignmask = 0x0f, + .base.cra_module = THIS_MODULE, + + .init = rk_ablk_init_tfm, + .exit = rk_ablk_exit_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = rk_aes_setkey, + .encrypt = rk_aes_ecb_encrypt, + .decrypt = rk_aes_ecb_decrypt, + } +}; + +struct rk_crypto_tmp rk_cbc_aes_alg = { + .type = ALG_TYPE_CIPHER, + .alg.skcipher = { + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "cbc-aes-rk", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx), + .base.cra_alignmask = 0x0f, + .base.cra_module = THIS_MODULE, + + .init = rk_ablk_init_tfm, + .exit = rk_ablk_exit_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = rk_aes_setkey, + .encrypt = rk_aes_cbc_encrypt, + .decrypt = rk_aes_cbc_decrypt, + } +}; + +struct rk_crypto_tmp rk_ecb_des_alg = { + .type = ALG_TYPE_CIPHER, + .alg.skcipher = { + .base.cra_name = "ecb(des)", + .base.cra_driver_name = "ecb-des-rk", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx), + .base.cra_alignmask = 0x07, + .base.cra_module = THIS_MODULE, + + .init = rk_ablk_init_tfm, + .exit = rk_ablk_exit_tfm, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .setkey = rk_des_setkey, + .encrypt = rk_des_ecb_encrypt, + .decrypt = rk_des_ecb_decrypt, + } +}; + +struct rk_crypto_tmp rk_cbc_des_alg = { + .type = ALG_TYPE_CIPHER, + .alg.skcipher = { + .base.cra_name = "cbc(des)", + .base.cra_driver_name = "cbc-des-rk", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx), + .base.cra_alignmask = 0x07, + .base.cra_module = THIS_MODULE, + + .init = rk_ablk_init_tfm, + .exit = rk_ablk_exit_tfm, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = rk_des_setkey, + .encrypt = rk_des_cbc_encrypt, + .decrypt = rk_des_cbc_decrypt, + } +}; + +struct rk_crypto_tmp rk_ecb_des3_ede_alg = { + .type = ALG_TYPE_CIPHER, + .alg.skcipher = { + .base.cra_name = "ecb(des3_ede)", + .base.cra_driver_name = "ecb-des3-ede-rk", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx), + .base.cra_alignmask = 0x07, + .base.cra_module = THIS_MODULE, + + .init = rk_ablk_init_tfm, + .exit = rk_ablk_exit_tfm, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = rk_tdes_setkey, + .encrypt = rk_des3_ede_ecb_encrypt, + .decrypt = rk_des3_ede_ecb_decrypt, + } +}; + +struct rk_crypto_tmp rk_cbc_des3_ede_alg = { + .type = ALG_TYPE_CIPHER, + .alg.skcipher = { + .base.cra_name = "cbc(des3_ede)", + 
.base.cra_driver_name	= "cbc-des3-ede-rk",
+		.base.cra_priority	= 300,
+		.base.cra_flags		= CRYPTO_ALG_ASYNC,
+		.base.cra_blocksize	= DES_BLOCK_SIZE,
+		.base.cra_ctxsize	= sizeof(struct rk_cipher_ctx),
+		.base.cra_alignmask	= 0x07,
+		.base.cra_module	= THIS_MODULE,
+
+		.init		= rk_ablk_init_tfm,
+		.exit		= rk_ablk_exit_tfm,
+		.min_keysize	= DES3_EDE_KEY_SIZE,
+		.max_keysize	= DES3_EDE_KEY_SIZE,
+		.ivsize		= DES_BLOCK_SIZE,
+		.setkey		= rk_tdes_setkey,
+		.encrypt	= rk_des3_ede_cbc_encrypt,
+		.decrypt	= rk_des3_ede_cbc_decrypt,
+	}
+};
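
For context (not part of the patch): the conversion does not change how the rest
of the kernel consumes these algorithms. A caller still allocates "cbc(aes)" (or
any of the other modes registered above) through the skcipher API, and because
the driver keeps CRYPTO_ALG_ASYNC, requests complete via a callback. The sketch
below is a minimal, illustrative consumer; the function name and the buf/key/iv
parameters are made up for the example, and error handling is abbreviated.

#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/* Illustrative only: encrypt `len` bytes of `buf` in place with cbc(aes).
 * `len` must be a multiple of the cipher block size.
 */
static int rk_skcipher_demo(u8 *buf, unsigned int len,
			    const u8 *key, unsigned int keylen, u8 *iv)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	/* Resolves to "cbc-aes-rk" when the Rockchip driver wins on priority. */
	tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_skcipher_setkey(tfm, key, keylen);
	if (err)
		goto out_tfm;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_tfm;
	}

	sg_init_one(&sg, buf, len);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, &sg, &sg, len, iv);

	/* The implementation is asynchronous, so wait for the completion callback. */
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
out_tfm:
	crypto_free_skcipher(tfm);
	return err;
}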