From patchwork Thu Feb  6 12:25:03 2014
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 24254
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: davem@davemloft.net, herbert@gondor.apana.org.au, jussi.kivilinna@iki.fi,
 Ard Biesheuvel
Subject: [RFC PATCH 2/3] crypto: take interleave into account for CBC
 decryption
Date: Thu, 6 Feb 2014 13:25:03 +0100
Message-Id: <1391689504-28160-3-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1391689504-28160-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1391689504-28160-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

As CBC decryption can be executed in parallel, take the cipher alg's
preferred interleave into account when decrypting data.
Signed-off-by: Ard Biesheuvel
---
 crypto/cbc.c | 109 ++++++++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 82 insertions(+), 27 deletions(-)

diff --git a/crypto/cbc.c b/crypto/cbc.c
index 61ac42e1e32b..1a9747fa2a14 100644
--- a/crypto/cbc.c
+++ b/crypto/cbc.c
@@ -113,24 +113,44 @@ static int crypto_cbc_encrypt(struct blkcipher_desc *desc,
 
 static int crypto_cbc_decrypt_segment(struct blkcipher_desc *desc,
 				      struct blkcipher_walk *walk,
-				      struct crypto_cipher *tfm)
+				      struct crypto_cipher *tfm,
+				      int bsize,
+				      int ilsize)
 {
-	void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
-		crypto_cipher_alg(tfm)->cia_decrypt;
-	int bsize = crypto_cipher_blocksize(tfm);
 	unsigned int nbytes = walk->nbytes;
 	u8 *src = walk->src.virt.addr;
 	u8 *dst = walk->dst.virt.addr;
 	u8 *iv = walk->iv;
 
-	do {
+	while (nbytes >= ilsize) {
+		void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+			crypto_cipher_alg(tfm)->cia_dec_interleave;
+
+		fn(crypto_cipher_tfm(tfm), dst, src);
+		if (iv == walk->iv) {
+			crypto_xor(dst, iv, bsize);
+			crypto_xor(dst + bsize, src, ilsize - bsize);
+		} else {
+			crypto_xor(dst, src - bsize, ilsize);
+		}
+		iv = src + ilsize - bsize;
+
+		src += ilsize;
+		dst += ilsize;
+		nbytes -= ilsize;
+	}
+	while (nbytes >= bsize) {
+		void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+			crypto_cipher_alg(tfm)->cia_decrypt;
+
 		fn(crypto_cipher_tfm(tfm), dst, src);
 		crypto_xor(dst, iv, bsize);
 		iv = src;
 
 		src += bsize;
 		dst += bsize;
-	} while ((nbytes -= bsize) >= bsize);
+		nbytes -= bsize;
+	}
 
 	memcpy(walk->iv, iv, bsize);
 
@@ -139,29 +159,53 @@ static int crypto_cbc_decrypt_segment(struct blkcipher_desc *desc,
 
 static int crypto_cbc_decrypt_inplace(struct blkcipher_desc *desc,
 				      struct blkcipher_walk *walk,
-				      struct crypto_cipher *tfm)
+				      struct crypto_cipher *tfm,
+				      int bsize,
+				      int ilsize)
 {
-	void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
-		crypto_cipher_alg(tfm)->cia_decrypt;
-	int bsize = crypto_cipher_blocksize(tfm);
 	unsigned int nbytes = walk->nbytes;
 	u8 *src = walk->src.virt.addr;
-	u8 last_iv[bsize];
 
-	/* Start of the last block. */
-	src += nbytes - (nbytes & (bsize - 1)) - bsize;
-	memcpy(last_iv, src, bsize);
-
-	for (;;) {
-		fn(crypto_cipher_tfm(tfm), src, src);
-		if ((nbytes -= bsize) < bsize)
-			break;
-		crypto_xor(src, src - bsize, bsize);
-		src -= bsize;
+	if (nbytes >= ilsize) {
+		void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+			crypto_cipher_alg(tfm)->cia_dec_interleave;
+		u8 buf[2][ilsize];
+		u8 *iv = walk->iv;
+		int i;
+
+		for (i = 0; nbytes >= ilsize; nbytes -= ilsize, i = !i) {
+			memcpy(buf[i], src, ilsize);
+			fn(crypto_cipher_tfm(tfm), src, buf[i]);
+			if (iv + bsize == buf[i]) {
+				crypto_xor(src, iv, ilsize);
+			} else {
+				crypto_xor(src, iv, bsize);
+				crypto_xor(src + bsize, buf[i], ilsize - bsize);
+			}
+			iv = buf[i] + ilsize - bsize;
+			src += ilsize;
+		}
+		memcpy(walk->iv, iv, bsize);
+	}
+	if (nbytes >= bsize) {
+		void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+			crypto_cipher_alg(tfm)->cia_decrypt;
+		u8 last_iv[bsize];
+
+		/* Start of the last block. */
+		src += nbytes - (nbytes & (bsize - 1)) - bsize;
+		memcpy(last_iv, src, bsize);
+
+		for (;;) {
+			fn(crypto_cipher_tfm(tfm), src, src);
+			if ((nbytes -= bsize) < bsize)
+				break;
+			crypto_xor(src, src - bsize, bsize);
+			src -= bsize;
+		}
+		crypto_xor(src, walk->iv, bsize);
+		memcpy(walk->iv, last_iv, bsize);
 	}
-
-	crypto_xor(src, walk->iv, bsize);
-	memcpy(walk->iv, last_iv, bsize);
 
 	return nbytes;
 }
@@ -174,16 +218,27 @@ static int crypto_cbc_decrypt(struct blkcipher_desc *desc,
 	struct crypto_blkcipher *tfm = desc->tfm;
 	struct crypto_cbc_ctx *ctx = crypto_blkcipher_ctx(tfm);
 	struct crypto_cipher *child = ctx->child;
+	unsigned int interleave = crypto_cipher_alg(child)->cia_interleave;
+	int bsize = crypto_cipher_blocksize(child);
+	int ilsize = INT_MAX;
 	int err;
 
 	blkcipher_walk_init(&walk, dst, src, nbytes);
-	err = blkcipher_walk_virt(desc, &walk);
+
+	if (interleave > 1) {
+		ilsize = interleave * bsize;
+		err = blkcipher_walk_virt_block(desc, &walk, ilsize);
+	} else {
+		err = blkcipher_walk_virt(desc, &walk);
+	}
 
 	while ((nbytes = walk.nbytes)) {
 		if (walk.src.virt.addr == walk.dst.virt.addr)
-			nbytes = crypto_cbc_decrypt_inplace(desc, &walk, child);
+			nbytes = crypto_cbc_decrypt_inplace(desc, &walk, child,
+							    bsize, ilsize);
 		else
-			nbytes = crypto_cbc_decrypt_segment(desc, &walk, child);
+			nbytes = crypto_cbc_decrypt_segment(desc, &walk, child,
+							    bsize, ilsize);
 		err = blkcipher_walk_done(desc, &walk, nbytes);
 	}