From patchwork Sun Dec 17 08:29:03 2017
X-Patchwork-Submitter: Gilad Ben-Yossef
X-Patchwork-Id: 122176
From: Gilad Ben-Yossef
To: Herbert Xu , "David S.
Miller" Cc: Ofir Drang , linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 4/6] crypto: tcrypt: add multi buf ahash jiffies test Date: Sun, 17 Dec 2017 08:29:03 +0000 Message-Id: <1513499346-9047-5-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1513499346-9047-1-git-send-email-gilad@benyossef.com> References: <1513499346-9047-1-git-send-email-gilad@benyossef.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org The multi buffer concurrent requests ahash speed test only supported the cycles mode. Add support for the so called jiffies mode that test performance of bytes/sec. We only add support for digest mode at the moment. Signed-off-by: Gilad Ben-Yossef --- crypto/tcrypt.c | 112 +++++++++++++++++++++++++++++++++++++++++--------------- 1 file changed, 82 insertions(+), 30 deletions(-) -- 2.7.4 diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index 2604360..e406b00 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -413,13 +413,87 @@ struct test_mb_ahash_data { char *xbuf[XBUFSIZE]; }; -static void test_mb_ahash_speed(const char *algo, unsigned int sec, +static inline int do_mult_ahash_op(struct test_mb_ahash_data *data, u32 num_mb) +{ + int i, rc[num_mb], err = 0; + + /* Fire up a bunch of concurrent requests */ + for (i = 0; i < num_mb; i++) + rc[i] = crypto_ahash_digest(data[i].req); + + /* Wait for all requests to finish */ + for (i = 0; i < num_mb; i++) { + rc[i] = crypto_wait_req(rc[i], &data[i].wait); + + if (rc[i]) { + pr_info("concurrent request %d error %d\n", i, rc[i]); + err = rc[i]; + } + } + + return err; +} + +static int test_mb_ahash_jiffies(struct test_mb_ahash_data *data, int blen, + int secs, u32 num_mb) +{ + unsigned long start, end; + int bcount; + int ret; + + for (start = jiffies, end = start + secs * HZ, bcount = 0; + time_before(jiffies, end); bcount++) { + ret = do_mult_ahash_op(data, num_mb); + if (ret) + return ret; + } + + pr_cont("%d operations in %d seconds (%ld bytes)\n", + bcount * num_mb, secs, (long)bcount * blen * num_mb); + return 0; +} + +static int test_mb_ahash_cycles(struct test_mb_ahash_data *data, int blen, + u32 num_mb) +{ + unsigned long cycles = 0; + int ret = 0; + int i; + + /* Warm-up run. */ + for (i = 0; i < 4; i++) { + ret = do_mult_ahash_op(data, num_mb); + if (ret) + goto out; + } + + /* The real thing. 
*/ + for (i = 0; i < 8; i++) { + cycles_t start, end; + + start = get_cycles(); + ret = do_mult_ahash_op(data, num_mb); + end = get_cycles(); + + if (ret) + goto out; + + cycles += end - start; + } + +out: + if (ret == 0) + pr_cont("1 operation in %lu cycles (%d bytes)\n", + (cycles + 4) / (8 * num_mb), blen); + + return ret; +} + +static void test_mb_ahash_speed(const char *algo, unsigned int secs, struct hash_speed *speed, u32 num_mb) { struct test_mb_ahash_data *data; struct crypto_ahash *tfm; - unsigned long start, end; - unsigned long cycles; unsigned int i, j, k; int ret; @@ -483,34 +557,12 @@ static void test_mb_ahash_speed(const char *algo, unsigned int sec, i, speed[i].blen, speed[i].plen, speed[i].blen / speed[i].plen); - start = get_cycles(); - - for (k = 0; k < num_mb; k++) { - ret = crypto_ahash_digest(data[k].req); - if (ret == -EINPROGRESS) { - ret = 0; - continue; - } - - if (ret) - break; - - crypto_req_done(&data[k].req->base, 0); - } - - for (j = 0; j < k; j++) { - struct crypto_wait *wait = &data[j].wait; - int wait_ret; - - wait_ret = crypto_wait_req(-EINPROGRESS, wait); - if (wait_ret) - ret = wait_ret; - } + if (secs) + ret = test_mb_ahash_jiffies(data, speed[i].blen, secs, + num_mb); + else + ret = test_mb_ahash_cycles(data, speed[i].blen, num_mb); - end = get_cycles(); - cycles = end - start; - pr_cont("%6lu cycles/operation, %4lu cycles/byte\n", - cycles, cycles / (num_mb * speed[i].blen)); if (ret) { pr_err("At least one hashing failed ret=%d\n", ret); From patchwork Sun Dec 17 08:29:04 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 122177 Delivered-To: patch@linaro.org Received: by 10.140.22.227 with SMTP id 90csp1514848qgn; Sun, 17 Dec 2017 00:30:33 -0800 (PST) X-Google-Smtp-Source: ACJfBot72U4w24/mWKD0XrvuMOmluNM/rjNHBRAu3Ysno50DCJCWFCldAXQ+gcC3IIPKJejlRZFL X-Received: by 10.98.34.199 with SMTP id p68mr18456342pfj.241.1513499433162; Sun, 17 Dec 2017 00:30:33 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1513499433; cv=none; d=google.com; s=arc-20160816; b=T4PETl0ZPjTw4vwgy+KV/S/1L98QMGgkj6g5sLZuKSLKt1RryuB2TEj4qKiJ7di3PH MfiC2KzHiE8GaEP2PjE0vlqmB8pAm3Y/mcK+pGtiRadu0OZPRudVNkECUvOtZO7VmcOs 18fmd261mLGTDI5dT2qmF0GgfjkUBN37RL57hqRBIk144vBLzqEUplhgID3WizHwJy5v IU3PJIIqCRG72YcXs0nqxjE3X9NmteBIUAAHInNAzpoMDosGYIM31N5mkVEydu9AM+rd Iop6SWtbt0J2c8PZqzmKLeLIYRcj5Xf+N6iH7yYhiEtsW2XW5JCj0J1Hhzg8UOnLznfd HEpQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=SCIspuSFGeVI/vsOEx5gAVcn8EV8c8g/lYto1iZQruI=; b=tCR/CE+S8NRyCA9NuWYAqbs4H9fFn4fTveVBPDWUMxNoFXdPWnksXHj9lxYulRtBFp 1PbKWt+7WRFvPHKX9EKlhFQw10Vx0ue5QqTlah/IHZ/PvDEd9NjzvuxBqpHIlDkoh+Wt G9+Y3YOqAigPKL7QUHl5tpCxNre/U0dpaPLFljfRYBHx7FwUzaIXpdxxnIfdAV+k5kZw 8E5ZdDeYTMfUNLxHrgEsSSIkZ9AQ3mqBsp2CwKnP8hs9i7qO1ChErej0HnUuib9QFsES 6qmRacnXirF/ZFIL4PhvDDex6bFh6oy21lh0S8mXJMZx0732zR96QVSmrSIfnD9/O/a9 XFOg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
From: Gilad Ben-Yossef
To: Herbert Xu , "David S. Miller"
Cc: Ofir Drang , linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 5/6] crypto: tcrypt: add multibuf skcipher speed test
Date: Sun, 17 Dec 2017 08:29:04 +0000
Message-Id: <1513499346-9047-6-git-send-email-gilad@benyossef.com>
In-Reply-To: <1513499346-9047-1-git-send-email-gilad@benyossef.com>
References: <1513499346-9047-1-git-send-email-gilad@benyossef.com>

The performance of some skcipher tfm providers is affected by the amount
of parallelism possible with the processing. Introduce an async skcipher
concurrent multiple buffer processing speed test to be able to test
performance of such tfm providers.
Signed-off-by: Gilad Ben-Yossef --- crypto/tcrypt.c | 460 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 460 insertions(+) -- 2.7.4 diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index e406b00..d617c19 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -818,6 +818,254 @@ static void test_hash_speed(const char *algo, unsigned int secs, return test_ahash_speed_common(algo, secs, speed, CRYPTO_ALG_ASYNC); } +struct test_mb_skcipher_data { + struct scatterlist sg[XBUFSIZE]; + struct skcipher_request *req; + struct crypto_wait wait; + char *xbuf[XBUFSIZE]; +}; + +static int do_mult_acipher_op(struct test_mb_skcipher_data *data, int enc, + u32 num_mb) +{ + int i, rc[num_mb], err = 0; + + /* Fire up a bunch of concurrent requests */ + for (i = 0; i < num_mb; i++) { + if (enc == ENCRYPT) + rc[i] = crypto_skcipher_encrypt(data[i].req); + else + rc[i] = crypto_skcipher_decrypt(data[i].req); + } + + /* Wait for all requests to finish */ + for (i = 0; i < num_mb; i++) { + rc[i] = crypto_wait_req(rc[i], &data[i].wait); + + if (rc[i]) { + pr_info("concurrent request %d error %d\n", i, rc[i]); + err = rc[i]; + } + } + + return err; +} + +static int test_mb_acipher_jiffies(struct test_mb_skcipher_data *data, int enc, + int blen, int secs, u32 num_mb) +{ + unsigned long start, end; + int bcount; + int ret; + + for (start = jiffies, end = start + secs * HZ, bcount = 0; + time_before(jiffies, end); bcount++) { + ret = do_mult_acipher_op(data, enc, num_mb); + if (ret) + return ret; + } + + pr_cont("%d operations in %d seconds (%ld bytes)\n", + bcount * num_mb, secs, (long)bcount * blen * num_mb); + return 0; +} + +static int test_mb_acipher_cycles(struct test_mb_skcipher_data *data, int enc, + int blen, u32 num_mb) +{ + unsigned long cycles = 0; + int ret = 0; + int i; + + /* Warm-up run. */ + for (i = 0; i < 4; i++) { + ret = do_mult_acipher_op(data, enc, num_mb); + if (ret) + goto out; + } + + /* The real thing. 
*/ + for (i = 0; i < 8; i++) { + cycles_t start, end; + + start = get_cycles(); + ret = do_mult_acipher_op(data, enc, num_mb); + end = get_cycles(); + + if (ret) + goto out; + + cycles += end - start; + } + +out: + if (ret == 0) + pr_cont("1 operation in %lu cycles (%d bytes)\n", + (cycles + 4) / (8 * num_mb), blen); + + return ret; +} + +static void test_mb_skcipher_speed(const char *algo, int enc, int secs, + struct cipher_speed_template *template, + unsigned int tcount, u8 *keysize, u32 num_mb) +{ + struct test_mb_skcipher_data *data; + struct crypto_skcipher *tfm; + unsigned int i, j, iv_len; + const char *key; + const char *e; + u32 *b_size; + char iv[128]; + int ret; + + if (enc == ENCRYPT) + e = "encryption"; + else + e = "decryption"; + + data = kcalloc(num_mb, sizeof(*data), GFP_KERNEL); + if (!data) + return; + + tfm = crypto_alloc_skcipher(algo, 0, 0); + if (IS_ERR(tfm)) { + pr_err("failed to load transform for %s: %ld\n", + algo, PTR_ERR(tfm)); + goto out_free_data; + } + + for (i = 0; i < num_mb; ++i) + if (testmgr_alloc_buf(data[i].xbuf)) { + while (i--) + testmgr_free_buf(data[i].xbuf); + goto out_free_tfm; + } + + + for (i = 0; i < num_mb; ++i) + if (testmgr_alloc_buf(data[i].xbuf)) { + while (i--) + testmgr_free_buf(data[i].xbuf); + goto out_free_tfm; + } + + + for (i = 0; i < num_mb; ++i) { + data[i].req = skcipher_request_alloc(tfm, GFP_KERNEL); + if (!data[i].req) { + pr_err("alg: skcipher: Failed to allocate request for %s\n", + algo); + while (i--) + skcipher_request_free(data[i].req); + goto out_free_xbuf; + } + } + + for (i = 0; i < num_mb; ++i) { + skcipher_request_set_callback(data[i].req, + CRYPTO_TFM_REQ_MAY_BACKLOG, + crypto_req_done, &data[i].wait); + crypto_init_wait(&data[i].wait); + } + + pr_info("\ntesting speed of multibuffer %s (%s) %s\n", algo, + get_driver_name(crypto_skcipher, tfm), e); + + i = 0; + do { + b_size = block_sizes; + do { + if (*b_size > XBUFSIZE * PAGE_SIZE) { + pr_err("template (%u) too big for bufufer (%lu)\n", + *b_size, XBUFSIZE * PAGE_SIZE); + goto out; + } + + pr_info("test %u (%d bit key, %d byte blocks): ", i, + *keysize * 8, *b_size); + + /* Set up tfm global state, i.e. the key */ + + memset(tvmem[0], 0xff, PAGE_SIZE); + key = tvmem[0]; + for (j = 0; j < tcount; j++) { + if (template[j].klen == *keysize) { + key = template[j].key; + break; + } + } + + crypto_skcipher_clear_flags(tfm, ~0); + + ret = crypto_skcipher_setkey(tfm, key, *keysize); + if (ret) { + pr_err("setkey() failed flags=%x\n", + crypto_skcipher_get_flags(tfm)); + goto out; + } + + iv_len = crypto_skcipher_ivsize(tfm); + if (iv_len) + memset(&iv, 0xff, iv_len); + + /* Now setup per request stuff, i.e. 
buffers */ + + for (j = 0; j < num_mb; ++j) { + struct test_mb_skcipher_data *cur = &data[j]; + unsigned int k = *b_size; + unsigned int pages = DIV_ROUND_UP(k, PAGE_SIZE); + unsigned int p = 0; + + sg_init_table(cur->sg, pages); + + while (k > PAGE_SIZE) { + sg_set_buf(cur->sg + p, cur->xbuf[p], + PAGE_SIZE); + memset(cur->xbuf[p], 0xff, PAGE_SIZE); + p++; + k -= PAGE_SIZE; + } + + sg_set_buf(cur->sg + p, cur->xbuf[p], k); + memset(cur->xbuf[p], 0xff, k); + + skcipher_request_set_crypt(cur->req, cur->sg, + cur->sg, *b_size, + iv); + } + + if (secs) + ret = test_mb_acipher_jiffies(data, enc, + *b_size, secs, + num_mb); + else + ret = test_mb_acipher_cycles(data, enc, + *b_size, num_mb); + + if (ret) { + pr_err("%s() failed flags=%x\n", e, + crypto_skcipher_get_flags(tfm)); + break; + } + b_size++; + i++; + } while (*b_size); + keysize++; + } while (*keysize); + +out: + for (i = 0; i < num_mb; ++i) + skcipher_request_free(data[i].req); +out_free_xbuf: + for (i = 0; i < num_mb; ++i) + testmgr_free_buf(data[i].xbuf); +out_free_tfm: + crypto_free_skcipher(tfm); +out_free_data: + kfree(data); +} + static inline int do_one_acipher_op(struct skcipher_request *req, int ret) { struct crypto_wait *wait = req->base.data; @@ -2102,6 +2350,218 @@ static int do_test(const char *alg, u32 type, u32 mask, int m) speed_template_8_32); break; + case 600: + test_mb_skcipher_speed("ecb(aes)", ENCRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("ecb(aes)", DECRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("cbc(aes)", ENCRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("cbc(aes)", DECRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("lrw(aes)", ENCRYPT, sec, NULL, 0, + speed_template_32_40_48, num_mb); + test_mb_skcipher_speed("lrw(aes)", DECRYPT, sec, NULL, 0, + speed_template_32_40_48, num_mb); + test_mb_skcipher_speed("xts(aes)", ENCRYPT, sec, NULL, 0, + speed_template_32_64, num_mb); + test_mb_skcipher_speed("xts(aes)", DECRYPT, sec, NULL, 0, + speed_template_32_64, num_mb); + test_mb_skcipher_speed("cts(cbc(aes))", ENCRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("cts(cbc(aes))", DECRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("ctr(aes)", ENCRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("ctr(aes)", DECRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("cfb(aes)", ENCRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("cfb(aes)", DECRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("ofb(aes)", ENCRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("ofb(aes)", DECRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("rfc3686(ctr(aes))", ENCRYPT, sec, NULL, + 0, speed_template_20_28_36, num_mb); + test_mb_skcipher_speed("rfc3686(ctr(aes))", DECRYPT, sec, NULL, + 0, speed_template_20_28_36, num_mb); + break; + + case 601: + test_mb_skcipher_speed("ecb(des3_ede)", ENCRYPT, sec, + des3_speed_template, DES3_SPEED_VECTORS, + speed_template_24, num_mb); + test_mb_skcipher_speed("ecb(des3_ede)", DECRYPT, sec, + des3_speed_template, DES3_SPEED_VECTORS, + speed_template_24, num_mb); + test_mb_skcipher_speed("cbc(des3_ede)", ENCRYPT, sec, + des3_speed_template, DES3_SPEED_VECTORS, + speed_template_24, num_mb); + 
test_mb_skcipher_speed("cbc(des3_ede)", DECRYPT, sec, + des3_speed_template, DES3_SPEED_VECTORS, + speed_template_24, num_mb); + test_mb_skcipher_speed("cfb(des3_ede)", ENCRYPT, sec, + des3_speed_template, DES3_SPEED_VECTORS, + speed_template_24, num_mb); + test_mb_skcipher_speed("cfb(des3_ede)", DECRYPT, sec, + des3_speed_template, DES3_SPEED_VECTORS, + speed_template_24, num_mb); + test_mb_skcipher_speed("ofb(des3_ede)", ENCRYPT, sec, + des3_speed_template, DES3_SPEED_VECTORS, + speed_template_24, num_mb); + test_mb_skcipher_speed("ofb(des3_ede)", DECRYPT, sec, + des3_speed_template, DES3_SPEED_VECTORS, + speed_template_24, num_mb); + break; + + case 602: + test_mb_skcipher_speed("ecb(des)", ENCRYPT, sec, NULL, 0, + speed_template_8, num_mb); + test_mb_skcipher_speed("ecb(des)", DECRYPT, sec, NULL, 0, + speed_template_8, num_mb); + test_mb_skcipher_speed("cbc(des)", ENCRYPT, sec, NULL, 0, + speed_template_8, num_mb); + test_mb_skcipher_speed("cbc(des)", DECRYPT, sec, NULL, 0, + speed_template_8, num_mb); + test_mb_skcipher_speed("cfb(des)", ENCRYPT, sec, NULL, 0, + speed_template_8, num_mb); + test_mb_skcipher_speed("cfb(des)", DECRYPT, sec, NULL, 0, + speed_template_8, num_mb); + test_mb_skcipher_speed("ofb(des)", ENCRYPT, sec, NULL, 0, + speed_template_8, num_mb); + test_mb_skcipher_speed("ofb(des)", DECRYPT, sec, NULL, 0, + speed_template_8, num_mb); + break; + + case 603: + test_mb_skcipher_speed("ecb(serpent)", ENCRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("ecb(serpent)", DECRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("cbc(serpent)", ENCRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("cbc(serpent)", DECRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("ctr(serpent)", ENCRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("ctr(serpent)", DECRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("lrw(serpent)", ENCRYPT, sec, NULL, 0, + speed_template_32_48, num_mb); + test_mb_skcipher_speed("lrw(serpent)", DECRYPT, sec, NULL, 0, + speed_template_32_48, num_mb); + test_mb_skcipher_speed("xts(serpent)", ENCRYPT, sec, NULL, 0, + speed_template_32_64, num_mb); + test_mb_skcipher_speed("xts(serpent)", DECRYPT, sec, NULL, 0, + speed_template_32_64, num_mb); + break; + + case 604: + test_mb_skcipher_speed("ecb(twofish)", ENCRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("ecb(twofish)", DECRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("cbc(twofish)", ENCRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("cbc(twofish)", DECRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("ctr(twofish)", ENCRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("ctr(twofish)", DECRYPT, sec, NULL, 0, + speed_template_16_24_32, num_mb); + test_mb_skcipher_speed("lrw(twofish)", ENCRYPT, sec, NULL, 0, + speed_template_32_40_48, num_mb); + test_mb_skcipher_speed("lrw(twofish)", DECRYPT, sec, NULL, 0, + speed_template_32_40_48, num_mb); + test_mb_skcipher_speed("xts(twofish)", ENCRYPT, sec, NULL, 0, + speed_template_32_48_64, num_mb); + test_mb_skcipher_speed("xts(twofish)", DECRYPT, sec, NULL, 0, + speed_template_32_48_64, num_mb); + break; + + case 605: + test_mb_skcipher_speed("ecb(arc4)", ENCRYPT, sec, NULL, 0, + speed_template_8, num_mb); + break; + + case 606: + 
test_mb_skcipher_speed("ecb(cast5)", ENCRYPT, sec, NULL, 0, + speed_template_8_16, num_mb); + test_mb_skcipher_speed("ecb(cast5)", DECRYPT, sec, NULL, 0, + speed_template_8_16, num_mb); + test_mb_skcipher_speed("cbc(cast5)", ENCRYPT, sec, NULL, 0, + speed_template_8_16, num_mb); + test_mb_skcipher_speed("cbc(cast5)", DECRYPT, sec, NULL, 0, + speed_template_8_16, num_mb); + test_mb_skcipher_speed("ctr(cast5)", ENCRYPT, sec, NULL, 0, + speed_template_8_16, num_mb); + test_mb_skcipher_speed("ctr(cast5)", DECRYPT, sec, NULL, 0, + speed_template_8_16, num_mb); + break; + + case 607: + test_mb_skcipher_speed("ecb(cast6)", ENCRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("ecb(cast6)", DECRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("cbc(cast6)", ENCRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("cbc(cast6)", DECRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("ctr(cast6)", ENCRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("ctr(cast6)", DECRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("lrw(cast6)", ENCRYPT, sec, NULL, 0, + speed_template_32_48, num_mb); + test_mb_skcipher_speed("lrw(cast6)", DECRYPT, sec, NULL, 0, + speed_template_32_48, num_mb); + test_mb_skcipher_speed("xts(cast6)", ENCRYPT, sec, NULL, 0, + speed_template_32_64, num_mb); + test_mb_skcipher_speed("xts(cast6)", DECRYPT, sec, NULL, 0, + speed_template_32_64, num_mb); + break; + + case 608: + test_mb_skcipher_speed("ecb(camellia)", ENCRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("ecb(camellia)", DECRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("cbc(camellia)", ENCRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("cbc(camellia)", DECRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("ctr(camellia)", ENCRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("ctr(camellia)", DECRYPT, sec, NULL, 0, + speed_template_16_32, num_mb); + test_mb_skcipher_speed("lrw(camellia)", ENCRYPT, sec, NULL, 0, + speed_template_32_48, num_mb); + test_mb_skcipher_speed("lrw(camellia)", DECRYPT, sec, NULL, 0, + speed_template_32_48, num_mb); + test_mb_skcipher_speed("xts(camellia)", ENCRYPT, sec, NULL, 0, + speed_template_32_64, num_mb); + test_mb_skcipher_speed("xts(camellia)", DECRYPT, sec, NULL, 0, + speed_template_32_64, num_mb); + break; + + case 609: + test_mb_skcipher_speed("ecb(blowfish)", ENCRYPT, sec, NULL, 0, + speed_template_8_32, num_mb); + test_mb_skcipher_speed("ecb(blowfish)", DECRYPT, sec, NULL, 0, + speed_template_8_32, num_mb); + test_mb_skcipher_speed("cbc(blowfish)", ENCRYPT, sec, NULL, 0, + speed_template_8_32, num_mb); + test_mb_skcipher_speed("cbc(blowfish)", DECRYPT, sec, NULL, 0, + speed_template_8_32, num_mb); + test_mb_skcipher_speed("ctr(blowfish)", ENCRYPT, sec, NULL, 0, + speed_template_8_32, num_mb); + test_mb_skcipher_speed("ctr(blowfish)", DECRYPT, sec, NULL, 0, + speed_template_8_32, num_mb); + break; + case 1000: test_available(); break; From patchwork Sun Dec 17 08:29:05 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 122178 Delivered-To: patch@linaro.org Received: by 10.140.22.227 with SMTP id 90csp1514920qgn; Sun, 17 Dec 2017 00:30:39 -0800 (PST) X-Google-Smtp-Source: 
From: Gilad Ben-Yossef
To: Herbert Xu , "David S. Miller"
Cc: Ofir Drang , linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 6/6] crypto: tcrypt: add multibuf aead speed test
Date: Sun, 17 Dec 2017 08:29:05 +0000
Message-Id: <1513499346-9047-7-git-send-email-gilad@benyossef.com>
In-Reply-To: <1513499346-9047-1-git-send-email-gilad@benyossef.com>
References: <1513499346-9047-1-git-send-email-gilad@benyossef.com>

The performance of some aead tfm providers is affected by the amount of
parallelism possible with the processing.
Introduce an async aead concurrent multiple buffer processing speed test to be able to test performance of such tfm providers. Signed-off-by: Gilad Ben-Yossef --- crypto/tcrypt.c | 437 ++++++++++++++++++++++++++++++++++++++++++++++++-------- 1 file changed, 378 insertions(+), 59 deletions(-) -- 2.7.4 diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index d617c19..58e3344 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -80,6 +80,66 @@ static char *check[] = { NULL }; +static u32 block_sizes[] = { 16, 64, 256, 1024, 8192, 0 }; +static u32 aead_sizes[] = { 16, 64, 256, 512, 1024, 2048, 4096, 8192, 0 }; + +#define XBUFSIZE 8 +#define MAX_IVLEN 32 + +static int testmgr_alloc_buf(char *buf[XBUFSIZE]) +{ + int i; + + for (i = 0; i < XBUFSIZE; i++) { + buf[i] = (void *)__get_free_page(GFP_KERNEL); + if (!buf[i]) + goto err_free_buf; + } + + return 0; + +err_free_buf: + while (i-- > 0) + free_page((unsigned long)buf[i]); + + return -ENOMEM; +} + +static void testmgr_free_buf(char *buf[XBUFSIZE]) +{ + int i; + + for (i = 0; i < XBUFSIZE; i++) + free_page((unsigned long)buf[i]); +} + +static void sg_init_aead(struct scatterlist *sg, char *xbuf[XBUFSIZE], + unsigned int buflen, const void *assoc, + unsigned int aad_size) +{ + int np = (buflen + PAGE_SIZE - 1)/PAGE_SIZE; + int k, rem; + + if (np > XBUFSIZE) { + rem = PAGE_SIZE; + np = XBUFSIZE; + } else { + rem = buflen % PAGE_SIZE; + } + + sg_init_table(sg, np + 1); + + sg_set_buf(&sg[0], assoc, aad_size); + + if (rem) + np--; + for (k = 0; k < np; k++) + sg_set_buf(&sg[k + 1], xbuf[k], PAGE_SIZE); + + if (rem) + sg_set_buf(&sg[k + 1], xbuf[k], rem); +} + static inline int do_one_aead_op(struct aead_request *req, int ret) { struct crypto_wait *wait = req->base.data; @@ -87,8 +147,44 @@ static inline int do_one_aead_op(struct aead_request *req, int ret) return crypto_wait_req(ret, wait); } -static int test_aead_jiffies(struct aead_request *req, int enc, - int blen, int secs) +struct test_mb_aead_data { + struct scatterlist sg[XBUFSIZE]; + struct scatterlist sgout[XBUFSIZE]; + struct aead_request *req; + struct crypto_wait wait; + char *xbuf[XBUFSIZE]; + char *xoutbuf[XBUFSIZE]; + char *axbuf[XBUFSIZE]; +}; + +static int do_mult_aead_op(struct test_mb_aead_data *data, int enc, + u32 num_mb) +{ + int i, rc[num_mb], err = 0; + + /* Fire up a bunch of concurrent requests */ + for (i = 0; i < num_mb; i++) { + if (enc == ENCRYPT) + rc[i] = crypto_aead_encrypt(data[i].req); + else + rc[i] = crypto_aead_decrypt(data[i].req); + } + + /* Wait for all requests to finish */ + for (i = 0; i < num_mb; i++) { + rc[i] = crypto_wait_req(rc[i], &data[i].wait); + + if (rc[i]) { + pr_info("concurrent request %d error %d\n", i, rc[i]); + err = rc[i]; + } + } + + return err; +} + +static int test_mb_aead_jiffies(struct test_mb_aead_data *data, int enc, + int blen, int secs, u32 num_mb) { unsigned long start, end; int bcount; @@ -96,21 +192,18 @@ static int test_aead_jiffies(struct aead_request *req, int enc, for (start = jiffies, end = start + secs * HZ, bcount = 0; time_before(jiffies, end); bcount++) { - if (enc) - ret = do_one_aead_op(req, crypto_aead_encrypt(req)); - else - ret = do_one_aead_op(req, crypto_aead_decrypt(req)); - + ret = do_mult_aead_op(data, enc, num_mb); if (ret) return ret; } - printk("%d operations in %d seconds (%ld bytes)\n", - bcount, secs, (long)bcount * blen); + pr_cont("%d operations in %d seconds (%ld bytes)\n", + bcount * num_mb, secs, (long)bcount * blen * num_mb); return 0; } -static int test_aead_cycles(struct aead_request *req, int 
enc, int blen) +static int test_mb_aead_cycles(struct test_mb_aead_data *data, int enc, + int blen, u32 num_mb) { unsigned long cycles = 0; int ret = 0; @@ -118,11 +211,7 @@ static int test_aead_cycles(struct aead_request *req, int enc, int blen) /* Warm-up run. */ for (i = 0; i < 4; i++) { - if (enc) - ret = do_one_aead_op(req, crypto_aead_encrypt(req)); - else - ret = do_one_aead_op(req, crypto_aead_decrypt(req)); - + ret = do_mult_aead_op(data, enc, num_mb); if (ret) goto out; } @@ -132,10 +221,7 @@ static int test_aead_cycles(struct aead_request *req, int enc, int blen) cycles_t start, end; start = get_cycles(); - if (enc) - ret = do_one_aead_op(req, crypto_aead_encrypt(req)); - else - ret = do_one_aead_op(req, crypto_aead_decrypt(req)); + ret = do_mult_aead_op(data, enc, num_mb); end = get_cycles(); if (ret) @@ -146,70 +232,276 @@ static int test_aead_cycles(struct aead_request *req, int enc, int blen) out: if (ret == 0) - printk("1 operation in %lu cycles (%d bytes)\n", - (cycles + 4) / 8, blen); + pr_cont("1 operation in %lu cycles (%d bytes)\n", + (cycles + 4) / (8 * num_mb), blen); return ret; } -static u32 block_sizes[] = { 16, 64, 256, 1024, 8192, 0 }; -static u32 aead_sizes[] = { 16, 64, 256, 512, 1024, 2048, 4096, 8192, 0 }; +static void test_mb_aead_speed(const char *algo, int enc, int secs, + struct aead_speed_template *template, + unsigned int tcount, u8 authsize, + unsigned int aad_size, u8 *keysize, u32 num_mb) +{ + struct test_mb_aead_data *data; + struct crypto_aead *tfm; + unsigned int i, j, iv_len; + const char *key; + const char *e; + void *assoc; + u32 *b_size; + char *iv; + int ret; -#define XBUFSIZE 8 -#define MAX_IVLEN 32 -static int testmgr_alloc_buf(char *buf[XBUFSIZE]) -{ - int i; + if (aad_size >= PAGE_SIZE) { + pr_err("associate data length (%u) too big\n", aad_size); + return; + } - for (i = 0; i < XBUFSIZE; i++) { - buf[i] = (void *)__get_free_page(GFP_KERNEL); - if (!buf[i]) - goto err_free_buf; + iv = kzalloc(MAX_IVLEN, GFP_KERNEL); + if (!iv) + return; + + if (enc == ENCRYPT) + e = "encryption"; + else + e = "decryption"; + + data = kcalloc(num_mb, sizeof(*data), GFP_KERNEL); + if (!data) + goto out_free_iv; + + tfm = crypto_alloc_aead(algo, 0, 0); + if (IS_ERR(tfm)) { + pr_err("failed to load transform for %s: %ld\n", + algo, PTR_ERR(tfm)); + goto out_free_data; } - return 0; + ret = crypto_aead_setauthsize(tfm, authsize); -err_free_buf: - while (i-- > 0) - free_page((unsigned long)buf[i]); + for (i = 0; i < num_mb; ++i) + if (testmgr_alloc_buf(data[i].xbuf)) { + while (i--) + testmgr_free_buf(data[i].xbuf); + goto out_free_tfm; + } - return -ENOMEM; + for (i = 0; i < num_mb; ++i) + if (testmgr_alloc_buf(data[i].axbuf)) { + while (i--) + testmgr_free_buf(data[i].axbuf); + goto out_free_xbuf; + } + + for (i = 0; i < num_mb; ++i) + if (testmgr_alloc_buf(data[i].xoutbuf)) { + while (i--) + testmgr_free_buf(data[i].axbuf); + goto out_free_axbuf; + } + + for (i = 0; i < num_mb; ++i) { + data[i].req = aead_request_alloc(tfm, GFP_KERNEL); + if (!data[i].req) { + pr_err("alg: skcipher: Failed to allocate request for %s\n", + algo); + while (i--) + aead_request_free(data[i].req); + goto out_free_xoutbuf; + } + } + + for (i = 0; i < num_mb; ++i) { + crypto_init_wait(&data[i].wait); + aead_request_set_callback(data[i].req, + CRYPTO_TFM_REQ_MAY_BACKLOG, + crypto_req_done, &data[i].wait); + } + + pr_info("\ntesting speed of multibuffer %s (%s) %s\n", algo, + get_driver_name(crypto_aead, tfm), e); + + i = 0; + do { + b_size = aead_sizes; + do { + if (*b_size + 
authsize > XBUFSIZE * PAGE_SIZE) { + pr_err("template (%u) too big for bufufer (%lu)\n", + authsize + *b_size, + XBUFSIZE * PAGE_SIZE); + goto out; + } + + pr_info("test %u (%d bit key, %d byte blocks): ", i, + *keysize * 8, *b_size); + + /* Set up tfm global state, i.e. the key */ + + memset(tvmem[0], 0xff, PAGE_SIZE); + key = tvmem[0]; + for (j = 0; j < tcount; j++) { + if (template[j].klen == *keysize) { + key = template[j].key; + break; + } + } + + crypto_aead_clear_flags(tfm, ~0); + + ret = crypto_aead_setkey(tfm, key, *keysize); + if (ret) { + pr_err("setkey() failed flags=%x\n", + crypto_aead_get_flags(tfm)); + goto out; + } + + iv_len = crypto_aead_ivsize(tfm); + if (iv_len) + memset(iv, 0xff, iv_len); + + /* Now setup per request stuff, i.e. buffers */ + + for (j = 0; j < num_mb; ++j) { + struct test_mb_aead_data *cur = &data[j]; + + assoc = cur->axbuf[0]; + memset(assoc, 0xff, aad_size); + + sg_init_aead(cur->sg, cur->xbuf, + *b_size + (enc ? 0 : authsize), + assoc, aad_size); + + sg_init_aead(cur->sgout, cur->xoutbuf, + *b_size + (enc ? authsize : 0), + assoc, aad_size); + + aead_request_set_ad(cur->req, aad_size); + + if (!enc) { + + aead_request_set_crypt(cur->req, + cur->sgout, + cur->sg, + *b_size, iv); + ret = crypto_aead_encrypt(cur->req); + ret = do_one_aead_op(cur->req, ret); + + if (ret) { + pr_err("calculating auth failed failed (%d)\n", + ret); + break; + } + } + + aead_request_set_crypt(cur->req, cur->sg, + cur->sgout, *b_size + + (enc ? 0 : authsize), + iv); + + } + + if (secs) + ret = test_mb_aead_jiffies(data, enc, *b_size, + secs, num_mb); + else + ret = test_mb_aead_cycles(data, enc, *b_size, + num_mb); + + if (ret) { + pr_err("%s() failed return code=%d\n", e, ret); + break; + } + b_size++; + i++; + } while (*b_size); + keysize++; + } while (*keysize); + +out: + for (i = 0; i < num_mb; ++i) + aead_request_free(data[i].req); +out_free_xoutbuf: + for (i = 0; i < num_mb; ++i) + testmgr_free_buf(data[i].xoutbuf); +out_free_axbuf: + for (i = 0; i < num_mb; ++i) + testmgr_free_buf(data[i].axbuf); +out_free_xbuf: + for (i = 0; i < num_mb; ++i) + testmgr_free_buf(data[i].xbuf); +out_free_tfm: + crypto_free_aead(tfm); +out_free_data: + kfree(data); +out_free_iv: + kfree(iv); } -static void testmgr_free_buf(char *buf[XBUFSIZE]) +static int test_aead_jiffies(struct aead_request *req, int enc, + int blen, int secs) { - int i; + unsigned long start, end; + int bcount; + int ret; - for (i = 0; i < XBUFSIZE; i++) - free_page((unsigned long)buf[i]); + for (start = jiffies, end = start + secs * HZ, bcount = 0; + time_before(jiffies, end); bcount++) { + if (enc) + ret = do_one_aead_op(req, crypto_aead_encrypt(req)); + else + ret = do_one_aead_op(req, crypto_aead_decrypt(req)); + + if (ret) + return ret; + } + + printk("%d operations in %d seconds (%ld bytes)\n", + bcount, secs, (long)bcount * blen); + return 0; } -static void sg_init_aead(struct scatterlist *sg, char *xbuf[XBUFSIZE], - unsigned int buflen, const void *assoc, - unsigned int aad_size) +static int test_aead_cycles(struct aead_request *req, int enc, int blen) { - int np = (buflen + PAGE_SIZE - 1)/PAGE_SIZE; - int k, rem; + unsigned long cycles = 0; + int ret = 0; + int i; - if (np > XBUFSIZE) { - rem = PAGE_SIZE; - np = XBUFSIZE; - } else { - rem = buflen % PAGE_SIZE; + /* Warm-up run. 
*/ + for (i = 0; i < 4; i++) { + if (enc) + ret = do_one_aead_op(req, crypto_aead_encrypt(req)); + else + ret = do_one_aead_op(req, crypto_aead_decrypt(req)); + + if (ret) + goto out; } - sg_init_table(sg, np + 1); + /* The real thing. */ + for (i = 0; i < 8; i++) { + cycles_t start, end; - sg_set_buf(&sg[0], assoc, aad_size); + start = get_cycles(); + if (enc) + ret = do_one_aead_op(req, crypto_aead_encrypt(req)); + else + ret = do_one_aead_op(req, crypto_aead_decrypt(req)); + end = get_cycles(); - if (rem) - np--; - for (k = 0; k < np; k++) - sg_set_buf(&sg[k + 1], xbuf[k], PAGE_SIZE); + if (ret) + goto out; - if (rem) - sg_set_buf(&sg[k + 1], xbuf[k], rem); + cycles += end - start; + } + +out: + if (ret == 0) + printk("1 operation in %lu cycles (%d bytes)\n", + (cycles + 4) / 8, blen); + + return ret; } static void test_aead_speed(const char *algo, int enc, unsigned int secs, @@ -1912,6 +2204,33 @@ static int do_test(const char *alg, u32 type, u32 mask, int m) speed_template_32); break; + case 215: + test_mb_aead_speed("rfc4106(gcm(aes))", ENCRYPT, sec, NULL, + 0, 16, 16, aead_speed_template_20, num_mb); + test_mb_aead_speed("gcm(aes)", ENCRYPT, sec, NULL, 0, 16, 8, + speed_template_16_24_32, num_mb); + test_mb_aead_speed("rfc4106(gcm(aes))", DECRYPT, sec, NULL, + 0, 16, 16, aead_speed_template_20, num_mb); + test_mb_aead_speed("gcm(aes)", DECRYPT, sec, NULL, 0, 16, 8, + speed_template_16_24_32, num_mb); + break; + + case 216: + test_mb_aead_speed("rfc4309(ccm(aes))", ENCRYPT, sec, NULL, 0, + 16, 16, aead_speed_template_19, num_mb); + test_mb_aead_speed("rfc4309(ccm(aes))", DECRYPT, sec, NULL, 0, + 16, 16, aead_speed_template_19, num_mb); + break; + + case 217: + test_mb_aead_speed("rfc7539esp(chacha20,poly1305)", ENCRYPT, + sec, NULL, 0, 16, 8, aead_speed_template_36, + num_mb); + test_mb_aead_speed("rfc7539esp(chacha20,poly1305)", DECRYPT, + sec, NULL, 0, 16, 8, aead_speed_template_36, + num_mb); + break; + case 300: if (alg) { test_hash_speed(alg, sec, generic_hash_speed_template);