From patchwork Thu May 1 15:51:25 2014
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 29521
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, steve.capper@linaro.org,
 Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH resend 14/15] arm64/crypto: add voluntary preemption to
 Crypto Extensions SHA2
Date: Thu, 1 May 2014 17:51:25 +0200
Message-Id: <1398959486-8222-5-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1398959486-8222-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1398959486-8222-1-git-send-email-ard.biesheuvel@linaro.org>

The Crypto Extensions based SHA2 implementation uses the NEON register
file, and hence runs with preemption disabled. This patch adds a
TIF_NEED_RESCHED check to its inner loop so we at least give up the CPU
voluntarily when we are running in process context and have been tagged
for preemption by the scheduler.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/sha2-ce-core.S | 19 ++++++++-------
 arch/arm64/crypto/sha2-ce-glue.c | 51 ++++++++++++++++++++++++++++++----------
 2 files changed, 50 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/crypto/sha2-ce-core.S b/arch/arm64/crypto/sha2-ce-core.S
index 53e750614169..46b669d91c29 100644
--- a/arch/arm64/crypto/sha2-ce-core.S
+++ b/arch/arm64/crypto/sha2-ce-core.S
@@ -73,8 +73,8 @@
 	.word		0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2
 
 	/*
-	 * void sha2_ce_transform(int blocks, u8 const *src, u32 *state,
-	 *			  u8 *head, long bytes)
+	 * int sha2_ce_transform(int blocks, u8 const *src, u32 *state,
+	 *			 u8 *head, long bytes, struct thread_info *ti)
 	 */
 ENTRY(sha2_ce_transform)
 	/* load round constants */
@@ -131,7 +131,14 @@ CPU_LE(	rev32		v19.16b, v19.16b	)
 	add		dgbv.4s, dgbv.4s, dg1v.4s
 
 	/* handled all input blocks? */
-	cbnz		w0, 0b
+	cbz		w0, 4f
+
+	/* should we exit early? */
+	b_if_no_resched	x5, x8, 0b
+
+	/* store new state */
+3:	stp		dga, dgb, [x2]
+	ret
 
 	/*
 	 * Final block: add padding and total bit count.
@@ -139,7 +146,7 @@ CPU_LE(	rev32		v19.16b, v19.16b	)
 	 * size was not a round multiple of the block size, and the padding is
 	 * handled by the C code.
 	 */
-	cbz		x4, 3f
+4:	cbz		x4, 3b
 	movi		v17.2d, #0
 	mov		x8, #0x80000000
 	movi		v18.2d, #0
@@ -149,8 +156,4 @@ CPU_LE(	rev32		v19.16b, v19.16b	)
 	mov		v19.d[0], xzr
 	mov		v19.d[1], x7
 	b		2b
-
-	/* store new state */
-3:	stp		dga, dgb, [x2]
-	ret
 ENDPROC(sha2_ce_transform)
diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c
index 81617262b3df..6566ad3fdf82 100644
--- a/arch/arm64/crypto/sha2-ce-glue.c
+++ b/arch/arm64/crypto/sha2-ce-glue.c
@@ -1,4 +1,4 @@
-/*
+h/*
  * sha2-ce-glue.c - SHA-224/SHA-256 using ARMv8 Crypto Extensions
  *
  * Copyright (C) 2014 Linaro Ltd <ard.biesheuvel@linaro.org>
@@ -20,8 +20,8 @@ MODULE_DESCRIPTION("SHA-224/SHA-256 secure hash using ARMv8 Crypto Extensions");
 MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
 MODULE_LICENSE("GPL v2");
 
-asmlinkage void sha2_ce_transform(int blocks, u8 const *src, u32 *state,
-				  u8 *head, long bytes);
+asmlinkage int sha2_ce_transform(int blocks, u8 const *src, u32 *state,
+				 u8 *head, long bytes, struct thread_info *ti);
 
 static int sha224_init(struct shash_desc *desc)
 {
@@ -58,6 +58,7 @@ static int sha2_update(struct shash_desc *desc, const u8 *data,
 	sctx->count += len;
 
 	if ((partial + len) >= SHA256_BLOCK_SIZE) {
+		struct thread_info *ti = NULL;
 		int blocks;
 
 		if (partial) {
@@ -68,16 +69,30 @@ static int sha2_update(struct shash_desc *desc, const u8 *data,
 			len -= p;
 		}
 
+		/*
+		 * Pass current's thread info pointer to sha2_ce_transform()
+		 * below if we want it to play nice under preemption.
+		 */
+		if ((IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY) ||
+		     IS_ENABLED(CONFIG_PREEMPT)) && !in_interrupt())
+			ti = current_thread_info();
+
 		blocks = len / SHA256_BLOCK_SIZE;
 		len %= SHA256_BLOCK_SIZE;
 
-		kernel_neon_begin_partial(28);
-		sha2_ce_transform(blocks, data, sctx->state,
-				  partial ? sctx->buf : NULL, 0);
-		kernel_neon_end();
+		do {
+			int rem;
+
+			kernel_neon_begin_partial(28);
+			rem = sha2_ce_transform(blocks, data, sctx->state,
+						partial ? sctx->buf : NULL,
+						0, ti);
+			kernel_neon_end();
 
-		data += blocks * SHA256_BLOCK_SIZE;
-		partial = 0;
+			data += (blocks - rem) * SHA256_BLOCK_SIZE;
+			blocks = rem;
+			partial = 0;
+		} while (unlikely(ti && blocks > 0));
 	}
 	if (len)
 		memcpy(sctx->buf + partial, data, len);
@@ -131,6 +146,7 @@ static void sha2_finup(struct shash_desc *desc, const u8 *data,
 			unsigned int len)
 {
 	struct sha256_state *sctx = shash_desc_ctx(desc);
+	struct thread_info *ti = NULL;
 	int blocks;
 
 	if (sctx->count || !len || (len % SHA256_BLOCK_SIZE)) {
@@ -147,9 +163,20 @@ static void sha2_finup(struct shash_desc *desc, const u8 *data,
 	 */
 	blocks = len / SHA256_BLOCK_SIZE;
 
-	kernel_neon_begin_partial(28);
-	sha2_ce_transform(blocks, data, sctx->state, NULL, len);
-	kernel_neon_end();
+	if ((IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY) ||
+	     IS_ENABLED(CONFIG_PREEMPT)) && !in_interrupt())
+		ti = current_thread_info();
+
+	do {
+		int rem;
+
+		kernel_neon_begin_partial(28);
+		rem = sha2_ce_transform(blocks, data, sctx->state,
+					NULL, len, ti);
+		kernel_neon_end();
+		data += (blocks - rem) * SHA256_BLOCK_SIZE;
+		blocks = rem;
+	} while (unlikely(ti && blocks > 0));
 }
 
 static int sha224_finup(struct shash_desc *desc, const u8 *data,
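
Note: the b_if_no_resched macro used in sha2-ce-core.S above is introduced by an
earlier patch in this series and is not part of this diff. Purely for illustration,
here is a minimal sketch of how such a TIF_NEED_RESCHED test could be written; the
macro body, the NULL check, and the use of the TI_FLAGS asm-offsets constant and
TIF_NEED_RESCHED bit from <asm/thread_info.h> are assumptions of this sketch, not
the series' actual definition:

	/*
	 * Hypothetical sketch: branch to \lbl when it is fine to keep
	 * processing blocks, otherwise fall through so the caller can store
	 * its state and return early with the number of blocks remaining.
	 *
	 *   \ti  - thread_info pointer passed in from the C glue code,
	 *          may be NULL (e.g. when called from interrupt context)
	 *   \tmp - scratch register, clobbered
	 *   \lbl - label of the block processing loop
	 */
	.macro	b_if_no_resched, ti, tmp, lbl
	cbz	\ti, \lbl			// no thread_info: never yield
	ldr	\tmp, [\ti, #TI_FLAGS]		// load thread_info::flags
	tbz	\tmp, #TIF_NEED_RESCHED, \lbl	// not flagged: keep going
	.endm

The NULL check matters for this sketch because the glue code deliberately passes a
NULL ti when preemption handling is not wanted, and the macro is still executed in
that case.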