From patchwork Wed May 14 18:17:31 2014
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 30189
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: catalin.marinas@arm.com, jussi.kivilinna@iki.fi, herbert@gondor.apana.org.au
Cc: linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org,
 Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v2 11/11] arm64/crypto: add voluntary preemption to Crypto Extensions GHASH
Date: Wed, 14 May 2014 11:17:31 -0700
Message-Id: <1400091451-9117-12-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1400091451-9117-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1400091451-9117-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

The Crypto Extensions based GHASH implementation uses the NEON register
file, and hence runs with preemption disabled. This patch adds a
TIF_NEED_RESCHED check to its inner loop, so that we at least give up the
CPU voluntarily when we are running in process context and the scheduler
has tagged us for preemption.
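In C terms, the resulting control flow is roughly the sketch below. This is
an illustration only, not code from the patch: do_neon_chunk() is a
hypothetical stand-in for the PMULL core routine, which is assumed to stop
early and return the number of unprocessed blocks once the thread has been
flagged for rescheduling.

#include <asm/neon.h>
#include <linux/thread_info.h>

/*
 * Hypothetical stand-in for the NEON core routine: processes up to
 * 'blocks' blocks, stops early once TIF_NEED_RESCHED is set in *ti,
 * and returns the number of blocks still left to process.
 */
int do_neon_chunk(int blocks, struct thread_info *ti);

static void process_preemptibly(int blocks, struct thread_info *ti)
{
	do {
		kernel_neon_begin_partial(6);	/* disables preemption */
		blocks = do_neon_chunk(blocks, ti);
		kernel_neon_end();		/* re-enables preemption, so a
						 * pending reschedule happens
						 * right here */
	} while (ti && blocks > 0);
}

A NULL thread_info pointer keeps the old single-shot behaviour, which is
what atomic callers get.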
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 arch/arm64/crypto/ghash-ce-core.S | 10 ++++++----
 arch/arm64/crypto/ghash-ce-glue.c | 34 ++++++++++++++++++++++++++--------
 2 files changed, 32 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S
index b9e6eaf41c9b..523432f24ed2 100644
--- a/arch/arm64/crypto/ghash-ce-core.S
+++ b/arch/arm64/crypto/ghash-ce-core.S
@@ -31,8 +31,9 @@
 	.arch		armv8-a+crypto
 
 	/*
-	 * void pmull_ghash_update(int blocks, u64 dg[], const char *src,
-	 *			   struct ghash_key const *k, const char *head)
+	 * int pmull_ghash_update(int blocks, u64 dg[], const char *src,
+	 *			  struct ghash_key const *k, const char *head,
+	 *			  struct thread_info *ti)
 	 */
 ENTRY(pmull_ghash_update)
 	ld1		{DATA.16b}, [x1]
@@ -88,8 +89,9 @@ CPU_LE(	rev64	IN1.16b, IN1.16b	)
 	eor		T1.16b, T1.16b, T2.16b
 	eor		DATA.16b, DATA.16b, T1.16b
 
-	cbnz		w0, 0b
+	cbz		w0, 2f
+	b_if_no_resched	x5, x7, 0b
 
-	st1		{DATA.16b}, [x1]
+2:	st1		{DATA.16b}, [x1]
 	ret
 ENDPROC(pmull_ghash_update)
diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index b92baf3f68c7..b8f58f9bcf00 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -33,8 +33,9 @@ struct ghash_desc_ctx {
 	u32 count;
 };
 
-asmlinkage void pmull_ghash_update(int blocks, u64 dg[], const char *src,
-				   struct ghash_key const *k, const char *head);
+asmlinkage int pmull_ghash_update(int blocks, u64 dg[], const char *src,
+				  struct ghash_key const *k, const char *head,
+				  struct thread_info *ti);
 
 static int ghash_init(struct shash_desc *desc)
 {
@@ -54,6 +55,7 @@ static int ghash_update(struct shash_desc *desc, const u8 *src,
 
 	if ((partial + len) >= GHASH_BLOCK_SIZE) {
 		struct ghash_key *key = crypto_shash_ctx(desc->tfm);
+		struct thread_info *ti = NULL;
 		int blocks;
 
 		if (partial) {
@@ -64,14 +66,30 @@ static int ghash_update(struct shash_desc *desc, const u8 *src,
 			len -= p;
 		}
 
+		/*
+		 * Pass current's thread info pointer to pmull_ghash_update()
+		 * below if we want it to play nice under preemption.
+		 */
+		if ((IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY) ||
+		     IS_ENABLED(CONFIG_PREEMPT))
+		    && (desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP))
+			ti = current_thread_info();
+
 		blocks = len / GHASH_BLOCK_SIZE;
 		len %= GHASH_BLOCK_SIZE;
 
-		kernel_neon_begin_partial(6);
-		pmull_ghash_update(blocks, ctx->digest, src, key,
-				   partial ? ctx->buf : NULL);
-		kernel_neon_end();
-		src += blocks * GHASH_BLOCK_SIZE;
+		do {
+			int rem;
+
+			kernel_neon_begin_partial(6);
+			rem = pmull_ghash_update(blocks, ctx->digest, src, key,
+						 partial ? ctx->buf : NULL, ti);
+			kernel_neon_end();
+
+			src += (blocks - rem) * GHASH_BLOCK_SIZE;
+			blocks = rem;
+			partial = 0;
+		} while (unlikely(ti && blocks > 0));
 	}
 	if (len)
 		memcpy(ctx->buf + partial, src, len);
@@ -89,7 +107,7 @@ static int ghash_final(struct shash_desc *desc, u8 *dst)
 		memset(ctx->buf + partial, 0, GHASH_BLOCK_SIZE - partial);
 
 		kernel_neon_begin_partial(6);
-		pmull_ghash_update(1, ctx->digest, ctx->buf, key, NULL);
+		pmull_ghash_update(1, ctx->digest, ctx->buf, key, NULL, NULL);
 		kernel_neon_end();
 	}
 	put_unaligned_be64(ctx->digest[1], dst);
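
The b_if_no_resched macro used in the core loop is introduced earlier in
this series and is not defined in this patch. Judging by its name and its
operands (the thread_info pointer in x5 and a scratch register), its
C-level effect should match the test sketched below; this is an assumption
based on the macro's name, not the series' actual definition.

#include <linux/types.h>
#include <linux/thread_info.h>

/*
 * Assumed C equivalent of the b_if_no_resched check: callers that pass
 * a NULL thread_info (atomic context) never yield; otherwise, keep
 * looping only while TIF_NEED_RESCHED is clear.
 */
static inline bool may_continue_blocks(struct thread_info *ti)
{
	return !ti || !test_ti_thread_flag(ti, TIF_NEED_RESCHED);
}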