From patchwork Fri Sep 9 10:56:33 2016
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 75871
From: Ard Biesheuvel
Date: Fri, 9 Sep 2016 11:56:33 +0100
Subject: Re: Kernel panic - encryption/decryption failed when open file on Arm64
References: <57D15BD3.40903@huawei.com>
 <20160908124709.GA26586@gondor.apana.org.au>
 <57D28CB8.4080904@huawei.com>
To: xiakaixu
Cc: Herbert Xu, "David S. Miller", "Theodore Ts'o", Jaegeuk Kim,
 nhorman@tuxdriver.com, mh1@iki.fi, linux-crypto@vger.kernel.org,
 linux-kernel@vger.kernel.org, Bintian, liushuoran@huawei.com,
 Huxinwei, zhangzhibin.zhang@huawei.com

Miller" , "Theodore Ts'o" , Jaegeuk Kim , nhorman@tuxdriver.com, mh1@iki.fi, "linux-crypto@vger.kernel.org" , "linux-kernel@vger.kernel.org" , Bintian , liushuoran@huawei.com, Huxinwei , zhangzhibin.zhang@huawei.com Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org On 9 September 2016 at 11:31, Ard Biesheuvel wrote: > On 9 September 2016 at 11:19, xiakaixu wrote: >> Hi, >> >> After a deeply research about this crash, seems it is a specific >> bug that only exists in armv8 board. And it occurs in this function >> in arch/arm64/crypto/aes-glue.c. >> >> static int ctr_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst, >> struct scatterlist *src, unsigned int nbytes) >> { >> ... >> >> desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP; >> blkcipher_walk_init(&walk, dst, src, nbytes); >> err = blkcipher_walk_virt_block(desc, &walk, AES_BLOCK_SIZE); ---> >> page allocation failed >> >> ... >> >> while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) { ----> >> walk.nbytes = 0, and skip this loop >> aes_ctr_encrypt(walk.dst.virt.addr, walk.src.virt.addr, >> (u8 *)ctx->key_enc, rounds, blocks, walk.iv, >> first); >> ... >> err = blkcipher_walk_done(desc, &walk, >> walk.nbytes % AES_BLOCK_SIZE); >> } >> if (nbytes) { ----> >> enter this if() statement >> u8 *tdst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE; >> u8 *tsrc = walk.src.virt.addr + blocks * AES_BLOCK_SIZE; >> ... >> >> aes_ctr_encrypt(tail, tsrc, (u8 *)ctx->key_enc, rounds, >> ----> the the sencond input parameter is NULL, so crash... >> blocks, walk.iv, first); >> ... >> } >> ... >> } >> >> >> If the page allocation failed in the function blkcipher_walk_virt_block(), >> the variable walk.nbytes = 0, so it will skip the while() loop and enter >> the if(nbytes) statment. But here the varibale tsrc is NULL and it is also >> the sencond input parameter of the function aes_ctr_encrypt()... Kernel >> Panic... >> >> I have also researched the similar function in other architectures, and >> there if(walk.nbytes) is used, not this if(nbytes) statement in the armv8. >> so I think this armv8 function ctr_encrypt() should deal with the page >> allocation failed situation. >> Does this solve your problem? u8 __aligned(8) tail[AES_BLOCK_SIZE]; -- To unsubscribe from this list: send the line "unsubscribe linux-crypto" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c index 5c888049d061..6b2aa0fd6cd0 100644 --- a/arch/arm64/crypto/aes-glue.c +++ b/arch/arm64/crypto/aes-glue.c @@ -216,7 +216,7 @@ static int ctr_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst, err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE); } - if (nbytes) { + if (walk.nbytes % AES_BLOCK_SIZE) { u8 *tdst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE; u8 *tsrc = walk.src.virt.addr + blocks * AES_BLOCK_SIZE;