From patchwork Sat Feb 11 19:25:22 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 93827
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au
Cc: Ard Biesheuvel
Subject: [PATCH 2/2] crypto: ccm - drop unnecessary minimum 32-bit alignment
Date: Sat, 11 Feb 2017 19:25:22 +0000
Message-Id: <1486841122-1686-2-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1486841122-1686-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1486841122-1686-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

The CCM driver forces 32-bit alignment even if the underlying ciphers
don't care about alignment. This is because crypto_xor() used to
require this, but since this is no longer the case, drop the hardcoded
minimum of 32 bits.
Signed-off-by: Ard Biesheuvel
---
 crypto/ccm.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

-- 
2.7.4

diff --git a/crypto/ccm.c b/crypto/ccm.c
index 24c26ab052ca..442848807a52 100644
--- a/crypto/ccm.c
+++ b/crypto/ccm.c
@@ -525,8 +525,7 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl,
 				       ctr->base.cra_priority) / 2;
 	inst->alg.base.cra_blocksize = 1;
 	inst->alg.base.cra_alignmask = mac->base.cra_alignmask |
-				       ctr->base.cra_alignmask |
-				       (__alignof__(u32) - 1);
+				       ctr->base.cra_alignmask;
 	inst->alg.ivsize = 16;
 	inst->alg.chunksize = crypto_skcipher_alg_chunksize(ctr);
 	inst->alg.maxauthsize = 16;