From patchwork Thu Jul 13 08:19:52 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gilad Ben-Yossef
X-Patchwork-Id: 107580
From: Gilad Ben-Yossef
To: Greg Kroah-Hartman, linux-crypto@vger.kernel.org,
	driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org,
	linux-kernel@vger.kernel.org
Cc: Ofir Drang
Subject: [PATCH 02/12] staging: ccree: clean up struct ssi_aead_ctx
Date: Thu, 13 Jul 2017 11:19:52 +0300
Message-Id: <1499934004-28513-3-git-send-email-gilad@benyossef.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1499934004-28513-1-git-send-email-gilad@benyossef.com>
References: <1499934004-28513-1-git-send-email-gilad@benyossef.com>

struct ssi_aead_ctx has some nested structures defined inline, which makes
the code that accesses them very unreadable. Move the nested structure
definitions out of the struct and use the change to make the code accessing
them more readable and more coding-style compliant by shortening lines and
properly matching alignment.

Signed-off-by: Gilad Ben-Yossef
---
 drivers/staging/ccree/ssi_aead.c | 108 +++++++++++++++++++++++----------------
 1 file changed, 64 insertions(+), 44 deletions(-)

-- 
2.1.4

diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c
index 99eeeda..fd37dde 100644
--- a/drivers/staging/ccree/ssi_aead.c
+++ b/drivers/staging/ccree/ssi_aead.c
@@ -56,22 +56,26 @@ struct ssi_aead_handle {
 	struct list_head aead_list;
 };
 
+struct cc_hmac_s {
+	u8 *padded_authkey;
+	u8 *ipad_opad; /* IPAD, OPAD*/
+	dma_addr_t padded_authkey_dma_addr;
+	dma_addr_t ipad_opad_dma_addr;
+};
+
+struct cc_xcbc_s {
+	u8 *xcbc_keys; /* K1,K2,K3 */
+	dma_addr_t xcbc_keys_dma_addr;
+};
+
 struct ssi_aead_ctx {
 	struct ssi_drvdata *drvdata;
 	u8 ctr_nonce[MAX_NONCE_SIZE]; /* used for ctr3686 iv and aes ccm */
 	u8 *enckey;
 	dma_addr_t enckey_dma_addr;
 	union {
-		struct {
-			u8 *padded_authkey;
-			u8 *ipad_opad; /* IPAD, OPAD*/
-			dma_addr_t padded_authkey_dma_addr;
-			dma_addr_t ipad_opad_dma_addr;
-		} hmac;
-		struct {
-			u8 *xcbc_keys; /* K1,K2,K3 */
-			dma_addr_t xcbc_keys_dma_addr;
-		} xcbc;
+		struct cc_hmac_s hmac;
+		struct cc_xcbc_s xcbc;
 	} auth_state;
 	unsigned int enc_keylen;
 	unsigned int auth_keylen;
@@ -105,33 +109,37 @@ static void ssi_aead_exit(struct crypto_aead *tfm)
 	}
 
 	if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { /* XCBC authetication */
-		if (ctx->auth_state.xcbc.xcbc_keys) {
+		struct cc_xcbc_s *xcbc = &ctx->auth_state.xcbc;
+
+		if (xcbc->xcbc_keys) {
 			dma_free_coherent(dev, CC_AES_128_BIT_KEY_SIZE * 3,
-					  ctx->auth_state.xcbc.xcbc_keys,
-					  ctx->auth_state.xcbc.xcbc_keys_dma_addr);
+					  xcbc->xcbc_keys,
+					  xcbc->xcbc_keys_dma_addr);
 		}
 		SSI_LOG_DEBUG("Freed xcbc_keys DMA buffer xcbc_keys_dma_addr=0x%llX\n",
-			      (unsigned long long)ctx->auth_state.xcbc.xcbc_keys_dma_addr);
-		ctx->auth_state.xcbc.xcbc_keys_dma_addr = 0;
-		ctx->auth_state.xcbc.xcbc_keys = NULL;
+			      (unsigned long long)xcbc->xcbc_keys_dma_addr);
+		xcbc->xcbc_keys_dma_addr = 0;
+		xcbc->xcbc_keys = NULL;
 	} else if (ctx->auth_mode != DRV_HASH_NULL) { /* HMAC auth. */
-		if (ctx->auth_state.hmac.ipad_opad) {
+		struct cc_hmac_s *hmac = &ctx->auth_state.hmac;
+
+		if (hmac->ipad_opad) {
 			dma_free_coherent(dev, 2 * MAX_HMAC_DIGEST_SIZE,
-					  ctx->auth_state.hmac.ipad_opad,
-					  ctx->auth_state.hmac.ipad_opad_dma_addr);
+					  hmac->ipad_opad,
+					  hmac->ipad_opad_dma_addr);
 			SSI_LOG_DEBUG("Freed ipad_opad DMA buffer ipad_opad_dma_addr=0x%llX\n",
-				      (unsigned long long)ctx->auth_state.hmac.ipad_opad_dma_addr);
-			ctx->auth_state.hmac.ipad_opad_dma_addr = 0;
-			ctx->auth_state.hmac.ipad_opad = NULL;
+				      (unsigned long long)hmac->ipad_opad_dma_addr);
+			hmac->ipad_opad_dma_addr = 0;
+			hmac->ipad_opad = NULL;
 		}
-		if (ctx->auth_state.hmac.padded_authkey) {
+		if (hmac->padded_authkey) {
 			dma_free_coherent(dev, MAX_HMAC_BLOCK_SIZE,
-					  ctx->auth_state.hmac.padded_authkey,
-					  ctx->auth_state.hmac.padded_authkey_dma_addr);
+					  hmac->padded_authkey,
+					  hmac->padded_authkey_dma_addr);
 			SSI_LOG_DEBUG("Freed padded_authkey DMA buffer padded_authkey_dma_addr=0x%llX\n",
-				      (unsigned long long)ctx->auth_state.hmac.padded_authkey_dma_addr);
-			ctx->auth_state.hmac.padded_authkey_dma_addr = 0;
-			ctx->auth_state.hmac.padded_authkey = NULL;
+				      (unsigned long long)hmac->padded_authkey_dma_addr);
+			hmac->padded_authkey_dma_addr = 0;
+			hmac->padded_authkey = NULL;
 		}
 	}
 }
@@ -165,31 +173,42 @@ static int ssi_aead_init(struct crypto_aead *tfm)
 	/* Set default authlen value */
 
 	if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { /* XCBC authetication */
+		struct cc_xcbc_s *xcbc = &ctx->auth_state.xcbc;
+		const unsigned int key_size = CC_AES_128_BIT_KEY_SIZE * 3;
+
 		/* Allocate dma-coherent buffer for XCBC's K1+K2+K3 */
 		/* (and temporary for user key - up to 256b) */
-		ctx->auth_state.xcbc.xcbc_keys = dma_alloc_coherent(dev,
-			CC_AES_128_BIT_KEY_SIZE * 3,
-			&ctx->auth_state.xcbc.xcbc_keys_dma_addr, GFP_KERNEL);
-		if (!ctx->auth_state.xcbc.xcbc_keys) {
+		xcbc->xcbc_keys = dma_alloc_coherent(dev, key_size,
+						     &xcbc->xcbc_keys_dma_addr,
+						     GFP_KERNEL);
+		if (!xcbc->xcbc_keys) {
 			SSI_LOG_ERR("Failed allocating buffer for XCBC keys\n");
 			goto init_failed;
 		}
 	} else if (ctx->auth_mode != DRV_HASH_NULL) { /* HMAC authentication */
+		struct cc_hmac_s *hmac = &ctx->auth_state.hmac;
+		const unsigned int digest_size = 2 * MAX_HMAC_DIGEST_SIZE;
+		dma_addr_t *pkey_dma = &hmac->padded_authkey_dma_addr;
+
 		/* Allocate dma-coherent buffer for IPAD + OPAD */
-		ctx->auth_state.hmac.ipad_opad = dma_alloc_coherent(dev,
-			2 * MAX_HMAC_DIGEST_SIZE,
-			&ctx->auth_state.hmac.ipad_opad_dma_addr, GFP_KERNEL);
-		if (!ctx->auth_state.hmac.ipad_opad) {
+		hmac->ipad_opad = dma_alloc_coherent(dev, digest_size,
+						     &hmac->ipad_opad_dma_addr,
+						     GFP_KERNEL);
+
+		if (!hmac->ipad_opad) {
 			SSI_LOG_ERR("Failed allocating IPAD/OPAD buffer\n");
 			goto init_failed;
 		}
+
 		SSI_LOG_DEBUG("Allocated authkey buffer in context ctx->authkey=@%p\n",
-			      ctx->auth_state.hmac.ipad_opad);
+			      hmac->ipad_opad);
+
+		hmac->padded_authkey = dma_alloc_coherent(dev,
+							  MAX_HMAC_BLOCK_SIZE,
+							  pkey_dma,
+							  GFP_KERNEL);
 
-		ctx->auth_state.hmac.padded_authkey = dma_alloc_coherent(dev,
-			MAX_HMAC_BLOCK_SIZE,
-			&ctx->auth_state.hmac.padded_authkey_dma_addr, GFP_KERNEL);
-		if (!ctx->auth_state.hmac.padded_authkey) {
+		if (!hmac->padded_authkey) {
 			SSI_LOG_ERR("failed to allocate padded_authkey\n");
 			goto init_failed;
 		}
@@ -295,6 +314,7 @@ static int hmac_setkey(struct cc_hw_desc *desc, struct ssi_aead_ctx *ctx)
 			DRV_HASH_HW_SHA1 : DRV_HASH_HW_SHA256;
 	unsigned int digest_size = (ctx->auth_mode == DRV_HASH_SHA1) ?
 			CC_SHA1_DIGEST_SIZE : CC_SHA256_DIGEST_SIZE;
+	struct cc_hmac_s *hmac = &ctx->auth_state.hmac;
 	int idx = 0;
 	int i;
 
@@ -331,7 +351,7 @@ static int hmac_setkey(struct cc_hw_desc *desc, struct ssi_aead_ctx *ctx)
 	/* Perform HASH update */
 	hw_desc_init(&desc[idx]);
 	set_din_type(&desc[idx], DMA_DLLI,
-		     ctx->auth_state.hmac.padded_authkey_dma_addr,
+		     hmac->padded_authkey_dma_addr,
 		     SHA256_BLOCK_SIZE, NS_BIT);
 	set_cipher_mode(&desc[idx], hash_mode);
 	set_xor_active(&desc[idx]);
@@ -342,8 +362,8 @@ static int hmac_setkey(struct cc_hw_desc *desc, struct ssi_aead_ctx *ctx)
 	hw_desc_init(&desc[idx]);
 	set_cipher_mode(&desc[idx], hash_mode);
 	set_dout_dlli(&desc[idx],
-		      (ctx->auth_state.hmac.ipad_opad_dma_addr +
-		       digest_ofs), digest_size, NS_BIT, 0);
+		      (hmac->ipad_opad_dma_addr + digest_ofs),
+		      digest_size, NS_BIT, 0);
 	set_flow_mode(&desc[idx], S_HASH_to_DOUT);
 	set_setup_mode(&desc[idx], SETUP_WRITE_STATE0);
 	set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED);
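
Every hunk above applies the same idiom: resolve the auth_state union member
into a short local pointer once per scope, then route every field access and
every dma_*_coherent() argument through that pointer, which is what lets the
continuation lines fit and align. Below is a minimal stand-alone sketch of
that before/after idiom, illustrative only: plain C stand-ins (unsigned
char / unsigned long) replace the kernel's u8 and dma_addr_t, and the names
aead_ctx_example and reset_hmac_state are invented for the example rather
than taken from the driver.

/* Sketch only -- not part of the patch. Field names mirror cc_hmac_s /
 * cc_xcbc_s introduced above; kernel types replaced so this compiles alone. */
#include <stddef.h>

struct cc_hmac_s {
	unsigned char *padded_authkey;
	unsigned char *ipad_opad;
	unsigned long padded_authkey_dma_addr;
	unsigned long ipad_opad_dma_addr;
};

struct cc_xcbc_s {
	unsigned char *xcbc_keys;
	unsigned long xcbc_keys_dma_addr;
};

/* Hypothetical stand-in for the relevant part of struct ssi_aead_ctx. */
struct aead_ctx_example {
	union {
		struct cc_hmac_s hmac;
		struct cc_xcbc_s xcbc;
	} auth_state;
};

/*
 * Before the patch every access spelled out the full path, e.g.
 *	ctx->auth_state.hmac.ipad_opad_dma_addr = 0;
 * which leaves no room for proper argument alignment once such an
 * expression appears inside a dma_free_coherent() call. Taking the
 * pointer once keeps each statement short:
 */
static void reset_hmac_state(struct aead_ctx_example *ctx)
{
	struct cc_hmac_s *hmac = &ctx->auth_state.hmac;

	hmac->ipad_opad = NULL;
	hmac->ipad_opad_dma_addr = 0;
	hmac->padded_authkey = NULL;
	hmac->padded_authkey_dma_addr = 0;
}

int main(void)
{
	struct aead_ctx_example ctx = { 0 };

	reset_hmac_state(&ctx);
	return 0;
}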