From patchwork Wed May 23 09:31:10 2018
X-Patchwork-Submitter: "(Exiting) Baolin Wang"
X-Patchwork-Id: 136638
From: Baolin Wang
To: dan.j.williams@intel.com, vkoul@kernel.org
Cc: eric.long@spreadtrum.com, broonie@kernel.org, baolin.wang@linaro.org,
 dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH V4 1/2] dmaengine: sprd: Optimize the sprd_dma_prep_dma_memcpy()
Date: Wed, 23 May 2018 17:31:10 +0800
Message-Id: <08819489e52add194fecf2b4b234fff9deecdb4c.1527065569.git.baolin.wang@linaro.org>

From: Eric Long

This is a preparation patch: the device_prep_dma_memcpy() interface can be
implemented with the default DMA configuration instead of calling
sprd_dma_config(). A new sprd_dma_config() will be introduced together with
the device_prep_slave_sg() interface in the following patch, so remove the
obsolete sprd_dma_config() first.

Signed-off-by: Eric Long
Signed-off-by: Baolin Wang
---
Changes since v3:
 - No updates.

Changes since v2:
 - Change logic to make code more readable.
Changes since v1:
 - No updates.
---
 drivers/dma/sprd-dma.c | 167 +++++++++++-------------------------------------
 1 file changed, 39 insertions(+), 128 deletions(-)

--
1.7.9.5

diff --git a/drivers/dma/sprd-dma.c b/drivers/dma/sprd-dma.c
index e715d07..924ada4 100644
--- a/drivers/dma/sprd-dma.c
+++ b/drivers/dma/sprd-dma.c
@@ -552,147 +552,58 @@ static void sprd_dma_issue_pending(struct dma_chan *chan)
 	spin_unlock_irqrestore(&schan->vc.lock, flags);
 }
 
-static int sprd_dma_config(struct dma_chan *chan, struct sprd_dma_desc *sdesc,
-			   dma_addr_t dest, dma_addr_t src, size_t len)
-{
-	struct sprd_dma_dev *sdev = to_sprd_dma_dev(chan);
-	struct sprd_dma_chn_hw *hw = &sdesc->chn_hw;
-	u32 datawidth, src_step, des_step, fragment_len;
-	u32 block_len, req_mode, irq_mode, transcation_len;
-	u32 fix_mode = 0, fix_en = 0;
-
-	if (IS_ALIGNED(len, 4)) {
-		datawidth = SPRD_DMA_DATAWIDTH_4_BYTES;
-		src_step = SPRD_DMA_WORD_STEP;
-		des_step = SPRD_DMA_WORD_STEP;
-	} else if (IS_ALIGNED(len, 2)) {
-		datawidth = SPRD_DMA_DATAWIDTH_2_BYTES;
-		src_step = SPRD_DMA_SHORT_STEP;
-		des_step = SPRD_DMA_SHORT_STEP;
-	} else {
-		datawidth = SPRD_DMA_DATAWIDTH_1_BYTE;
-		src_step = SPRD_DMA_BYTE_STEP;
-		des_step = SPRD_DMA_BYTE_STEP;
-	}
-
-	fragment_len = SPRD_DMA_MEMCPY_MIN_SIZE;
-	if (len <= SPRD_DMA_BLK_LEN_MASK) {
-		block_len = len;
-		transcation_len = 0;
-		req_mode = SPRD_DMA_BLK_REQ;
-		irq_mode = SPRD_DMA_BLK_INT;
-	} else {
-		block_len = SPRD_DMA_MEMCPY_MIN_SIZE;
-		transcation_len = len;
-		req_mode = SPRD_DMA_TRANS_REQ;
-		irq_mode = SPRD_DMA_TRANS_INT;
-	}
-
-	hw->cfg = SPRD_DMA_DONOT_WAIT_BDONE << SPRD_DMA_WAIT_BDONE_OFFSET;
-	hw->wrap_ptr = (u32)((src >> SPRD_DMA_HIGH_ADDR_OFFSET) &
-			     SPRD_DMA_HIGH_ADDR_MASK);
-	hw->wrap_to = (u32)((dest >> SPRD_DMA_HIGH_ADDR_OFFSET) &
-			    SPRD_DMA_HIGH_ADDR_MASK);
-
-	hw->src_addr = (u32)(src & SPRD_DMA_LOW_ADDR_MASK);
-	hw->des_addr = (u32)(dest & SPRD_DMA_LOW_ADDR_MASK);
-
-	if ((src_step != 0 && des_step != 0) || (src_step | des_step) == 0) {
-		fix_en = 0;
-	} else {
-		fix_en = 1;
-		if (src_step)
-			fix_mode = 1;
-		else
-			fix_mode = 0;
-	}
-
-	hw->frg_len = datawidth << SPRD_DMA_SRC_DATAWIDTH_OFFSET |
-		datawidth << SPRD_DMA_DES_DATAWIDTH_OFFSET |
-		req_mode << SPRD_DMA_REQ_MODE_OFFSET |
-		fix_mode << SPRD_DMA_FIX_SEL_OFFSET |
-		fix_en << SPRD_DMA_FIX_EN_OFFSET |
-		(fragment_len & SPRD_DMA_FRG_LEN_MASK);
-	hw->blk_len = block_len & SPRD_DMA_BLK_LEN_MASK;
-
-	hw->intc = SPRD_DMA_CFG_ERR_INT_EN;
-
-	switch (irq_mode) {
-	case SPRD_DMA_NO_INT:
-		break;
-
-	case SPRD_DMA_FRAG_INT:
-		hw->intc |= SPRD_DMA_FRAG_INT_EN;
-		break;
-
-	case SPRD_DMA_BLK_INT:
-		hw->intc |= SPRD_DMA_BLK_INT_EN;
-		break;
-
-	case SPRD_DMA_BLK_FRAG_INT:
-		hw->intc |= SPRD_DMA_BLK_INT_EN | SPRD_DMA_FRAG_INT_EN;
-		break;
-
-	case SPRD_DMA_TRANS_INT:
-		hw->intc |= SPRD_DMA_TRANS_INT_EN;
-		break;
-
-	case SPRD_DMA_TRANS_FRAG_INT:
-		hw->intc |= SPRD_DMA_TRANS_INT_EN | SPRD_DMA_FRAG_INT_EN;
-		break;
-
-	case SPRD_DMA_TRANS_BLK_INT:
-		hw->intc |= SPRD_DMA_TRANS_INT_EN | SPRD_DMA_BLK_INT_EN;
-		break;
-
-	case SPRD_DMA_LIST_INT:
-		hw->intc |= SPRD_DMA_LIST_INT_EN;
-		break;
-
-	case SPRD_DMA_CFGERR_INT:
-		hw->intc |= SPRD_DMA_CFG_ERR_INT_EN;
-		break;
-
-	default:
-		dev_err(sdev->dma_dev.dev, "invalid irq mode\n");
-		return -EINVAL;
-	}
-
-	if (transcation_len == 0)
-		hw->trsc_len = block_len & SPRD_DMA_TRSC_LEN_MASK;
-	else
-		hw->trsc_len = transcation_len & SPRD_DMA_TRSC_LEN_MASK;
-
-	hw->trsf_step = (des_step & SPRD_DMA_TRSF_STEP_MASK) <<
-			SPRD_DMA_DEST_TRSF_STEP_OFFSET |
-			(src_step & SPRD_DMA_TRSF_STEP_MASK) <<
-			SPRD_DMA_SRC_TRSF_STEP_OFFSET;
-
-	hw->frg_step = 0;
-	hw->src_blk_step = 0;
-	hw->des_blk_step = 0;
-	hw->src_blk_step = 0;
-	return 0;
-}
-
 static struct dma_async_tx_descriptor *
 sprd_dma_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
 			 size_t len, unsigned long flags)
 {
 	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
 	struct sprd_dma_desc *sdesc;
-	int ret;
+	struct sprd_dma_chn_hw *hw;
+	enum sprd_dma_datawidth datawidth;
+	u32 step, temp;
 
 	sdesc = kzalloc(sizeof(*sdesc), GFP_NOWAIT);
 	if (!sdesc)
 		return NULL;
 
-	ret = sprd_dma_config(chan, sdesc, dest, src, len);
-	if (ret) {
-		kfree(sdesc);
-		return NULL;
+	hw = &sdesc->chn_hw;
+
+	hw->cfg = SPRD_DMA_DONOT_WAIT_BDONE << SPRD_DMA_WAIT_BDONE_OFFSET;
+	hw->intc = SPRD_DMA_TRANS_INT | SPRD_DMA_CFG_ERR_INT_EN;
+	hw->src_addr = src & SPRD_DMA_LOW_ADDR_MASK;
+	hw->des_addr = dest & SPRD_DMA_LOW_ADDR_MASK;
+	hw->wrap_ptr = (src >> SPRD_DMA_HIGH_ADDR_OFFSET) &
+		SPRD_DMA_HIGH_ADDR_MASK;
+	hw->wrap_to = (dest >> SPRD_DMA_HIGH_ADDR_OFFSET) &
+		SPRD_DMA_HIGH_ADDR_MASK;
+
+	if (IS_ALIGNED(len, 8)) {
+		datawidth = SPRD_DMA_DATAWIDTH_8_BYTES;
+		step = SPRD_DMA_DWORD_STEP;
+	} else if (IS_ALIGNED(len, 4)) {
+		datawidth = SPRD_DMA_DATAWIDTH_4_BYTES;
+		step = SPRD_DMA_WORD_STEP;
+	} else if (IS_ALIGNED(len, 2)) {
+		datawidth = SPRD_DMA_DATAWIDTH_2_BYTES;
+		step = SPRD_DMA_SHORT_STEP;
+	} else {
+		datawidth = SPRD_DMA_DATAWIDTH_1_BYTE;
+		step = SPRD_DMA_BYTE_STEP;
 	}
 
+	temp = datawidth << SPRD_DMA_SRC_DATAWIDTH_OFFSET;
+	temp |= datawidth << SPRD_DMA_DES_DATAWIDTH_OFFSET;
+	temp |= SPRD_DMA_TRANS_REQ << SPRD_DMA_REQ_MODE_OFFSET;
+	temp |= len & SPRD_DMA_FRG_LEN_MASK;
+	hw->frg_len = temp;
+
+	hw->blk_len = len & SPRD_DMA_BLK_LEN_MASK;
+	hw->trsc_len = len & SPRD_DMA_TRSC_LEN_MASK;
+
+	temp = (step & SPRD_DMA_TRSF_STEP_MASK) << SPRD_DMA_DEST_TRSF_STEP_OFFSET;
+	temp |= (step & SPRD_DMA_TRSF_STEP_MASK) << SPRD_DMA_SRC_TRSF_STEP_OFFSET;
+	hw->trsf_step = temp;
+
 	return vchan_tx_prep(&schan->vc, &sdesc->vd, flags);
 }
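
For context, the following is a minimal client-side sketch (not part of the
patch) of how a memcpy could be driven through this driver after the change,
using only the generic dmaengine client API. The helper name example_memcpy()
and the polling-style wait are illustrative assumptions.

#include <linux/dmaengine.h>

static int example_memcpy(dma_addr_t dst, dma_addr_t src, size_t len)
{
	dma_cap_mask_t mask;
	struct dma_chan *chan;
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);

	/* Grab any channel advertising memcpy capability. */
	chan = dma_request_channel(mask, NULL, NULL);
	if (!chan)
		return -ENODEV;

	/*
	 * No driver-private configuration call is needed any more: after this
	 * patch the driver derives data width and step from the alignment of
	 * 'len' when preparing the descriptor.
	 */
	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len, DMA_PREP_INTERRUPT);
	if (!tx) {
		dma_release_channel(chan);
		return -ENOMEM;
	}

	cookie = dmaengine_submit(tx);
	dma_async_issue_pending(chan);

	/* Wait for completion (polling variant, for brevity only). */
	dma_sync_wait(chan, cookie);
	dma_release_channel(chan);
	return 0;
}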
From patchwork Wed May 23 09:31:11 2018
X-Patchwork-Submitter: "(Exiting) Baolin Wang"
X-Patchwork-Id: 136637
From: Baolin Wang
To: dan.j.williams@intel.com, vkoul@kernel.org
Cc: eric.long@spreadtrum.com, broonie@kernel.org, baolin.wang@linaro.org,
 dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH V4 2/2] dmaengine: sprd: Add Spreadtrum DMA configuration
Date: Wed, 23 May 2018 17:31:11 +0800
In-Reply-To: <08819489e52add194fecf2b4b234fff9deecdb4c.1527065569.git.baolin.wang@linaro.org>
References: <08819489e52add194fecf2b4b234fff9deecdb4c.1527065569.git.baolin.wang@linaro.org>

From: Eric Long

This patch adds the 'device_config' and 'device_prep_slave_sg' interfaces
so that users can configure DMA slave transfers, and saves the slave
configuration (struct dma_slave_config) for each DMA channel.

Signed-off-by: Eric Long
Signed-off-by: Baolin Wang
---
Changes since v3:
 - Remove the 'struct sprd_dma_config'.
 - Optimize the sprd_dma_fill_desc() function.
 - Error out for the default case when checking datawidth.
 - Add some comments to explain what we do.
 - Remove some currently unused configuration.

Changes since v2:
 - Remove src/dst from struct sprd_dma_config.
 - Simplify sprd_dma_get_datawidth()/sprd_dma_get_step().
 - Change some logic to make code more readable.
 - Other optimization.

Changes since v1:
 - Fix the incorrect parameter type of sprd_dma_get_step().
---
 drivers/dma/sprd-dma.c       | 182 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/dma/sprd-dma.h |   4 +
 2 files changed, 186 insertions(+)

--
1.7.9.5

diff --git a/drivers/dma/sprd-dma.c b/drivers/dma/sprd-dma.c
index 924ada4..c3161c1 100644
--- a/drivers/dma/sprd-dma.c
+++ b/drivers/dma/sprd-dma.c
@@ -164,6 +164,7 @@ struct sprd_dma_desc {
 struct sprd_dma_chn {
 	struct virt_dma_chan	vc;
 	void __iomem		*chn_base;
+	struct dma_slave_config	slave_cfg;
 	u32			chn_num;
 	u32			dev_id;
 	struct sprd_dma_desc	*cur_desc;
@@ -552,6 +553,129 @@ static void sprd_dma_issue_pending(struct dma_chan *chan)
 	spin_unlock_irqrestore(&schan->vc.lock, flags);
 }
 
+static int sprd_dma_get_datawidth(enum dma_slave_buswidth buswidth)
+{
+	switch (buswidth) {
+	case DMA_SLAVE_BUSWIDTH_1_BYTE:
+	case DMA_SLAVE_BUSWIDTH_2_BYTES:
+	case DMA_SLAVE_BUSWIDTH_4_BYTES:
+	case DMA_SLAVE_BUSWIDTH_8_BYTES:
+		return ffs(buswidth) - 1;
+
+	default:
+		return -EINVAL;
+	}
+}
+
+static int sprd_dma_get_step(enum dma_slave_buswidth buswidth)
+{
+	switch (buswidth) {
+	case DMA_SLAVE_BUSWIDTH_1_BYTE:
+	case DMA_SLAVE_BUSWIDTH_2_BYTES:
+	case DMA_SLAVE_BUSWIDTH_4_BYTES:
+	case DMA_SLAVE_BUSWIDTH_8_BYTES:
+		return buswidth;
+
+	default:
+		return -EINVAL;
+	}
+}
+
+static int sprd_dma_fill_desc(struct dma_chan *chan,
+			      struct sprd_dma_desc *sdesc,
+			      dma_addr_t src, dma_addr_t dst, u32 len,
+			      enum dma_transfer_direction dir,
+			      unsigned long flags,
+			      struct dma_slave_config *slave_cfg)
+{
+	struct sprd_dma_dev *sdev = to_sprd_dma_dev(chan);
+	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
+	struct sprd_dma_chn_hw *hw = &sdesc->chn_hw;
+	u32 req_mode = (flags >> SPRD_DMA_REQ_SHIFT) & SPRD_DMA_REQ_MODE_MASK;
+	u32 int_mode = flags & SPRD_DMA_INT_MASK;
+	int src_datawidth, dst_datawidth, src_step, dst_step;
+	u32 temp, fix_mode = 0, fix_en = 0;
+
+	if (dir == DMA_MEM_TO_DEV) {
+		src_step = sprd_dma_get_step(slave_cfg->src_addr_width);
+		if (src_step < 0) {
+			dev_err(sdev->dma_dev.dev, "invalid source step\n");
+			return src_step;
+		}
+		dst_step = SPRD_DMA_NONE_STEP;
+	} else {
+		dst_step = sprd_dma_get_step(slave_cfg->dst_addr_width);
+		if (dst_step < 0) {
+			dev_err(sdev->dma_dev.dev, "invalid destination step\n");
+			return dst_step;
+		}
+		src_step = SPRD_DMA_NONE_STEP;
+	}
+
+	src_datawidth = sprd_dma_get_datawidth(slave_cfg->src_addr_width);
+	if (src_datawidth < 0) {
+		dev_err(sdev->dma_dev.dev, "invalid source datawidth\n");
+		return src_datawidth;
+	}
+
+	dst_datawidth = sprd_dma_get_datawidth(slave_cfg->dst_addr_width);
+	if (dst_datawidth < 0) {
+		dev_err(sdev->dma_dev.dev, "invalid destination datawidth\n");
+		return dst_datawidth;
+	}
+
+	if (slave_cfg->slave_id)
+		schan->dev_id = slave_cfg->slave_id;
+
+	hw->cfg = SPRD_DMA_DONOT_WAIT_BDONE << SPRD_DMA_WAIT_BDONE_OFFSET;
+
+	/*
+	 * wrap_ptr and wrap_to will save the high 4 bits source address and
+	 * destination address.
+	 */
+	hw->wrap_ptr = (src >> SPRD_DMA_HIGH_ADDR_OFFSET) & SPRD_DMA_HIGH_ADDR_MASK;
+	hw->wrap_to = (dst >> SPRD_DMA_HIGH_ADDR_OFFSET) & SPRD_DMA_HIGH_ADDR_MASK;
+	hw->src_addr = src & SPRD_DMA_LOW_ADDR_MASK;
+	hw->des_addr = dst & SPRD_DMA_LOW_ADDR_MASK;
+
+	/*
+	 * If the src step and dst step both are 0 or both are not 0, that
+	 * means we can not enable the fix mode. If one is 0 and another one
+	 * is not, we can enable the fix mode.
+	 */
+	if ((src_step != 0 && dst_step != 0) || (src_step | dst_step) == 0) {
+		fix_en = 0;
+	} else {
+		fix_en = 1;
+		if (src_step)
+			fix_mode = 1;
+		else
+			fix_mode = 0;
+	}
+
+	hw->intc = int_mode | SPRD_DMA_CFG_ERR_INT_EN;
+
+	temp = src_datawidth << SPRD_DMA_SRC_DATAWIDTH_OFFSET;
+	temp |= dst_datawidth << SPRD_DMA_DES_DATAWIDTH_OFFSET;
+	temp |= req_mode << SPRD_DMA_REQ_MODE_OFFSET;
+	temp |= fix_mode << SPRD_DMA_FIX_SEL_OFFSET;
+	temp |= fix_en << SPRD_DMA_FIX_EN_OFFSET;
+	temp |= slave_cfg->src_maxburst & SPRD_DMA_FRG_LEN_MASK;
+	hw->frg_len = temp;
+
+	hw->blk_len = len & SPRD_DMA_BLK_LEN_MASK;
+	hw->trsc_len = len & SPRD_DMA_TRSC_LEN_MASK;
+
+	temp = (dst_step & SPRD_DMA_TRSF_STEP_MASK) << SPRD_DMA_DEST_TRSF_STEP_OFFSET;
+	temp |= (src_step & SPRD_DMA_TRSF_STEP_MASK) << SPRD_DMA_SRC_TRSF_STEP_OFFSET;
+	hw->trsf_step = temp;
+
+	hw->frg_step = 0;
+	hw->src_blk_step = 0;
+	hw->des_blk_step = 0;
+	return 0;
+}
+
 static struct dma_async_tx_descriptor *
 sprd_dma_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
 			 size_t len, unsigned long flags)
@@ -607,6 +731,62 @@ static void sprd_dma_issue_pending(struct dma_chan *chan)
 	return vchan_tx_prep(&schan->vc, &sdesc->vd, flags);
 }
 
+static struct dma_async_tx_descriptor *
+sprd_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+		       unsigned int sglen, enum dma_transfer_direction dir,
+		       unsigned long flags, void *context)
+{
+	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
+	struct dma_slave_config *slave_cfg = &schan->slave_cfg;
+	dma_addr_t src = 0, dst = 0;
+	struct sprd_dma_desc *sdesc;
+	struct scatterlist *sg;
+	u32 len = 0;
+	int ret, i;
+
+	/* TODO: now we only support one sg for each DMA configuration. */
+	if (!is_slave_direction(dir) || sglen > 1)
+		return NULL;
+
+	sdesc = kzalloc(sizeof(*sdesc), GFP_NOWAIT);
+	if (!sdesc)
+		return NULL;
+
+	for_each_sg(sgl, sg, sglen, i) {
+		len = sg_dma_len(sg);
+
+		if (dir == DMA_MEM_TO_DEV) {
+			src = sg_dma_address(sg);
+			dst = slave_cfg->dst_addr;
+		} else {
+			src = slave_cfg->src_addr;
+			dst = sg_dma_address(sg);
+		}
+	}
+
+	ret = sprd_dma_fill_desc(chan, sdesc, src, dst, len, dir, flags,
+				 slave_cfg);
+	if (ret) {
+		kfree(sdesc);
+		return NULL;
+	}
+
+	return vchan_tx_prep(&schan->vc, &sdesc->vd, flags);
+}
+
+static int sprd_dma_slave_config(struct dma_chan *chan,
+				 struct dma_slave_config *config)
+{
+	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
+	struct dma_slave_config *slave_cfg = &schan->slave_cfg;
+
+	if (!is_slave_direction(config->direction))
+		return -EINVAL;
+
+	memcpy(slave_cfg, config, sizeof(*config));
+	return 0;
+}
+
 static int sprd_dma_pause(struct dma_chan *chan)
 {
 	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
@@ -733,6 +913,8 @@ static int sprd_dma_probe(struct platform_device *pdev)
 	sdev->dma_dev.device_tx_status = sprd_dma_tx_status;
 	sdev->dma_dev.device_issue_pending = sprd_dma_issue_pending;
 	sdev->dma_dev.device_prep_dma_memcpy = sprd_dma_prep_dma_memcpy;
+	sdev->dma_dev.device_prep_slave_sg = sprd_dma_prep_slave_sg;
+	sdev->dma_dev.device_config = sprd_dma_slave_config;
 	sdev->dma_dev.device_pause = sprd_dma_pause;
 	sdev->dma_dev.device_resume = sprd_dma_resume;
 	sdev->dma_dev.device_terminate_all = sprd_dma_terminate_all;
diff --git a/include/linux/dma/sprd-dma.h b/include/linux/dma/sprd-dma.h
index c545162..b0115e3 100644
--- a/include/linux/dma/sprd-dma.h
+++ b/include/linux/dma/sprd-dma.h
@@ -3,6 +3,10 @@
 #ifndef _SPRD_DMA_H_
 #define _SPRD_DMA_H_
 
+#define SPRD_DMA_REQ_SHIFT 16
+#define SPRD_DMA_FLAGS(req_mode, int_type) \
+	((req_mode) << SPRD_DMA_REQ_SHIFT | (int_type))
+
 /*
  * enum sprd_dma_req_mode: define the DMA request mode
  * @SPRD_DMA_FRAG_REQ: fragment request mode
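
For completeness, a minimal client-side sketch (an assumption, not part of
this series) of how a peripheral driver could use the new device_config and
device_prep_slave_sg interfaces together with the exported SPRD_DMA_FLAGS()
macro. The channel, scatterlist, FIFO address, slave_id and burst values are
placeholders; only APIs visible in the series or the generic dmaengine client
API are used.

#include <linux/dmaengine.h>
#include <linux/dma/sprd-dma.h>
#include <linux/scatterlist.h>

static int example_tx_to_device(struct dma_chan *chan, struct scatterlist *sg,
				dma_addr_t dev_fifo_addr, unsigned int slave_id)
{
	struct dma_slave_config cfg = {
		.direction = DMA_MEM_TO_DEV,
		.dst_addr = dev_fifo_addr,
		.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
		.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
		/* src_maxburst is used as the fragment length by this driver. */
		.src_maxburst = 4,
		.slave_id = slave_id,
	};
	struct dma_async_tx_descriptor *tx;
	int ret;

	ret = dmaengine_slave_config(chan, &cfg);
	if (ret)
		return ret;

	/*
	 * Request mode and interrupt type are passed through the 'flags'
	 * argument via SPRD_DMA_FLAGS(): here, a transaction request with a
	 * transaction-done interrupt.
	 */
	tx = dmaengine_prep_slave_sg(chan, sg, 1, DMA_MEM_TO_DEV,
				     SPRD_DMA_FLAGS(SPRD_DMA_TRANS_REQ,
						    SPRD_DMA_TRANS_INT));
	if (!tx)
		return -ENOMEM;

	dmaengine_submit(tx);
	dma_async_issue_pending(chan);
	return 0;
}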