From patchwork Sun Sep 20 11:23:14 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 258196
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 03/11] spi: dw-dma: Configure the DMA channels in dma_setup
Date: Sun, 20 Sep 2020 14:23:14 +0300
Message-ID: <20200920112322.24585-4-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
X-Mailing-List: linux-spi@vger.kernel.org

This is mainly a preparation patch for the upcoming one-by-one DMA SG entry
transmission support. Logically, though, the Tx and Rx DMA channel setup
belongs in the dma_setup() callback anyway. So move the DMA slave channel
src/dst burst length, address and address width configuration out of the
Tx/Rx channel preparation methods into dedicated functions, and make sure
they are called at the DMA setup stage.

Note we now check that the return value of dmaengine_slave_config() doesn't
indicate an error. That was unnecessary as long as the DW DMAC was the only
DMA engine used, since its device_config() callback always returns zero
(though that might change in the future). But since the DW APB SSI driver
now supports any DMA back-end, we must make sure the DMA device
configuration succeeded before proceeding with further setup.
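For reference, a minimal sketch of the configure-then-check pattern this
patch switches to, written against the generic dmaengine slave API. The
fifo_addr, width and burst parameters are placeholders, not the driver's
actual settings, and the helper name is illustrative only:

#include <linux/string.h>
#include <linux/dmaengine.h>

/*
 * Illustration: configure a Tx slave channel once at setup time and fail
 * early if the DMA engine rejects the configuration.
 */
static int example_dma_config_tx(struct dma_chan *chan, dma_addr_t fifo_addr,
				 enum dma_slave_buswidth width, u32 burst)
{
	struct dma_slave_config cfg;

	memset(&cfg, 0, sizeof(cfg));
	cfg.direction = DMA_MEM_TO_DEV;
	cfg.dst_addr = fifo_addr;
	cfg.dst_addr_width = width;
	cfg.dst_maxburst = burst;
	cfg.device_fc = false;

	/*
	 * The DW DMAC device_config() currently always returns zero, but a
	 * generic back-end may legitimately fail here, so the status has to
	 * be propagated to the caller instead of being dropped.
	 */
	return dmaengine_slave_config(chan, &cfg);
}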
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 42 +++++++++++++++++++++++++++++-----------
 1 file changed, 31 insertions(+), 11 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 1b013ac94a3f..da17897b8acb 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -256,11 +256,9 @@ static void dw_spi_dma_tx_done(void *arg)
 	complete(&dws->dma_completion);
 }

-static struct dma_async_tx_descriptor *
-dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_config_tx(struct dw_spi *dws)
 {
 	struct dma_slave_config txconf;
-	struct dma_async_tx_descriptor *txdesc;

 	memset(&txconf, 0, sizeof(txconf));
 	txconf.direction = DMA_MEM_TO_DEV;
@@ -270,7 +268,13 @@ dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	txconf.dst_addr_width = dw_spi_dma_convert_width(dws->n_bytes);
 	txconf.device_fc = false;

-	dmaengine_slave_config(dws->txchan, &txconf);
+	return dmaengine_slave_config(dws->txchan, &txconf);
+}
+
+static struct dma_async_tx_descriptor *
+dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+{
+	struct dma_async_tx_descriptor *txdesc;

 	txdesc = dmaengine_prep_slave_sg(dws->txchan,
 				xfer->tx_sg.sgl,
@@ -345,14 +349,9 @@ static void dw_spi_dma_rx_done(void *arg)
 	complete(&dws->dma_completion);
 }

-static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
-		struct spi_transfer *xfer)
+static int dw_spi_dma_config_rx(struct dw_spi *dws)
 {
 	struct dma_slave_config rxconf;
-	struct dma_async_tx_descriptor *rxdesc;
-
-	if (!xfer->rx_buf)
-		return NULL;

 	memset(&rxconf, 0, sizeof(rxconf));
 	rxconf.direction = DMA_DEV_TO_MEM;
@@ -362,7 +361,16 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 	rxconf.src_addr_width = dw_spi_dma_convert_width(dws->n_bytes);
 	rxconf.device_fc = false;

-	dmaengine_slave_config(dws->rxchan, &rxconf);
+	return dmaengine_slave_config(dws->rxchan, &rxconf);
+}
+
+static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
+		struct spi_transfer *xfer)
+{
+	struct dma_async_tx_descriptor *rxdesc;
+
+	if (!xfer->rx_buf)
+		return NULL;

 	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
 				xfer->rx_sg.sgl,
@@ -381,10 +389,22 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	u16 imr, dma_ctrl;
+	int ret;

 	if (!xfer->tx_buf)
 		return -EINVAL;

+	/* Setup DMA channels */
+	ret = dw_spi_dma_config_tx(dws);
+	if (ret)
+		return ret;
+
+	if (xfer->rx_buf) {
+		ret = dw_spi_dma_config_rx(dws);
+		if (ret)
+			return ret;
+	}
+
 	/* Set the DMA handshaking interface */
 	dma_ctrl = SPI_DMA_TDMAE;
 	if (xfer->rx_buf)
From patchwork Sun Sep 20 11:23:15 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 258197
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 04/11] spi: dw-dma: Check rx_buf availability in the xfer method
Date: Sun, 20 Sep 2020 14:23:15 +0300
Message-ID: <20200920112322.24585-5-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
X-Mailing-List: linux-spi@vger.kernel.org

Checking rx_buf for NULL and returning NULL from the Rx-channel preparation
method doesn't let us distinguish that situation from errors that occur
during the Rx SG-list preparation. So it's better to make sure that rx_buf
is not NULL, i.e. that full-duplex communication is requested, before
calling the Rx preparation method.
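A minimal sketch of the caller-side convention this patch establishes,
using the standard dmaengine API; the helper name is hypothetical and the
error handling is reduced to the essentials:

#include <linux/errno.h>
#include <linux/dmaengine.h>
#include <linux/spi/spi.h>

/*
 * Hypothetical caller shape: the caller decides whether an Rx descriptor
 * is needed at all, so a NULL descriptor from the preparation step can now
 * be treated unambiguously as a failure.
 */
static int example_submit_rx(struct dma_chan *rxchan, struct spi_transfer *xfer)
{
	struct dma_async_tx_descriptor *rxdesc;

	/* Tx-only transfer: skip Rx entirely instead of returning NULL later */
	if (!xfer->rx_buf)
		return 0;

	rxdesc = dmaengine_prep_slave_sg(rxchan, xfer->rx_sg.sgl,
					 xfer->rx_sg.nents, DMA_DEV_TO_MEM,
					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!rxdesc)
		return -EINVAL;	/* NULL now always means preparation failed */

	dmaengine_submit(rxdesc);
	dma_async_issue_pending(rxchan);

	return 0;
}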
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index da17897b8acb..d2a67dee1a66 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -369,9 +369,6 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 {
 	struct dma_async_tx_descriptor *rxdesc;

-	if (!xfer->rx_buf)
-		return NULL;
-
 	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
 				xfer->rx_sg.sgl,
 				xfer->rx_sg.nents,
@@ -435,10 +432,12 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 		return -EINVAL;

 	/* Prepare the RX dma transfer */
-	rxdesc = dw_spi_dma_prepare_rx(dws, xfer);
+	if (xfer->rx_buf) {
+		rxdesc = dw_spi_dma_prepare_rx(dws, xfer);
+		if (!rxdesc)
+			return -EINVAL;

-	/* rx must be started before tx due to spi instinct */
-	if (rxdesc) {
+		/* rx must be started before tx due to spi instinct */
 		set_bit(RX_BUSY, &dws->dma_chan_busy);
 		dmaengine_submit(rxdesc);
 		dma_async_issue_pending(dws->rxchan);
@@ -458,7 +457,7 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 		return ret;
 	}

-	if (rxdesc && dws->master->cur_msg->status == -EINPROGRESS)
+	if (xfer->rx_buf && dws->master->cur_msg->status == -EINPROGRESS)
 		ret = dw_spi_dma_wait_rx_done(dws);

 	return ret;
From patchwork Sun Sep 20 11:23:17 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 258194
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 06/11] spi: dw-dma: Check DMA Tx-desc submission status
Date: Sun, 20 Sep 2020 14:23:17 +0300
Message-ID: <20200920112322.24585-7-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
X-Mailing-List: linux-spi@vger.kernel.org

Add a check of the dmaengine_submit() return value for errors. It was
unnecessary while the driver was only expected to be used together with the
DW DMAC. But since the driver can now be used with any DMA engine, it is
worth tracking DMA submission errors so we don't miss them and end up with
unpredictable driver behaviour.

Signed-off-by: Serge Semin
---
Changelog v2:
- Replace negative conditional statements with positive ones.
- Terminate the prepared descriptors on submission errors.
---
 drivers/spi/spi-dw-dma.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 769d10ca74b4..aa3900809126 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -275,6 +275,8 @@ static struct dma_async_tx_descriptor *
 dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *txdesc;
+	dma_cookie_t cookie;
+	int ret;

 	txdesc = dmaengine_prep_slave_sg(dws->txchan,
 				xfer->tx_sg.sgl,
@@ -287,7 +289,13 @@ dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	txdesc->callback = dw_spi_dma_tx_done;
 	txdesc->callback_param = dws;

-	dmaengine_submit(txdesc);
+	cookie = dmaengine_submit(txdesc);
+	ret = dma_submit_error(cookie);
+	if (ret) {
+		dmaengine_terminate_sync(dws->txchan);
+		return NULL;
+	}
+
 	set_bit(TX_BUSY, &dws->dma_chan_busy);

 	return txdesc;
@@ -371,6 +379,8 @@ static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
 	struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *rxdesc;
+	dma_cookie_t cookie;
+	int ret;

 	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
 				xfer->rx_sg.sgl,
@@ -383,7 +393,13 @@ static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
 	rxdesc->callback = dw_spi_dma_rx_done;
 	rxdesc->callback_param = dws;

-	dmaengine_submit(rxdesc);
+	cookie = dmaengine_submit(rxdesc);
+	ret = dma_submit_error(cookie);
+	if (ret) {
+		dmaengine_terminate_sync(dws->rxchan);
+		return NULL;
+	}
+
 	set_bit(RX_BUSY, &dws->dma_chan_busy);

 	return rxdesc;
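A minimal stand-alone sketch of the submit-and-check pattern the patch
above adds, assuming a generic dmaengine slave channel; descriptor
preparation is elided and the helper name is illustrative:

#include <linux/dmaengine.h>

/*
 * Sketch: submit a prepared descriptor and check the returned cookie.
 * On a submission error the channel is cleaned up with
 * dmaengine_terminate_sync() so the prepared descriptor is not leaked.
 */
static int example_submit_desc(struct dma_chan *chan,
			       struct dma_async_tx_descriptor *desc)
{
	dma_cookie_t cookie;
	int ret;

	cookie = dmaengine_submit(desc);
	ret = dma_submit_error(cookie);	/* negative errno or 0 */
	if (ret) {
		dmaengine_terminate_sync(chan);
		return ret;
	}

	dma_async_issue_pending(chan);

	return 0;
}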
From patchwork Sun Sep 20 11:23:18 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 258198
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 07/11] spi: dw-dma: Remove DMA Tx-desc passing around
Date: Sun, 20 Sep 2020 14:23:18 +0300
Message-ID: <20200920112322.24585-8-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
X-Mailing-List: linux-spi@vger.kernel.org

There is no point in passing the Rx and Tx transfer DMA descriptors around,
since they are only used in the Tx/Rx submit methods. Instead, just return
the submission status from those methods. This makes the code less complex.

Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index aa3900809126..9f70818acce6 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -271,8 +271,7 @@ static int dw_spi_dma_config_tx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->txchan, &txconf);
 }

-static struct dma_async_tx_descriptor *
-dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *txdesc;
 	dma_cookie_t cookie;
@@ -284,7 +283,7 @@ dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 				DMA_MEM_TO_DEV,
 				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!txdesc)
-		return NULL;
+		return -ENOMEM;

 	txdesc->callback = dw_spi_dma_tx_done;
 	txdesc->callback_param = dws;
@@ -293,12 +292,12 @@ dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	ret = dma_submit_error(cookie);
 	if (ret) {
 		dmaengine_terminate_sync(dws->txchan);
-		return NULL;
+		return ret;
 	}

 	set_bit(TX_BUSY, &dws->dma_chan_busy);

-	return txdesc;
+	return 0;
 }

 static inline bool dw_spi_dma_rx_busy(struct dw_spi *dws)
@@ -375,8 +374,7 @@ static int dw_spi_dma_config_rx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->rxchan, &rxconf);
 }

-static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
-	struct spi_transfer *xfer)
+static int dw_spi_dma_submit_rx(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *rxdesc;
 	dma_cookie_t cookie;
@@ -388,7 +386,7 @@ static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
 				DMA_DEV_TO_MEM,
 				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!rxdesc)
-		return NULL;
+		return -ENOMEM;

 	rxdesc->callback = dw_spi_dma_rx_done;
 	rxdesc->callback_param = dws;
@@ -397,12 +395,12 @@ static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
 	ret = dma_submit_error(cookie);
 	if (ret) {
 		dmaengine_terminate_sync(dws->rxchan);
-		return NULL;
+		return ret;
 	}

 	set_bit(RX_BUSY, &dws->dma_chan_busy);

-	return rxdesc;
+	return 0;
 }

 static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
@@ -445,19 +443,18 @@ static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)

 static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 {
-	struct dma_async_tx_descriptor *txdesc, *rxdesc;
 	int ret;

 	/* Submit the DMA Tx transfer */
-	txdesc = dw_spi_dma_submit_tx(dws, xfer);
-	if (!txdesc)
-		return -EINVAL;
+	ret = dw_spi_dma_submit_tx(dws, xfer);
+	if (ret)
+		return ret;

 	/* Submit the DMA Rx transfer if required */
 	if (xfer->rx_buf) {
-		rxdesc = dw_spi_dma_submit_rx(dws, xfer);
-		if (!rxdesc)
-			return -EINVAL;
+		ret = dw_spi_dma_submit_rx(dws, xfer);
+		if (ret)
+			return ret;

 		/* rx must be started before tx due to spi instinct */
 		dma_async_issue_pending(dws->rxchan);
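A hypothetical caller shape after this change, to show why returning an int
simplifies the transfer path: error codes propagate directly and no
sentinel NULL descriptors have to be kept around. The submit helpers below
are stand-ins for the driver's Tx/Rx variants:

#include <linux/types.h>

/* Hypothetical int-returning submit helpers */
int example_submit_tx(void);
int example_submit_rx(void);

/*
 * With int-returning submit helpers the caller no longer translates NULL
 * descriptors into -EINVAL; it simply forwards the errno it received.
 */
static int example_transfer(bool have_rx)
{
	int ret;

	ret = example_submit_tx();
	if (ret)
		return ret;

	if (have_rx) {
		ret = example_submit_rx();
		if (ret)
			return ret;
	}

	return 0;
}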
From patchwork Sun Sep 20 11:23:19 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 258195
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 08/11] spi: dw-dma: Detach DMA transfer into a dedicated method
Date: Sun, 20 Sep 2020 14:23:19 +0300
Message-ID: <20200920112322.24585-9-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
X-Mailing-List: linux-spi@vger.kernel.org

In order to add an alternative method of DMA-based SPI transfer, we first
need to detach the currently available one from the common code. So move
the normal DMA-based SPI transfer execution functionality into a dedicated
method. It will be used when either the DMA engine supports an unlimited
number of SG entries or a Tx-only SPI transfer is requested. For now,
though, it is used for every SPI transfer.
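A hedged sketch of where this refactoring is headed, based only on the
commit message above: a dispatcher that can later choose between the
all-at-once path and a one-by-one SG fallback. Only dw_spi_dma_transfer_all()
exists after this patch; the dw_spi_dma_transfer_one() name and the
unlimited_sg_entries flag are hypothetical:

#include <linux/spi/spi.h>

struct dw_spi;	/* opaque here; defined in the driver */

int dw_spi_dma_transfer_all(struct dw_spi *dws, struct spi_transfer *xfer);
int dw_spi_dma_transfer_one(struct dw_spi *dws, struct spi_transfer *xfer); /* hypothetical */

/*
 * Hypothetical dispatcher: keep the all-at-once path for engines with
 * unlimited SG support or for Tx-only transfers, as the commit message
 * describes, and fall back to one-by-one SG transmission otherwise.
 */
static int example_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer,
				bool unlimited_sg_entries)
{
	if (unlimited_sg_entries || !xfer->rx_buf)
		return dw_spi_dma_transfer_all(dws, xfer);

	return dw_spi_dma_transfer_one(dws, xfer);
}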
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 9f70818acce6..f2baefcae9ae 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -441,7 +441,8 @@ static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 	return 0;
 }

-static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_transfer_all(struct dw_spi *dws,
+				   struct spi_transfer *xfer)
 {
 	int ret;

@@ -462,7 +463,14 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)

 	dma_async_issue_pending(dws->txchan);

-	ret = dw_spi_dma_wait(dws, xfer);
+	return dw_spi_dma_wait(dws, xfer);
+}
+
+static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
+{
+	int ret;
+
+	ret = dw_spi_dma_transfer_all(dws, xfer);
 	if (ret)
 		return ret;

From patchwork Sun Sep 20 11:23:21 2020
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 258193
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Alexey Malahov, Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Andy Shevchenko, Feng Tang, Vinod Koul
Subject: [PATCH v2 10/11] spi: dw-dma: Pass exact data to the DMA submit and wait methods
Date: Sun, 20 Sep 2020 14:23:21 +0300
Message-ID: <20200920112322.24585-11-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
References: <20200920112322.24585-1-Sergey.Semin@baikalelectronics.ru>
X-Mailing-List: linux-spi@vger.kernel.org

In order to use the DMA submission and waiting methods in both the generic
DMA-based SPI transfer and the one-by-one DMA SG entry transmission
functions, we need to modify the dw_spi_dma_wait() and
dw_spi_dma_submit_tx()/dw_spi_dma_submit_rx() prototypes. So instead of
taking the SPI transfer object as the second argument, they must accept
exactly the data they intend to use: the current transfer length and the
SPI bus frequency in the case of dw_spi_dma_wait(), and the SG list
together with its number of entries in the case of the DMA Tx/Rx submission
methods.
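For context, a brief sketch of the timeout estimate dw_spi_dma_wait()
builds from exactly these two values, as seen in the diff below; the
wait-queue handling is omitted and the helper name is illustrative:

#include <linux/kernel.h>
#include <linux/bits.h>
#include <linux/time64.h>
#include <asm/div64.h>

/*
 * Illustration: the transfer length in bytes and the effective bus speed
 * are all the wait routine needs, so there is no reason to pass the whole
 * spi_transfer. Roughly: time = len * 8 bits / speed, doubled, plus a
 * 200 ms margin, clamped to UINT_MAX.
 */
static unsigned int example_dma_timeout_ms(unsigned int len, u32 speed_hz)
{
	unsigned long long ms;

	ms = (unsigned long long)len * MSEC_PER_SEC * BITS_PER_BYTE;
	do_div(ms, speed_hz);
	ms += ms + 200;		/* 2x the nominal time plus 200 ms of slack */

	if (ms > UINT_MAX)
		ms = UINT_MAX;

	return ms;
}

/* Example: a 4096-byte transfer at 10 MHz is ~3 ms nominal, so a ~206 ms bound. */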
Signed-off-by: Serge Semin
---
 drivers/spi/spi-dw-dma.c | 35 +++++++++++++++++------------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 935f073a3523..f333c2e23bf6 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -188,12 +188,12 @@ static enum dma_slave_buswidth dw_spi_dma_convert_width(u8 n_bytes)
 	return DMA_SLAVE_BUSWIDTH_UNDEFINED;
 }

-static int dw_spi_dma_wait(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_wait(struct dw_spi *dws, unsigned int len, u32 speed)
 {
 	unsigned long long ms;

-	ms = xfer->len * MSEC_PER_SEC * BITS_PER_BYTE;
-	do_div(ms, xfer->effective_speed_hz);
+	ms = len * MSEC_PER_SEC * BITS_PER_BYTE;
+	do_div(ms, speed);
 	ms += ms + 200;

 	if (ms > UINT_MAX)
@@ -268,17 +268,16 @@ static int dw_spi_dma_config_tx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->txchan, &txconf);
 }

-static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct scatterlist *sgl,
+				unsigned int nents)
 {
 	struct dma_async_tx_descriptor *txdesc;
 	dma_cookie_t cookie;
 	int ret;

-	txdesc = dmaengine_prep_slave_sg(dws->txchan,
-				xfer->tx_sg.sgl,
-				xfer->tx_sg.nents,
-				DMA_MEM_TO_DEV,
-				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	txdesc = dmaengine_prep_slave_sg(dws->txchan, sgl, nents,
+					 DMA_MEM_TO_DEV,
+					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!txdesc)
 		return -ENOMEM;

@@ -370,17 +369,16 @@ static int dw_spi_dma_config_rx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->rxchan, &rxconf);
 }

-static int dw_spi_dma_submit_rx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_submit_rx(struct dw_spi *dws, struct scatterlist *sgl,
+				unsigned int nents)
 {
 	struct dma_async_tx_descriptor *rxdesc;
 	dma_cookie_t cookie;
 	int ret;

-	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
-				xfer->rx_sg.sgl,
-				xfer->rx_sg.nents,
-				DMA_DEV_TO_MEM,
-				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	rxdesc = dmaengine_prep_slave_sg(dws->rxchan, sgl, nents,
+					 DMA_DEV_TO_MEM,
+					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!rxdesc)
 		return -ENOMEM;

@@ -443,13 +441,14 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 	int ret;

 	/* Submit the DMA Tx transfer */
-	ret = dw_spi_dma_submit_tx(dws, xfer);
+	ret = dw_spi_dma_submit_tx(dws, xfer->tx_sg.sgl, xfer->tx_sg.nents);
 	if (ret)
 		goto err_clear_dmac;

 	/* Submit the DMA Rx transfer if required */
 	if (xfer->rx_buf) {
-		ret = dw_spi_dma_submit_rx(dws, xfer);
+		ret = dw_spi_dma_submit_rx(dws, xfer->rx_sg.sgl,
+					   xfer->rx_sg.nents);
 		if (ret)
 			goto err_clear_dmac;

@@ -459,7 +458,7 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,

 	dma_async_issue_pending(dws->txchan);

-	ret = dw_spi_dma_wait(dws, xfer);
+	ret = dw_spi_dma_wait(dws, xfer->len, xfer->effective_speed_hz);

 err_clear_dmac:
 	dw_writel(dws, DW_SPI_DMACR, 0);