From patchwork Fri Oct 25 18:10:08 2024
X-Patchwork-Submitter: Jyothi Kumar Seerapu
X-Patchwork-Id: 838444
From: Jyothi Kumar Seerapu
To: Vinod Koul, Andi Shyti, Sumit Semwal, Christian König
Subject: [PATCH v2 1/3]
 dmaengine: qcom: gpi: Add GPI Block event interrupt support
Date: Fri, 25 Oct 2024 23:40:08 +0530
Message-ID: <20241025181010.7555-2-quic_jseerapu@quicinc.com>
In-Reply-To: <20241025181010.7555-1-quic_jseerapu@quicinc.com>
References: <20241025181010.7555-1-quic_jseerapu@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

GSI hardware generates an interrupt for each transfer completion. For
multiple messages within a single transfer, this means N interrupts for
N messages, which can introduce significant software interrupt latency.

To mitigate this latency, use the Block Event Interrupt (BEI) mechanism
so that an interrupt is raised only when one is actually needed: a single
multi-message transfer is split into chunks of 8 messages, and only the
last message of each chunk generates an interrupt. This improves overall
transfer time and efficiency.

Signed-off-by: Jyothi Kumar Seerapu
---
v1 -> v2:
- Changed dma_addr type from array of pointers to array.
- To support BEI functionality with the TRE size of 64 defined in the GPI
  driver, updated QCOM_GPI_MAX_NUM_MSGS to 16 and NUM_MSGS_PER_IRQ to 4.
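
A minimal sketch of the intended client-side flow with the new flag and
helper (illustrative only, not part of the patch; submit_one_msg() is a
hypothetical stand-in for the client's prep/submit path, and the real
wiring is in patch 3/3 of this series):

    #include <linux/completion.h>
    #include <linux/dma/qcom-gpi-dma.h>

    static int submit_msgs_with_bei(struct device *dev, struct gpi_i2c_config *cfg,
                                    u32 num_msgs, u32 timeout,
                                    struct completion *done)
    {
            u32 i;
            int ret;

            for (i = 0; i < num_msgs; i++) {
                    /* Suppress the interrupt for all but the last message of a chunk. */
                    if ((i + 1) % NUM_MSGS_PER_IRQ)
                            cfg->flags |= QCOM_GPI_BLOCK_EVENT_IRQ;
                    else
                            cfg->flags &= ~QCOM_GPI_BLOCK_EVENT_IRQ;

                    /* The very last message always raises an interrupt. */
                    if (i == num_msgs - 1)
                            cfg->flags &= ~QCOM_GPI_BLOCK_EVENT_IRQ;

                    ret = submit_one_msg(cfg, i);   /* hypothetical helper */
                    if (ret)
                            return ret;
            }

            /* Wait for the chunked completion interrupts. */
            return gpi_multi_desc_process(dev, &cfg->multi_xfer, num_msgs,
                                          timeout, done);
    }
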
 drivers/dma/qcom/gpi.c           | 49 ++++++++++++++++++++++++++++++++
 include/linux/dma/qcom-gpi-dma.h | 37 ++++++++++++++++++++++++
 2 files changed, 86 insertions(+)

diff --git a/drivers/dma/qcom/gpi.c b/drivers/dma/qcom/gpi.c
index 52a7c8f2498f..a98de3178764 100644
--- a/drivers/dma/qcom/gpi.c
+++ b/drivers/dma/qcom/gpi.c
@@ -1693,6 +1693,9 @@ static int gpi_create_i2c_tre(struct gchan *chan, struct gpi_desc *desc,
 		tre->dword[3] = u32_encode_bits(TRE_TYPE_DMA, TRE_FLAGS_TYPE);
 		tre->dword[3] |= u32_encode_bits(1, TRE_FLAGS_IEOT);
+
+		if (i2c->flags & QCOM_GPI_BLOCK_EVENT_IRQ)
+			tre->dword[3] |= u32_encode_bits(1, TRE_FLAGS_BEI);
 	}
 
 	for (i = 0; i < tre_idx; i++)
@@ -2098,6 +2101,52 @@ static int gpi_find_avail_gpii(struct gpi_dev *gpi_dev, u32 seid)
 	return -EIO;
 }
 
+/**
+ * gpi_multi_desc_process() - Process received transfers from GSI HW
+ * @dev: pointer to the corresponding dev node
+ * @multi_xfer: pointer to the gpi_multi_xfer
+ * @num_xfers: total number of transfers
+ * @transfer_timeout_msecs: transfer timeout value
+ * @transfer_comp: completion object of the transfer
+ *
+ * This function is used to process the received transfers based on the
+ * completion events.
+ *
+ * Return: On success returns 0, otherwise return error code
+ */
+int gpi_multi_desc_process(struct device *dev, struct gpi_multi_xfer *multi_xfer,
+			   u32 num_xfers, u32 transfer_timeout_msecs,
+			   struct completion *transfer_comp)
+{
+	int i;
+	u32 max_irq_cnt, time_left;
+
+	max_irq_cnt = num_xfers / NUM_MSGS_PER_IRQ;
+	if (num_xfers % NUM_MSGS_PER_IRQ)
+		max_irq_cnt++;
+
+	/*
+	 * Wait for the interrupts of the processed transfers, raised once
+	 * per NUM_MSGS_PER_IRQ messages and for the last transfer. If the
+	 * hardware already processed all the transfers, no need to wait.
+	 */
+	for (i = 0; i < max_irq_cnt; i++) {
+		reinit_completion(transfer_comp);
+		if (max_irq_cnt != multi_xfer->irq_cnt) {
+			time_left = wait_for_completion_timeout(transfer_comp,
+								transfer_timeout_msecs);
+			if (!time_left) {
+				dev_err(dev, "%s: Transfer timeout\n", __func__);
+				return -ETIMEDOUT;
+			}
+		}
+		if (num_xfers > multi_xfer->msg_idx_cnt)
+			return 0;
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gpi_multi_desc_process);
+
 /* gpi_of_dma_xlate: open client requested channel */
 static struct dma_chan *gpi_of_dma_xlate(struct of_phandle_args *args,
 					 struct of_dma *of_dma)
diff --git a/include/linux/dma/qcom-gpi-dma.h b/include/linux/dma/qcom-gpi-dma.h
index 6680dd1a43c6..1341ff0db808 100644
--- a/include/linux/dma/qcom-gpi-dma.h
+++ b/include/linux/dma/qcom-gpi-dma.h
@@ -15,6 +15,12 @@ enum spi_transfer_cmd {
 	SPI_DUPLEX,
 };
 
+#define QCOM_GPI_BLOCK_EVENT_IRQ BIT(0)
+
+#define QCOM_GPI_MAX_NUM_MSGS 16
+#define NUM_MSGS_PER_IRQ 8
+#define MIN_NUM_OF_MSGS_MULTI_DESC 4
+
 /**
  * struct gpi_spi_config - spi config for peripheral
  *
@@ -51,6 +57,29 @@ enum i2c_op {
 	I2C_READ,
 };
 
+/**
+ * struct gpi_multi_xfer - Used for multi transfer support
+ *
+ * @msg_idx_cnt: message index for the transfer
+ * @buf_idx: dma buffer index
+ * @unmap_msg_cnt: unmapped transfer index
+ * @freed_msg_cnt: freed transfer index
+ * @irq_cnt: received interrupt count
+ * @irq_msg_cnt: transfer message count for the received irqs
+ * @dma_buf: virtual address of the buffer
+ * @dma_addr: dma address of the buffer
+ */
+struct gpi_multi_xfer {
+	u32 msg_idx_cnt;
+	u32 buf_idx;
+	u32 unmap_msg_cnt;
+	u32 freed_msg_cnt;
+	u32 irq_cnt;
+	u32 irq_msg_cnt;
+	void *dma_buf[QCOM_GPI_MAX_NUM_MSGS];
+	dma_addr_t dma_addr[QCOM_GPI_MAX_NUM_MSGS];
+};
+
 /**
  * struct gpi_i2c_config - i2c config for peripheral
  *
@@ -65,6 +94,8 @@ enum i2c_op {
  * @rx_len: receive length for buffer
  * @op: i2c cmd
  * @muli-msg: is part of multi i2c r-w msgs
+ * @flags: true for block event interrupt support
+ * @multi_xfer: indicates transfer has multi messages
  */
 struct gpi_i2c_config {
 	u8 set_config;
@@ -78,6 +109,12 @@ struct gpi_i2c_config {
 	u32 rx_len;
 	enum i2c_op op;
 	bool multi_msg;
+	u8 flags;
+	struct gpi_multi_xfer multi_xfer;
 };
 
+int gpi_multi_desc_process(struct device *dev, struct gpi_multi_xfer *multi_xfer,
+			   u32 num_xfers, u32 transfer_timeout_msecs,
+			   struct completion *transfer_comp);
+
 #endif /* QCOM_GPI_DMA_H */

From patchwork Fri Oct 25 18:10:10 2024
X-Patchwork-Submitter: Jyothi Kumar Seerapu
X-Patchwork-Id: 838443
From: Jyothi Kumar Seerapu
To: Vinod Koul, Andi Shyti, Sumit Semwal, Christian König
Subject: [PATCH v2 3/3] i2c: i2c-qcom-geni: Add Block event interrupt support
Date: Fri, 25 Oct 2024 23:40:10 +0530
Message-ID: <20241025181010.7555-4-quic_jseerapu@quicinc.com>
In-Reply-To: <20241025181010.7555-1-quic_jseerapu@quicinc.com>
References: <20241025181010.7555-1-quic_jseerapu@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

The I2C driver gets an interrupt upon transfer completion. For multiple
messages in a single transfer, N interrupts are received for N messages,
leading to significant software interrupt latency.

To mitigate this latency, use the Block Event Interrupt (BEI) mechanism
so that an interrupt is raised only when one is needed. A large transfer
is split internally into chunks of 8 messages: no interrupt is expected
for the first 7 message completions, and only the last message of the
chunk triggers an interrupt, indicating that all 8 messages completed.

Dividing multi-message transfers into chunks of 8 in this way improves
overall transfer time. For a series of 200 I2C write messages in a single
transfer at a 100 kHz clock, the transfer time drops from 168 ms to 48 ms.

BEI optimization is implemented for I2C write transfers only, as there is
currently no use case for multiple I2C read messages in a single transfer.

Signed-off-by: Jyothi Kumar Seerapu
---
v1 -> v2:
- Moved gi2c_gpi_xfer->msg_idx_cnt to a separate local variable.
- Updated goto labels for error scenarios in the geni_i2c_gpi function.
- memset tx_multi_xfer to 0.
- Removed passing the current msg index to geni_i2c_gpi.
- Fixed compilation issues reported by the kernel test robot.
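
For scale: with NUM_MSGS_PER_IRQ = 8, the 200-message transfer above raises
25 completion interrupts instead of 200. A small helper mirroring the
max_irq_cnt computation that gpi_multi_desc_process() performs in patch 1/3
shows the arithmetic (illustrative only, not part of the patch):

    /* Mirrors the max_irq_cnt computation in gpi_multi_desc_process(). */
    static u32 bei_irq_count(u32 num_xfers)
    {
            u32 max_irq_cnt = num_xfers / NUM_MSGS_PER_IRQ;

            if (num_xfers % NUM_MSGS_PER_IRQ)
                    max_irq_cnt++;

            return max_irq_cnt;     /* bei_irq_count(200) == 25 */
    }
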
 drivers/i2c/busses/i2c-qcom-geni.c | 203 +++++++++++++++++++++++++----
 1 file changed, 178 insertions(+), 25 deletions(-)

diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index 7a22e1f46e60..04a7d926dadc 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -100,6 +100,10 @@ struct geni_i2c_dev {
 	struct dma_chan *rx_c;
 	bool gpi_mode;
 	bool abort_done;
+	bool is_tx_multi_xfer;
+	u32 num_msgs;
+	u32 tx_irq_cnt;
+	struct gpi_i2c_config *gpi_config;
 };
 
 struct geni_i2c_desc {
@@ -500,6 +504,7 @@ static int geni_i2c_tx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
 static void i2c_gpi_cb_result(void *cb, const struct dmaengine_result *result)
 {
 	struct geni_i2c_dev *gi2c = cb;
+	struct gpi_multi_xfer *tx_multi_xfer;
 
 	if (result->result != DMA_TRANS_NOERROR) {
 		dev_err(gi2c->se.dev, "DMA txn failed:%d\n", result->result);
@@ -508,7 +513,21 @@ static void i2c_gpi_cb_result(void *cb, const struct dmaengine_result *result)
 		dev_dbg(gi2c->se.dev, "DMA xfer has pending: %d\n", result->residue);
 	}
 
-	complete(&gi2c->done);
+	if (gi2c->is_tx_multi_xfer) {
+		tx_multi_xfer = &gi2c->gpi_config->multi_xfer;
+
+		/*
+		 * Send completion for the last message or a multiple of NUM_MSGS_PER_IRQ.
+		 */
+		if ((tx_multi_xfer->irq_msg_cnt == gi2c->num_msgs - 1) ||
+		    (!((tx_multi_xfer->irq_msg_cnt + 1) % NUM_MSGS_PER_IRQ))) {
+			tx_multi_xfer->irq_cnt++;
+			complete(&gi2c->done);
+		}
+		tx_multi_xfer->irq_msg_cnt++;
+	} else {
+		complete(&gi2c->done);
+	}
 }
 
 static void geni_i2c_gpi_unmap(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
@@ -526,7 +545,42 @@ static void geni_i2c_gpi_unmap(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
 	}
 }
 
-static int geni_i2c_gpi(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+/**
+ * gpi_i2c_multi_desc_unmap() - unmaps the buffers post multi message TX transfers
+ * @gi2c: i2c dev handle
+ * @msgs: i2c messages array
+ * @peripheral: pointer to the gpi_i2c_config
+ */
+static void gpi_i2c_multi_desc_unmap(struct geni_i2c_dev *gi2c, struct i2c_msg msgs[],
+				     struct gpi_i2c_config *peripheral)
+{
+	u32 msg_xfer_cnt, wr_idx = 0;
+	struct gpi_multi_xfer *tx_multi_xfer = &peripheral->multi_xfer;
+
+	/*
+	 * In the error case, unmap all submitted messages based on msg_idx_cnt.
+	 * In the non-error case, unmap all the processed messages.
+	 */
+	if (gi2c->err)
+		msg_xfer_cnt = tx_multi_xfer->msg_idx_cnt;
+	else
+		msg_xfer_cnt = tx_multi_xfer->irq_cnt * NUM_MSGS_PER_IRQ;
+
+	/* Unmap the processed DMA buffers based on the received interrupt count */
+	for (; tx_multi_xfer->unmap_msg_cnt < msg_xfer_cnt; tx_multi_xfer->unmap_msg_cnt++) {
+		if (tx_multi_xfer->unmap_msg_cnt == gi2c->num_msgs)
+			break;
+		wr_idx = tx_multi_xfer->unmap_msg_cnt % QCOM_GPI_MAX_NUM_MSGS;
+		geni_i2c_gpi_unmap(gi2c, &msgs[tx_multi_xfer->unmap_msg_cnt],
+				   tx_multi_xfer->dma_buf[wr_idx],
+				   tx_multi_xfer->dma_addr[wr_idx],
+				   NULL, (dma_addr_t)NULL);
+		tx_multi_xfer->freed_msg_cnt++;
+	}
+}
+
+static int geni_i2c_gpi(struct geni_i2c_dev *gi2c, struct i2c_msg msgs[],
 			struct dma_slave_config *config, dma_addr_t *dma_addr_p,
 			void **buf, unsigned int op, struct dma_chan *dma_chan)
 {
@@ -538,26 +592,48 @@ static int geni_i2c_gpi(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
 	enum dma_transfer_direction dma_dirn;
 	struct dma_async_tx_descriptor *desc;
 	int ret;
+	struct gpi_multi_xfer *gi2c_gpi_xfer;
+	dma_cookie_t cookie;
+	u32 msg_idx;
 
 	peripheral = config->peripheral_config;
-
-	dma_buf = i2c_get_dma_safe_msg_buf(msg, 1);
-	if (!dma_buf)
-		return -ENOMEM;
+	gi2c_gpi_xfer = &peripheral->multi_xfer;
+	dma_buf = gi2c_gpi_xfer->dma_buf[gi2c_gpi_xfer->buf_idx];
+	addr = gi2c_gpi_xfer->dma_addr[gi2c_gpi_xfer->buf_idx];
+	msg_idx = gi2c_gpi_xfer->msg_idx_cnt;
+
+	dma_buf = i2c_get_dma_safe_msg_buf(&msgs[msg_idx], 1);
+	if (!dma_buf) {
+		ret = -ENOMEM;
+		goto out;
+	}
 
 	if (op == I2C_WRITE)
 		map_dirn = DMA_TO_DEVICE;
 	else
 		map_dirn = DMA_FROM_DEVICE;
 
-	addr = dma_map_single(gi2c->se.dev->parent, dma_buf, msg->len, map_dirn);
+	addr = dma_map_single(gi2c->se.dev->parent, dma_buf,
+			      msgs[msg_idx].len, map_dirn);
 	if (dma_mapping_error(gi2c->se.dev->parent, addr)) {
-		i2c_put_dma_safe_msg_buf(dma_buf, msg, false);
-		return -ENOMEM;
+		i2c_put_dma_safe_msg_buf(dma_buf, &msgs[msg_idx], false);
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	if (gi2c->is_tx_multi_xfer) {
+		if (((msg_idx + 1) % NUM_MSGS_PER_IRQ))
+			peripheral->flags |= QCOM_GPI_BLOCK_EVENT_IRQ;
+		else
+			peripheral->flags &= ~QCOM_GPI_BLOCK_EVENT_IRQ;
+
+		/* BEI bit to be cleared for the last TRE */
+		if (msg_idx == gi2c->num_msgs - 1)
+			peripheral->flags &= ~QCOM_GPI_BLOCK_EVENT_IRQ;
 	}
 
 	/* set the length as message for rx txn */
-	peripheral->rx_len = msg->len;
+	peripheral->rx_len = msgs[msg_idx].len;
 	peripheral->op = op;
 
 	ret = dmaengine_slave_config(dma_chan, config);
@@ -575,7 +651,8 @@ static int geni_i2c_gpi(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
 	else
 		dma_dirn = DMA_DEV_TO_MEM;
 
-	desc = dmaengine_prep_slave_single(dma_chan, addr, msg->len, dma_dirn, flags);
+	desc = dmaengine_prep_slave_single(dma_chan, addr, msgs[msg_idx].len,
+					   dma_dirn, flags);
 	if (!desc) {
 		dev_err(gi2c->se.dev, "prep_slave_sg failed\n");
 		ret = -EIO;
@@ -585,15 +662,48 @@ static int geni_i2c_gpi(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
 	desc->callback_result = i2c_gpi_cb_result;
 	desc->callback_param = gi2c;
 
-	dmaengine_submit(desc);
-	*buf = dma_buf;
-	*dma_addr_p = addr;
+	if (!((msgs[msg_idx].flags & I2C_M_RD) && op == I2C_WRITE)) {
+		gi2c_gpi_xfer->msg_idx_cnt++;
+		gi2c_gpi_xfer->buf_idx = (msg_idx + 1) % QCOM_GPI_MAX_NUM_MSGS;
+	}
+	cookie = dmaengine_submit(desc);
+	if (dma_submit_error(cookie)) {
+		dev_err(gi2c->se.dev,
+			"%s: dmaengine_submit failed (%d)\n", __func__, cookie);
+		ret = -EINVAL;
+		goto err_config;
+	}
+	if (gi2c->is_tx_multi_xfer) {
+		dma_async_issue_pending(gi2c->tx_c);
+		if ((msg_idx == (gi2c->num_msgs - 1)) ||
+		    (gi2c_gpi_xfer->msg_idx_cnt >=
+		     QCOM_GPI_MAX_NUM_MSGS + gi2c_gpi_xfer->freed_msg_cnt)) {
+			ret = gpi_multi_desc_process(gi2c->se.dev, gi2c_gpi_xfer,
+						     gi2c->num_msgs, XFER_TIMEOUT,
+						     &gi2c->done);
+			if (ret) {
+				dev_err(gi2c->se.dev,
+					"I2C multi write msg transfer timeout: %d\n",
+					ret);
+				gi2c->err = ret;
+				goto err_config;
+			}
+		}
+	} else {
+		/* Non multi descriptor message transfer */
+		*buf = dma_buf;
+		*dma_addr_p = addr;
+	}
 
 	return 0;
 
 err_config:
-	dma_unmap_single(gi2c->se.dev->parent, addr, msg->len, map_dirn);
-	i2c_put_dma_safe_msg_buf(dma_buf, msg, false);
+	dma_unmap_single(gi2c->se.dev->parent, addr,
+			 msgs[msg_idx].len, map_dirn);
+	i2c_put_dma_safe_msg_buf(dma_buf, &msgs[msg_idx], false);
+
+out:
+	gi2c->err = ret;
 	return ret;
 }
 
@@ -605,6 +715,7 @@ static int geni_i2c_gpi_xfer(struct geni_i2c_dev *gi2c, struct i2c_msg msgs[], i
 	unsigned long time_left;
 	dma_addr_t tx_addr, rx_addr;
 	void *tx_buf = NULL, *rx_buf = NULL;
+	struct gpi_multi_xfer *tx_multi_xfer;
 	const struct geni_i2c_clk_fld *itr = gi2c->clk_fld;
 
 	config.peripheral_config = &peripheral;
@@ -618,6 +729,34 @@ static int geni_i2c_gpi_xfer(struct geni_i2c_dev *gi2c, struct i2c_msg msgs[], i
 	peripheral.set_config = 1;
 	peripheral.multi_msg = false;
 
+	gi2c->gpi_config = &peripheral;
+	gi2c->num_msgs = num;
+	gi2c->is_tx_multi_xfer = false;
+	gi2c->tx_irq_cnt = 0;
+
+	tx_multi_xfer = &peripheral.multi_xfer;
+	memset(tx_multi_xfer, 0, sizeof(struct gpi_multi_xfer));
+
+	/*
+	 * If the number of write messages is four or more, configure the
+	 * hardware for multi descriptor transfers with BEI.
+	 */
+	if (num >= MIN_NUM_OF_MSGS_MULTI_DESC) {
+		gi2c->is_tx_multi_xfer = true;
+		for (i = 0; i < num; i++) {
+			if (msgs[i].flags & I2C_M_RD) {
+				/*
+				 * Multi descriptor transfer with BEI
+				 * support is enabled only for write
+				 * transfers. BEI optimization for read
+				 * transfers can be added later.
+				 */
+				gi2c->is_tx_multi_xfer = false;
+				break;
+			}
+		}
+	}
+
 	for (i = 0; i < num; i++) {
 		gi2c->cur = &msgs[i];
 		gi2c->err = 0;
@@ -628,14 +767,16 @@ static int geni_i2c_gpi_xfer(struct geni_i2c_dev *gi2c, struct i2c_msg msgs[], i
 			peripheral.stretch = 1;
 
 		peripheral.addr = msgs[i].addr;
+		if (i > 0 && (!(msgs[i].flags & I2C_M_RD)))
+			peripheral.multi_msg = false;
 
-		ret = geni_i2c_gpi(gi2c, &msgs[i], &config,
+		ret = geni_i2c_gpi(gi2c, msgs, &config,
 				   &tx_addr, &tx_buf, I2C_WRITE, gi2c->tx_c);
 		if (ret)
 			goto err;
 
 		if (msgs[i].flags & I2C_M_RD) {
-			ret = geni_i2c_gpi(gi2c, &msgs[i], &config,
+			ret = geni_i2c_gpi(gi2c, msgs, &config,
 					   &rx_addr, &rx_buf, I2C_READ, gi2c->rx_c);
 			if (ret)
 				goto err;
@@ -643,18 +784,26 @@ static int geni_i2c_gpi_xfer(struct geni_i2c_dev *gi2c, struct i2c_msg msgs[], i
 			dma_async_issue_pending(gi2c->rx_c);
 		}
 
-		dma_async_issue_pending(gi2c->tx_c);
-
-		time_left = wait_for_completion_timeout(&gi2c->done, XFER_TIMEOUT);
-		if (!time_left)
-			gi2c->err = -ETIMEDOUT;
+		if (!gi2c->is_tx_multi_xfer) {
+			dma_async_issue_pending(gi2c->tx_c);
+			time_left = wait_for_completion_timeout(&gi2c->done, XFER_TIMEOUT);
+			if (!time_left) {
+				dev_err(gi2c->se.dev, "%s:I2C timeout\n", __func__);
+				gi2c->err = -ETIMEDOUT;
+			}
+		}
 
 		if (gi2c->err) {
 			ret = gi2c->err;
 			goto err;
 		}
 
-		geni_i2c_gpi_unmap(gi2c, &msgs[i], tx_buf, tx_addr, rx_buf, rx_addr);
+		if (!gi2c->is_tx_multi_xfer) {
+			geni_i2c_gpi_unmap(gi2c, &msgs[i], tx_buf, tx_addr, rx_buf, rx_addr);
+		} else if (gi2c->tx_irq_cnt != tx_multi_xfer->irq_cnt) {
+			gi2c->tx_irq_cnt = tx_multi_xfer->irq_cnt;
+			gpi_i2c_multi_desc_unmap(gi2c, msgs, &peripheral);
+		}
 	}
 
 	return num;
@@ -663,7 +812,11 @@ static int geni_i2c_gpi_xfer(struct geni_i2c_dev *gi2c, struct i2c_msg msgs[], i
 	dev_err(gi2c->se.dev, "GPI transfer failed: %d\n", ret);
 	dmaengine_terminate_sync(gi2c->rx_c);
 	dmaengine_terminate_sync(gi2c->tx_c);
-	geni_i2c_gpi_unmap(gi2c, &msgs[i], tx_buf, tx_addr, rx_buf, rx_addr);
+	if (gi2c->is_tx_multi_xfer)
+		gpi_i2c_multi_desc_unmap(gi2c, msgs, &peripheral);
+	else
+		geni_i2c_gpi_unmap(gi2c, &msgs[i], tx_buf, tx_addr, rx_buf, rx_addr);
+
 	return ret;
 }