From patchwork Wed Sep 30 09:13:55 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313855
From: Peter Ujfalusi <peter.ujfalusi@ti.com>
Subject: [PATCH 01/18] dmaengine: of-dma: Add support for optional router configuration callback
Date: Wed, 30 Sep 2020 12:13:55 +0300
Message-ID: <20200930091412.8020-2-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: devicetree@vger.kernel.org

A channel may need additional configuration for the DMA event router,
which cannot be done from the device_alloc_chan_resources callback
because the router information is not yet available to the driver at
that point.

If the channel needs such configuration when a DMA router is in use,
the driver can implement the new, optional device_router_config
callback.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
---
 drivers/dma/of-dma.c      | 10 ++++++++++
 include/linux/dmaengine.h |  2 ++
 2 files changed, 12 insertions(+)
diff --git a/drivers/dma/of-dma.c b/drivers/dma/of-dma.c
index 8a4f608904b9..ec00b20ae8e4 100644
--- a/drivers/dma/of-dma.c
+++ b/drivers/dma/of-dma.c
@@ -75,8 +75,18 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
 		ofdma->dma_router->route_free(ofdma->dma_router->dev,
 					      route_data);
 	} else {
+		int ret = 0;
+
 		chan->router = ofdma->dma_router;
 		chan->route_data = route_data;
+
+		if (chan->device->device_router_config)
+			ret = chan->device->device_router_config(chan);
+
+		if (ret) {
+			dma_release_channel(chan);
+			chan = ERR_PTR(ret);
+		}
 	}

 	/*
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index dd357a747780..d6197fe875af 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -800,6 +800,7 @@ struct dma_filter {
  *	by tx_status
  * @device_alloc_chan_resources: allocate resources and return the
  *	number of allocated descriptors
+ * @device_router_config: optional callback for DMA router configuration
  * @device_free_chan_resources: release DMA channel's resources
  * @device_prep_dma_memcpy: prepares a memcpy operation
  * @device_prep_dma_xor: prepares a xor operation
@@ -874,6 +875,7 @@ struct dma_device {
 	enum dma_residue_granularity residue_granularity;

 	int (*device_alloc_chan_resources)(struct dma_chan *chan);
+	int (*device_router_config)(struct dma_chan *chan);
 	void (*device_free_chan_resources)(struct dma_chan *chan);
 	struct dma_async_tx_descriptor *(*device_prep_dma_memcpy)(
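For illustration, a minimal, hypothetical sketch of how a DMA controller
driver could hook the new optional callback; this is not code from this
series, and xxx_apply_route() is an invented, driver-specific helper:

static int xxx_router_config(struct dma_chan *chan)
{
	/*
	 * Unlike in device_alloc_chan_resources, chan->router and
	 * chan->route_data are valid here: of_dma_router_xlate() sets
	 * them just before invoking this callback.
	 */
	if (!chan->router)
		return 0;

	/* xxx_apply_route() is invented for this sketch */
	return xxx_apply_route(chan, chan->route_data);
}

The callback is assigned to the dma_device's device_router_config member
in probe, next to device_alloc_chan_resources; a nonzero return makes
of_dma_router_xlate() release the channel and propagate the error.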
From patchwork Wed Sep 30 09:13:56 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313859
From: Peter Ujfalusi <peter.ujfalusi@ti.com>
Subject: [PATCH 02/18] dmaengine: Add support for per channel coherency handling
Date: Wed, 30 Sep 2020 12:13:56 +0300
Message-ID: <20200930091412.8020-3-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: devicetree@vger.kernel.org

If the DMA device supports per-channel coherency configuration (a
channel can be configured to have a coherent or a non-coherent view),
then a single device (the DMA controller's device) cannot be used for
the DMA API on all channels, as the channels may differ in coherency.

Introduce a chan_dma_dev flag for the channel's dma_chan_dev and a new
helper to get the device pointer to be used with the DMA API for the
given channel.
Client drivers should be updated to be able to support per-channel
coherency by:

-	dma_map_single(chan->device->dev, ptr, size, DMA_TO_DEVICE);
+	struct device *dma_dev = dmaengine_get_dma_device(chan);
+
+	dma_map_single(dma_dev, ptr, size, DMA_TO_DEVICE);

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
---
 include/linux/dmaengine.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index d6197fe875af..182a1a2e7793 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -357,11 +357,14 @@ struct dma_chan {
 * @chan: driver channel device
 * @device: sysfs device
 * @dev_id: parent dma_device dev_id
+ * @chan_dma_dev: The channel is using custom/different dma-mapping
+ *		  compared to the parent dma_device
 */
 struct dma_chan_dev {
 	struct dma_chan *chan;
 	struct device device;
 	int dev_id;
+	bool chan_dma_dev;
 };

 /**
@@ -1613,4 +1616,13 @@ dmaengine_get_direction_text(enum dma_transfer_direction dir)
 		return "invalid";
 	}
 }
+
+static inline struct device *dmaengine_get_dma_device(struct dma_chan *chan)
+{
+	if (chan->dev->chan_dma_dev)
+		return &chan->dev->device;
+
+	return chan->device->dev;
+}
+
 #endif /* DMAENGINE_H */
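A minimal client-side sketch of the new helper (illustrative only; the
xxx_map_tx_buf() function is invented for this example):

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

/* Map a TX buffer with the device the dmaengine says to use for this
 * channel: the per-channel device when chan_dma_dev is set, otherwise
 * the controller's device. */
static dma_addr_t xxx_map_tx_buf(struct dma_chan *chan, void *buf,
				 size_t len)
{
	struct device *dma_dev = dmaengine_get_dma_device(chan);

	return dma_map_single(dma_dev, buf, len, DMA_TO_DEVICE);
}

For channels without a per-channel device (chan_dma_dev not set), the
helper returns chan->device->dev, so existing controllers keep working
unchanged.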
From patchwork Wed Sep 30 09:13:57 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313856
From: Peter Ujfalusi <peter.ujfalusi@ti.com>
Subject: [PATCH 03/18] dmaengine: doc: client: Update for dmaengine_get_dma_device() usage
Date: Wed, 30 Sep 2020 12:13:57 +0300
Message-ID: <20200930091412.8020-4-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: devicetree@vger.kernel.org

Client drivers should use dmaengine_get_dma_device(chan) to get the
device pointer to be used with the DMA API for allocations and
mappings.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
---
 Documentation/driver-api/dmaengine/client.rst | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/Documentation/driver-api/dmaengine/client.rst b/Documentation/driver-api/dmaengine/client.rst
index 09a3f66dcd26..bfd057b21a00 100644
--- a/Documentation/driver-api/dmaengine/client.rst
+++ b/Documentation/driver-api/dmaengine/client.rst
@@ -120,7 +120,9 @@ The details of these operations are:

    .. code-block:: c

-      nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len);
+      struct device *dma_dev = dmaengine_get_dma_device(chan);
+
+      nr_sg = dma_map_sg(dma_dev, sgl, sg_len);
       if (nr_sg == 0)
 	/* error */
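The matching unmap must use the same device as the mapping did; a short
sketch (illustrative only, the xxx_unmap_sgl() name is invented):

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

/* Unmap with the device that was used for dma_map_sg(). */
static void xxx_unmap_sgl(struct dma_chan *chan, struct scatterlist *sgl,
			  int sg_len, enum dma_data_direction dir)
{
	struct device *dma_dev = dmaengine_get_dma_device(chan);

	dma_unmap_sg(dma_dev, sgl, sg_len, dir);
}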
From patchwork Wed Sep 30 09:13:58 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313857
From: Peter Ujfalusi <peter.ujfalusi@ti.com>
Subject: [PATCH 04/18] dmaengine: dmatest: Use dmaengine_get_dma_device
Date: Wed, 30 Sep 2020 12:13:58 +0300
Message-ID: <20200930091412.8020-5-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: devicetree@vger.kernel.org

By using dmaengine_get_dma_device() to get the device for DMA API use,
dmatest can support per-channel coherency if the DMA controller
supports it.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
---
 drivers/dma/dmatest.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
index a3a172173e34..f696246f57fd 100644
--- a/drivers/dma/dmatest.c
+++ b/drivers/dma/dmatest.c
@@ -573,6 +573,7 @@ static int dmatest_func(void *data)
 	struct dmatest_params *params;
 	struct dma_chan *chan;
 	struct dma_device *dev;
+	struct device *dma_dev;
 	unsigned int error_count;
 	unsigned int failed_tests = 0;
 	unsigned int total_tests = 0;
@@ -606,6 +607,8 @@ static int dmatest_func(void *data)
 	params = &info->params;
 	chan = thread->chan;
 	dev = chan->device;
+	dma_dev = dmaengine_get_dma_device(chan);
+
 	src = &thread->src;
 	dst = &thread->dst;
 	if (thread->type == DMA_MEMCPY) {
@@ -730,7 +733,7 @@ static int dmatest_func(void *data)
 			filltime = ktime_add(filltime, diff);
 		}

-		um = dmaengine_get_unmap_data(dev->dev, src->cnt + dst->cnt,
+		um = dmaengine_get_unmap_data(dma_dev, src->cnt + dst->cnt,
 					      GFP_KERNEL);
 		if (!um) {
 			failed_tests++;
@@ -745,10 +748,10 @@ static int dmatest_func(void *data)
 			struct page *pg = virt_to_page(buf);
 			unsigned long pg_off = offset_in_page(buf);

-			um->addr[i] = dma_map_page(dev->dev, pg, pg_off,
+			um->addr[i] = dma_map_page(dma_dev, pg, pg_off,
 						   um->len, DMA_TO_DEVICE);
 			srcs[i] = um->addr[i] + src->off;
-			ret = dma_mapping_error(dev->dev, um->addr[i]);
+			ret = dma_mapping_error(dma_dev, um->addr[i]);
 			if (ret) {
 				result("src mapping error", total_tests,
 				       src->off, dst->off, len, ret);
@@ -763,9 +766,9 @@ static int dmatest_func(void *data)
 			struct page *pg = virt_to_page(buf);
 			unsigned long pg_off = offset_in_page(buf);

-			dsts[i] = dma_map_page(dev->dev, pg, pg_off, um->len,
+			dsts[i] = dma_map_page(dma_dev, pg, pg_off, um->len,
 					       DMA_BIDIRECTIONAL);
-			ret = dma_mapping_error(dev->dev, dsts[i]);
+			ret = dma_mapping_error(dma_dev, dsts[i]);
 			if (ret) {
 				result("dst mapping error", total_tests,
 				       src->off, dst->off, len, ret);
From patchwork Wed Sep 30 09:13:59 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313858
From: Peter Ujfalusi <peter.ujfalusi@ti.com>
Subject: [PATCH 05/18] dmaengine: ti: k3-udma: Wait for peer teardown completion if supported
Date: Wed, 30 Sep 2020 12:13:59 +0300
Message-ID: <20200930091412.8020-6-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: devicetree@vger.kernel.org

Set the TDTYPE if it is supported on the platform (j721e); this makes
UDMAP wait for the remote peer to finish its teardown before returning
the teardown-completed message.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Keerthy
Reviewed-by: Grygorii Strashko
---
 drivers/dma/ti/k3-udma.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
index de7bfc02a2de..3f798993a8b1 100644
--- a/drivers/dma/ti/k3-udma.c
+++ b/drivers/dma/ti/k3-udma.c
@@ -86,6 +86,7 @@ struct udma_rchan {

 #define UDMA_FLAG_PDMA_ACC32	BIT(0)
 #define UDMA_FLAG_PDMA_BURST	BIT(1)
+#define UDMA_FLAG_TDTYPE	BIT(2)

 struct udma_match_data {
 	u32 psil_base;
@@ -1591,6 +1592,13 @@ static int udma_tisci_tx_channel_config(struct udma_chan *uc)
 	req_tx.tx_fetch_size = fetch_size >> 2;
 	req_tx.txcq_qnum = tc_ring;
 	req_tx.tx_atype = uc->config.atype;
+	if (uc->config.ep_type == PSIL_EP_PDMA_XY &&
+	    ud->match_data->flags & UDMA_FLAG_TDTYPE) {
+		/* wait for peer to complete the teardown for PDMAs */
+		req_tx.valid_params |=
+				TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_TDTYPE_VALID;
+		req_tx.tx_tdtype = 1;
+	}

 	ret = tisci_ops->tx_ch_cfg(tisci_rm->tisci, &req_tx);
 	if (ret)
@@ -3107,14 +3115,14 @@ static struct udma_match_data am654_mcu_data = {
 static struct udma_match_data j721e_main_data = {
 	.psil_base = 0x1000,
 	.enable_memcpy_support = true,
-	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST,
+	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE,
 	.statictr_z_mask = GENMASK(23, 0),
 };

 static struct udma_match_data j721e_mcu_data = {
 	.psil_base = 0x6000,
 	.enable_memcpy_support = false, /* MEM_TO_MEM is slow via MCU UDMA */
-	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST,
+	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE,
 	.statictr_z_mask = GENMASK(23, 0),
 };
From patchwork Wed Sep 30 09:14:00 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313873
From: Peter Ujfalusi <peter.ujfalusi@ti.com>
Subject: [PATCH 06/18] dmaengine: ti: k3-udma: Add support for second resource range from sysfw
Date: Wed, 30 Sep 2020 12:14:00 +0300
Message-ID: <20200930091412.8020-7-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: devicetree@vger.kernel.org

Resource allocation via sysfw can use up to two ranges per resource
subtype to support more complex resource assignment, mainly for DMA
channels.

Take the second range also into consideration when setting up the maps
of available resources.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
---
 drivers/dma/ti/k3-udma.c | 55 ++++++++++++++++++++++------------------
 1 file changed, 31 insertions(+), 24 deletions(-)

diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
index 3f798993a8b1..1ae5d09e2059 100644
--- a/drivers/dma/ti/k3-udma.c
+++ b/drivers/dma/ti/k3-udma.c
@@ -3179,12 +3179,22 @@ static int udma_get_mmrs(struct platform_device *pdev, struct udma_dev *ud)
 	return 0;
 }

+static void udma_mark_resource_ranges(struct udma_dev *ud, unsigned long *map,
+				      struct ti_sci_resource_desc *rm_desc,
+				      char *name)
+{
+	bitmap_clear(map, rm_desc->start, rm_desc->num);
+	bitmap_clear(map, rm_desc->start_sec, rm_desc->num_sec);
+	dev_dbg(ud->dev, "ti_sci resource range for %s: %d:%d | %d:%d\n", name,
+		rm_desc->start, rm_desc->num, rm_desc->start_sec,
+		rm_desc->num_sec);
+}
+
 static int udma_setup_resources(struct udma_dev *ud)
 {
 	struct device *dev = ud->dev;
 	int ch_count, ret, i, j;
 	u32 cap2, cap3;
-	struct ti_sci_resource_desc *rm_desc;
 	struct ti_sci_resource *rm_res, irq_res;
 	struct udma_tisci_rm *tisci_rm = &ud->tisci_rm;
 	static const char * const range_names[] = { "ti,sci-rm-range-tchan",
@@ -3270,13 +3280,9 @@ static int udma_setup_resources(struct udma_dev *ud)
 		bitmap_zero(ud->tchan_map, ud->tchan_cnt);
 	} else {
 		bitmap_fill(ud->tchan_map, ud->tchan_cnt);
-		for (i = 0; i < rm_res->sets; i++) {
-			rm_desc = &rm_res->desc[i];
-			bitmap_clear(ud->tchan_map, rm_desc->start,
-				     rm_desc->num);
-			dev_dbg(dev, "ti-sci-res: tchan: %d:%d\n",
-				rm_desc->start, rm_desc->num);
-		}
+		for (i = 0; i < rm_res->sets; i++)
+			udma_mark_resource_ranges(ud, ud->tchan_map,
+						  &rm_res->desc[i], "tchan");
 	}
 	irq_res.sets = rm_res->sets;

@@ -3286,13 +3292,9 @@ static int udma_setup_resources(struct udma_dev *ud)
 		bitmap_zero(ud->rchan_map, ud->rchan_cnt);
 	} else {
 		bitmap_fill(ud->rchan_map, ud->rchan_cnt);
-		for (i = 0; i < rm_res->sets; i++) {
-			rm_desc = &rm_res->desc[i];
-			bitmap_clear(ud->rchan_map, rm_desc->start,
-				     rm_desc->num);
-			dev_dbg(dev, "ti-sci-res: rchan: %d:%d\n",
-				rm_desc->start, rm_desc->num);
-		}
+		for (i = 0; i < rm_res->sets; i++)
+			udma_mark_resource_ranges(ud, ud->rchan_map,
+						  &rm_res->desc[i], "rchan");
 	}
 	irq_res.sets += rm_res->sets;

@@ -3301,12 +3303,21 @@ static int udma_setup_resources(struct udma_dev *ud)
 	for (i = 0; i < rm_res->sets; i++) {
 		irq_res.desc[i].start = rm_res->desc[i].start;
 		irq_res.desc[i].num = rm_res->desc[i].num;
+		irq_res.desc[i].start_sec = rm_res->desc[i].start_sec;
+		irq_res.desc[i].num_sec = rm_res->desc[i].num_sec;
 	}
 	rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN];
 	for (j = 0; j < rm_res->sets; j++, i++) {
-		irq_res.desc[i].start = rm_res->desc[j].start +
+		if (rm_res->desc[j].num) {
+			irq_res.desc[i].start = rm_res->desc[j].start +
 					ud->soc_data->rchan_oes_offset;
-		irq_res.desc[i].num = rm_res->desc[j].num;
+			irq_res.desc[i].num = rm_res->desc[j].num;
+		}
+		if (rm_res->desc[j].num_sec) {
+			irq_res.desc[i].start_sec = rm_res->desc[j].start_sec +
+					ud->soc_data->rchan_oes_offset;
+			irq_res.desc[i].num_sec = rm_res->desc[j].num_sec;
+		}
 	}
 	ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res);
 	kfree(irq_res.desc);
@@ -3322,13 +3333,9 @@ static int udma_setup_resources(struct udma_dev *ud)
 		bitmap_clear(ud->rflow_gp_map, ud->rchan_cnt,
 			     ud->rflow_cnt - ud->rchan_cnt);
 	} else {
-		for (i = 0; i < rm_res->sets; i++) {
-			rm_desc = &rm_res->desc[i];
-			bitmap_clear(ud->rflow_gp_map, rm_desc->start,
-				     rm_desc->num);
-			dev_dbg(dev, "ti-sci-res: rflow: %d:%d\n",
-				rm_desc->start, rm_desc->num);
-		}
+		for (i = 0; i < rm_res->sets; i++)
+			udma_mark_resource_ranges(ud, ud->rflow_gp_map,
+						  &rm_res->desc[i], "gp-rflow");
 	}

 	ch_count -= bitmap_weight(ud->tchan_map, ud->tchan_cnt);
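As a concrete illustration of what the new helper consumes (the values
below are invented for this sketch), a two-range descriptor from sysfw
could look like this:

/* Invented values for illustration only. */
struct ti_sci_resource_desc desc = {
	.start     = 8,		/* primary range: channels 8..11 */
	.num       = 4,
	.start_sec = 20,	/* second range: channels 20..21 */
	.num_sec   = 2,
};

udma_mark_resource_ranges() clears both ranges in the "in use" bitmap
(which was previously filled), so channels 8-11 and 20-21 become
available to the allocator.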
From patchwork Wed Sep 30 09:14:01 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313872
From: Peter Ujfalusi <peter.ujfalusi@ti.com>
Subject: [PATCH 07/18] dmaengine: ti: k3-udma-glue: Add function to get device pointer for DMA API
Date: Wed, 30 Sep 2020 12:14:01 +0300
Message-ID: <20200930091412.8020-8-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: devicetree@vger.kernel.org

Glue layer users should use the device of the DMA for DMA mapping and
allocations, as it is the DMA that accesses the descriptors and
buffers, not the clients.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
---
 drivers/dma/ti/k3-udma-glue.c    | 14 ++++++++++++++
 drivers/dma/ti/k3-udma-private.c |  6 ++++++
 drivers/dma/ti/k3-udma.h         |  1 +
 include/linux/dma/k3-udma-glue.h |  4 ++++
 4 files changed, 25 insertions(+)
diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
index a367584f0d7b..a53bc4707ae8 100644
--- a/drivers/dma/ti/k3-udma-glue.c
+++ b/drivers/dma/ti/k3-udma-glue.c
@@ -487,6 +487,13 @@ int k3_udma_glue_tx_get_irq(struct k3_udma_glue_tx_channel *tx_chn)
 }
 EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_irq);

+struct device *
+	k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn)
+{
+	return xudma_get_device(tx_chn->common.udmax);
+}
+EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_dma_device);
+
 static int k3_udma_glue_cfg_rx_chn(struct k3_udma_glue_rx_channel *rx_chn)
 {
 	const struct udma_tisci_rm *tisci_rm = rx_chn->common.tisci_rm;
@@ -1189,3 +1196,10 @@ int k3_udma_glue_rx_get_irq(struct k3_udma_glue_rx_channel *rx_chn,
 	return flow->virq;
 }
 EXPORT_SYMBOL_GPL(k3_udma_glue_rx_get_irq);
+
+struct device *
+	k3_udma_glue_rx_get_dma_device(struct k3_udma_glue_rx_channel *rx_chn)
+{
+	return xudma_get_device(rx_chn->common.udmax);
+}
+EXPORT_SYMBOL_GPL(k3_udma_glue_rx_get_dma_device);
diff --git a/drivers/dma/ti/k3-udma-private.c b/drivers/dma/ti/k3-udma-private.c
index aa24e554f7b4..8ff7a264be03 100644
--- a/drivers/dma/ti/k3-udma-private.c
+++ b/drivers/dma/ti/k3-udma-private.c
@@ -50,6 +50,12 @@ struct udma_dev *of_xudma_dev_get(struct device_node *np, const char *property)
 }
 EXPORT_SYMBOL(of_xudma_dev_get);

+struct device *xudma_get_device(struct udma_dev *ud)
+{
+	return ud->dev;
+}
+EXPORT_SYMBOL(xudma_get_device);
+
 u32 xudma_dev_get_psil_base(struct udma_dev *ud)
 {
 	return ud->psil_base;
diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h
index 09c4529e013d..d1cace0cb43b 100644
--- a/drivers/dma/ti/k3-udma.h
+++ b/drivers/dma/ti/k3-udma.h
@@ -112,6 +112,7 @@ int xudma_navss_psil_unpair(struct udma_dev *ud, u32 src_thread,
 			    u32 dst_thread);

 struct udma_dev *of_xudma_dev_get(struct device_node *np, const char *property);
+struct device *xudma_get_device(struct udma_dev *ud);
 void xudma_dev_put(struct udma_dev *ud);
 u32 xudma_dev_get_psil_base(struct udma_dev *ud);
 struct udma_tisci_rm *xudma_dev_get_tisci_rm(struct udma_dev *ud);
diff --git a/include/linux/dma/k3-udma-glue.h b/include/linux/dma/k3-udma-glue.h
index 5eb34ad973a7..d7c12f31377c 100644
--- a/include/linux/dma/k3-udma-glue.h
+++ b/include/linux/dma/k3-udma-glue.h
@@ -41,6 +41,8 @@ void k3_udma_glue_reset_tx_chn(struct k3_udma_glue_tx_channel *tx_chn,
 u32 k3_udma_glue_tx_get_hdesc_size(struct k3_udma_glue_tx_channel *tx_chn);
 u32 k3_udma_glue_tx_get_txcq_id(struct k3_udma_glue_tx_channel *tx_chn);
 int k3_udma_glue_tx_get_irq(struct k3_udma_glue_tx_channel *tx_chn);
+struct device *
+	k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn);

 enum {
 	K3_UDMA_GLUE_SRC_TAG_LO_KEEP = 0,
@@ -130,5 +132,7 @@ int k3_udma_glue_rx_flow_enable(struct k3_udma_glue_rx_channel *rx_chn,
 				u32 flow_idx);
 int k3_udma_glue_rx_flow_disable(struct k3_udma_glue_rx_channel *rx_chn,
 				 u32 flow_idx);
+struct device *
+	k3_udma_glue_rx_get_dma_device(struct k3_udma_glue_rx_channel *rx_chn);

 #endif /* K3_UDMA_GLUE_H_ */
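A minimal sketch of a glue-layer client using the new accessor for
descriptor memory (illustrative; the xxx_alloc_desc_pool() name is
invented, the k3_udma_glue_* API is from this patch):

#include <linux/dma-mapping.h>
#include <linux/dma/k3-udma-glue.h>

/* Allocate descriptor memory with the DMA's device, since the UDMA,
 * not the client, reads and writes the descriptors. */
static void *xxx_alloc_desc_pool(struct k3_udma_glue_tx_channel *tx_chn,
				 size_t size, dma_addr_t *dma_handle)
{
	struct device *dma_dev = k3_udma_glue_tx_get_dma_device(tx_chn);

	return dma_alloc_coherent(dma_dev, size, dma_handle, GFP_KERNEL);
}

The returned device is the UDMA's own device (ud->dev), the entity that
actually performs the accesses.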
From patchwork Wed Sep 30 09:14:02 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313870
From: Peter Ujfalusi <peter.ujfalusi@ti.com>
Subject: [PATCH 08/18] dmaengine: ti: k3-udma-glue: Configure the dma_dev for rings
Date: Wed, 30 Sep 2020 12:14:02 +0300
Message-ID: <20200930091412.8020-9-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: devicetree@vger.kernel.org

Rings in RING mode should be using the DMA device for the DMA API, as
in this mode the ringacc does not access the ring memory in any way;
the DMA does.

Fix up the ring configuration: set the dma_dev unconditionally and let
the ringacc driver select the correct device to use for the DMA API.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
---
 drivers/dma/ti/k3-udma-glue.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
index a53bc4707ae8..f39825ce288a 100644
--- a/drivers/dma/ti/k3-udma-glue.c
+++ b/drivers/dma/ti/k3-udma-glue.c
@@ -280,6 +280,10 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
 		goto err;
 	}

+	/* Set the dma_dev for the rings to be configured */
+	cfg->tx_cfg.dma_dev = k3_udma_glue_tx_get_dma_device(tx_chn);
+	cfg->txcq_cfg.dma_dev = cfg->tx_cfg.dma_dev;
+
 	ret = k3_ringacc_ring_cfg(tx_chn->ringtx, &cfg->tx_cfg);
 	if (ret) {
 		dev_err(dev, "Failed to cfg ringtx %d\n", ret);
@@ -589,6 +593,10 @@ static int k3_udma_glue_cfg_rx_flow(struct k3_udma_glue_rx_channel *rx_chn,
 		goto err_rflow_put;
 	}

+	/* Set the dma_dev for the rings to be configured */
+	flow_cfg->rx_cfg.dma_dev = k3_udma_glue_rx_get_dma_device(rx_chn);
+	flow_cfg->rxfdq_cfg.dma_dev = flow_cfg->rx_cfg.dma_dev;
+
 	ret = k3_ringacc_ring_cfg(flow->ringrx, &flow_cfg->rx_cfg);
 	if (ret) {
 		dev_err(dev, "Failed to cfg ringrx %d\n", ret);
From patchwork Wed Sep 30 09:14:03 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313871

From: Peter Ujfalusi
Subject: [PATCH 09/18] dt-bindings: dma: ti: Add document for K3 BCDMA
Date: Wed, 30 Sep 2020 12:14:03 +0300
Message-ID: <20200930091412.8020-10-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: devicetree@vger.kernel.org
New binding document for the Texas Instruments K3 Block Copy DMA (BCDMA).
BCDMA is introduced as part of AM64.

Signed-off-by: Peter Ujfalusi
---
 .../devicetree/bindings/dma/ti/k3-bcdma.yaml  | 183 ++++++++++++++++++
 1 file changed, 183 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml

diff --git a/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml b/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
new file mode 100644
index 000000000000..c84fb641738f
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml
@@ -0,0 +1,183 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/dma/ti/k3-bcdma.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Texas Instruments K3 DMSS BCDMA Device Tree Bindings
+
+maintainers:
+  - Peter Ujfalusi
+
+description: |
+  The Block Copy DMA (BCDMA) is intended to perform similar functions as the TR
+  mode channels of K3 UDMA-P.
+  BCDMA includes block copy channels and split channels.
+
+  Block copy channels are mainly used for memory to memory transfers, but with
+  optional triggers a block copy channel can also service peripherals by
+  directly accessing memory mapped registers or areas.
+
+  Split channels can be used to service PSI-L based peripherals.
+  The peripherals can be PSI-L native or legacy, non PSI-L native peripherals
+  with PDMAs. PDMA is tasked to act as a bridge between the PSI-L fabric and
+  the legacy peripheral.
+
+  PDMAs can be configured via BCDMA split channel's peer registers to match
+  with the configuration of the legacy peripheral.
+
+allOf:
+  - $ref: /schemas/dma/dma-controller.yaml#
+
+properties:
+  "#dma-cells":
+    const: 3
+    description: |
+      cell 1: type of the BCDMA channel to be used to service the peripheral:
+        0 - split channel
+        1 - block copy channel using global trigger 1
+        2 - block copy channel using global trigger 2
+        3 - block copy channel using local trigger
+
+      cell 2: parameter for the channel:
+        if cell 1 is 0 (split channel):
+          PSI-L thread ID of the remote (to BCDMA) end.
+          Valid ranges for the thread ID depend on the data movement direction:
+          for source thread IDs (rx): 0 - 0x7fff
+          for destination thread IDs (tx): 0x8000 - 0xffff
+
+          Please refer to the device documentation for the PSI-L thread map and
+          also the PSI-L peripheral chapter for the correct thread ID.
+        if cell 1 is 1 or 2 (block copy channel using global trigger):
+          Unused, ignored
+
+          The trigger must be configured for the channel externally to BCDMA;
+          channels using global triggers should not be requested directly, but
+          via the DMA event router.
+        if cell 1 is 3 (block copy channel using local trigger):
+          bchan number of the locally triggered channel
+
+      cell 3: ASEL value for the channel
+
+  compatible:
+    enum:
+      - ti,am64-dmss-bcdma
+
+  "#address-cells":
+    const: 2
+
+  "#size-cells":
+    const: 2
+
+  reg:
+    maxItems: 5
+
+  reg-names:
+    items:
+      - const: gcfg
+      - const: bchanrt
+      - const: rchanrt
+      - const: tchanrt
+      - const: ringrt
+
+  msi-parent: true
+
+  ti,sci:
+    description: phandle to TI-SCI compatible System controller node
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/phandle
+
+  ti,sci-dev-id:
+    description: TI-SCI device id of BCDMA
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/uint32
+
+  ti,asel:
+    description: ASEL value for non-slave channels
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/uint32
+
+  ti,sci-rm-range-bchan:
+    description: |
+      Array of BCDMA block-copy channel resource subtypes for resource
+      allocation for this host
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/uint32-array
+    minItems: 1
+    # Should be enough
+    maxItems: 255
+
+  ti,sci-rm-range-tchan:
+    description: |
+      Array of BCDMA split tx channel resource subtypes for resource allocation
+      for this host
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/uint32-array
+    minItems: 1
+    # Should be enough
+    maxItems: 255
+
+  ti,sci-rm-range-rchan:
+    description: |
+      Array of BCDMA split rx channel resource subtypes for resource allocation
+      for this host
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/uint32-array
+    minItems: 1
+    # Should be enough
+    maxItems: 255
+
+required:
+  - compatible
+  - "#address-cells"
+  - "#size-cells"
+  - "#dma-cells"
+  - reg
+  - reg-names
+  - msi-parent
+  - ti,sci
+  - ti,sci-dev-id
+  - ti,sci-rm-range-bchan
+  - ti,sci-rm-range-tchan
+  - ti,sci-rm-range-rchan
+
+additionalProperties: false
+
+examples:
+  - |+
+    cbass_main {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        main_dmss {
+            compatible = "simple-mfd";
+            #address-cells = <2>;
+            #size-cells = <2>;
+            dma-ranges;
+            ranges;
+
+            ti,sci-dev-id = <25>;
+
+            main_bcdma: dma-controller@485c0100 {
+                compatible = "ti,am64-dmss-bcdma";
+                #address-cells = <2>;
+                #size-cells = <2>;
+
+                reg = <0x0 0x485c0100 0x0 0x100>,
+                      <0x0 0x4c000000 0x0 0x20000>,
+                      <0x0 0x4a820000 0x0 0x20000>,
+                      <0x0 0x4aa40000 0x0 0x20000>,
+                      <0x0 0x4bc00000 0x0 0x100000>;
+                reg-names = "gcfg", "bchanrt", "rchanrt", "tchanrt", "ringrt";
+                msi-parent = <&inta_main_dmss>;
+                #dma-cells = <3>;
+
+                ti,sci = <&dmsc>;
+                ti,sci-dev-id = <26>;
+
+                ti,sci-rm-range-bchan = <0x20>; /* BLOCK_COPY_CHAN */
+                ti,sci-rm-range-rchan = <0x21>; /* SPLIT_TR_RX_CHAN */
+                ti,sci-rm-range-tchan = <0x22>; /* SPLIT_TR_TX_CHAN */
+            };
+        };
+    };
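[Editorial note: a client node sketch may help illustrate the three #dma-cells.
This fragment is illustrative only; the client node, its compatible and the
PSI-L thread IDs (0xc400/0x4400) are invented, not part of the binding.]

    /* Hypothetical client using BCDMA split channels (cell 1 = 0):
     * each dma spec is <type thread_id asel>.
     */
    serial@2800000 {
        compatible = "vendor,example-uart";
        reg = <0x0 0x02800000 0x0 0x100>;
        dmas = <&main_bcdma 0 0xc400 0>,  /* tx: destination thread */
               <&main_bcdma 0 0x4400 0>;  /* rx: source thread */
        dma-names = "tx", "rx";
    };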
From patchwork Wed Sep 30 09:14:04 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313863
From: Peter Ujfalusi
Subject: [PATCH 10/18] dt-bindings: dma: ti: Add document for K3 PKTDMA
Date: Wed, 30 Sep 2020 12:14:04 +0300
Message-ID: <20200930091412.8020-11-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: devicetree@vger.kernel.org

New binding document for the Texas Instruments K3 Packet DMA (PKTDMA).
PKTDMA is introduced as part of AM64.

Signed-off-by: Peter Ujfalusi
---
 .../devicetree/bindings/dma/ti/k3-pktdma.yaml | 189 ++++++++++++++++++
 1 file changed, 189 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml

diff --git a/Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml b/Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml
new file mode 100644
index 000000000000..75b35bd85830
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml
@@ -0,0 +1,189 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/dma/ti/k3-pktdma.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Texas Instruments K3 DMSS PKTDMA Device Tree Bindings
+
+maintainers:
+  - Peter Ujfalusi
+
+description: |
+  The Packet DMA (PKTDMA) is intended to perform similar functions as the
+  packet mode channels of K3 UDMA-P.
+  PKTDMA only includes split channels to service PSI-L based peripherals.
+
+  The peripherals can be PSI-L native or legacy, non PSI-L native peripherals
+  with PDMAs. PDMA is tasked to act as a bridge between the PSI-L fabric and
+  the legacy peripheral.
+
+  PDMAs can be configured via PKTDMA split channel's peer registers to match
+  with the configuration of the legacy peripheral.
+
+allOf:
+  - $ref: /schemas/dma/dma-controller.yaml#
+
+properties:
+  "#dma-cells":
+    const: 2
+    description: |
+      The first cell is the PSI-L thread ID of the remote (to PKTDMA) end.
+      Valid ranges for the thread ID depend on the data movement direction:
+      for source thread IDs (rx): 0 - 0x7fff
+      for destination thread IDs (tx): 0x8000 - 0xffff
+
+      Please refer to the device documentation for the PSI-L thread map and
+      also the PSI-L peripheral chapter for the correct thread ID.
+
+      The second cell is the ASEL value for the channel.
+
+  compatible:
+    enum:
+      - ti,am64-dmss-pktdma
+
+  "#address-cells":
+    const: 2
+
+  "#size-cells":
+    const: 2
+
+  reg:
+    maxItems: 4
+
+  reg-names:
+    items:
+      - const: gcfg
+      - const: rchanrt
+      - const: tchanrt
+      - const: ringrt
+
+  msi-parent: true
+
+  ti,sci:
+    description: phandle to TI-SCI compatible System controller node
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/phandle
+
+  ti,sci-dev-id:
+    description: TI-SCI device id of PKTDMA
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/uint32
+
+  ti,sci-rm-range-tchan:
+    description: |
+      Array of PKTDMA split tx channel resource subtypes for resource
+      allocation for this host
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/uint32-array
+    minItems: 1
+    # Should be enough
+    maxItems: 255
+
+  ti,sci-rm-range-tflow:
+    description: |
+      Array of PKTDMA split tx flow resource subtypes for resource allocation
+      for this host
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/uint32-array
+    minItems: 1
+    # Should be enough
+    maxItems: 255
+
+  ti,sci-rm-range-rchan:
+    description: |
+      Array of PKTDMA split rx channel resource subtypes for resource
+      allocation for this host
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/uint32-array
+    minItems: 1
+    # Should be enough
+    maxItems: 255
+
+  ti,sci-rm-range-rflow:
+    description: |
+      Array of PKTDMA split rx flow resource subtypes for resource allocation
+      for this host
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/uint32-array
+    minItems: 1
+    # Should be enough
+    maxItems: 255
+
+required:
+  - compatible
+  - "#address-cells"
+  - "#size-cells"
+  - "#dma-cells"
+  - reg
+  - reg-names
+  - msi-parent
+  - ti,sci
+  - ti,sci-dev-id
+  - ti,sci-rm-range-tchan
+  - ti,sci-rm-range-tflow
+  - ti,sci-rm-range-rchan
+  - ti,sci-rm-range-rflow
+
+additionalProperties: false
+
+examples:
+  - |+
+    cbass_main {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        main_dmss {
+            compatible = "simple-mfd";
+            #address-cells = <2>;
+            #size-cells = <2>;
+            dma-ranges;
+            ranges;
+
+            ti,sci-dev-id = <25>;
+
+            main_pktdma: dma-controller@485c0000 {
+                compatible = "ti,am64-dmss-pktdma";
+                #address-cells = <2>;
+                #size-cells = <2>;
+
+                reg = <0x0 0x485c0000 0x0 0x100>,
+                      <0x0 0x4a800000 0x0 0x20000>,
+                      <0x0 0x4aa00000 0x0 0x40000>,
+                      <0x0 0x4b800000 0x0 0x400000>;
+                reg-names = "gcfg", "rchanrt", "tchanrt", "ringrt";
+                msi-parent = <&inta_main_dmss>;
+                #dma-cells = <2>;
+
+                ti,sci = <&dmsc>;
+                ti,sci-dev-id = <30>;
+
+                ti,sci-rm-range-tchan = <0x23>, /* UNMAPPED_TX_CHAN */
+                                        <0x24>, /* CPSW_TX_CHAN */
+                                        <0x25>, /* SAUL_TX_0_CHAN */
+                                        <0x26>, /* SAUL_TX_1_CHAN */
+                                        <0x27>, /* ICSSG_0_TX_CHAN */
+                                        <0x28>; /* ICSSG_1_TX_CHAN */
+                ti,sci-rm-range-tflow = <0x10>, /* RING_UNMAPPED_TX_CHAN */
+                                        <0x11>, /* RING_CPSW_TX_CHAN */
+                                        <0x12>, /* RING_SAUL_TX_0_CHAN */
+                                        <0x13>, /* RING_SAUL_TX_1_CHAN */
+                                        <0x14>, /* RING_ICSSG_0_TX_CHAN */
+                                        <0x15>; /* RING_ICSSG_1_TX_CHAN */
+                ti,sci-rm-range-rchan = <0x29>, /* UNMAPPED_RX_CHAN */
+                                        <0x2b>, /* CPSW_RX_CHAN */
+                                        <0x2d>, /* SAUL_RX_0_CHAN */
+                                        <0x2f>, /* SAUL_RX_1_CHAN */
+                                        <0x31>, /* SAUL_RX_2_CHAN */
+                                        <0x33>, /* SAUL_RX_3_CHAN */
+                                        <0x35>, /* ICSSG_0_RX_CHAN */
+                                        <0x37>; /* ICSSG_1_RX_CHAN */
+                ti,sci-rm-range-rflow = <0x2a>, /* FLOW_UNMAPPED_RX_CHAN */
+                                        <0x2c>, /* FLOW_CPSW_RX_CHAN */
+                                        <0x2e>, /* FLOW_SAUL_RX_0/1_CHAN */
+                                        <0x32>, /* FLOW_SAUL_RX_2/3_CHAN */
+                                        <0x36>, /* FLOW_ICSSG_0_RX_CHAN */
+                                        <0x38>; /* FLOW_ICSSG_1_RX_CHAN */
+            };
+        };
+    };
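[Editorial note: for illustration, a hypothetical client node using the
two-cell spec. The client node itself is invented; the thread IDs 0xc500
(tx destination) and 0x4500 (rx source) are borrowed from the AM64 PSI-L
map introduced later in this series, purely as plausible values.]

    ethernet@8000000 {
        compatible = "vendor,example-eth";
        reg = <0x0 0x08000000 0x0 0x1000>;
        /* each dma spec is <thread_id asel> */
        dmas = <&main_pktdma 0xc500 0>,  /* tx: destination thread */
               <&main_pktdma 0x4500 0>;  /* rx: source thread */
        dma-names = "tx", "rx";
    };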
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Ujfalusi X-Patchwork-Id: 313864 Delivered-To: patch@linaro.org Received: by 2002:a05:6e02:1081:0:0:0:0 with SMTP id r1csp21539ilj; Wed, 30 Sep 2020 02:14:59 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzMhKxy2AWNZkZl/wYe1E9xVbos+O+nItOhBe8q1Sk++PwGngSpDQfCv0oIN+UnClqpDcQV X-Received: by 2002:a17:906:fca7:: with SMTP id qw7mr1781538ejb.522.1601457299860; Wed, 30 Sep 2020 02:14:59 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1601457299; cv=none; d=google.com; s=arc-20160816; b=HuK0+i2v/XEQ0YxE7SRr8Mw2bLMLYHSFZz8of042iYIbfGBMo78012b5t+nobSyh11 UOiOyQrH9NrfkSq0lbNuYVQGZEklIDHybQonqF/ewX6ofC/0zkcZexEN/nwUNHK+JxBe DerPt22D7fqQROTG8I3ZOlpPg1z6p81RZhyrLChFRG7xFh/jScZIh5EtF2N7bwIK/cIM ZCLLZpZtjYWDzEspFd3/y9BS88KHeC699eT3JRI2jQnqDeSkBpd5rO1L0B7ewPJNDmkC z4Cj5OgQO/7hf9apCgOYucm42HE6t4Om9KuSjUvUzcWMpuA7d/Ua4PoRuwxCbycY0uaD /0hw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=KrUTk8ul87troFJrpjX2nEBHGpMAgPlBoUHLfqkKttI=; b=Ks70zKNgk9pZfDfApJ4yKHS+tDQq1FNDecdfkFH5fKlhcvUOZuRHWj2JU2doHXUpO7 WgHBHIaU0NfCp6GSnCWicrhLyE0AiBOiUAUg2DmPkPbs0OlkqLhW0X+WHxHnmkDgOLPQ 6A3lG/cNJVQPDuau/DInBoTMxgcibxwcL+0dkfyFutV0nK3E7stI8+2W1n7z7UzM9Dq6 2QecXE3VtRECgneIvqsvd9Y8QFscBh1nYat07HiwSJNINwukc4qpdc9/07pjTueqO4mn UkMVboOj5qrGcRH+BSVw52SBdEsfjO34HKcGXYtKsJldjqv+dvoTo4f4/9m4As3A3aBz ffNA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@ti.com header.s=ti-com-17Q1 header.b=erhe2dmh; spf=pass (google.com: domain of devicetree-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=devicetree-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=NONE dis=NONE) header.from=ti.com Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
From: Peter Ujfalusi
Subject: [PATCH 11/18] dmaengine: ti: k3-psil: Extend psil_endpoint_config for K3 PKTDMA
Date: Wed, 30 Sep 2020 12:14:05 +0300
Message-ID: <20200930091412.8020-12-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: devicetree@vger.kernel.org

Add the fields needed by K3 PKTDMA to handle mapped channels (channels
locked to service specific threads) and the flow ranges belonging to
these mapped threads. PKTDMA also introduces tflows for tx channels,
which do not exist in the K3 UDMA architecture.

Signed-off-by: Peter Ujfalusi
---
 include/linux/dma/k3-psil.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)
diff --git a/include/linux/dma/k3-psil.h b/include/linux/dma/k3-psil.h
index 1962f75fa2d3..36e22c5a0f29 100644
--- a/include/linux/dma/k3-psil.h
+++ b/include/linux/dma/k3-psil.h
@@ -50,6 +50,15 @@ enum psil_endpoint_type {
  * @channel_tpl:	Desired throughput level for the channel
  * @pdma_acc32:		ACC32 must be enabled on the PDMA side
  * @pdma_burst:		BURST must be enabled on the PDMA side
+ * @mapped_channel_id:	PKTDMA thread to channel mapping for mapped channels.
+ *			The thread must be serviced by the specified channel if
+ *			mapped_channel_id is >= 0 in case of PKTDMA
+ * @flow_start:		PKTDMA flow range start of mapped channel. Unmapped
+ *			channels use flow_id == chan_id
+ * @flow_num:		PKTDMA flow count of mapped channel. Unmapped channels
+ *			use flow_id == chan_id
+ * @default_flow_id:	PKTDMA default (r)flow index of mapped channel.
+ *			Must be within the flow range of the mapped channel.
  */
 struct psil_endpoint_config {
 	enum psil_endpoint_type ep_type;
@@ -63,6 +72,13 @@ struct psil_endpoint_config {
 	/* PDMA properties, valid for PSIL_EP_PDMA_* */
 	unsigned pdma_acc32:1;
 	unsigned pdma_burst:1;
+
+	/* PKTDMA mapped channel */
+	int mapped_channel_id;
+	/* PKTDMA tflow and rflow ranges for mapped channel */
+	u16 flow_start;
+	u16 flow_num;
+	u16 default_flow_id;
 };
 
 int psil_set_new_ep_config(struct device *dev, const char *name,
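[Editorial note: to make the new fields concrete, a hedged sketch of an
endpoint entry for a mapped PKTDMA channel. The thread ID, channel number
and flow values are made up for illustration; they are not from a real
PSI-L map.]

/* Illustrative only: a PSI-L endpoint whose thread is locked to
 * channel 16 and which owns 8 flows starting at flow 16.
 */
static struct psil_endpoint_config example_mapped_ep = {
	.ep_type = PSIL_EP_NATIVE,
	.pkt_mode = 1,
	.needs_epib = 1,
	.psd_size = 16,
	.mapped_channel_id = 16,	/* thread serviced by channel 16 */
	.flow_start = 16,		/* first flow of the mapped range */
	.flow_num = 8,			/* 8 flows belong to this channel */
	.default_flow_id = 16,		/* within [flow_start, flow_start + flow_num) */
};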
From patchwork Wed Sep 30 09:14:06 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313865

From: Peter Ujfalusi
Subject: [PATCH 12/18] dmaengine: ti: k3-psil: Add initial map for AM64
Date: Wed, 30 Sep 2020 12:14:06 +0300
Message-ID: <20200930091412.8020-13-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: devicetree@vger.kernel.org

Add initial PSI-L map file for AM64.

Signed-off-by: Peter Ujfalusi
---
 drivers/dma/ti/Makefile       |  3 +-
 drivers/dma/ti/k3-psil-am64.c | 75 +++++++++++++++++++++++++++++++++++
 drivers/dma/ti/k3-psil-priv.h |  1 +
 drivers/dma/ti/k3-psil.c      |  1 +
 4 files changed, 79 insertions(+), 1 deletion(-)
 create mode 100644 drivers/dma/ti/k3-psil-am64.c
diff --git a/drivers/dma/ti/Makefile b/drivers/dma/ti/Makefile
index 0c67254caee6..bd496efadff7 100644
--- a/drivers/dma/ti/Makefile
+++ b/drivers/dma/ti/Makefile
@@ -7,5 +7,6 @@ obj-$(CONFIG_TI_K3_UDMA_GLUE_LAYER) += k3-udma-glue.o
 obj-$(CONFIG_TI_K3_PSIL) += k3-psil.o \
 			    k3-psil-am654.o \
 			    k3-psil-j721e.o \
-			    k3-psil-j7200.o
+			    k3-psil-j7200.o \
+			    k3-psil-am64.o
 obj-$(CONFIG_TI_DMA_CROSSBAR) += dma-crossbar.o
diff --git a/drivers/dma/ti/k3-psil-am64.c b/drivers/dma/ti/k3-psil-am64.c
new file mode 100644
index 000000000000..e88f57a36ac1
--- /dev/null
+++ b/drivers/dma/ti/k3-psil-am64.c
@@ -0,0 +1,75 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Texas Instruments Incorporated - https://www.ti.com
+ * Author: Peter Ujfalusi
+ */
+
+#include <linux/kernel.h>
+
+#include "k3-psil-priv.h"
+
+#define PSIL_ETHERNET(x, ch, flow_base, flow_cnt)	\
+	{						\
+		.thread_id = x,				\
+		.ep_config = {				\
+			.ep_type = PSIL_EP_NATIVE,	\
+			.pkt_mode = 1,			\
+			.needs_epib = 1,		\
+			.psd_size = 16,			\
+			.mapped_channel_id = ch,	\
+			.flow_start = flow_base,	\
+			.flow_num = flow_cnt,		\
+			.default_flow_id = flow_base,	\
+		},					\
+	}
+
+#define PSIL_SAUL(x, ch, flow_base, flow_cnt, default_flow, tx)	\
+	{							\
+		.thread_id = x,					\
+		.ep_config = {					\
+			.ep_type = PSIL_EP_NATIVE,		\
+			.pkt_mode = 1,				\
+			.needs_epib = 1,			\
+			.psd_size = 64,				\
+			.mapped_channel_id = ch,		\
+			.flow_start = flow_base,		\
+			.flow_num = flow_cnt,			\
+			.default_flow_id = default_flow,	\
+			.notdpkt = tx,				\
+		},						\
+	}
+
+/* PSI-L source thread IDs, used for RX (DMA_DEV_TO_MEM) */
+static struct psil_ep am64_src_ep_map[] = {
+	/* SA2UL */
+	PSIL_SAUL(0x4000, 17, 32, 8, 32, 0),
+	PSIL_SAUL(0x4001, 18, 32, 8, 33, 0),
+	PSIL_SAUL(0x4002, 19, 40, 8, 40, 0),
+	PSIL_SAUL(0x4003, 20, 40, 8, 41, 0),
+	/* CPSW3G */
+	PSIL_ETHERNET(0x4500, 16, 16, 16),
+};
+
+/* PSI-L destination thread IDs, used for TX (DMA_MEM_TO_DEV) */
+static struct psil_ep am64_dst_ep_map[] = {
+	/* SA2UL */
+	PSIL_SAUL(0xc000, 24, 80, 8, 80, 1),
+	PSIL_SAUL(0xc001, 25, 88, 8, 88, 1),
+	/* CPSW3G */
+	PSIL_ETHERNET(0xc500, 16, 16, 8),
+	PSIL_ETHERNET(0xc501, 17, 24, 8),
+	PSIL_ETHERNET(0xc502, 18, 32, 8),
+	PSIL_ETHERNET(0xc503, 19, 40, 8),
+	PSIL_ETHERNET(0xc504, 20, 48, 8),
+	PSIL_ETHERNET(0xc505, 21, 56, 8),
+	PSIL_ETHERNET(0xc506, 22, 64, 8),
+	PSIL_ETHERNET(0xc507, 23, 72, 8),
+};
+
+struct psil_ep_map am64_ep_map = {
+	.name = "am64",
+	.src = am64_src_ep_map,
+	.src_count = ARRAY_SIZE(am64_src_ep_map),
+	.dst = am64_dst_ep_map,
+	.dst_count = ARRAY_SIZE(am64_dst_ep_map),
+};
diff --git a/drivers/dma/ti/k3-psil-priv.h b/drivers/dma/ti/k3-psil-priv.h
index b4b0fb359eff..b74e192e3c2d 100644
--- a/drivers/dma/ti/k3-psil-priv.h
+++ b/drivers/dma/ti/k3-psil-priv.h
@@ -40,5 +40,6 @@ struct psil_endpoint_config *psil_get_ep_config(u32 thread_id);
 extern struct psil_ep_map am654_ep_map;
 extern struct psil_ep_map j721e_ep_map;
 extern struct psil_ep_map j7200_ep_map;
+extern struct psil_ep_map am64_ep_map;
 
 #endif /* K3_PSIL_PRIV_H_ */
diff --git a/drivers/dma/ti/k3-psil.c b/drivers/dma/ti/k3-psil.c
index 837853aab95a..9da3027a6fbb 100644
--- a/drivers/dma/ti/k3-psil.c
+++ b/drivers/dma/ti/k3-psil.c
@@ -20,6 +20,7 @@ static const struct soc_device_attribute k3_soc_devices[] = {
 	{ .family = "AM65X", .data = &am654_ep_map },
 	{ .family = "J721E", .data = &j721e_ep_map },
 	{ .family = "J7200", .data = &j7200_ep_map },
+	{ .family = "AM64", .data = &am64_ep_map },
 	{ /* sentinel */ }
 };
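[Editorial note: as a reading aid, expanding the CPSW3G rx entry
PSIL_ETHERNET(0x4500, 16, 16, 16) by hand, using the macro defined above,
gives roughly the following element of am64_src_ep_map:]

/* Expansion sketch of PSIL_ETHERNET(0x4500, 16, 16, 16): thread 0x4500
 * is locked to channel 16 and owns 16 flows starting at flow 16.
 */
{
	.thread_id = 0x4500,
	.ep_config = {
		.ep_type = PSIL_EP_NATIVE,
		.pkt_mode = 1,
		.needs_epib = 1,
		.psd_size = 16,
		.mapped_channel_id = 16,
		.flow_start = 16,
		.flow_num = 16,
		.default_flow_id = 16,
	},
},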
From patchwork Wed Sep 30 09:14:07 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313861
From: Peter Ujfalusi
Subject: [PATCH 13/18] dmaengine: ti: Add support for k3 event routers
Date: Wed, 30 Sep 2020 12:14:07 +0300
Message-ID: <20200930091412.8020-14-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: devicetree@vger.kernel.org

In the k3 architecture a DMA channel (in TR mode) can be triggered by
global events originating from different modules. The events for
triggers can be sent from any module which is connected to the PSI-L
fabric, but the event number to be sent is DMA channel specific; it is
only known after the channel itself has been requested.
The router operation therefore needs to be split up:
- route_allocate: configure the dma_spec for the DMA and store the
  configuration which is needed for the router's input
- set_event: callback used by the DMA driver to set the event number for
  the channel and enable the routing

Signed-off-by: Peter Ujfalusi
---
 include/linux/dma/k3-event-router.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)
 create mode 100644 include/linux/dma/k3-event-router.h

diff --git a/include/linux/dma/k3-event-router.h b/include/linux/dma/k3-event-router.h
new file mode 100644
index 000000000000..e3f88b2f87be
--- /dev/null
+++ b/include/linux/dma/k3-event-router.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Texas Instruments Incorporated - https://www.ti.com
+ */
+
+#ifndef K3_EVENT_ROUTER_
+#define K3_EVENT_ROUTER_
+
+#include <linux/types.h>
+
+struct k3_event_route_data {
+	void *priv;
+	int (*set_event)(void *priv, u32 event);
+};
+
+#endif /* K3_EVENT_ROUTER_ */
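[Editorial note: a hedged sketch of the two-step flow, not taken from the
patch. The router register layout, the enable bit and all example_* names
are invented for illustration: the router driver fills a
k3_event_route_data during route_allocate, and the DMA driver calls
set_event() once the channel, and therefore the event number, is known.]

/* Hypothetical router driver side; the register semantics are made up. */
#include <linux/bits.h>
#include <linux/dma/k3-event-router.h>
#include <linux/io.h>

struct example_router_chan {
	void __iomem *mux_reg;		/* routes one input to one event */
	struct k3_event_route_data rd;	/* handed to the DMA driver */
};

static int example_router_set_event(void *priv, u32 event)
{
	struct example_router_chan *rc = priv;

	/* Program the event number and enable the route (bit 31 is an
	 * assumed enable bit in this made-up register layout).
	 */
	writel(BIT(31) | event, rc->mux_reg);
	return 0;
}

static void example_route_allocate(struct example_router_chan *rc)
{
	rc->rd.priv = rc;
	rc->rd.set_event = example_router_set_event;
	/* ...store &rc->rd where the DMA driver can find it, so it can
	 * later call rc->rd.set_event(rc->rd.priv, event) once the
	 * channel has been requested and the event number is known.
	 */
}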
From patchwork Wed Sep 30 09:14:08 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313862

From: Peter Ujfalusi
Subject: [PATCH 14/18] soc: ti: k3-ringacc: add AM64 DMA rings support.
Date: Wed, 30 Sep 2020 12:14:08 +0300
Message-ID: <20200930091412.8020-15-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: devicetree@vger.kernel.org

From: Grygorii Strashko

The DMAs in AM64 have built-in rings, unlike AM654/J721e/J7200 where a
separate and generic ringacc is used. The ring SW interface is similar
to ringacc, with some major architectural differences:
- they are part of the DMA (BCDMA or PKTDMA);
- they are dual mode rings, modeled as pairs of ring objects which share
  a common configuration and memory buffer but have separate real-time
  control register sets for each direction: mem2dev (forward) and
  dev2mem (reverse).

The ringacc driver must be initialized for DMA ring use with
k3_ringacc_dmarings_init(), as the rings are not an independent device
the way ringacc is (a usage sketch follows below).
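[Editorial note: a minimal, hedged sketch of how a DMA driver (BCDMA or
PKTDMA) might bring up the built-in rings, assuming probe context with a
valid TI-SCI handle. The device id, ring count and function name are
illustrative assumptions, not values from the patch.]

#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/soc/ti/k3-ringacc.h>

static int example_init_dma_rings(struct platform_device *pdev,
				  const struct ti_sci_handle *tisci)
{
	struct k3_ringacc_init_data init_data = {
		.tisci = tisci,
		.tisci_dev_id = 26,	/* illustrative TI-SCI device id */
		.num_rings = 82,	/* illustrative ring count */
	};
	struct k3_ring *fwd_ring, *compl_ring;
	struct k3_ringacc *ringacc;
	int ret;

	ringacc = k3_ringacc_dmarings_init(pdev, &init_data);
	if (IS_ERR(ringacc))
		return PTR_ERR(ringacc);

	/* DMA rings are requested by ID, always as a pair; the reverse
	 * (completion) ring is implied by the forward ring, so the
	 * compl_id argument is not used here.
	 */
	ret = k3_ringacc_request_rings_pair(ringacc, 0, -1,
					    &fwd_ring, &compl_ring);
	if (ret)
		return ret;

	/* Only the forward ring needs to be configured; the reverse
	 * ring shares its memory and configuration.
	 */
	return 0;
}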
AM64 rings must be requested only via k3_ringacc_request_rings_pair(),
and the forward ring must always be initialized/configured. After this,
any other ringacc API can be used without caller changes.

Signed-off-by: Grygorii Strashko
Signed-off-by: Peter Ujfalusi
---
 drivers/soc/ti/k3-ringacc.c       | 325 +++++++++++++++++++++++++++++-
 include/linux/soc/ti/k3-ringacc.h |  17 ++
 2 files changed, 335 insertions(+), 7 deletions(-)

diff --git a/drivers/soc/ti/k3-ringacc.c b/drivers/soc/ti/k3-ringacc.c
index 7d0b4092fce8..25eca75b859a 100644
--- a/drivers/soc/ti/k3-ringacc.c
+++ b/drivers/soc/ti/k3-ringacc.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include <linux/dma/ti-cppi5.h>
 #include
 #include
 #include
@@ -21,6 +22,7 @@
 static LIST_HEAD(k3_ringacc_list);
 static DEFINE_MUTEX(k3_ringacc_list_lock);
 
 #define K3_RINGACC_CFG_RING_SIZE_ELCNT_MASK	GENMASK(19, 0)
+#define K3_DMARING_CFG_RING_SIZE_ELCNT_MASK	GENMASK(15, 0)
 
 /**
  * struct k3_ring_rt_regs - The RA realtime Control/Status Registers region
@@ -43,7 +45,13 @@ struct k3_ring_rt_regs {
 	u32	hwindx;
 };
 
-#define K3_RINGACC_RT_REGS_STEP	0x1000
+#define K3_RINGACC_RT_REGS_STEP			0x1000
+#define K3_DMARING_RT_REGS_STEP			0x2000
+#define K3_DMARING_RT_REGS_REVERSE_OFS		0x1000
+#define K3_RINGACC_RT_OCC_MASK			GENMASK(20, 0)
+#define K3_DMARING_RT_OCC_TDOWN_COMPLETE	BIT(31)
+#define K3_DMARING_RT_DB_ENTRY_MASK		GENMASK(7, 0)
+#define K3_DMARING_RT_DB_TDOWN_ACK		BIT(31)
 
 /**
  * struct k3_ring_fifo_regs - The Ring Accelerator Queues Registers region
@@ -122,6 +130,7 @@ struct k3_ring_state {
 	u32 occ;
 	u32 windex;
 	u32 rindex;
+	u32 tdown_complete:1;
 };
 
 /**
@@ -142,6 +151,7 @@ struct k3_ring_state {
 * @use_count: Use count for shared rings
 * @proxy_id: RA Ring Proxy Id (only if @K3_RINGACC_RING_USE_PROXY)
 * @dma_dev: device to be used for DMA API (allocation, mapping)
+ * @asel: Address Space Select value for physical addresses
 */
 struct k3_ring {
 	struct k3_ring_rt_regs __iomem *rt;
@@ -156,12 +166,15 @@ struct k3_ring {
 	u32		flags;
 #define K3_RING_FLAG_BUSY	BIT(1)
 #define K3_RING_FLAG_SHARED	BIT(2)
+#define K3_RING_FLAG_REVERSE	BIT(3)
 	struct k3_ring_state state;
 	u32		ring_id;
 	struct k3_ringacc	*parent;
 	u32		use_count;
 	int		proxy_id;
 	struct device	*dma_dev;
+	u32		asel;
+#define K3_ADDRESS_ASEL_SHIFT	48
 };
 
 struct k3_ringacc_ops {
@@ -187,6 +200,7 @@ struct k3_ringacc_ops {
 * @tisci_ring_ops: ti-sci rings ops
 * @tisci_dev_id: ti-sci device id
 * @ops: SoC specific ringacc operation
+ * @dma_rings: indicate DMA ring (dual ring within BCDMA/PKTDMA)
 */
 struct k3_ringacc {
 	struct device *dev;
@@ -209,6 +223,7 @@ struct k3_ringacc {
 	u32 tisci_dev_id;
 
 	const struct k3_ringacc_ops *ops;
+	bool dma_rings;
 };
 
 /**
@@ -220,6 +235,21 @@ struct k3_ringacc_soc_data {
 	unsigned dma_ring_reset_quirk:1;
 };
 
+static int k3_ringacc_ring_read_occ(struct k3_ring *ring)
+{
+	return readl(&ring->rt->occ) & K3_RINGACC_RT_OCC_MASK;
+}
+
+static void k3_ringacc_ring_update_occ(struct k3_ring *ring)
+{
+	u32 val;
+
+	val = readl(&ring->rt->occ);
+
+	ring->state.occ = val & K3_RINGACC_RT_OCC_MASK;
+	ring->state.tdown_complete = !!(val & K3_DMARING_RT_OCC_TDOWN_COMPLETE);
+}
+
 static long k3_ringacc_ring_get_fifo_pos(struct k3_ring *ring)
 {
 	return K3_RINGACC_FIFO_WINDOW_SIZE_BYTES -
@@ -233,12 +263,24 @@ static void *k3_ringacc_get_elm_addr(struct k3_ring *ring, u32 idx)
 
 static int k3_ringacc_ring_push_mem(struct k3_ring *ring, void *elem);
 static int k3_ringacc_ring_pop_mem(struct k3_ring *ring, void *elem);
+static int k3_dmaring_fwd_pop(struct k3_ring *ring, void *elem);
+static int k3_dmaring_reverse_pop(struct k3_ring *ring, void *elem);
 
 static struct k3_ring_ops k3_ring_mode_ring_ops = {
 		.push_tail = k3_ringacc_ring_push_mem,
 		.pop_head = k3_ringacc_ring_pop_mem,
 };
 
+static struct k3_ring_ops k3_dmaring_fwd_ops = {
+		.push_tail = k3_ringacc_ring_push_mem,
+		.pop_head = k3_dmaring_fwd_pop,
+};
+
+static struct k3_ring_ops k3_dmaring_reverse_ops = {
+		/* Reverse side of the DMA ring can only be popped by SW */
+		.pop_head = k3_dmaring_reverse_pop,
+};
+
 static int k3_ringacc_ring_push_io(struct k3_ring *ring, void *elem);
 static int k3_ringacc_ring_pop_io(struct k3_ring *ring, void *elem);
 static int k3_ringacc_ring_push_head_io(struct k3_ring *ring, void *elem);
@@ -341,6 +383,40 @@ struct k3_ring *k3_ringacc_request_ring(struct k3_ringacc *ringacc,
 }
 EXPORT_SYMBOL_GPL(k3_ringacc_request_ring);
 
+static int k3_dmaring_request_dual_ring(struct k3_ringacc *ringacc, int fwd_id,
+					struct k3_ring **fwd_ring,
+					struct k3_ring **compl_ring)
+{
+	int ret = 0;
+
+	/*
+	 * DMA rings must be requested by ID, the completion ring is the
+	 * reverse side of the forward ring.
+	 */
+	if (fwd_id < 0)
+		return -EINVAL;
+
+	mutex_lock(&ringacc->req_lock);
+
+	if (test_bit(fwd_id, ringacc->rings_inuse)) {
+		ret = -EBUSY;
+		goto error;
+	}
+
+	*fwd_ring = &ringacc->rings[fwd_id];
+	*compl_ring = &ringacc->rings[fwd_id + ringacc->num_rings];
+	set_bit(fwd_id, ringacc->rings_inuse);
+	ringacc->rings[fwd_id].use_count++;
+	dev_dbg(ringacc->dev, "Giving ring#%d\n", fwd_id);
+
+	mutex_unlock(&ringacc->req_lock);
+	return 0;
+
+error:
+	mutex_unlock(&ringacc->req_lock);
+	return ret;
+}
+
 int k3_ringacc_request_rings_pair(struct k3_ringacc *ringacc,
 				  int fwd_id, int compl_id,
 				  struct k3_ring **fwd_ring,
@@ -351,6 +427,10 @@ int k3_ringacc_request_rings_pair(struct k3_ringacc *ringacc,
 	if (!fwd_ring || !compl_ring)
 		return -EINVAL;
 
+	if (ringacc->dma_rings)
+		return k3_dmaring_request_dual_ring(ringacc, fwd_id,
+						    fwd_ring, compl_ring);
+
 	*fwd_ring = k3_ringacc_request_ring(ringacc, fwd_id, 0);
 	if (!(*fwd_ring))
 		return -ENODEV;
@@ -420,7 +500,7 @@ void k3_ringacc_ring_reset_dma(struct k3_ring *ring, u32 occ)
 		goto reset;
 
 	if (!occ)
-		occ = readl(&ring->rt->occ);
+		occ = k3_ringacc_ring_read_occ(ring);
 
 	if (occ) {
 		u32 db_ring_cnt, db_ring_cnt_cur;
@@ -495,6 +575,13 @@ int k3_ringacc_ring_free(struct k3_ring *ring)
 
 	ringacc = ring->parent;
 
+	/*
+	 * DMA rings: the rings share memory and configuration, only the
+	 * forward ring is configured and the reverse ring is considered
+	 * a slave.
+	 */
+	if (ringacc->dma_rings && (ring->flags & K3_RING_FLAG_REVERSE))
+		return 0;
+
 	dev_dbg(ring->parent->dev, "flags: 0x%08x\n", ring->flags);
 
 	if (!test_bit(ring->ring_id, ringacc->rings_inuse))
@@ -516,6 +603,8 @@ int k3_ringacc_ring_free(struct k3_ring *ring)
 	ring->flags = 0;
 	ring->ops = NULL;
 	ring->dma_dev = NULL;
+	ring->asel = 0;
+
 	if (ring->proxy_id != K3_RINGACC_PROXY_NOT_USED) {
 		clear_bit(ring->proxy_id, ringacc->proxy_inuse);
 		ring->proxy = NULL;
@@ -580,6 +669,7 @@ static int k3_ringacc_ring_cfg_sci(struct k3_ring *ring)
 	ring_cfg.count = ring->size;
 	ring_cfg.mode = ring->mode;
 	ring_cfg.size = ring->elm_size;
+	ring_cfg.asel = ring->asel;
 
 	ret = ringacc->tisci_ring_ops->set_cfg(ringacc->tisci, &ring_cfg);
 	if (ret)
@@ -589,6 +679,90 @@ static int k3_ringacc_ring_cfg_sci(struct k3_ring *ring)
 	return ret;
 }
 
+static int k3_dmaring_cfg(struct k3_ring *ring, struct k3_ring_cfg *cfg)
+{
+	struct k3_ringacc *ringacc;
+	struct k3_ring *reverse_ring;
+	int ret = 0;
+
+	if (cfg->elm_size != K3_RINGACC_RING_ELSIZE_8 ||
+	    cfg->mode != K3_RINGACC_RING_MODE_RING ||
+	    cfg->size & ~K3_DMARING_CFG_RING_SIZE_ELCNT_MASK)
+		return -EINVAL;
+
+	ringacc = ring->parent;
+
+	/*
+	 * DMA rings: the rings share memory and configuration, only the
+	 * forward ring is configured and the reverse ring is considered
+	 * a slave.
+	 */
+	if (ringacc->dma_rings && (ring->flags & K3_RING_FLAG_REVERSE))
+		return 0;
+
+	if (!test_bit(ring->ring_id, ringacc->rings_inuse))
+		return -EINVAL;
+
+	ring->size = cfg->size;
+	ring->elm_size = cfg->elm_size;
+	ring->mode = cfg->mode;
+	ring->asel = cfg->asel;
+	ring->dma_dev = cfg->dma_dev;
+	if (!ring->dma_dev) {
+		dev_warn(ringacc->dev, "dma_dev is not provided for ring%d\n",
+			 ring->ring_id);
+		ring->dma_dev = ringacc->dev;
+	}
+
+	memset(&ring->state, 0, sizeof(ring->state));
+
+	ring->ops = &k3_dmaring_fwd_ops;
+
+	ring->ring_mem_virt = dma_alloc_coherent(ring->dma_dev,
+						 ring->size * (4 << ring->elm_size),
+						 &ring->ring_mem_dma, GFP_KERNEL);
+	if (!ring->ring_mem_virt) {
+		dev_err(ringacc->dev, "Failed to alloc ring mem\n");
+		ret = -ENOMEM;
+		goto err_free_ops;
+	}
+
+	ret = k3_ringacc_ring_cfg_sci(ring);
+	if (ret)
+		goto err_free_mem;
+
+	ring->flags |= K3_RING_FLAG_BUSY;
+
+	k3_ringacc_ring_dump(ring);
+
+	/* DMA rings: configure the reverse ring */
+	reverse_ring = &ringacc->rings[ring->ring_id + ringacc->num_rings];
+	reverse_ring->size = cfg->size;
+	reverse_ring->elm_size = cfg->elm_size;
+	reverse_ring->mode = cfg->mode;
+	reverse_ring->asel = cfg->asel;
+	memset(&reverse_ring->state, 0, sizeof(reverse_ring->state));
+	reverse_ring->ops = &k3_dmaring_reverse_ops;
+
+	reverse_ring->ring_mem_virt = ring->ring_mem_virt;
+	reverse_ring->ring_mem_dma = ring->ring_mem_dma;
+	reverse_ring->flags |= K3_RING_FLAG_BUSY;
+	k3_ringacc_ring_dump(reverse_ring);
+
+	return 0;
+
+err_free_mem:
+	dma_free_coherent(ring->dma_dev,
+			  ring->size * (4 << ring->elm_size),
+			  ring->ring_mem_virt,
+			  ring->ring_mem_dma);
+err_free_ops:
+	ring->ops = NULL;
+	ring->proxy = NULL;
+	ring->dma_dev = NULL;
+	ring->asel = 0;
+	return ret;
+}
+
 int k3_ringacc_ring_cfg(struct k3_ring *ring, struct k3_ring_cfg *cfg)
 {
 	struct k3_ringacc *ringacc;
@@ -596,8 +770,12 @@ int k3_ringacc_ring_cfg(struct k3_ring *ring, struct k3_ring_cfg *cfg)
 	if (!ring || !cfg)
 		return -EINVAL;
+
 	ringacc = ring->parent;
 
+	if (ringacc->dma_rings)
+		return k3_dmaring_cfg(ring, cfg);
+
 	if (cfg->elm_size > K3_RINGACC_RING_ELSIZE_256 ||
 	    cfg->mode >= K3_RINGACC_RING_MODE_INVALID ||
 	    cfg->size & ~K3_RINGACC_CFG_RING_SIZE_ELCNT_MASK ||
@@ -704,7 +882,7 @@ u32 k3_ringacc_ring_get_free(struct k3_ring *ring)
 		return -EINVAL;
 
 	if (!ring->state.free)
-		ring->state.free = ring->size - readl(&ring->rt->occ);
+		ring->state.free = ring->size - k3_ringacc_ring_read_occ(ring);
 
 	return ring->state.free;
 }
@@ -715,7 +893,7 @@ u32 k3_ringacc_ring_get_occ(struct k3_ring *ring)
 	if (!ring || !(ring->flags & K3_RING_FLAG_BUSY))
 		return -EINVAL;
 
-	return readl(&ring->rt->occ);
+	return k3_ringacc_ring_read_occ(ring);
 }
 EXPORT_SYMBOL_GPL(k3_ringacc_ring_get_occ);
 
@@ -891,6 +1069,72 @@ static int k3_ringacc_ring_pop_tail_io(struct k3_ring *ring, void *elem)
 				       K3_RINGACC_ACCESS_MODE_POP_HEAD);
 }
 
+/*
+ * The element is 48 bits of address + ASEL bits in the ring.
+ * ASEL is used by the DMAs and should be removed for the kernel as it is not
+ * part of the physical memory address.
+ */
+static void k3_dmaring_remove_asel_from_elem(u64 *elem)
+{
+	*elem &= GENMASK_ULL(K3_ADDRESS_ASEL_SHIFT - 1, 0);
+}
+
+static int k3_dmaring_fwd_pop(struct k3_ring *ring, void *elem)
+{
+	void *elem_ptr;
+	u32 elem_idx;
+
+	/*
+	 * DMA rings: the forward ring is always tied to a DMA channel and
+	 * the HW does not maintain any state data required for the POP
+	 * operation, so it is unknown how many elements were consumed by
+	 * the HW. To actually do a POP, the read pointer has to be
+	 * recalculated every time.
+	 */
+	ring->state.occ = k3_ringacc_ring_read_occ(ring);
+	if (ring->state.windex >= ring->state.occ)
+		elem_idx = ring->state.windex - ring->state.occ;
+	else
+		elem_idx = ring->size - (ring->state.occ - ring->state.windex);
+
+	elem_ptr = k3_ringacc_get_elm_addr(ring, elem_idx);
+	memcpy(elem, elem_ptr, (4 << ring->elm_size));
+	k3_dmaring_remove_asel_from_elem(elem);
+
+	ring->state.occ--;
+	writel(-1, &ring->rt->db);
+
+	dev_dbg(ring->parent->dev, "%s: occ%d Windex%d Rindex%d pos_ptr%px\n",
+		__func__, ring->state.occ, ring->state.windex, elem_idx,
+		elem_ptr);
+	return 0;
+}
+
+static int k3_dmaring_reverse_pop(struct k3_ring *ring, void *elem)
+{
+	void *elem_ptr;
+
+	elem_ptr = k3_ringacc_get_elm_addr(ring, ring->state.rindex);
+
+	if (ring->state.occ) {
+		memcpy(elem, elem_ptr, (4 << ring->elm_size));
+		k3_dmaring_remove_asel_from_elem(elem);
+
+		ring->state.rindex = (ring->state.rindex + 1) % ring->size;
+		ring->state.occ--;
+		writel(-1 & K3_DMARING_RT_DB_ENTRY_MASK, &ring->rt->db);
+	} else if (ring->state.tdown_complete) {
+		dma_addr_t *value = elem;
+
+		*value = CPPI5_TDCM_MARKER;
+		writel(K3_DMARING_RT_DB_TDOWN_ACK, &ring->rt->db);
+		ring->state.tdown_complete = false;
+	}
+
+	dev_dbg(ring->parent->dev, "%s: occ%d index%d pos_ptr%px\n",
+		__func__, ring->state.occ, ring->state.rindex, elem_ptr);
+	return 0;
+}
+
 static int k3_ringacc_ring_push_mem(struct k3_ring *ring, void *elem)
 {
 	void *elem_ptr;
@@ -898,6 +1142,11 @@ static int k3_ringacc_ring_push_mem(struct k3_ring *ring, void *elem)
 	elem_ptr = k3_ringacc_get_elm_addr(ring, ring->state.windex);
 
 	memcpy(elem_ptr, elem, (4 << ring->elm_size));
+	if (ring->parent->dma_rings) {
+		u64 *addr = elem_ptr;
+
+		*addr |= ((u64)ring->asel << K3_ADDRESS_ASEL_SHIFT);
+	}
 
 	ring->state.windex = (ring->state.windex + 1) % ring->size;
 	ring->state.free--;
@@ -974,12 +1223,12 @@ int k3_ringacc_ring_pop(struct k3_ring *ring, void *elem)
 		return -EINVAL;
 
 	if (!ring->state.occ)
-		ring->state.occ = k3_ringacc_ring_get_occ(ring);
+		k3_ringacc_ring_update_occ(ring);
 
 	dev_dbg(ring->parent->dev, "ring_pop: occ%d index%d\n", ring->state.occ,
 		ring->state.rindex);
 
-	if (!ring->state.occ)
+	if (!ring->state.occ && !ring->state.tdown_complete)
 		return -ENODATA;
 
 	if (ring->ops && ring->ops->pop_head)
@@ -997,7 +1246,7 @@ int k3_ringacc_ring_pop_tail(struct k3_ring *ring, void *elem)
 		return -EINVAL;
 
 	if (!ring->state.occ)
-		ring->state.occ = k3_ringacc_ring_get_occ(ring);
+		k3_ringacc_ring_update_occ(ring);
 
 	dev_dbg(ring->parent->dev, "ring_pop_tail: occ%d index%d\n",
 		ring->state.occ, ring->state.rindex);
@@ -1202,6 +1451,68 @@ static const struct of_device_id k3_ringacc_of_match[] = {
 	{},
 };
 
+struct k3_ringacc *k3_ringacc_dmarings_init(struct platform_device *pdev,
+					    struct k3_ringacc_init_data *data)
+{
+	struct device *dev = &pdev->dev;
+	struct k3_ringacc *ringacc;
+	void __iomem *base_rt;
+	struct resource *res;
+	int i;
+
+	ringacc = devm_kzalloc(dev, sizeof(*ringacc), GFP_KERNEL);
+	if (!ringacc)
+		return ERR_PTR(-ENOMEM);
+
+	ringacc->dev = dev;
+	ringacc->dma_rings = true;
+	ringacc->num_rings = data->num_rings;
+	ringacc->tisci = data->tisci;
+	ringacc->tisci_dev_id = data->tisci_dev_id;
+
+	mutex_init(&ringacc->req_lock);
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ringrt");
+	base_rt = devm_ioremap_resource(dev, res);
+	if (IS_ERR(base_rt))
+		return base_rt;
+
+	ringacc->rings = devm_kzalloc(dev,
+				      sizeof(*ringacc->rings) *
+				      ringacc->num_rings * 2,
+				      GFP_KERNEL);
+	ringacc->rings_inuse = devm_kcalloc(dev,
+					    BITS_TO_LONGS(ringacc->num_rings),
+					    sizeof(unsigned long), GFP_KERNEL);
+
+	if (!ringacc->rings || !ringacc->rings_inuse)
+		return ERR_PTR(-ENOMEM);
+
+	for (i = 0; i < ringacc->num_rings; i++) {
+		struct k3_ring *ring = &ringacc->rings[i];
+
+		ring->rt = base_rt + K3_DMARING_RT_REGS_STEP * i;
+		ring->parent = ringacc;
+		ring->ring_id = i;
+		ring->proxy_id = K3_RINGACC_PROXY_NOT_USED;
+
+		ring = &ringacc->rings[ringacc->num_rings + i];
+		ring->rt = base_rt + K3_DMARING_RT_REGS_STEP * i +
+			   K3_DMARING_RT_REGS_REVERSE_OFS;
+		ring->parent = ringacc;
+		ring->ring_id = i;
+		ring->proxy_id = K3_RINGACC_PROXY_NOT_USED;
+		ring->flags = K3_RING_FLAG_REVERSE;
+	}
+
+	ringacc->tisci_ring_ops = &ringacc->tisci->ops.rm_ring_ops;
+
+	dev_info(dev, "Number of rings: %u\n", ringacc->num_rings);
+
+	return ringacc;
+}
+EXPORT_SYMBOL_GPL(k3_ringacc_dmarings_init);
+
 static int k3_ringacc_probe(struct platform_device *pdev)
 {
 	const struct ringacc_match_data *match_data;
diff --git a/include/linux/soc/ti/k3-ringacc.h b/include/linux/soc/ti/k3-ringacc.h
index 658dc71d2901..39b022b92598 100644
--- a/include/linux/soc/ti/k3-ringacc.h
+++ b/include/linux/soc/ti/k3-ringacc.h
@@ -70,6 +70,7 @@ struct k3_ring;
 * @dma_dev: Master device which is using and accessing to the ring
 *	memory when the mode is K3_RINGACC_RING_MODE_RING. Memory allocations
 *	should be done using this device.
+ * @asel: Address Space Select value for physical addresses */ struct k3_ring_cfg { u32 size; @@ -79,6 +80,7 @@ struct k3_ring_cfg { u32 flags; struct device *dma_dev; + u32 asel; }; #define K3_RINGACC_RING_ID_ANY (-1) @@ -250,4 +252,19 @@ int k3_ringacc_ring_pop_tail(struct k3_ring *ring, void *elem); u32 k3_ringacc_get_tisci_dev_id(struct k3_ring *ring); +/* DMA ring support */ +struct ti_sci_handle; + +/** + * struct k3_ringacc_init_data - Initialization data for DMA rings + */ +struct k3_ringacc_init_data { + const struct ti_sci_handle *tisci; + u32 tisci_dev_id; + u32 num_rings; +}; + +struct k3_ringacc *k3_ringacc_dmarings_init(struct platform_device *pdev, + struct k3_ringacc_init_data *data); + #endif /* __SOC_TI_K3_RINGACC_API_H_ */
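For illustration, a minimal sketch (not part of the series; tisci, dma_dev_id,
the channel counts, dma_device, asel and the ring obtained from the ringacc
are placeholders, and error handling is trimmed) of how a DMA driver is
expected to use the new API:

	struct k3_ringacc_init_data init_data = {
		.tisci		= tisci,	/* TI-SCI handle of the DMA */
		.tisci_dev_id	= dma_dev_id,
		.num_rings	= bchan_cnt + tchan_cnt + rchan_cnt,
	};
	struct k3_ring_cfg cfg = {
		.size		= 16,
		.elm_size	= K3_RINGACC_RING_ELSIZE_8,	/* mandatory for DMA rings */
		.mode		= K3_RINGACC_RING_MODE_RING,	/* mandatory for DMA rings */
		.asel		= asel,		/* forwarded to the TISCI ring config */
		.dma_dev	= dma_device,	/* device accessing the ring memory */
	};
	struct k3_ringacc *ringacc;

	ringacc = k3_ringacc_dmarings_init(pdev, &init_data);
	if (IS_ERR(ringacc))
		return PTR_ERR(ringacc);

	/* Configuring the forward ring implicitly sets up its reverse pair. */
	ret = k3_ringacc_ring_cfg(ring, &cfg);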
From patchwork Wed Sep 30 09:14:09 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313867
From: Peter Ujfalusi
To: , , , ,
CC: , , , , , ,
Subject: [PATCH 15/18] dmaengine: ti: k3-udma: Initial support for K3 BCDMA
Date: Wed, 30 Sep 2020 12:14:09 +0300
Message-ID: <20200930091412.8020-16-peter.ujfalusi@ti.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
References: <20200930091412.8020-1-peter.ujfalusi@ti.com>
Precedence: bulk
List-ID: X-Mailing-List: devicetree@vger.kernel.org

One of the DMAs introduced with AM64 is the Block Copy DMA (BCDMA). It serves
a similar purpose as the K3 UDMAP channels in TR mode. The rings for the
BCDMA are integrated within the DMA itself instead of using rings from the
general purpose ringacc.

A BCDMA has two different types of channels:
- Block Copy Channels (bchan)
- Split Channels (tchan and rchan)

tchan and rchan can be used to service PSI-L peripherals, similarly to K3
UDMA channels.

bchan can only be used for block copy operations (TR type15), like the paired
K3 UDMA tchan/rchan configured in block copy mode. bchans can also be used to
service peripherals directly if an external trigger is selected for the
channel.
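As an illustration (not taken from this series; dst, src, len and done_cb are
placeholders), a client could drive a bchan for block copy through the
generic dmaengine API once DMA_MEMCPY is advertised:

	dma_cap_mask_t mask;
	struct dma_chan *chan;
	struct dma_async_tx_descriptor *tx;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);
	chan = dma_request_chan_by_mask(&mask);	/* may resolve to a bchan */
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len, DMA_PREP_INTERRUPT);
	if (tx) {
		tx->callback = done_cb;	/* placeholder completion callback */
		dmaengine_submit(tx);
		dma_async_issue_pending(chan);
	}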
Most of the driver code can be reused for BCDMA bchan/tchan/rchan support but new setup and allocation functions are needed to handle the differences between the DMAs. Signed-off-by: Peter Ujfalusi --- drivers/dma/ti/k3-udma.c | 1386 ++++++++++++++++++++++++++++++++++---- drivers/dma/ti/k3-udma.h | 12 +- 2 files changed, 1247 insertions(+), 151 deletions(-) -- Peter Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c index 1ae5d09e2059..a342e89a4bae 100644 --- a/drivers/dma/ti/k3-udma.c +++ b/drivers/dma/ti/k3-udma.c @@ -26,6 +26,7 @@ #include #include #include +#include #include #include "../virt-dma.h" @@ -55,14 +56,25 @@ struct udma_static_tr { struct udma_chan; +enum k3_dma_type { + DMA_TYPE_UDMA = 0, + DMA_TYPE_BCDMA, +}; + enum udma_mmr { MMR_GCFG = 0, + MMR_BCHANRT, MMR_RCHANRT, MMR_TCHANRT, MMR_LAST, }; -static const char * const mmr_names[] = { "gcfg", "rchanrt", "tchanrt" }; +static const char * const mmr_names[] = { + [MMR_GCFG] = "gcfg", + [MMR_BCHANRT] = "bchanrt", + [MMR_RCHANRT] = "rchanrt", + [MMR_TCHANRT] = "tchanrt", +}; struct udma_tchan { void __iomem *reg_rt; @@ -72,6 +84,8 @@ struct udma_tchan { struct k3_ring *tc_ring; /* Transmit Completion ring */ }; +#define udma_bchan udma_tchan + struct udma_rflow { int id; struct k3_ring *fd_ring; /* Free Descriptor ring */ @@ -84,11 +98,25 @@ struct udma_rchan { int id; }; +struct udma_oes_offsets { + /* K3 UDMA Output Event Offset */ + u32 udma_rchan; + + /* BCDMA Output Event Offsets */ + u32 bcdma_bchan_data; + u32 bcdma_bchan_ring; + u32 bcdma_tchan_data; + u32 bcdma_tchan_ring; + u32 bcdma_rchan_data; + u32 bcdma_rchan_ring; +}; + #define UDMA_FLAG_PDMA_ACC32 BIT(0) #define UDMA_FLAG_PDMA_BURST BIT(1) #define UDMA_FLAG_TDTYPE BIT(2) struct udma_match_data { + enum k3_dma_type type; u32 psil_base; bool enable_memcpy_support; u32 flags; @@ -96,7 +124,8 @@ struct udma_match_data { }; struct udma_soc_data { - u32 rchan_oes_offset; + struct udma_oes_offsets oes; + u32 bcdma_trigger_event_offset; }; struct udma_hwdesc { @@ -139,16 +168,19 @@ struct udma_dev { struct udma_rx_flush rx_flush; + int bchan_cnt; int tchan_cnt; int echan_cnt; int rchan_cnt; int rflow_cnt; + unsigned long *bchan_map; unsigned long *tchan_map; unsigned long *rchan_map; unsigned long *rflow_gp_map; unsigned long *rflow_gp_map_allocated; unsigned long *rflow_in_use; + struct udma_bchan *bchans; struct udma_tchan *tchans; struct udma_rchan *rchans; struct udma_rflow *rflows; @@ -156,6 +188,7 @@ struct udma_dev { struct udma_chan *channels; u32 psil_base; u32 atype; + u32 asel; }; struct udma_desc { @@ -200,6 +233,7 @@ struct udma_chan_config { bool notdpkt; /* Suppress sending TDC packet */ int remote_thread_id; u32 atype; + u32 asel; u32 src_thread; u32 dst_thread; enum psil_endpoint_type ep_type; @@ -207,6 +241,8 @@ struct udma_chan_config { bool enable_burst; enum udma_tp_level channel_tpl; /* Channel Throughput Level */ + u32 tr_trigger_type; + enum dma_transfer_direction dir; }; @@ -214,11 +250,13 @@ struct udma_chan { struct virt_dma_chan vc; struct dma_slave_config cfg; struct udma_dev *ud; + struct device *dma_dev; struct udma_desc *desc; struct udma_desc *terminated_desc; struct udma_static_tr static_tr; char *name; + struct udma_bchan *bchan; struct udma_tchan *tchan; struct udma_rchan *rchan; struct udma_rflow *rflow; @@ -354,6 +392,30 @@ static int navss_psil_unpair(struct udma_dev *ud, u32 src_thread, 
src_thread, dst_thread); } +static void k3_configure_chan_coherency(struct dma_chan *chan, u32 asel) +{ + struct device *chan_dev = &chan->dev->device; + + if (asel == 0) { + /* No special handling for the channel */ + chan->dev->chan_dma_dev = false; + + chan_dev->dma_coherent = false; + chan_dev->dma_parms = NULL; + } else if (asel == 14 || asel == 15) { + chan->dev->chan_dma_dev = true; + + chan_dev->dma_coherent = true; + dma_coerce_mask_and_coherent(chan_dev, DMA_BIT_MASK(48)); + chan_dev->dma_parms = chan_dev->parent->dma_parms; + } else { + dev_warn(chan->device->dev, "Invalid ASEL value: %u\n", asel); + + chan_dev->dma_coherent = false; + chan_dev->dma_parms = NULL; + } +} + static void udma_reset_uchan(struct udma_chan *uc) { memset(&uc->config, 0, sizeof(uc->config)); @@ -440,9 +502,7 @@ static void udma_free_hwdesc(struct udma_chan *uc, struct udma_desc *d) d->hwdesc[i].cppi5_desc_vaddr = NULL; } } else if (d->hwdesc[0].cppi5_desc_vaddr) { - struct udma_dev *ud = uc->ud; - - dma_free_coherent(ud->dev, d->hwdesc[0].cppi5_desc_size, + dma_free_coherent(uc->dma_dev, d->hwdesc[0].cppi5_desc_size, d->hwdesc[0].cppi5_desc_vaddr, d->hwdesc[0].cppi5_desc_paddr); @@ -671,8 +731,10 @@ static void udma_reset_counters(struct udma_chan *uc) val = udma_tchanrt_read(uc, UDMA_CHAN_RT_PCNT_REG); udma_tchanrt_write(uc, UDMA_CHAN_RT_PCNT_REG, val); - val = udma_tchanrt_read(uc, UDMA_CHAN_RT_PEER_BCNT_REG); - udma_tchanrt_write(uc, UDMA_CHAN_RT_PEER_BCNT_REG, val); + if (!uc->bchan) { + val = udma_tchanrt_read(uc, UDMA_CHAN_RT_PEER_BCNT_REG); + udma_tchanrt_write(uc, UDMA_CHAN_RT_PEER_BCNT_REG, val); + } } if (uc->rchan) { @@ -1235,6 +1297,42 @@ static struct udma_##res *__udma_reserve_##res(struct udma_dev *ud, \ UDMA_RESERVE_RESOURCE(tchan); UDMA_RESERVE_RESOURCE(rchan); +static struct udma_bchan *__bcdma_reserve_bchan(struct udma_dev *ud, int id) +{ + if (id >= 0) { + if (test_bit(id, ud->bchan_map)) { + dev_err(ud->dev, "bchan%d is in use\n", id); + return ERR_PTR(-ENOENT); + } + } else { + id = find_next_zero_bit(ud->bchan_map, ud->bchan_cnt, 0); + if (id == ud->bchan_cnt) + return ERR_PTR(-ENOENT); + } + + set_bit(id, ud->bchan_map); + return &ud->bchans[id]; +} + +static int bcdma_get_bchan(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + + if (uc->bchan) { + dev_dbg(ud->dev, "chan%d: already have bchan%d allocated\n", + uc->id, uc->bchan->id); + return 0; + } + + uc->bchan = __bcdma_reserve_bchan(ud, -1); + if (IS_ERR(uc->bchan)) + return PTR_ERR(uc->bchan); + + uc->tchan = uc->bchan; + + return 0; +} + static int udma_get_tchan(struct udma_chan *uc) { struct udma_dev *ud = uc->ud; @@ -1327,6 +1425,19 @@ static int udma_get_rflow(struct udma_chan *uc, int flow_id) return PTR_ERR_OR_ZERO(uc->rflow); } +static void bcdma_put_bchan(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + + if (uc->bchan) { + dev_dbg(ud->dev, "chan%d: put bchan%d\n", uc->id, + uc->bchan->id); + clear_bit(uc->bchan->id, ud->bchan_map); + uc->bchan = NULL; + uc->tchan = NULL; + } +} + static void udma_put_rchan(struct udma_chan *uc) { struct udma_dev *ud = uc->ud; @@ -1363,6 +1474,65 @@ static void udma_put_rflow(struct udma_chan *uc) } } +static void bcdma_free_bchan_resources(struct udma_chan *uc) +{ + if (!uc->bchan) + return; + + k3_ringacc_ring_free(uc->bchan->tc_ring); + k3_ringacc_ring_free(uc->bchan->t_ring); + uc->bchan->tc_ring = NULL; + uc->bchan->t_ring = NULL; + k3_configure_chan_coherency(&uc->vc.chan, 0); + + bcdma_put_bchan(uc); +} + +static int bcdma_alloc_bchan_resources(struct 
udma_chan *uc) +{ + struct k3_ring_cfg ring_cfg; + struct udma_dev *ud = uc->ud; + int ret; + + ret = bcdma_get_bchan(uc); + if (ret) + return ret; + + ret = k3_ringacc_request_rings_pair(ud->ringacc, uc->bchan->id, -1, + &uc->bchan->t_ring, + &uc->bchan->tc_ring); + if (ret) { + ret = -EBUSY; + goto err_ring; + } + + memset(&ring_cfg, 0, sizeof(ring_cfg)); + ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE; + ring_cfg.elm_size = K3_RINGACC_RING_ELSIZE_8; + ring_cfg.mode = K3_RINGACC_RING_MODE_RING; + + k3_configure_chan_coherency(&uc->vc.chan, ud->asel); + ring_cfg.asel = ud->asel; + ring_cfg.dma_dev = dmaengine_get_dma_device(&uc->vc.chan); + + ret = k3_ringacc_ring_cfg(uc->bchan->t_ring, &ring_cfg); + if (ret) + goto err_ringcfg; + + return 0; + +err_ringcfg: + k3_ringacc_ring_free(uc->bchan->tc_ring); + uc->bchan->tc_ring = NULL; + k3_ringacc_ring_free(uc->bchan->t_ring); + uc->bchan->t_ring = NULL; + k3_configure_chan_coherency(&uc->vc.chan, 0); +err_ring: + bcdma_put_bchan(uc); + + return ret; +} + static void udma_free_tx_resources(struct udma_chan *uc) { if (!uc->tchan) @@ -1380,15 +1550,19 @@ static int udma_alloc_tx_resources(struct udma_chan *uc) { struct k3_ring_cfg ring_cfg; struct udma_dev *ud = uc->ud; - int ret; + struct udma_tchan *tchan; + int ring_idx, ret; ret = udma_get_tchan(uc); if (ret) return ret; - ret = k3_ringacc_request_rings_pair(ud->ringacc, uc->tchan->id, -1, - &uc->tchan->t_ring, - &uc->tchan->tc_ring); + tchan = uc->tchan; + ring_idx = ud->bchan_cnt + tchan->id; + + ret = k3_ringacc_request_rings_pair(ud->ringacc, ring_idx, -1, + &tchan->t_ring, + &tchan->tc_ring); if (ret) { ret = -EBUSY; goto err_ring; @@ -1397,10 +1571,18 @@ static int udma_alloc_tx_resources(struct udma_chan *uc) memset(&ring_cfg, 0, sizeof(ring_cfg)); ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE; ring_cfg.elm_size = K3_RINGACC_RING_ELSIZE_8; - ring_cfg.mode = K3_RINGACC_RING_MODE_MESSAGE; + if (ud->match_data->type == DMA_TYPE_UDMA) { + ring_cfg.mode = K3_RINGACC_RING_MODE_MESSAGE; + } else { + ring_cfg.mode = K3_RINGACC_RING_MODE_RING; + + k3_configure_chan_coherency(&uc->vc.chan, uc->config.asel); + ring_cfg.asel = uc->config.asel; + ring_cfg.dma_dev = dmaengine_get_dma_device(&uc->vc.chan); + } - ret = k3_ringacc_ring_cfg(uc->tchan->t_ring, &ring_cfg); - ret |= k3_ringacc_ring_cfg(uc->tchan->tc_ring, &ring_cfg); + ret = k3_ringacc_ring_cfg(tchan->t_ring, &ring_cfg); + ret |= k3_ringacc_ring_cfg(tchan->tc_ring, &ring_cfg); if (ret) goto err_ringcfg; @@ -1460,7 +1642,8 @@ static int udma_alloc_rx_resources(struct udma_chan *uc) } rflow = uc->rflow; - fd_ring_id = ud->tchan_cnt + ud->echan_cnt + uc->rchan->id; + fd_ring_id = ud->bchan_cnt + ud->tchan_cnt + ud->echan_cnt + + uc->rchan->id; ret = k3_ringacc_request_rings_pair(ud->ringacc, fd_ring_id, -1, &rflow->fd_ring, &rflow->r_ring); if (ret) { @@ -1470,15 +1653,25 @@ static int udma_alloc_rx_resources(struct udma_chan *uc) memset(&ring_cfg, 0, sizeof(ring_cfg)); - if (uc->config.pkt_mode) - ring_cfg.size = SG_MAX_SEGMENTS; - else + ring_cfg.elm_size = K3_RINGACC_RING_ELSIZE_8; + if (ud->match_data->type == DMA_TYPE_UDMA) { + if (uc->config.pkt_mode) + ring_cfg.size = SG_MAX_SEGMENTS; + else + ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE; + + ring_cfg.mode = K3_RINGACC_RING_MODE_MESSAGE; + } else { ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE; + ring_cfg.mode = K3_RINGACC_RING_MODE_RING; - ring_cfg.elm_size = K3_RINGACC_RING_ELSIZE_8; - ring_cfg.mode = K3_RINGACC_RING_MODE_MESSAGE; + k3_configure_chan_coherency(&uc->vc.chan, uc->config.asel); + 
ring_cfg.asel = uc->config.asel; + ring_cfg.dma_dev = dmaengine_get_dma_device(&uc->vc.chan); + } ret = k3_ringacc_ring_cfg(rflow->fd_ring, &ring_cfg); + ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE; ret |= k3_ringacc_ring_cfg(rflow->r_ring, &ring_cfg); @@ -1500,7 +1693,18 @@ static int udma_alloc_rx_resources(struct udma_chan *uc) return ret; } -#define TISCI_TCHAN_VALID_PARAMS ( \ +#define TISCI_BCDMA_BCHAN_VALID_PARAMS ( \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_EXTENDED_CH_TYPE_VALID) + +#define TISCI_BCDMA_TCHAN_VALID_PARAMS ( \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_SUPR_TDPKT_VALID) + +#define TISCI_BCDMA_RCHAN_VALID_PARAMS ( \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID) + +#define TISCI_UDMA_TCHAN_VALID_PARAMS ( \ TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID | \ TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FILT_EINFO_VALID | \ TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FILT_PSWORDS_VALID | \ @@ -1510,7 +1714,7 @@ static int udma_alloc_rx_resources(struct udma_chan *uc) TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID | \ TI_SCI_MSG_VALUE_RM_UDMAP_CH_ATYPE_VALID) -#define TISCI_RCHAN_VALID_PARAMS ( \ +#define TISCI_UDMA_RCHAN_VALID_PARAMS ( \ TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID | \ TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID | \ TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID | \ @@ -1535,7 +1739,7 @@ static int udma_tisci_m2m_channel_config(struct udma_chan *uc) struct ti_sci_msg_rm_udmap_tx_ch_cfg req_tx = { 0 }; struct ti_sci_msg_rm_udmap_rx_ch_cfg req_rx = { 0 }; - req_tx.valid_params = TISCI_TCHAN_VALID_PARAMS; + req_tx.valid_params = TISCI_UDMA_TCHAN_VALID_PARAMS; req_tx.nav_id = tisci_rm->tisci_dev_id; req_tx.index = tchan->id; req_tx.tx_chan_type = TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_BCOPY_PBRR; @@ -1549,7 +1753,7 @@ static int udma_tisci_m2m_channel_config(struct udma_chan *uc) return ret; } - req_rx.valid_params = TISCI_RCHAN_VALID_PARAMS; + req_rx.valid_params = TISCI_UDMA_RCHAN_VALID_PARAMS; req_rx.nav_id = tisci_rm->tisci_dev_id; req_rx.index = rchan->id; req_rx.rx_fetch_size = sizeof(struct cppi5_desc_hdr_t) >> 2; @@ -1564,6 +1768,27 @@ static int udma_tisci_m2m_channel_config(struct udma_chan *uc) return ret; } +static int bcdma_tisci_m2m_channel_config(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; + const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops; + struct ti_sci_msg_rm_udmap_tx_ch_cfg req_tx = { 0 }; + struct udma_bchan *bchan = uc->bchan; + int ret = 0; + + req_tx.valid_params = TISCI_BCDMA_BCHAN_VALID_PARAMS; + req_tx.nav_id = tisci_rm->tisci_dev_id; + req_tx.extended_ch_type = TI_SCI_RM_BCDMA_EXTENDED_CH_TYPE_BCHAN; + req_tx.index = bchan->id; + + ret = tisci_ops->tx_ch_cfg(tisci_rm->tisci, &req_tx); + if (ret) + dev_err(ud->dev, "bchan%d cfg failed %d\n", bchan->id, ret); + + return ret; +} + static int udma_tisci_tx_channel_config(struct udma_chan *uc) { struct udma_dev *ud = uc->ud; @@ -1584,7 +1809,7 @@ static int udma_tisci_tx_channel_config(struct udma_chan *uc) fetch_size = sizeof(struct cppi5_desc_hdr_t); } - req_tx.valid_params = TISCI_TCHAN_VALID_PARAMS; + req_tx.valid_params = TISCI_UDMA_TCHAN_VALID_PARAMS; req_tx.nav_id = tisci_rm->tisci_dev_id; req_tx.index = tchan->id; req_tx.tx_chan_type = mode; @@ -1607,6 +1832,33 @@ static int udma_tisci_tx_channel_config(struct udma_chan *uc) return ret; } +static int bcdma_tisci_tx_channel_config(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + 
struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; + const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops; + struct udma_tchan *tchan = uc->tchan; + struct ti_sci_msg_rm_udmap_tx_ch_cfg req_tx = { 0 }; + int ret = 0; + + req_tx.valid_params = TISCI_BCDMA_TCHAN_VALID_PARAMS; + req_tx.nav_id = tisci_rm->tisci_dev_id; + req_tx.index = tchan->id; + req_tx.tx_supr_tdpkt = uc->config.notdpkt; + if (ud->match_data->flags & UDMA_FLAG_TDTYPE) { + /* wait for peer to complete the teardown for PDMAs */ + req_tx.valid_params |= + TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_TDTYPE_VALID; + req_tx.tx_tdtype = 1; + } + + ret = tisci_ops->tx_ch_cfg(tisci_rm->tisci, &req_tx); + if (ret) + dev_err(ud->dev, "tchan%d cfg failed %d\n", tchan->id, ret); + + return ret; +} + static int udma_tisci_rx_channel_config(struct udma_chan *uc) { struct udma_dev *ud = uc->ud; @@ -1629,7 +1881,7 @@ static int udma_tisci_rx_channel_config(struct udma_chan *uc) fetch_size = sizeof(struct cppi5_desc_hdr_t); } - req_rx.valid_params = TISCI_RCHAN_VALID_PARAMS; + req_rx.valid_params = TISCI_UDMA_RCHAN_VALID_PARAMS; req_rx.nav_id = tisci_rm->tisci_dev_id; req_rx.index = rchan->id; req_rx.rx_fetch_size = fetch_size >> 2; @@ -1688,6 +1940,26 @@ static int udma_tisci_rx_channel_config(struct udma_chan *uc) return 0; } +static int bcdma_tisci_rx_channel_config(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; + const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops; + struct udma_rchan *rchan = uc->rchan; + struct ti_sci_msg_rm_udmap_rx_ch_cfg req_rx = { 0 }; + int ret = 0; + + req_rx.valid_params = TISCI_BCDMA_RCHAN_VALID_PARAMS; + req_rx.nav_id = tisci_rm->tisci_dev_id; + req_rx.index = rchan->id; + + ret = tisci_ops->rx_ch_cfg(tisci_rm->tisci, &req_rx); + if (ret) + dev_err(ud->dev, "rchan%d cfg failed %d\n", rchan->id, ret); + + return ret; +} + static int udma_alloc_chan_resources(struct dma_chan *chan) { struct udma_chan *uc = to_udma_chan(chan); @@ -1697,6 +1969,8 @@ static int udma_alloc_chan_resources(struct dma_chan *chan) u32 irq_udma_idx; int ret; + uc->dma_dev = ud->dev; + if (uc->config.pkt_mode || uc->config.dir == DMA_MEM_TO_MEM) { uc->use_dma_pool = true; /* in case of MEM_TO_MEM we have maximum of two TRs */ @@ -1792,7 +2066,7 @@ static int udma_alloc_chan_resources(struct dma_chan *chan) K3_PSIL_DST_THREAD_ID_OFFSET; irq_ring = uc->rflow->r_ring; - irq_udma_idx = soc_data->rchan_oes_offset + uc->rchan->id; + irq_udma_idx = soc_data->oes.udma_rchan + uc->rchan->id; ret = udma_tisci_rx_channel_config(uc); break; @@ -1892,81 +2166,293 @@ static int udma_alloc_chan_resources(struct dma_chan *chan) return ret; } -static int udma_slave_config(struct dma_chan *chan, - struct dma_slave_config *cfg) +static int bcdma_alloc_chan_resources(struct dma_chan *chan) { struct udma_chan *uc = to_udma_chan(chan); + struct udma_dev *ud = to_udma_dev(chan->device); + const struct udma_oes_offsets *oes = &ud->soc_data->oes; + u32 irq_udma_idx, irq_ring_idx; + int ret; - memcpy(&uc->cfg, cfg, sizeof(uc->cfg)); + /* Only TR mode is supported */ + uc->config.pkt_mode = false; - return 0; -} + /* + * Make sure that the completion is in a known state: + * No teardown, the channel is idle + */ + reinit_completion(&uc->teardown_completed); + complete_all(&uc->teardown_completed); + uc->state = UDMA_CHAN_IS_IDLE; -static struct udma_desc *udma_alloc_tr_desc(struct udma_chan *uc, - size_t tr_size, int tr_count, - enum dma_transfer_direction dir) -{ - struct 
udma_hwdesc *hwdesc; - struct cppi5_desc_hdr_t *tr_desc; - struct udma_desc *d; - u32 reload_count = 0; - u32 ring_id; + switch (uc->config.dir) { + case DMA_MEM_TO_MEM: + /* Non synchronized - mem to mem type of transfer */ + dev_dbg(uc->ud->dev, "%s: chan%d as MEM-to-MEM\n", __func__, + uc->id); - switch (tr_size) { - case 16: - case 32: - case 64: - case 128: - break; - default: - dev_err(uc->ud->dev, "Unsupported TR size of %zu\n", tr_size); - return NULL; - } + ret = bcdma_alloc_bchan_resources(uc); + if (ret) + return ret; - /* We have only one descriptor containing multiple TRs */ - d = kzalloc(sizeof(*d) + sizeof(d->hwdesc[0]), GFP_NOWAIT); - if (!d) - return NULL; + irq_ring_idx = uc->bchan->id + oes->bcdma_bchan_ring; + irq_udma_idx = uc->bchan->id + oes->bcdma_bchan_data; - d->sglen = tr_count; + ret = bcdma_tisci_m2m_channel_config(uc); + break; + case DMA_MEM_TO_DEV: + /* Slave transfer synchronized - mem to dev (TX) transfer */ + dev_dbg(uc->ud->dev, "%s: chan%d as MEM-to-DEV\n", __func__, + uc->id); - d->hwdesc_count = 1; - hwdesc = &d->hwdesc[0]; + ret = udma_alloc_tx_resources(uc); + if (ret) { + uc->config.remote_thread_id = -1; + return ret; + } - /* Allocate memory for DMA ring descriptor */ - if (uc->use_dma_pool) { - hwdesc->cppi5_desc_size = uc->config.hdesc_size; - hwdesc->cppi5_desc_vaddr = dma_pool_zalloc(uc->hdesc_pool, - GFP_NOWAIT, - &hwdesc->cppi5_desc_paddr); - } else { - hwdesc->cppi5_desc_size = cppi5_trdesc_calc_size(tr_size, - tr_count); - hwdesc->cppi5_desc_size = ALIGN(hwdesc->cppi5_desc_size, - uc->ud->desc_align); - hwdesc->cppi5_desc_vaddr = dma_alloc_coherent(uc->ud->dev, - hwdesc->cppi5_desc_size, - &hwdesc->cppi5_desc_paddr, - GFP_NOWAIT); - } + uc->config.src_thread = ud->psil_base + uc->tchan->id; + uc->config.dst_thread = uc->config.remote_thread_id; + uc->config.dst_thread |= K3_PSIL_DST_THREAD_ID_OFFSET; - if (!hwdesc->cppi5_desc_vaddr) { - kfree(d); - return NULL; - } + irq_ring_idx = uc->tchan->id + oes->bcdma_tchan_ring; + irq_udma_idx = uc->tchan->id + oes->bcdma_tchan_data; - /* Start of the TR req records */ - hwdesc->tr_req_base = hwdesc->cppi5_desc_vaddr + tr_size; - /* Start address of the TR response array */ - hwdesc->tr_resp_base = hwdesc->tr_req_base + tr_size * tr_count; + ret = bcdma_tisci_tx_channel_config(uc); + break; + case DMA_DEV_TO_MEM: + /* Slave transfer synchronized - dev to mem (RX) transfer */ + dev_dbg(uc->ud->dev, "%s: chan%d as DEV-to-MEM\n", __func__, + uc->id); - tr_desc = hwdesc->cppi5_desc_vaddr; + ret = udma_alloc_rx_resources(uc); + if (ret) { + uc->config.remote_thread_id = -1; + return ret; + } - if (uc->cyclic) - reload_count = CPPI5_INFO0_TRDESC_RLDCNT_INFINITE; + uc->config.src_thread = uc->config.remote_thread_id; + uc->config.dst_thread = (ud->psil_base + uc->rchan->id) | + K3_PSIL_DST_THREAD_ID_OFFSET; - if (dir == DMA_DEV_TO_MEM) - ring_id = k3_ringacc_get_ring_id(uc->rflow->r_ring); + irq_ring_idx = uc->rchan->id + oes->bcdma_rchan_ring; + irq_udma_idx = uc->rchan->id + oes->bcdma_rchan_data; + + ret = bcdma_tisci_rx_channel_config(uc); + break; + default: + /* Cannot happen */ + dev_err(uc->ud->dev, "%s: chan%d invalid direction (%u)\n", + __func__, uc->id, uc->config.dir); + return -EINVAL; + } + + /* check if the channel configuration was successful */ + if (ret) + goto err_res_free; + + if (udma_is_chan_running(uc)) { + dev_warn(ud->dev, "chan%d: is running!\n", uc->id); + udma_reset_chan(uc, false); + if (udma_is_chan_running(uc)) { + dev_err(ud->dev, "chan%d: won't stop!\n", uc->id); + ret
= -EBUSY; + goto err_res_free; + } + } + + uc->dma_dev = dmaengine_get_dma_device(chan); + if (uc->config.dir == DMA_MEM_TO_MEM && !uc->config.tr_trigger_type) { + uc->config.hdesc_size = cppi5_trdesc_calc_size( + sizeof(struct cppi5_tr_type15_t), 2); + + uc->hdesc_pool = dma_pool_create(uc->name, ud->ddev.dev, + uc->config.hdesc_size, + ud->desc_align, + 0); + if (!uc->hdesc_pool) { + dev_err(ud->ddev.dev, + "Descriptor pool allocation failed\n"); + uc->use_dma_pool = false; + return -ENOMEM; + } + + uc->use_dma_pool = true; + } else if (uc->config.dir != DMA_MEM_TO_MEM) { + /* PSI-L pairing */ + ret = navss_psil_pair(ud, uc->config.src_thread, + uc->config.dst_thread); + if (ret) { + dev_err(ud->dev, + "PSI-L pairing failed: 0x%04x -> 0x%04x\n", + uc->config.src_thread, uc->config.dst_thread); + goto err_res_free; + } + + uc->psil_paired = true; + } + + uc->irq_num_ring = ti_sci_inta_msi_get_virq(ud->dev, irq_ring_idx); + if (uc->irq_num_ring <= 0) { + dev_err(ud->dev, "Failed to get ring irq (index: %u)\n", + irq_ring_idx); + ret = -EINVAL; + goto err_psi_free; + } + + ret = request_irq(uc->irq_num_ring, udma_ring_irq_handler, + IRQF_TRIGGER_HIGH, uc->name, uc); + if (ret) { + dev_err(ud->dev, "chan%d: ring irq request failed\n", uc->id); + goto err_irq_free; + } + + /* Event from BCDMA (TR events) only needed for slave channels */ + if (is_slave_direction(uc->config.dir)) { + uc->irq_num_udma = ti_sci_inta_msi_get_virq(ud->dev, + irq_udma_idx); + if (uc->irq_num_udma <= 0) { + dev_err(ud->dev, "Failed to get bcdma irq (index: %u)\n", + irq_udma_idx); + free_irq(uc->irq_num_ring, uc); + ret = -EINVAL; + goto err_irq_free; + } + + ret = request_irq(uc->irq_num_udma, udma_udma_irq_handler, 0, + uc->name, uc); + if (ret) { + dev_err(ud->dev, "chan%d: BCDMA irq request failed\n", + uc->id); + free_irq(uc->irq_num_ring, uc); + goto err_irq_free; + } + } else { + uc->irq_num_udma = 0; + } + + udma_reset_rings(uc); + + INIT_DELAYED_WORK_ONSTACK(&uc->tx_drain.work, + udma_check_tx_completion); + return 0; + +err_irq_free: + uc->irq_num_ring = 0; + uc->irq_num_udma = 0; +err_psi_free: + if (uc->psil_paired) + navss_psil_unpair(ud, uc->config.src_thread, + uc->config.dst_thread); + uc->psil_paired = false; +err_res_free: + bcdma_free_bchan_resources(uc); + udma_free_tx_resources(uc); + udma_free_rx_resources(uc); + + udma_reset_uchan(uc); + + if (uc->use_dma_pool) { + dma_pool_destroy(uc->hdesc_pool); + uc->use_dma_pool = false; + } + + return ret; +} + +static int bcdma_router_config(struct dma_chan *chan) +{ + struct k3_event_route_data *router_data = chan->route_data; + struct udma_chan *uc = to_udma_chan(chan); + u32 trigger_event; + + if (!uc->bchan) + return -EINVAL; + + if (uc->config.tr_trigger_type != 1 && uc->config.tr_trigger_type != 2) + return -EINVAL; + + trigger_event = uc->ud->soc_data->bcdma_trigger_event_offset; + trigger_event += (uc->bchan->id * 2) + uc->config.tr_trigger_type - 1; + + return router_data->set_event(router_data->priv, trigger_event); +} + +static int udma_slave_config(struct dma_chan *chan, + struct dma_slave_config *cfg) +{ + struct udma_chan *uc = to_udma_chan(chan); + + memcpy(&uc->cfg, cfg, sizeof(uc->cfg)); + + return 0; +} + +static struct udma_desc *udma_alloc_tr_desc(struct udma_chan *uc, + size_t tr_size, int tr_count, + enum dma_transfer_direction dir) +{ + struct udma_hwdesc *hwdesc; + struct cppi5_desc_hdr_t *tr_desc; + struct udma_desc *d; + u32 reload_count = 0; + u32 ring_id; + + switch (tr_size) { + case 16: + case 32: + case 64: + case 128: + 
break; + default: + dev_err(uc->ud->dev, "Unsupported TR size of %zu\n", tr_size); + return NULL; + } + + /* We have only one descriptor containing multiple TRs */ + d = kzalloc(sizeof(*d) + sizeof(d->hwdesc[0]), GFP_NOWAIT); + if (!d) + return NULL; + + d->sglen = tr_count; + + d->hwdesc_count = 1; + hwdesc = &d->hwdesc[0]; + + /* Allocate memory for DMA ring descriptor */ + if (uc->use_dma_pool) { + hwdesc->cppi5_desc_size = uc->config.hdesc_size; + hwdesc->cppi5_desc_vaddr = dma_pool_zalloc(uc->hdesc_pool, + GFP_NOWAIT, + &hwdesc->cppi5_desc_paddr); + } else { + hwdesc->cppi5_desc_size = cppi5_trdesc_calc_size(tr_size, + tr_count); + hwdesc->cppi5_desc_size = ALIGN(hwdesc->cppi5_desc_size, + uc->ud->desc_align); + hwdesc->cppi5_desc_vaddr = dma_alloc_coherent(uc->ud->dev, + hwdesc->cppi5_desc_size, + &hwdesc->cppi5_desc_paddr, + GFP_NOWAIT); + } + + if (!hwdesc->cppi5_desc_vaddr) { + kfree(d); + return NULL; + } + + /* Start of the TR req records */ + hwdesc->tr_req_base = hwdesc->cppi5_desc_vaddr + tr_size; + /* Start address of the TR response array */ + hwdesc->tr_resp_base = hwdesc->tr_req_base + tr_size * tr_count; + + tr_desc = hwdesc->cppi5_desc_vaddr; + + if (uc->cyclic) + reload_count = CPPI5_INFO0_TRDESC_RLDCNT_INFINITE; + + if (dir == DMA_DEV_TO_MEM) + ring_id = k3_ringacc_get_ring_id(uc->rflow->r_ring); else ring_id = k3_ringacc_get_ring_id(uc->tchan->tc_ring); @@ -2036,6 +2522,7 @@ udma_prep_slave_sg_tr(struct udma_chan *uc, struct scatterlist *sgl, size_t tr_size; int num_tr = 0; int tr_idx = 0; + u64 asel; /* estimate the number of TRs we will need */ for_each_sg(sgl, sgent, sglen, i) { @@ -2053,6 +2540,11 @@ udma_prep_slave_sg_tr(struct udma_chan *uc, struct scatterlist *sgl, d->sglen = sglen; + if (uc->ud->match_data->type == DMA_TYPE_UDMA) + asel = 0; + else + asel = (u64)uc->config.asel << K3_ADDRESS_ASEL_SHIFT; + tr_req = d->hwdesc[0].tr_req_base; for_each_sg(sgl, sgent, sglen, i) { dma_addr_t sg_addr = sg_dma_address(sgent); @@ -2071,6 +2563,7 @@ udma_prep_slave_sg_tr(struct udma_chan *uc, struct scatterlist *sgl, false, CPPI5_TR_EVENT_SIZE_COMPLETION, 0); cppi5_tr_csf_set(&tr_req[tr_idx].flags, CPPI5_TR_CSF_SUPR_EVT); + sg_addr |= asel; tr_req[tr_idx].addr = sg_addr; tr_req[tr_idx].icnt0 = tr0_cnt0; tr_req[tr_idx].icnt1 = tr0_cnt1; @@ -2100,6 +2593,205 @@ udma_prep_slave_sg_tr(struct udma_chan *uc, struct scatterlist *sgl, return d; } +static struct udma_desc * +udma_prep_slave_sg_triggered_tr(struct udma_chan *uc, struct scatterlist *sgl, + unsigned int sglen, + enum dma_transfer_direction dir, + unsigned long tx_flags, void *context) +{ + struct scatterlist *sgent; + struct cppi5_tr_type15_t *tr_req = NULL; + enum dma_slave_buswidth dev_width; + u16 tr_cnt0, tr_cnt1; + dma_addr_t dev_addr; + struct udma_desc *d; + unsigned int i; + size_t tr_size, sg_len; + int num_tr = 0; + int tr_idx = 0; + u32 burst, trigger_size, port_window; + u64 asel; + + if (dir == DMA_DEV_TO_MEM) { + dev_addr = uc->cfg.src_addr; + dev_width = uc->cfg.src_addr_width; + burst = uc->cfg.src_maxburst; + port_window = uc->cfg.src_port_window_size; + } else if (dir == DMA_MEM_TO_DEV) { + dev_addr = uc->cfg.dst_addr; + dev_width = uc->cfg.dst_addr_width; + burst = uc->cfg.dst_maxburst; + port_window = uc->cfg.dst_port_window_size; + } else { + dev_err(uc->ud->dev, "%s: bad direction?\n", __func__); + return NULL; + } + + if (!burst) + burst = 1; + + if (port_window) { + if (port_window != burst) { + dev_err(uc->ud->dev, + "The burst must be equal to port_window\n"); + return NULL; + } + + 
tr_cnt0 = dev_width * port_window; + tr_cnt1 = 1; + } else { + tr_cnt0 = dev_width; + tr_cnt1 = burst; + } + trigger_size = tr_cnt0 * tr_cnt1; + + /* estimate the number of TRs we will need */ + for_each_sg(sgl, sgent, sglen, i) { + sg_len = sg_dma_len(sgent); + + if (sg_len % trigger_size) { + dev_err(uc->ud->dev, + "Not aligned SG entry (%zu for %u)\n", sg_len, + trigger_size); + return NULL; + } + + if (sg_len / trigger_size < SZ_64K) + num_tr++; + else + num_tr += 2; + } + + /* Now allocate and setup the descriptor. */ + tr_size = sizeof(struct cppi5_tr_type15_t); + d = udma_alloc_tr_desc(uc, tr_size, num_tr, dir); + if (!d) + return NULL; + + d->sglen = sglen; + + if (uc->ud->match_data->type == DMA_TYPE_UDMA) { + asel = 0; + } else { + asel = (u64)uc->config.asel << K3_ADDRESS_ASEL_SHIFT; + dev_addr |= asel; + } + + tr_req = d->hwdesc[0].tr_req_base; + for_each_sg(sgl, sgent, sglen, i) { + u16 tr0_cnt2, tr0_cnt3, tr1_cnt2; + dma_addr_t sg_addr = sg_dma_address(sgent); + + sg_len = sg_dma_len(sgent); + num_tr = udma_get_tr_counters(sg_len / trigger_size, 0, + &tr0_cnt2, &tr0_cnt3, &tr1_cnt2); + if (num_tr < 0) { + dev_err(uc->ud->dev, "size %zu is not supported\n", + sg_len); + udma_free_hwdesc(uc, d); + kfree(d); + return NULL; + } + + cppi5_tr_init(&tr_req[tr_idx].flags, CPPI5_TR_TYPE15, false, + true, CPPI5_TR_EVENT_SIZE_COMPLETION, 0); + cppi5_tr_csf_set(&tr_req[tr_idx].flags, CPPI5_TR_CSF_SUPR_EVT); + cppi5_tr_set_trigger(&tr_req[tr_idx].flags, + uc->config.tr_trigger_type, + CPPI5_TR_TRIGGER_TYPE_ICNT2_DEC, 0, 0); + + sg_addr |= asel; + if (dir == DMA_DEV_TO_MEM) { + tr_req[tr_idx].addr = dev_addr; + tr_req[tr_idx].icnt0 = tr_cnt0; + tr_req[tr_idx].icnt1 = tr_cnt1; + tr_req[tr_idx].icnt2 = tr0_cnt2; + tr_req[tr_idx].icnt3 = tr0_cnt3; + tr_req[tr_idx].dim1 = (-1) * tr_cnt0; + + tr_req[tr_idx].daddr = sg_addr; + tr_req[tr_idx].dicnt0 = tr_cnt0; + tr_req[tr_idx].dicnt1 = tr_cnt1; + tr_req[tr_idx].dicnt2 = tr0_cnt2; + tr_req[tr_idx].dicnt3 = tr0_cnt3; + tr_req[tr_idx].ddim1 = tr_cnt0; + tr_req[tr_idx].ddim2 = trigger_size; + tr_req[tr_idx].ddim3 = trigger_size * tr0_cnt2; + } else { + tr_req[tr_idx].addr = sg_addr; + tr_req[tr_idx].icnt0 = tr_cnt0; + tr_req[tr_idx].icnt1 = tr_cnt1; + tr_req[tr_idx].icnt2 = tr0_cnt2; + tr_req[tr_idx].icnt3 = tr0_cnt3; + tr_req[tr_idx].dim1 = tr_cnt0; + tr_req[tr_idx].dim2 = trigger_size; + tr_req[tr_idx].dim3 = trigger_size * tr0_cnt2; + + tr_req[tr_idx].daddr = dev_addr; + tr_req[tr_idx].dicnt0 = tr_cnt0; + tr_req[tr_idx].dicnt1 = tr_cnt1; + tr_req[tr_idx].dicnt2 = tr0_cnt2; + tr_req[tr_idx].dicnt3 = tr0_cnt3; + tr_req[tr_idx].ddim1 = (-1) * tr_cnt0; + } + + tr_idx++; + + if (num_tr == 2) { + cppi5_tr_init(&tr_req[tr_idx].flags, CPPI5_TR_TYPE15, + false, true, + CPPI5_TR_EVENT_SIZE_COMPLETION, 0); + cppi5_tr_csf_set(&tr_req[tr_idx].flags, + CPPI5_TR_CSF_SUPR_EVT); + cppi5_tr_set_trigger(&tr_req[tr_idx].flags, + uc->config.tr_trigger_type, + CPPI5_TR_TRIGGER_TYPE_ICNT2_DEC, + 0, 0); + + sg_addr += trigger_size * tr0_cnt2 * tr0_cnt3; + if (dir == DMA_DEV_TO_MEM) { + tr_req[tr_idx].addr = dev_addr; + tr_req[tr_idx].icnt0 = tr_cnt0; + tr_req[tr_idx].icnt1 = tr_cnt1; + tr_req[tr_idx].icnt2 = tr1_cnt2; + tr_req[tr_idx].icnt3 = 1; + tr_req[tr_idx].dim1 = (-1) * tr_cnt0; + + tr_req[tr_idx].daddr = sg_addr; + tr_req[tr_idx].dicnt0 = tr_cnt0; + tr_req[tr_idx].dicnt1 = tr_cnt1; + tr_req[tr_idx].dicnt2 = tr1_cnt2; + tr_req[tr_idx].dicnt3 = 1; + tr_req[tr_idx].ddim1 = tr_cnt0; + tr_req[tr_idx].ddim2 = trigger_size; + } else { + tr_req[tr_idx].addr = sg_addr; + 
tr_req[tr_idx].icnt0 = tr_cnt0; + tr_req[tr_idx].icnt1 = tr_cnt1; + tr_req[tr_idx].icnt2 = tr1_cnt2; + tr_req[tr_idx].icnt3 = 1; + tr_req[tr_idx].dim1 = tr_cnt0; + tr_req[tr_idx].dim2 = trigger_size; + + tr_req[tr_idx].daddr = dev_addr; + tr_req[tr_idx].dicnt0 = tr_cnt0; + tr_req[tr_idx].dicnt1 = tr_cnt1; + tr_req[tr_idx].dicnt2 = tr1_cnt2; + tr_req[tr_idx].dicnt3 = 1; + tr_req[tr_idx].ddim1 = (-1) * tr_cnt0; + } + tr_idx++; + } + + d->residue += sg_len; + } + + cppi5_tr_csf_set(&tr_req[tr_idx - 1].flags, + CPPI5_TR_CSF_SUPR_EVT | CPPI5_TR_CSF_EOP); + + return d; +} + static int udma_configure_statictr(struct udma_chan *uc, struct udma_desc *d, enum dma_slave_buswidth dev_width, u16 elcnt) @@ -2341,7 +3033,8 @@ udma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, struct udma_desc *d; u32 burst; - if (dir != uc->config.dir) { + if (dir != uc->config.dir && + (uc->config.dir == DMA_MEM_TO_MEM && !uc->config.tr_trigger_type)) { dev_err(chan->device->dev, "%s: chan%d is for %s, not supporting %s\n", __func__, uc->id, @@ -2367,9 +3060,12 @@ udma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, if (uc->config.pkt_mode) d = udma_prep_slave_sg_pkt(uc, sgl, sglen, dir, tx_flags, context); - else + else if (is_slave_direction(uc->config.dir)) d = udma_prep_slave_sg_tr(uc, sgl, sglen, dir, tx_flags, context); + else + d = udma_prep_slave_sg_triggered_tr(uc, sgl, sglen, dir, + tx_flags, context); if (!d) return NULL; @@ -2423,7 +3119,12 @@ udma_prep_dma_cyclic_tr(struct udma_chan *uc, dma_addr_t buf_addr, return NULL; tr_req = d->hwdesc[0].tr_req_base; - period_addr = buf_addr; + if (uc->ud->match_data->type == DMA_TYPE_UDMA) + period_addr = buf_addr; + else + period_addr = buf_addr | + ((u64)uc->config.asel << K3_ADDRESS_ASEL_SHIFT); + for (i = 0; i < periods; i++) { int tr_idx = i * num_tr; @@ -2629,6 +3330,11 @@ udma_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, d->tr_idx = 0; d->residue = len; + if (uc->ud->match_data->type != DMA_TYPE_UDMA) { + src |= (u64)uc->ud->asel << K3_ADDRESS_ASEL_SHIFT; + dest |= (u64)uc->ud->asel << K3_ADDRESS_ASEL_SHIFT; + } + tr_req = d->hwdesc[0].tr_req_base; cppi5_tr_init(&tr_req[0].flags, CPPI5_TR_TYPE15, false, true, @@ -2986,6 +3692,7 @@ static void udma_free_chan_resources(struct dma_chan *chan) vchan_free_chan_resources(&uc->vc); tasklet_kill(&uc->vc.task); + bcdma_free_bchan_resources(uc); udma_free_tx_resources(uc); udma_free_rx_resources(uc); udma_reset_uchan(uc); @@ -2997,10 +3704,13 @@ static void udma_free_chan_resources(struct dma_chan *chan) } static struct platform_driver udma_driver; +static struct platform_driver bcdma_driver; struct udma_filter_param { int remote_thread_id; u32 atype; + u32 asel; + u32 tr_trigger_type; }; static bool udma_dma_filter_fn(struct dma_chan *chan, void *param) @@ -3011,7 +3721,8 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param) struct udma_chan *uc; struct udma_dev *ud; - if (chan->device->dev->driver != &udma_driver.driver) + if (chan->device->dev->driver != &udma_driver.driver && + chan->device->dev->driver != &bcdma_driver.driver) return false; uc = to_udma_chan(chan); @@ -3025,13 +3736,25 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param) return false; } + if (filter_param->asel > 15) { + dev_err(ud->dev, "Invalid channel asel: %u\n", + filter_param->asel); + return false; + } + ucc->remote_thread_id = filter_param->remote_thread_id; ucc->atype = filter_param->atype; + ucc->asel = filter_param->asel; + ucc->tr_trigger_type = 
filter_param->tr_trigger_type; - if (ucc->remote_thread_id & K3_PSIL_DST_THREAD_ID_OFFSET) + if (ucc->tr_trigger_type) { + ucc->dir = DMA_MEM_TO_MEM; + goto triggered_bchan; + } else if (ucc->remote_thread_id & K3_PSIL_DST_THREAD_ID_OFFSET) { ucc->dir = DMA_MEM_TO_DEV; - else + } else { ucc->dir = DMA_DEV_TO_MEM; + } ep_config = psil_get_ep_config(ucc->remote_thread_id); if (IS_ERR(ep_config)) { @@ -3040,6 +3763,19 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param) ucc->dir = DMA_MEM_TO_MEM; ucc->remote_thread_id = -1; ucc->atype = 0; + ucc->asel = 0; + return false; + } + + if (ud->match_data->type == DMA_TYPE_BCDMA && + ep_config->pkt_mode) { + dev_err(ud->dev, + "Only TR mode is supported (psi-l thread 0x%04x)\n", + ucc->remote_thread_id); + ucc->dir = DMA_MEM_TO_MEM; + ucc->remote_thread_id = -1; + ucc->atype = 0; + ucc->asel = 0; return false; } @@ -3071,6 +3807,13 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param) ucc->remote_thread_id, dmaengine_get_direction_text(ucc->dir)); return true; + +triggered_bchan: + dev_dbg(ud->dev, "chan%d: triggered channel (type: %u)\n", uc->id, + ucc->tr_trigger_type); + + return true; + } static struct dma_chan *udma_of_xlate(struct of_phandle_args *dma_spec, @@ -3081,14 +3824,33 @@ static struct dma_chan *udma_of_xlate(struct of_phandle_args *dma_spec, struct udma_filter_param filter_param; struct dma_chan *chan; - if (dma_spec->args_count != 1 && dma_spec->args_count != 2) - return NULL; + if (ud->match_data->type == DMA_TYPE_BCDMA) { + if (dma_spec->args_count != 3) + return NULL; - filter_param.remote_thread_id = dma_spec->args[0]; - if (dma_spec->args_count == 2) - filter_param.atype = dma_spec->args[1]; - else + filter_param.tr_trigger_type = dma_spec->args[0]; + filter_param.remote_thread_id = dma_spec->args[1]; + filter_param.asel = dma_spec->args[2]; filter_param.atype = 0; + } else { + if (dma_spec->args_count != 1 && dma_spec->args_count != 2) + return NULL; + + filter_param.remote_thread_id = dma_spec->args[0]; + filter_param.tr_trigger_type = 0; + if (dma_spec->args_count == 2) { + if (ud->match_data->type == DMA_TYPE_UDMA) { + filter_param.atype = dma_spec->args[1]; + filter_param.asel = 0; + } else { + filter_param.atype = 0; + filter_param.asel = dma_spec->args[1]; + } + } else { + filter_param.atype = 0; + filter_param.asel = 0; + } + } chan = __dma_request_channel(&mask, udma_dma_filter_fn, &filter_param, ofdma->of_node); @@ -3101,18 +3863,21 @@ static struct dma_chan *udma_of_xlate(struct of_phandle_args *dma_spec, } static struct udma_match_data am654_main_data = { + .type = DMA_TYPE_UDMA, .psil_base = 0x1000, .enable_memcpy_support = true, .statictr_z_mask = GENMASK(11, 0), }; static struct udma_match_data am654_mcu_data = { + .type = DMA_TYPE_UDMA, .psil_base = 0x6000, .enable_memcpy_support = false, .statictr_z_mask = GENMASK(11, 0), }; static struct udma_match_data j721e_main_data = { + .type = DMA_TYPE_UDMA, .psil_base = 0x1000, .enable_memcpy_support = true, .flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE, @@ -3120,12 +3885,21 @@ static struct udma_match_data j721e_main_data = { }; static struct udma_match_data j721e_mcu_data = { + .type = DMA_TYPE_UDMA, .psil_base = 0x6000, .enable_memcpy_support = false, /* MEM_TO_MEM is slow via MCU UDMA */ .flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE, .statictr_z_mask = GENMASK(23, 0), }; +static struct udma_match_data am64_bcdma_data = { + .type = DMA_TYPE_BCDMA, + .psil_base = 0x2000, /* for tchan 
and rchan, not applicable to bchan */ + .enable_memcpy_support = true, /* Supported via bchan */ + .flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE, + .statictr_z_mask = GENMASK(23, 0), +}; + static const struct of_device_id udma_of_match[] = { { .compatible = "ti,am654-navss-main-udmap", @@ -3144,31 +3918,91 @@ static const struct of_device_id udma_of_match[] = { { /* Sentinel */ }, }; +static const struct of_device_id bcdma_of_match[] = { + { + .compatible = "ti,am64-dmss-bcdma", + .data = &am64_bcdma_data, + }, + { /* Sentinel */ }, +}; + static struct udma_soc_data am654_soc_data = { - .rchan_oes_offset = 0x200, + .oes = { + .udma_rchan = 0x200, + }, }; static struct udma_soc_data j721e_soc_data = { - .rchan_oes_offset = 0x400, + .oes = { + .udma_rchan = 0x400, + }, }; static struct udma_soc_data j7200_soc_data = { - .rchan_oes_offset = 0x80, + .oes = { + .udma_rchan = 0x80, + }, +}; + +static struct udma_soc_data am64_soc_data = { + .oes = { + .bcdma_bchan_data = 0x2200, + .bcdma_bchan_ring = 0x2400, + .bcdma_tchan_data = 0x2800, + .bcdma_tchan_ring = 0x2a00, + .bcdma_rchan_data = 0x2e00, + .bcdma_rchan_ring = 0x3000, + }, + .bcdma_trigger_event_offset = 0xc400, }; static const struct soc_device_attribute k3_soc_devices[] = { { .family = "AM65X", .data = &am654_soc_data }, { .family = "J721E", .data = &j721e_soc_data }, { .family = "J7200", .data = &j7200_soc_data }, + { .family = "AM64", .data = &am64_soc_data }, { /* sentinel */ } }; static int udma_get_mmrs(struct platform_device *pdev, struct udma_dev *ud) { struct resource *res; + u32 cap2, cap3; int i; - for (i = 0; i < MMR_LAST; i++) { + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, + mmr_names[MMR_GCFG]); + ud->mmrs[MMR_GCFG] = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(ud->mmrs[MMR_GCFG])) + return PTR_ERR(ud->mmrs[MMR_GCFG]); + + cap2 = udma_read(ud->mmrs[MMR_GCFG], 0x28); + cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c); + + switch (ud->match_data->type) { + case DMA_TYPE_UDMA: + ud->rflow_cnt = UDMA_CAP3_RFLOW_CNT(cap3); + ud->tchan_cnt = UDMA_CAP2_TCHAN_CNT(cap2); + ud->echan_cnt = UDMA_CAP2_ECHAN_CNT(cap2); + ud->rchan_cnt = UDMA_CAP2_RCHAN_CNT(cap2); + break; + case DMA_TYPE_BCDMA: + ud->bchan_cnt = BCDMA_CAP2_BCHAN_CNT(cap2); + ud->tchan_cnt = BCDMA_CAP2_TCHAN_CNT(cap2); + ud->rchan_cnt = BCDMA_CAP2_RCHAN_CNT(cap2); + break; + default: + return -EINVAL; + } + + for (i = 1; i < MMR_LAST; i++) { + if (i == MMR_BCHANRT && ud->bchan_cnt == 0) + continue; + if (i == MMR_TCHANRT && ud->tchan_cnt == 0) + continue; + if (i == MMR_RCHANRT && ud->rchan_cnt == 0) + continue; + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, mmr_names[i]); ud->mmrs[i] = devm_ioremap_resource(&pdev->dev, res); @@ -3190,27 +4024,23 @@ static void udma_mark_resource_ranges(struct udma_dev *ud, unsigned long *map, rm_desc->num_sec); } +static const char * const range_names[] = { + [RM_RANGE_BCHAN] = "ti,sci-rm-range-bchan", + [RM_RANGE_TCHAN] = "ti,sci-rm-range-tchan", + [RM_RANGE_RCHAN] = "ti,sci-rm-range-rchan", + [RM_RANGE_RFLOW] = "ti,sci-rm-range-rflow" +}; + static int udma_setup_resources(struct udma_dev *ud) { + int ret, i, j; struct device *dev = ud->dev; - int ch_count, ret, i, j; - u32 cap2, cap3; struct ti_sci_resource *rm_res, irq_res; struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; - static const char * const range_names[] = { "ti,sci-rm-range-tchan", - "ti,sci-rm-range-rchan", - "ti,sci-rm-range-rflow" }; - - cap2 = udma_read(ud->mmrs[MMR_GCFG], UDMA_CAP_REG(2)); - cap3 = 
udma_read(ud->mmrs[MMR_GCFG], UDMA_CAP_REG(3)); - - ud->rflow_cnt = UDMA_CAP3_RFLOW_CNT(cap3); - ud->tchan_cnt = UDMA_CAP2_TCHAN_CNT(cap2); - ud->echan_cnt = UDMA_CAP2_ECHAN_CNT(cap2); - ud->rchan_cnt = UDMA_CAP2_RCHAN_CNT(cap2); - ch_count = ud->tchan_cnt + ud->rchan_cnt; + u32 cap3; /* Set up the throughput level start indexes */ + cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c); if (of_device_is_compatible(dev->of_node, "ti,am654-navss-main-udmap")) { ud->tpl_levels = 2; @@ -3268,11 +4098,15 @@ static int udma_setup_resources(struct udma_dev *ud) bitmap_set(ud->rflow_gp_map, 0, ud->rflow_cnt); /* Get resource ranges from tisci */ - for (i = 0; i < RM_RANGE_LAST; i++) + for (i = 0; i < RM_RANGE_LAST; i++) { + if (i == RM_RANGE_BCHAN) + continue; + tisci_rm->rm_ranges[i] = devm_ti_sci_get_of_resource(tisci_rm->tisci, dev, tisci_rm->tisci_dev_id, (char *)range_names[i]); + } /* tchan ranges */ rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN]; @@ -3310,12 +4144,12 @@ static int udma_setup_resources(struct udma_dev *ud) for (j = 0; j < rm_res->sets; j++, i++) { if (rm_res->desc[j].num) { irq_res.desc[i].start = rm_res->desc[j].start + - ud->soc_data->rchan_oes_offset; + ud->soc_data->oes.udma_rchan; irq_res.desc[i].num = rm_res->desc[j].num; } if (rm_res->desc[j].num_sec) { irq_res.desc[i].start_sec = rm_res->desc[j].start_sec + - ud->soc_data->rchan_oes_offset; + ud->soc_data->oes.udma_rchan; irq_res.desc[i].num_sec = rm_res->desc[j].num_sec; } } @@ -3338,6 +4172,174 @@ static int udma_setup_resources(struct udma_dev *ud) &rm_res->desc[i], "gp-rflow"); } + return 0; +} + +static int bcdma_setup_resources(struct udma_dev *ud) +{ + int ret, i, j; + struct device *dev = ud->dev; + struct ti_sci_resource *rm_res, irq_res; + struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; + const struct udma_oes_offsets *oes = &ud->soc_data->oes; + + ud->bchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->bchan_cnt), + sizeof(unsigned long), GFP_KERNEL); + ud->bchans = devm_kcalloc(dev, ud->bchan_cnt, sizeof(*ud->bchans), + GFP_KERNEL); + ud->tchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->tchan_cnt), + sizeof(unsigned long), GFP_KERNEL); + ud->tchans = devm_kcalloc(dev, ud->tchan_cnt, sizeof(*ud->tchans), + GFP_KERNEL); + ud->rchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->rchan_cnt), + sizeof(unsigned long), GFP_KERNEL); + ud->rchans = devm_kcalloc(dev, ud->rchan_cnt, sizeof(*ud->rchans), + GFP_KERNEL); + /* BCDMA does not really have flows, but the driver expects them */ + ud->rflow_in_use = devm_kcalloc(dev, BITS_TO_LONGS(ud->rchan_cnt), + sizeof(unsigned long), + GFP_KERNEL); + ud->rflows = devm_kcalloc(dev, ud->rchan_cnt, sizeof(*ud->rflows), + GFP_KERNEL); + + if (!ud->bchan_map || !ud->tchan_map || !ud->rchan_map || + !ud->rflow_in_use || !ud->bchans || !ud->tchans || !ud->rchans || + !ud->rflows) + return -ENOMEM; + + /* TPL is not yet supported for BCDMA */ + ud->tpl_levels = 1; + + /* Get resource ranges from tisci */ + for (i = 0; i < RM_RANGE_LAST; i++) { + if (i == RM_RANGE_RFLOW) + continue; + if (i == RM_RANGE_BCHAN && ud->bchan_cnt == 0) + continue; + if (i == RM_RANGE_TCHAN && ud->tchan_cnt == 0) + continue; + if (i == RM_RANGE_RCHAN && ud->rchan_cnt == 0) + continue; + + tisci_rm->rm_ranges[i] = + devm_ti_sci_get_of_resource(tisci_rm->tisci, dev, + tisci_rm->tisci_dev_id, + (char *)range_names[i]); + } + + irq_res.sets = 0; + + /* bchan ranges */ + if (ud->bchan_cnt) { + rm_res = tisci_rm->rm_ranges[RM_RANGE_BCHAN]; + if (IS_ERR(rm_res)) { + bitmap_zero(ud->bchan_map, ud->bchan_cnt); + } else
{ + bitmap_fill(ud->bchan_map, ud->bchan_cnt); + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->bchan_map, + &rm_res->desc[i], + "bchan"); + } + irq_res.sets += rm_res->sets; + } + + /* tchan ranges */ + if (ud->tchan_cnt) { + rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN]; + if (IS_ERR(rm_res)) { + bitmap_zero(ud->tchan_map, ud->tchan_cnt); + } else { + bitmap_fill(ud->tchan_map, ud->tchan_cnt); + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->tchan_map, + &rm_res->desc[i], + "tchan"); + } + irq_res.sets += rm_res->sets * 2; + } + + /* rchan ranges */ + if (ud->rchan_cnt) { + rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN]; + if (IS_ERR(rm_res)) { + bitmap_zero(ud->rchan_map, ud->rchan_cnt); + } else { + bitmap_fill(ud->rchan_map, ud->rchan_cnt); + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->rchan_map, + &rm_res->desc[i], + "rchan"); + } + irq_res.sets += rm_res->sets * 2; + } + + irq_res.desc = kcalloc(irq_res.sets, sizeof(*irq_res.desc), GFP_KERNEL); + if (ud->bchan_cnt) { + rm_res = tisci_rm->rm_ranges[RM_RANGE_BCHAN]; + for (i = 0; i < rm_res->sets; i++) { + irq_res.desc[i].start = rm_res->desc[i].start + + oes->bcdma_bchan_ring; + irq_res.desc[i].num = rm_res->desc[i].num; + } + } + if (ud->tchan_cnt) { + rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN]; + for (j = 0; j < rm_res->sets; j++, i += 2) { + irq_res.desc[i].start = rm_res->desc[j].start + + oes->bcdma_tchan_data; + irq_res.desc[i].num = rm_res->desc[j].num; + + irq_res.desc[i + 1].start = rm_res->desc[j].start + + oes->bcdma_tchan_ring; + irq_res.desc[i + 1].num = rm_res->desc[j].num; + } + } + if (ud->rchan_cnt) { + rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN]; + for (j = 0; j < rm_res->sets; j++, i += 2) { + irq_res.desc[i].start = rm_res->desc[j].start + + oes->bcdma_rchan_data; + irq_res.desc[i].num = rm_res->desc[j].num; + + irq_res.desc[i + 1].start = rm_res->desc[j].start + + oes->bcdma_rchan_ring; + irq_res.desc[i + 1].num = rm_res->desc[j].num; + } + } + + ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res); + kfree(irq_res.desc); + if (ret) { + dev_err(ud->dev, "Failed to allocate MSI interrupts\n"); + return ret; + } + + return 0; +} + +static int setup_resources(struct udma_dev *ud) +{ + struct device *dev = ud->dev; + int ch_count, ret; + + switch (ud->match_data->type) { + case DMA_TYPE_UDMA: + ret = udma_setup_resources(ud); + break; + case DMA_TYPE_BCDMA: + ret = bcdma_setup_resources(ud); + break; + default: + return -EINVAL; + } + + if (ret) + return ret; + + ch_count = ud->bchan_cnt + ud->tchan_cnt + ud->rchan_cnt; + if (ud->bchan_cnt) + ch_count -= bitmap_weight(ud->bchan_map, ud->bchan_cnt); ch_count -= bitmap_weight(ud->tchan_map, ud->tchan_cnt); ch_count -= bitmap_weight(ud->rchan_map, ud->rchan_cnt); if (!ch_count) @@ -3348,12 +4350,32 @@ static int udma_setup_resources(struct udma_dev *ud) if (!ud->channels) return -ENOMEM; - dev_info(dev, "Channels: %d (tchan: %u, rchan: %u, gp-rflow: %u)\n", - ch_count, - ud->tchan_cnt - bitmap_weight(ud->tchan_map, ud->tchan_cnt), - ud->rchan_cnt - bitmap_weight(ud->rchan_map, ud->rchan_cnt), - ud->rflow_cnt - bitmap_weight(ud->rflow_gp_map, - ud->rflow_cnt)); + switch (ud->match_data->type) { + case DMA_TYPE_UDMA: + dev_info(dev, + "Channels: %d (tchan: %u, rchan: %u, gp-rflow: %u)\n", + ch_count, + ud->tchan_cnt - bitmap_weight(ud->tchan_map, + ud->tchan_cnt), + ud->rchan_cnt - bitmap_weight(ud->rchan_map, + ud->rchan_cnt), + ud->rflow_cnt - bitmap_weight(ud->rflow_gp_map, + 
ud->rflow_cnt)); + break; + case DMA_TYPE_BCDMA: + dev_info(dev, + "Channels: %d (bchan: %u, tchan: %u, rchan: %u)\n", + ch_count, + ud->bchan_cnt - bitmap_weight(ud->bchan_map, + ud->bchan_cnt), + ud->tchan_cnt - bitmap_weight(ud->tchan_map, + ud->tchan_cnt), + ud->rchan_cnt - bitmap_weight(ud->rchan_map, + ud->rchan_cnt)); + break; + default: + break; + } return ch_count; } @@ -3462,10 +4484,19 @@ static void udma_dbg_summary_show_chan(struct seq_file *s, seq_printf(s, " %-13s| %s", dma_chan_name(chan), chan->dbg_client_name ?: "in-use"); - seq_printf(s, " (%s, ", dmaengine_get_direction_text(uc->config.dir)); + if (ucc->tr_trigger_type) + seq_puts(s, " (triggered, "); + else + seq_printf(s, " (%s, ", + dmaengine_get_direction_text(uc->config.dir)); switch (uc->config.dir) { case DMA_MEM_TO_MEM: + if (uc->ud->match_data->type == DMA_TYPE_BCDMA) { + seq_printf(s, "bchan%d)\n", uc->bchan->id); + return; + } + seq_printf(s, "chan%d pair [0x%04x -> 0x%04x], ", uc->tchan->id, ucc->src_thread, ucc->dst_thread); break; @@ -3537,6 +4568,23 @@ static int udma_probe(struct platform_device *pdev) if (!ud) return -ENOMEM; + match = of_match_node(udma_of_match, dev->of_node); + if (!match) { + match = of_match_node(bcdma_of_match, dev->of_node); + if (!match) { + dev_err(dev, "No compatible match found\n"); + return -ENODEV; + } + } + ud->match_data = match->data; + + soc = soc_device_match(k3_soc_devices); + if (!soc) { + dev_err(dev, "No compatible SoC found\n"); + return -ENODEV; + } + ud->soc_data = soc->data; + ret = udma_get_mmrs(pdev, ud); if (ret) return ret; @@ -3560,16 +4608,38 @@ static int udma_probe(struct platform_device *pdev) return ret; } - ret = of_property_read_u32(dev->of_node, "ti,udma-atype", &ud->atype); - if (!ret && ud->atype > 2) { - dev_err(dev, "Invalid atype: %u\n", ud->atype); - return -EINVAL; + if (ud->match_data->type == DMA_TYPE_UDMA) { + ret = of_property_read_u32(dev->of_node, "ti,udma-atype", + &ud->atype); + if (!ret && ud->atype > 2) { + dev_err(dev, "Invalid atype: %u\n", ud->atype); + return -EINVAL; + } + } else { + ret = of_property_read_u32(dev->of_node, "ti,asel", + &ud->asel); + if (!ret && ud->asel > 15) { + dev_err(dev, "Invalid asel: %u\n", ud->asel); + return -EINVAL; + } } ud->tisci_rm.tisci_udmap_ops = &ud->tisci_rm.tisci->ops.rm_udmap_ops; ud->tisci_rm.tisci_psil_ops = &ud->tisci_rm.tisci->ops.rm_psil_ops; - ud->ringacc = of_k3_ringacc_get_by_phandle(dev->of_node, "ti,ringacc"); + if (ud->match_data->type == DMA_TYPE_UDMA) { + ud->ringacc = of_k3_ringacc_get_by_phandle(dev->of_node, "ti,ringacc"); + } else { + struct k3_ringacc_init_data ring_init_data; + + ring_init_data.tisci = ud->tisci_rm.tisci; + ring_init_data.tisci_dev_id = ud->tisci_rm.tisci_dev_id; + ring_init_data.num_rings = ud->bchan_cnt + ud->tchan_cnt + + ud->rchan_cnt; + + ud->ringacc = k3_ringacc_dmarings_init(pdev, &ring_init_data); + } + if (IS_ERR(ud->ringacc)) return PTR_ERR(ud->ringacc); @@ -3580,24 +4650,9 @@ static int udma_probe(struct platform_device *pdev) return -EPROBE_DEFER; } - match = of_match_node(udma_of_match, dev->of_node); - if (!match) { - dev_err(dev, "No compatible match found\n"); - return -ENODEV; - } - ud->match_data = match->data; - - soc = soc_device_match(k3_soc_devices); - if (!soc) { - dev_err(dev, "No compatible SoC found\n"); - return -ENODEV; - } - ud->soc_data = soc->data; - dma_cap_set(DMA_SLAVE, ud->ddev.cap_mask); dma_cap_set(DMA_CYCLIC, ud->ddev.cap_mask); - ud->ddev.device_alloc_chan_resources = udma_alloc_chan_resources; 
ud->ddev.device_config = udma_slave_config; ud->ddev.device_prep_slave_sg = udma_prep_slave_sg; ud->ddev.device_prep_dma_cyclic = udma_prep_dma_cyclic; @@ -3611,7 +4666,21 @@ static int udma_probe(struct platform_device *pdev) ud->ddev.dbg_summary_show = udma_dbg_summary_show; #endif + switch (ud->match_data->type) { + case DMA_TYPE_UDMA: + ud->ddev.device_alloc_chan_resources = + udma_alloc_chan_resources; + break; + case DMA_TYPE_BCDMA: + ud->ddev.device_alloc_chan_resources = + bcdma_alloc_chan_resources; + ud->ddev.device_router_config = bcdma_router_config; + break; + default: + return -EINVAL; + } ud->ddev.device_free_chan_resources = udma_free_chan_resources; + ud->ddev.src_addr_widths = TI_UDMAC_BUSWIDTHS; ud->ddev.dst_addr_widths = TI_UDMAC_BUSWIDTHS; ud->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); @@ -3619,7 +4688,8 @@ static int udma_probe(struct platform_device *pdev) ud->ddev.copy_align = DMAENGINE_ALIGN_8_BYTES; ud->ddev.desc_metadata_modes = DESC_METADATA_CLIENT | DESC_METADATA_ENGINE; - if (ud->match_data->enable_memcpy_support) { + if (ud->match_data->enable_memcpy_support && + !(ud->match_data->type == DMA_TYPE_BCDMA && ud->bchan_cnt == 0)) { dma_cap_set(DMA_MEMCPY, ud->ddev.cap_mask); ud->ddev.device_prep_dma_memcpy = udma_prep_dma_memcpy; ud->ddev.directions |= BIT(DMA_MEM_TO_MEM); @@ -3632,7 +4702,7 @@ static int udma_probe(struct platform_device *pdev) INIT_LIST_HEAD(&ud->ddev.channels); INIT_LIST_HEAD(&ud->desc_to_purge); - ch_count = udma_setup_resources(ud); + ch_count = setup_resources(ud); if (ch_count <= 0) return ch_count; @@ -3647,6 +4717,13 @@ static int udma_probe(struct platform_device *pdev) if (ret) return ret; + for (i = 0; i < ud->bchan_cnt; i++) { + struct udma_bchan *bchan = &ud->bchans[i]; + + bchan->id = i; + bchan->reg_rt = ud->mmrs[MMR_BCHANRT] + i * 0x1000; + } + for (i = 0; i < ud->tchan_cnt; i++) { struct udma_tchan *tchan = &ud->tchans[i]; @@ -3673,6 +4750,7 @@ static int udma_probe(struct platform_device *pdev) uc->ud = ud; uc->vc.desc_free = udma_desc_free; uc->id = i; + uc->bchan = NULL; uc->tchan = NULL; uc->rchan = NULL; uc->config.remote_thread_id = -1; @@ -3715,5 +4793,15 @@ static struct platform_driver udma_driver = { }; builtin_platform_driver(udma_driver); +static struct platform_driver bcdma_driver = { + .driver = { + .name = "ti-bcdma", + .of_match_table = bcdma_of_match, + .suppress_bind_attrs = true, + }, + .probe = udma_probe, +}; +builtin_platform_driver(bcdma_driver); + /* Private interfaces to UDMA */ #include "k3-udma-private.c" diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h index d1cace0cb43b..bf78ad94354a 100644 --- a/drivers/dma/ti/k3-udma.h +++ b/drivers/dma/ti/k3-udma.h @@ -18,7 +18,7 @@ #define UDMA_RX_FLOW_ID_FW_OES_REG 0x80 #define UDMA_RX_FLOW_ID_FW_STATUS_REG 0x88 -/* TCHANRT/RCHANRT registers */ +/* BCHANRT/TCHANRT/RCHANRT registers */ #define UDMA_CHAN_RT_CTL_REG 0x0 #define UDMA_CHAN_RT_SWTRIG_REG 0x8 #define UDMA_CHAN_RT_STDATA_REG 0x80 @@ -45,6 +45,10 @@ #define UDMA_CAP3_HCHAN_CNT(val) (((val) >> 14) & 0x1ff) #define UDMA_CAP3_UCHAN_CNT(val) (((val) >> 23) & 0x1ff) +#define BCDMA_CAP2_BCHAN_CNT(val) ((val) & 0x1ff) +#define BCDMA_CAP2_TCHAN_CNT(val) (((val) >> 9) & 0x1ff) +#define BCDMA_CAP2_RCHAN_CNT(val) (((val) >> 18) & 0x1ff) + /* UDMA_CHAN_RT_CTL_REG */ #define UDMA_CHAN_RT_CTL_EN BIT(31) #define UDMA_CHAN_RT_CTL_TDOWN BIT(30) @@ -82,13 +86,17 @@ */ #define PDMA_STATIC_TR_Z(x, mask) ((x) & (mask)) +/* Address Space Select */ +#define K3_ADDRESS_ASEL_SHIFT 48 + struct 
udma_dev; struct udma_tchan; struct udma_rchan; struct udma_rflow; enum udma_rm_range { - RM_RANGE_TCHAN = 0, + RM_RANGE_BCHAN = 0, + RM_RANGE_TCHAN, RM_RANGE_RCHAN, RM_RANGE_RFLOW, RM_RANGE_LAST,
From patchwork Wed Sep 30 09:14:10 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313866
From: Peter Ujfalusi
Subject: [PATCH 16/18] dmaengine: ti: k3-udma: Add support for BCDMA channel TPL handling
Date: Wed, 30 Sep 2020 12:14:10 +0300
Message-ID: <20200930091412.8020-17-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
References: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: X-Mailing-List: devicetree@vger.kernel.org
Unlike UDMAP, BCDMA defines the channel TPL (throughput level) per channel type. In UDMAP the number of high and ultra-high channels applies to both tchan and rchan, while in BCDMA the bchan, tchan and rchan types can each have a different number of high and ultra-high channels. In order to support BCDMA channel TPL, the TPL information must be moved to a per-channel-type property for the DMAs.
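To make the per-channel-type bookkeeping concrete, the following is a minimal user-space sketch of the start-index lookup that the reworked UDMA_RESERVE_RESOURCE() macro performs for each channel type. The udma_tpl layout follows the patch; the standalone harness and the example channel counts are illustrative assumptions only.

#include <stdio.h>

/*
 * Mirrors the per-channel-type TPL bookkeeping from this patch; u8/u32 are
 * replaced with standard C types so the sketch builds outside the kernel.
 */
struct udma_tpl {
	unsigned char levels;		/* number of TPL levels (1..3) */
	unsigned int start_idx[3];	/* first channel id of each level */
};

/*
 * Clamp the requested TPL to what this channel type supports and return the
 * index where the search for a free channel starts - the same logic the
 * per-type UDMA_RESERVE_RESOURCE() macro applies to the bchan/tchan/rchan
 * bitmaps.
 */
static unsigned int tpl_search_start(const struct udma_tpl *tpl, unsigned int req)
{
	if (req >= tpl->levels)
		req = tpl->levels - 1;
	return tpl->start_idx[req];
}

int main(void)
{
	/* Example: 2 ultra-high bchans first, then 4 high, the rest normal */
	struct udma_tpl bchan_tpl = { .levels = 3, .start_idx = { 6, 2, 0 } };

	printf("normal: %u\n", tpl_search_start(&bchan_tpl, 0));	/* 6 */
	printf("high: %u\n", tpl_search_start(&bchan_tpl, 1));		/* 2 */
	printf("ultra-high: %u\n", tpl_search_start(&bchan_tpl, 2));	/* 0 */
	return 0;
}

Since the highest-TPL channels sit at the start of the channel space, requesting the highest supported level (as bcdma_get_bchan() does for mem2mem) makes the search start at index 0.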
Signed-off-by: Peter Ujfalusi --- drivers/dma/ti/k3-udma.c | 122 ++++++++++++++++++++++++++------------- drivers/dma/ti/k3-udma.h | 6 ++ 2 files changed, 89 insertions(+), 39 deletions(-) -- Peter Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c index a342e89a4bae..69f2c43354d4 100644 --- a/drivers/dma/ti/k3-udma.c +++ b/drivers/dma/ti/k3-udma.c @@ -146,6 +146,11 @@ struct udma_rx_flush { dma_addr_t buffer_paddr; }; +struct udma_tpl { + u8 levels; + u32 start_idx[3]; +}; + struct udma_dev { struct dma_device ddev; struct device *dev; @@ -153,8 +158,9 @@ struct udma_dev { const struct udma_match_data *match_data; const struct udma_soc_data *soc_data; - u8 tpl_levels; - u32 tpl_start_idx[3]; + struct udma_tpl bchan_tpl; + struct udma_tpl tchan_tpl; + struct udma_tpl rchan_tpl; size_t desc_align; /* alignment to use for descriptors */ @@ -1278,10 +1284,10 @@ static struct udma_##res *__udma_reserve_##res(struct udma_dev *ud, \ } else { \ int start; \ \ - if (tpl >= ud->tpl_levels) \ - tpl = ud->tpl_levels - 1; \ + if (tpl >= ud->res##_tpl.levels) \ + tpl = ud->res##_tpl.levels - 1; \ \ - start = ud->tpl_start_idx[tpl]; \ + start = ud->res##_tpl.start_idx[tpl]; \ \ id = find_next_zero_bit(ud->res##_map, ud->res##_cnt, \ start); \ @@ -1294,29 +1300,14 @@ static struct udma_##res *__udma_reserve_##res(struct udma_dev *ud, \ return &ud->res##s[id]; \ } +UDMA_RESERVE_RESOURCE(bchan); UDMA_RESERVE_RESOURCE(tchan); UDMA_RESERVE_RESOURCE(rchan); -static struct udma_bchan *__bcdma_reserve_bchan(struct udma_dev *ud, int id) -{ - if (id >= 0) { - if (test_bit(id, ud->bchan_map)) { - dev_err(ud->dev, "bchan%d is in use\n", id); - return ERR_PTR(-ENOENT); - } - } else { - id = find_next_zero_bit(ud->bchan_map, ud->bchan_cnt, 0); - if (id == ud->bchan_cnt) - return ERR_PTR(-ENOENT); - } - - set_bit(id, ud->bchan_map); - return &ud->bchans[id]; -} - static int bcdma_get_bchan(struct udma_chan *uc) { struct udma_dev *ud = uc->ud; + enum udma_tp_level tpl; if (uc->bchan) { dev_dbg(ud->dev, "chan%d: already have bchan%d allocated\n", @@ -1324,7 +1315,16 @@ static int bcdma_get_bchan(struct udma_chan *uc) return 0; } - uc->bchan = __bcdma_reserve_bchan(ud, -1); + /* + * Use normal channels for peripherals, and highest TPL channel for + * mem2mem + */ + if (uc->config.tr_trigger_type) + tpl = 0; + else + tpl = ud->bchan_tpl.levels - 1; + + uc->bchan = __udma_reserve_bchan(ud, tpl, -1); if (IS_ERR(uc->bchan)) return PTR_ERR(uc->bchan); @@ -1386,8 +1386,11 @@ static int udma_get_chan_pair(struct udma_chan *uc) /* Can be optimized, but let's have it like this for now */ end = min(ud->tchan_cnt, ud->rchan_cnt); - /* Try to use the highest TPL channel pair for MEM_TO_MEM channels */ - chan_id = ud->tpl_start_idx[ud->tpl_levels - 1]; + /* + * Try to use the highest TPL channel pair for MEM_TO_MEM channels + * Note: in UDMAP the channel TPL is symmetric between tchan and rchan + */ + chan_id = ud->tchan_tpl.start_idx[ud->tchan_tpl.levels - 1]; for (; chan_id < end; chan_id++) { if (!test_bit(chan_id, ud->tchan_map) && !test_bit(chan_id, ud->rchan_map)) @@ -4043,24 +4046,28 @@ static int udma_setup_resources(struct udma_dev *ud) cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c); if (of_device_is_compatible(dev->of_node, "ti,am654-navss-main-udmap")) { - ud->tpl_levels = 2; - ud->tpl_start_idx[0] = 8; + ud->tchan_tpl.levels = 2; + ud->tchan_tpl.start_idx[0] = 8; } else if 
(of_device_is_compatible(dev->of_node, "ti,am654-navss-mcu-udmap")) { - ud->tpl_levels = 2; - ud->tpl_start_idx[0] = 2; + ud->tchan_tpl.levels = 2; + ud->tchan_tpl.start_idx[0] = 2; } else if (UDMA_CAP3_UCHAN_CNT(cap3)) { - ud->tpl_levels = 3; - ud->tpl_start_idx[1] = UDMA_CAP3_UCHAN_CNT(cap3); - ud->tpl_start_idx[0] = ud->tpl_start_idx[1] + - UDMA_CAP3_HCHAN_CNT(cap3); + ud->tchan_tpl.levels = 3; + ud->tchan_tpl.start_idx[1] = UDMA_CAP3_UCHAN_CNT(cap3); + ud->tchan_tpl.start_idx[0] = ud->tchan_tpl.start_idx[1] + + UDMA_CAP3_HCHAN_CNT(cap3); } else if (UDMA_CAP3_HCHAN_CNT(cap3)) { - ud->tpl_levels = 2; - ud->tpl_start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3); + ud->tchan_tpl.levels = 2; + ud->tchan_tpl.start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3); } else { - ud->tpl_levels = 1; + ud->tchan_tpl.levels = 1; } + ud->rchan_tpl.levels = ud->tchan_tpl.levels; + ud->rchan_tpl.start_idx[0] = ud->tchan_tpl.start_idx[0]; + ud->rchan_tpl.start_idx[1] = ud->tchan_tpl.start_idx[1]; + ud->tchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->tchan_cnt), sizeof(unsigned long), GFP_KERNEL); ud->tchans = devm_kcalloc(dev, ud->tchan_cnt, sizeof(*ud->tchans), @@ -4182,6 +4189,46 @@ static int bcdma_setup_resources(struct udma_dev *ud) struct ti_sci_resource *rm_res, irq_res; struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; const struct udma_oes_offsets *oes = &ud->soc_data->oes; + u32 cap; + + /* Set up the throughput level start indexes */ + cap = udma_read(ud->mmrs[MMR_GCFG], 0x2c); + if (BCDMA_CAP3_UBCHAN_CNT(cap)) { + ud->bchan_tpl.levels = 3; + ud->bchan_tpl.start_idx[1] = BCDMA_CAP3_UBCHAN_CNT(cap); + ud->bchan_tpl.start_idx[0] = ud->bchan_tpl.start_idx[1] + + BCDMA_CAP3_HBCHAN_CNT(cap); + } else if (BCDMA_CAP3_HBCHAN_CNT(cap)) { + ud->bchan_tpl.levels = 2; + ud->bchan_tpl.start_idx[0] = BCDMA_CAP3_HBCHAN_CNT(cap); + } else { + ud->bchan_tpl.levels = 1; + } + + cap = udma_read(ud->mmrs[MMR_GCFG], 0x30); + if (BCDMA_CAP4_URCHAN_CNT(cap)) { + ud->rchan_tpl.levels = 3; + ud->rchan_tpl.start_idx[1] = BCDMA_CAP4_URCHAN_CNT(cap); + ud->rchan_tpl.start_idx[0] = ud->rchan_tpl.start_idx[1] + + BCDMA_CAP4_HRCHAN_CNT(cap); + } else if (BCDMA_CAP4_HRCHAN_CNT(cap)) { + ud->rchan_tpl.levels = 2; + ud->rchan_tpl.start_idx[0] = BCDMA_CAP4_HRCHAN_CNT(cap); + } else { + ud->rchan_tpl.levels = 1; + } + + if (BCDMA_CAP4_UTCHAN_CNT(cap)) { + ud->tchan_tpl.levels = 3; + ud->tchan_tpl.start_idx[1] = BCDMA_CAP4_UTCHAN_CNT(cap); + ud->tchan_tpl.start_idx[0] = ud->tchan_tpl.start_idx[1] + + BCDMA_CAP4_HTCHAN_CNT(cap); + } else if (BCDMA_CAP4_HTCHAN_CNT(cap)) { + ud->tchan_tpl.levels = 2; + ud->tchan_tpl.start_idx[0] = BCDMA_CAP4_HTCHAN_CNT(cap); + } else { + ud->tchan_tpl.levels = 1; + } ud->bchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->bchan_cnt), sizeof(unsigned long), GFP_KERNEL); @@ -4207,9 +4254,6 @@ static int bcdma_setup_resources(struct udma_dev *ud) !ud->rflows) return -ENOMEM; - /* TPL is not yet supported for BCDMA */ - ud->tpl_levels = 1; - /* Get resource ranges from tisci */ for (i = 0; i < RM_RANGE_LAST; i++) { if (i == RM_RANGE_RFLOW) continue; diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h index bf78ad94354a..8cb32681afaf 100644 --- a/drivers/dma/ti/k3-udma.h +++ b/drivers/dma/ti/k3-udma.h @@ -48,6 +48,12 @@ #define BCDMA_CAP2_BCHAN_CNT(val) ((val) & 0x1ff) #define BCDMA_CAP2_TCHAN_CNT(val) (((val) >> 9) & 0x1ff) #define BCDMA_CAP2_RCHAN_CNT(val) (((val) >> 18) & 0x1ff) +#define BCDMA_CAP3_HBCHAN_CNT(val) (((val) >> 14) & 0x1ff) +#define BCDMA_CAP3_UBCHAN_CNT(val) (((val) >> 23) & 0x1ff) +#define 
BCDMA_CAP4_HRCHAN_CNT(val) ((val) & 0xff) +#define BCDMA_CAP4_URCHAN_CNT(val) (((val) >> 8) & 0xff) +#define BCDMA_CAP4_HTCHAN_CNT(val) (((val) >> 16) & 0xff) +#define BCDMA_CAP4_UTCHAN_CNT(val) (((val) >> 24) & 0xff) /* UDMA_CHAN_RT_CTL_REG */ #define UDMA_CHAN_RT_CTL_EN BIT(31)
From patchwork Wed Sep 30 09:14:11 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313868
From: Peter Ujfalusi
Subject: [PATCH 17/18] dmaengine: ti: k3-udma: Initial support for K3 PKTDMA
Date: Wed, 30 Sep 2020 12:14:11 +0300
Message-ID: <20200930091412.8020-18-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
References: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: X-Mailing-List: devicetree@vger.kernel.org
One of the DMAs introduced with AM64 is the Packet DMA (PKTDMA). It serves a similar purpose as K3 UDMAP channels in packet mode, but with notable differences, such as tflow support and channels being allocated to service specific peripherals. The rings for PKTDMA are integrated within the DMA itself instead of using rings from the general purpose ringacc. PKTDMA can be used to service PSI-L peripherals, similarly to K3 UDMA channels. Most of the driver code can be reused for PKTDMA tchan/rchan support, but new setup and allocation functions are needed to handle the differences between the DMAs.
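As a rough model of the mapped-channel scheme described above, the sketch below shows how an endpoint's static PSI-L data would pin the PKTDMA channel and pick its tx flow, mirroring the logic udma_get_tchan() gains in this patch. The simplified struct and the harness are illustrative assumptions, not the driver's actual types.

#include <stdio.h>

/*
 * Illustrative model of PKTDMA "mapped" channels: unlike UDMA, a PSI-L
 * endpoint may be pinned to one specific channel and a default flow. The
 * field names mirror the patch; the struct itself is a simplification.
 */
struct psil_ep_cfg {
	int mapped_channel_id;	/* -1: any free channel may service the endpoint */
	int default_flow_id;	/* -1: use the flow matching the channel id */
};

/*
 * Pick the tx flow the way the reworked udma_get_tchan() does: the default
 * flow if the endpoint defines one, otherwise the flow with the channel's id.
 */
static int pick_tflow(const struct psil_ep_cfg *cfg, int tchan_id)
{
	return cfg->default_flow_id >= 0 ? cfg->default_flow_id : tchan_id;
}

int main(void)
{
	struct psil_ep_cfg mapped = { .mapped_channel_id = 4, .default_flow_id = 19 };
	struct psil_ep_cfg unmapped = { .mapped_channel_id = -1, .default_flow_id = -1 };

	/* mapped endpoint: must use tchan4 and tflow19 */
	printf("mapped: tchan%d tflow%d\n", mapped.mapped_channel_id,
	       pick_tflow(&mapped, mapped.mapped_channel_id));
	/* unmapped endpoint: suppose the allocator handed out tchan2 */
	printf("unmapped: tchan2 tflow%d\n", pick_tflow(&unmapped, 2));
	return 0;
}

The driver additionally marks the chosen tflow in a bitmap and fails with -ENOENT when it is already in use, since a mapped endpoint cannot fall back to another channel.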
Signed-off-by: Peter Ujfalusi --- drivers/dma/ti/k3-udma-private.c | 9 + drivers/dma/ti/k3-udma.c | 546 +++++++++++++++++++++++++++++-- drivers/dma/ti/k3-udma.h | 4 + 3 files changed, 534 insertions(+), 25 deletions(-) -- Peter Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki diff --git a/drivers/dma/ti/k3-udma-private.c b/drivers/dma/ti/k3-udma-private.c index 8ff7a264be03..f0cecd29cff1 100644 --- a/drivers/dma/ti/k3-udma-private.c +++ b/drivers/dma/ti/k3-udma-private.c @@ -82,6 +82,9 @@ EXPORT_SYMBOL(xudma_free_gp_rflow_range); bool xudma_rflow_is_gp(struct udma_dev *ud, int id) { + if (!ud->rflow_gp_map) + return false; + return !test_bit(id, ud->rflow_gp_map); } EXPORT_SYMBOL(xudma_rflow_is_gp); @@ -113,6 +116,12 @@ void xudma_rflow_put(struct udma_dev *ud, struct udma_rflow *p) } EXPORT_SYMBOL(xudma_rflow_put); +int xudma_get_rflow_ring_offset(struct udma_dev *ud) +{ + return ud->tflow_cnt; +} +EXPORT_SYMBOL(xudma_get_rflow_ring_offset); + #define XUDMA_GET_RESOURCE_ID(res) \ int xudma_##res##_get_id(struct udma_##res *p) \ { \ diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c index 69f2c43354d4..bcec3d5c7be1 100644 --- a/drivers/dma/ti/k3-udma.c +++ b/drivers/dma/ti/k3-udma.c @@ -59,6 +59,7 @@ struct udma_chan; enum k3_dma_type { DMA_TYPE_UDMA = 0, DMA_TYPE_BCDMA, + DMA_TYPE_PKTDMA, }; enum udma_mmr { @@ -82,6 +83,8 @@ struct udma_tchan { int id; struct k3_ring *t_ring; /* Transmit ring */ struct k3_ring *tc_ring; /* Transmit Completion ring */ + int tflow_id; /* applicable only for PKTDMA */ + }; #define udma_bchan udma_tchan @@ -109,6 +112,10 @@ struct udma_oes_offsets { u32 bcdma_tchan_ring; u32 bcdma_rchan_data; u32 bcdma_rchan_ring; + + /* PKTDMA Output Event Offsets */ + u32 pktdma_tchan_flow; + u32 pktdma_rchan_flow; }; #define UDMA_FLAG_PDMA_ACC32 BIT(0) @@ -179,12 +186,14 @@ struct udma_dev { int echan_cnt; int rchan_cnt; int rflow_cnt; + int tflow_cnt; unsigned long *bchan_map; unsigned long *tchan_map; unsigned long *rchan_map; unsigned long *rflow_gp_map; unsigned long *rflow_gp_map_allocated; unsigned long *rflow_in_use; + unsigned long *tflow_map; struct udma_bchan *bchans; struct udma_tchan *tchans; @@ -249,6 +258,11 @@ struct udma_chan_config { u32 tr_trigger_type; + /* PKTDMA mapped channel */ + int mapped_channel_id; + /* PKTDMA default tflow or rflow for mapped channel */ + int default_flow_id; + enum dma_transfer_direction dir; }; @@ -426,6 +440,8 @@ static void udma_reset_uchan(struct udma_chan *uc) { memset(&uc->config, 0, sizeof(uc->config)); uc->config.remote_thread_id = -1; + uc->config.mapped_channel_id = -1; + uc->config.default_flow_id = -1; uc->state = UDMA_CHAN_IS_IDLE; } @@ -815,10 +831,16 @@ static void udma_start_desc(struct udma_chan *uc) { struct udma_chan_config *ucc = &uc->config; - if (ucc->pkt_mode && (uc->cyclic || ucc->dir == DMA_DEV_TO_MEM)) { + if (uc->ud->match_data->type == DMA_TYPE_UDMA && ucc->pkt_mode && + (uc->cyclic || ucc->dir == DMA_DEV_TO_MEM)) { int i; - /* Push all descriptors to ring for packet mode cyclic or RX */ + /* + * UDMA only: Push all descriptors to ring for packet mode + * cyclic or RX + * PKTDMA supports pre-linked descriptors and cyclic is not + * supported + */ for (i = 0; i < uc->desc->sglen; i++) udma_push_to_ring(uc, i); } else { @@ -1250,10 +1272,12 @@ static struct udma_rflow *__udma_get_rflow(struct udma_dev *ud, int id) if (test_bit(id, ud->rflow_in_use)) return ERR_PTR(-ENOENT); - /* GP rflow has to be allocated first 
*/ - if (!test_bit(id, ud->rflow_gp_map) && - !test_bit(id, ud->rflow_gp_map_allocated)) - return ERR_PTR(-EINVAL); + if (ud->rflow_gp_map) { + /* GP rflow has to be allocated first */ + if (!test_bit(id, ud->rflow_gp_map) && + !test_bit(id, ud->rflow_gp_map_allocated)) + return ERR_PTR(-EINVAL); + } dev_dbg(ud->dev, "get rflow%d\n", id); set_bit(id, ud->rflow_in_use); @@ -1343,9 +1367,39 @@ static int udma_get_tchan(struct udma_chan *uc) return 0; } - uc->tchan = __udma_reserve_tchan(ud, uc->config.channel_tpl, -1); + /* + * mapped_channel_id is -1 for UDMA, BCDMA and PKTDMA unmapped channels. + * For PKTDMA mapped channels it is configured to a channel which must + * be used to service the peripheral. + */ + uc->tchan = __udma_reserve_tchan(ud, uc->config.channel_tpl, + uc->config.mapped_channel_id); + if (IS_ERR(uc->tchan)) + return PTR_ERR(uc->tchan); + + if (ud->tflow_cnt) { + int tflow_id; + + /* Only PKTDMA have support for tx flows */ + if (uc->config.default_flow_id >= 0) + tflow_id = uc->config.default_flow_id; + else + tflow_id = uc->tchan->id; + + if (test_bit(tflow_id, ud->tflow_map)) { + dev_err(ud->dev, "tflow%d is in use\n", tflow_id); + clear_bit(uc->tchan->id, ud->tchan_map); + uc->tchan = NULL; + return -ENOENT; + } + + uc->tchan->tflow_id = tflow_id; + set_bit(tflow_id, ud->tflow_map); + } else { + uc->tchan->tflow_id = -1; + } - return PTR_ERR_OR_ZERO(uc->tchan); + return 0; } static int udma_get_rchan(struct udma_chan *uc) @@ -1358,7 +1412,13 @@ static int udma_get_rchan(struct udma_chan *uc) return 0; } - uc->rchan = __udma_reserve_rchan(ud, uc->config.channel_tpl, -1); + /* + * mapped_channel_id is -1 for UDMA, BCDMA and PKTDMA unmapped channels. + * For PKTDMA mapped channels it is configured to a channel which must + * be used to service the peripheral. 
+ */ + uc->rchan = __udma_reserve_rchan(ud, uc->config.channel_tpl, + uc->config.mapped_channel_id); return PTR_ERR_OR_ZERO(uc->rchan); } @@ -1405,6 +1465,9 @@ static int udma_get_chan_pair(struct udma_chan *uc) uc->tchan = &ud->tchans[chan_id]; uc->rchan = &ud->rchans[chan_id]; + /* UDMA does not use tx flows */ + uc->tchan->tflow_id = -1; + return 0; } @@ -1461,6 +1524,10 @@ static void udma_put_tchan(struct udma_chan *uc) dev_dbg(ud->dev, "chan%d: put tchan%d\n", uc->id, uc->tchan->id); clear_bit(uc->tchan->id, ud->tchan_map); + + if (uc->tchan->tflow_id >= 0) + clear_bit(uc->tchan->tflow_id, ud->tflow_map); + uc->tchan = NULL; } } @@ -1561,7 +1628,10 @@ static int udma_alloc_tx_resources(struct udma_chan *uc) return ret; tchan = uc->tchan; - ring_idx = ud->bchan_cnt + tchan->id; + if (tchan->tflow_id >= 0) + ring_idx = tchan->tflow_id; + else + ring_idx = ud->bchan_cnt + tchan->id; ret = k3_ringacc_request_rings_pair(ud->ringacc, ring_idx, -1, &tchan->t_ring, @@ -1638,15 +1708,23 @@ static int udma_alloc_rx_resources(struct udma_chan *uc) if (uc->config.dir == DMA_MEM_TO_MEM) return 0; - ret = udma_get_rflow(uc, uc->rchan->id); + if (uc->config.default_flow_id >= 0) + ret = udma_get_rflow(uc, uc->config.default_flow_id); + else + ret = udma_get_rflow(uc, uc->rchan->id); + if (ret) { ret = -EBUSY; goto err_rflow; } rflow = uc->rflow; - fd_ring_id = ud->bchan_cnt + ud->tchan_cnt + ud->echan_cnt + - uc->rchan->id; + if (ud->tflow_cnt) + fd_ring_id = ud->tflow_cnt + rflow->id; + else + fd_ring_id = ud->bchan_cnt + ud->tchan_cnt + ud->echan_cnt + + uc->rchan->id; + ret = k3_ringacc_request_rings_pair(ud->ringacc, fd_ring_id, -1, &rflow->fd_ring, &rflow->r_ring); if (ret) { @@ -1862,6 +1940,8 @@ static int bcdma_tisci_tx_channel_config(struct udma_chan *uc) return ret; } +#define pktdma_tisci_tx_channel_config bcdma_tisci_tx_channel_config + static int udma_tisci_rx_channel_config(struct udma_chan *uc) { struct udma_dev *ud = uc->ud; @@ -1963,6 +2043,52 @@ static int bcdma_tisci_rx_channel_config(struct udma_chan *uc) return ret; } +static int pktdma_tisci_rx_channel_config(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; + const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops; + struct ti_sci_msg_rm_udmap_rx_ch_cfg req_rx = { 0 }; + struct ti_sci_msg_rm_udmap_flow_cfg flow_req = { 0 }; + int ret = 0; + + req_rx.valid_params = TISCI_BCDMA_RCHAN_VALID_PARAMS; + req_rx.nav_id = tisci_rm->tisci_dev_id; + req_rx.index = uc->rchan->id; + + ret = tisci_ops->rx_ch_cfg(tisci_rm->tisci, &req_rx); + if (ret) { + dev_err(ud->dev, "rchan%d cfg failed %d\n", uc->rchan->id, ret); + return ret; + } + + flow_req.valid_params = + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_EINFO_PRESENT_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_PSINFO_PRESENT_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_ERROR_HANDLING_VALID; + + flow_req.nav_id = tisci_rm->tisci_dev_id; + flow_req.flow_index = uc->rflow->id; + + if (uc->config.needs_epib) + flow_req.rx_einfo_present = 1; + else + flow_req.rx_einfo_present = 0; + if (uc->config.psd_size) + flow_req.rx_psinfo_present = 1; + else + flow_req.rx_psinfo_present = 0; + flow_req.rx_error_handling = 1; + + ret = tisci_ops->rx_flow_cfg(tisci_rm->tisci, &flow_req); + + if (ret) + dev_err(ud->dev, "flow%d config failed: %d\n", uc->rflow->id, + ret); + + return ret; +} + static int udma_alloc_chan_resources(struct dma_chan *chan) { struct udma_chan *uc = to_udma_chan(chan); @@ -2381,6 +2507,157 @@ static int 
bcdma_router_config(struct dma_chan *chan) + return router_data->set_event(router_data->priv, trigger_event); +} + +static int pktdma_alloc_chan_resources(struct dma_chan *chan) +{ + struct udma_chan *uc = to_udma_chan(chan); + struct udma_dev *ud = to_udma_dev(chan->device); + const struct udma_oes_offsets *oes = &ud->soc_data->oes; + u32 irq_ring_idx; + int ret; + + /* + * Make sure that the completion is in a known state: + * No teardown, the channel is idle + */ + reinit_completion(&uc->teardown_completed); + complete_all(&uc->teardown_completed); + uc->state = UDMA_CHAN_IS_IDLE; + + switch (uc->config.dir) { + case DMA_MEM_TO_DEV: + /* Slave transfer synchronized - mem to dev (TX) transfer */ + dev_dbg(uc->ud->dev, "%s: chan%d as MEM-to-DEV\n", __func__, + uc->id); + + ret = udma_alloc_tx_resources(uc); + if (ret) { + uc->config.remote_thread_id = -1; + return ret; + } + + uc->config.src_thread = ud->psil_base + uc->tchan->id; + uc->config.dst_thread = uc->config.remote_thread_id; + uc->config.dst_thread |= K3_PSIL_DST_THREAD_ID_OFFSET; + + irq_ring_idx = uc->tchan->tflow_id + oes->pktdma_tchan_flow; + + ret = pktdma_tisci_tx_channel_config(uc); + break; + case DMA_DEV_TO_MEM: + /* Slave transfer synchronized - dev to mem (RX) transfer */ + dev_dbg(uc->ud->dev, "%s: chan%d as DEV-to-MEM\n", __func__, + uc->id); + + ret = udma_alloc_rx_resources(uc); + if (ret) { + uc->config.remote_thread_id = -1; + return ret; + } + + uc->config.src_thread = uc->config.remote_thread_id; + uc->config.dst_thread = (ud->psil_base + uc->rchan->id) | + K3_PSIL_DST_THREAD_ID_OFFSET; + + irq_ring_idx = uc->rflow->id + oes->pktdma_rchan_flow; + + ret = pktdma_tisci_rx_channel_config(uc); + break; + default: + /* Cannot happen */ + dev_err(uc->ud->dev, "%s: chan%d invalid direction (%u)\n", + __func__, uc->id, uc->config.dir); + return -EINVAL; + } + + /* check if the channel configuration was successful */ + if (ret) + goto err_res_free; + + if (udma_is_chan_running(uc)) { + dev_warn(ud->dev, "chan%d: is running!\n", uc->id); + udma_reset_chan(uc, false); + if (udma_is_chan_running(uc)) { + dev_err(ud->dev, "chan%d: won't stop!\n", uc->id); + ret = -EBUSY; + goto err_res_free; + } + } + + uc->dma_dev = dmaengine_get_dma_device(chan); + uc->hdesc_pool = dma_pool_create(uc->name, uc->dma_dev, + uc->config.hdesc_size, ud->desc_align, + 0); + if (!uc->hdesc_pool) { + dev_err(ud->ddev.dev, + "Descriptor pool allocation failed\n"); + uc->use_dma_pool = false; + ret = -ENOMEM; + goto err_res_free; + } + + uc->use_dma_pool = true; + + /* PSI-L pairing */ + ret = navss_psil_pair(ud, uc->config.src_thread, uc->config.dst_thread); + if (ret) { + dev_err(ud->dev, "PSI-L pairing failed: 0x%04x -> 0x%04x\n", + uc->config.src_thread, uc->config.dst_thread); + goto err_res_free; + } + + uc->psil_paired = true; + + uc->irq_num_ring = ti_sci_inta_msi_get_virq(ud->dev, irq_ring_idx); + if (uc->irq_num_ring <= 0) { + dev_err(ud->dev, "Failed to get ring irq (index: %u)\n", + irq_ring_idx); + ret = -EINVAL; + goto err_psi_free; + } + + ret = request_irq(uc->irq_num_ring, udma_ring_irq_handler, + IRQF_TRIGGER_HIGH, uc->name, uc); + if (ret) { + dev_err(ud->dev, "chan%d: ring irq request failed\n", uc->id); + goto err_irq_free; + } + + uc->irq_num_udma = 0; + + udma_reset_rings(uc); + + INIT_DELAYED_WORK_ONSTACK(&uc->tx_drain.work, + udma_check_tx_completion); + + if (uc->tchan) + dev_dbg(ud->dev, + "chan%d: tchan%d, tflow%d, Remote thread: 0x%04x\n", + uc->id, uc->tchan->id, uc->tchan->tflow_id, + uc->config.remote_thread_id); + else 
if (uc->rchan) + dev_dbg(ud->dev, + "chan%d: rchan%d, rflow%d, Remote thread: 0x%04x\n", + uc->id, uc->rchan->id, uc->rflow->id, + uc->config.remote_thread_id); + return 0; + +err_irq_free: + uc->irq_num_ring = 0; +err_psi_free: + navss_psil_unpair(ud, uc->config.src_thread, uc->config.dst_thread); + uc->psil_paired = false; +err_res_free: + udma_free_tx_resources(uc); + udma_free_rx_resources(uc); + + udma_reset_uchan(uc); + + dma_pool_destroy(uc->hdesc_pool); + uc->use_dma_pool = false; + + return ret; +} + static int udma_slave_config(struct dma_chan *chan, struct dma_slave_config *cfg) { @@ -2859,6 +3136,7 @@ udma_prep_slave_sg_pkt(struct udma_chan *uc, struct scatterlist *sgl, struct udma_desc *d; u32 ring_id; unsigned int i; + u64 asel; d = kzalloc(struct_size(d, hwdesc, sglen), GFP_NOWAIT); if (!d) @@ -2872,6 +3150,11 @@ udma_prep_slave_sg_pkt(struct udma_chan *uc, struct scatterlist *sgl, else ring_id = k3_ringacc_get_ring_id(uc->tchan->tc_ring); + if (uc->ud->match_data->type == DMA_TYPE_UDMA) + asel = 0; + else + asel = (u64)uc->config.asel << K3_ADDRESS_ASEL_SHIFT; + for_each_sg(sgl, sgent, sglen, i) { struct udma_hwdesc *hwdesc = &d->hwdesc[i]; dma_addr_t sg_addr = sg_dma_address(sgent); @@ -2906,14 +3189,16 @@ udma_prep_slave_sg_pkt(struct udma_chan *uc, struct scatterlist *sgl, } /* attach the sg buffer to the descriptor */ + sg_addr |= asel; cppi5_hdesc_attach_buf(desc, sg_addr, sg_len, sg_addr, sg_len); /* Attach link as host buffer descriptor */ if (h_desc) cppi5_hdesc_link_hbdesc(h_desc, - hwdesc->cppi5_desc_paddr); + hwdesc->cppi5_desc_paddr | asel); - if (dir == DMA_MEM_TO_DEV) + if (uc->ud->match_data->type == DMA_TYPE_PKTDMA || + dir == DMA_MEM_TO_DEV) h_desc = desc; } @@ -3192,6 +3477,9 @@ udma_prep_dma_cyclic_pkt(struct udma_chan *uc, dma_addr_t buf_addr, else ring_id = k3_ringacc_get_ring_id(uc->tchan->tc_ring); + if (uc->ud->match_data->type != DMA_TYPE_UDMA) + buf_addr |= (u64)uc->config.asel << K3_ADDRESS_ASEL_SHIFT; + for (i = 0; i < periods; i++) { struct udma_hwdesc *hwdesc = &d->hwdesc[i]; dma_addr_t period_addr = buf_addr + (period_len * i); @@ -3708,6 +3996,7 @@ static void udma_free_chan_resources(struct dma_chan *chan) static struct platform_driver udma_driver; static struct platform_driver bcdma_driver; +static struct platform_driver pktdma_driver; struct udma_filter_param { int remote_thread_id; @@ -3725,7 +4014,8 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param) struct udma_dev *ud; if (chan->device->dev->driver != &udma_driver.driver && - chan->device->dev->driver != &bcdma_driver.driver) + chan->device->dev->driver != &bcdma_driver.driver && + chan->device->dev->driver != &pktdma_driver.driver) return false; uc = to_udma_chan(chan); @@ -3787,6 +4077,15 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param) ucc->notdpkt = ep_config->notdpkt; ucc->ep_type = ep_config->ep_type; + if (ud->match_data->type == DMA_TYPE_PKTDMA && + ep_config->mapped_channel_id >= 0) { + ucc->mapped_channel_id = ep_config->mapped_channel_id; + ucc->default_flow_id = ep_config->default_flow_id; + } else { + ucc->mapped_channel_id = -1; + ucc->default_flow_id = -1; + } + if (ucc->ep_type != PSIL_EP_NATIVE) { const struct udma_match_data *match_data = ud->match_data; @@ -3903,6 +4202,14 @@ static struct udma_match_data am64_bcdma_data = { .statictr_z_mask = GENMASK(23, 0), }; +static struct udma_match_data am64_pktdma_data = { + .type = DMA_TYPE_PKTDMA, + .psil_base = 0x1000, + .enable_memcpy_support = false, /* PKTDMA does not support 
MEM_TO_MEM */ + .flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE, + .statictr_z_mask = GENMASK(23, 0), +}; + static const struct of_device_id udma_of_match[] = { { .compatible = "ti,am654-navss-main-udmap", @@ -3929,6 +4236,14 @@ static const struct of_device_id bcdma_of_match[] = { { /* Sentinel */ }, }; +static const struct of_device_id pktdma_of_match[] = { + { + .compatible = "ti,am64-dmss-pktdma", + .data = &am64_pktdma_data, + }, + { /* Sentinel */ }, +}; + static struct udma_soc_data am654_soc_data = { .oes = { .udma_rchan = 0x200, @@ -3955,6 +4270,8 @@ static struct udma_soc_data am64_soc_data = { .bcdma_tchan_ring = 0x2a00, .bcdma_rchan_data = 0x2e00, .bcdma_rchan_ring = 0x3000, + .pktdma_tchan_flow = 0x1200, + .pktdma_rchan_flow = 0x1600, }, .bcdma_trigger_event_offset = 0xc400, }; @@ -3970,7 +4287,7 @@ static const struct soc_device_attribute k3_soc_devices[] = { static int udma_get_mmrs(struct platform_device *pdev, struct udma_dev *ud) { struct resource *res; - u32 cap2, cap3; + u32 cap2, cap3, cap4; int i; res = platform_get_resource_byname(pdev, IORESOURCE_MEM, @@ -3994,6 +4311,13 @@ static int udma_get_mmrs(struct platform_device *pdev, struct udma_dev *ud) ud->tchan_cnt = BCDMA_CAP2_TCHAN_CNT(cap2); ud->rchan_cnt = BCDMA_CAP2_RCHAN_CNT(cap2); break; + case DMA_TYPE_PKTDMA: + cap4 = udma_read(ud->mmrs[MMR_GCFG], 0x30); + ud->tchan_cnt = UDMA_CAP2_TCHAN_CNT(cap2); + ud->rchan_cnt = UDMA_CAP2_RCHAN_CNT(cap2); + ud->rflow_cnt = UDMA_CAP3_RFLOW_CNT(cap3); + ud->tflow_cnt = PKTDMA_CAP4_TFLOW_CNT(cap4); + break; default: return -EINVAL; } @@ -4031,7 +4355,8 @@ static const char * const range_names[] = { [RM_RANGE_BCHAN] = "ti,sci-rm-range-bchan", [RM_RANGE_TCHAN] = "ti,sci-rm-range-tchan", [RM_RANGE_RCHAN] = "ti,sci-rm-range-rchan", - [RM_RANGE_RFLOW] = "ti,sci-rm-range-rflow" + [RM_RANGE_RFLOW] = "ti,sci-rm-range-rflow", + [RM_RANGE_TFLOW] = "ti,sci-rm-range-tflow", }; static int udma_setup_resources(struct udma_dev *ud) @@ -4106,7 +4431,7 @@ static int udma_setup_resources(struct udma_dev *ud) /* Get resource ranges from tisci */ for (i = 0; i < RM_RANGE_LAST; i++) { - if (i == RM_RANGE_BCHAN) + if (i == RM_RANGE_BCHAN || i == RM_RANGE_TFLOW) continue; tisci_rm->rm_ranges[i] = @@ -4256,7 +4581,7 @@ static int bcdma_setup_resources(struct udma_dev *ud) /* Get resource ranges from tisci */ for (i = 0; i < RM_RANGE_LAST; i++) { - if (i == RM_RANGE_RFLOW) + if (i == RM_RANGE_RFLOW || i == RM_RANGE_TFLOW) continue; if (i == RM_RANGE_BCHAN && ud->bchan_cnt == 0) continue; @@ -4362,6 +4687,135 @@ static int bcdma_setup_resources(struct udma_dev *ud) return 0; } +static int pktdma_setup_resources(struct udma_dev *ud) +{ + int ret, i, j; + struct device *dev = ud->dev; + struct ti_sci_resource *rm_res, irq_res; + struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; + const struct udma_oes_offsets *oes = &ud->soc_data->oes; + u32 cap3; + + /* Set up the throughput level start indexes */ + cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c); + if (UDMA_CAP3_UCHAN_CNT(cap3)) { + ud->tchan_tpl.levels = 3; + ud->tchan_tpl.start_idx[1] = UDMA_CAP3_UCHAN_CNT(cap3); + ud->tchan_tpl.start_idx[0] = ud->tchan_tpl.start_idx[1] + + UDMA_CAP3_HCHAN_CNT(cap3); + } else if (UDMA_CAP3_HCHAN_CNT(cap3)) { + ud->tchan_tpl.levels = 2; + ud->tchan_tpl.start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3); + } else { + ud->tchan_tpl.levels = 1; + } + + ud->rchan_tpl.levels = ud->tchan_tpl.levels; + ud->rchan_tpl.start_idx[0] = ud->tchan_tpl.start_idx[0]; + ud->rchan_tpl.start_idx[1] = ud->tchan_tpl.start_idx[1];
+ + ud->tchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->tchan_cnt), + sizeof(unsigned long), GFP_KERNEL); + ud->tchans = devm_kcalloc(dev, ud->tchan_cnt, sizeof(*ud->tchans), + GFP_KERNEL); + ud->rchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->rchan_cnt), + sizeof(unsigned long), GFP_KERNEL); + ud->rchans = devm_kcalloc(dev, ud->rchan_cnt, sizeof(*ud->rchans), + GFP_KERNEL); + ud->rflow_in_use = devm_kcalloc(dev, BITS_TO_LONGS(ud->rflow_cnt), + sizeof(unsigned long), + GFP_KERNEL); + ud->rflows = devm_kcalloc(dev, ud->rflow_cnt, sizeof(*ud->rflows), + GFP_KERNEL); + ud->tflow_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->tflow_cnt), + sizeof(unsigned long), GFP_KERNEL); + + if (!ud->tchan_map || !ud->rchan_map || !ud->tflow_map || !ud->tchans || + !ud->rchans || !ud->rflows || !ud->rflow_in_use) + return -ENOMEM; + + /* Get resource ranges from tisci */ + for (i = 0; i < RM_RANGE_LAST; i++) { + if (i == RM_RANGE_BCHAN) + continue; + + tisci_rm->rm_ranges[i] = + devm_ti_sci_get_of_resource(tisci_rm->tisci, dev, + tisci_rm->tisci_dev_id, + (char *)range_names[i]); + } + + /* tchan ranges */ + rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN]; + if (IS_ERR(rm_res)) { + bitmap_zero(ud->tchan_map, ud->tchan_cnt); + } else { + bitmap_fill(ud->tchan_map, ud->tchan_cnt); + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->tchan_map, + &rm_res->desc[i], "tchan"); + } + + /* rchan ranges */ + rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN]; + if (IS_ERR(rm_res)) { + bitmap_zero(ud->rchan_map, ud->rchan_cnt); + } else { + bitmap_fill(ud->rchan_map, ud->rchan_cnt); + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->rchan_map, + &rm_res->desc[i], "rchan"); + } + + /* rflow ranges */ + rm_res = tisci_rm->rm_ranges[RM_RANGE_RFLOW]; + if (IS_ERR(rm_res)) { + /* all rflows are assigned exclusively to Linux */ + bitmap_zero(ud->rflow_in_use, ud->rflow_cnt); + } else { + bitmap_fill(ud->rflow_in_use, ud->rflow_cnt); + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->rflow_in_use, + &rm_res->desc[i], "rflow"); + } + irq_res.sets = rm_res->sets; + + /* tflow ranges */ + rm_res = tisci_rm->rm_ranges[RM_RANGE_TFLOW]; + if (IS_ERR(rm_res)) { + /* all tflows are assigned exclusively to Linux */ + bitmap_zero(ud->tflow_map, ud->tflow_cnt); + } else { + bitmap_fill(ud->tflow_map, ud->tflow_cnt); + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->tflow_map, + &rm_res->desc[i], "tflow"); + } + irq_res.sets += rm_res->sets; + + irq_res.desc = kcalloc(irq_res.sets, sizeof(*irq_res.desc), GFP_KERNEL); + rm_res = tisci_rm->rm_ranges[RM_RANGE_TFLOW]; + for (i = 0; i < rm_res->sets; i++) { + irq_res.desc[i].start = rm_res->desc[i].start + + oes->pktdma_tchan_flow; + irq_res.desc[i].num = rm_res->desc[i].num; + } + rm_res = tisci_rm->rm_ranges[RM_RANGE_RFLOW]; + for (j = 0; j < rm_res->sets; j++, i++) { + irq_res.desc[i].start = rm_res->desc[j].start + + oes->pktdma_rchan_flow; + irq_res.desc[i].num = rm_res->desc[j].num; + } + ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res); + kfree(irq_res.desc); + if (ret) { + dev_err(ud->dev, "Failed to allocate MSI interrupts\n"); + return ret; + } + + return 0; +} + static int setup_resources(struct udma_dev *ud) { struct device *dev = ud->dev; @@ -4374,6 +4828,9 @@ static int setup_resources(struct udma_dev *ud) case DMA_TYPE_BCDMA: ret = bcdma_setup_resources(ud); break; + case DMA_TYPE_PKTDMA: + ret = pktdma_setup_resources(ud); + break; default: return -EINVAL; } @@ -4417,6 
+4874,14 @@ static int setup_resources(struct udma_dev *ud) ud->rchan_cnt - bitmap_weight(ud->rchan_map, ud->rchan_cnt)); break; + case DMA_TYPE_PKTDMA: + dev_info(dev, + "Channels: %d (tchan: %u, rchan: %u)\n", + ch_count, + ud->tchan_cnt - bitmap_weight(ud->tchan_map, + ud->tchan_cnt), + ud->rchan_cnt - bitmap_weight(ud->rchan_map, + ud->rchan_cnt)); + break; default: break; } @@ -4547,10 +5012,14 @@ static void udma_dbg_summary_show_chan(struct seq_file *s, case DMA_DEV_TO_MEM: seq_printf(s, "rchan%d [0x%04x -> 0x%04x], ", uc->rchan->id, ucc->src_thread, ucc->dst_thread); + if (uc->ud->match_data->type == DMA_TYPE_PKTDMA) + seq_printf(s, "rflow%d, ", uc->rflow->id); break; case DMA_MEM_TO_DEV: seq_printf(s, "tchan%d [0x%04x -> 0x%04x], ", uc->tchan->id, ucc->src_thread, ucc->dst_thread); + if (uc->ud->match_data->type == DMA_TYPE_PKTDMA) + seq_printf(s, "tflow%d, ", uc->tchan->tflow_id); break; default: seq_printf(s, ")\n"); @@ -4613,8 +5082,10 @@ static int udma_probe(struct platform_device *pdev) return -ENOMEM; match = of_match_node(udma_of_match, dev->of_node); - if (!match) { + if (!match) match = of_match_node(bcdma_of_match, dev->of_node); + if (!match) { + match = of_match_node(pktdma_of_match, dev->of_node); if (!match) { dev_err(dev, "No compatible match found\n"); return -ENODEV; @@ -4678,8 +5149,14 @@ static int udma_probe(struct platform_device *pdev) ring_init_data.tisci = ud->tisci_rm.tisci; ring_init_data.tisci_dev_id = ud->tisci_rm.tisci_dev_id; - ring_init_data.num_rings = ud->bchan_cnt + ud->tchan_cnt + - ud->rchan_cnt; + if (ud->match_data->type == DMA_TYPE_BCDMA) { + ring_init_data.num_rings = ud->bchan_cnt + + ud->tchan_cnt + + ud->rchan_cnt; + } else { + ring_init_data.num_rings = ud->rflow_cnt + + ud->tflow_cnt; + } ud->ringacc = k3_ringacc_dmarings_init(pdev, &ring_init_data); } @@ -4695,11 +5172,14 @@ static int udma_probe(struct platform_device *pdev) } dma_cap_set(DMA_SLAVE, ud->ddev.cap_mask); - dma_cap_set(DMA_CYCLIC, ud->ddev.cap_mask); + /* cyclic operation is not supported via PKTDMA */ + if (ud->match_data->type != DMA_TYPE_PKTDMA) { + dma_cap_set(DMA_CYCLIC, ud->ddev.cap_mask); + ud->ddev.device_prep_dma_cyclic = udma_prep_dma_cyclic; + } ud->ddev.device_config = udma_slave_config; ud->ddev.device_prep_slave_sg = udma_prep_slave_sg; - ud->ddev.device_prep_dma_cyclic = udma_prep_dma_cyclic; ud->ddev.device_issue_pending = udma_issue_pending; ud->ddev.device_tx_status = udma_tx_status; ud->ddev.device_pause = udma_pause; @@ -4720,6 +5200,10 @@ static int udma_probe(struct platform_device *pdev) bcdma_alloc_chan_resources; ud->ddev.device_router_config = bcdma_router_config; break; + case DMA_TYPE_PKTDMA: + ud->ddev.device_alloc_chan_resources = + pktdma_alloc_chan_resources; + break; default: return -EINVAL; } @@ -4798,6 +5282,8 @@ static int udma_probe(struct platform_device *pdev) uc->tchan = NULL; uc->rchan = NULL; uc->config.remote_thread_id = -1; + uc->config.mapped_channel_id = -1; + uc->config.default_flow_id = -1; uc->config.dir = DMA_MEM_TO_MEM; uc->name = devm_kasprintf(dev, GFP_KERNEL, "%s chan%d", dev_name(dev), i); @@ -4847,5 +5333,15 @@ static struct platform_driver bcdma_driver = { }; builtin_platform_driver(bcdma_driver); +static struct platform_driver pktdma_driver = { + .driver = { + .name = "ti-pktdma", + .of_match_table = pktdma_of_match, + .suppress_bind_attrs = true, + }, + .probe = udma_probe, +}; +builtin_platform_driver(pktdma_driver); + /* Private interfaces to UDMA */ #include "k3-udma-private.c" diff --git a/drivers/dma/ti/k3-udma.h 
b/drivers/dma/ti/k3-udma.h index 8cb32681afaf..078cc3aa4126 100644 --- a/drivers/dma/ti/k3-udma.h +++ b/drivers/dma/ti/k3-udma.h @@ -55,6 +55,8 @@ #define BCDMA_CAP4_HTCHAN_CNT(val) (((val) >> 16) & 0xff) #define BCDMA_CAP4_UTCHAN_CNT(val) (((val) >> 24) & 0xff) +#define PKTDMA_CAP4_TFLOW_CNT(val) ((val) & 0x3fff) + /* UDMA_CHAN_RT_CTL_REG */ #define UDMA_CHAN_RT_CTL_EN BIT(31) #define UDMA_CHAN_RT_CTL_TDOWN BIT(30) @@ -105,6 +107,7 @@ enum udma_rm_range { RM_RANGE_TCHAN, RM_RANGE_RCHAN, RM_RANGE_RFLOW, + RM_RANGE_TFLOW, RM_RANGE_LAST, }; @@ -151,5 +154,6 @@ void xudma_tchanrt_write(struct udma_tchan *tchan, int reg, u32 val); u32 xudma_rchanrt_read(struct udma_rchan *rchan, int reg); void xudma_rchanrt_write(struct udma_rchan *rchan, int reg, u32 val); bool xudma_rflow_is_gp(struct udma_dev *ud, int id); +int xudma_get_rflow_ring_offset(struct udma_dev *ud); #endif /* K3_UDMA_H_ */
From patchwork Wed Sep 30 09:14:12 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 313869
From: Peter Ujfalusi
Subject: [PATCH 18/18] dmaengine: ti: k3-udma-glue: Add support for K3 PKTDMA
Date: Wed, 30 Sep 2020 12:14:12 +0300
Message-ID: <20200930091412.8020-19-peter.ujfalusi@ti.com>
In-Reply-To: <20200930091412.8020-1-peter.ujfalusi@ti.com>
References: <20200930091412.8020-1-peter.ujfalusi@ti.com>
List-ID: X-Mailing-List: devicetree@vger.kernel.org
From: Vignesh Raghavendra This commit adds support for PKTDMA in the k3-udma glue driver. Use the new psil_endpoint_config struct to get static data for a given channel or flow during setup. Make sure that the RX flows being mapped to an RX channel are within the range of flows that has been allocated to that RX channel. Signed-off-by: Vignesh Raghavendra Signed-off-by: Peter Ujfalusi --- drivers/dma/ti/k3-udma-glue.c | 272 ++++++++++++++++++++++++++----- drivers/dma/ti/k3-udma-private.c | 24 +++ drivers/dma/ti/k3-udma.h | 4 + include/linux/dma/k3-udma-glue.h | 8 + 4 files changed, 270 insertions(+), 38 deletions(-) -- Peter Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. 
diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
index f39825ce288a..6730bc296043 100644
--- a/drivers/dma/ti/k3-udma-glue.c
+++ b/drivers/dma/ti/k3-udma-glue.c
@@ -22,6 +22,7 @@
 
 struct k3_udma_glue_common {
 	struct device *dev;
+	struct device chan_dev;
 	struct udma_dev *udmax;
 	const struct udma_tisci_rm *tisci_rm;
 	struct k3_ringacc *ringacc;
@@ -32,7 +33,8 @@ struct k3_udma_glue_common {
 	bool epib;
 	u32 psdata_size;
 	u32 swdata_size;
-	u32 atype;
+	u32 atype_asel;
+	struct psil_endpoint_config *ep_config;
 };
 
 struct k3_udma_glue_tx_channel {
@@ -53,6 +55,8 @@ struct k3_udma_glue_tx_channel {
 	bool tx_filt_einfo;
 	bool tx_filt_pswords;
 	bool tx_supr_tdpkt;
+
+	int udma_tflow_id;
 };
 
 struct k3_udma_glue_rx_flow {
@@ -104,7 +108,6 @@ static int of_k3_udma_glue_parse_chn(struct device_node *chn_np,
 				     const char *name,
 				     struct k3_udma_glue_common *common,
 				     bool tx_chn)
 {
-	struct psil_endpoint_config *ep_config;
 	struct of_phandle_args dma_spec;
 	u32 thread_id;
 	int ret = 0;
@@ -121,15 +124,26 @@ static int of_k3_udma_glue_parse_chn(struct device_node *chn_np,
 			       &dma_spec))
 		return -ENOENT;
 
+	ret = of_k3_udma_glue_parse(dma_spec.np, common);
+	if (ret)
+		goto out_put_spec;
+
 	thread_id = dma_spec.args[0];
 	if (dma_spec.args_count == 2) {
-		if (dma_spec.args[1] > 2) {
+		if (dma_spec.args[1] > 2 && !xudma_is_pktdma(common->udmax)) {
 			dev_err(common->dev, "Invalid channel atype: %u\n",
 				dma_spec.args[1]);
 			ret = -EINVAL;
 			goto out_put_spec;
 		}
-		common->atype = dma_spec.args[1];
+		if (dma_spec.args[1] > 15 && xudma_is_pktdma(common->udmax)) {
+			dev_err(common->dev, "Invalid channel asel: %u\n",
+				dma_spec.args[1]);
+			ret = -EINVAL;
+			goto out_put_spec;
+		}
+
+		common->atype_asel = dma_spec.args[1];
 	}
 
 	if (tx_chn && !(thread_id & K3_PSIL_DST_THREAD_ID_OFFSET)) {
@@ -143,25 +157,23 @@ static int of_k3_udma_glue_parse_chn(struct device_node *chn_np,
 	}
 
 	/* get psil endpoint config */
-	ep_config = psil_get_ep_config(thread_id);
-	if (IS_ERR(ep_config)) {
+	common->ep_config = psil_get_ep_config(thread_id);
+	if (IS_ERR(common->ep_config)) {
 		dev_err(common->dev,
 			"No configuration for psi-l thread 0x%04x\n",
 			thread_id);
-		ret = PTR_ERR(ep_config);
+		ret = PTR_ERR(common->ep_config);
 		goto out_put_spec;
 	}
 
-	common->epib = ep_config->needs_epib;
-	common->psdata_size = ep_config->psd_size;
+	common->epib = common->ep_config->needs_epib;
+	common->psdata_size = common->ep_config->psd_size;
 
 	if (tx_chn)
 		common->dst_thread = thread_id;
 	else
 		common->src_thread = thread_id;
 
-	ret = of_k3_udma_glue_parse(dma_spec.np, common);
-
 out_put_spec:
 	of_node_put(dma_spec.np);
 	return ret;
@@ -227,7 +239,7 @@ static int k3_udma_glue_cfg_tx_chn(struct k3_udma_glue_tx_channel *tx_chn)
 		req.tx_supr_tdpkt = 1;
 	req.tx_fetch_size = tx_chn->common.hdesc_size >> 2;
 	req.txcq_qnum = k3_ringacc_get_ring_id(tx_chn->ringtxcq);
-	req.tx_atype = tx_chn->common.atype;
+	req.tx_atype = tx_chn->common.atype_asel;
 
 	return tisci_rm->tisci_udmap_ops->tx_ch_cfg(tisci_rm->tisci, &req);
 }
@@ -259,8 +271,14 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
 						tx_chn->common.psdata_size,
 						tx_chn->common.swdata_size);
 
+	if (xudma_is_pktdma(tx_chn->common.udmax))
+		tx_chn->udma_tchan_id = tx_chn->common.ep_config->mapped_channel_id;
+	else
+		tx_chn->udma_tchan_id = -1;
+
 	/* request and cfg UDMAP TX channel */
-	tx_chn->udma_tchanx = xudma_tchan_get(tx_chn->common.udmax, -1);
+	tx_chn->udma_tchanx = xudma_tchan_get(tx_chn->common.udmax,
+					      tx_chn->udma_tchan_id);
 	if (IS_ERR(tx_chn->udma_tchanx)) {
 		ret = PTR_ERR(tx_chn->udma_tchanx);
 		dev_err(dev, "UDMAX tchanx get err %d\n", ret);
@@ -268,11 +286,33 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
 	}
 	tx_chn->udma_tchan_id = xudma_tchan_get_id(tx_chn->udma_tchanx);
 
+	tx_chn->common.chan_dev.parent = xudma_get_device(tx_chn->common.udmax);
+	dev_set_name(&tx_chn->common.chan_dev, "tchan%d-0x%04x",
+		     tx_chn->udma_tchan_id, tx_chn->common.dst_thread);
+	ret = device_register(&tx_chn->common.chan_dev);
+	if (ret) {
+		dev_err(dev, "Channel Device registration failed %d\n", ret);
+		tx_chn->common.chan_dev.parent = NULL;
+		goto err;
+	}
+
+	if (xudma_is_pktdma(tx_chn->common.udmax)) {
+		/* prepare the channel device as coherent */
+		tx_chn->common.chan_dev.dma_coherent = true;
+		dma_coerce_mask_and_coherent(&tx_chn->common.chan_dev,
+					     DMA_BIT_MASK(48));
+	}
+
 	atomic_set(&tx_chn->free_pkts, cfg->txcq_cfg.size);
 
+	if (xudma_is_pktdma(tx_chn->common.udmax))
+		tx_chn->udma_tflow_id = tx_chn->common.ep_config->default_flow_id;
+	else
+		tx_chn->udma_tflow_id = tx_chn->udma_tchan_id;
+
 	/* request and cfg rings */
 	ret = k3_ringacc_request_rings_pair(tx_chn->common.ringacc,
-					    tx_chn->udma_tchan_id, -1,
+					    tx_chn->udma_tflow_id, -1,
 					    &tx_chn->ringtx,
 					    &tx_chn->ringtxcq);
 	if (ret) {
@@ -284,6 +324,12 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
 	cfg->tx_cfg.dma_dev = k3_udma_glue_tx_get_dma_device(tx_chn);
 	cfg->txcq_cfg.dma_dev = cfg->tx_cfg.dma_dev;
 
+	/* Set the ASEL value for DMA rings of PKTDMA */
+	if (xudma_is_pktdma(tx_chn->common.udmax)) {
+		cfg->tx_cfg.asel = tx_chn->common.atype_asel;
+		cfg->txcq_cfg.asel = tx_chn->common.atype_asel;
+	}
+
 	ret = k3_ringacc_ring_cfg(tx_chn->ringtx, &cfg->tx_cfg);
 	if (ret) {
 		dev_err(dev, "Failed to cfg ringtx %d\n", ret);
@@ -348,6 +394,11 @@ void k3_udma_glue_release_tx_chn(struct k3_udma_glue_tx_channel *tx_chn)
 
 	if (tx_chn->ringtx)
 		k3_ringacc_ring_free(tx_chn->ringtx);
+
+	if (tx_chn->common.chan_dev.parent) {
+		device_unregister(&tx_chn->common.chan_dev);
+		tx_chn->common.chan_dev.parent = NULL;
+	}
 }
 EXPORT_SYMBOL_GPL(k3_udma_glue_release_tx_chn);
 
@@ -441,13 +492,10 @@ void k3_udma_glue_reset_tx_chn(struct k3_udma_glue_tx_channel *tx_chn,
 			       void *data,
 			       void (*cleanup)(void *data, dma_addr_t desc_dma))
 {
+	struct device *dev = tx_chn->common.dev;
 	dma_addr_t desc_dma;
 	int occ_tx, i, ret;
 
-	/* reset TXCQ as it is not input for udma - expected to be empty */
-	if (tx_chn->ringtxcq)
-		k3_ringacc_ring_reset(tx_chn->ringtxcq);
-
 	/*
 	 * TXQ reset need to be special way as it is input for udma and its
 	 * state cached by udma, so:
@@ -456,17 +504,20 @@ void k3_udma_glue_reset_tx_chn(struct k3_udma_glue_tx_channel *tx_chn,
 	 * 3) reset TXQ in a special way
 	 */
 	occ_tx = k3_ringacc_ring_get_occ(tx_chn->ringtx);
-	dev_dbg(tx_chn->common.dev, "TX reset occ_tx %u\n", occ_tx);
+	dev_dbg(dev, "TX reset occ_tx %u\n", occ_tx);
 
 	for (i = 0; i < occ_tx; i++) {
 		ret = k3_ringacc_ring_pop(tx_chn->ringtx, &desc_dma);
 		if (ret) {
-			dev_err(tx_chn->common.dev, "TX reset pop %d\n", ret);
+			if (ret != -ENODATA)
+				dev_err(dev, "TX reset pop %d\n", ret);
 			break;
 		}
 		cleanup(data, desc_dma);
 	}
 
+	/* reset TXCQ as it is not input for udma - expected to be empty */
+	k3_ringacc_ring_reset(tx_chn->ringtxcq);
+
 	k3_ringacc_ring_reset_dma(tx_chn->ringtx, occ_tx);
 }
 EXPORT_SYMBOL_GPL(k3_udma_glue_reset_tx_chn);
@@ -485,7 +536,12 @@ EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_txcq_id);
 
 int k3_udma_glue_tx_get_irq(struct k3_udma_glue_tx_channel *tx_chn)
 {
-	tx_chn->virq = k3_ringacc_get_ring_irq_num(tx_chn->ringtxcq);
+	if (xudma_is_pktdma(tx_chn->common.udmax)) {
+		tx_chn->virq = xudma_pktdma_tflow_get_irq(tx_chn->common.udmax,
+							  tx_chn->udma_tflow_id);
+	} else {
+		tx_chn->virq = k3_ringacc_get_ring_irq_num(tx_chn->ringtxcq);
+	}
 
 	return tx_chn->virq;
 }
@@ -494,10 +550,36 @@ EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_irq);
 
 struct device *
 k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn)
 {
+	if (xudma_is_pktdma(tx_chn->common.udmax) &&
+	    (tx_chn->common.atype_asel == 14 || tx_chn->common.atype_asel == 15))
+		return &tx_chn->common.chan_dev;
+
 	return xudma_get_device(tx_chn->common.udmax);
 }
 EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_dma_device);
 
+void k3_udma_glue_tx_dma_to_cppi5_addr(struct k3_udma_glue_tx_channel *tx_chn,
+				       dma_addr_t *addr)
+{
+	if (!xudma_is_pktdma(tx_chn->common.udmax) ||
+	    !tx_chn->common.atype_asel)
+		return;
+
+	*addr |= (u64)tx_chn->common.atype_asel << K3_ADDRESS_ASEL_SHIFT;
+}
+EXPORT_SYMBOL_GPL(k3_udma_glue_tx_dma_to_cppi5_addr);
+
+void k3_udma_glue_tx_cppi5_to_dma_addr(struct k3_udma_glue_tx_channel *tx_chn,
+				       dma_addr_t *addr)
+{
+	if (!xudma_is_pktdma(tx_chn->common.udmax) ||
+	    !tx_chn->common.atype_asel)
+		return;
+
+	*addr &= (u64)GENMASK(K3_ADDRESS_ASEL_SHIFT - 1, 0);
+}
+EXPORT_SYMBOL_GPL(k3_udma_glue_tx_cppi5_to_dma_addr);
+
 static int k3_udma_glue_cfg_rx_chn(struct k3_udma_glue_rx_channel *rx_chn)
 {
 	const struct udma_tisci_rm *tisci_rm = rx_chn->common.tisci_rm;
@@ -509,8 +591,6 @@ static int k3_udma_glue_cfg_rx_chn(struct k3_udma_glue_rx_channel *rx_chn)
 	req.valid_params = TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID |
 			   TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID |
 			   TI_SCI_MSG_VALUE_RM_UDMAP_CH_CHAN_TYPE_VALID |
-			   TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_START_VALID |
-			   TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_CNT_VALID |
 			   TI_SCI_MSG_VALUE_RM_UDMAP_CH_ATYPE_VALID;
 
 	req.nav_id = tisci_rm->tisci_dev_id;
@@ -522,13 +602,16 @@ static int k3_udma_glue_cfg_rx_chn(struct k3_udma_glue_rx_channel *rx_chn)
 	 * req.rxcq_qnum = k3_ringacc_get_ring_id(rx_chn->flows[0].ringrx);
 	 */
 	req.rxcq_qnum = 0xFFFF;
-	if (rx_chn->flow_num && rx_chn->flow_id_base != rx_chn->udma_rchan_id) {
+	if (!xudma_is_pktdma(rx_chn->common.udmax) && rx_chn->flow_num &&
+	    rx_chn->flow_id_base != rx_chn->udma_rchan_id) {
 		/* Default flow + extra ones */
+		req.valid_params |= TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_START_VALID |
+				    TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_CNT_VALID;
 		req.flowid_start = rx_chn->flow_id_base;
 		req.flowid_cnt = rx_chn->flow_num;
 	}
 	req.rx_chan_type = TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR;
-	req.rx_atype = rx_chn->common.atype;
+	req.rx_atype = rx_chn->common.atype_asel;
 
 	ret = tisci_rm->tisci_udmap_ops->rx_ch_cfg(tisci_rm->tisci, &req);
 	if (ret)
@@ -582,10 +665,18 @@ static int k3_udma_glue_cfg_rx_flow(struct k3_udma_glue_rx_channel *rx_chn,
 		goto err_rflow_put;
 	}
 
+	if (xudma_is_pktdma(rx_chn->common.udmax)) {
+		rx_ring_id = flow->udma_rflow_id +
+			     xudma_get_rflow_ring_offset(rx_chn->common.udmax);
+		rx_ringfdq_id = 0;
+	} else {
+		rx_ring_id = flow_cfg->ring_rxq_id;
+		rx_ringfdq_id = flow_cfg->ring_rxfdq0_id;
+	}
+
 	/* request and cfg rings */
 	ret = k3_ringacc_request_rings_pair(rx_chn->common.ringacc,
-					    flow_cfg->ring_rxfdq0_id,
-					    flow_cfg->ring_rxq_id,
+					    rx_ringfdq_id, rx_ring_id,
 					    &flow->ringrxfdq,
 					    &flow->ringrx);
 	if (ret) {
@@ -597,6 +688,12 @@ static int k3_udma_glue_cfg_rx_flow(struct k3_udma_glue_rx_channel *rx_chn,
 	flow_cfg->rx_cfg.dma_dev = k3_udma_glue_rx_get_dma_device(rx_chn);
 	flow_cfg->rxfdq_cfg.dma_dev = flow_cfg->rx_cfg.dma_dev;
 
+	/* Set the ASEL value for DMA rings of PKTDMA */
+	if (xudma_is_pktdma(rx_chn->common.udmax)) {
+		flow_cfg->rx_cfg.asel = rx_chn->common.atype_asel;
+		flow_cfg->rxfdq_cfg.asel = rx_chn->common.atype_asel;
+	}
+
 	ret = k3_ringacc_ring_cfg(flow->ringrx, &flow_cfg->rx_cfg);
 	if (ret) {
 		dev_err(dev, "Failed to cfg ringrx %d\n", ret);
@@ -755,6 +852,7 @@ k3_udma_glue_request_rx_chn_priv(struct device *dev, const char *name,
 				 struct k3_udma_glue_rx_channel_cfg *cfg)
 {
 	struct k3_udma_glue_rx_channel *rx_chn;
+	struct psil_endpoint_config *ep_cfg;
 	int ret, i;
 
 	if (cfg->flow_id_num <= 0)
@@ -782,8 +880,16 @@ k3_udma_glue_request_rx_chn_priv(struct device *dev, const char *name,
 						rx_chn->common.psdata_size,
 						rx_chn->common.swdata_size);
 
+	ep_cfg = rx_chn->common.ep_config;
+
+	if (xudma_is_pktdma(rx_chn->common.udmax))
+		rx_chn->udma_rchan_id = ep_cfg->mapped_channel_id;
+	else
+		rx_chn->udma_rchan_id = -1;
+
 	/* request and cfg UDMAP RX channel */
-	rx_chn->udma_rchanx = xudma_rchan_get(rx_chn->common.udmax, -1);
+	rx_chn->udma_rchanx = xudma_rchan_get(rx_chn->common.udmax,
+					      rx_chn->udma_rchan_id);
 	if (IS_ERR(rx_chn->udma_rchanx)) {
 		ret = PTR_ERR(rx_chn->udma_rchanx);
 		dev_err(dev, "UDMAX rchanx get err %d\n", ret);
@@ -791,12 +897,47 @@ k3_udma_glue_request_rx_chn_priv(struct device *dev, const char *name,
 	}
 	rx_chn->udma_rchan_id = xudma_rchan_get_id(rx_chn->udma_rchanx);
 
-	rx_chn->flow_num = cfg->flow_id_num;
-	rx_chn->flow_id_base = cfg->flow_id_base;
+	rx_chn->common.chan_dev.parent = xudma_get_device(rx_chn->common.udmax);
+	dev_set_name(&rx_chn->common.chan_dev, "rchan%d-0x%04x",
+		     rx_chn->udma_rchan_id, rx_chn->common.src_thread);
+	ret = device_register(&rx_chn->common.chan_dev);
+	if (ret) {
+		dev_err(dev, "Channel Device registration failed %d\n", ret);
+		rx_chn->common.chan_dev.parent = NULL;
+		goto err;
+	}
 
-	/* Use RX channel id as flow id: target dev can't generate flow_id */
-	if (cfg->flow_id_use_rxchan_id)
-		rx_chn->flow_id_base = rx_chn->udma_rchan_id;
+	if (xudma_is_pktdma(rx_chn->common.udmax)) {
+		/* prepare the channel device as coherent */
+		rx_chn->common.chan_dev.dma_coherent = true;
+		dma_coerce_mask_and_coherent(&rx_chn->common.chan_dev,
+					     DMA_BIT_MASK(48));
+	}
+
+	if (xudma_is_pktdma(rx_chn->common.udmax)) {
+		int flow_start = cfg->flow_id_base;
+		int flow_end;
+
+		if (flow_start == -1)
+			flow_start = ep_cfg->flow_start;
+
+		flow_end = flow_start + cfg->flow_id_num - 1;
+		if (flow_start < ep_cfg->flow_start ||
+		    flow_end > (ep_cfg->flow_start + ep_cfg->flow_num - 1)) {
+			dev_err(dev, "Invalid flow range requested\n");
+			ret = -EINVAL;
+			goto err;
+		}
+		rx_chn->flow_id_base = flow_start;
+	} else {
+		rx_chn->flow_id_base = cfg->flow_id_base;
+
+		/* Use RX channel id as flow id: target dev can't generate flow_id */
+		if (cfg->flow_id_use_rxchan_id)
+			rx_chn->flow_id_base = rx_chn->udma_rchan_id;
+	}
+
+	rx_chn->flow_num = cfg->flow_id_num;
 
 	rx_chn->flows = devm_kcalloc(dev, rx_chn->flow_num,
 				     sizeof(*rx_chn->flows), GFP_KERNEL);
@@ -899,6 +1040,23 @@ k3_udma_glue_request_remote_rx_chn(struct device *dev, const char *name,
 		goto err;
 	}
 
+	rx_chn->common.chan_dev.parent = xudma_get_device(rx_chn->common.udmax);
+	dev_set_name(&rx_chn->common.chan_dev, "rchan_remote-0x%04x",
+		     rx_chn->common.src_thread);
+	ret = device_register(&rx_chn->common.chan_dev);
+	if (ret) {
+		dev_err(dev, "Channel Device registration failed %d\n", ret);
+		rx_chn->common.chan_dev.parent = NULL;
+		goto err;
+	}
+
+	if (xudma_is_pktdma(rx_chn->common.udmax)) {
+		/* prepare the channel device as coherent */
+		rx_chn->common.chan_dev.dma_coherent = true;
+		dma_coerce_mask_and_coherent(&rx_chn->common.chan_dev,
+					     DMA_BIT_MASK(48));
+	}
+
 	ret = k3_udma_glue_allocate_rx_flows(rx_chn, cfg);
 	if (ret)
 		goto err;
@@ -951,6 +1109,11 @@ void k3_udma_glue_release_rx_chn(struct k3_udma_glue_rx_channel *rx_chn)
 
 	if (!IS_ERR_OR_NULL(rx_chn->udma_rchanx))
 		xudma_rchan_put(rx_chn->common.udmax,
 				rx_chn->udma_rchanx);
+
+	if (rx_chn->common.chan_dev.parent) {
+		device_unregister(&rx_chn->common.chan_dev);
+		rx_chn->common.chan_dev.parent = NULL;
+	}
 }
 EXPORT_SYMBOL_GPL(k3_udma_glue_release_rx_chn);
 
@@ -1143,12 +1306,10 @@ void k3_udma_glue_reset_rx_chn(struct k3_udma_glue_rx_channel *rx_chn,
 	/* reset RXCQ as it is not input for udma - expected to be empty */
 	occ_rx = k3_ringacc_ring_get_occ(flow->ringrx);
 	dev_dbg(dev, "RX reset flow %u occ_rx %u\n", flow_num, occ_rx);
-	if (flow->ringrx)
-		k3_ringacc_ring_reset(flow->ringrx);
 
 	/* Skip RX FDQ in case one FDQ is used for the set of flows */
 	if (skip_fdq)
-		return;
+		goto do_reset;
 
 	/*
 	 * RX FDQ reset need to be special way as it is input for udma and its
@@ -1163,13 +1324,17 @@ void k3_udma_glue_reset_rx_chn(struct k3_udma_glue_rx_channel *rx_chn,
 	for (i = 0; i < occ_rx; i++) {
 		ret = k3_ringacc_ring_pop(flow->ringrxfdq, &desc_dma);
 		if (ret) {
-			dev_err(dev, "RX reset pop %d\n", ret);
+			if (ret != -ENODATA)
+				dev_err(dev, "RX reset pop %d\n", ret);
 			break;
 		}
 		cleanup(data, desc_dma);
 	}
 
 	k3_ringacc_ring_reset_dma(flow->ringrxfdq, occ_rx);
+
+do_reset:
+	k3_ringacc_ring_reset(flow->ringrx);
 }
 EXPORT_SYMBOL_GPL(k3_udma_glue_reset_rx_chn);
 
@@ -1199,7 +1364,12 @@ int k3_udma_glue_rx_get_irq(struct k3_udma_glue_rx_channel *rx_chn,
 
 	flow = &rx_chn->flows[flow_num];
 
-	flow->virq = k3_ringacc_get_ring_irq_num(flow->ringrx);
+	if (xudma_is_pktdma(rx_chn->common.udmax)) {
+		flow->virq = xudma_pktdma_rflow_get_irq(rx_chn->common.udmax,
+							flow->udma_rflow_id);
+	} else {
+		flow->virq = k3_ringacc_get_ring_irq_num(flow->ringrx);
+	}
 
 	return flow->virq;
 }
@@ -1208,6 +1378,32 @@ EXPORT_SYMBOL_GPL(k3_udma_glue_rx_get_irq);
 
 struct device *
 k3_udma_glue_rx_get_dma_device(struct k3_udma_glue_rx_channel *rx_chn)
 {
+	if (xudma_is_pktdma(rx_chn->common.udmax) &&
+	    (rx_chn->common.atype_asel == 14 || rx_chn->common.atype_asel == 15))
+		return &rx_chn->common.chan_dev;
+
 	return xudma_get_device(rx_chn->common.udmax);
 }
 EXPORT_SYMBOL_GPL(k3_udma_glue_rx_get_dma_device);
 
+void k3_udma_glue_rx_dma_to_cppi5_addr(struct k3_udma_glue_rx_channel *rx_chn,
+				       dma_addr_t *addr)
+{
+	if (!xudma_is_pktdma(rx_chn->common.udmax) ||
+	    !rx_chn->common.atype_asel)
+		return;
+
+	*addr |= (u64)rx_chn->common.atype_asel << K3_ADDRESS_ASEL_SHIFT;
+}
+EXPORT_SYMBOL_GPL(k3_udma_glue_rx_dma_to_cppi5_addr);
+
+void k3_udma_glue_rx_cppi5_to_dma_addr(struct k3_udma_glue_rx_channel *rx_chn,
+				       dma_addr_t *addr)
+{
+	if (!xudma_is_pktdma(rx_chn->common.udmax) ||
+	    !rx_chn->common.atype_asel)
+		return;
+
+	*addr &= (u64)GENMASK(K3_ADDRESS_ASEL_SHIFT - 1, 0);
+}
+EXPORT_SYMBOL_GPL(k3_udma_glue_rx_cppi5_to_dma_addr);
diff --git a/drivers/dma/ti/k3-udma-private.c b/drivers/dma/ti/k3-udma-private.c
index f0cecd29cff1..cef4890cfa42 100644
--- a/drivers/dma/ti/k3-udma-private.c
+++ b/drivers/dma/ti/k3-udma-private.c
@@ -151,3 +151,27 @@ void xudma_##res##rt_write(struct udma_##res *p, int reg, u32 val)	\
 EXPORT_SYMBOL(xudma_##res##rt_write)
 
 XUDMA_RT_IO_FUNCTIONS(tchan);
 XUDMA_RT_IO_FUNCTIONS(rchan);
+
+int xudma_is_pktdma(struct udma_dev *ud)
+{
+	return ud->match_data->type == DMA_TYPE_PKTDMA;
+}
+EXPORT_SYMBOL(xudma_is_pktdma);
+
+int xudma_pktdma_tflow_get_irq(struct udma_dev *ud, int udma_tflow_id)
+{
+	const struct udma_oes_offsets *oes = &ud->soc_data->oes;
+
+	return ti_sci_inta_msi_get_virq(ud->dev, udma_tflow_id +
+					oes->pktdma_tchan_flow);
+}
+EXPORT_SYMBOL(xudma_pktdma_tflow_get_irq);
+
+int xudma_pktdma_rflow_get_irq(struct udma_dev *ud, int udma_rflow_id)
+{
+	const struct udma_oes_offsets *oes = &ud->soc_data->oes;
+
+	return ti_sci_inta_msi_get_virq(ud->dev, udma_rflow_id +
+					oes->pktdma_rchan_flow);
+}
+EXPORT_SYMBOL(xudma_pktdma_rflow_get_irq);
diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h
index 078cc3aa4126..c02080bb5866 100644
--- a/drivers/dma/ti/k3-udma.h
+++ b/drivers/dma/ti/k3-udma.h
@@ -156,4 +156,8 @@ void xudma_rchanrt_write(struct udma_rchan *rchan, int reg, u32 val);
 bool xudma_rflow_is_gp(struct udma_dev *ud, int id);
 int xudma_get_rflow_ring_offset(struct udma_dev *ud);
+int xudma_is_pktdma(struct udma_dev *ud);
+
+int xudma_pktdma_tflow_get_irq(struct udma_dev *ud, int udma_tflow_id);
+int xudma_pktdma_rflow_get_irq(struct udma_dev *ud, int udma_rflow_id);
 
 #endif /* K3_UDMA_H_ */
diff --git a/include/linux/dma/k3-udma-glue.h b/include/linux/dma/k3-udma-glue.h
index d7c12f31377c..e443be4d3b4b 100644
--- a/include/linux/dma/k3-udma-glue.h
+++ b/include/linux/dma/k3-udma-glue.h
@@ -43,6 +43,10 @@ u32 k3_udma_glue_tx_get_txcq_id(struct k3_udma_glue_tx_channel *tx_chn);
 int k3_udma_glue_tx_get_irq(struct k3_udma_glue_tx_channel *tx_chn);
 struct device *
 k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn);
+void k3_udma_glue_tx_dma_to_cppi5_addr(struct k3_udma_glue_tx_channel *tx_chn,
+				       dma_addr_t *addr);
+void k3_udma_glue_tx_cppi5_to_dma_addr(struct k3_udma_glue_tx_channel *tx_chn,
+				       dma_addr_t *addr);
 
 enum {
 	K3_UDMA_GLUE_SRC_TAG_LO_KEEP = 0,
@@ -134,5 +138,9 @@ int k3_udma_glue_rx_flow_disable(struct k3_udma_glue_rx_channel *rx_chn,
 				 u32 flow_idx);
 struct device *
 k3_udma_glue_rx_get_dma_device(struct k3_udma_glue_rx_channel *rx_chn);
+void k3_udma_glue_rx_dma_to_cppi5_addr(struct k3_udma_glue_rx_channel *rx_chn,
+				       dma_addr_t *addr);
+void k3_udma_glue_rx_cppi5_to_dma_addr(struct k3_udma_glue_rx_channel *rx_chn,
+				       dma_addr_t *addr);
 
 #endif /* K3_UDMA_GLUE_H_ */
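[Editorial note] For context on the new dma_to_cppi5/cppi5_to_dma converters declared above: with PKTDMA and a non-zero asel, the ASEL value is carried in the address bits above bit 47 (K3_ADDRESS_ASEL_SHIFT), so a descriptor address handed to the hardware must be encoded on submit and stripped on completion before it can be used for CPU-side lookups. A hedged usage sketch of how a glue-layer client might wrap the TX helpers; it assumes the pre-existing k3_udma_glue_push_tx_chn()/k3_udma_glue_pop_tx_chn() API, trims error handling, and the example_* function names are hypothetical:

#include <linux/dma/k3-udma-glue.h>
#include <linux/dma/ti-cppi5.h>

static int example_tx_submit(struct k3_udma_glue_tx_channel *tx_chn,
			     struct cppi5_host_desc_t *desc,
			     dma_addr_t desc_dma)
{
	/* encode the ASEL bits into the address the hardware will see */
	k3_udma_glue_tx_dma_to_cppi5_addr(tx_chn, &desc_dma);

	return k3_udma_glue_push_tx_chn(tx_chn, desc, desc_dma);
}

static dma_addr_t example_tx_complete(struct k3_udma_glue_tx_channel *tx_chn)
{
	dma_addr_t desc_dma;

	if (k3_udma_glue_pop_tx_chn(tx_chn, &desc_dma))
		return 0;

	/* strip the ASEL bits so desc_dma can be mapped back to a CPU pointer */
	k3_udma_glue_tx_cppi5_to_dma_addr(tx_chn, &desc_dma);

	return desc_dma;
}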