From patchwork Thu Sep 28 07:06:40 2023
X-Patchwork-Submitter: Herve Codina
X-Patchwork-Id: 727890
From: Herve Codina
To: Herve Codina, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Andrew Lunn, Rob Herring, Krzysztof Kozlowski,
    Conor Dooley, Linus Walleij, Qiang Zhao, Li Yang, Liam Girdwood,
    Mark Brown, Jaroslav Kysela, Takashi Iwai, Shengjiu Wang, Xiubo Li,
    Fabio Estevam, Nicolin Chen, Christophe Leroy, Randy Dunlap
Cc: netdev@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-gpio@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    alsa-devel@alsa-project.org, Simon Horman, Christophe JAILLET,
    Thomas Petazzoni
Subject: [PATCH v7 22/30] soc: fsl: cpm1: qmc: Introduce functions to change
 timeslots at runtime
Date: Thu, 28 Sep 2023 09:06:40 +0200
Message-ID: <20230928070652.330429-23-herve.codina@bootlin.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230928070652.330429-1-herve.codina@bootlin.com>
References: <20230928070652.330429-1-herve.codina@bootlin.com>
X-Mailing-List: linux-gpio@vger.kernel.org

Introduce the qmc_chan_{get,set}_ts_info() functions to allow timeslot
modification at runtime. The new timeslots are set using
qmc_chan_set_ts_info() and are applied on the next qmc_chan_start().
qmc_chan_set_ts_info() must be called with the channel's rx and/or tx
stopped.
Signed-off-by: Herve Codina
Reviewed-by: Christophe Leroy
---
 drivers/soc/fsl/qe/qmc.c | 51 ++++++++++++++++++++++++++++++++++++++++
 include/soc/fsl/qe/qmc.h | 10 ++++++++
 2 files changed, 61 insertions(+)

diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c
index b63b54ec0a3a..6e22b96b4e7a 100644
--- a/drivers/soc/fsl/qe/qmc.c
+++ b/drivers/soc/fsl/qe/qmc.c
@@ -290,6 +290,57 @@ int qmc_chan_get_info(struct qmc_chan *chan, struct qmc_chan_info *info)
 }
 EXPORT_SYMBOL(qmc_chan_get_info);
 
+int qmc_chan_get_ts_info(struct qmc_chan *chan, struct qmc_chan_ts_info *ts_info)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->ts_lock, flags);
+
+	ts_info->rx_ts_mask_avail = chan->rx_ts_mask_avail;
+	ts_info->tx_ts_mask_avail = chan->tx_ts_mask_avail;
+	ts_info->rx_ts_mask = chan->rx_ts_mask;
+	ts_info->tx_ts_mask = chan->tx_ts_mask;
+
+	spin_unlock_irqrestore(&chan->ts_lock, flags);
+
+	return 0;
+}
+EXPORT_SYMBOL(qmc_chan_get_ts_info);
+
+int qmc_chan_set_ts_info(struct qmc_chan *chan, const struct qmc_chan_ts_info *ts_info)
+{
+	unsigned long flags;
+	int ret;
+
+	/* Only a subset of available timeslots is allowed */
+	if ((ts_info->rx_ts_mask & chan->rx_ts_mask_avail) != ts_info->rx_ts_mask)
+		return -EINVAL;
+	if ((ts_info->tx_ts_mask & chan->tx_ts_mask_avail) != ts_info->tx_ts_mask)
+		return -EINVAL;
+
+	/* In case of common rx/tx table, rx/tx masks must be identical */
+	if (chan->qmc->is_tsa_64rxtx) {
+		if (ts_info->rx_ts_mask != ts_info->tx_ts_mask)
+			return -EINVAL;
+	}
+
+	spin_lock_irqsave(&chan->ts_lock, flags);
+
+	if ((chan->tx_ts_mask != ts_info->tx_ts_mask && !chan->is_tx_stopped) ||
+	    (chan->rx_ts_mask != ts_info->rx_ts_mask && !chan->is_rx_stopped)) {
+		dev_err(chan->qmc->dev, "Channel rx and/or tx not stopped\n");
+		ret = -EBUSY;
+	} else {
+		chan->tx_ts_mask = ts_info->tx_ts_mask;
+		chan->rx_ts_mask = ts_info->rx_ts_mask;
+		ret = 0;
+	}
+	spin_unlock_irqrestore(&chan->ts_lock, flags);
+
+	return ret;
+}
+EXPORT_SYMBOL(qmc_chan_set_ts_info);
+
 int qmc_chan_set_param(struct qmc_chan *chan, const struct qmc_chan_param *param)
 {
 	if (param->mode != chan->mode)
diff --git a/include/soc/fsl/qe/qmc.h b/include/soc/fsl/qe/qmc.h
index 166484bb4294..2a333fc1ea81 100644
--- a/include/soc/fsl/qe/qmc.h
+++ b/include/soc/fsl/qe/qmc.h
@@ -40,6 +40,16 @@ struct qmc_chan_info {
 
 int qmc_chan_get_info(struct qmc_chan *chan, struct qmc_chan_info *info);
 
+struct qmc_chan_ts_info {
+	u64	rx_ts_mask_avail;
+	u64	tx_ts_mask_avail;
+	u64	rx_ts_mask;
+	u64	tx_ts_mask;
+};
+
+int qmc_chan_get_ts_info(struct qmc_chan *chan, struct qmc_chan_ts_info *ts_info);
+int qmc_chan_set_ts_info(struct qmc_chan *chan, const struct qmc_chan_ts_info *ts_info);
+
 struct qmc_chan_param {
 	enum qmc_mode mode;
 	union {
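
For illustration only (not part of the patch): a minimal caller sketch of the
sequence described in the commit message. reconfigure_timeslots() and
new_ts_mask are hypothetical names, and the sketch assumes the pre-existing
qmc_chan_stop()/qmc_chan_start() helpers with a channel handle that the
consumer has already obtained from the QMC driver.

/* Hypothetical consumer: change a channel's timeslots, then restart it. */
static int reconfigure_timeslots(struct qmc_chan *chan, u64 new_ts_mask)
{
	struct qmc_chan_ts_info ts_info;
	int ret;

	/* Both directions must be stopped before changing timeslots */
	ret = qmc_chan_stop(chan, QMC_CHAN_ALL);
	if (ret)
		return ret;

	ret = qmc_chan_get_ts_info(chan, &ts_info);
	if (ret)
		return ret;

	/*
	 * The new masks must be a subset of the available timeslots.
	 * On a common 64-entry rx/tx table, rx and tx masks must also
	 * be identical.
	 */
	ts_info.rx_ts_mask = ts_info.rx_ts_mask_avail & new_ts_mask;
	ts_info.tx_ts_mask = ts_info.tx_ts_mask_avail & new_ts_mask;

	ret = qmc_chan_set_ts_info(chan, &ts_info);
	if (ret)
		return ret;

	/* The updated timeslots take effect on the next start */
	return qmc_chan_start(chan, QMC_CHAN_ALL);
}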