
[RFC,00/14] Introduce QC USB SND audio offloading support

Message ID 20221223233200.26089-1-quic_wcheng@quicinc.com

Message

Wesley Cheng Dec. 23, 2022, 11:31 p.m. UTC
Several Qualcomm based chipsets can support USB audio offloading to a
dedicated audio DSP, which can take over issuing transfers to the USB
host controller.  The intention is to reduce the load on the main
processors in the SoC, and allow them to be placed into lower power modes.
There are several parts to this design:
  1. Adding ASoC binding layer
  2. Create a USB backend for Q6DSP
  3. Introduce XHCI interrupter support
  4. Create vendor ops for the USB SND driver

Adding ASoC binding layer:
soc-usb: The intention is to treat a USB port similarly to a headphone jack.
The port is always present on the device, but cable/pin status can be
enabled/disabled.  Expose mechanisms for USB backend ASoC drivers to
communicate with USB SND.

Create a USB backend for Q6DSP:
q6usb: Basic backend driver responsible for maintaining the resources
needed to initiate a playback stream using the Q6DSP.  It is the entity
that checks whether the connected USB audio device supports the requested
PCM format.  If it does not, the PCM open call will fail, and userspace
ALSA can take action accordingly.

Introduce XHCI interrupter support:
The XHCI HCD supports multiple interrupters, which allows events to be routed
to different event rings.  This is determined by the "Interrupter Target" field
specified in Section 6.4.1.1 ("Normal TRB") of the xHCI specification.

Events in the offloading case will be routed to an event ring that is assigned
to the audio DSP.

Create vendor ops for the USB SND driver:
qc_audio_offload: This particular driver has several components associated
with it:
- QMI stream request handler
- XHCI interrupter and resource management
- audio DSP memory management

When the audio DSP wants to enable a playback stream, the request is first
received by the ASoC platform sound card.  Depending on the selected route,
ASoC will bring up the individual DAIs in the path.  The Q6USB backend DAI
will send an AFE port start command (enabling the USB playback path), and
the audio DSP will handle the request accordingly.

Part of the AFE USB port start handling involves an exchange of control
messages using the QMI protocol.  The qc_audio_offload driver will populate
the buffer information:
- Event ring base address
- EP transfer ring base address

and pass it along to the audio DSP.  All endpoint management will now be handed
over to the DSP, and the main processor is not involved in transfers.

Overall, implementing this feature will still expose separate sound card and
PCM devices for both the platform card and the USB audio device:
 0 [SM8250MTPWCD938]: sm8250 - SM8250-MTP-WCD9380-WSA8810-VA-D
                      SM8250-MTP-WCD9380-WSA8810-VA-DMIC
 1 [Audio          ]: USB-Audio - USB Audio
                      Generic USB Audio at usb-xhci-hcd.1.auto-1.4, high speed

This is to ensure that userspace ALSA entities can decide which route to take
when executing audio playback.  In the above, if card#1 is selected, then the
USB audio data will take the legacy path through the USB PCM drivers.

This feature was validated using:
- tinymix: set/enable the multimedia path to route to USB backend
- tinyplay: issue playback on platform card

Wesley Cheng (14):
  ASoC: Add SOC USB APIs for adding an USB backend
  ASoC: qcom: qdsp6: Introduce USB AFE port to q6dsp
  ASoC: qcom: Add USB backend ASoC driver for Q6
  sound: usb: card: Introduce USB SND vendor op callbacks
  sound: usb: Export USB SND APIs for modules
  usb: core: hcd: Introduce USB HCD APIs for interrupter management
  usb: host: xhci: Add XHCI secondary interrupter support
  usb: dwc3: Add DT parameter to specify maximum number of interrupters
  sound: usb: Introduce QC USB SND offloading support
  sound: usb: card: Check for support for requested audio format
  sound: soc: soc-usb: Add PCM format check API for USB backend
  sound: soc: qcom: qusb6: Ensure PCM format is supported by USB audio
    device
  ASoC: dt-bindings: Add Q6USB backend bindings
  ASoC: dt-bindings: Update example for enabling USB offload on SM8250

 .../bindings/sound/qcom,q6usb-dais.yaml       |   55 +
 .../bindings/sound/qcom,sm8250.yaml           |   13 +
 drivers/usb/core/hcd.c                        |   86 +
 drivers/usb/dwc3/core.c                       |   12 +
 drivers/usb/dwc3/core.h                       |    2 +
 drivers/usb/dwc3/host.c                       |    5 +-
 drivers/usb/host/xhci-mem.c                   |  219 ++-
 drivers/usb/host/xhci-plat.c                  |    2 +
 drivers/usb/host/xhci.c                       |  169 +-
 drivers/usb/host/xhci.h                       |   15 +
 .../sound/qcom,q6dsp-lpass-ports.h            |    1 +
 include/linux/usb.h                           |    7 +
 include/linux/usb/hcd.h                       |   16 +-
 include/sound/q6usboffload.h                  |   20 +
 include/sound/soc-usb.h                       |   34 +
 sound/soc/Makefile                            |    2 +-
 sound/soc/qcom/Kconfig                        |    4 +
 sound/soc/qcom/qdsp6/Makefile                 |    1 +
 sound/soc/qcom/qdsp6/q6afe-dai.c              |   47 +
 sound/soc/qcom/qdsp6/q6afe.c                  |  183 ++
 sound/soc/qcom/qdsp6/q6afe.h                  |   46 +-
 sound/soc/qcom/qdsp6/q6dsp-lpass-ports.c      |   23 +
 sound/soc/qcom/qdsp6/q6dsp-lpass-ports.h      |    1 +
 sound/soc/qcom/qdsp6/q6routing.c              |    8 +
 sound/soc/qcom/qdsp6/q6usb.c                  |  239 +++
 sound/soc/soc-usb.c                           |   79 +
 sound/usb/Kconfig                             |   14 +
 sound/usb/Makefile                            |    2 +-
 sound/usb/card.c                              |   43 +
 sound/usb/card.h                              |   10 +
 sound/usb/endpoint.c                          |    2 +
 sound/usb/helper.c                            |    1 +
 sound/usb/pcm.c                               |    9 +-
 sound/usb/pcm.h                               |   12 +
 sound/usb/qcom/Makefile                       |    2 +
 sound/usb/qcom/qc_audio_offload.c             | 1610 +++++++++++++++++
 sound/usb/qcom/usb_audio_qmi_v01.c            |  892 +++++++++
 sound/usb/qcom/usb_audio_qmi_v01.h            |  162 ++
 38 files changed, 3998 insertions(+), 50 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/sound/qcom,q6usb-dais.yaml
 create mode 100644 include/sound/q6usboffload.h
 create mode 100644 include/sound/soc-usb.h
 create mode 100644 sound/soc/qcom/qdsp6/q6usb.c
 create mode 100644 sound/soc/soc-usb.c
 create mode 100644 sound/usb/qcom/Makefile
 create mode 100644 sound/usb/qcom/qc_audio_offload.c
 create mode 100644 sound/usb/qcom/usb_audio_qmi_v01.c
 create mode 100644 sound/usb/qcom/usb_audio_qmi_v01.h

Comments

Greg Kroah-Hartman Dec. 24, 2022, 6:45 a.m. UTC | #1
On Fri, Dec 23, 2022 at 03:31:46PM -0800, Wesley Cheng wrote:
> Several Qualcomm based chipsets can support USB audio offloading to a
> dedicated audio DSP, which can take over issuing transfers to the USB
> host controller.  The intention is to reduce the load on the main
> processors in the SoC, and allow them to be placed into lower power modes.

[...]

This looks to duplicate a bunch of the same things that a number of
different google developers have posted recently.  Please work with them
to come up with a unified set of patches that you all can agree with,
AND get them to sign off on the changes before resubmitting them.

This uncoordinated drip of patches from different people doing the same
thing is almost impossible to review from our side, as I'm sure you can
imagine.

That being said, thank you finally for at least submitting all of the
needed changes together as one patch set.  That's a first, and something
we had been asking for for years.

Have a good holiday break,

greg k-h
Sergey Shtylyov Dec. 24, 2022, 8:19 a.m. UTC | #2
Hello!

On 12/24/22 2:31 AM, Wesley Cheng wrote:

> Check for if the PCM format is supported during the hw_params callback.  If
> the profile is not supported then the userspace ALSA entity will receive an
> error, and can take further action.
> 
> Signed-off-by: Wesley Cheng <quic_wcheng@quicinc.com>
> ---
>  sound/soc/qcom/qdsp6/q6usb.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/sound/soc/qcom/qdsp6/q6usb.c b/sound/soc/qcom/qdsp6/q6usb.c
> index a9da6dec6c6f..128e0974db4e 100644
> --- a/sound/soc/qcom/qdsp6/q6usb.c
> +++ b/sound/soc/qcom/qdsp6/q6usb.c
> @@ -42,7 +42,14 @@ static int q6usb_hw_params(struct snd_pcm_substream *substream,
>  			   struct snd_pcm_hw_params *params,
>  			   struct snd_soc_dai *dai)
>  {
> -	return 0;
> +	struct q6usb_port_data *data = dev_get_drvdata(dai->dev);
> +	int direction = substream->stream;
> +	int ret;

   You don't seem to need this variable, just use *return*
snd_soc_usb_find_format(...).

> +
> +	ret = snd_soc_usb_find_format(data->active_idx, params, direction);
> +
> +	return ret;
> +
>  }
>  static const struct snd_soc_dai_ops q6usb_ops = {
>  	.hw_params	= q6usb_hw_params,

MBR, Sergey
Greg Kroah-Hartman Dec. 24, 2022, 9:02 a.m. UTC | #3
On Fri, Dec 23, 2022 at 03:31:49PM -0800, Wesley Cheng wrote:
> Create a USB BE component that will register a new USB port to the ASoC USB
> framework.  This will handle determination on if the requested audio
> profile is supported by the USB device currently selected.
> 
> Signed-off-by: Wesley Cheng <quic_wcheng@quicinc.com>
> ---
>  include/sound/q6usboffload.h  |  20 +++
>  sound/soc/qcom/Kconfig        |   4 +
>  sound/soc/qcom/qdsp6/Makefile |   1 +
>  sound/soc/qcom/qdsp6/q6usb.c  | 232 ++++++++++++++++++++++++++++++++++
>  4 files changed, 257 insertions(+)
>  create mode 100644 include/sound/q6usboffload.h
>  create mode 100644 sound/soc/qcom/qdsp6/q6usb.c
> 
> diff --git a/include/sound/q6usboffload.h b/include/sound/q6usboffload.h
> new file mode 100644
> index 000000000000..e576808901d9
> --- /dev/null
> +++ b/include/sound/q6usboffload.h
> @@ -0,0 +1,20 @@
> +/* SPDX-License-Identifier: GPL-2.0
> + *
> + * linux/sound/q6usboffload.h -- QDSP6 USB offload
> + *
> + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
> + */
> +
> +/**
> + * struct q6usb_offload
> + * @dev - dev handle to usb be

"be"?  What is that?

> + * @sid - streamID for iommu
> + * @intr_num - usb interrupter number
> + * @domain - allocated iommu domain
> + **/
> +struct q6usb_offload {
> +	struct device *dev;

Do you properly reference count this?

> +	u32 sid;
> +	u32 intr_num;
> +	struct iommu_domain *domain;
> +};

What is the lifetime of this structure?  Who owns it?  Where is the lock
for accessing it?

> diff --git a/sound/soc/qcom/Kconfig b/sound/soc/qcom/Kconfig
> index 8c7398bc1ca8..d65c365116e5 100644
> --- a/sound/soc/qcom/Kconfig
> +++ b/sound/soc/qcom/Kconfig
> @@ -111,6 +111,9 @@ config SND_SOC_QDSP6_APM
>  config SND_SOC_QDSP6_PRM_LPASS_CLOCKS
>  	tristate
>  
> +config SND_SOC_QDSP6_USB
> +	tristate
> +
>  config SND_SOC_QDSP6_PRM
>  	tristate
>  	select SND_SOC_QDSP6_PRM_LPASS_CLOCKS
> @@ -131,6 +134,7 @@ config SND_SOC_QDSP6
>  	select SND_SOC_TOPOLOGY
>  	select SND_SOC_QDSP6_APM
>  	select SND_SOC_QDSP6_PRM
> +	select SND_SOC_QDSP6_USB
>  	help
>  	 To add support for MSM QDSP6 Soc Audio.
>  	 This will enable sound soc platform specific
> diff --git a/sound/soc/qcom/qdsp6/Makefile b/sound/soc/qcom/qdsp6/Makefile
> index 3963bf234664..c9457ee898d0 100644
> --- a/sound/soc/qcom/qdsp6/Makefile
> +++ b/sound/soc/qcom/qdsp6/Makefile
> @@ -17,3 +17,4 @@ obj-$(CONFIG_SND_SOC_QDSP6_APM_DAI) += q6apm-dai.o
>  obj-$(CONFIG_SND_SOC_QDSP6_APM_LPASS_DAI) += q6apm-lpass-dais.o
>  obj-$(CONFIG_SND_SOC_QDSP6_PRM) += q6prm.o
>  obj-$(CONFIG_SND_SOC_QDSP6_PRM_LPASS_CLOCKS) += q6prm-clocks.o
> +obj-$(CONFIG_SND_SOC_QDSP6_USB) += q6usb.o
> diff --git a/sound/soc/qcom/qdsp6/q6usb.c b/sound/soc/qcom/qdsp6/q6usb.c
> new file mode 100644
> index 000000000000..a9da6dec6c6f
> --- /dev/null
> +++ b/sound/soc/qcom/qdsp6/q6usb.c
> @@ -0,0 +1,232 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.

All of the code in this patch series is older than 2022 as I know it has
been in shipping devices for many years.  Please use the proper
copyright year to make your lawyers happy...

> + */
> +
> +#include <linux/err.h>
> +#include <linux/init.h>
> +#include <linux/module.h>
> +#include <linux/device.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include <linux/iommu.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/dma-map-ops.h>
> +
> +#include <sound/pcm.h>
> +#include <sound/soc.h>
> +#include <sound/soc-usb.h>
> +#include <sound/pcm_params.h>
> +#include <sound/asound.h>
> +#include <sound/q6usboffload.h>
> +
> +#include "q6dsp-lpass-ports.h"
> +#include "q6afe.h"
> +
> +struct q6usb_port_data {
> +	struct q6afe_usb_cfg usb_cfg;
> +	struct snd_soc_usb *usb;
> +	struct q6usb_offload priv;
> +	int active_idx;
> +};
> +
> +static const struct snd_soc_dapm_widget q6usb_dai_widgets[] = {
> +	SND_SOC_DAPM_HP("USB_RX_BE", NULL),
> +};
> +
> +static const struct snd_soc_dapm_route q6usb_dapm_routes[] = {
> +	{"USB Playback", NULL, "USB_RX_BE"},
> +};

No terminating entry?  How does this not break?  Why do you need to
specify the size of the array, that feels like a design bug somewhere.


> +
> +static int q6usb_hw_params(struct snd_pcm_substream *substream,
> +			   struct snd_pcm_hw_params *params,
> +			   struct snd_soc_dai *dai)
> +{
> +	return 0;
> +}
> +static const struct snd_soc_dai_ops q6usb_ops = {
> +	.hw_params	= q6usb_hw_params,
> +};
> +
> +static struct snd_soc_dai_driver q6usb_be_dais[] = {
> +	{
> +		.playback = {
> +			.stream_name = "USB BE RX",
> +			.rates = SNDRV_PCM_RATE_8000 | SNDRV_PCM_RATE_11025 |
> +					SNDRV_PCM_RATE_16000 | SNDRV_PCM_RATE_22050 |
> +					SNDRV_PCM_RATE_32000 | SNDRV_PCM_RATE_44100 |
> +					SNDRV_PCM_RATE_48000 | SNDRV_PCM_RATE_96000 |
> +					SNDRV_PCM_RATE_192000,
> +			.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S16_BE |
> +					SNDRV_PCM_FMTBIT_U16_LE | SNDRV_PCM_FMTBIT_U16_BE |
> +					SNDRV_PCM_FMTBIT_S24_LE | SNDRV_PCM_FMTBIT_S24_BE |
> +					SNDRV_PCM_FMTBIT_U24_LE | SNDRV_PCM_FMTBIT_U24_BE,
> +			.channels_min = 1,
> +			.channels_max = 2,
> +			.rate_max =     192000,
> +			.rate_min =	8000,
> +		},
> +		.id = USB_RX,
> +		.name = "USB_RX_BE",
> +		.ops = &q6usb_ops,
> +	},
> +};
> +
> +int q6usb_audio_ports_of_xlate_dai_name(struct snd_soc_component *component,
> +					const struct of_phandle_args *args,
> +					const char **dai_name)
> +{
> +	int id = args->args[0];
> +	int ret = -EINVAL;
> +	int i;
> +
> +	for (i = 0; i  < ARRAY_SIZE(q6usb_be_dais); i++) {
> +		if (q6usb_be_dais[i].id == id) {
> +			*dai_name = q6usb_be_dais[i].name;
> +			ret = 0;
> +			break;
> +		}
> +	}
> +
> +	return ret;
> +}
> +
> +static int q6usb_component_probe(struct snd_soc_component *component)
> +{
> +	struct q6usb_port_data *data = dev_get_drvdata(component->dev);
> +	struct snd_soc_dapm_context *dapm = snd_soc_component_get_dapm(component);
> +
> +	data->usb->component = component;
> +
> +	snd_soc_dapm_disable_pin(dapm, "USB_RX_BE");
> +	snd_soc_dapm_sync(dapm);
> +
> +	return 0;
> +}
> +
> +static const struct snd_soc_component_driver q6usb_dai_component = {
> +	.probe		= q6usb_component_probe,
> +	.name		= "q6usb-dai-component",
> +	.dapm_widgets = q6usb_dai_widgets,
> +	.num_dapm_widgets = ARRAY_SIZE(q6usb_dai_widgets),
> +	.dapm_routes = q6usb_dapm_routes,
> +	.num_dapm_routes = ARRAY_SIZE(q6usb_dapm_routes),
> +	.of_xlate_dai_name = q6usb_audio_ports_of_xlate_dai_name,
> +};
> +
> +int q6usb_alsa_connection_cb(struct snd_soc_usb *usb, int card_idx,
> +			int connected)
> +{
> +	struct snd_soc_dapm_context *dapm;
> +	struct q6usb_port_data *data;
> +
> +	if (!usb->component)
> +		return 0;

How can this happen?

Why is this not an error?

> +
> +	dapm = snd_soc_component_get_dapm(usb->component);
> +	data = dev_get_drvdata(usb->component->dev);
> +
> +	if (connected) {
> +		snd_soc_dapm_enable_pin(dapm, "USB_RX_BE");
> +		/* We only track the latest USB headset plugged in */
> +		data->active_idx = card_idx;
> +	} else {
> +		snd_soc_dapm_disable_pin(dapm, "USB_RX_BE");
> +	}
> +	snd_soc_dapm_sync(dapm);
> +
> +	return 0;
> +}
> +
> +static int q6usb_dai_dev_probe(struct platform_device *pdev)
> +{
> +	struct device_node *node = pdev->dev.of_node;
> +	struct q6usb_port_data *data;
> +	struct device *dev = &pdev->dev;
> +	int ret;
> +
> +	data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
> +	if (!data)
> +		return -ENOMEM;
> +
> +	ret = of_property_read_u32(node, "qcom,usb-audio-stream-id",
> +				&data->priv.sid);
> +	if (ret) {
> +		dev_err(&pdev->dev, "failed to read sid.\n");
> +		return -ENODEV;
> +	}
> +
> +	ret = of_property_read_u32(node, "qcom,usb-audio-intr-num",
> +				&data->priv.intr_num);
> +	if (ret) {
> +		dev_err(&pdev->dev, "failed to read intr num.\n");
> +		return -ENODEV;
> +	}
> +
> +	data->priv.domain = iommu_domain_alloc(pdev->dev.bus);
> +	if (!data->priv.domain) {
> +		dev_err(&pdev->dev, "failed to allocate iommu domain\n");
> +		return -ENODEV;
> +	}
> +
> +	/* attach to external processor iommu */
> +	ret = iommu_attach_device(data->priv.domain, &pdev->dev);
> +	if (ret) {
> +		dev_err(&pdev->dev, "failed to attach device ret = %d\n", ret);
> +		goto free_domain;
> +	}
> +
> +	data->usb = snd_soc_usb_add_port(dev, q6usb_alsa_connection_cb);
> +	if (IS_ERR(data->usb)) {
> +		dev_err(&pdev->dev, "failed to add usb port\n");
> +		goto detach_device;
> +	}
> +
> +	data->priv.dev = dev;
> +	dev_set_drvdata(dev, data);
> +	devm_snd_soc_register_component(dev, &q6usb_dai_component,
> +							q6usb_be_dais, ARRAY_SIZE(q6usb_be_dais));

Very odd indentation.  Please do this properly everywhere.


> +	snd_soc_usb_set_priv_data(&data->priv);
> +
> +	return 0;
> +
> +detach_device:
> +	iommu_detach_device(data->priv.domain, &pdev->dev);
> +free_domain:
> +	iommu_domain_free(data->priv.domain);
> +
> +	return ret;
> +}
> +
> +static int q6usb_dai_dev_remove(struct platform_device *pdev)
> +{
> +	struct q6usb_port_data *data = platform_get_drvdata(pdev);
> +
> +	iommu_detach_device(data->priv.domain, &pdev->dev);
> +	iommu_domain_free(data->priv.domain);
> +
> +	snd_soc_usb_remove_port();
> +
> +	return 0;
> +}
> +
> +#ifdef CONFIG_OF

Is this really needed still?

thanks,

greg k-h
Mark Brown Dec. 27, 2022, 1:04 p.m. UTC | #4
On Sat, Dec 24, 2022 at 10:02:59AM +0100, Greg KH wrote:
> On Fri, Dec 23, 2022 at 03:31:49PM -0800, Wesley Cheng wrote:

> > + * struct q6usb_offload
> > + * @dev - dev handle to usb be

> "be"?  What is that?

Back end.  This is a concept in DPCM which should be reasonably
discoverable to people working on the audio portions of this code.

> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.

> All of the code in this patch series is older than 2022 as I know it has
> been in shipping devices for many years.  Please use the proper
> copyright year to make your lawyers happy...

Are you *positive* about this?  Based on some preparatory discussions
the Qualcomm people had with Takashi and me, it seemed like this was a new
version of existing concepts.  I'm sure they had something already, but
it's not obvious to me that they're just posting the same code.

> > +static const struct snd_soc_dapm_route q6usb_dapm_routes[] = {
> > +	{"USB Playback", NULL, "USB_RX_BE"},
> > +};

> No terminating entry?  How does this not break?  Why do you need to
> specify the size of the array, that feels like a design bug somewhere.

It's how the interface works, the number of entries is passed in when
adding routes.
Takashi Iwai Dec. 27, 2022, 2:02 p.m. UTC | #5
On Tue, 27 Dec 2022 14:45:13 +0100,
Greg KH wrote:
> 
> On Tue, Dec 27, 2022 at 01:04:55PM +0000, Mark Brown wrote:
> > On Sat, Dec 24, 2022 at 10:02:59AM +0100, Greg KH wrote:
> > > On Fri, Dec 23, 2022 at 03:31:49PM -0800, Wesley Cheng wrote:
> > 
> > > > + * struct q6usb_offload
> > > > + * @dev - dev handle to usb be
> > 
> > > "be"?  What is that?
> > 
> > Back end.  This is a concept in DPCM which should be reasonably
> > discoverable to people working on the audio portions of this code.
> 
> Ok, then how is the reference counting logic handled here?  USB devices
> can be removed from the system at any point in time...

The whole picture is fairly complex, and this patch is a part
belonging to the ASoC machine driver -- that is, it's bound to the
Qualcomm host, and there can be only one on a system.  

OTOH, USB audio devices are still managed by the existing USB audio
driver as-is, and there can be multiple, arbitrary devices.  The basic
idea here is to hijack the USB data processing in the USB audio driver
with the offloading mechanism in this ASoC driver (only if the
conditions are met).


thanks,

Takashi
Mark Brown Dec. 27, 2022, 3:11 p.m. UTC | #6
On Tue, Dec 27, 2022 at 02:45:13PM +0100, Greg KH wrote:
> On Tue, Dec 27, 2022 at 01:04:55PM +0000, Mark Brown wrote:
> > On Sat, Dec 24, 2022 at 10:02:59AM +0100, Greg KH wrote:

> > > > + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.

> > > All of the code in this patch series is older than 2022 as I know it has
> > > been in shipping devices for many years.  Please use the proper
> > > copyright year to make your lawyers happy...

> > Are you *positive* about this.  Based on some preparatory discussions
> > the Qualcomm people had with Takashi and I it seemed like this was a new
> > version of existing concepts.  I'm sure they had something already but
> > it's not obvious to me that they're just posting the same code.

> I thought that this same code has been shipping in devices for a few
> years now in the last few Samsung phone models.  Is this not the same
> code that is in those devices?

> If not, why not, what happened to that codebase that makes it not worthy
> of being submitted upstream?

I don't specifically know anything about that code but I'd expect that
for out of tree code breaking new ground like this there'd be a strong
likelihood that there'd be design-level issues, and that's what the pre-submission
discussions were all about - how to fit the concept into the
kernel subsystems in a way that might be maintainable.  There's also
been the whole transition to their new DSP firmware going on.  It's
possible that what was shipped was implemented in the same way with the
same code but I'd not assume that this is the case without actually
comparing the two.
Wesley Cheng Dec. 27, 2022, 9:06 p.m. UTC | #7
Hi Mark/Greg,

On 12/27/2022 7:11 AM, Mark Brown wrote:
> On Tue, Dec 27, 2022 at 02:45:13PM +0100, Greg KH wrote:
>> On Tue, Dec 27, 2022 at 01:04:55PM +0000, Mark Brown wrote:
>>> On Sat, Dec 24, 2022 at 10:02:59AM +0100, Greg KH wrote:
> 
>>>>> + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
> 
>>>> All of the code in this patch series is older than 2022 as I know it has
>>>> been in shipping devices for many years.  Please use the proper
>>>> copyright year to make your lawyers happy...
> 
>>> Are you *positive* about this.  Based on some preparatory discussions
>>> the Qualcomm people had with Takashi and I it seemed like this was a new
>>> version of existing concepts.  I'm sure they had something already but
>>> it's not obvious to me that they're just posting the same code.
> 
>> I thought that this same code has been shipping in devices for a few
>> years now in the last few Samsung phone models.  Is this not the same
>> code that is in those devices?
> 
>> If not, why not, what happened to that codebase that makes it not worthy
>> of being submitted upstream?
> 
> I don't specifically know anything about that code but I'd expect that
> for out of tree code breaking new ground like this there'd be a strong
> likelyhood that there'd be design level issues and that's what the pre
> submission discussions were all about - how to fit the concept into the
> kernel subsystems in a way that might be maintainable.  There's also
> been the whole transition to their new DSP firmware going on.  It's
> possible that what was shipped was implemented in the same way with the
> same code but I'd not assume that this is the case without actually
> comparing the two.

It's correct that all the ASoC related patches are new, and didn't exist 
previously in QC products.  This is because Android has a different ALSA 
userspace concept, which allowed a lot of the Q6 audio front end (AFE) 
communication to be done from userspace, I believe.  This is the reason 
we had to introduce this new ASoC based design.

I can't comment much more about the Android ALSA design, maybe @Patrick 
Lai can.

Thanks
Wesley Cheng
Wesley Cheng Dec. 27, 2022, 9:07 p.m. UTC | #8
Hi Alan,

On 12/24/2022 7:29 AM, Alan Stern wrote:
> On Fri, Dec 23, 2022 at 03:31:52PM -0800, Wesley Cheng wrote:
>> For USB HCDs that can support multiple USB interrupters, expose functions
>> that class drivers can utilize for setting up secondary interrupters.
>> Class drivers can pass this information to its respective clients, i.e.
>> a dedicated DSP.
>>
>> Signed-off-by: Wesley Cheng <quic_wcheng@quicinc.com>
>> ---
>>   drivers/usb/core/hcd.c  | 86 +++++++++++++++++++++++++++++++++++++++++
>>   include/linux/usb.h     |  7 ++++
>>   include/linux/usb/hcd.h | 16 +++++++-
>>   3 files changed, 108 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
>> index 8300baedafd2..90ead90faf1d 100644
>> --- a/drivers/usb/core/hcd.c
>> +++ b/drivers/usb/core/hcd.c
> 
>> +/**
>> + * usb_hcd_stop_endpoint - Halt USB EP transfers
>> + * @udev: usb device
>> + * @ep: usb ep to stop
>> + *
>> + * Stop pending transfers on a specific USB endpoint.
>> + **/
>> +int usb_hcd_stop_endpoint(struct usb_device *udev,
>> +					struct usb_host_endpoint *ep)
>> +{
>> +	struct usb_hcd *hcd = bus_to_hcd(udev->bus);
>> +	int ret = 0;
>> +
>> +	if (hcd->driver->stop_endpoint)
>> +		ret = hcd->driver->stop_endpoint(hcd, udev, ep);
>> +
>> +	return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(usb_hcd_stop_endpoint);
> 
> You know, there already is a function that does this.  It's named
> usb_hcd_flush_endpoint().  No need to add another function that does the
> same thing.
> 

Thanks for the suggestion and review.

Hmmm...maybe I should change the name of the API then to avoid the 
confusion.  Yes, usb_hcd_flush_endpoint() does ensure that URBs 
submitted to the EP are stopped.  However, with this offloading concept, 
we aren't actually submitting URBs from the main processor, so the 
ep->urb_list will be empty.

This means the usb_hcd_flush_endpoint() API won't actually do anything.
What we need is to ensure that we send an XHCI stop endpoint command to the
controller.

Thanks
Wesley Cheng
Wesley Cheng Dec. 27, 2022, 9:13 p.m. UTC | #9
Hi Greg,

On 12/24/2022 12:54 AM, Greg KH wrote:
> On Fri, Dec 23, 2022 at 03:31:52PM -0800, Wesley Cheng wrote:
>> For USB HCDs that can support multiple USB interrupters, expose functions
>> that class drivers can utilize for setting up secondary interrupters.
>> Class drivers can pass this information to its respective clients, i.e.
>> a dedicated DSP.
> 
> Where is the locking here that seems to be required when a hcd is
> removed from the system and you have data in flight?  What am I missing
> here in the design of this?

The XHCI driver is the one that maintains the list of interrupters that 
are available, so the locking was placed in the XHCI driver versus 
adding it in the core hcd layer.

> 
> And yes, HCDs get removed all the time, and will happen more and more in
> the future with the design of more systems moving to Thunderbolt/PCIe
> designs due to the simplicity of it.
> 

As part of the HCD removal, it has to first ensure that class driver 
interfaces, and the connected udevs are removed first.  qc_audio_offload 
will first handle the transfer cleanup and stopping of the audio stream 
before returning from the disconnect callback. (this includes ensuring 
that the interrupter is released)

This concept is how all usb class drivers are currently implemented. 
When the HCD remove occurs, the class drivers are the ones responsible 
for ensuring that all URBs are stopped, and removed before it returns 
from its respective disconnect callback.

>> +/**
>> + * usb_set_interruper - Reserve an interrupter
> 
> Where is an "interrupter" defined?  I don't know what this term means
> sorry, is this in the USB specification somewhere?
> 

An interrupter is defined in the XHCI specification; refer to section 
"4.17 Interrupters".

> 
>> + * @udev: usb device which requested the interrupter
>> + * @intr_num: interrupter number to reserve
>> + * @dma: iova address to event ring
>> + *
>> + * Request for a specific interrupter to be reserved for a USB class driver.
>> + * This will return the base address to the event ring that was allocated to
>> + * the specific interrupter.
>> + **/
>> +phys_addr_t usb_set_interruper(struct usb_device *udev, int intr_num,
>> +							dma_addr_t *dma)
>> +{
>> +	struct usb_hcd *hcd;
>> +	phys_addr_t pa = 0;
>> +
>> +	hcd = bus_to_hcd(udev->bus);
>> +	if (hcd->driver->update_interrupter)
>> +		pa = hcd->driver->update_interrupter(hcd, intr_num, dma);
>> +
>> +	return pa;
> 
> Wait, you return a physical address?  What are you going to do with
> that?  And what guarantees that the address is valid after you return it
> (again, remember memory and devices can be removed at any point in time.
> 

The interrupter is basically another event ring which is the buffer 
allocated for the controller to copy events into.  Since the audio dsp 
now takes over handling of the endpoint events, then it needs to know 
where the buffer resides.  Will fix the interruper typo as well.

The allocation and freeing of this event ring follow how XHCI allocates 
and frees the main event ring.  This API just reserves the interrupter 
for the class driver and returns the memory address that was previously 
allocated (during XHCI init).

Thanks
Wesley Cheng
Oliver Neukum Dec. 28, 2022, 8:59 a.m. UTC | #10
On 27.12.22 22:07, Wesley Cheng wrote:

> 
> Hmmm...maybe I should change the name of the API then to avoid the confusion.  Yes, usb_hcd_flush_endpoint() does ensure that URBs submitted to the EP are stopped.  However, with this offloading concept, we aren't actually submitting URBs from the main processor, so the ep->urb_list will be empty.
> 
> This means the usb_hcd_flush_endpoint() API won't actually do anything.  What we need is to ensure that we send a XHCI stop ep command to the controller.

That is a concept specific to XHCI, yet you are adding a generic
API. The naming should reflect that. usb_quiesce_endpoint() ?

	Regards
		Oliver
Wesley Cheng Dec. 28, 2022, 8:31 p.m. UTC | #11
Hi Alan,

On 12/28/2022 7:16 AM, Alan Stern wrote:
> On Wed, Dec 28, 2022 at 09:59:03AM +0100, Oliver Neukum wrote:
>>
>>
>> On 27.12.22 22:07, Wesley Cheng wrote:
>>
>>>
>>> Hmmm...maybe I should change the name of the API then to avoid the confusion.  Yes, usb_hcd_flush_endpoint() does ensure that URBs submitted to the EP are stopped.  However, with this offloading concept, we aren't actually submitting URBs from the main processor, so the ep->urb_list will be empty.
>>>
>>> This means the usb_hcd_flush_endpoint() API won't actually do anything.  What we need is to ensure that we send a XHCI stop ep command to the controller.
>>
>> That is a concept specific to XHCI, yet you are adding a generic
>> API. The naming should reflect that. usb_quiesce_endpoint() ?
> 
> Or even xhci_send_stop_ep_cmd(), which is what the routine is intended
> to do.
> 

Just to clarify, you're talking about renaming the API that was added in 
the XHCI driver, correct?

Thanks
Wesley Cheng
Takashi Iwai Jan. 2, 2023, 5:28 p.m. UTC | #12
On Sat, 24 Dec 2022 00:31:55 +0100,
Wesley Cheng wrote:
> 
> Several Qualcomm SoCs have a dedicated audio DSP, which has the ability to
> support USB sound devices.  This vendor driver will implement the required
> handshaking with the DSP, in order to pass along required resources that
> will be utilized by the DSP's USB SW.  The communication channel used for
> this handshaking will be using the QMI protocol.  Required resources
> include:
> - Allocated secondary event ring address
> - EP transfer ring address
> - Interrupter number
> 
> The above information will allow for the audio DSP to execute USB transfers
> over the USB bus.  It will also be able to support devices that have an
> implicit feedback and sync endpoint as well.  Offloading these data
> transfers will allow the main/applications processor to enter lower CPU
> power modes, and sustain a longer duration in those modes.
> 
> Signed-off-by: Wesley Cheng <quic_wcheng@quicinc.com>

Hmm, this must be the main part that works to bypass the normal USB
packet handling in USB audio driver but hooks to the own offload one,
but there is no description how to take over and manage.
A missing "big picture" makes it difficult to understand and review.

Also, since both drivers are asynchronous, we may need some proper
locking.

More on the code change:

> +static int snd_interval_refine_set(struct snd_interval *i, unsigned int val)
> +{
> +	struct snd_interval t;
> +
> +	t.empty = 0;
> +	t.min = t.max = val;
> +	t.openmin = t.openmax = 0;
> +	t.integer = 1;
> +	return snd_interval_refine(i, &t);
> +}
> +
> +static int _snd_pcm_hw_param_set(struct snd_pcm_hw_params *params,
> +				 snd_pcm_hw_param_t var, unsigned int val,
> +				 int dir)
> +{
> +	int changed;
> +
> +	if (hw_is_mask(var)) {
> +		struct snd_mask *m = hw_param_mask(params, var);
> +
> +		if (val == 0 && dir < 0) {
> +			changed = -EINVAL;
> +			snd_mask_none(m);
> +		} else {
> +			if (dir > 0)
> +				val++;
> +			else if (dir < 0)
> +				val--;
> +			changed = snd_mask_refine_set(
> +					hw_param_mask(params, var), val);
> +		}
> +	} else if (hw_is_interval(var)) {
> +		struct snd_interval *i = hw_param_interval(params, var);
> +
> +		if (val == 0 && dir < 0) {
> +			changed = -EINVAL;
> +			snd_interval_none(i);
> +		} else if (dir == 0)
> +			changed = snd_interval_refine_set(i, val);
> +		else {
> +			struct snd_interval t;
> +
> +			t.openmin = 1;
> +			t.openmax = 1;
> +			t.empty = 0;
> +			t.integer = 0;
> +			if (dir < 0) {
> +				t.min = val - 1;
> +				t.max = val;
> +			} else {
> +				t.min = val;
> +				t.max = val+1;
> +			}
> +			changed = snd_interval_refine(i, &t);
> +		}
> +	} else
> +		return -EINVAL;
> +	if (changed) {
> +		params->cmask |= 1 << var;
> +		params->rmask |= 1 << var;
> +	}
> +	return changed;
> +}

Those are taken from sound/core/oss/pcm_oss.c?  We may put to the
common PCM helper instead of duplication.

> +static void disable_audio_stream(struct snd_usb_substream *subs)
> +{
> +	struct snd_usb_audio *chip = subs->stream->chip;
> +
> +	if (subs->data_endpoint || subs->sync_endpoint) {
> +		close_endpoints(chip, subs);
> +
> +		mutex_lock(&chip->mutex);
> +		subs->cur_audiofmt = NULL;
> +		mutex_unlock(&chip->mutex);
> +	}
> +
> +	snd_usb_autosuspend(chip);
> +}
> +
> +static int enable_audio_stream(struct snd_usb_substream *subs,
> +				snd_pcm_format_t pcm_format,
> +				unsigned int channels, unsigned int cur_rate,
> +				int datainterval)
> +{
> +	struct snd_usb_audio *chip = subs->stream->chip;
> +	struct snd_pcm_hw_params params;
> +	const struct audioformat *fmt;
> +	int ret;
> +
> +	_snd_pcm_hw_params_any(&params);
> +	_snd_pcm_hw_param_set(&params, SNDRV_PCM_HW_PARAM_FORMAT,
> +			pcm_format, 0);
> +	_snd_pcm_hw_param_set(&params, SNDRV_PCM_HW_PARAM_CHANNELS,
> +			channels, 0);
> +	_snd_pcm_hw_param_set(&params, SNDRV_PCM_HW_PARAM_RATE,
> +			cur_rate, 0);

What about other parameters like period / buffer sizes?

> +struct qmi_uaudio_stream_req_msg_v01 {
> +	u8 enable;
> +	u32 usb_token;
> +	u8 audio_format_valid;
> +	u32 audio_format;
> +	u8 number_of_ch_valid;
> +	u32 number_of_ch;
> +	u8 bit_rate_valid;
> +	u32 bit_rate;
> +	u8 xfer_buff_size_valid;
> +	u32 xfer_buff_size;
> +	u8 service_interval_valid;
> +	u32 service_interval;
> +};

Are this and the other structs a part of DSP ABI?
Or is it a definition only used in kernel?  I'm asking because
__packed attribute is required for most of ABI definitions with
different field types.


thanks,

Takashi
Mark Brown Jan. 3, 2023, 5:44 p.m. UTC | #13
On Fri, Dec 23, 2022 at 03:31:58PM -0800, Wesley Cheng wrote:

> Check for if the PCM format is supported during the hw_params callback.  If
> the profile is not supported then the userspace ALSA entity will receive an
> error, and can take further action.

Ideally we'd wire up constraints for this but that gets complicated with
DPCM so it's probably disproportionate effort.  Otherwise other than the
subject lines not using ASoC on this and the previous change I don't
have any issues that other people didn't raise, but then most of the
complication is in the USB bits.
Wesley Cheng Jan. 4, 2023, 10:38 p.m. UTC | #14
Hi Takashi,

On 1/2/2023 9:28 AM, Takashi Iwai wrote:
> On Sat, 24 Dec 2022 00:31:55 +0100,
> Wesley Cheng wrote:
>>
>> Several Qualcomm SoCs have a dedicated audio DSP, which has the ability to
>> support USB sound devices.  This vendor driver will implement the required
>> handshaking with the DSP, in order to pass along required resources that
>> will be utilized by the DSP's USB SW.  The communication channel used for
>> this handshaking will be using the QMI protocol.  Required resources
>> include:
>> - Allocated secondary event ring address
>> - EP transfer ring address
>> - Interrupter number
>>
>> The above information will allow for the audio DSP to execute USB transfers
>> over the USB bus.  It will also be able to support devices that have an
>> implicit feedback and sync endpoint as well.  Offloading these data
>> transfers will allow the main/applications processor to enter lower CPU
>> power modes, and sustain a longer duration in those modes.
>>
>> Signed-off-by: Wesley Cheng <quic_wcheng@quicinc.com>
> 
> Hmm, this must be the main part that works to bypass the normal USB
> packet handling in USB audio driver but hooks to the own offload one,
> but there is no description how to take over and manage.
> A missing "big picture" makes it difficult to understand and review.
> 

Technically, we are not taking over the functionality of the USB SND, as 
we still want the normal path to be accessible in case there is an audio 
profile/format that can't be supported by the audio DSP.  I can add some 
more information on how this offload driver co-exists with the USB SND.

> Also, since both drivers are asynchronous, we may need some proper
> locking.
> 

Yes, I think locking is needed in some places.  Will add that in the 
next revision.

> More on the code change:
> 
>> +static int snd_interval_refine_set(struct snd_interval *i, unsigned int val)
>> +{
>> +	struct snd_interval t;
>> +
>> +	t.empty = 0;
>> +	t.min = t.max = val;
>> +	t.openmin = t.openmax = 0;
>> +	t.integer = 1;
>> +	return snd_interval_refine(i, &t);
>> +}
>> +
>> +static int _snd_pcm_hw_param_set(struct snd_pcm_hw_params *params,
>> +				 snd_pcm_hw_param_t var, unsigned int val,
>> +				 int dir)
>> +{
>> +	int changed;
>> +
>> +	if (hw_is_mask(var)) {
>> +		struct snd_mask *m = hw_param_mask(params, var);
>> +
>> +		if (val == 0 && dir < 0) {
>> +			changed = -EINVAL;
>> +			snd_mask_none(m);
>> +		} else {
>> +			if (dir > 0)
>> +				val++;
>> +			else if (dir < 0)
>> +				val--;
>> +			changed = snd_mask_refine_set(
>> +					hw_param_mask(params, var), val);
>> +		}
>> +	} else if (hw_is_interval(var)) {
>> +		struct snd_interval *i = hw_param_interval(params, var);
>> +
>> +		if (val == 0 && dir < 0) {
>> +			changed = -EINVAL;
>> +			snd_interval_none(i);
>> +		} else if (dir == 0)
>> +			changed = snd_interval_refine_set(i, val);
>> +		else {
>> +			struct snd_interval t;
>> +
>> +			t.openmin = 1;
>> +			t.openmax = 1;
>> +			t.empty = 0;
>> +			t.integer = 0;
>> +			if (dir < 0) {
>> +				t.min = val - 1;
>> +				t.max = val;
>> +			} else {
>> +				t.min = val;
>> +				t.max = val+1;
>> +			}
>> +			changed = snd_interval_refine(i, &t);
>> +		}
>> +	} else
>> +		return -EINVAL;
>> +	if (changed) {
>> +		params->cmask |= 1 << var;
>> +		params->rmask |= 1 << var;
>> +	}
>> +	return changed;
>> +}
> 
> Those are taken from sound/core/oss/pcm_oss.c?  We may put to the
> common PCM helper instead of duplication.
> 

Sure, I can do that.

>> +static void disable_audio_stream(struct snd_usb_substream *subs)
>> +{
>> +	struct snd_usb_audio *chip = subs->stream->chip;
>> +
>> +	if (subs->data_endpoint || subs->sync_endpoint) {
>> +		close_endpoints(chip, subs);
>> +
>> +		mutex_lock(&chip->mutex);
>> +		subs->cur_audiofmt = NULL;
>> +		mutex_unlock(&chip->mutex);
>> +	}
>> +
>> +	snd_usb_autosuspend(chip);
>> +}
>> +
>> +static int enable_audio_stream(struct snd_usb_substream *subs,
>> +				snd_pcm_format_t pcm_format,
>> +				unsigned int channels, unsigned int cur_rate,
>> +				int datainterval)
>> +{
>> +	struct snd_usb_audio *chip = subs->stream->chip;
>> +	struct snd_pcm_hw_params params;
>> +	const struct audioformat *fmt;
>> +	int ret;
>> +
>> +	_snd_pcm_hw_params_any(&params);
>> +	_snd_pcm_hw_param_set(&params, SNDRV_PCM_HW_PARAM_FORMAT,
>> +			pcm_format, 0);
>> +	_snd_pcm_hw_param_set(&params, SNDRV_PCM_HW_PARAM_CHANNELS,
>> +			channels, 0);
>> +	_snd_pcm_hw_param_set(&params, SNDRV_PCM_HW_PARAM_RATE,
>> +			cur_rate, 0);
> 
> What about other parameters like period / buffer sizes?
> 

I don't think we will need those parameters on the audio DSP.  The 
"params" here is only used to pass the PCM format into the QMI response.

>> +struct qmi_uaudio_stream_req_msg_v01 {
>> +	u8 enable;
>> +	u32 usb_token;
>> +	u8 audio_format_valid;
>> +	u32 audio_format;
>> +	u8 number_of_ch_valid;
>> +	u32 number_of_ch;
>> +	u8 bit_rate_valid;
>> +	u32 bit_rate;
>> +	u8 xfer_buff_size_valid;
>> +	u32 xfer_buff_size;
>> +	u8 service_interval_valid;
>> +	u32 service_interval;
>> +};
> 
> Are this and the other structs a part of DSP ABI?
> Or is it a definition only used in kernel?  I'm asking because
> __packed attribute is required for most of ABI definitions with
> different field types.
> 

This would be in the kernel only.

Thanks
Wesley Cheng
Krzysztof Kozlowski Jan. 5, 2023, 6:09 p.m. UTC | #15
On 24/12/2022 00:31, Wesley Cheng wrote:
> The QC ADSP is able to support USB playback and capture, so that the
> main application processor can be placed into lower CPU power modes.  This
> adds the required AFE port configurations and port start command to start
> an audio session.
> 
> Signed-off-by: Wesley Cheng <quic_wcheng@quicinc.com>
> ---
>  .../sound/qcom,q6dsp-lpass-ports.h            |   1 +
>  sound/soc/qcom/qdsp6/q6afe-dai.c              |  47 +++++
>  sound/soc/qcom/qdsp6/q6afe.c                  | 183 ++++++++++++++++++
>  sound/soc/qcom/qdsp6/q6afe.h                  |  46 ++++-
>  sound/soc/qcom/qdsp6/q6dsp-lpass-ports.c      |  23 +++
>  sound/soc/qcom/qdsp6/q6dsp-lpass-ports.h      |   1 +
>  sound/soc/qcom/qdsp6/q6routing.c              |   8 +
>  7 files changed, 308 insertions(+), 1 deletion(-)
> 
> diff --git a/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h b/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h
> index 9f7c5103bc82..746bc462bb2e 100644
> --- a/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h
> +++ b/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h
> @@ -131,6 +131,7 @@
>  #define RX_CODEC_DMA_RX_7	126
>  #define QUINARY_MI2S_RX		127
>  #define QUINARY_MI2S_TX		128
> +#define USB_RX				129
>  
>  #define LPASS_CLK_ID_PRI_MI2S_IBIT	1

Bindings are separate patches. Please split.

Best regards,
Krzysztof
Krzysztof Kozlowski Jan. 5, 2023, 6:09 p.m. UTC | #16
On 03/01/2023 18:46, Mark Brown wrote:
> On Mon, Dec 26, 2022 at 01:27:21PM +0100, Krzysztof Kozlowski wrote:
>> On 24/12/2022 00:32, Wesley Cheng wrote:
> 
>>> +            link-name = "USB Playback";
>>> +            cpu {
>>> +                sound-dai = <&q6afedai USB_RX>;
> 
>> Hmm, that makes me wonder if you really tested the bindings before
>> sending? If yes, where is the USB_RX defined?
> 
> It was added in patch 2, it's in include/dt-bindings.

Thanks, indeed, I was looking for another bindings patch but this was
squashed with a driver.

Best regards,
Krzysztof
Wesley Cheng Jan. 6, 2023, 1:05 a.m. UTC | #17
Hi Pierre,

On 1/4/2023 3:19 PM, Pierre-Louis Bossart wrote:
> 
> 
> On 12/23/22 17:31, Wesley Cheng wrote:
>> Several Qualcomm based chipsets can support USB audio offloading to a
>> dedicated audio DSP, which can take over issuing transfers to the USB
>> host controller.  The intention is to reduce the load on the main
>> processors in the SoC, and allow them to be placed into lower power modes.
> 
> It would be nice to clarify what you want to offload
> a) audio data transfers for isoc ports
> b) control for e.g. volume settings (those go to endpoint 0 IIRC)
> c) Both?
> 

Thanks for sharing your experience, and inputs!

It would be the audio-related endpoints only, so the ISOC data endpoints 
and potentially the feedback EP.

> This has a lot of implications on the design. ASoC/DPCM is mainly
> intended for audio data transfers, control is a separate problem with
> configurations handled with register settings or bus-specific commands.
> 

Control would still be handled by the main processor.

>> There are several parts to this design:
>>    1. Adding ASoC binding layer
>>    2. Create a USB backend for Q6DSP
>>    3. Introduce XHCI interrupter support
>>    4. Create vendor ops for the USB SND driver
>>
>> Adding ASoC binding layer:
>> soc-usb: Intention is to treat a USB port similar to a headphone jack.
>> The port is always present on the device, but cable/pin status can be
>> enabled/disabled.  Expose mechanisms for USB backend ASoC drivers to
>> communicate with USB SND.
>>
>> Create a USB backend for Q6DSP:
>> q6usb: Basic backend driver that will be responsible for maintaining the
>> resources needed to initiate a playback stream using the Q6DSP.  Will
>> be the entity that checks to make sure the connected USB audio device
>> supports the requested PCM format.  If it does not, the PCM open call will
>> fail, and userspace ALSA can take action accordingly.
>>
>> Introduce XHCI interrupter support:
>> XHCI HCD supports multiple interrupters, which allows for events to be routed
>> to different event rings.  This is determined by "Interrupter Target" field
>> specified in Section "6.4.1.1 Normal TRB" of the XHCI specification.
>>
>> Events in the offloading case will be routed to an event ring that is assigned
>> to the audio DSP.
>>
>> Create vendor ops for the USB SND driver:
>> qc_audio_offload: This particular driver has several components associated
>> with it:
>> - QMI stream request handler
>> - XHCI interrupter and resource management
>> - audio DSP memory management
>>
>> When the audio DSP wants to enable a playback stream, the request is first
>> received by the ASoC platform sound card.  Depending on the selected route,
>> ASoC will bring up the individual DAIs in the path.  The Q6USB backend DAI
>> will send an AFE port start command (with enabling the USB playback path), and
>> the audio DSP will handle the request accordingly.
>>
>> Part of the AFE USB port start handling will have an exchange of control
>> messages using the QMI protocol.  The qc_audio_offload driver will populate the
>> buffer information:
>> - Event ring base address
>> - EP transfer ring base address
>>
>> and pass it along to the audio DSP.  All endpoint management will now be handed
>> over to the DSP, and the main processor is not involved in transfers.
>>
>> Overall, implementing this feature will still expose separate sound card and PCM
>> devices for both the platform card and USB audio device:
>>   0 [SM8250MTPWCD938]: sm8250 - SM8250-MTP-WCD9380-WSA8810-VA-D
>>                        SM8250-MTP-WCD9380-WSA8810-VA-DMIC
>>   1 [Audio          ]: USB-Audio - USB Audio
>>                        Generic USB Audio at usb-xhci-hcd.1.auto-1.4, high speed
>>
>> This is to ensure that userspace ALSA entities can decide which route to take
>> when executing the audio playback.  In the above, if card#1 is selected, then
>> USB audio data will take the legacy path over the USB PCM drivers, etc...
> 
> You would still need some sort of mutual exclusion to make sure the isoc
> endpoints are not used concurrently by the two cards. Relying on
> userspace intelligence to enforce that exclusion is not safe IMHO.
> 

Sure, I think we can mark the USB card as in use while the offloading 
path is enabled.  The kernel could then return an error to userspace when 
this situation happens.

> Intel looked at this sort of offload support a while ago and our
> directions were very different - for a variety of reasons USB offload is
> enabled on Windows platforms but remains a TODO for Linux. Rather than
> having two cards, you could have a single card and addition subdevices
> that expose the paths through the DSP. The benefits were that there was
> a single set of controls that userspace needed to know about, and volume
> settings were the same no matter which path you used (legacy or
> DSP-optimized paths). That's consistent with the directions to use 'Deep
> Buffer' PCM paths for local playback, it's the same idea of reducing
> power consumption with optimized routing.
> 

Volume control would still be done through the legacy path as mentioned 
above.  For example, if a USB headset exposing a HID interface (for 
volume control) were connected, those HID events would be routed to 
userspace to adjust the volume accordingly on the main processor.  
(Although you're right that separate controls would still be present: one 
for the ASoC card and another for the USB card.)

> Another point is that there may be cases where the DSP paths are not
> available if the DSP memory and MCPS budget is exceeded. In those cases,
> the DSP parts needs the ability to notify userspace that the legacy path
> should be used.

If we ran into this scenario, the audio DSP AFE port start command would 
fail, and this would be propagated to the userspace entity.  It could 
then potentially re-route to the legacy/non-offload path.

> 
> Another case to handle is that some USB devices can handle way more data
> than DSPs can chew, for example Pro audio boxes that can deal with 8ch
> 192kHz will typically use the legacy paths. Some also handle specific
> formats such as DSD over PCM. So it's quite likely that PCM devices for
> card0 and card1 above do NOT expose support for the same formats, or put
> differently that only a subset of the USB device capabilities are
> handled through the DSP.

Same as the above.  We have programmed the USB backend to support the 
profiles that the audio DSP can handle.  I assume if there was any other 
request, the userspace entity would fail the PCM open for that requested 
profile.

> 
> And last, power optimizations with DSPs typically come from additional
> latency helping put the SoC in low-power modes. That's not necessarily
> ideal for all usages, e.g. for music recording and mixing I am not
> convinced the DSP path would help at all.
> 

That's true.  At the same time, this feature is aimed at power-related 
benefits, not specifically at performance (although we haven't seen any 
performance-related issues with this approach on the audio profiles the 
DSP supports).  I think if it's an audio profile with a high sample rate 
and a large number of channels, the DSP wouldn't be able to support it 
anyway, and userspace could still use the legacy path.  This would allow 
those high-performance audio devices to remain unaffected.

Thanks
Wesley Cheng
Wesley Cheng Jan. 6, 2023, 1:05 a.m. UTC | #18
Hi Pierre,

On 1/4/2023 3:33 PM, Pierre-Louis Bossart wrote:
> 
> 
> On 12/23/22 17:31, Wesley Cheng wrote:
>> The QC ADSP is able to support USB playback and capture, so that the
>> main application processor can be placed into lower CPU power modes.  This
>> adds the required AFE port configurations and port start command to start
>> an audio session.
> 
> It would be good to clarify what sort of endpoints can be supported. I
> presume the SOF-synchronous case is handled, but how would you deal with
> async endpoints with feedback (be it explicit or implicit)?
> 

Sure, both types of feedback endpoints are expected to be supported by 
the audio DSP, as well as sync eps.  We have the logic there to modify 
the audio sample size accordingly.

> Note that it's very hard to make the decision not to support async
> endpoints, there are quite a few devices that are exposed as async to
> work around an obscure legacy issue on Windows but are really
> sof-synchronous endpoints that never ask for any change of pace.
> 
>> Signed-off-by: Wesley Cheng <quic_wcheng@quicinc.com>
>> ---
>>   .../sound/qcom,q6dsp-lpass-ports.h            |   1 +
>>   sound/soc/qcom/qdsp6/q6afe-dai.c              |  47 +++++
>>   sound/soc/qcom/qdsp6/q6afe.c                  | 183 ++++++++++++++++++
>>   sound/soc/qcom/qdsp6/q6afe.h                  |  46 ++++-
>>   sound/soc/qcom/qdsp6/q6dsp-lpass-ports.c      |  23 +++
>>   sound/soc/qcom/qdsp6/q6dsp-lpass-ports.h      |   1 +
>>   sound/soc/qcom/qdsp6/q6routing.c              |   8 +
>>   7 files changed, 308 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h b/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h
>> index 9f7c5103bc82..746bc462bb2e 100644
>> --- a/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h
>> +++ b/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h
>> @@ -131,6 +131,7 @@
>>   #define RX_CODEC_DMA_RX_7	126
>>   #define QUINARY_MI2S_RX		127
>>   #define QUINARY_MI2S_TX		128
>> +#define USB_RX				129
> 
> the commit message says the DSP can support Playback and capture, but
> here there's capture only ...
> 
> 

Sorry, I will update the commit message properly.  At the moment we've 
only verified audio playback.

>>   
>>   static const struct snd_soc_dapm_route q6afe_dapm_routes[] = {
>> +	{"USB Playback", NULL, "USB_RX"},
> 
> ... but here RX means playback?
> 
> I am not sure I get the convention on directions and what is actually
> supported?
> 

The notation is based on the direction in which audio data flows with 
respect to the DSP.  In playback, the DSP receives audio data to send out 
over USB; in capture, it transmits the audio data it has received 
(although we do not support capture yet).

>> +struct afe_param_id_usb_cfg {
>> +/* Minor version used for tracking USB audio device configuration.
>> + * Supported values: AFE_API_MINOR_VERSION_USB_AUDIO_CONFIG
>> + */
>> +	u32                  cfg_minor_version;
>> +/* Sampling rate of the port.
>> + * Supported values:
>> + * - AFE_PORT_SAMPLE_RATE_8K
>> + * - AFE_PORT_SAMPLE_RATE_11025
>> + * - AFE_PORT_SAMPLE_RATE_12K
>> + * - AFE_PORT_SAMPLE_RATE_16K
>> + * - AFE_PORT_SAMPLE_RATE_22050
>> + * - AFE_PORT_SAMPLE_RATE_24K
>> + * - AFE_PORT_SAMPLE_RATE_32K
>> + * - AFE_PORT_SAMPLE_RATE_44P1K
>> + * - AFE_PORT_SAMPLE_RATE_48K
>> + * - AFE_PORT_SAMPLE_RATE_96K
>> + * - AFE_PORT_SAMPLE_RATE_192K
>> + */
>> +	u32                  sample_rate;
>> +/* Bit width of the sample.
>> + * Supported values: 16, 24
>> + */
>> +	u16                  bit_width;
>> +/* Number of channels.
>> + * Supported values: 1 and 2
> 
> that aligns with my feedback on the cover letter, if you connect a
> device that can support from than 2 channels should the DSP even expose
> this DSP-optimized path?
> 

My assumption is that since I programmed the DAIs with the PCM formats 
supported by the DSP, the ASoC core should not allow userspace to choose 
that path if the hw params don't match.

> Oh and I forgot, what happens if there are multiple audio devices
> connected, can the DSP deal with all of them? If not, how is this handled?
> 

This is one topic that we were pretty open-ended on.  At least in our 
implementation, only one audio device can be supported at a time.  We 
choose the latest device that was plugged in or discovered by the USB 
SND class driver.

>> + */
>> +	u16                  num_channels;
>> +/* Data format supported by the USB. The supported value is
>> + * 0 (#AFE_USB_AUDIO_DATA_FORMAT_LINEAR_PCM).
>> + */
>> +	u16                  data_format;
>> +/* this field must be 0 */
>> +	u16                  reserved;
>> +/* device token of actual end USB aduio device */
> 
> typo: audio
> 

Thanks

>> +	u32                  dev_token;
>> +/* endianness of this interface */
>> +	u32                   endian;
> 
> Is this a USB concept? I can't recall having seen any parts of the USB
> audio class spec that the data can be big or little endian?
> 

No, this is probably just something our audio DSP uses on the AFE 
commands that it receives.

>> +/* service interval */
>> +	u32                  service_interval;
>> +} __packed;
> 
>> +int afe_port_send_usb_dev_param(struct q6afe_port *port, struct q6afe_usb_cfg *cfg)
>> +{
>> +	union afe_port_config *pcfg = &port->port_cfg;
>> +	struct afe_param_id_usb_audio_dev_params usb_dev;
>> +	struct afe_param_id_usb_audio_dev_lpcm_fmt lpcm_fmt;
>> +	struct afe_param_id_usb_audio_svc_interval svc_int;
>> +	int ret = 0;
>> +
>> +	if (!pcfg) {
>> +		pr_err("%s: Error, no configuration data\n", __func__);
> 
> can you use a dev_err() here and the rest of the code?
> 

Sure.

Thanks
Wesley Cheng
Wesley Cheng Jan. 6, 2023, 1:05 a.m. UTC | #19
Hi Pierre,

On 1/4/2023 3:41 PM, Pierre-Louis Bossart wrote:
> 
>> +int q6usb_alsa_connection_cb(struct snd_soc_usb *usb, int card_idx,
>> +			int connected)
>> +{
>> +	struct snd_soc_dapm_context *dapm;
>> +	struct q6usb_port_data *data;
>> +
>> +	if (!usb->component)
>> +		return 0;
>> +
>> +	dapm = snd_soc_component_get_dapm(usb->component);
>> +	data = dev_get_drvdata(usb->component->dev);
>> +
>> +	if (connected) {
>> +		snd_soc_dapm_enable_pin(dapm, "USB_RX_BE");
>> +		/* We only track the latest USB headset plugged in */
> 
> that answers to my earlier question on how to deal with multiple
> devices, but is this a desirable policy? This could lead to a lot of
> confusion. If there are restrictions to a single device, then it might
> be more interesting for userspace or the user to indicate which USB
> device gets to use USB offload and all others use legacy.
> 

Yeah, as mentioned, it's still pretty open-ended.  Based on the feedback 
received from Mark/Takashi, this was a viable option for now.  Would you 
happen to have any insight/input on how userspace could pass that 
selection down to the ASoC framework?  Maybe some kind of PCM IOCTL call?

Thanks
Wesley Cheng
Wesley Cheng Jan. 6, 2023, 1:06 a.m. UTC | #20
Hi Pierre,

On 1/4/2023 3:51 PM, Pierre-Louis Bossart wrote:
> 
> 
> On 12/23/22 17:31, Wesley Cheng wrote:
>> Several Qualcomm SoCs have a dedicated audio DSP, which has the ability to
>> support USB sound devices.  This vendor driver will implement the required
>> handshaking with the DSP, in order to pass along required resources that
>> will be utilized by the DSP's USB SW.  The communication channel used for
>> this handshaking will be using the QMI protocol.  Required resources
>> include:
>> - Allocated secondary event ring address
>> - EP transfer ring address
>> - Interrupter number
>>
>> The above information will allow for the audio DSP to execute USB transfers
>> over the USB bus.  It will also be able to support devices that have an
>> implicit feedback and sync endpoint as well.  Offloading these data
>> transfers will allow the main/applications processor to enter lower CPU
>> power modes, and sustain a longer duration in those modes.
> 
> Are you suggesting that the entire feedback loop be handled in the DSP?
> It's not clear what "Offloading these data transfers" refers to, the
> data part or the feedback path?
> 

Yes, as mentioned in the cover letter response, we'll handle the 
feedback endpoints. (feedback eps are part of the audio data interface)

> Comments are almost inexistent in this patch so it's hard to figure out
> what it really does.
> 

OK, will add some more comments.

Thanks
Wesley Cheng
Wesley Cheng Jan. 6, 2023, 1:32 a.m. UTC | #21
Hi Krzysztof,

On 1/5/2023 10:09 AM, Krzysztof Kozlowski wrote:
> On 24/12/2022 00:31, Wesley Cheng wrote:
>> The QC ADSP is able to support USB playback and capture, so that the
>> main application processor can be placed into lower CPU power modes.  This
>> adds the required AFE port configurations and port start command to start
>> an audio session.
>>
>> Signed-off-by: Wesley Cheng <quic_wcheng@quicinc.com>
>> ---
>>   .../sound/qcom,q6dsp-lpass-ports.h            |   1 +
>>   sound/soc/qcom/qdsp6/q6afe-dai.c              |  47 +++++
>>   sound/soc/qcom/qdsp6/q6afe.c                  | 183 ++++++++++++++++++
>>   sound/soc/qcom/qdsp6/q6afe.h                  |  46 ++++-
>>   sound/soc/qcom/qdsp6/q6dsp-lpass-ports.c      |  23 +++
>>   sound/soc/qcom/qdsp6/q6dsp-lpass-ports.h      |   1 +
>>   sound/soc/qcom/qdsp6/q6routing.c              |   8 +
>>   7 files changed, 308 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h b/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h
>> index 9f7c5103bc82..746bc462bb2e 100644
>> --- a/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h
>> +++ b/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h
>> @@ -131,6 +131,7 @@
>>   #define RX_CODEC_DMA_RX_7	126
>>   #define QUINARY_MI2S_RX		127
>>   #define QUINARY_MI2S_TX		128
>> +#define USB_RX				129
>>   
>>   #define LPASS_CLK_ID_PRI_MI2S_IBIT	1
> 
> Bindings are separate patches. Please split.
> 

Thanks for catching this.  Will do.

Thanks
Wesley Cheng
Pierre-Louis Bossart Jan. 6, 2023, 3:57 p.m. UTC | #22
>> On 12/23/22 17:31, Wesley Cheng wrote:
>>> Several Qualcomm based chipsets can support USB audio offloading to a
>>> dedicated audio DSP, which can take over issuing transfers to the USB
>>> host controller.  The intention is to reduce the load on the main
>>> processors in the SoC, and allow them to be placed into lower power
>>> modes.
>>
>> It would be nice to clarify what you want to offload
>> a) audio data transfers for isoc ports
>> b) control for e.g. volume settings (those go to endpoint 0 IIRC)
>> c) Both?
>>
> 
> Thanks for sharing your experience and input!
> 
> It would be the audio related endpoints only, so ISOC and potentially
> feedback ep.

That's good, that means there's a common basis for at least two separate
hardware implementations.
>> This has a lot of implications on the design. ASoC/DPCM is mainly
>> intended for audio data transfers, control is a separate problem with
>> configurations handled with register settings or bus-specific commands.
>>
> 
> Control would still be handled by the main processor.

Excellent, one more thing in common. Maintainers like this sort of
alignment :-)

>>> There are several parts to this design:
>>>    1. Adding ASoC binding layer
>>>    2. Create a USB backend for Q6DSP
>>>    3. Introduce XHCI interrupter support
>>>    4. Create vendor ops for the USB SND driver
>>>
>>> Adding ASoC binding layer:
>>> soc-usb: Intention is to treat a USB port similar to a headphone jack.
>>> The port is always present on the device, but cable/pin status can be
>>> enabled/disabled.  Expose mechanisms for USB backend ASoC drivers to
>>> communicate with USB SND.
>>>
>>> Create a USB backend for Q6DSP:
>>> q6usb: Basic backend driver that will be responsible for maintaining the
>>> resources needed to initiate a playback stream using the Q6DSP.  Will
>>> be the entity that checks to make sure the connected USB audio device
>>> supports the requested PCM format.  If it does not, the PCM open call
>>> will
>>> fail, and userspace ALSA can take action accordingly.
>>>
>>> Introduce XHCI interrupter support:
>>> XHCI HCD supports multiple interrupters, which allows for events to
>>> be routed
>>> to different event rings.  This is determined by "Interrupter Target"
>>> field
>>> specified in Section "6.4.1.1 Normal TRB" of the XHCI specification.
>>>
>>> Events in the offloading case will be routed to an event ring that is
>>> assigned
>>> to the audio DSP.

To the best of my knowledge this isn't needed on Intel platforms, but
that's something we would need to double-check.
>>> Create vendor ops for the USB SND driver:
>>> qc_audio_offload: This particular driver has several components
>>> associated
>>> with it:
>>> - QMI stream request handler
>>> - XHCI interrupter and resource management
>>> - audio DSP memory management
>>>
>>> When the audio DSP wants to enable a playback stream, the request is
>>> first
>>> received by the ASoC platform sound card.  Depending on the selected
>>> route,
>>> ASoC will bring up the individual DAIs in the path.  The Q6USB
>>> backend DAI
>>> will send an AFE port start command (with enabling the USB playback
>>> path), and
>>> the audio DSP will handle the request accordingly.
>>>
>>> Part of the AFE USB port start handling will have an exchange of control
>>> messages using the QMI protocol.  The qc_audio_offload driver will
>>> populate the
>>> buffer information:
>>> - Event ring base address
>>> - EP transfer ring base address
>>>
>>> and pass it along to the audio DSP.  All endpoint management will now
>>> be handed
>>> over to the DSP, and the main processor is not involved in transfers.
>>>
>>> Overall, implementing this feature will still expose separate sound
>>> card and PCM
>>> devices for both the platform card and USB audio device:
>>>   0 [SM8250MTPWCD938]: sm8250 - SM8250-MTP-WCD9380-WSA8810-VA-D
>>>                        SM8250-MTP-WCD9380-WSA8810-VA-DMIC
>>>   1 [Audio          ]: USB-Audio - USB Audio
>>>                        Generic USB Audio at usb-xhci-hcd.1.auto-1.4,
>>> high speed
>>>
>>> This is to ensure that userspace ALSA entities can decide which route
>>> to take
>>> when executing the audio playback.  In the above, if card#1 is
>>> selected, then
>>> USB audio data will take the legacy path over the USB PCM drivers,
>>> etc...
>>
>> You would still need some sort of mutual exclusion to make sure the isoc
>> endpoints are not used concurrently by the two cards. Relying on
>> userspace intelligence to enforce that exclusion is not safe IMHO.
>>
> 
> Sure, I think we can mark the USB card as in use if the offload
> path is currently enabled.  The kernel could return an error to
> userspace when this situation happens.

It's problematic for servers such as PipeWire/PulseAudio that open all
possible PCMs to figure out what they support in terms of formats. I am
not sure we can enforce a user-space serialization when discovering
capabilities?

> 
>> Intel looked at this sort of offload support a while ago and our
>> directions were very different - for a variety of reasons USB offload is
>> enabled on Windows platforms but remains a TODO for Linux. Rather than
>> having two cards, you could have a single card and addition subdevices
>> that expose the paths through the DSP. The benefits were that there was
>> a single set of controls that userspace needed to know about, and volume
>> settings were the same no matter which path you used (legacy or
>> DSP-optimized paths). That's consistent with the directions to use 'Deep
>> Buffer' PCM paths for local playback, it's the same idea of reducing
>> power consumption with optimized routing.
>>
> 
> Volume control would still be done through the legacy path as mentioned
> above.  For example, if a USB headset w/ a HID interface exposed (for
> volume control) was connected, those HID events would be routed to
> userspace to adjust volume accordingly on the main processor. (although
> you're right about having separate controls still present - one for the
> ASoC card and another for USB card)

The two sets of controls implied by the use of two cards are really
problematic IMHO. This adds complexity for userspace to figure out that
the controls are really the same and synchronize/mirror changes.

The premise of offload is that it should really not get in the way of
user-experience, design constructs that result in delayed starts/stop,
changed volumes or quality differences should be avoided, or
users/distros will disable this optimization.

One card with additional DSP-based PCM devices seems simpler to me in
terms of usage, but it's not without technical challenges either: with
the use of the ASoC topology framework we only know what the DSP
supports when registering a card and probing the ASoC components.

The interaction between USB audio and ASoC would also be at least as
complicated as display audio, in that it needs to work no matter what
the probe order is, and even survive the Linux device/driver model
requirement that there are no timing dependencies in the driver
bind/unbind sequences.

>> Another point is that there may be cases where the DSP paths are not
>> available if the DSP memory and MCPS budget is exceeded. In those cases,
>> the DSP parts needs the ability to notify userspace that the legacy path
>> should be used.
> 
> If we ran into this scenario, the audio DSP AFE port start command would
> fail, and this would be propagated to the userspace entity.  It could
> then potentially re-route to the legacy/non-offload path.

'start' or 'open'? This is a rather important design difference. Usually
we try to make decisions in the .open or .hw_params stage. The 'start'
or 'trigger' are usually not meant to fail due to unavailable resources
in ALSA.
>> Another case to handle is that some USB devices can handle way more data
>> than DSPs can chew, for example Pro audio boxes that can deal with 8ch
>> 192kHz will typically use the legacy paths. Some also handle specific
>> formats such as DSD over PCM. So it's quite likely that PCM devices for
>> card0 and card1 above do NOT expose support for the same formats, or put
>> differently that only a subset of the USB device capabilities are
>> handled through the DSP.
> 
> Same as the above.  We have programmed the USB backend to support the
> profiles that the audio DSP can handle.  I assume if there was any other
> request, the userspace entity would fail the PCM open for that requested
> profile.

What's not clear to me is whether there's any matching process between
the DSP capabilities and what the USB device exposes. If the USB device
is already way more complicated than what the ASoC back-end can deal
with, why expose a card?

>> And last, power optimizations with DSPs typically come from additional
>> latency helping put the SoC in low-power modes. That's not necessarily
>> ideal for all usages, e.g. for music recording and mixing I am not
>> convinced the DSP path would help at all.
>>
> 
> That's true.  At the same time, this feature is more for power related
> benefits, not specifically for performance. (although we haven't seen
> any performance related issues w/ this approach on the audio profiles
> the DSP supports)  I think if it's an audio profile that supports a high
> sample rate and large number of channels, then the DSP wouldn't be able
> to support it anyway, and userspace could still use the legacy path.
> This would allow for those high-performance audio devices to not be
> affected.

ok, we are aligned as well here. Excellent. With the on-going work to
introduce 'Deep Buffer' capabilities, we'll have a need to tag PCM
devices with a usage or 'modifier', or have this information in
UCM/topology. Logic will have to be introduced in userspace to use the
best routing, I would think this work can be reused for USB cases to
indicate the offload solution is geared to power optimizations.
Pierre-Louis Bossart Jan. 6, 2023, 4:09 p.m. UTC | #23
>>> The QC ADSP is able to support USB playback and capture, so that the
>>> main application processor can be placed into lower CPU power modes. 
>>> This
>>> adds the required AFE port configurations and port start command to
>>> start
>>> an audio session.
>>
>> It would be good to clarify what sort of endpoints can be supported. I
>> presume the SOF-synchronous case is handled, but how would you deal with
>> async endpoints with feedback (be it explicit or implicit)?
>>
> 
> Sure, both types of feedback endpoints are expected to be supported by
> the audio DSP, as well as sync eps.  We have the logic there to modify
> the audio sample size accordingly.

did you mean modify samples per USB frame (or uframe), so as to change
the pace at which data is transferred? If yes it'd be the same for Intel.

>>>     static const struct snd_soc_dapm_route q6afe_dapm_routes[] = {
>>> +    {"USB Playback", NULL, "USB_RX"},
>>
>> ... but here RX means playback?
>>
>> I am not sure I get the convention on directions and what is actually
>> supported?
>>
> 
> The notation is based on the direction in which the audio data is
> sourced from or pushed onto the DSP.  So in playback, the DSP is receiving
> audio data to send, and in capture, it is transmitting audio data received.
> (although we do not support capture yet)

ok, it'd be good to add a comment on this convention. Usually RX/TX is
bus-centric.

> 
>>> +struct afe_param_id_usb_cfg {
>>> +/* Minor version used for tracking USB audio device configuration.
>>> + * Supported values: AFE_API_MINOR_VERSION_USB_AUDIO_CONFIG
>>> + */
>>> +    u32                  cfg_minor_version;
>>> +/* Sampling rate of the port.
>>> + * Supported values:
>>> + * - AFE_PORT_SAMPLE_RATE_8K
>>> + * - AFE_PORT_SAMPLE_RATE_11025
>>> + * - AFE_PORT_SAMPLE_RATE_12K
>>> + * - AFE_PORT_SAMPLE_RATE_16K
>>> + * - AFE_PORT_SAMPLE_RATE_22050
>>> + * - AFE_PORT_SAMPLE_RATE_24K
>>> + * - AFE_PORT_SAMPLE_RATE_32K
>>> + * - AFE_PORT_SAMPLE_RATE_44P1K
>>> + * - AFE_PORT_SAMPLE_RATE_48K
>>> + * - AFE_PORT_SAMPLE_RATE_96K
>>> + * - AFE_PORT_SAMPLE_RATE_192K
>>> + */
>>> +    u32                  sample_rate;
>>> +/* Bit width of the sample.
>>> + * Supported values: 16, 24
>>> + */
>>> +    u16                  bit_width;
>>> +/* Number of channels.
>>> + * Supported values: 1 and 2
>>
>> that aligns with my feedback on the cover letter, if you connect a
>> device that can support from than 2 channels should the DSP even expose
>> this DSP-optimized path?
>>
> 
> My assumption is that I programmed the DAIs w/ PCM formats supported by
> the DSP, so I think the ASoC core should not allow userspace to choose
> that path if the hw params don't fit/match.

Right, but the point I was trying to make is that if the device can do
more, why create this DSP path at all?

> 
>> Oh and I forgot, what happens if there are multiple audio devices
>> connected, can the DSP deal with all of them? If not, how is this
>> handled?
>>
> 
> This is one topic that we were pretty open ended on.  At least on our
> implementation, only one audio device can be supported at a time.  We
> choose the latest device that was plugged in or discovered by the USB
> SND class driver.

Similar case for Intel. I have to revisit this, I don't recall the details.

>>> +    u32                  dev_token;
>>> +/* endianness of this interface */
>>> +    u32                   endian;
>>
>> Is this a USB concept? I can't recall having seen any parts of the USB
>> audio class spec that the data can be big or little endian?
>>
> 
> No, this is probably just something our audio DSP uses on the AFE
> commands that it receives.

ok.
Pierre-Louis Bossart Jan. 6, 2023, 4:16 p.m. UTC | #24
On 1/5/23 19:05, Wesley Cheng wrote:
> Hi Pierre,
> 
> On 1/4/2023 3:41 PM, Pierre-Louis Bossart wrote:
>>
>>> +int q6usb_alsa_connection_cb(struct snd_soc_usb *usb, int card_idx,
>>> +            int connected)
>>> +{
>>> +    struct snd_soc_dapm_context *dapm;
>>> +    struct q6usb_port_data *data;
>>> +
>>> +    if (!usb->component)
>>> +        return 0;
>>> +
>>> +    dapm = snd_soc_component_get_dapm(usb->component);
>>> +    data = dev_get_drvdata(usb->component->dev);
>>> +
>>> +    if (connected) {
>>> +        snd_soc_dapm_enable_pin(dapm, "USB_RX_BE");
>>> +        /* We only track the latest USB headset plugged in */
>>
>> that answers to my earlier question on how to deal with multiple
>> devices, but is this a desirable policy? This could lead to a lot of
>> confusion. If there are restrictions to a single device, then it might
>> be more interesting for userspace or the user to indicate which USB
>> device gets to use USB offload and all others use legacy.
>>
> 
> Yeah, as mentioned it's still pretty open-ended.  I think from the
> feedback received from Mark/Takashi, this was a viable option for now.
> Would you happen to have any insight/input on how the userspace can pass
> down that selection to the ASoC framework?  Maybe some kind of PCM IOCTL
> call?

I don't have a turn-key solution either :-)
We'd need userspace to mark one device as 'preferred' or 'optimized' and
give it priority somehow. It can't be a PCM IOCTL; it has to be at the
device level.

It's really a second-level optimization that can be parked for now; the
bulk of the work is the interaction between the USB audio and ASoC
stacks. We should probably focus on that BIG topic with a design that
can be shared across implementations.
Wesley Cheng Jan. 7, 2023, 12:46 a.m. UTC | #25
Hi Pierre,

On 1/6/2023 7:57 AM, Pierre-Louis Bossart wrote:
> 
>>> On 12/23/22 17:31, Wesley Cheng wrote:
>>>> Several Qualcomm based chipsets can support USB audio offloading to a
>>>> dedicated audio DSP, which can take over issuing transfers to the USB
>>>> host controller.  The intention is to reduce the load on the main
>>>> processors in the SoC, and allow them to be placed into lower power
>>>> modes.
>>>
>>> It would be nice to clarify what you want to offload
>>> a) audio data transfers for isoc ports
>>> b) control for e.g. volume settings (those go to endpoint 0 IIRC)
>>> c) Both?
>>>
>>
>> Thanks for sharing your experience and input!
>>
>> It would be the audio related endpoints only, so ISOC and potentially
>> feedback ep.
> 
> That's good, that means there's a common basis for at least two separate
> hardware implementations.
>>> This has a lot of implications on the design. ASoC/DPCM is mainly
>>> intended for audio data transfers, control is a separate problem with
>>> configurations handled with register settings or bus-specific commands.
>>>
>>
>> Control would still be handled by the main processor.
> 
> Excellent, one more thing in common. Maintainers like this sort of
> alignment :-)
> 
>>>> There are several parts to this design:
>>>>     1. Adding ASoC binding layer
>>>>     2. Create a USB backend for Q6DSP
>>>>     3. Introduce XHCI interrupter support
>>>>     4. Create vendor ops for the USB SND driver
>>>>
>>>> Adding ASoC binding layer:
>>>> soc-usb: Intention is to treat a USB port similar to a headphone jack.
>>>> The port is always present on the device, but cable/pin status can be
>>>> enabled/disabled.  Expose mechanisms for USB backend ASoC drivers to
>>>> communicate with USB SND.
>>>>
>>>> Create a USB backend for Q6DSP:
>>>> q6usb: Basic backend driver that will be responsible for maintaining the
>>>> resources needed to initiate a playback stream using the Q6DSP.  Will
>>>> be the entity that checks to make sure the connected USB audio device
>>>> supports the requested PCM format.  If it does not, the PCM open call
>>>> will
>>>> fail, and userspace ALSA can take action accordingly.
>>>>
>>>> Introduce XHCI interrupter support:
>>>> XHCI HCD supports multiple interrupters, which allows for events to
>>>> be routed
>>>> to different event rings.  This is determined by "Interrupter Target"
>>>> field
>>>> specified in Section "6.4.1.1 Normal TRB" of the XHCI specification.
>>>>
>>>> Events in the offloading case will be routed to an event ring that is
>>>> assigned
>>>> to the audio DSP.
> 
> To the best of my knowledge this isn't needed on Intel platforms, but
> that's something we would need to double-check.

I think Mathias mentioned that he was looking into adding some XHCI 
secondary interrupter support as well.  However, his use case had some 
slightly different requirements compared to what this offloading feature 
is trying to do.

I'll first have to split up the XHCI/HCD changes into separate parts 
(interrupter specific and offloading specific), and then I'll work with 
him to see what can be improved from there.

>>>> Create vendor ops for the USB SND driver:
>>>> qc_audio_offload: This particular driver has several components
>>>> associated
>>>> with it:
>>>> - QMI stream request handler
>>>> - XHCI interrupter and resource management
>>>> - audio DSP memory management
>>>>
>>>> When the audio DSP wants to enable a playback stream, the request is
>>>> first
>>>> received by the ASoC platform sound card.  Depending on the selected
>>>> route,
>>>> ASoC will bring up the individual DAIs in the path.  The Q6USB
>>>> backend DAI
>>>> will send an AFE port start command (with enabling the USB playback
>>>> path), and
>>>> the audio DSP will handle the request accordingly.
>>>>
>>>> Part of the AFE USB port start handling will have an exchange of control
>>>> messages using the QMI protocol.  The qc_audio_offload driver will
>>>> populate the
>>>> buffer information:
>>>> - Event ring base address
>>>> - EP transfer ring base address
>>>>
>>>> and pass it along to the audio DSP.  All endpoint management will now
>>>> be handed
>>>> over to the DSP, and the main processor is not involved in transfers.
>>>>
>>>> Overall, implementing this feature will still expose separate sound
>>>> card and PCM
>>>> devices for both the platform card and USB audio device:
>>>>    0 [SM8250MTPWCD938]: sm8250 - SM8250-MTP-WCD9380-WSA8810-VA-D
>>>>                         SM8250-MTP-WCD9380-WSA8810-VA-DMIC
>>>>    1 [Audio          ]: USB-Audio - USB Audio
>>>>                         Generic USB Audio at usb-xhci-hcd.1.auto-1.4,
>>>> high speed
>>>>
>>>> This is to ensure that userspace ALSA entities can decide which route
>>>> to take
>>>> when executing the audio playback.  In the above, if card#1 is
>>>> selected, then
>>>> USB audio data will take the legacy path over the USB PCM drivers,
>>>> etc...
>>>
>>> You would still need some sort of mutual exclusion to make sure the isoc
>>> endpoints are not used concurrently by the two cards. Relying on
>>> userspace intelligence to enforce that exclusion is not safe IMHO.
>>>
>>
>> Sure, I think we can mark the USB card as in use if the offload
>> path is currently enabled.  The kernel could return an error to
>> userspace when this situation happens.
> 
> It's problematic for servers such as PipeWire/PulseAudio that open all
> possible PCMs to figure out what they support in terms of formats. I am
> not sure we can enforce a user-space serialization when discovering
> capabilities?
> 

I see... I'm not too familiar with all the different userspace 
implementations yet, so that is something I'll need to read up on on the 
side.  Takashi, would you happen to have any input with regard to how 
flexible PCM device selection can be from the userspace level?  If the 
offload PCM device can't be supported, can it fall back to another PCM 
device?

>>
>>> Intel looked at this sort of offload support a while ago and our
>>> directions were very different - for a variety of reasons USB offload is
>>> enabled on Windows platforms but remains a TODO for Linux. Rather than
>>> having two cards, you could have a single card and addition subdevices
>>> that expose the paths through the DSP. The benefits were that there was
>>> a single set of controls that userspace needed to know about, and volume
>>> settings were the same no matter which path you used (legacy or
>>> DSP-optimized paths). That's consistent with the directions to use 'Deep
>>> Buffer' PCM paths for local playback, it's the same idea of reducing
>>> power consumption with optimized routing.
>>>
>>
>> Volume control would still be done through the legacy path as mentioned
>> above.  For example, if a USB headset w/ a HID interface exposed (for
>> volume control) was connected, those HID events would be routed to
>> userspace to adjust volume accordingly on the main processor. (although
>> you're right about having separate controls still present - one for the
>> ASoC card and another for USB card)
> 
> The two sets of controls implied by the use of two cards are really
> problematic IMHO. This adds complexity for userspace to figure out that
> the controls are really the same and synchronize/mirror changes.
> 
> The premise of offload is that it should really not get in the way of
> user-experience, design constructs that result in delayed starts/stop,
> changed volumes or quality differences should be avoided, or
> users/distros will disable this optimization.
>

Makes sense.  In terms of controls, we know that for a USB audio 
device, everything will still be handled through the USB card.  Again, 
since I'm not too familiar with all the userspace implementations yet, 
do they have mechanisms to treat the control and data interfaces 
separately?

> One card with additional DSP-based PCM devices seems simpler to me in
> terms of usage, but it's not without technical challenges either: with
> the use of the ASoC topology framework we only know what the DSP
> supports when registering a card and probing the ASoC components.
> 
> The interaction between USB audio and ASoC would also be at least as
> complicated as display audio, in that it needs to work no matter what
> the probe order is, and even survive the Linux device/driver model
> requirement that there are no timing dependencies in the driver
> bind/unbind sequences.
> 

Yes, this was my initial approach as well, but from a technical 
perspective it was very messy, and it could potentially have affected 
functionality on certain devices if not handled correctly.  I think the 
difficult part was that the USB SND framework itself is an independent 
entity, and it was tough to dissect the portions that create PCM/sound 
card devices.

I don't think that change would have gone well if introduced all at 
once; it would require a lot of discussion to get the implementation 
right.  At least this series introduces the initial communication 
between ASoC and USB SND, and as use cases become clearer we can 
improve/build on top of it.

>>> Another point is that there may be cases where the DSP paths are not
>>> available if the DSP memory and MCPS budget is exceeded. In those cases,
>>> the DSP parts needs the ability to notify userspace that the legacy path
>>> should be used.
>>
>> If we ran into this scenario, the audio DSP AFE port start command would
>> fail, and this would be propagated to the userspace entity.  It could
>> then potentially re-route to the legacy/non-offload path.
> 
> 'start' or 'open'? This is a rather important design difference. Usually
> we try to make decisions in the .open or .hw_params stage. The 'start'
> or 'trigger' are usually not meant to fail due to unavailable resources
> in ALSA.

This happens during the .prepare() phase, so the failure is reported to 
userspace before the stream is triggered.

>>> Another case to handle is that some USB devices can handle way more data
>>> than DSPs can chew, for example Pro audio boxes that can deal with 8ch
>>> 192kHz will typically use the legacy paths. Some also handle specific
>>> formats such as DSD over PCM. So it's quite likely that PCM devices for
>>> card0 and card1 above do NOT expose support for the same formats, or put
>>> differently that only a subset of the USB device capabilities are
>>> handled through the DSP.
>>
>> Same as the above.  We have programmed the USB backend to support the
>> profiles that the audio DSP can handle.  I assume if there was any other
>> request, the userspace entity would fail the PCM open for that requested
>> profile.
> 
> What's not clear to me is whether there's any matching process between
> the DSP capabilities and what the USB device exposes. If the USB device
> is already way more complicated than what the ASoC back-end can deal
> with, why expose a card?
> 

That's something I thought was done by the ASoC core; I can check 
whether that's the case.  There is a check in the hw_params of our ASoC 
component where we query the USB audio descriptors to ensure that the 
PCM format being used is supported by the device.  I guess this covers 
the case where the DSP capabilities exceed what the headset can 
support :).

>>> And last, power optimizations with DSPs typically come from additional
>>> latency helping put the SoC in low-power modes. That's not necessarily
>>> ideal for all usages, e.g. for music recording and mixing I am not
>>> convinced the DSP path would help at all.
>>>
>>
>> That's true.  At the same time, this feature is more for power related
>> benefits, not specifically for performance. (although we haven't seen
>> any performance related issues w/ this approach on the audio profiles
>> the DSP supports)  I think if it's an audio profile that supports a high
>> sample rate and large number of channels, then the DSP wouldn't be able
>> to support it anyway, and userspace could still use the legacy path.
>> This would allow for those high-performance audio devices to not be
>> affected.
> 
> ok, we are aligned as well here. Excellent. With the on-going work to
> introduce 'Deep Buffer' capabilities, we'll have a need to tag PCM
> devices with a usage or 'modifier', or have this information in
> UCM/topology. Logic will have to be introduced in userspace to use the
> best routing, I would think this work can be reused for USB cases to
> indicate the offload solution is geared to power optimizations.

Great, I like that idea of helping userspace choose the desired path 
based on what the overall system is looking for.  I wonder if that would 
also help with some of the PCM device selection complexities you brought 
up.  If the system just wants the best possible performance, it could 
simply ignore the power-optimized (offload) path for any device.

Thanks
Wesley Cheng
Wesley Cheng Jan. 7, 2023, 12:51 a.m. UTC | #26
Hi Pierre,

On 1/6/2023 8:09 AM, Pierre-Louis Bossart wrote:
> 
>>>> The QC ADSP is able to support USB playback and capture, so that the
>>>> main application processor can be placed into lower CPU power modes.
>>>> This
>>>> adds the required AFE port configurations and port start command to
>>>> start
>>>> an audio session.
>>>
>>> It would be good to clarify what sort of endpoints can be supported. I
>>> presume the SOF-synchronous case is handled, but how would you deal with
>>> async endpoints with feedback (be it explicit or implicit)?
>>>
>>
>> Sure, both types of feedback endpoints are expected to be supported by
>> the audio DSP, as well as sync eps.  We have the logic there to modify
>> the audio sample size accordingly.
> 
> did you mean modify samples per USB frame (or uframe), so as to change
> the pace at which data is transferred? If yes it'd be the same for Intel.
> 

Yes, sorry for not being clear.  Your understanding is correct.

>>>>      static const struct snd_soc_dapm_route q6afe_dapm_routes[] = {
>>>> +    {"USB Playback", NULL, "USB_RX"},
>>>
>>> ... but here RX means playback?
>>>
>>> I am not sure I get the convention on directions and what is actually
>>> supported?
>>>
>>
>> The notation is based on the direction in which the audio data is
>> sourced from or pushed onto the DSP.  So in playback, the DSP is receiving
>> audio data to send, and in capture, it is transmitting audio data received.
>> (although we do not support capture yet)
> 
> ok, it'd be good to add a comment on this convention. Usually RX/TX is
> bus-centric.
> 

Sure, will do.

>>
>>>> +struct afe_param_id_usb_cfg {
>>>> +/* Minor version used for tracking USB audio device configuration.
>>>> + * Supported values: AFE_API_MINOR_VERSION_USB_AUDIO_CONFIG
>>>> + */
>>>> +    u32                  cfg_minor_version;
>>>> +/* Sampling rate of the port.
>>>> + * Supported values:
>>>> + * - AFE_PORT_SAMPLE_RATE_8K
>>>> + * - AFE_PORT_SAMPLE_RATE_11025
>>>> + * - AFE_PORT_SAMPLE_RATE_12K
>>>> + * - AFE_PORT_SAMPLE_RATE_16K
>>>> + * - AFE_PORT_SAMPLE_RATE_22050
>>>> + * - AFE_PORT_SAMPLE_RATE_24K
>>>> + * - AFE_PORT_SAMPLE_RATE_32K
>>>> + * - AFE_PORT_SAMPLE_RATE_44P1K
>>>> + * - AFE_PORT_SAMPLE_RATE_48K
>>>> + * - AFE_PORT_SAMPLE_RATE_96K
>>>> + * - AFE_PORT_SAMPLE_RATE_192K
>>>> + */
>>>> +    u32                  sample_rate;
>>>> +/* Bit width of the sample.
>>>> + * Supported values: 16, 24
>>>> + */
>>>> +    u16                  bit_width;
>>>> +/* Number of channels.
>>>> + * Supported values: 1 and 2
>>>
>>> that aligns with my feedback on the cover letter, if you connect a
>>> device that can support from than 2 channels should the DSP even expose
>>> this DSP-optimized path?
>>>
>>
>> My assumption is that I programmed the DAIs w/ PCM formats supported by
>> the DSP, so I think the ASoC core should not allow userspace to choose
>> that path if the hw params don't fit/match.
> 
> Right, but the point I was trying to make is that if the device can do
> more, why create this DSP path at all?
> 

Yeah, I think this brings me back to needing to understand a bit more 
about how userspace chooses which PCM device to use.  At least for our 
current use cases, userspace would always route through the offload 
path, regardless of whether the device can do more; it would just select 
a lower audio profile if so.

>>
>>> Oh and I forgot, what happens if there are multiple audio devices
>>> connected, can the DSP deal with all of them? If not, how is this
>>> handled?
>>>
>>
>> This is one topic that we were pretty open ended on.  At least on our
>> implementation, only one audio device can be supported at a time.  We
>> choose the latest device that was plugged in or discovered by the USB
>> SND class driver.
> 
> Similar case for Intel. I have to revisit this, I don't recall the details.
> 

Got it.

Thanks
Wesley Cheng