[RFC,v2,0/5] Apple Macs machine/platform ASoC driver

Message ID 20220606191910.16580-1-povik+lin@cutebit.org


Martin Povišer June 6, 2022, 7:19 p.m. UTC
Hi all,

This is again an RFC with a machine-level ASoC driver for recent Apple Macs
with the M1 line of chips. This time I have attached the platform driver too
for good measure. What I am most interested in is a check of the overall
approach, especially on two points (both in some ways already discussed
in the previous RFC [0]):

 - The way the platform/machine driver handles the fact that multiple I2S
   ports (now backend DAIs) can be driven by/connected to the same SERDES
   unit (now in effect a frontend DAI). After previous discussion I have
   transitioned to DPCM to model this. I took the opportunity of dynamic
   backend/frontend routing to support speakers/headphones runtime
   switching. More on this in comments at top of the machine and platform
   driver.

 - The way the machine driver deactivates some of the controls where
   suitable, and limits volume on others. I added a new ASoC card method
   to that end.
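As a rough illustration, the kind of control fixup meant here (deactivating some controls, clamping the volume range of others) can be modeled in plain C; the names below are hypothetical and do not reflect the actual ASoC API:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Userspace model of the 'fixup_controls' idea: after the card is
 * bound, the machine driver walks its controls, hides the ones that
 * must not be touched and clamps the range of others to safe values.
 * All names here are hypothetical.
 */
struct model_ctl {
	const char *name;
	bool active;	/* visible/usable from userspace */
	int max;	/* maximum raw volume the control accepts */
};

/* Clamp a control's range to a machine-specific safe maximum. */
static void model_limit_volume(struct model_ctl *ctl, int safe_max)
{
	if (ctl->max > safe_max)
		ctl->max = safe_max;
}

/* Deactivate a control that should not be exposed on this machine. */
static void model_deactivate(struct model_ctl *ctl)
{
	ctl->active = false;
}
```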

Kind regards,
Martin

[0] https://lore.kernel.org/linux-devicetree/20220331000449.41062-1-povik+lin@cutebit.org/

Martin Povišer (5):
  dt-bindings: sound: Add Apple MCA I2S transceiver
  dt-bindings: sound: Add Apple Macs sound peripherals
  ASoC: apple: Add MCA platform driver for Apple SoCs
  ASoC: Introduce 'fixup_controls' card method
  ASoC: apple: Add macaudio machine driver

 .../bindings/sound/apple,macaudio.yaml        |  157 +++
 .../devicetree/bindings/sound/apple,mca.yaml  |  102 ++
 include/sound/soc-card.h                      |    1 +
 include/sound/soc.h                           |    1 +
 sound/soc/Kconfig                             |    1 +
 sound/soc/Makefile                            |    1 +
 sound/soc/apple/Kconfig                       |   25 +
 sound/soc/apple/Makefile                      |    5 +
 sound/soc/apple/macaudio.c                    | 1004 +++++++++++++++
 sound/soc/apple/mca.c                         | 1122 +++++++++++++++++
 sound/soc/soc-card.c                          |    6 +
 sound/soc/soc-core.c                          |    1 +
 12 files changed, 2426 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/sound/apple,macaudio.yaml
 create mode 100644 Documentation/devicetree/bindings/sound/apple,mca.yaml
 create mode 100644 sound/soc/apple/Kconfig
 create mode 100644 sound/soc/apple/Makefile
 create mode 100644 sound/soc/apple/macaudio.c
 create mode 100644 sound/soc/apple/mca.c

Comments

Mark Brown June 6, 2022, 7:49 p.m. UTC | #1
On Mon, Jun 06, 2022 at 09:19:07PM +0200, Martin Povišer wrote:

> +      dai-link@1 {
> +        reg = <1>;
> +        link-name = "Headphones Jack";

Incredibly trivial bikeshed but normally that'd be "Headphone Jack"
(singular).
Mark Brown June 6, 2022, 8:17 p.m. UTC | #2
On Mon, Jun 06, 2022 at 09:19:08PM +0200, Martin Povišer wrote:

> +++ b/sound/soc/apple/mca.c
> @@ -0,0 +1,1122 @@
> +/*
> + * Apple SoCs MCA driver

Please add SPDX headers to all your files.

> +		mca_modify(cl, serdes_conf,
> +			SERDES_CONF_SOME_RST, SERDES_CONF_SOME_RST);
> +		(void) readl_relaxed(cl->base + serdes_conf);

Please drop the cast, casts to/from void are generally a warning sign as
they're unneeded in C.  If you want to document the barrier use a
comment or wrapper function.
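The wrapper-function option could look roughly like the following userspace model, where a plain variable stands in for the MMIO register and the name is hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the SERDES_CONF MMIO register (model only). */
static volatile uint32_t fake_serdes_conf;

/*
 * Read the register back so the preceding posted write is flushed to
 * the device before we proceed. Giving the read-back a name documents
 * the intent better than a bare '(void) readl_relaxed(...)'.
 */
static uint32_t mca_readback_flush(volatile uint32_t *reg)
{
	return *reg;
}
```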

> +	/*
> + * Codecs require clocks at time of unmute with the 'mute_stream' op.
> +	 * We need to enable them here at the latest (frontend prepare would
> +	 * be too late).
> +	 */
> +	if (!mca_fe_clocks_in_use(fe_cl)) {
> +		ret = mca_fe_enable_clocks(fe_cl);
> +		if (ret < 0)
> +			return ret;
> +	}

This requirement is CODEC specific.  It's fine to bodge around to
satisfy it though, especially given the restricted set of platforms this
can be used with.

> +	fe_cl = &mca->clusters[cl->port_driver];
> +	if (!mca_fe_clocks_in_use(fe_cl))
> +		return 0; /* Nothing to do */
> +
> +	cl->clocks_in_use[substream->stream] = false;
> +
> +	if (!mca_fe_clocks_in_use(fe_cl))
> +		mca_fe_disable_clocks(fe_cl);

Are you sure this doesn't need locking?
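The check-then-act sequence being questioned here can be modeled in userspace C with an explicit lock around it (hypothetical names; a sketch of the refcounting scheme, not the driver code):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/*
 * Model of the clock gating under review: per-stream 'clocks_in_use'
 * flags on a cluster, with a mutex guarding the check-then-act
 * sequences so two streams cannot race the enable/disable decision.
 */
struct model_cluster {
	pthread_mutex_t lock;
	bool clocks_in_use[2];	/* one flag per stream direction */
	bool clocks_on;		/* state of the shared FE clocks */
};

static bool model_clocks_in_use(struct model_cluster *cl)
{
	return cl->clocks_in_use[0] || cl->clocks_in_use[1];
}

static void model_stream_start(struct model_cluster *cl, int stream)
{
	pthread_mutex_lock(&cl->lock);
	if (!model_clocks_in_use(cl))
		cl->clocks_on = true;	/* mca_fe_enable_clocks() */
	cl->clocks_in_use[stream] = true;
	pthread_mutex_unlock(&cl->lock);
}

static void model_stream_stop(struct model_cluster *cl, int stream)
{
	pthread_mutex_lock(&cl->lock);
	cl->clocks_in_use[stream] = false;
	if (!model_clocks_in_use(cl))
		cl->clocks_on = false;	/* mca_fe_disable_clocks() */
	pthread_mutex_unlock(&cl->lock);
}
```

The clocks stay on as long as either direction is active and are dropped only when the last user stops.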
Pierre-Louis Bossart June 6, 2022, 9:22 p.m. UTC | #3
On 6/6/22 15:46, Martin Povišer wrote:
> (I am having trouble delivering mail to linux.intel.com, so I reply to the list
> and CC at least...)
> 
>> On 6. 6. 2022, at 22:02, Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> wrote:
>>
>>
>>> + * Virtual FE/BE Playback Topology
>>> + * -------------------------------
>>> + *
>>> + * The platform driver has independent frontend and backend DAIs with the
>>> + * option of routing backends to any of the frontends. The platform
>>> + * driver configures the routing based on DPCM couplings in ASoC runtime
>>> + * structures, which in turn is determined from DAPM paths by ASoC. But the
>>> + * platform driver doesn't supply relevant DAPM paths and leaves that up for
>>> + * the machine driver to fill in. The filled-in virtual topology can be
>>> + * anything as long as a particular backend isn't connected to more than one
>>> + * frontend at any given time. (The limitation is due to the unsupported case
>>> + * of reparenting of live BEs.)
>>> + *
>>> + * The DAPM routing that this machine-level driver makes up has two use-cases
>>> + * in mind:
>>> + *
>>> + * - Using a single PCM for playback such that it conditionally sinks to either
>>> + *   speakers or headphones based on the plug-in state of the headphones jack.
>>> + *   All the while making the switch transparent to userspace. This has the
>>> + *   drawback of requiring a sample stream suited for both speakers and
>>> + *   headphones, which is hard to come by on machines where tailored DSP for
>>> + *   speakers in userspace is desirable or required.
>>> + *
>>> + * - Driving the headphones and speakers from distinct PCMs, having userspace
>>> + *   bridge the difference and apply different signal processing to the two.
>>> + *
>>> + * In the end the topology supplied by this driver looks like this:
>>> + *
>>> + *  PCMs (frontends)                   I2S Port Groups (backends)
>>> + *  ────────────────                   ──────────────────────────
>>> + *
>>> + *  ┌──────────┐       ┌───────────────► ┌─────┐     ┌──────────┐
>>> + *  │ Primary  ├───────┤                 │ Mux │ ──► │ Speakers │
>>> + *  └──────────┘       │    ┌──────────► └─────┘     └──────────┘
>>> + *                ┌─── │ ───┘             ▲
>>> + *  ┌──────────┐  │    │                  │
>>> + *  │Secondary ├──┘    │     ┌────────────┴┐
>>> + *  └──────────┘       ├────►│Plug-in Demux│
>>> + *                     │     └────────────┬┘
>>> + *                     │                  │
>>> + *                     │                  ▼
>>> + *                     │                 ┌─────┐     ┌──────────┐
>>> + *                     └───────────────► │ Mux │ ──► │Headphones│
>>> + *                                       └─────┘     └──────────┘
>>> + */
>>
>> In Patch2, the 'clusters' are described as front-ends, with I2S as
>> back-ends. Here the PCMs are described as front-ends, but there's no
>> mention of clusters. Either one of the two descriptions is outdated, or
>> there's something missing to help reconcile the two pieces of information?
> 
> Both descriptions should be in sync. Maybe I don’t know the proper
> terminology. In both cases the frontend is meant to be the actual I2S
> transceiver unit, and backend the I2S port on the SoC’s periphery,
> which can be routed to any of transceiver units. (Multiple ports can
> be routed to the same unit, which means the ports will have the same
> clocks and data line -- that's a configuration we need to support to
> drive some of the speaker arrays, hence the backend/frontend
> distinction).
> 
> Maybe I am using 'PCM' in a confusing way here? What I meant is a
> subdevice that’s visible from userspace, because I have seen it used
> that way in ALSA codebase.

Yes, I think most people familiar with DPCM would take the 'PCM
frontend' as some sort of generic DMA transfer from system memory, while
the 'back end' is more the actual serial link. In your case, the
front-end is already very low-level and tied to I2S. I think
that's fine; it's just that using different terms for 'cluster' and
'PCM' in different patches could lead to confusion.
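The routing rule stated in the topology comment (a backend may be connected to at most one frontend at any given time, since live BEs cannot be reparented) can be modeled as a simple validity check (illustrative only; names are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

#define MODEL_NUM_FE 2
#define MODEL_NUM_BE 4

/*
 * conn[fe][be] is true when frontend 'fe' is routed to backend 'be'.
 * The topology is valid as long as no backend column has more than
 * one frontend attached.
 */
static bool model_routing_valid(bool conn[MODEL_NUM_FE][MODEL_NUM_BE])
{
	for (int be = 0; be < MODEL_NUM_BE; be++) {
		int fes = 0;

		for (int fe = 0; fe < MODEL_NUM_FE; fe++)
			if (conn[fe][be])
				fes++;
		if (fes > 1)
			return false;	/* live BE reparenting unsupported */
	}
	return true;
}
```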

>>> +static int macaudio_get_runtime_mclk_fs(struct snd_pcm_substream *substream)
>>> +{
>>> +	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
>>> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(rtd->card);
>>> +	struct snd_soc_dpcm *dpcm;
>>> +
>>> +	/*
>>> +	 * If this is a FE, look it up in link_props directly.
>>> +	 * If this is a BE, look it up in the respective FE.
>>> +	 */
>>> +	if (!rtd->dai_link->no_pcm)
>>> +		return ma->link_props[rtd->dai_link->id].mclk_fs;
>>> +
>>> +	for_each_dpcm_fe(rtd, substream->stream, dpcm) {
>>> +		int fe_id = dpcm->fe->dai_link->id;
>>> +
>>> +		return ma->link_props[fe_id].mclk_fs;
>>> +	}
>>
>> I am not sure what the concept of mclk would mean for a front-end? This
>> is typically very I2S-specific, i.e. a back-end property, no?
> 
> Right, that’s a result of the confusion from above. Hope I cleared it up
> somehow. The frontend already decides the clocks and data serialization,
> hence mclk/fs is a frontend-prop here.

What confuses me in this code is that it looks possible for the front-
and back-end to have different mclk values. I think a comment is
missing saying that the values are identical; it's just that there's a
different way to access them depending on the link type?
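For reference, the mclk_fs in question is a ratio against the sample rate, so the MCLK frequency both sides of a link must agree on is just the product (illustrative arithmetic, not driver code):

```c
#include <assert.h>

/*
 * MCLK = mclk_fs * sample rate. A frontend and the backends routed to
 * it share one clock tree, so the value looked up via either path in
 * macaudio_get_runtime_mclk_fs() is necessarily the same; only the
 * lookup differs.
 */
static long model_mclk_hz(long rate_hz, int mclk_fs)
{
	return (long)mclk_fs * rate_hz;
}
```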


>>> +static int macaudio_be_init(struct snd_soc_pcm_runtime *rtd)
>>> +{
>>> +	struct snd_soc_card *card = rtd->card;
>>> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(card);
>>> +	struct macaudio_link_props *props = &ma->link_props[rtd->dai_link->id];
>>> +	struct snd_soc_dai *dai;
>>> +	int i, ret;
>>> +
>>> +	ret = macaudio_be_assign_tdm(rtd);
>>> +	if (ret < 0)
>>> +		return ret;
>>> +
>>> +	if (props->is_headphones) {
>>> +		for_each_rtd_codec_dais(rtd, i, dai)
>>> +			snd_soc_component_set_jack(dai->component, &ma->jack, NULL);
>>> +	}
>>
>> this is weird, set_jack() is invoked by the ASoC core. You shouldn't
>> need to do this?
> 
> That’s interesting. Where would it be invoked? How does ASoC know which codec
> it attaches to?

sorry, my comment was partly invalid.

set_jack() is invoked in the machine driver indeed, what I found strange
is that you may have different codecs handling the jack? What is the
purpose of that loop?


>>> +static int macaudio_jack_event(struct notifier_block *nb, unsigned long event,
>>> +				void *data);
>>> +
>>> +static struct notifier_block macaudio_jack_nb = {
>>> +	.notifier_call = macaudio_jack_event,
>>> +};
>>
>> why is this needed? we have already many ways of dealing with the jack
>> events (dare I say too many ways?).
> 
> Because I want to update the DAPM paths based on the jack status,
> specifically I want to set macaudio_plugin_demux. I don’t know how
> else it could be done.

I don't know either but I have never seen notifier blocks being used. I
would think there are already ways to do this with DAPM events.


>>> +static int macaudio_jack_event(struct notifier_block *nb, unsigned long event,
>>> +				void *data)
>>> +{
>>> +	struct snd_soc_jack *jack = data;
>>> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(jack->card);
>>> +
>>> +	ma->jack_plugin_state = !!event;
>>> +
>>> +	if (!ma->plugin_demux_kcontrol)
>>> +		return 0;
>>> +
>>> +	snd_soc_dapm_mux_update_power(&ma->card.dapm, ma->plugin_demux_kcontrol,
>>> +				      ma->jack_plugin_state,
>>> +				      (struct soc_enum *) &macaudio_plugin_demux_enum, NULL);
>>
>> the term 'plugin' can be understood in many ways by different audio
>> folks. 'plugin' is usually the term used for processing libraries (VST,
>> LADSPA, etc). I think here you meant 'jack control'?
> 
> So ‘jack control’ would be understood as the jack plugged/unplugged status?

The 'Headphone Jack' or 'Headset Mic Jack' kcontrols typically track the
status. Other terms are 'jack detection'. "plugin" is not a very common
term here.
Martin Povišer June 6, 2022, 9:33 p.m. UTC | #4
> On 6. 6. 2022, at 23:22, Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> wrote:
> 
> On 6/6/22 15:46, Martin Povišer wrote:
>> (I am having trouble delivering mail to linux.intel.com, so I reply to the list
>> and CC at least...)
>> 
>>> On 6. 6. 2022, at 22:02, Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> wrote:
>>> 
>>> 
>>>> [...]
>>> 
>>> In Patch2, the 'clusters' are described as front-ends, with I2S as
>>> back-ends. Here the PCMs are described as front-ends, but there's no
>>> mention of clusters. Either one of the two descriptions is outdated, or
>>> there's something missing to help reconcile the two pieces of information?
>> 
>> Both descriptions should be in sync. Maybe I don’t know the proper
>> terminology. In both cases the frontend is meant to be the actual I2S
>> transceiver unit, and backend the I2S port on the SoC’s periphery,
>> which can be routed to any of transceiver units. (Multiple ports can
>> be routed to the same unit, which means the ports will have the same
>> clocks and data line -- that's a configuration we need to support to
>> drive some of the speaker arrays, hence the backend/frontend
>> distinction).
>> 
>> Maybe I am using 'PCM' in a confusing way here? What I meant is a
>> subdevice that’s visible from userspace, because I have seen it used
>> that way in ALSA codebase.
> 
> Yes, I think most people familiar with DPCM would take the 'PCM
> frontend' as some sort of generic DMA transfer from system memory, while
> the 'back end' is more the actual serial link. In your case, the
> front-end is already very low-level and tied to I2S already. I think
> that's fine, it's just that using different terms for 'cluster' and
> 'PCM' in different patches could lead to confusions.

OK, will take this into account then.

>>>> [...]
>>> 
>>> I am not sure what the concept of mclk would mean for a front-end? This
>>> is typically very I2S-specific, i.e. a back-end property, no?
>> 
>> Right, that’s a result of the confusion from above. Hope I cleared it up
>> somehow. The frontend already decides the clocks and data serialization,
>> hence mclk/fs is a frontend-prop here.
> 
> What confuses me in this code is that it looks possible that the front-
> and back-end could have different mclk values? I think a comment is
> missing that the values are identical, it's just that there's a
> different way to access it depending on the link type?

Well, that’s exactly what I am trying to convey by the comment above
in macaudio_get_runtime_mclk_fs. Maybe I should do something to make
it more obvious.

>>>> [...]
>>> 
>>> this is weird, set_jack() is invoked by the ASoC core. You shouldn't
>>> need to do this?
>> 
>> That’s interesting. Where would it be invoked? How does ASoC know which codec
>> it attaches to?
> 
> sorry, my comment was partly invalid.
> 
> set_jack() is invoked in the machine driver indeed, what I found strange
> is that you may have different codecs handling the jack? What is the
> purpose of that loop?

There’s no need for the loop, there’s a single codec. OK, will remove the loop
to make it less confusing.

>>>> [...]
>>> 
>>> why is this needed? we have already many ways of dealing with the jack
>>> events (dare I say too many ways?).
>> 
>> Because I want to update the DAPM paths based on the jack status,
>> specifically I want to set macaudio_plugin_demux. I don’t know how
>> else it could be done.
> 
> I don't know either but I have never seen notifier blocks being used. I
> would think there are already ways to do this with DAPM events.
> 
> 
>>>> [...]
>>> 
>>> the term 'plugin' can be understood in many ways by different audio
>>> folks. 'plugin' is usually the term used for processing libraries (VST,
>>> LADSPA, etc). I think here you meant 'jack control'?
>> 
>> So ‘jack control’ would be understood as the jack plugged/unplugged status?
> 
> The 'Headphone Jack' or 'Headset Mic Jack' kcontrols typically track the
> status. Other terms are 'jack detection'. "plugin" is not a very common
> term here.

OK
Mark Brown June 9, 2022, 3:53 p.m. UTC | #5
On Mon, Jun 06, 2022 at 09:19:05PM +0200, Martin Povišer wrote:

>  - The way the platform/machine driver handles the fact that multiple I2S
>    ports (now backend DAIs) can be driven by/connected to the same SERDES
>    unit (now in effect a frontend DAI). After previous discussion I have
>    transitioned to DPCM to model this. I took the opportunity of dynamic
>    backend/frontend routing to support speakers/headphones runtime
>    switching. More on this in comments at top of the machine and platform
>    driver.

This looks roughly like I'd expect now, there's some issues from myself
and Pierre but it's more around the edges than anything big picture.
Mark Brown June 10, 2022, 3:58 p.m. UTC | #6
On Mon, 6 Jun 2022 21:19:05 +0200, Martin Povišer wrote:
> This is again RFC with a machine-level ASoC driver for recent Apple Macs
> with the M1 line of chips. This time I attached the platform driver too
> for good measure. What I am interested in the most is checking the overall
> approach, especially on two points (both in some ways already discussed
> in previous RFC [0]):
> 
>  - The way the platform/machine driver handles the fact that multiple I2S
>    ports (now backend DAIs) can be driven by/connected to the same SERDES
>    unit (now in effect a frontend DAI). After previous discussion I have
>    transitioned to DPCM to model this. I took the opportunity of dynamic
>    backend/frontend routing to support speakers/headphones runtime
>    switching. More on this in comments at top of the machine and platform
>    driver.
> 
> [...]

Applied to

   https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git for-next

Thanks!

[4/5] ASoC: Introduce 'fixup_controls' card method
      commit: df4d27b19b892f464685ea45fa6132dd1a2b6864

All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix), however if
problems are discovered then the patch may be dropped or reverted.

You may get further e-mails resulting from automated or manual testing
and review of the tree, please engage with people reporting problems and
send followup patches addressing any issues that are reported if needed.

If any updates are required or you are submitting further changes they
should be sent as incremental updates against current git, existing
patches will not be replaced.

Please add any relevant lists and maintainers to the CCs when replying
to this mail.

Thanks,
Mark