[V3,0/3] Tegra TPM driver with HW flow control

Message ID 20230223162635.19747-1-kyarlagadda@nvidia.com

Message

Krishna Yarlagadda Feb. 23, 2023, 4:26 p.m. UTC
TPM interface spec defines flow control where the TPM device drives
MISO in the same cycle as the last address bit sent by the controller
on MOSI. This wait state can be detected by software reading the MISO
line or by the controller hardware. Support sending transfers to the
controller in a single message and handling flow control in hardware.
Half-duplex controllers have to support flow control in hardware.
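
For reference, the software wait-polling method works roughly like the
minimal sketch below, modelled on the flow-control helper in
tpm_tis_spi_main.c: the TPM signals a wait state by keeping bit 0 of
the last header byte low, and the driver polls with 1-byte full-duplex
transfers until the TPM reports ready. Details are illustrative, not a
verbatim copy of the driver.

static int tpm_tis_spi_flow_control(struct tpm_tis_spi_phy *phy,
				    struct spi_transfer *spi_xfer)
{
	struct spi_message m;
	int ret, i;

	/* Bit 0 low on the last header byte means the TPM wants a wait */
	if ((phy->iobuf[3] & 0x01) == 0) {
		phy->iobuf[0] = 0;

		/* Poll with 1-byte full-duplex transfers until ready */
		for (i = 0; i < TPM_RETRY; i++) {
			spi_xfer->len = 1;
			spi_message_init(&m);
			spi_message_add_tail(spi_xfer, &m);
			ret = spi_sync_locked(phy->spi_device, &m);
			if (ret < 0)
				return ret;
			if (phy->iobuf[0] & 0x01)
				break;
		}

		if (i == TPM_RETRY)
			return -ETIMEDOUT;
	}

	return 0;
}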

Tegra234 and Tegra241 chips have a QSPI controller that supports TPM
Interface Specification (TIS) flow control.
Since the controller only supports half duplex, the SW wait polling
(flow control using full-duplex transfers) method implemented in
tpm_tis_spi_main.c will not work, and HW flow control has to be used.

Updates in this patchset
 - Tegra QSPI identifies itself as half duplex.
 - TPM TIS SPI driver skips flow control for half duplex and sends the
   transfers in a single message for the controller to handle (see the
   dispatch sketch below).
 - TPM device identifies itself as a TPM device so the controller can
   detect it and enable the HW TPM wait poll feature.
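
A minimal sketch of the dispatch this would add in tpm_tis_spi_main.c,
keyed off the controller's half-duplex flag; the
tpm_tis_spi_transfer_half()/_full() helper names are assumptions here:

static int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr,
				u16 len, u8 *in, const u8 *out)
{
	struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
	struct spi_controller *ctlr = phy->spi_device->controller;

	/*
	 * Half-duplex controllers cannot poll MISO in software, so hand
	 * the whole message over and let the controller handle the flow
	 * control in hardware.
	 */
	if (ctlr->flags & SPI_CONTROLLER_HALF_DUPLEX)
		return tpm_tis_spi_transfer_half(data, addr, len, in, out);

	return tpm_tis_spi_transfer_full(data, addr, len, in, out);
}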

Verified with a TPM device on a Tegra241 ref board using TPM2 tools.

V3:
 - Use SPI device mode flag and SPI controller flags.
 - Drop usage of device tree flags.
 - Generic TPM half duplex controller handling.
 - HW & SW flow control for TPM. Drop additional driver.
V2:
 - Fix dt schema errors.


Krishna Yarlagadda (3):
  tpm_tis-spi: Support hardware wait polling
  spi: tegra210-quad: set half duplex flag
  spi: tegra210-quad: Enable TPM wait polling

 drivers/char/tpm/tpm_tis_spi_main.c | 90 ++++++++++++++++++++++++++++-
 drivers/spi/spi-tegra210-quad.c     | 22 +++++++
 include/linux/spi/spi.h             |  7 ++-
 3 files changed, 114 insertions(+), 5 deletions(-)

Comments

Mark Brown Feb. 23, 2023, 5:28 p.m. UTC | #1
On Thu, Feb 23, 2023 at 09:56:35PM +0530, Krishna Yarlagadda wrote:

> Trusted Platform Module requires flow control. As defined in the TPM
> interface specification, the client drives the MISO line in the same
> cycle as the last address bit on MOSI.
> The Tegra241 QSPI controller has a TPM wait state detection feature
> which is enabled for TPM client devices reported in the SPI device
> mode bits.
> Set the half-duplex flag so the TPM driver detects it and sends the
> entire message to the controller in one shot.

I don't really understand what the controller is actually doing here, or
what the intended effect of the SPI_TPM_HW_FLOW flag is supposed to be.

>  	/* Enable Combined sequence mode */
>  	val = tegra_qspi_readl(tqspi, QSPI_GLOBAL_CONFIG);
> +	if (spi->mode & SPI_TPM_HW_FLOW) {
> +		if (tqspi->soc_data->tpm_wait_poll)
> +			val |= QSPI_TPM_WAIT_POLL_EN;
> +		else
> +			return -EIO;
> +	}

This just sets a bit in a register...

>  	val |= QSPI_CMB_SEQ_EN;
>  	tegra_qspi_writel(tqspi, val, QSPI_GLOBAL_CONFIG);
>  	/* Process individual transfer list */

...my guess is that setting that bit causes the individual transfers to
be delayed in completing without further changes?  Is it just some
transfers or all of them?
Mark Brown Feb. 23, 2023, 6:31 p.m. UTC | #2
On Thu, 23 Feb 2023 21:56:32 +0530, Krishna Yarlagadda wrote:
> TPM interface spec defines flow control where the TPM device drives
> MISO in the same cycle as the last address bit sent by the controller
> on MOSI. This wait state can be detected by software reading the MISO
> line or by the controller hardware. Support sending transfers to the
> controller in a single message and handling flow control in hardware.
> Half-duplex controllers have to support flow control in hardware.
> 
> [...]

Applied to

   https://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git for-next

Thanks!

[2/3] spi: tegra210-quad: set half duplex flag
      commit: f7482d8285b638be87a594a30edaaf1341135c1a

All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix), however if
problems are discovered then the patch may be dropped or reverted.

You may get further e-mails resulting from automated or manual testing
and review of the tree, please engage with people reporting problems and
send followup patches addressing any issues that are reported if needed.

If any updates are required or you are submitting further changes they
should be sent as incremental updates against current git, existing
patches will not be replaced.

Please add any relevant lists and maintainers to the CCs when replying
to this mail.

Thanks,
Mark
Krishna Yarlagadda Feb. 23, 2023, 6:41 p.m. UTC | #3
> Subject: Re: [Patch V3 1/3] tpm_tis-spi: Support hardware wait polling
> 
> On Thu, Feb 23, 2023 at 09:56:33PM +0530, Krishna Yarlagadda wrote:
> 
> > +       spi_bus_lock(phy->spi_device->master);
> > +
> > +       while (len) {
> 
> Why?
TPM supports max 64B in a single transaction. Loop to send the rest of it.
> 
> > +		spi_xfer[0].tx_buf = phy->iobuf;
> > +		spi_xfer[0].len = 1;
> > +		spi_message_add_tail(&spi_xfer[0], &m);
> > +
> > +		spi_xfer[1].tx_buf = phy->iobuf + 1;
> > +		spi_xfer[1].len = 3;
> > +		spi_message_add_tail(&spi_xfer[1], &m);
> 
> Why would we make these two separate transfers?
Tegra QSPI combined sequence requires cmd, addr and data in different
transfers. This eliminates the need for an additional flag for the
combined sequence.
> 
> > +		if (out) {
> > +			spi_xfer[2].tx_buf = &phy->iobuf[4];
> > +			spi_xfer[2].rx_buf = NULL;
> > +			memcpy(&phy->iobuf[4], out, transfer_len);
> > +			out += transfer_len;
> > +		}
> > +
> > +		if (in) {
> > +			spi_xfer[2].tx_buf = NULL;
> > +			spi_xfer[2].rx_buf = &phy->iobuf[4];
> > +		}
> 
> This will use the same buffer for rx and tx if some bug manages to leave
> them both set.  That shouldn't be an issue but it's an alarm bell
> reading the code.
> 
> > index 988aabc31871..b88494e31239 100644
> > --- a/include/linux/spi/spi.h
> > +++ b/include/linux/spi/spi.h
> > @@ -184,8 +184,9 @@ struct spi_device {
> >  	u8			chip_select;
> >  	u8			bits_per_word;
> >  	bool			rt;
> > -#define SPI_NO_TX	BIT(31)		/* No transmit wire */
> > -#define SPI_NO_RX	BIT(30)		/* No receive wire */
> > +#define SPI_NO_TX		BIT(31)		/* No transmit wire */
> > +#define SPI_NO_RX		BIT(30)		/* No receive wire */
> > +#define SPI_TPM_HW_FLOW		BIT(29)		/* TPM flow control */
> 
> Additions to the SPI API should be a separate commit for SPI rather than
> merged into a driver change.
Will split this into a new patch.
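
(A condensed, self-contained sketch of the cmd/addr/data message
layout discussed above; the transfer setup follows the quoted V3
hunks, while the tpm_tis_spi_build_and_send() wrapper name is
hypothetical and the surrounding chunking loop is omitted:)

static int tpm_tis_spi_build_and_send(struct tpm_tis_spi_phy *phy,
				      u16 len, u8 *in, const u8 *out)
{
	struct spi_transfer spi_xfer[3] = { };
	struct spi_message m;
	u8 transfer_len = min_t(u16, len, MAX_SPI_FRAMESIZE);

	spi_message_init(&m);

	spi_xfer[0].tx_buf = phy->iobuf;	/* 1-byte command */
	spi_xfer[0].len = 1;
	spi_message_add_tail(&spi_xfer[0], &m);

	spi_xfer[1].tx_buf = phy->iobuf + 1;	/* 3-byte address */
	spi_xfer[1].len = 3;
	spi_message_add_tail(&spi_xfer[1], &m);

	/* Data phase: tx for writes, rx for reads, never both */
	if (out)
		spi_xfer[2].tx_buf = &phy->iobuf[4];
	if (in)
		spi_xfer[2].rx_buf = &phy->iobuf[4];
	spi_xfer[2].len = transfer_len;
	spi_message_add_tail(&spi_xfer[2], &m);

	return spi_sync_locked(phy->spi_device, &m);
}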
Mark Brown Feb. 23, 2023, 6:43 p.m. UTC | #4
On Thu, Feb 23, 2023 at 06:41:43PM +0000, Krishna Yarlagadda wrote:

> > > +       spi_bus_lock(phy->spi_device->master);
> > > +
> > > +       while (len) {

> > Why?

> TPM supports max 64B in a single transaction. Loop to send the rest of it.

No, why is there a bus lock?

> > > +		spi_xfer[0].tx_buf = phy->iobuf;
> > > +		spi_xfer[0].len = 1;
> > > +		spi_message_add_tail(&spi_xfer[0], &m);
> > > +
> > > +		spi_xfer[1].tx_buf = phy->iobuf + 1;
> > > +		spi_xfer[1].len = 3;
> > > +		spi_message_add_tail(&spi_xfer[1], &m);

> > Why would we make these two separate transfers?

> Tegra QSPI combined sequence requires cmd, addr and data in different
> transfers. This eliminates the need for an additional flag for the
> combined sequence.

That needs some documentation, and we might need a flag to ensure the
core doesn't coalesce the transfers.
Krishna Yarlagadda Feb. 23, 2023, 6:46 p.m. UTC | #5
> Subject: Re: [Patch V3 3/3] spi: tegra210-quad: Enable TPM wait polling
> 
> On Thu, Feb 23, 2023 at 09:56:35PM +0530, Krishna Yarlagadda wrote:
> 
> > Trusted Platform Module requires flow control. As defined in the TPM
> > interface specification, the client drives the MISO line in the same
> > cycle as the last address bit on MOSI.
> > The Tegra241 QSPI controller has a TPM wait state detection feature
> > which is enabled for TPM client devices reported in the SPI device
> > mode bits.
> > Set the half-duplex flag so the TPM driver detects it and sends the
> > entire message to the controller in one shot.
> 
> I don't really understand what the controller is actually doing here, or
> what the intended effect of the SPI_TPM_HW_FLOW flag is supposed to be.
> 
> >  	/* Enable Combined sequence mode */
> >  	val = tegra_qspi_readl(tqspi, QSPI_GLOBAL_CONFIG);
> > +	if (spi->mode & SPI_TPM_HW_FLOW) {
> > +		if (tqspi->soc_data->tpm_wait_poll)
> > +			val |= QSPI_TPM_WAIT_POLL_EN;
> > +		else
> > +			return -EIO;
> > +	}
> 
> This just sets a bit in a register...
> 
> >  	val |= QSPI_CMB_SEQ_EN;
> >  	tegra_qspi_writel(tqspi, val, QSPI_GLOBAL_CONFIG);
> >  	/* Process individual transfer list */
> 
> ...my guess is that setting that bit causes the individual transfers to
> be delayed in completing without further changes?  Is it just some
> transfers or all of them?
The TPM spec defines flow control over SPI. The TPM device/client
inserts a wait state on the MISO line when the address is transferred
on MOSI. This wait state has to be detected by reading the MISO line,
which needs a full-duplex transfer during the address phase. The Tegra
QSPI controller only supports half duplex, but in combined sequence
mode it can detect the wait state inserted by the TPM device and send
or receive data once the device is ready. Detection happens on all
transfers with the TPM.
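
(A sketch of how the per-SoC gating for QSPI_TPM_WAIT_POLL_EN might be
wired up; the tpm_wait_poll field matches the quoted diff, while the
rest of the soc_data layout is an assumption based on the driver's
existing pattern:)

/* Per-SoC capability flags; tpm_wait_poll gates QSPI_TPM_WAIT_POLL_EN */
struct tegra_qspi_soc_data {
	bool cmb_xfer_capable;
	bool tpm_wait_poll;
};

static const struct tegra_qspi_soc_data tegra241_qspi_soc_data = {
	.cmb_xfer_capable = true,
	.tpm_wait_poll = true,	/* HW detects TPM wait states */
};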
Krishna Yarlagadda Feb. 24, 2023, 2:16 p.m. UTC | #6
> Subject: Re: [Patch V3 1/3] tpm_tis-spi: Support hardware wait polling
> 
> On Thu, Feb 23, 2023 at 06:41:43PM +0000, Krishna Yarlagadda wrote:
> 
> > > > +       spi_bus_lock(phy->spi_device->master);
> > > > +
> > > > +       while (len) {
> 
> > > Why?
> 
> > TPM supports max 64B in a single transaction. Loop to send the rest of it.
> 
> No, why is there a bus lock?
Bus lock to prevent other clients from being accessed between TPM transfers.

> 
> > > > +		spi_xfer[0].tx_buf = phy->iobuf;
> > > > +		spi_xfer[0].len = 1;
> > > > +		spi_message_add_tail(&spi_xfer[0], &m);
> > > > +
> > > > +		spi_xfer[1].tx_buf = phy->iobuf + 1;
> > > > +		spi_xfer[1].len = 3;
> > > > +		spi_message_add_tail(&spi_xfer[1], &m);
> 
> > > Why would we make these two separate transfers?
> 
> > Tegra QSPI combined sequence requires cmd, addr and data in different
> > transfers. This eliminates the need for an additional flag for the
> > combined sequence.
> 
> That needs some documentation, and we might need a flag to ensure the
> core doesn't coalesce the transfers.
Will add a comment at the top of the function. The bus lock should
keep the transfers of a single message from being coalesced with
others.
KY
Mark Brown Feb. 24, 2023, 3:51 p.m. UTC | #7
On Fri, Feb 24, 2023 at 02:16:27PM +0000, Krishna Yarlagadda wrote:

> > > > > +       spi_bus_lock(phy->spi_device->master);
> > > > > +
> > > > > +       while (len) {

> > > > Why?

> > > TPM supports max 64B in a single transaction. Loop to send the rest of it.

> > No, why is there a bus lock?

> Bus lock to prevent other clients from being accessed between TPM transfers.

That's what a bus lock does but what would be the problem if something
else sent a message between messages?  Note that a message will always
be sent atomically.
Krishna Yarlagadda Feb. 24, 2023, 4:21 p.m. UTC | #8
> Subject: Re: [Patch V3 1/3] tpm_tis-spi: Support hardware wait polling
> 
> On Fri, Feb 24, 2023 at 02:16:27PM +0000, Krishna Yarlagadda wrote:
> 
> > > > > > +       spi_bus_lock(phy->spi_device->master);
> > > > > > +
> > > > > > +       while (len) {
> 
> > > > > Why?
> 
> > > > TPM supports max 64B in a single transaction. Loop to send the rest of it.
> 
> > > No, why is there a bus lock?
> 
> > Bus lock to prevent other clients from being accessed between TPM transfers.
> 
> That's what a bus lock does but what would be the problem if something
> else sent a message between messages?  Note that a message will always
> be sent atomically.
QSPI has multi-chip-select support. The idea was to prevent transfers
from both devices at the same time, but it should be fine if it is a
single message. I will verify with the bus lock removed.
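
(For reference, a minimal sketch of the V3 locking pattern under
discussion; build_tpm_message() is a hypothetical stand-in for the
cmd/addr/data setup shown earlier, while spi_bus_lock(),
spi_sync_locked() and spi_bus_unlock() are the standard SPI core
calls:)

static int tpm_tis_spi_transfer_half(struct tpm_tis_spi_phy *phy,
				     u32 addr, u16 len,
				     u8 *in, const u8 *out)
{
	struct spi_message m;
	int ret = 0;

	/*
	 * Hold the bus so the per-64B chunks of one TPM operation are
	 * not interleaved with messages to other devices on this
	 * multi-chip-select controller.
	 */
	spi_bus_lock(phy->spi_device->controller);

	while (len) {
		u8 transfer_len = min_t(u16, len, MAX_SPI_FRAMESIZE);

		/* hypothetical helper: cmd/addr/data message setup */
		build_tpm_message(phy, &m, addr, transfer_len, in, out);

		ret = spi_sync_locked(phy->spi_device, &m);
		if (ret < 0)
			break;

		len -= transfer_len;
	}

	spi_bus_unlock(phy->spi_device->controller);
	return ret;
}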