[v12,0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing

Message ID 20230825-dma_iommu-v12-0-4134455994a7@linux.ibm.com

Message

Niklas Schnelle Aug. 25, 2023, 10:11 a.m. UTC
Hi All,

This patch series converts s390's PCI support from its platform-specific DMA
API implementation in arch/s390/pci/pci_dma.c to the common DMA IOMMU layer.
The conversion itself is done in patches 3-4, with patch 1 providing the
final IOMMU driver improvement needed to handle s390's special IOTLB flush
out-of-resource indication in virtualized environments. The conversion
only touches the s390 IOMMU driver and s390 arch code, moving the
remaining functions over from the s390 DMA API implementation. No changes
to common code are necessary.
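
For context, the core of patch 1 is allowing the .iotlb_sync_map callback
to report failure. A rough before/after sketch of the op change (an
editorial simplification, not a verbatim excerpt from the patch):

/* Member of struct iommu_domain_ops. Before: no way to report failure. */
void (*iotlb_sync_map)(struct iommu_domain *domain,
		       unsigned long iova, size_t size);

/* After: an int return lets the s390 driver propagate -ENOMEM when the
 * hypervisor's IOTLB flush signals that it ran out of resources.
 */
int (*iotlb_sync_map)(struct iommu_domain *domain,
		      unsigned long iova, size_t size);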

After patch 4 the basic conversion is done, and under LPAR, our machine
partitioning hypervisor, performance matches that of the previous
implementation. When running under z/VM or KVM, however, performance
drops to about half that of the existing code due to a much higher rate
of IOTLB flushes for unmapped pages. Because the hypervisors use IOTLB
flushes to synchronize their shadow tables, these flushes are very
expensive, and minimizing them is key to regaining the lost performance.

To this end patches 5-6 add a new, single-queue IOTLB flushing scheme
as an alternative to the existing per-CPU flush queues; introducing an
alternative scheme was suggested by Robin Murphy[1]. The single-queue
mode is introduced in patch 5 together with a new .shadow_on_flush flag
bit in struct dev_iommu. This allows IOMMU drivers to indicate that
their IOTLB flushes do the extra work of shadowing, which then lets the
dma-iommu code use a single queue.
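
As a rough illustration (an editorial sketch, not code from the series;
the example_* names are hypothetical), a driver whose IOTLB flushes
shadow into hypervisor tables could opt in at probe time like this:

/* An IOMMU driver indicating that its IOTLB flushes also update
 * hypervisor shadow tables, so dma-iommu should prefer one large
 * single flush queue over per-CPU queues.
 */
static struct iommu_device *example_iommu_probe_device(struct device *dev)
{
	struct example_iommu *iommu = example_iommu_from_dev(dev);

	if (iommu->flush_updates_shadow_tables)
		dev->iommu->shadow_on_flush = 1;

	return &iommu->iommu_dev;
}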

Then patch 6 enables variable queue sizes using power-of-2 values and
shift/mask to keep performance as close to the fixed-size queue code as
possible. A larger queue size and timeout are used by dma-iommu when
shadow_on_flush is set. The same scheme may also be used by other IOMMU
drivers with similar requirements; virtio-iommu in particular may be a
candidate.
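
The shift/mask trick is the usual power-of-2 ring indexing; a minimal
standalone sketch of the idea (editorial, names hypothetical):

/* With a power-of-2 queue size, "index % size" reduces to
 * "index & (size - 1)", avoiding a division on the hot
 * (un)map path.
 */
struct example_fq {
	unsigned int head, tail;
	unsigned int mod_mask;	/* queue size - 1, size a power of 2 */
};

static unsigned int example_fq_next(struct example_fq *fq, unsigned int idx)
{
	return (idx + 1) & fq->mod_mask;	/* wraps without '%' */
}

static bool example_fq_full(struct example_fq *fq)
{
	return example_fq_next(fq, fq->tail) == fq->head;
}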

I tested this code on s390x with LPAR, z/VM and KVM hypervisors, as
well as on an AMD Ryzen x86 system with a native IOMMU and in a guest
with a modified virtio-iommu[4] that sets .shadow_on_flush = true.

This code is also available in the b4/dma_iommu topic branch of my
git.kernel.org repository[3] with tags matching the version sent.

NOTE: Due to the large drop in performance I think we should not merge
the DMA API conversion (patch 4) until we have a better-suited IOVA
flushing scheme that achieves improvements similar to the changes
proposed here.

Best regards,
Niklas

[0] https://lore.kernel.org/linux-iommu/20221109142903.4080275-1-schnelle@linux.ibm.com/
[1] https://lore.kernel.org/linux-iommu/3e402947-61f9-b7e8-1414-fde006257b6f@arm.com/
[2] https://lore.kernel.org/linux-iommu/a8e778da-7b41-a6ba-83c3-c366a426c3da@arm.com/
[3] https://git.kernel.org/pub/scm/linux/kernel/git/niks/linux.git/
[4] https://lore.kernel.org/lkml/20230726111433.1105665-1-schnelle@linux.ibm.com/

---
Changes in v12:
- Rebased on v6.5-rc7
- Changed queue type flag to an enum (see the sketch after this list)
- Incorporated feedback from Robin Murphy
  - Set options centrally and only once in iommu_dma_init_domain() with
    new helper iommu_dma_init_options()
  - Do not reset options when FQ init fails
  - Fixed rebase mishap that partially rolled back patch 2
  - Simplified patch 4 by simply not claiming the deferred flush
    capability for ISM
  - Inlined and removed fq_flush_percpu()
  - Changed vzalloc() to vmalloc() for queue
- Added Acked-by's from Robin Murphy
- Link to v11: https://lore.kernel.org/r/20230717-dma_iommu-v11-0-a7a0b83c355c@linux.ibm.com
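
  For reference, the queue-type enum and options mentioned above could
  look roughly like this (an editorial sketch consistent with the
  description; the code in the series may differ in detail):

  enum iommu_dma_queue_type {
          IOMMU_DMA_OPTS_PER_CPU_QUEUE,
          IOMMU_DMA_OPTS_SINGLE_QUEUE,
  };

  struct iommu_dma_options {
          enum iommu_dma_queue_type qt;
          size_t fq_size;                 /* number of queue entries */
          unsigned int fq_timeout;        /* flush timeout */
  };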

Changes in v11:
- Rebased on v6.5-rc2
- Added patch to force IOMMU_DOMAIN_DMA on s390 specific ISM devices
- Dropped the patch to properly set the DMA mask on ISM devices, which went upstream separately.
- s390 IOMMU driver now uses IOMMU_CAP_DEFERRED_FLUSH to enable DMA-FQ,
  leaving no uses of IOMMU_DOMAIN_DMA_FQ in the driver.
- Link to v10: https://lore.kernel.org/r/20230310-dma_iommu-v10-0-f1fbd8310854@linux.ibm.com

Changes in v10:
- Rebased on v6.4-rc3
- Removed the .tune_dma_iommu() op in favor of a .shadow_on_flush flag
  in struct dev_iommu, which then lets the dma-iommu code choose a single
  queue and larger timeouts and IOVA counts. This leaves the dma-iommu
  with full responsibility for the settings.
- The above change affects patches 5 and 6 and led to a new subject for
  patch 6 since the flush queue size and timeout are no longer driver
  controlled
- Link to v9: https://lore.kernel.org/r/20230310-dma_iommu-v9-0-65bb8edd2beb@linux.ibm.com

Changes in v9:
- Rebased on v6.4-rc2
- Re-ordered iommu_group_store_type() to allow passing the device to
  iommu_dma_init_fq()
- Link to v8: https://lore.kernel.org/r/20230310-dma_iommu-v8-0-2347dfbed7af@linux.ibm.com

---
Niklas Schnelle (6):
      iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
      s390/pci: prepare is_passed_through() for dma-iommu
      s390/pci: Use dma-iommu layer
      iommu/s390: Disable deferred flush for ISM devices
      iommu/dma: Allow a single FQ in addition to per-CPU FQs
      iommu/dma: Use a large flush queue and timeout for shadow_on_flush

 Documentation/admin-guide/kernel-parameters.txt |   9 +-
 arch/s390/include/asm/pci.h                     |   7 -
 arch/s390/include/asm/pci_clp.h                 |   3 +
 arch/s390/include/asm/pci_dma.h                 | 119 +---
 arch/s390/pci/Makefile                          |   2 +-
 arch/s390/pci/pci.c                             |  22 +-
 arch/s390/pci/pci_bus.c                         |   5 -
 arch/s390/pci/pci_debug.c                       |  12 +-
 arch/s390/pci/pci_dma.c                         | 735 ------------------------
 arch/s390/pci/pci_event.c                       |  17 +-
 arch/s390/pci/pci_sysfs.c                       |  19 +-
 drivers/iommu/Kconfig                           |   4 +-
 drivers/iommu/amd/iommu.c                       |   5 +-
 drivers/iommu/apple-dart.c                      |   5 +-
 drivers/iommu/dma-iommu.c                       | 200 +++++--
 drivers/iommu/intel/iommu.c                     |   5 +-
 drivers/iommu/iommu.c                           |  20 +-
 drivers/iommu/msm_iommu.c                       |   5 +-
 drivers/iommu/mtk_iommu.c                       |   5 +-
 drivers/iommu/s390-iommu.c                      | 425 ++++++++++++--
 drivers/iommu/sprd-iommu.c                      |   5 +-
 drivers/iommu/sun50i-iommu.c                    |   6 +-
 drivers/iommu/tegra-gart.c                      |   5 +-
 include/linux/iommu.h                           |   6 +-
 24 files changed, 643 insertions(+), 1003 deletions(-)
---
base-commit: 706a741595047797872e669b3101429ab8d378ef
change-id: 20230310-dma_iommu-5e048c538647

Best regards,

Comments

Robin Murphy Sept. 5, 2023, 4:09 p.m. UTC | #1
On 2023-08-25 19:26, Matthew Rosato wrote:
> On 8/25/23 6:11 AM, Niklas Schnelle wrote:
>> Hi All,
>>
>> This patch series converts s390's PCI support from its platform specific DMA
>> API implementation in arch/s390/pci/pci_dma.c to the common DMA IOMMU layer.
>> The conversion itself is done in patches 3-4, with patch 1 providing the
>> final IOMMU driver improvement needed to handle s390's special IOTLB
>> flush out-of-resource indication in virtualized environments. The
>> conversion only touches the s390 IOMMU driver and s390 arch code, moving
>> the remaining functions over from the s390 DMA API implementation. No
>> changes to common code are necessary.
>>
> 
> I also picked up this latest version and ran various tests with ISM,
> mlx5 and some NVMe drives. FWIW, I have been including versions of
> this series in my s390 dev environments for a number of months now and
> have also been building my s390 pci iommufd nested translation series
> on top of this, so it's seen quite a bit of testing from me at least.
> 
> So as far as I'm concerned anyway, this series is ready for -next
> (after the merge window).

Agreed; I'll trust your reviews for the s390-specific parts, so indeed 
it looks like this should have all it needs now and is ready for a nice 
long soak in -next once Joerg opens the tree for 6.7 material.

Cheers,
Robin.
Joerg Roedel Sept. 25, 2023, 9:56 a.m. UTC | #2
On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> Niklas Schnelle (6):
>       iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
>       s390/pci: prepare is_passed_through() for dma-iommu
>       s390/pci: Use dma-iommu layer
>       iommu/s390: Disable deferred flush for ISM devices
>       iommu/dma: Allow a single FQ in addition to per-CPU FQs
>       iommu/dma: Use a large flush queue and timeout for shadow_on_flush

Applied, thanks.
Joerg Roedel Sept. 26, 2023, 3:04 p.m. UTC | #3
Hi Niklas,

On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> Niklas Schnelle (6):
>       iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
>       s390/pci: prepare is_passed_through() for dma-iommu
>       s390/pci: Use dma-iommu layer
>       iommu/s390: Disable deferred flush for ISM devices
>       iommu/dma: Allow a single FQ in addition to per-CPU FQs
>       iommu/dma: Use a large flush queue and timeout for shadow_on_flush

Turned out this series has non-trivial conflicts with Jasons
default-domain work so I had to remove it from the IOMMU tree for now.
Can you please rebase it to the latest iommu/core branch and re-send? I
will take it into the tree again then.

Thanks,

	Joerg
Jason Gunthorpe Sept. 26, 2023, 4:08 p.m. UTC | #4
On Tue, Sep 26, 2023 at 05:04:28PM +0200, Joerg Roedel wrote:
> Hi Niklas,
> 
> On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> > Niklas Schnelle (6):
> >       iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
> >       s390/pci: prepare is_passed_through() for dma-iommu
> >       s390/pci: Use dma-iommu layer
> >       iommu/s390: Disable deferred flush for ISM devices
> >       iommu/dma: Allow a single FQ in addition to per-CPU FQs
> >       iommu/dma: Use a large flush queue and timeout for shadow_on_flush
> 
> Turned out this series has non-trivial conflicts with Jasons
> default-domain work so I had to remove it from the IOMMU tree for now.
> Can you please rebase it to the latest iommu/core branch and re-send? I
> will take it into the tree again then.

Niklas, I think you just 'take yours' to resolve this. All the
IOMMU_DOMAIN_PLATFORM related and .default_domain = parts should be
removed. Let me know if you need anything

Thanks,
Jason
Niklas Schnelle Sept. 27, 2023, 8:55 a.m. UTC | #5
On Tue, 2023-09-26 at 13:08 -0300, Jason Gunthorpe wrote:
> On Tue, Sep 26, 2023 at 05:04:28PM +0200, Joerg Roedel wrote:
> > Hi Niklas,
> > 
> > On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> > > Niklas Schnelle (6):
> > >       iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
> > >       s390/pci: prepare is_passed_through() for dma-iommu
> > >       s390/pci: Use dma-iommu layer
> > >       iommu/s390: Disable deferred flush for ISM devices
> > >       iommu/dma: Allow a single FQ in addition to per-CPU FQs
> > >       iommu/dma: Use a large flush queue and timeout for shadow_on_flush
> > 
> > Turned out this series has non-trivial conflicts with Jasons
> > default-domain work so I had to remove it from the IOMMU tree for now.
> > Can you please rebase it to the latest iommu/core branch and re-send? I
> > will take it into the tree again then.
> 
> Niklas, I think you just 'take yours' to resolve this. All the
> IOMMU_DOMAIN_PLATFORM related and .default_domain = parts should be
> removed. Let me know if you need anything
> 
> Thanks,
> Jason

Hi Joerg, Hi Jason,

I've run into an unfortunate problem, not with the rebase itself but
with the iommu/core branch. 

Jason is right, I basically need to just remove the platform ops and
.default_domain ops. This seems to work fine for an NVMe both in the
host and also when using the IOMMU with vfio-pci + KVM. I've already
pushed the result of that to my git.kernel.org:
https://git.kernel.org/pub/scm/linux/kernel/git/niks/linux.git/log/?h=b4/dma_iommu

The problem is that something seems to be broken in the iommu/core
branch. Regardless of whether I have my DMA API conversion on top or
use the base iommu/core branch, I cannot use ConnectX-4 VFs.

# lspci
111a:00:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]
# dmesg | grep mlx
[    3.189749] mlx5_core 111a:00:00.0: mlx5_mdev_init:1802:(pid 464): Failed initializing cmdif SW structs, aborting
[    3.189783] mlx5_core: probe of 111a:00:00.0 failed with error -12

This same card works on v6.6-rc3 both with and without my DMA API
conversion patch series applied. Looking at mlx5_mdev_init() ->
mlx5_cmd_init(), the -ENOMEM seems to come from the following
dma_pool_create() call:

cmd->pool = dma_pool_create("mlx5_cmd", mlx5_core_dma_dev(dev), size, align, 0);

I'll try to debug this further but wanted to let you know already in
case you have some ideas. Either way, as it doesn't seem to be related
to the DMA API conversion, I can send that out again regardless if you
want; I really don't want to miss another cycle.

Thanks,
Niklas
Robin Murphy Sept. 27, 2023, 9:26 a.m. UTC | #6
On 2023-09-27 09:55, Niklas Schnelle wrote:
> On Tue, 2023-09-26 at 13:08 -0300, Jason Gunthorpe wrote:
>> On Tue, Sep 26, 2023 at 05:04:28PM +0200, Joerg Roedel wrote:
>>> Hi Niklas,
>>>
>>> On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
>>>> Niklas Schnelle (6):
>>>>        iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
>>>>        s390/pci: prepare is_passed_through() for dma-iommu
>>>>        s390/pci: Use dma-iommu layer
>>>>        iommu/s390: Disable deferred flush for ISM devices
>>>>        iommu/dma: Allow a single FQ in addition to per-CPU FQs
>>>>        iommu/dma: Use a large flush queue and timeout for shadow_on_flush
>>>
>>> Turned out this series has non-trivial conflicts with Jasons
>>> default-domain work so I had to remove it from the IOMMU tree for now.
>>> Can you please rebase it to the latest iommu/core branch and re-send? I
>>> will take it into the tree again then.
>>
>> Niklas, I think you just 'take yours' to resolve this. All the
>> IOMMU_DOMAIN_PLATFORM related and .default_domain = parts should be
>> removed. Let me know if you need anything
>>
>> Thanks,
>> Jason
> 
> Hi Joerg, Hi Jason,
> 
> I've run into an unfortunate problem, not with the rebase itself but
> with the iommu/core branch.
> 
> Jason is right, I basically need to just remove the platform ops and
> .default_domain ops. This seems to work fine for an NVMe both in the
> host and also when using the IOMMU with vfio-pci + KVM. I've already
> pushed the result of that to my git.kernel.org:
> https://git.kernel.org/pub/scm/linux/kernel/git/niks/linux.git/log/?h=b4/dma_iommu
> 
> The problem is that something seems to be broken in the iommu/core
> branch. Regardless of whether I have my DMA API conversion on top or
> use the base iommu/core branch, I cannot use ConnectX-4 VFs.
> 
> # lspci
> 111a:00:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]
> # dmesg | grep mlx
> [    3.189749] mlx5_core 111a:00:00.0: mlx5_mdev_init:1802:(pid 464): Failed initializing cmdif SW structs, aborting
> [    3.189783] mlx5_core: probe of 111a:00:00.0 failed with error -12
> 
> This same card works on v6.6-rc3 both with and without my DMA API
> conversion patch series applied. Looking at mlx5_mdev_init() ->
> mlx5_cmd_init(), the -ENOMEM seems to come from the following
> dma_pool_create() call:
> 
> cmd->pool = dma_pool_create("mlx5_cmd", mlx5_core_dma_dev(dev), size, align, 0);
> 
> I'll try to debug this further but wanted to let you know already in
> case you have some ideas.

I could imagine that potentially something in the initial default domain 
conversion somehow interferes with the DMA ops in a way that ends up 
causing alloc_cmd_page() to fail (maybe calling zpci_dma_init_device() 
at the wrong point, or too many times?). FWIW I see nothing that would 
obviously affect dma_pool_create() itself.

Robin.

> Either way, as it doesn't seem to be related
> to the DMA API conversion, I can send that out again regardless if you
> want; I really don't want to miss another cycle.
> 
> Thanks,
> Niklas
Jason Gunthorpe Sept. 27, 2023, 3:40 p.m. UTC | #7
On Wed, Sep 27, 2023 at 05:24:20PM +0200, Niklas Schnelle wrote:

> Ok, another update. On trying it out again, this problem also occurs
> when applying this v12 on top of v6.6-rc3. Also I guess, unlike my
> prior thinking, it probably doesn't occur with iommu.forcedac=1 since
> that still allows IOVAs below 4 GiB and we might be the only ones who
> don't support those. From my point of view this sounds like an
> mlx5_core issue: they really should call dma_set_mask_and_coherent()
> before their first call to dma_alloc_coherent(), not after. So I guess
> I'll send a v13 of this series rebased on iommu/core and with an
> additional mlx5 patch and then let's hope we can get that merged in a
> way that doesn't leave us with broken ConnectX VFs for too long.

Yes, OK. It definitely sounds wrong that mlx5 is doing DMA allocations
before setting its DMA mask with dma_set_mask_and_coherent(). Please
link to this thread and we can get Leon or Saeed to ack it for Joerg.

(though wondering why s390 is the only case that ever hit this?)

Jason
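
For illustration, the initialization-ordering bug discussed above boils
down to doing coherent DMA allocations before widening the DMA mask. A
minimal sketch of the correct order (editorial; apart from the DMA API
calls, all names are hypothetical):

static int example_probe(struct pci_dev *pdev)
{
	struct dma_pool *pool;
	int err;

	/* Widen the mask first: under the default 32-bit mask,
	 * coherent allocations need IOVAs below 4 GiB, which s390's
	 * IOVA range may not provide, hence the -ENOMEM above.
	 */
	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
	if (err)
		return err;

	/* Only now is it safe to create pools or allocate coherent
	 * memory.
	 */
	pool = dma_pool_create("example_cmd", &pdev->dev, 4096, 4096, 0);
	if (!pool)
		return -ENOMEM;

	return 0;
}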