Message ID | 20230825-dma_iommu-v12-0-4134455994a7@linux.ibm.com
Series     | iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
On 2023-08-25 19:26, Matthew Rosato wrote:
> On 8/25/23 6:11 AM, Niklas Schnelle wrote:
>> Hi All,
>>
>> This patch series converts s390's PCI support from its platform specific DMA
>> API implementation in arch/s390/pci/pci_dma.c to the common DMA IOMMU layer.
>> The conversion itself is done in patches 3-4 with patch 2 providing the final
>> necessary IOMMU driver improvement to handle s390's special IOTLB flush
>> out-of-resource indication in virtualized environments. The conversion
>> itself only touches the s390 IOMMU driver and s390 arch code moving over
>> remaining functions from the s390 DMA API implementation. No changes to
>> common code are necessary.
>
> I also picked up this latest version and ran various tests with ISM, mlx5
> and some NVMe drives. FWIW, I have been including versions of this series
> in my s390 dev environments for a number of months now and have also been
> building my s390 pci iommufd nested translation series on top of this, so
> it's seen quite a bit of testing from me at least.
>
> So as far as I'm concerned anyway, this series is ready for -next (after
> the merge window).

Agreed; I'll trust your reviews for the s390-specific parts, so indeed it
looks like this should have all it needs now and is ready for a nice long
soak in -next once Joerg opens the tree for 6.7 material.

Cheers,
Robin.
On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> Niklas Schnelle (6):
>   iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
>   s390/pci: prepare is_passed_through() for dma-iommu
>   s390/pci: Use dma-iommu layer
>   iommu/s390: Disable deferred flush for ISM devices
>   iommu/dma: Allow a single FQ in addition to per-CPU FQs
>   iommu/dma: Use a large flush queue and timeout for shadow_on_flush

Applied, thanks.
Hi Niklas,

On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> Niklas Schnelle (6):
>   iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
>   s390/pci: prepare is_passed_through() for dma-iommu
>   s390/pci: Use dma-iommu layer
>   iommu/s390: Disable deferred flush for ISM devices
>   iommu/dma: Allow a single FQ in addition to per-CPU FQs
>   iommu/dma: Use a large flush queue and timeout for shadow_on_flush

Turned out this series has non-trivial conflicts with Jason's
default-domain work, so I had to remove it from the IOMMU tree for now.
Can you please rebase it to the latest iommu/core branch and re-send? I
will take it into the tree again then.

Thanks,
Joerg
On Tue, Sep 26, 2023 at 05:04:28PM +0200, Joerg Roedel wrote:
> Hi Niklas,
>
> On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> > Niklas Schnelle (6):
> >   iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
> >   s390/pci: prepare is_passed_through() for dma-iommu
> >   s390/pci: Use dma-iommu layer
> >   iommu/s390: Disable deferred flush for ISM devices
> >   iommu/dma: Allow a single FQ in addition to per-CPU FQs
> >   iommu/dma: Use a large flush queue and timeout for shadow_on_flush
>
> Turned out this series has non-trivial conflicts with Jasons
> default-domain work so I had to remove it from the IOMMU tree for now.
> Can you please rebase it to the latest iommu/core branch and re-send? I
> will take it into the tree again then.

Niklas, I think you just 'take yours' to resolve this. All the
IOMMU_DOMAIN_PLATFORM related and .default_domain = parts should be
removed. Let me know if you need anything.

Thanks,
Jason
On Tue, 2023-09-26 at 13:08 -0300, Jason Gunthorpe wrote:
> On Tue, Sep 26, 2023 at 05:04:28PM +0200, Joerg Roedel wrote:
> > Hi Niklas,
> >
> > On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> > > Niklas Schnelle (6):
> > >   iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
> > >   s390/pci: prepare is_passed_through() for dma-iommu
> > >   s390/pci: Use dma-iommu layer
> > >   iommu/s390: Disable deferred flush for ISM devices
> > >   iommu/dma: Allow a single FQ in addition to per-CPU FQs
> > >   iommu/dma: Use a large flush queue and timeout for shadow_on_flush
> >
> > Turned out this series has non-trivial conflicts with Jasons
> > default-domain work so I had to remove it from the IOMMU tree for now.
> > Can you please rebase it to the latest iommu/core branch and re-send? I
> > will take it into the tree again then.
>
> Niklas, I think you just 'take yours' to resolve this. All the
> IOMMU_DOMAIN_PLATFORM related and .default_domain = parts should be
> removed. Let me know if you need anything
>
> Thanks,
> Jason

Hi Joerg, Hi Jason,

I've run into an unfortunate problem, not with the rebase itself but
with the iommu/core branch.

Jason is right, I basically need to just remove the platform ops and
.default_domain ops. This seems to work fine for an NVMe both in the
host and also when using the IOMMU with vfio-pci + KVM. I've already
pushed the result of that to my git.kernel.org:
https://git.kernel.org/pub/scm/linux/kernel/git/niks/linux.git/log/?h=b4/dma_iommu

The problem is that something seems to be broken in the iommu/core
branch. Regardless of whether I have my DMA API conversion on top or
with the base iommu/core branch I can not use ConnectX-4 VFs.
# lspci
111a:00:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]
# dmesg | grep mlx
[    3.189749] mlx5_core 111a:00:00.0: mlx5_mdev_init:1802:(pid 464): Failed initializing cmdif SW structs, aborting
[    3.189783] mlx5_core: probe of 111a:00:00.0 failed with error -12

This same card works on v6.6-rc3 both with and without my DMA API
conversion patch series applied. Looking at mlx5_mdev_init() ->
mlx5_cmd_init(), the -ENOMEM seems to come from the following
dma_pool_create():

cmd->pool = dma_pool_create("mlx5_cmd", mlx5_core_dma_dev(dev), size, align, 0);

I'll try to debug this further but wanted to let you know already in
case you have some ideas. Either way, as it doesn't seem to be related
to the DMA API conversion, I can send that out again regardless if you
want; I really don't want to miss another cycle.

Thanks,
Niklas
On 2023-09-27 09:55, Niklas Schnelle wrote:
> On Tue, 2023-09-26 at 13:08 -0300, Jason Gunthorpe wrote:
>> On Tue, Sep 26, 2023 at 05:04:28PM +0200, Joerg Roedel wrote:
>>> Hi Niklas,
>>>
>>> On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
>>>> Niklas Schnelle (6):
>>>>   iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
>>>>   s390/pci: prepare is_passed_through() for dma-iommu
>>>>   s390/pci: Use dma-iommu layer
>>>>   iommu/s390: Disable deferred flush for ISM devices
>>>>   iommu/dma: Allow a single FQ in addition to per-CPU FQs
>>>>   iommu/dma: Use a large flush queue and timeout for shadow_on_flush
>>>
>>> Turned out this series has non-trivial conflicts with Jasons
>>> default-domain work so I had to remove it from the IOMMU tree for now.
>>> Can you please rebase it to the latest iommu/core branch and re-send? I
>>> will take it into the tree again then.
>>
>> Niklas, I think you just 'take yours' to resolve this. All the
>> IOMMU_DOMAIN_PLATFORM related and .default_domain = parts should be
>> removed. Let me know if you need anything
>
> Hi Joerg, Hi Jason,
>
> I've run into an unfortunate problem, not with the rebase itself but
> with the iommu/core branch.
>
> Jason is right, I basically need to just remove the platform ops and
> .default_domain ops. This seems to work fine for an NVMe both in the
> host and also when using the IOMMU with vfio-pci + KVM. I've already
> pushed the result of that to my git.kernel.org:
> https://git.kernel.org/pub/scm/linux/kernel/git/niks/linux.git/log/?h=b4/dma_iommu
>
> The problem is that something seems to be broken in the iommu/core
> branch. Regardless of whether I have my DMA API conversion on top or
> with the base iommu/core branch I can not use ConnectX-4 VFs.
>
> # lspci
> 111a:00:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]
> # dmesg | grep mlx
> [    3.189749] mlx5_core 111a:00:00.0: mlx5_mdev_init:1802:(pid 464): Failed initializing cmdif SW structs, aborting
> [    3.189783] mlx5_core: probe of 111a:00:00.0 failed with error -12
>
> This same card works on v6.6-rc3 both with and without my DMA API
> conversion patch series applied. Looking at mlx5_mdev_init() ->
> mlx5_cmd_init(). The -ENOMEM seems to come from the following
> dma_pool_create():
>
> cmd->pool = dma_pool_create("mlx5_cmd", mlx5_core_dma_dev(dev), size, align, 0);
>
> I'll try to debug this further but wanted to let you know already in
> case you have some ideas.

I could imagine that potentially something in the initial default domain
conversion somehow interferes with the DMA ops in a way that ends up
causing alloc_cmd_page() to fail (maybe calling zpci_dma_init_device()
at the wrong point, or too many times?). FWIW I see nothing that would
obviously affect dma_pool_create() itself.

Robin.

> Either way as it doesn't seem to be related
> to the DMA API conversion I can sent that out again regardless if you
> want, really don't want to miss another cycle.
>
> Thanks,
> Niklas
On Wed, Sep 27, 2023 at 05:24:20PM +0200, Niklas Schnelle wrote:
> Ok, another update. On trying it out again this problem actually also
> occurs when applying this v12 on top of v6.6-rc3 too. Also I guess
> unlike my prior thinking it probably doesn't occur with
> iommu.forcedac=1 since that still allows IOVAs below 4 GiB and we might
> be the only ones who don't support those. From my point of view this
> sounds like a mlx5_core issue; they really should call
> dma_set_mask_and_coherent() before their first call to
> dma_alloc_coherent(), not after. So I guess I'll send a v13 of this
> series rebased on iommu/core and with an additional mlx5 patch and then
> let's hope we can get that merged in a way that doesn't leave us with
> broken ConnectX VFs for too long.

Yes, OK. It definitely sounds wrong that mlx5 is doing DMA allocations
before setting its mask with dma_set_mask_and_coherent(). Please link
to this thread and we can get Leon or Saeed to ack it for Joerg.
(Though wondering why s390 is the only case that ever hit this?)

Jason