[v3,00/10] Add Intel VT-d nested translation

Message ID 20230511145110.27707-1-yi.l.liu@intel.com

Message

Yi Liu May 11, 2023, 2:51 p.m. UTC
This adds Intel VT-d nested translation based on the IOMMUFD nesting
infrastructure. As described in the iommufd nesting infrastructure
series [1], the iommu core supports new ops to report iommu hardware
information, to allocate domains with user data, and to sync the stage-1
IOTLB. The data required in these three paths is vendor-specific, so

1) IOMMU_HW_INFO_TYPE_INTEL_VTD and struct iommu_device_info_vtd are
   defined to report iommu hardware information for Intel VT-d.
2) IOMMU_HWPT_DATA_VTD_S1 is defined for the Intel VT-d stage-1 page
   table; it will be used in the stage-1 domain allocation and IOTLB
   syncing paths. struct iommu_hwpt_intel_vtd is defined to pass user_data
   for Intel VT-d stage-1 domain allocation, and
   struct iommu_hwpt_invalidate_intel_vtd is defined to pass the data for
   Intel VT-d stage-1 IOTLB invalidation. (A rough sketch of these uapi
   additions is shown below.)
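
For illustration, a minimal sketch of what these uapi additions might
look like follows. The field names and layouts here are assumptions for
illustration only; the actual definitions live in the series itself.

  #include <linux/types.h>

  /* Sketch only: illustrative fields, not the series' actual uapi. */
  struct iommu_device_info_vtd {
          __u32 flags;    /* e.g. IOMMU_HW_INFO_VTD_ERRATA_772415_SPR17 */
          __u32 __reserved;
          __aligned_u64 cap_reg;  /* raw VT-d capability register */
          __aligned_u64 ecap_reg; /* raw VT-d extended capability register */
  };

  /* user_data for IOMMU_HWPT_DATA_VTD_S1 domain allocation (sketch). */
  struct iommu_hwpt_intel_vtd {
          __aligned_u64 flags;
          __aligned_u64 pgtbl_addr; /* GPA of the stage-1 page table */
          __u32 pat;                /* page attribute table */
          __u32 addr_width;         /* address width of the stage-1 table */
  };

  /* Stage-1 IOTLB invalidation request (sketch). */
  struct iommu_hwpt_invalidate_intel_vtd {
          __aligned_u64 addr;       /* start of the range to invalidate */
          __aligned_u64 npages;     /* number of 4K pages to invalidate */
          __u32 flags;              /* e.g. a leaf-only hint */
          __u32 __reserved;
  };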

With the above IOMMUFD extensions, the Intel IOMMU driver implements the
three paths to support nested translation.

The first Intel platform supporting nested translation is Sapphire
Rapids which, unfortunately, has a hardware erratum [2] requiring special
treatment. The erratum is hit when a stage-1 page table page (at any
level) is located in a stage-2 read-only region. In that case the IOMMU
hardware may ignore the stage-2 RO permission and still set the A/D bits
in stage-1 page table entries during the page table walk.

A flag, IOMMU_HW_INFO_VTD_ERRATA_772415_SPR17, is introduced to report
this erratum to userspace. Given that restriction, the user should either
disable nested translation to keep RO stage-2 mappings, or ensure there
are no RO stage-2 mappings so that nested translation can be enabled.

Patch 10 of this series arms the intel-iommu driver with the checks
necessary to prevent such a mix.
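
As a rough illustration of the shape of those checks (not the actual
patch; the struct fields and function names here are assumed for the
sketch, and real code would also need locking):

  /*
   * Sketch: nesting and stage-2 read-only mappings are made mutually
   * exclusive. A stage-2 domain that already holds an RO mapping
   * refuses nested attachment, and a domain used as a nesting parent
   * refuses new RO mappings. All names below are illustrative.
   */
  static int check_nested_attach(struct dmar_domain *s2_domain)
  {
          if (s2_domain->read_only_mapped)
                  return -EINVAL;  /* RO mapping exists, deny nesting */
          s2_domain->is_nested_parent = true;
          return 0;
  }

  static int check_map_prot(struct dmar_domain *domain, int iommu_prot)
  {
          if (!(iommu_prot & IOMMU_WRITE)) {
                  if (domain->is_nested_parent)
                          return -EINVAL;  /* no RO maps while nested */
                  domain->read_only_mapped = true;
          }
          return 0;
  }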

QEMU currently does add RO mappings, though. The vfio agent in QEMU
simply maps all valid regions in the GPA address space, which certainly
includes RO regions, e.g. the vBIOS.

In reality we don't know of any usage relying on DMA reads from the
BIOS region. Hence finding a way to let the user opt out of RO mappings
in QEMU might be an acceptable tradeoff. But how to achieve that cleanly
needs more discussion in the QEMU community. For now we just hacked QEMU
for testing.
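
Purely to illustrate the shape such a hack might take (this is not the
actual test hack; the erratum_772415 field is an assumption, while
memory_region_is_rom() and vfio_listener_region_add() are existing QEMU
names):

  /* Sketch: skip read-only sections in the vfio memory listener when
   * the kernel reported IOMMU_HW_INFO_VTD_ERRATA_772415_SPR17. */
  static void vfio_listener_region_add(MemoryListener *listener,
                                       MemoryRegionSection *section)
  {
          VFIOContainer *container =
                  container_of(listener, VFIOContainer, listener);

          /* erratum_772415 would be filled from the hw_info query */
          if (container->erratum_772415 &&
              memory_region_is_rom(section->mr))
                  return;  /* favor nesting: leave RO regions unmapped */

          /* ... normal DMA map path follows ... */
  }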

Complete code can be found in [3]; the QEMU code can be found in [4].

base-commit: ce9b593b1f74ccd090edc5d2ad397da84baa9946

[1] https://lore.kernel.org/linux-iommu/20230511143844.22693-1-yi.l.liu@intel.com/
[2] https://www.intel.com/content/www/us/en/content-details/772415/content-details.html
[3] https://github.com/yiliu1765/iommufd/tree/iommufd_nesting
[4] https://github.com/yiliu1765/qemu/tree/wip/iommufd_rfcv4.mig.reset.v4_var3%2Bnesting

Change log:
v3:
 - Further split the patches into the order of adding helpers for the
   nested domain, iotlb flush, nested domain attachment, and the nested
   domain allocation callback, then reporting the hw_info to userspace.
 - Add batch support for cache invalidation from userspace.
 - Disallow nested translation if RO mappings exist in the stage-2
   domain, due to the erratum on read-only mappings on the Sapphire
   Rapids platform.

v2: https://lore.kernel.org/linux-iommu/20230309082207.612346-1-yi.l.liu@intel.com/
 - The iommufd infrastructure is split to be separate series.

v1: https://lore.kernel.org/linux-iommu/20230209043153.14964-1-yi.l.liu@intel.com/

Regards,
	Yi Liu

Lu Baolu (5):
  iommu/vt-d: Extend dmar_domain to support nested domain
  iommu/vt-d: Add helper for nested domain allocation
  iommu/vt-d: Add helper to setup pasid nested translation
  iommu/vt-d: Add nested domain allocation
  iommu/vt-d: Disallow nesting on domains with read-only mappings

Yi Liu (5):
  iommufd: Add data structure for Intel VT-d stage-1 domain allocation
  iommu/vt-d: Make domain attach helpers to be extern
  iommu/vt-d: Set the nested domain to a device
  iommu/vt-d: Add iotlb flush for nested domain
  iommu/vt-d: Implement hw_info for iommu capability query

 drivers/iommu/intel/Makefile |   2 +-
 drivers/iommu/intel/iommu.c  |  78 ++++++++++++---
 drivers/iommu/intel/iommu.h  |  55 +++++++++--
 drivers/iommu/intel/nested.c | 181 +++++++++++++++++++++++++++++++++++
 drivers/iommu/intel/pasid.c  | 151 +++++++++++++++++++++++++++++
 drivers/iommu/intel/pasid.h  |   2 +
 drivers/iommu/iommufd/main.c |   6 ++
 include/linux/iommu.h        |   1 +
 include/uapi/linux/iommufd.h | 149 ++++++++++++++++++++++++++++
 9 files changed, 603 insertions(+), 22 deletions(-)
 create mode 100644 drivers/iommu/intel/nested.c

Comments

Tian, Kevin May 24, 2023, 8:59 a.m. UTC | #1
> From: Liu, Yi L <yi.l.liu@intel.com>
> Sent: Thursday, May 11, 2023 10:51 PM
> 
> The first Intel platform supporting nested translation is Sapphire
> Rapids which, unfortunately, has a hardware erratum [2] requiring special
> treatment. The erratum is hit when a stage-1 page table page (at any
> level) is located in a stage-2 read-only region. In that case the IOMMU
> hardware may ignore the stage-2 RO permission and still set the A/D bits
> in stage-1 page table entries during the page table walk.
>
> A flag, IOMMU_HW_INFO_VTD_ERRATA_772415_SPR17, is introduced to report
> this erratum to userspace. Given that restriction, the user should either
> disable nested translation to keep RO stage-2 mappings, or ensure there
> are no RO stage-2 mappings so that nested translation can be enabled.
>
> Patch 10 of this series arms the intel-iommu driver with the checks
> necessary to prevent such a mix.
>
> QEMU currently does add RO mappings, though. The vfio agent in QEMU
> simply maps all valid regions in the GPA address space, which certainly
> includes RO regions, e.g. the vBIOS.
>
> In reality we don't know of any usage relying on DMA reads from the
> BIOS region. Hence finding a way to let the user opt out of RO mappings
> in QEMU might be an acceptable tradeoff. But how to achieve that cleanly
> needs more discussion in the QEMU community. For now we just hacked QEMU
> for testing.
> 

Hi, Alex,

Want to touch base on your thoughts about this erratum before we
actually discuss how to handle it in QEMU.

Overall it affects all Sapphire Rapids platforms. Fully disabling nested
translation in the kernel just for this rare vulnerability sounds like
overkill.

So we decided to enforce the exclusive check (RO in stage-2 vs. nesting)
in the kernel and expose the restriction to userspace, so the VMM can
choose which one to enable based on its own requirements.

At least this looks like a reasonable tradeoff to some proprietary VMMs
which never add RO mappings in stage-2 today.

But we do want QEMU, as the widely-used reference VMM, to support
nested translation on those platforms!

Do you see any major oversight before we pursue such a change in QEMU,
e.g. having a way for the user to opt out of adding RO mappings in
stage-2? 😊

Thanks
Kevin
Tian, Kevin May 26, 2023, 11:25 a.m. UTC | #2
> From: Alex Williamson <alex.williamson@redhat.com>
> Sent: Friday, May 26, 2023 2:07 AM
> 
> On Wed, 24 May 2023 08:59:43 +0000
> "Tian, Kevin" <kevin.tian@intel.com> wrote:
> 
> > > From: Liu, Yi L <yi.l.liu@intel.com>
> > > Sent: Thursday, May 11, 2023 10:51 PM
> > >
> > > The first Intel platform supporting nested translation is Sapphire
> > > Rapids which, unfortunately, has a hardware erratum [2] requiring special
> > > treatment. The erratum is hit when a stage-1 page table page (at any
> > > level) is located in a stage-2 read-only region. In that case the IOMMU
> > > hardware may ignore the stage-2 RO permission and still set the A/D bits
> > > in stage-1 page table entries during the page table walk.
> > >
> > > A flag, IOMMU_HW_INFO_VTD_ERRATA_772415_SPR17, is introduced to report
> > > this erratum to userspace. Given that restriction, the user should either
> > > disable nested translation to keep RO stage-2 mappings, or ensure there
> > > are no RO stage-2 mappings so that nested translation can be enabled.
> > >
> > > Patch 10 of this series arms the intel-iommu driver with the checks
> > > necessary to prevent such a mix.
> > >
> > > QEMU currently does add RO mappings, though. The vfio agent in QEMU
> > > simply maps all valid regions in the GPA address space, which certainly
> > > includes RO regions, e.g. the vBIOS.
> > >
> > > In reality we don't know of any usage relying on DMA reads from the
> > > BIOS region. Hence finding a way to let the user opt out of RO mappings
> > > in QEMU might be an acceptable tradeoff. But how to achieve that cleanly
> > > needs more discussion in the QEMU community. For now we just hacked QEMU
> > > for testing.
> > >
> >
> > Hi, Alex,
> >
> > Want to touch base on your thoughts about this erratum before we
> > actually discuss how to handle it in QEMU.
> >
> > Overall it affects all Sapphire Rapids platforms. Fully disabling nested
> > translation in the kernel just for this rare vulnerability sounds like
> > overkill.
> >
> > So we decided to enforce the exclusive check (RO in stage-2 vs. nesting)
> > in the kernel and expose the restriction to userspace, so the VMM can
> > choose which one to enable based on its own requirements.
> >
> > At least this looks like a reasonable tradeoff to some proprietary VMMs
> > which never add RO mappings in stage-2 today.
> >
> > But we do want QEMU, as the widely-used reference VMM, to support
> > nested translation on those platforms!
> >
> > Do you see any major oversight before we pursue such a change in QEMU,
> > e.g. having a way for the user to opt out of adding RO mappings in
> > stage-2? 😊
> 
> I don't feel like I have enough info to know what common scenarios are
> going to make use of 2-stage and nested configurations and how likely a
> user is to need such an opt-out.  If it's likely that a user is going
> to encounter this configuration, an opt-out is at best a workaround.
> It's a significant support issue if a user needs to generate a failure
> in QEMU, notice and decipher any log messages that failure may have
> generated, and take action to introduce specific changes in their VM
> configuration to support a usage restriction.

Thanks. This is a good point.

> 
> For QEMU I might lean more towards an effort to better filter the
> mappings we create to avoid these read-only ranges that likely don't
> require DMA mappings anyway.

We thought about having the Intel vIOMMU register a discard memory
manager to filter out such regions when the kernel reports this erratum.

Our original thought was that even with that we might still want to
require an explicit user opt-in, given this configuration doesn't match
bare metal. But with your explanation, doing so would probably cause
more trouble than what it tries to achieve.

> 
> How much does this affect arbitrary userspace vfio drivers?  For
> example are there scenarios where running in a VM with a vIOMMU
> introduces nested support that's unknown to the user which now prevents
> this usage?  An example might be running an L2 guest with a version of
> QEMU that does create read-only mappings.  If necessary, how would lack
> of read-only mapping support be conveyed to those nested use cases?

To enable nested translation, the guest is expected to use stage-1
while the host uses stage-2. So the L0 QEMU will expose a vIOMMU with
only stage-1 capability to L1.

In that case it's perfectly fine to have RO mappings in stage-1, no
matter whether L1 further creates an L2 guest inside.

Then only the L0 QEMU needs to care about this RO restriction in stage-2.

If the L0 QEMU instead exposes a legacy vIOMMU which supports only
stage-2, nesting cannot be enabled. It will fall back to the old
shadowing path, and then RO mappings from the guest don't matter either.

Exposing a vIOMMU which supports both stage-1 and stage-2 plus nesting
is another story. But I believe it will be a while before that becomes
useful, and it's reasonable to just have the L0 QEMU not support this
configuration before this erratum is fixed. 😊

Thanks,
Kevin
Jason Gunthorpe May 29, 2023, 6:43 p.m. UTC | #3
On Wed, May 24, 2023 at 08:59:43AM +0000, Tian, Kevin wrote:

> At least this looks like a reasonable tradeoff to some proprietary VMMs
> which never add RO mappings in stage-2 today.

What is the reason for the RO anyhow?

Would it be so bad if it was DMA mapped as RW due to the errata?

Jason
Jason Gunthorpe May 30, 2023, 4:42 p.m. UTC | #4
On Mon, May 29, 2023 at 06:16:44PM -0600, Alex Williamson wrote:
> On Mon, 29 May 2023 15:43:02 -0300
> Jason Gunthorpe <jgg@nvidia.com> wrote:
> 
> > On Wed, May 24, 2023 at 08:59:43AM +0000, Tian, Kevin wrote:
> > 
> > > At least this looks like a reasonable tradeoff to some proprietary VMMs
> > > which never add RO mappings in stage-2 today.
> > 
> > What is the reason for the RO anyhow?
> > 
> > Would it be so bad if it was DMA mapped as RW due to the errata?
> 
> What if it's the zero page?  Thanks,

GUP doesn't return the zero page if FOLL_WRITE is specified.
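
For context, a minimal sketch of why that holds (illustrative only; the
function below is not the iommufd code, but pin_user_pages_fast() and
the FOLL_* flags are real kernel APIs):

  #include <linux/mm.h>

  /* A writable long-term pin, as vfio/iommufd-style paths request.
   * With FOLL_WRITE, GUP must break CoW and install a real writable
   * page, so it can never hand back the shared zero page. */
  static int pin_writable(unsigned long user_va, int npages,
                          struct page **pages)
  {
          return pin_user_pages_fast(user_va, npages,
                                     FOLL_WRITE | FOLL_LONGTERM, pages);
  }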

Jason
Tian, Kevin June 14, 2023, 8:07 a.m. UTC | #5
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Tuesday, May 30, 2023 2:43 AM
> 
> On Wed, May 24, 2023 at 08:59:43AM +0000, Tian, Kevin wrote:
> 
> > At least this looks like a reasonable tradeoff to some proprietary VMMs
> > which never add RO mappings in stage-2 today.
> 
> What is the reason for the RO anyhow?

vfio simply follows the permissions in the CPU address space.

vBIOS regions are marked RO there, hence that permission is also carried
into the vfio mappings.

> 
> Would it be so bad if it was DMA mapped as RW due to the errata?
> 

Think of a scenario where the vBIOS memory is shared by multiple QEMU
instances: RW would then allow a malicious VM to modify the shared
content, potentially attacking other VMs.

Skipping the mapping is the safest option in this regard.
Jason Gunthorpe June 14, 2023, 11:52 a.m. UTC | #6
On Wed, Jun 14, 2023 at 08:07:30AM +0000, Tian, Kevin wrote:

> Think of a scenario where the vBIOS memory is shared by multiple QEMU
> instances: RW would then allow a malicious VM to modify the shared
> content, potentially attacking other VMs.

QEMU would have to map the vBIOS as MAP_PRIVATE with PROT_WRITE before
the iommu side could map it writable, so this is not a real worry.
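
For context, a small userspace sketch of that point (the file path and
function name are hypothetical; the mmap() semantics are standard POSIX):

  #include <fcntl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* MAP_PRIVATE is copy-on-write: writes land in private anonymous
   * pages, so the backing file shared with other QEMU instances is
   * never modified. */
  static void *map_vbios_private(const char *path, size_t len)
  {
          int fd = open(path, O_RDONLY);
          void *p;

          if (fd < 0)
                  return MAP_FAILED;
          p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
          close(fd);  /* the mapping stays valid after close */
          return p;
  }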

Jason
Tian, Kevin June 16, 2023, 2:29 a.m. UTC | #7
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, June 14, 2023 7:53 PM
> 
> On Wed, Jun 14, 2023 at 08:07:30AM +0000, Tian, Kevin wrote:
> 
> > Think of a scenario where the vBIOS memory is shared by multiple QEMU
> > instances: RW would then allow a malicious VM to modify the shared
> > content, potentially attacking other VMs.
> 
> QEMU would have to map the vBIOS as MAP_PRIVATE with PROT_WRITE before
> the iommu side could map it writable, so this is not a real worry.
> 

Makes sense.

But IMHO it's still safer to reduce the permission (RO->NP) than to
increase it (RO->RW) when faithfully emulating bare metal behavior is
impossible, especially when there is no real usage counting on it. 😊
is impossible, especially when there is no real usage counting on it. 😊