Message ID | 20210127061729.1596640-1-namit@vmware.com
---|---
State | New
Series | [v2] iommu/vt-d: do not use flush-queue when caching-mode is on
> On Jan 27, 2021, at 3:25 AM, Lu Baolu <baolu.lu@linux.intel.com> wrote:
>
> On 2021/1/27 14:17, Nadav Amit wrote:
>> From: Nadav Amit <namit@vmware.com>
>>
>> When an Intel IOMMU is virtualized, and a physical device is
>> passed-through to the VM, changes of the virtual IOMMU need to be
>> propagated to the physical IOMMU. The hypervisor therefore needs to
>> monitor PTE mappings in the IOMMU page-tables. Intel specifications
>> provide a "caching-mode" capability that a virtual IOMMU uses to report
>> that the IOMMU is virtualized and a TLB flush is needed after mapping to
>> allow the hypervisor to propagate virtual IOMMU mappings to the physical
>> IOMMU. To the best of my knowledge, no real physical IOMMU reports
>> "caching-mode" as turned on.
>>
>> Synchronizing the virtual and the physical IOMMU tables is expensive if
>> the hypervisor is unaware which PTEs have changed, as the hypervisor is
>> required to walk all the virtualized tables and look for changes.
>> Consequently, domain flushes are much more expensive than page-specific
>> flushes on virtualized IOMMUs with passthrough devices. The kernel
>> therefore exploited the "caching-mode" indication to avoid domain
>> flushing and use page-specific flushing in virtualized environments. See
>> commit 78d5f0f500e6 ("intel-iommu: Avoid global flushes with caching
>> mode.")
>>
>> This behavior changed after commit 13cf01744608 ("iommu/vt-d: Make use
>> of iova deferred flushing"). Now, when batched TLB flushing is used (the
>> default), full TLB domain flushes are performed frequently, requiring
>> the hypervisor to perform expensive synchronization between the virtual
>> TLB and the physical one.
>>
>> Getting batched TLB flushes to use page-specific invalidations again in
>> such circumstances is not easy, since the TLB invalidation scheme
>> assumes that "full" domain TLB flushes are performed for scalability.
>>
>> Disable batched TLB flushes when caching-mode is on, as the performance
>> benefit from using batched TLB invalidations is likely to be much
>> smaller than the overhead of the virtual-to-physical IOMMU page-table
>> synchronization.
>>
>> Fixes: 78d5f0f500e6 ("intel-iommu: Avoid global flushes with caching mode.")
>
> Isn't it
>
> Fixes: 13cf01744608 ("iommu/vt-d: Make use of iova deferred flushing")
>
> ?

Of course it is - bad copy-paste. I will send v3.

Thanks again,
Nadav
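For context, the "caching-mode" indication discussed above is a single bit in the VT-d capability register, which the driver reads through its `cap_caching_mode()` helper. The stand-alone sketch below models that check in user space; the bit position (7) reflects my reading of the VT-d specification and the sample capability values are invented, so treat the details as assumptions rather than an authoritative reference.

```c
/*
 * Stand-alone model of the caching-mode test used in the patch below.
 * In the kernel, cap_caching_mode() extracts the Caching Mode (CM) bit
 * from the VT-d capability register; bit position 7 is assumed here.
 * The sample capability values are invented for illustration.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CAP_CM_SHIFT	7	/* assumed position of the CM bit */

static bool cap_caching_mode(uint64_t cap)
{
	return (cap >> CAP_CM_SHIFT) & 1;
}

int main(void)
{
	uint64_t bare_metal_cap = 0x00c0000c40660462ULL;	/* made-up value, CM clear */
	uint64_t virtual_cap    = bare_metal_cap | (1ULL << CAP_CM_SHIFT);

	printf("bare metal: caching mode %s\n",
	       cap_caching_mode(bare_metal_cap) ? "on" : "off");
	printf("virtual:    caching mode %s\n",
	       cap_caching_mode(virtual_cap) ? "on" : "off");
	return 0;
}
```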
```diff
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 788119c5b021..de3dd617cf60 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -5373,6 +5373,36 @@ intel_iommu_domain_set_attr(struct iommu_domain *domain,
 	return ret;
 }
 
+static bool domain_use_flush_queue(void)
+{
+	struct dmar_drhd_unit *drhd;
+	struct intel_iommu *iommu;
+	bool r = true;
+
+	if (intel_iommu_strict)
+		return false;
+
+	/*
+	 * The flush queue implementation does not perform page-selective
+	 * invalidations that are required for efficient TLB flushes in virtual
+	 * environments. The benefit of batching is likely to be much lower than
+	 * the overhead of synchronizing the virtual and physical IOMMU
+	 * page-tables.
+	 */
+	rcu_read_lock();
+	for_each_active_iommu(iommu, drhd) {
+		if (!cap_caching_mode(iommu->cap))
+			continue;
+
+		pr_warn_once("IOMMU batching is disabled due to virtualization");
+		r = false;
+		break;
+	}
+	rcu_read_unlock();
+
+	return r;
+}
+
 static int
 intel_iommu_domain_get_attr(struct iommu_domain *domain,
 			    enum iommu_attr attr, void *data)
@@ -5383,7 +5413,7 @@ intel_iommu_domain_get_attr(struct iommu_domain *domain,
 	case IOMMU_DOMAIN_DMA:
 		switch (attr) {
 		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
-			*(int *)data = !intel_iommu_strict;
+			*(int *)data = domain_use_flush_queue();
 			return 0;
 		default:
 			return -ENODEV;
```
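To illustrate the policy the new helper implements, here is a rough user-space model of the decision above: strict mode always disables the flush queue, and otherwise a single caching-mode IOMMU anywhere in the system is enough to turn batching off. The array stands in for the kernel's active DRHD list, the capability values are invented, and the RCU locking and `pr_warn_once()` of the real code are only noted in comments.

```c
/*
 * User-space model of domain_use_flush_queue() from the hunk above.
 * "caps" stands in for the kernel's active DRHD list; the capability
 * values are invented. The real helper also holds the RCU read lock
 * while walking the list and warns once when batching is disabled.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define CAP_CM_SHIFT	7	/* assumed position of the Caching Mode bit */

static bool cap_caching_mode(uint64_t cap)
{
	return (cap >> CAP_CM_SHIFT) & 1;
}

static bool domain_use_flush_queue(bool strict, const uint64_t *caps, size_t n)
{
	if (strict)
		return false;

	/* One virtualized (caching-mode) IOMMU disables batching globally. */
	for (size_t i = 0; i < n; i++) {
		if (cap_caching_mode(caps[i]))
			return false;
	}
	return true;
}

int main(void)
{
	uint64_t bare_metal[] = { 0x1, 0x2 };				/* CM clear */
	uint64_t mixed[]      = { 0x1, 0x2 | (1ULL << CAP_CM_SHIFT) };	/* one CM set */

	printf("bare metal, non-strict: flush queue %s\n",
	       domain_use_flush_queue(false, bare_metal, 2) ? "on" : "off");
	printf("one virtual IOMMU:      flush queue %s\n",
	       domain_use_flush_queue(false, mixed, 2) ? "on" : "off");
	printf("strict mode:            flush queue %s\n",
	       domain_use_flush_queue(true, bare_metal, 2) ? "on" : "off");
	return 0;
}
```

Note that the check in the patch is deliberately coarse: it walks every active IOMMU rather than inspecting the domain being configured, so mixing one virtualized IOMMU with physical ones disables batching for all of them.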