Message ID | 1648067472-13000-3-git-send-email-mikelley@microsoft.com
---|---
State | Superseded
Series | Fix coherence for VMbus and PCI pass-thru devices in Hyper-V VM
On 2022-03-23 20:31, Michael Kelley wrote:
> PCI pass-thru devices in a Hyper-V VM are represented as a VMBus
> device and as a PCI device. The coherence of the VMbus device is
> set based on the VMbus node in ACPI, but the PCI device has no
> ACPI node and defaults to not hardware coherent. This results
> in extra software coherence management overhead on ARM64 when
> devices are hardware coherent.
>
> Fix this by setting up the PCI host bus so that normal
> PCI mechanisms will propagate the coherence of the VMbus
> device to the PCI device. There's no effect on x86/x64 where
> devices are always hardware coherent.

Honestly, I don't hate this :) It seems conceptually accurate, as far as I understand, and in functional terms I'm starting to think it might even be the most correct approach anyway. In the physical world we might be surprised to find the PCI side of a host bridge behind anything other than some platform/ACPI device representing the other side of a physical host bridge or root complex, but who's to say that a paravirtual world can't present a more abstract topology? Either way, a one-line way of tying in to the standard flow is hard to turn down.

Acked-by: Robin Murphy <robin.murphy@arm.com>

> Signed-off-by: Michael Kelley <mikelley@microsoft.com>
> ---
>  drivers/pci/controller/pci-hyperv.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> index ae0bc2f..88b3b56 100644
> --- a/drivers/pci/controller/pci-hyperv.c
> +++ b/drivers/pci/controller/pci-hyperv.c
> @@ -3404,6 +3404,15 @@ static int hv_pci_probe(struct hv_device *hdev,
>  	hbus->bridge->domain_nr = dom;
>  #ifdef CONFIG_X86
>  	hbus->sysdata.domain = dom;
> +#elif defined(CONFIG_ARM64)
> +	/*
> +	 * Set the PCI bus parent to be the corresponding VMbus
> +	 * device. Then the VMbus device will be assigned as the
> +	 * ACPI companion in pcibios_root_bridge_prepare() and
> +	 * pci_dma_configure() will propagate device coherence
> +	 * information to devices created on the bus.
> +	 */
> +	hbus->sysdata.parent = hdev->device.parent;
>  #endif
>
>  	hbus->hdev = hdev;
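For context, the "standard flow" referenced above and in the in-code comment is the ARM64/ACPI host bridge setup. The following is a simplified sketch, paraphrased rather than a verbatim excerpt of arch/arm64/kernel/pci.c (details may vary by kernel version): on ARM64, hbus->sysdata is laid out as a struct pci_config_window, so the bridge inherits its ACPI companion from whatever device the driver stored in sysdata.parent.

/*
 * Simplified sketch of the ARM64 ACPI path (not a verbatim kernel excerpt).
 * bridge->bus->sysdata->parent is the device stored by hv_pci_probe() above;
 * per the patch, that device carries the VMbus coherence information.
 */
int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
{
	if (!acpi_disabled) {
		struct pci_config_window *cfg = bridge->bus->sysdata;
		struct acpi_device *adev = to_acpi_device(cfg->parent);

		/* The host bridge now has an ACPI companion with coherence info */
		ACPI_COMPANION_SET(&bridge->dev, adev);
	}

	return 0;
}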
diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index ae0bc2f..88b3b56 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -3404,6 +3404,15 @@ static int hv_pci_probe(struct hv_device *hdev,
 	hbus->bridge->domain_nr = dom;
 #ifdef CONFIG_X86
 	hbus->sysdata.domain = dom;
+#elif defined(CONFIG_ARM64)
+	/*
+	 * Set the PCI bus parent to be the corresponding VMbus
+	 * device. Then the VMbus device will be assigned as the
+	 * ACPI companion in pcibios_root_bridge_prepare() and
+	 * pci_dma_configure() will propagate device coherence
+	 * information to devices created on the bus.
+	 */
+	hbus->sysdata.parent = hdev->device.parent;
 #endif
 
 	hbus->hdev = hdev;
PCI pass-thru devices in a Hyper-V VM are represented as a VMBus device and as a PCI device. The coherence of the VMbus device is set based on the VMbus node in ACPI, but the PCI device has no ACPI node and defaults to not hardware coherent. This results in extra software coherence management overhead on ARM64 when devices are hardware coherent.

Fix this by setting up the PCI host bus so that normal PCI mechanisms will propagate the coherence of the VMbus device to the PCI device. There's no effect on x86/x64 where devices are always hardware coherent.

Signed-off-by: Michael Kelley <mikelley@microsoft.com>
---
 drivers/pci/controller/pci-hyperv.c | 9 +++++++++
 1 file changed, 9 insertions(+)
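The "normal PCI mechanisms" mentioned above are the generic DMA configuration done when a PCI device is added: the device walks up to its host bridge and takes its DMA coherence from the bridge's ACPI companion. The sketch below paraphrases the ACPI branch of pci_dma_configure() from memory; it is not a verbatim excerpt of drivers/pci/pci-driver.c and the surrounding OF handling is omitted.

/*
 * Simplified sketch of the ACPI branch of pci_dma_configure()
 * (paraphrased, not a verbatim kernel excerpt).
 */
static int pci_dma_configure(struct device *dev)
{
	struct device *bridge = pci_get_host_bridge_device(to_pci_dev(dev));
	int ret = 0;

	if (has_acpi_companion(bridge)) {
		struct acpi_device *adev = to_acpi_device_node(bridge->fwnode);

		/*
		 * acpi_get_dma_attr() reports DEV_DMA_COHERENT (or not) based
		 * on the companion set up in pcibios_root_bridge_prepare(),
		 * and acpi_dma_configure() applies it to the new PCI device.
		 */
		ret = acpi_dma_configure(dev, acpi_get_dma_attr(adev));
	}

	pci_put_host_bridge_device(bridge);
	return ret;
}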