
[v3,0/7] GenieZone hypervisor drivers

Message ID 20230512080405.12043-1-yi-de.wu@mediatek.com

Message

Yi-De Wu May 12, 2023, 8:03 a.m. UTC
This series is based on linux-next, tag: next-20230512.

GenieZone is MediaTek's proprietary hypervisor solution, and it runs
stand-alone in EL2 as a type-1 hypervisor. It is a pure EL2
implementation, which means it does not rely on any specific host VM,
and this improves GenieZone's security because it limits its interface.

To enable running guest VMs, a driver (gzvm) is provided for the VMM
(virtual machine monitor) to operate. Currently, the gzvm driver
supports only crosvm.

This series adds ioctl interfaces for a userspace VMM (e.g., crosvm) to
manage the guest VM lifecycle, an irqchip for virtual interrupt
handling, and an asynchronous notification mechanism (irqfd/ioeventfd)
for the VMM.
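
As a rough illustration of the intended usage, a VMM's setup flow is
sketched below. The ioctl and device-node names are assumptions modeled
on the KVM-like interface described above; the authoritative
definitions live in include/uapi/linux/gzvm.h added by this series.

/*
 * Illustrative userspace sketch only.  "/dev/gzvm", GZVM_CREATE_VM,
 * GZVM_CREATE_VCPU and GZVM_RUN are assumed names following the
 * KVM-style model above; see include/uapi/linux/gzvm.h in this series
 * for the real interface.  Error handling of vm_fd/vcpu_fd is omitted
 * for brevity.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/gzvm.h>		/* uapi header added by this series */

int main(void)
{
	int gzvm_fd, vm_fd, vcpu_fd;

	gzvm_fd = open("/dev/gzvm", O_RDWR);	/* driver's char device */
	if (gzvm_fd < 0) {
		perror("open /dev/gzvm");
		return 1;
	}

	vm_fd = ioctl(gzvm_fd, GZVM_CREATE_VM, 0);	/* VM fd */
	vcpu_fd = ioctl(vm_fd, GZVM_CREATE_VCPU, 0);	/* vCPU 0 fd */

	/*
	 * Guest memory regions, the virtual irqchip and irqfd/ioeventfd
	 * notifications would be configured here, followed by the vCPU
	 * run loop: ioctl(vcpu_fd, GZVM_RUN, ...).
	 */

	close(vcpu_fd);
	close(vm_fd);
	close(gzvm_fd);
	return 0;
}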

Changes in v3:
- Refactor: separate arch/arm64/geniezone/gzvm_arch.c into vm.c/vcpu.c/vgic.c
- Remove redundant functions
- Address reviewers' comments

Changes in v2:
https://lore.kernel.org/all/20230428103622.18291-1-yi-de.wu@mediatek.com/
- Refactor: move to drivers/virt/geniezone
- Refactor: decouple arch-dependent and arch-independent
- Check pending signal before entering guest context
- Address reviewers' comments

v1: https://lore.kernel.org/all/20230413090735.4182-1-yi-de.wu@mediatek.com/

Yi-De Wu (7):
  docs: geniezone: Introduce GenieZone hypervisor
  dt-bindings: hypervisor: Add MediaTek GenieZone hypervisor
  virt: geniezone: Introduce GenieZone hypervisor support
  virt: geniezone: Add vcpu support
  virt: geniezone: Add irqchip support for virtual interrupt injection
  virt: geniezone: Add irqfd support
  virt: geniezone: Add ioeventfd support

 .../hypervisor/mediatek,geniezone-hyp.yaml    |  31 +
 Documentation/virt/geniezone/introduction.rst |  34 ++
 MAINTAINERS                                   |  13 +
 arch/arm64/Kbuild                             |   1 +
 arch/arm64/geniezone/Makefile                 |   9 +
 arch/arm64/geniezone/gzvm_arch_common.h       |  95 ++++
 arch/arm64/geniezone/vcpu.c                   |  84 +++
 arch/arm64/geniezone/vgic.c                   |  91 +++
 arch/arm64/geniezone/vm.c                     | 174 ++++++
 arch/arm64/include/uapi/asm/gzvm_arch.h       |  47 ++
 drivers/virt/Kconfig                          |   2 +-
 drivers/virt/geniezone/Kconfig                |  17 +
 drivers/virt/geniezone/Makefile               |  11 +
 drivers/virt/geniezone/gzvm_common.h          |  12 +
 drivers/virt/geniezone/gzvm_ioeventfd.c       | 263 +++++++++
 drivers/virt/geniezone/gzvm_irqchip.c         |  13 +
 drivers/virt/geniezone/gzvm_irqfd.c           | 537 ++++++++++++++++++
 drivers/virt/geniezone/gzvm_main.c            | 151 +++++
 drivers/virt/geniezone/gzvm_vcpu.c            | 260 +++++++++
 drivers/virt/geniezone/gzvm_vm.c              | 448 +++++++++++++++
 include/linux/gzvm_drv.h                      | 154 +++++
 include/uapi/asm-generic/Kbuild               |   1 +
 include/uapi/asm-generic/gzvm_arch.h          |  10 +
 include/uapi/linux/gzvm.h                     | 270 +++++++++
 24 files changed, 2727 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/devicetree/bindings/hypervisor/mediatek,geniezone-hyp.yaml
 create mode 100644 Documentation/virt/geniezone/introduction.rst
 create mode 100644 arch/arm64/geniezone/Makefile
 create mode 100644 arch/arm64/geniezone/gzvm_arch_common.h
 create mode 100644 arch/arm64/geniezone/vcpu.c
 create mode 100644 arch/arm64/geniezone/vgic.c
 create mode 100644 arch/arm64/geniezone/vm.c
 create mode 100644 arch/arm64/include/uapi/asm/gzvm_arch.h
 create mode 100644 drivers/virt/geniezone/Kconfig
 create mode 100644 drivers/virt/geniezone/Makefile
 create mode 100644 drivers/virt/geniezone/gzvm_common.h
 create mode 100644 drivers/virt/geniezone/gzvm_ioeventfd.c
 create mode 100644 drivers/virt/geniezone/gzvm_irqchip.c
 create mode 100644 drivers/virt/geniezone/gzvm_irqfd.c
 create mode 100644 drivers/virt/geniezone/gzvm_main.c
 create mode 100644 drivers/virt/geniezone/gzvm_vcpu.c
 create mode 100644 drivers/virt/geniezone/gzvm_vm.c
 create mode 100644 include/linux/gzvm_drv.h
 create mode 100644 include/uapi/asm-generic/gzvm_arch.h
 create mode 100644 include/uapi/linux/gzvm.h

Comments

Marc Zyngier May 18, 2023, 8:27 a.m. UTC | #1
On Fri, 12 May 2023 09:04:01 +0100,
Yi-De Wu <yi-de.wu@mediatek.com> wrote:
> 
> From: "Yingshiuan Pan" <yingshiuan.pan@mediatek.com>
> 
> GenieZone is MediaTek's hypervisor solution; it runs stand-alone in
> EL2 as a type-1 hypervisor. This patch exports a set of ioctl
> interfaces for a userspace VMM (e.g., crosvm) to manage the guest VM
> lifecycle (creation and destruction) on GenieZone.
> 
> Signed-off-by: Yingshiuan Pan <yingshiuan.pan@mediatek.com>
> Signed-off-by: Yi-De Wu <yi-de.wu@mediatek.com>

[...]

> +/**
> + * gzvm_gfn_to_pfn_memslot() - Translate gfn (guest ipa) to pfn (host pa),
> + *			       result is in @pfn
> + *
> + * Leverage KVM's gfn_to_pfn_memslot(). Because gfn_to_pfn_memslot() needs
> + * kvm_memory_slot as parameter, this function populates necessary fields
> + * for calling gfn_to_pfn_memslot().
> + *
> + * Return:
> + * * 0			- Success
> + * * -EFAULT		- Failed to convert
> + */
> +static int gzvm_gfn_to_pfn_memslot(struct gzvm_memslot *memslot, u64 gfn, u64 *pfn)
> +{
> +	hfn_t __pfn;
> +	struct kvm_memory_slot kvm_slot = {0};
> +
> +	kvm_slot.base_gfn = memslot->base_gfn;
> +	kvm_slot.npages = memslot->npages;
> +	kvm_slot.dirty_bitmap = NULL;
> +	kvm_slot.userspace_addr = memslot->userspace_addr;
> +	kvm_slot.flags = memslot->flags;
> +	kvm_slot.id = memslot->slot_id;
> +	kvm_slot.as_id = 0;
> +
> +	__pfn = gfn_to_pfn_memslot(&kvm_slot, gfn);
> +	if (is_error_noslot_pfn(__pfn)) {
> +		*pfn = 0;
> +		return -EFAULT;
> +	}

I have commented on this before: there is absolutely *no way* that you
can use KVM as the unwilling helper for your stuff. You are passing
uninitialised data to the core KVM, completely ignoring the semantics
of all the other fields.

More importantly, you are now holding us responsible for any breakage
that would be caused to your code if we change the internals of this
*PRIVATE FUNCTION*.

Do you see Xen or Hyper-V using KVM's internals as some sort of
backend to make their life easier? No, because they understand that
this is off-limits, and creates an unhealthy dependency for both
hypervisors.
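
For the record, nothing stops the driver from doing this translation on
its own with core mm helpers instead of reaching into KVM. A minimal,
hypothetical sketch (struct gzvm_memslot comes from the quoted patch;
the helper name and the pinning policy are illustrative only, not a
drop-in replacement):

#include <linux/mm.h>

/*
 * Hypothetical alternative: resolve a gfn through the memslot's own
 * userspace mapping instead of borrowing KVM's gfn_to_pfn_memslot().
 * The pinned page must later be released with unpin_user_page() when
 * the stage-2 mapping is torn down.
 */
static int gzvm_gfn_to_pfn_standalone(struct gzvm_memslot *memslot,
				      u64 gfn, u64 *pfn)
{
	unsigned long hva;
	struct page *page;
	int ret;

	if (gfn < memslot->base_gfn ||
	    gfn >= memslot->base_gfn + memslot->npages)
		return -EFAULT;

	/* Host virtual address backing this guest frame */
	hva = memslot->userspace_addr +
	      ((gfn - memslot->base_gfn) << PAGE_SHIFT);

	/* Long-term pin so the page cannot migrate under the guest */
	ret = pin_user_pages_fast(hva, 1, FOLL_WRITE | FOLL_LONGTERM, &page);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;

	*pfn = page_to_pfn(page);
	return 0;
}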

So this is a strong NAK. And you can trust me to keep voicing my
opposition to this sort of horror, wherever I will see these patches.

	M.