From patchwork Thu Jul 27 07:59:54 2023
X-Patchwork-Submitter: Yi-De Wu
X-Patchwork-Id: 707220
From: Yi-De Wu
To: Yingshiuan Pan, Ze-Yu Wang, Yi-De Wu, Rob Herring, Krzysztof Kozlowski,
    Conor Dooley, Jonathan Corbet, Catalin Marinas, Will Deacon, Arnd Bergmann,
    Matthias Brugger, AngeloGioacchino Del Regno
Cc: David Bradil, Trilok Soni, Ivan Tseng, Jade Shih, My Chuang, Shawn Hsiao,
    PeiLun Suei, Liju Chen, Willix Yeh
Subject: [PATCH v5 01/12] docs: geniezone: Introduce GenieZone hypervisor
Date: Thu, 27 Jul 2023 15:59:54 +0800
Message-ID: <20230727080005.14474-2-yi-de.wu@mediatek.com>
In-Reply-To: <20230727080005.14474-1-yi-de.wu@mediatek.com>
References: <20230727080005.14474-1-yi-de.wu@mediatek.com>
List-ID:
X-Mailing-List: devicetree@vger.kernel.org

From: "Yi-De Wu"

GenieZone is MediaTek's proprietary hypervisor solution. It runs standalone
in EL2 as a Type-1 hypervisor. It is a pure EL2 implementation, which means
it does not rely on any specific host VM; this improves GenieZone's security
because it limits the hypervisor's interface.

Signed-off-by: Yingshiuan Pan
Signed-off-by: Liju Chen
Signed-off-by: Yi-De Wu
Reported-by: kernel test robot
Closes: https://lore.kernel.org/oe-kbuild-all/202306151938.M7471qHi-lkp@intel.com/
---
 Documentation/virt/geniezone/introduction.rst | 86 +++++++++++++++++++
 Documentation/virt/index.rst                  |  1 +
 MAINTAINERS                                   |  6 ++
 3 files changed, 93 insertions(+)
 create mode 100644 Documentation/virt/geniezone/introduction.rst

diff --git a/Documentation/virt/geniezone/introduction.rst b/Documentation/virt/geniezone/introduction.rst
new file mode 100644
index 000000000000..fb9fa41bcfb8
--- /dev/null
+++ b/Documentation/virt/geniezone/introduction.rst
@@ -0,0 +1,86 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+======================
+GenieZone Introduction
+======================
+
+Overview
+========
+The GenieZone hypervisor (gzvm) is a Type-1 hypervisor that supports various
+virtual machine types and provides security features such as TEE-like
+scenarios and secure boot. It can create guest VMs for security use cases and
+provides both platform and interrupt virtualization. Although the hypervisor
+boots independently, it requires the GenieZone hypervisor kernel driver
+(gzvm-ko) to leverage the Linux kernel for vCPU scheduling, memory management,
+inter-VM communication and virtio backend support.
+
+Supported Architecture
+======================
+GenieZone currently supports MediaTek ARM64 SoCs only.
+
+Features
+========
+
+- vCPU Management
+
+The VM manager provides vCPUs by time-sharing physical CPUs. It relies on the
+Linux kernel in the host VM for vCPU scheduling and VM power management.
+
+- Memory Management
+
+For security reasons, VMs are not allowed to use physical memory directly;
+memory access is dictated by the privilege model managed by the GenieZone
+hypervisor. With the help of gzvm-ko, the hypervisor manages memory as
+objects.
+
+- Virtual Platform
+
+A virtual mobile platform is emulated for the guest OS running in a guest VM.
+The platform supports various architecture-defined devices, such as the
+virtual arch timer, GIC, MMIO, PSCI, and exception watching.
+
+- Inter-VM Communication
+
+Communication among guest VMs is mainly provided via RPC. More communication
+mechanisms based on VirtIO-vsock will be provided in the future.
+
+- Device Virtualization
+
+Device virtualization is provided using the well-known VirtIO. gzvm-ko
+redirects MMIO traps back to the VMM, where most virtual devices are
+emulated. Ioeventfd is implemented on top of eventfd to signal the host VM
+that IO events in guest VMs need to be processed.
+
+- Interrupt virtualization
+
+While guest VMs are running, all interrupts, both virtual and physical, are
+handled by the GenieZone hypervisor with the help of gzvm-ko. When no guest
+VM is running, physical interrupts are handled directly by the host VM for
+performance reasons. Irqfd is also implemented on top of eventfd to accept
+vIRQ requests in gzvm-ko (see the eventfd sketch below).
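Both ioeventfd and irqfd mentioned in the Features list build on the kernel's
eventfd primitive. A rough editorial sketch of the VMM side follows; it is not
part of this documentation patch, the registration ioctls arrive in later
patches of this series, and the helper names are made up for illustration:

#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Create the eventfd that will later be registered with gzvm-ko. */
int gzvm_make_notify_fd(void)			/* hypothetical helper */
{
	return eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
}

/*
 * ioeventfd direction: the driver signals the fd on guest IO and a VMM IO
 * thread drains the 8-byte counter.  irqfd is the reverse: the VMM (or a
 * vhost device) writes to the fd to request a vIRQ injection.
 */
void gzvm_drain_notify_fd(int efd)		/* hypothetical helper */
{
	uint64_t cnt;

	if (read(efd, &cnt, sizeof(cnt)) == sizeof(cnt)) {
		/* handle the coalesced notification(s) */
	}
}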
+
+Platform architecture component
+===============================
+
+- vm
+
+The vm component is responsible for setting up the capability and memory
+management for the protected VMs. The capability is mainly about lifecycle
+control and boot context initialization. The memory management is tightly
+integrated with the ARM 2-stage translation tables to convert VA to IPA to PA
+under the security measures required by protected VMs.
+
+- vcpu
+
+The vcpu component is the core of virtualizing an aarch64 physical CPU. It
+controls the vCPU lifecycle, including creation, running and destruction.
+With a self-defined exit handler, the vm component is able to act accordingly
+before it is terminated.
+
+- vgic
+
+The vgic component exposes control interfaces to the Linux kernel via
+irqchip, and we intend to support SPIs, PPIs, and SGIs. For virtual
+interrupts, the GenieZone hypervisor writes to the list registers and
+triggers vIRQ injection in guest VMs via the GIC.
diff --git a/Documentation/virt/index.rst b/Documentation/virt/index.rst
index 7fb55ae08598..cf12444db336 100644
--- a/Documentation/virt/index.rst
+++ b/Documentation/virt/index.rst
@@ -16,6 +16,7 @@ Virtualization Support
    coco/sev-guest
    coco/tdx-guest
    hyperv/index
+   geniezone/introduction
 
 .. only:: html and subproject
diff --git a/MAINTAINERS b/MAINTAINERS
index ae1fd58fc64c..a81903c029f2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8741,6 +8741,12 @@ F: include/vdso/
 F: kernel/time/vsyscall.c
 F: lib/vdso/
 
+GENIEZONE HYPERVISOR DRIVER
+M: Yingshiuan Pan
+M: Ze-Yu Wang
+M: Yi-De Wu
+F: Documentation/virt/geniezone/
+
 GENWQE (IBM Generic Workqueue Card)
 M: Frank Haverkamp
 S: Supported

From patchwork Thu Jul 27 07:59:57 2023
X-Patchwork-Submitter: Yi-De Wu
X-Patchwork-Id: 707223
From: Yi-De Wu
To: Yingshiuan Pan, Ze-Yu Wang, Yi-De Wu, Rob Herring, Krzysztof Kozlowski,
    Conor Dooley, Jonathan Corbet, Catalin Marinas, Will Deacon, Arnd Bergmann,
    Matthias Brugger, AngeloGioacchino Del Regno
Cc: David Bradil, Trilok Soni, Ivan Tseng, Jade Shih, My Chuang, Shawn Hsiao,
    PeiLun Suei, Liju Chen, Willix Yeh
Subject: [PATCH v5 04/12] virt: geniezone: Add vcpu support
Date: Thu, 27 Jul 2023 15:59:57 +0800
Message-ID: <20230727080005.14474-5-yi-de.wu@mediatek.com>
In-Reply-To: <20230727080005.14474-1-yi-de.wu@mediatek.com>
References: <20230727080005.14474-1-yi-de.wu@mediatek.com>
List-ID:
X-Mailing-List: devicetree@vger.kernel.org

From: "Yingshiuan Pan"

The VMM uses this interface to create a vcpu instance, which is represented
by a file descriptor. This fd is then used for all vcpu operations, such as
setting vcpu registers, and it accepts the most important ioctl,
GZVM_VCPU_RUN, which asks the GenieZone hypervisor to context-switch and
execute the VM's vcpu context.
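For illustration, a minimal userspace sketch of this flow; it is not part of
the patch, assumes the VM fd was obtained beforehand via the VM-creation ioctl
added earlier in this series, and assumes the new uapi header is installed as
<linux/gzvm.h>. Only GZVM_CREATE_VCPU, GZVM_RUN and struct gzvm_vcpu_run from
this patch are used:

/* Hypothetical VMM-side sketch; error handling trimmed for brevity. */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/gzvm.h>	/* assumed install path of the new uapi header */

static int run_vcpu0(int vm_fd)
{
	struct gzvm_vcpu_run run;
	int vcpu_fd;

	/* GZVM_CREATE_VCPU takes the vcpu slot and returns a vcpu fd. */
	vcpu_fd = ioctl(vm_fd, GZVM_CREATE_VCPU, 0);
	if (vcpu_fd < 0)
		return vcpu_fd;

	memset(&run, 0, sizeof(run));	/* padding1 must be zero-filled */

	for (;;) {
		/*
		 * GZVM_RUN copies the run struct in, asks the hypervisor to
		 * run the vcpu, and copies the exit state back out.
		 */
		if (ioctl(vcpu_fd, GZVM_RUN, &run) < 0)
			return -1;

		switch (run.exit_reason) {
		case GZVM_EXIT_MMIO:
			/* emulate the access described by run.mmio here */
			break;
		case GZVM_EXIT_SHUTDOWN:
			return 0;
		default:
			fprintf(stderr, "unhandled exit 0x%x\n",
				run.exit_reason);
			return -1;
		}
	}
}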
Signed-off-by: Yingshiuan Pan Signed-off-by: Jerry Wang Signed-off-by: Liju Chen Signed-off-by: Yi-De Wu --- arch/arm64/geniezone/Makefile | 2 +- arch/arm64/geniezone/gzvm_arch_common.h | 20 ++ arch/arm64/geniezone/vcpu.c | 88 +++++++++ arch/arm64/geniezone/vm.c | 11 ++ arch/arm64/include/uapi/asm/gzvm_arch.h | 30 +++ drivers/virt/geniezone/Makefile | 3 +- drivers/virt/geniezone/gzvm_vcpu.c | 250 ++++++++++++++++++++++++ drivers/virt/geniezone/gzvm_vm.c | 5 + include/linux/gzvm_drv.h | 21 ++ include/uapi/linux/gzvm.h | 136 +++++++++++++ 10 files changed, 564 insertions(+), 2 deletions(-) create mode 100644 arch/arm64/geniezone/vcpu.c create mode 100644 drivers/virt/geniezone/gzvm_vcpu.c diff --git a/arch/arm64/geniezone/Makefile b/arch/arm64/geniezone/Makefile index 2957898cdd05..69b0a4abeab0 100644 --- a/arch/arm64/geniezone/Makefile +++ b/arch/arm64/geniezone/Makefile @@ -4,6 +4,6 @@ # include $(srctree)/drivers/virt/geniezone/Makefile -gzvm-y += vm.o +gzvm-y += vm.o vcpu.o obj-$(CONFIG_MTK_GZVM) += gzvm.o diff --git a/arch/arm64/geniezone/gzvm_arch_common.h b/arch/arm64/geniezone/gzvm_arch_common.h index fdb95d619102..9be9cf77faa3 100644 --- a/arch/arm64/geniezone/gzvm_arch_common.h +++ b/arch/arm64/geniezone/gzvm_arch_common.h @@ -21,6 +21,7 @@ enum { GZVM_FUNC_CREATE_DEVICE = 11, GZVM_FUNC_PROBE = 12, GZVM_FUNC_ENABLE_CAP = 13, + GZVM_FUNC_INFORM_EXIT = 14, NR_GZVM_FUNC, }; @@ -42,6 +43,7 @@ enum { #define MT_HVC_GZVM_CREATE_DEVICE GZVM_HCALL_ID(GZVM_FUNC_CREATE_DEVICE) #define MT_HVC_GZVM_PROBE GZVM_HCALL_ID(GZVM_FUNC_PROBE) #define MT_HVC_GZVM_ENABLE_CAP GZVM_HCALL_ID(GZVM_FUNC_ENABLE_CAP) +#define MT_HVC_GZVM_INFORM_EXIT GZVM_HCALL_ID(GZVM_FUNC_INFORM_EXIT) /** * gzvm_hypcall_wrapper() - the wrapper for hvc calls @@ -65,4 +67,22 @@ static inline u16 get_vmid_from_tuple(unsigned int tuple) return (u16)(tuple >> 16); } +static inline u16 get_vcpuid_from_tuple(unsigned int tuple) +{ + return (u16)(tuple & 0xffff); +} + +static inline unsigned int +assemble_vm_vcpu_tuple(u16 vmid, u16 vcpuid) +{ + return ((unsigned int)vmid << 16 | vcpuid); +} + +static inline void +disassemble_vm_vcpu_tuple(unsigned int tuple, u16 *vmid, u16 *vcpuid) +{ + *vmid = get_vmid_from_tuple(tuple); + *vcpuid = get_vcpuid_from_tuple(tuple); +} + #endif /* __GZVM_ARCH_COMMON_H__ */ diff --git a/arch/arm64/geniezone/vcpu.c b/arch/arm64/geniezone/vcpu.c new file mode 100644 index 000000000000..95681fd66656 --- /dev/null +++ b/arch/arm64/geniezone/vcpu.c @@ -0,0 +1,88 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2023 MediaTek Inc. 
+ */ + +#include +#include +#include + +#include +#include +#include "gzvm_arch_common.h" + +int gzvm_arch_vcpu_update_one_reg(struct gzvm_vcpu *vcpu, __u64 reg_id, + bool is_write, __u64 *data) +{ + struct arm_smccc_res res; + unsigned long a1; + int ret; + + /* reg id follows KVM's encoding */ + switch (reg_id & GZVM_REG_ARM_COPROC_MASK) { + case GZVM_REG_ARM_CORE: + break; + default: + return -EOPNOTSUPP; + } + + a1 = assemble_vm_vcpu_tuple(vcpu->gzvm->vm_id, vcpu->vcpuid); + if (!is_write) { + ret = gzvm_hypcall_wrapper(MT_HVC_GZVM_GET_ONE_REG, + a1, reg_id, 0, 0, 0, 0, 0, &res); + if (ret == 0) + *data = res.a1; + } else { + ret = gzvm_hypcall_wrapper(MT_HVC_GZVM_SET_ONE_REG, + a1, reg_id, *data, 0, 0, 0, 0, &res); + } + + return ret; +} + +int gzvm_arch_vcpu_run(struct gzvm_vcpu *vcpu, __u64 *exit_reason) +{ + struct arm_smccc_res res; + unsigned long a1; + int ret; + + a1 = assemble_vm_vcpu_tuple(vcpu->gzvm->vm_id, vcpu->vcpuid); + ret = gzvm_hypcall_wrapper(MT_HVC_GZVM_RUN, a1, 0, 0, 0, 0, 0, + 0, &res); + *exit_reason = res.a1; + return ret; +} + +int gzvm_arch_destroy_vcpu(u16 vm_id, int vcpuid) +{ + struct arm_smccc_res res; + unsigned long a1; + + a1 = assemble_vm_vcpu_tuple(vm_id, vcpuid); + gzvm_hypcall_wrapper(MT_HVC_GZVM_DESTROY_VCPU, a1, 0, 0, 0, 0, 0, 0, + &res); + + return 0; +} + +/** + * gzvm_arch_create_vcpu() - Call smc to gz hypervisor to create vcpu + * @vm_id: vm id + * @vcpuid: vcpu id + * @run: Virtual address of vcpu->run + * + * Return: The wrapper helps caller to convert geniezone errno to Linux errno. + */ +int gzvm_arch_create_vcpu(u16 vm_id, int vcpuid, void *run) +{ + struct arm_smccc_res res; + unsigned long a1, a2; + int ret; + + a1 = assemble_vm_vcpu_tuple(vm_id, vcpuid); + a2 = (__u64)virt_to_phys(run); + ret = gzvm_hypcall_wrapper(MT_HVC_GZVM_CREATE_VCPU, a1, a2, 0, 0, 0, 0, + 0, &res); + + return ret; +} diff --git a/arch/arm64/geniezone/vm.c b/arch/arm64/geniezone/vm.c index e35751b21821..2df321f13057 100644 --- a/arch/arm64/geniezone/vm.c +++ b/arch/arm64/geniezone/vm.c @@ -14,6 +14,17 @@ #define PAR_PA47_MASK ((((1UL << 48) - 1) >> 12) << 12) +int gzvm_arch_inform_exit(u16 vm_id) +{ + struct arm_smccc_res res; + + arm_smccc_hvc(MT_HVC_GZVM_INFORM_EXIT, vm_id, 0, 0, 0, 0, 0, 0, &res); + if (res.a0 == 0) + return 0; + + return -ENXIO; +} + int gzvm_arch_probe(void) { struct arm_smccc_res res; diff --git a/arch/arm64/include/uapi/asm/gzvm_arch.h b/arch/arm64/include/uapi/asm/gzvm_arch.h index 847bb627a65d..e56b4700e07e 100644 --- a/arch/arm64/include/uapi/asm/gzvm_arch.h +++ b/arch/arm64/include/uapi/asm/gzvm_arch.h @@ -17,4 +17,34 @@ /* GZVM_CAP_ARM_PVM_SET_PROTECTED_VM only sets protected but not load pvmfw */ #define GZVM_CAP_ARM_PVM_SET_PROTECTED_VM 2 +/* + * Architecture specific registers are to be defined in arch headers and + * ORed with the arch identifier. 
+ */ +#define GZVM_REG_ARM 0x4000000000000000ULL +#define GZVM_REG_ARM64 0x6000000000000000ULL + +#define GZVM_REG_SIZE_SHIFT 52 +#define GZVM_REG_SIZE_MASK 0x00f0000000000000ULL +#define GZVM_REG_SIZE_U8 0x0000000000000000ULL +#define GZVM_REG_SIZE_U16 0x0010000000000000ULL +#define GZVM_REG_SIZE_U32 0x0020000000000000ULL +#define GZVM_REG_SIZE_U64 0x0030000000000000ULL +#define GZVM_REG_SIZE_U128 0x0040000000000000ULL +#define GZVM_REG_SIZE_U256 0x0050000000000000ULL +#define GZVM_REG_SIZE_U512 0x0060000000000000ULL +#define GZVM_REG_SIZE_U1024 0x0070000000000000ULL +#define GZVM_REG_SIZE_U2048 0x0080000000000000ULL + +#define GZVM_REG_ARCH_MASK 0xff00000000000000ULL + +/* If you need to interpret the index values, here is the key: */ +#define GZVM_REG_ARM_COPROC_MASK 0x000000000FFF0000 +#define GZVM_REG_ARM_COPROC_SHIFT 16 + +/* Normal registers are mapped as coprocessor 16. */ +#define GZVM_REG_ARM_CORE (0x0010 << GZVM_REG_ARM_COPROC_SHIFT) +#define GZVM_REG_ARM_CORE_REG(name) \ + (offsetof(struct gzvm_regs, name) / sizeof(__u32)) + #endif /* __GZVM_ARCH_H__ */ diff --git a/drivers/virt/geniezone/Makefile b/drivers/virt/geniezone/Makefile index 066efddc0b9c..8ebf2db0c970 100644 --- a/drivers/virt/geniezone/Makefile +++ b/drivers/virt/geniezone/Makefile @@ -6,5 +6,6 @@ GZVM_DIR ?= ../../../drivers/virt/geniezone -gzvm-y := $(GZVM_DIR)/gzvm_main.o $(GZVM_DIR)/gzvm_vm.o +gzvm-y := $(GZVM_DIR)/gzvm_main.o $(GZVM_DIR)/gzvm_vm.o \ + $(GZVM_DIR)/gzvm_vcpu.o diff --git a/drivers/virt/geniezone/gzvm_vcpu.c b/drivers/virt/geniezone/gzvm_vcpu.c new file mode 100644 index 000000000000..e051343f2b0e --- /dev/null +++ b/drivers/virt/geniezone/gzvm_vcpu.c @@ -0,0 +1,250 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2023 MediaTek Inc. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +/* maximum size needed for holding an integer */ +#define ITOA_MAX_LEN 12 + +static long gzvm_vcpu_update_one_reg(struct gzvm_vcpu *vcpu, + void * __user argp, + bool is_write) +{ + struct gzvm_one_reg reg; + void __user *reg_addr; + u64 data = 0; + u64 reg_size; + long ret; + + if (copy_from_user(®, argp, sizeof(reg))) + return -EFAULT; + + reg_addr = (void __user *)reg.addr; + reg_size = (reg.id & GZVM_REG_SIZE_MASK) >> GZVM_REG_SIZE_SHIFT; + reg_size = BIT(reg_size); + + if (is_write) { + if (copy_from_user(&data, reg_addr, reg_size)) + return -EFAULT; + } + + ret = gzvm_arch_vcpu_update_one_reg(vcpu, reg.id, is_write, &data); + + if (ret) + return ret; + + if (!is_write) { + if (copy_to_user(reg_addr, &data, reg_size)) + return -EFAULT; + } + + return 0; +} + +/** + * gzvm_vcpu_run() - Handle vcpu run ioctl, entry point to guest and exit + * point from guest + * @vcpu: Pointer to struct gzvm_vcpu + * @argp: Pointer to struct gzvm_vcpu_run in userspace + * + * Return: + * * 0 - Success. + * * Negative - Failure. 
+ */ +static long gzvm_vcpu_run(struct gzvm_vcpu *vcpu, void * __user argp) +{ + bool need_userspace = false; + u64 exit_reason = 0; + + if (copy_from_user(vcpu->run, argp, sizeof(struct gzvm_vcpu_run))) + return -EFAULT; + + for (int i = 0; i < ARRAY_SIZE(vcpu->run->padding1); i++) { + if (vcpu->run->padding1[i]) + return -EINVAL; + } + + if (vcpu->run->immediate_exit == 1) + return -EINTR; + + while (!need_userspace && !signal_pending(current)) { + gzvm_arch_vcpu_run(vcpu, &exit_reason); + + switch (exit_reason) { + case GZVM_EXIT_MMIO: + need_userspace = true; + break; + /** + * it's geniezone's responsibility to fill corresponding data + * structure + */ + case GZVM_EXIT_HYPERCALL: + fallthrough; + case GZVM_EXIT_EXCEPTION: + fallthrough; + case GZVM_EXIT_DEBUG: + fallthrough; + case GZVM_EXIT_FAIL_ENTRY: + fallthrough; + case GZVM_EXIT_INTERNAL_ERROR: + fallthrough; + case GZVM_EXIT_SYSTEM_EVENT: + fallthrough; + case GZVM_EXIT_SHUTDOWN: + need_userspace = true; + break; + case GZVM_EXIT_IRQ: + fallthrough; + case GZVM_EXIT_GZ: + break; + case GZVM_EXIT_UNKNOWN: + fallthrough; + default: + pr_err("vcpu unknown exit\n"); + need_userspace = true; + goto out; + } + } + +out: + if (copy_to_user(argp, vcpu->run, sizeof(struct gzvm_vcpu_run))) + return -EFAULT; + if (signal_pending(current)) { + // invoke hvc to inform gz to map memory + gzvm_arch_inform_exit(vcpu->gzvm->vm_id); + return -ERESTARTSYS; + } + return 0; +} + +static long gzvm_vcpu_ioctl(struct file *filp, unsigned int ioctl, + unsigned long arg) +{ + int ret = -ENOTTY; + void __user *argp = (void __user *)arg; + struct gzvm_vcpu *vcpu = filp->private_data; + + switch (ioctl) { + case GZVM_RUN: + ret = gzvm_vcpu_run(vcpu, argp); + break; + case GZVM_GET_ONE_REG: + /* is_write */ + ret = gzvm_vcpu_update_one_reg(vcpu, argp, false); + break; + case GZVM_SET_ONE_REG: + /* is_write */ + ret = gzvm_vcpu_update_one_reg(vcpu, argp, true); + break; + default: + break; + } + + return ret; +} + +static const struct file_operations gzvm_vcpu_fops = { + .unlocked_ioctl = gzvm_vcpu_ioctl, + .llseek = noop_llseek, +}; + +/* caller must hold the vm lock */ +static void gzvm_destroy_vcpu(struct gzvm_vcpu *vcpu) +{ + if (!vcpu) + return; + + gzvm_arch_destroy_vcpu(vcpu->gzvm->vm_id, vcpu->vcpuid); + /* clean guest's data */ + memset(vcpu->run, 0, GZVM_VCPU_RUN_MAP_SIZE); + free_pages_exact(vcpu->run, GZVM_VCPU_RUN_MAP_SIZE); + kfree(vcpu); +} + +/** + * gzvm_destroy_vcpus() - Destroy all vcpus, caller has to hold the vm lock + * + * @gzvm: vm struct that owns the vcpus + */ +void gzvm_destroy_vcpus(struct gzvm *gzvm) +{ + int i; + + for (i = 0; i < GZVM_MAX_VCPUS; i++) { + gzvm_destroy_vcpu(gzvm->vcpus[i]); + gzvm->vcpus[i] = NULL; + } +} + +/* create_vcpu_fd() - Allocates an inode for the vcpu. 
*/ +static int create_vcpu_fd(struct gzvm_vcpu *vcpu) +{ + /* sizeof("gzvm-vcpu:") + max(strlen(itoa(vcpuid))) + null */ + char name[10 + ITOA_MAX_LEN + 1]; + + snprintf(name, sizeof(name), "gzvm-vcpu:%d", vcpu->vcpuid); + return anon_inode_getfd(name, &gzvm_vcpu_fops, vcpu, O_RDWR | O_CLOEXEC); +} + +/** + * gzvm_vm_ioctl_create_vcpu() - for GZVM_CREATE_VCPU + * @gzvm: Pointer to struct gzvm + * @cpuid: equals arg + * + * Return: Fd of vcpu, negative errno if error occurs + */ +int gzvm_vm_ioctl_create_vcpu(struct gzvm *gzvm, u32 cpuid) +{ + struct gzvm_vcpu *vcpu; + int ret; + + if (cpuid >= GZVM_MAX_VCPUS) + return -EINVAL; + + vcpu = kzalloc(sizeof(*vcpu), GFP_KERNEL); + if (!vcpu) + return -ENOMEM; + + /** + * Allocate 2 pages for data sharing between driver and gz hypervisor + * + * |- page 0 -|- page 1 -| + * |gzvm_vcpu_run|......|hwstate|.......| + * + */ + vcpu->run = alloc_pages_exact(GZVM_VCPU_RUN_MAP_SIZE, + GFP_KERNEL_ACCOUNT | __GFP_ZERO); + if (!vcpu->run) { + ret = -ENOMEM; + goto free_vcpu; + } + vcpu->vcpuid = cpuid; + vcpu->gzvm = gzvm; + mutex_init(&vcpu->lock); + + ret = gzvm_arch_create_vcpu(gzvm->vm_id, vcpu->vcpuid, vcpu->run); + if (ret < 0) + goto free_vcpu_run; + + ret = create_vcpu_fd(vcpu); + if (ret < 0) + goto free_vcpu_run; + gzvm->vcpus[cpuid] = vcpu; + + return ret; + +free_vcpu_run: + free_pages_exact(vcpu->run, GZVM_VCPU_RUN_MAP_SIZE); +free_vcpu: + kfree(vcpu); + return ret; +} diff --git a/drivers/virt/geniezone/gzvm_vm.c b/drivers/virt/geniezone/gzvm_vm.c index ee751369fd4b..aea99d050653 100644 --- a/drivers/virt/geniezone/gzvm_vm.c +++ b/drivers/virt/geniezone/gzvm_vm.c @@ -280,6 +280,10 @@ static long gzvm_vm_ioctl(struct file *filp, unsigned int ioctl, ret = gzvm_dev_ioctl_check_extension(gzvm, arg); break; } + case GZVM_CREATE_VCPU: { + ret = gzvm_vm_ioctl_create_vcpu(gzvm, arg); + break; + } case GZVM_SET_USER_MEMORY_REGION: { struct gzvm_userspace_memory_region userspace_mem; @@ -313,6 +317,7 @@ static void gzvm_destroy_vm(struct gzvm *gzvm) mutex_lock(&gzvm->lock); + gzvm_destroy_vcpus(gzvm); gzvm_arch_destroy_vm(gzvm->vm_id); mutex_lock(&gzvm_list_lock); diff --git a/include/linux/gzvm_drv.h b/include/linux/gzvm_drv.h index 4fd52fcbd5a8..c42edb4345cc 100644 --- a/include/linux/gzvm_drv.h +++ b/include/linux/gzvm_drv.h @@ -31,6 +31,8 @@ #define GZVM_MAX_VCPUS 8 #define GZVM_MAX_MEM_REGION 10 +#define GZVM_VCPU_RUN_MAP_SIZE (PAGE_SIZE * 2) + /* struct mem_region_addr_range - Identical to ffa memory constituent */ struct mem_region_addr_range { /* the base IPA of the constituent memory region, aligned to 4 kiB */ @@ -58,7 +60,16 @@ struct gzvm_memslot { u32 slot_id; }; +struct gzvm_vcpu { + struct gzvm *gzvm; + int vcpuid; + /* lock of vcpu*/ + struct mutex lock; + struct gzvm_vcpu_run *run; +}; + struct gzvm { + struct gzvm_vcpu *vcpus[GZVM_MAX_VCPUS]; /* userspace tied to this vm */ struct mm_struct *mm; struct gzvm_memslot memslot[GZVM_MAX_MEM_REGION]; @@ -75,6 +86,8 @@ int gzvm_err_to_errno(unsigned long err); void gzvm_destroy_all_vms(void); +void gzvm_destroy_vcpus(struct gzvm *gzvm); + /* arch-dependant functions */ int gzvm_arch_probe(void); int gzvm_arch_set_memregion(u16 vm_id, size_t buf_size, @@ -85,6 +98,14 @@ int gzvm_arch_destroy_vm(u16 vm_id); int gzvm_vm_ioctl_arch_enable_cap(struct gzvm *gzvm, struct gzvm_enable_cap *cap, void __user *argp); + u64 gzvm_hva_to_pa_arch(u64 hva); +int gzvm_vm_ioctl_create_vcpu(struct gzvm *gzvm, u32 cpuid); +int gzvm_arch_vcpu_update_one_reg(struct gzvm_vcpu *vcpu, __u64 reg_id, + bool is_write, 
__u64 *data); +int gzvm_arch_create_vcpu(u16 vm_id, int vcpuid, void *run); +int gzvm_arch_vcpu_run(struct gzvm_vcpu *vcpu, __u64 *exit_reason); +int gzvm_arch_destroy_vcpu(u16 vm_id, int vcpuid); +int gzvm_arch_inform_exit(u16 vm_id); #endif /* __GZVM_DRV_H__ */ diff --git a/include/uapi/linux/gzvm.h b/include/uapi/linux/gzvm.h index 99730c142b0e..4814c82b0dff 100644 --- a/include/uapi/linux/gzvm.h +++ b/include/uapi/linux/gzvm.h @@ -44,6 +44,11 @@ struct gzvm_memory_region { #define GZVM_SET_MEMORY_REGION _IOW(GZVM_IOC_MAGIC, 0x40, \ struct gzvm_memory_region) +/* + * GZVM_CREATE_VCPU receives as a parameter the vcpu slot, + * and returns a vcpu fd. + */ +#define GZVM_CREATE_VCPU _IO(GZVM_IOC_MAGIC, 0x41) /* for GZVM_SET_USER_MEMORY_REGION */ struct gzvm_userspace_memory_region { @@ -59,6 +64,124 @@ struct gzvm_userspace_memory_region { #define GZVM_SET_USER_MEMORY_REGION _IOW(GZVM_IOC_MAGIC, 0x46, \ struct gzvm_userspace_memory_region) +/* + * ioctls for vcpu fds + */ +#define GZVM_RUN _IO(GZVM_IOC_MAGIC, 0x80) + +/* VM exit reason */ +enum { + GZVM_EXIT_UNKNOWN = 0x92920000, + GZVM_EXIT_MMIO = 0x92920001, + GZVM_EXIT_HYPERCALL = 0x92920002, + GZVM_EXIT_IRQ = 0x92920003, + GZVM_EXIT_EXCEPTION = 0x92920004, + GZVM_EXIT_DEBUG = 0x92920005, + GZVM_EXIT_FAIL_ENTRY = 0x92920006, + GZVM_EXIT_INTERNAL_ERROR = 0x92920007, + GZVM_EXIT_SYSTEM_EVENT = 0x92920008, + GZVM_EXIT_SHUTDOWN = 0x92920009, + GZVM_EXIT_GZ = 0x9292000a, +}; + +/** + * struct gzvm_vcpu_run: Same purpose as kvm_run, this struct is + * shared between userspace, kernel and + * GenieZone hypervisor + * @exit_reason: The reason why gzvm_vcpu_run has stopped running the vCPU + * @immediate_exit: Polled when the vcpu is scheduled. + * If set, immediately returns -EINTR + * @padding1: Reserved for future-proof and must be zero filled + * @mmio: The nested struct in anonymous union. Handle mmio in host side + * @phys_addr: The address guest tries to access + * @data: The value to be written (is_write is 1) or + * be filled by user for reads (is_write is 0) + * @size: The size of written data. + * Only the first `size` bytes of `data` are handled + * @reg_nr: The register number where the data is stored + * @is_write: 1 for VM to perform a write or 0 for VM to perform a read + * @fail_entry: The nested struct in anonymous union. + * Handle invalid entry address at the first run + * @hardware_entry_failure_reason: The reason codes about hardware entry failure + * @cpu: The current processor number via smp_processor_id() + * @exception: The nested struct in anonymous union. + * Handle exception occurred in VM + * @exception: Which exception vector + * @error_code: Exception error codes + * @hypercall: The nested struct in anonymous union. + * Some hypercalls issued from VM must be handled + * @args: The hypercall's arguments + * @internal: The nested struct in anonymous union. The errors from hypervisor + * @suberror: The errors codes about GZVM_EXIT_INTERNAL_ERROR + * @ndata: The number of elements used in data[] + * @data: Keep the detailed information about GZVM_EXIT_INTERNAL_ERROR + * @system_event: The nested struct in anonymous union. + * VM's PSCI must be handled by host + * @type: System event type. + * Ex. GZVM_SYSTEM_EVENT_SHUTDOWN or GZVM_SYSTEM_EVENT_RESET...etc. 
+ * @ndata: The number of elements used in data[] + * @data: Keep the detailed information about GZVM_EXIT_SYSTEM_EVENT + * @padding: Fix it to a reasonable size future-proof for keeping the same + * struct size when adding new variables in the union is needed + * + * Keep identical layout between the 3 modules + */ +struct gzvm_vcpu_run { + /* to userspace */ + __u32 exit_reason; + __u8 immediate_exit; + __u8 padding1[3]; + /* union structure of collection of guest exit reason */ + union { + /* GZVM_EXIT_MMIO */ + struct { + /* from FAR_EL2 */ + __u64 phys_addr; + __u8 data[8]; + /* from ESR_EL2 as */ + __u64 size; + /* from ESR_EL2 */ + __u32 reg_nr; + /* from ESR_EL2 */ + __u8 is_write; + } mmio; + /* GZVM_EXIT_FAIL_ENTRY */ + struct { + __u64 hardware_entry_failure_reason; + __u32 cpu; + } fail_entry; + /* GZVM_EXIT_EXCEPTION */ + struct { + __u32 exception; + __u32 error_code; + } exception; + /* GZVM_EXIT_HYPERCALL */ + struct { + __u64 args[8]; /* in-out */ + } hypercall; + /* GZVM_EXIT_INTERNAL_ERROR */ + struct { + __u32 suberror; + __u32 ndata; + __u64 data[16]; + } internal; + /* GZVM_EXIT_SYSTEM_EVENT */ + struct { +#define GZVM_SYSTEM_EVENT_SHUTDOWN 1 +#define GZVM_SYSTEM_EVENT_RESET 2 +#define GZVM_SYSTEM_EVENT_CRASH 3 +#define GZVM_SYSTEM_EVENT_WAKEUP 4 +#define GZVM_SYSTEM_EVENT_SUSPEND 5 +#define GZVM_SYSTEM_EVENT_SEV_TERM 6 +#define GZVM_SYSTEM_EVENT_S2IDLE 7 + __u32 type; + __u32 ndata; + __u64 data[16]; + } system_event; + char padding[256]; + }; +}; + /* for GZVM_ENABLE_CAP */ struct gzvm_enable_cap { /* in */ @@ -73,4 +196,17 @@ struct gzvm_enable_cap { #define GZVM_ENABLE_CAP _IOW(GZVM_IOC_MAGIC, 0xa3, \ struct gzvm_enable_cap) +/* for GZVM_GET/SET_ONE_REG */ +struct gzvm_one_reg { + __u64 id; + __u64 addr; +}; + +#define GZVM_GET_ONE_REG _IOW(GZVM_IOC_MAGIC, 0xab, \ + struct gzvm_one_reg) +#define GZVM_SET_ONE_REG _IOW(GZVM_IOC_MAGIC, 0xac, \ + struct gzvm_one_reg) + +#define GZVM_REG_GENERIC 0x0000000000000000ULL + #endif /* __GZVM_H__ */ From patchwork Thu Jul 27 07:59:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yi-De Wu X-Patchwork-Id: 707222 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EA0E1C07E8E for ; Thu, 27 Jul 2023 08:03:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233678AbjG0ID6 (ORCPT ); Thu, 27 Jul 2023 04:03:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39232 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233748AbjG0ICz (ORCPT ); Thu, 27 Jul 2023 04:02:55 -0400 Received: from mailgw02.mediatek.com (unknown [210.61.82.184]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 36D082D67; Thu, 27 Jul 2023 01:00:26 -0700 (PDT) X-UUID: a1d70fbc2c5311eeb20a276fd37b9834-20230727 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mediatek.com; s=dk; h=Content-Type:MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:CC:To:From; bh=FgfBAzQoHFTaH2qCFD7VJQbtWH4Cthq+FXUmtuZGzHk=; b=ThnPt9zCSQkcf0vttgSuPwiJkevJ8xAi0jvgOBvdf623DqQmd/5ZhUu2X6a6pzCROJIpEuXgq8dSkQnE2p18axKJ0LrlO4PnCtPKzm0dmWM5rqASews9rBFumphCv9CWFXEiFByqJ7V9gKBQBUF56Ghhq9j3Lx8qjyqZO0YHtP8=; X-CID-P-RULE: Release_Ham X-CID-O-INFO: VERSION:1.1.29, REQID:38bc736a-aa25-4bb5-aa4e-670ae929811c, IP:0, U 
RL:0,TC:0,Content:-25,EDM:0,RT:0,SF:95,FILE:0,BULK:0,RULE:Release_Ham,ACTI ON:release,TS:70 X-CID-INFO: VERSION:1.1.29, REQID:38bc736a-aa25-4bb5-aa4e-670ae929811c, IP:0, URL :0,TC:0,Content:-25,EDM:0,RT:0,SF:95,FILE:0,BULK:0,RULE:Spam_GS981B3D,ACTI ON:quarantine,TS:70 X-CID-META: VersionHash:e7562a7, CLOUDID:c4998342-d291-4e62-b539-43d7d78362ba, B ulkID:230727160017RU7JY85L,BulkQuantity:2,Recheck:0,SF:38|29|28|17|19|48,T C:nil,Content:0,EDM:-3,IP:nil,URL:0,File:nil,Bulk:43,QS:nil,BEC:nil,COL:0, OSI:0,OSA:0,AV:0,LES:1,SPR:NO,DKR:0,DKP:0,BRR:0,BRE:0 X-CID-BVR: 0 X-CID-BAS: 0,_,0,_ X-CID-FACTOR: TF_CID_SPAM_ASC, TF_CID_SPAM_FAS, TF_CID_SPAM_FSD, TF_CID_SPAM_SNR, TF_CID_SPAM_SDM X-UUID: a1d70fbc2c5311eeb20a276fd37b9834-20230727 Received: from mtkmbs14n1.mediatek.inc [(172.21.101.75)] by mailgw02.mediatek.com (envelope-from ) (Generic MTA with TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 256/256) with ESMTP id 1359804051; Thu, 27 Jul 2023 16:00:20 +0800 Received: from mtkmbs11n2.mediatek.inc (172.21.101.187) by mtkmbs13n2.mediatek.inc (172.21.101.108) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.26; Thu, 27 Jul 2023 16:00:14 +0800 Received: from mtksdccf07.mediatek.inc (172.21.84.99) by mtkmbs11n2.mediatek.inc (172.21.101.73) with Microsoft SMTP Server id 15.2.1118.26 via Frontend Transport; Thu, 27 Jul 2023 16:00:14 +0800 From: Yi-De Wu To: Yingshiuan Pan , Ze-Yu Wang , Yi-De Wu , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Jonathan Corbet , Catalin Marinas , Will Deacon , Arnd Bergmann , Matthias Brugger , AngeloGioacchino Del Regno CC: , , , , , , David Bradil , Trilok Soni , Ivan Tseng , Jade Shih , My Chuang , Shawn Hsiao , PeiLun Suei , Liju Chen , Willix Yeh Subject: [PATCH v5 05/12] virt: geniezone: Add irqchip support for virtual interrupt injection Date: Thu, 27 Jul 2023 15:59:58 +0800 Message-ID: <20230727080005.14474-6-yi-de.wu@mediatek.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20230727080005.14474-1-yi-de.wu@mediatek.com> References: <20230727080005.14474-1-yi-de.wu@mediatek.com> MIME-Version: 1.0 X-MTK: N Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org From: "Yingshiuan Pan" Enable GenieZone to handle virtual interrupt injection request. Signed-off-by: Yingshiuan Pan Signed-off-by: Liju Chen Signed-off-by: Yi-De Wu --- arch/arm64/geniezone/Makefile | 2 +- arch/arm64/geniezone/vgic.c | 108 ++++++++++++++++++++++++ arch/arm64/include/uapi/asm/gzvm_arch.h | 4 + drivers/virt/geniezone/gzvm_common.h | 12 +++ drivers/virt/geniezone/gzvm_vm.c | 82 ++++++++++++++++++ include/linux/gzvm_drv.h | 4 + include/uapi/linux/gzvm.h | 66 +++++++++++++++ 7 files changed, 277 insertions(+), 1 deletion(-) create mode 100644 arch/arm64/geniezone/vgic.c create mode 100644 drivers/virt/geniezone/gzvm_common.h diff --git a/arch/arm64/geniezone/Makefile b/arch/arm64/geniezone/Makefile index 69b0a4abeab0..0e4f1087f9de 100644 --- a/arch/arm64/geniezone/Makefile +++ b/arch/arm64/geniezone/Makefile @@ -4,6 +4,6 @@ # include $(srctree)/drivers/virt/geniezone/Makefile -gzvm-y += vm.o vcpu.o +gzvm-y += vm.o vcpu.o vgic.o obj-$(CONFIG_MTK_GZVM) += gzvm.o diff --git a/arch/arm64/geniezone/vgic.c b/arch/arm64/geniezone/vgic.c new file mode 100644 index 000000000000..3746e0c9e247 --- /dev/null +++ b/arch/arm64/geniezone/vgic.c @@ -0,0 +1,108 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2023 MediaTek Inc. 
+ */ + +#include +#include +#include +#include "gzvm_arch_common.h" + +/** + * is_irq_valid() - Check the irq number and irq_type are matched + * @irq: interrupt number + * @irq_type: interrupt type + * + * Return: + * true if irq is valid else false. + */ +static bool is_irq_valid(u32 irq, u32 irq_type) +{ + switch (irq_type) { + case GZVM_IRQ_TYPE_CPU: + /* 0 ~ 15: SGI */ + if (likely(irq <= GZVM_IRQ_CPU_FIQ)) + return true; + break; + case GZVM_IRQ_TYPE_PPI: + /* 16 ~ 31: PPI */ + if (likely(irq >= GZVM_VGIC_NR_SGIS && + irq < GZVM_VGIC_NR_PRIVATE_IRQS)) + return true; + break; + case GZVM_IRQ_TYPE_SPI: + /* 32 ~ : SPT */ + if (likely(irq >= GZVM_VGIC_NR_PRIVATE_IRQS)) + return true; + break; + default: + return false; + } + return false; +} + +/** + * gzvm_vgic_inject_irq() - Inject virtual interrupt to a VM + * @gzvm: Pointer to struct gzvm + * @vcpu_idx: vcpu index, only valid if PPI + * @irq_type: Interrupt type + * @irq: irq number + * @level: 1 if true else 0 + * + * Return: + * * 0 - Success. + * * Negative - Failure. + */ +static int gzvm_vgic_inject_irq(struct gzvm *gzvm, unsigned int vcpu_idx, + u32 irq_type, u32 irq, bool level) +{ + unsigned long a1 = assemble_vm_vcpu_tuple(gzvm->vm_id, vcpu_idx); + struct arm_smccc_res res; + + if (!unlikely(is_irq_valid(irq, irq_type))) + return -EINVAL; + + gzvm_hypcall_wrapper(MT_HVC_GZVM_IRQ_LINE, a1, irq, level, + 0, 0, 0, 0, &res); + if (res.a0) { + pr_err("Failed to set IRQ level (%d) to irq#%u on vcpu %d with ret=%d\n", + level, irq, vcpu_idx, (int)res.a0); + return -EFAULT; + } + + return 0; +} + +/** + * gzvm_vgic_inject_spi() - Inject virtual spi interrupt + * @gzvm: Pointer to struct gzvm + * @vcpu_idx: vcpu index + * @spi_irq: This is spi interrupt number (starts from 0 instead of 32) + * @level: 1 if true else 0 + * + * Return: + * * 0 if succeed else other negative values indicating each errors + */ +static int gzvm_vgic_inject_spi(struct gzvm *gzvm, unsigned int vcpu_idx, + u32 spi_irq, bool level) +{ + return gzvm_vgic_inject_irq(gzvm, 0, GZVM_IRQ_TYPE_SPI, + spi_irq + GZVM_VGIC_NR_PRIVATE_IRQS, + level); +} + +int gzvm_arch_create_device(u16 vm_id, struct gzvm_create_device *gzvm_dev) +{ + struct arm_smccc_res res; + + return gzvm_hypcall_wrapper(MT_HVC_GZVM_CREATE_DEVICE, vm_id, + virt_to_phys(gzvm_dev), 0, 0, 0, 0, 0, + &res); +} + +int gzvm_arch_inject_irq(struct gzvm *gzvm, unsigned int vcpu_idx, + u32 irq_type, u32 irq, bool level) +{ + /* default use spi */ + return gzvm_vgic_inject_spi(gzvm, vcpu_idx, irq, level); +} diff --git a/arch/arm64/include/uapi/asm/gzvm_arch.h b/arch/arm64/include/uapi/asm/gzvm_arch.h index e56b4700e07e..acfe9be0f849 100644 --- a/arch/arm64/include/uapi/asm/gzvm_arch.h +++ b/arch/arm64/include/uapi/asm/gzvm_arch.h @@ -47,4 +47,8 @@ #define GZVM_REG_ARM_CORE_REG(name) \ (offsetof(struct gzvm_regs, name) / sizeof(__u32)) +#define GZVM_VGIC_NR_SGIS 16 +#define GZVM_VGIC_NR_PPIS 16 +#define GZVM_VGIC_NR_PRIVATE_IRQS (GZVM_VGIC_NR_SGIS + GZVM_VGIC_NR_PPIS) + #endif /* __GZVM_ARCH_H__ */ diff --git a/drivers/virt/geniezone/gzvm_common.h b/drivers/virt/geniezone/gzvm_common.h new file mode 100644 index 000000000000..d0e39ded79e6 --- /dev/null +++ b/drivers/virt/geniezone/gzvm_common.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2023 MediaTek Inc. 
+ */ + +#ifndef __GZ_COMMON_H__ +#define __GZ_COMMON_H__ + +int gzvm_irqchip_inject_irq(struct gzvm *gzvm, unsigned int vcpu_idx, + u32 irq_type, u32 irq, bool level); + +#endif /* __GZVM_COMMON_H__ */ diff --git a/drivers/virt/geniezone/gzvm_vm.c b/drivers/virt/geniezone/gzvm_vm.c index aea99d050653..b1397180cd02 100644 --- a/drivers/virt/geniezone/gzvm_vm.c +++ b/drivers/virt/geniezone/gzvm_vm.c @@ -11,6 +11,7 @@ #include #include #include +#include "gzvm_common.h" static DEFINE_MUTEX(gzvm_list_lock); static LIST_HEAD(gzvm_list); @@ -260,6 +261,73 @@ gzvm_vm_ioctl_set_memory_region(struct gzvm *gzvm, return register_memslot_addr_range(gzvm, memslot); } +int gzvm_irqchip_inject_irq(struct gzvm *gzvm, unsigned int vcpu_idx, + u32 irq_type, u32 irq, bool level) +{ + return gzvm_arch_inject_irq(gzvm, vcpu_idx, irq_type, irq, level); +} + +static int gzvm_vm_ioctl_irq_line(struct gzvm *gzvm, + struct gzvm_irq_level *irq_level) +{ + u32 irq = irq_level->irq; + u32 irq_type, vcpu_idx, vcpu2_idx, irq_num; + bool level = irq_level->level; + + irq_type = FIELD_GET(GZVM_IRQ_LINE_TYPE, irq); + vcpu_idx = FIELD_GET(GZVM_IRQ_LINE_VCPU, irq); + vcpu2_idx = FIELD_GET(GZVM_IRQ_LINE_VCPU2, irq) * (GZVM_IRQ_VCPU_MASK + 1); + irq_num = FIELD_GET(GZVM_IRQ_LINE_NUM, irq); + + return gzvm_irqchip_inject_irq(gzvm, vcpu_idx + vcpu2_idx, irq_type, irq_num, + level); +} + +static int gzvm_vm_ioctl_create_device(struct gzvm *gzvm, void __user *argp) +{ + struct gzvm_create_device *gzvm_dev; + void *dev_data = NULL; + int ret; + + gzvm_dev = (struct gzvm_create_device *)alloc_pages_exact(PAGE_SIZE, + GFP_KERNEL); + if (!gzvm_dev) + return -ENOMEM; + if (copy_from_user(gzvm_dev, argp, sizeof(*gzvm_dev))) { + ret = -EFAULT; + goto err_free_dev; + } + + if (gzvm_dev->attr_addr != 0 && gzvm_dev->attr_size != 0) { + size_t attr_size = gzvm_dev->attr_size; + void __user *attr_addr = (void __user *)gzvm_dev->attr_addr; + + /* Size of device specific data should not be over a page. 
*/ + if (attr_size > PAGE_SIZE) + return -EINVAL; + + dev_data = alloc_pages_exact(attr_size, GFP_KERNEL); + if (!dev_data) { + ret = -ENOMEM; + goto err_free_dev; + } + + if (copy_from_user(dev_data, attr_addr, attr_size)) { + ret = -EFAULT; + goto err_free_dev_data; + } + gzvm_dev->attr_addr = virt_to_phys(dev_data); + } + + ret = gzvm_arch_create_device(gzvm->vm_id, gzvm_dev); +err_free_dev_data: + if (dev_data) + free_pages_exact(dev_data, 0); +err_free_dev: + free_pages_exact(gzvm_dev, 0); + return ret; +} + static int gzvm_vm_ioctl_enable_cap(struct gzvm *gzvm, struct gzvm_enable_cap *cap, void __user *argp) @@ -294,6 +362,20 @@ static long gzvm_vm_ioctl(struct file *filp, unsigned int ioctl, ret = gzvm_vm_ioctl_set_memory_region(gzvm, &userspace_mem); break; } + case GZVM_IRQ_LINE: { + struct gzvm_irq_level irq_event; + + if (copy_from_user(&irq_event, argp, sizeof(irq_event))) { + ret = -EFAULT; + goto out; + } + ret = gzvm_vm_ioctl_irq_line(gzvm, &irq_event); + break; + } + case GZVM_CREATE_DEVICE: { + ret = gzvm_vm_ioctl_create_device(gzvm, argp); + break; + } case GZVM_ENABLE_CAP: { struct gzvm_enable_cap cap; diff --git a/include/linux/gzvm_drv.h b/include/linux/gzvm_drv.h index c42edb4345cc..d86885d46195 100644 --- a/include/linux/gzvm_drv.h +++ b/include/linux/gzvm_drv.h @@ -108,4 +108,8 @@ int gzvm_arch_vcpu_run(struct gzvm_vcpu *vcpu, __u64 *exit_reason); int gzvm_arch_destroy_vcpu(u16 vm_id, int vcpuid); int gzvm_arch_inform_exit(u16 vm_id); +int gzvm_arch_create_device(u16 vm_id, struct gzvm_create_device *gzvm_dev); +int gzvm_arch_inject_irq(struct gzvm *gzvm, unsigned int vcpu_idx, + u32 irq_type, u32 irq, bool level); + #endif /* __GZVM_DRV_H__ */ diff --git a/include/uapi/linux/gzvm.h b/include/uapi/linux/gzvm.h index 4814c82b0dff..fb019d232a98 100644 --- a/include/uapi/linux/gzvm.h +++ b/include/uapi/linux/gzvm.h @@ -64,6 +64,72 @@ struct gzvm_userspace_memory_region { #define GZVM_SET_USER_MEMORY_REGION _IOW(GZVM_IOC_MAGIC, 0x46, \ struct gzvm_userspace_memory_region) +/* for GZVM_IRQ_LINE, irq field index values */ +#define GZVM_IRQ_VCPU_MASK 0xff +#define GZVM_IRQ_LINE_TYPE GENMASK(27, 24) +#define GZVM_IRQ_LINE_VCPU GENMASK(23, 16) +#define GZVM_IRQ_LINE_VCPU2 GENMASK(31, 28) +#define GZVM_IRQ_LINE_NUM GENMASK(15, 0) + +/* irq_type field */ +#define GZVM_IRQ_TYPE_CPU 0 +#define GZVM_IRQ_TYPE_SPI 1 +#define GZVM_IRQ_TYPE_PPI 2 + +/* out-of-kernel GIC cpu interrupt injection irq_number field */ +#define GZVM_IRQ_CPU_IRQ 0 +#define GZVM_IRQ_CPU_FIQ 1 + +struct gzvm_irq_level { + union { + __u32 irq; + __s32 status; + }; + __u32 level; +}; + +#define GZVM_IRQ_LINE _IOW(GZVM_IOC_MAGIC, 0x61, \ + struct gzvm_irq_level) + +enum gzvm_device_type { + GZVM_DEV_TYPE_ARM_VGIC_V3_DIST = 0, + GZVM_DEV_TYPE_ARM_VGIC_V3_REDIST = 1, + GZVM_DEV_TYPE_MAX, +}; + +/** + * struct gzvm_create_device: For GZVM_CREATE_DEVICE. + * @dev_type: Device type. + * @id: Device id. + * @flags: Bypass to hypervisor to handle them and these are flags of virtual + * devices. + * @dev_addr: Device ipa address in VM's view. + * @dev_reg_size: Device register range size. + * @attr_addr: If user -> kernel, this is user virtual address of device + * specific attributes (if needed). If kernel->hypervisor, + * this is ipa. + * @attr_size: This attr_size is the buffer size in bytes of each attribute + * needed from various devices. The attribute here refers to the + * additional data passed from VMM(e.g. Crosvm) to GenieZone + * hypervisor when virtual devices were to be created. 
Thus, + * we need attr_addr and attr_size in the gzvm_create_device + * structure to keep track of the attribute mentioned. + * + * Store information needed to create device. + */ +struct gzvm_create_device { + __u32 dev_type; + __u32 id; + __u64 flags; + __u64 dev_addr; + __u64 dev_reg_size; + __u64 attr_addr; + __u64 attr_size; +}; + +#define GZVM_CREATE_DEVICE _IOWR(GZVM_IOC_MAGIC, 0xe0, \ + struct gzvm_create_device) + /* * ioctls for vcpu fds */ From patchwork Thu Jul 27 07:59:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yi-De Wu X-Patchwork-Id: 707219 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EE932C25B5F for ; Thu, 27 Jul 2023 08:04:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233764AbjG0IEB (ORCPT ); Thu, 27 Jul 2023 04:04:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39476 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233743AbjG0ICy (ORCPT ); Thu, 27 Jul 2023 04:02:54 -0400 Received: from mailgw02.mediatek.com (unknown [210.61.82.184]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 742264ECA; Thu, 27 Jul 2023 01:00:23 -0700 (PDT) X-UUID: 9e5f87382c5311eeb20a276fd37b9834-20230727 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mediatek.com; s=dk; h=Content-Type:MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:CC:To:From; bh=T+a0YbsHWO7x+ja1GgHWsLxS59n6fxSQ53QxUvLZwKU=; b=SqioslB36U7CHwB1dERlBJoeyn0EPsMdOIJLzGyVbaCuNkiTuplSVxsggfNl8LKBdvOQaDnLq7IKHJflz0PTog1w/HShUC96dZ7ArgDghosMAJa8VkaXLS8W6MQaer5HQTQFTB+h8RAYQE7TGl/q7D7oyq9pcoQqGiZrGdXxwpI=; X-CID-P-RULE: Release_Ham X-CID-O-INFO: VERSION:1.1.29, REQID:d60fa5eb-5bb7-4a57-a374-06758e1205ef, IP:0, U RL:0,TC:0,Content:-25,EDM:0,RT:0,SF:95,FILE:0,BULK:0,RULE:Release_Ham,ACTI ON:release,TS:70 X-CID-INFO: VERSION:1.1.29, REQID:d60fa5eb-5bb7-4a57-a374-06758e1205ef, IP:0, URL :0,TC:0,Content:-25,EDM:0,RT:0,SF:95,FILE:0,BULK:0,RULE:Spam_GS981B3D,ACTI ON:quarantine,TS:70 X-CID-META: VersionHash:e7562a7, CLOUDID:23a862d2-cd77-4e67-bbfd-aa4eaace762f, B ulkID:230727160015WJHWLLOP,BulkQuantity:0,Recheck:0,SF:38|29|28|17|19|48,T C:nil,Content:0,EDM:-3,IP:nil,URL:0,File:nil,Bulk:nil,QS:nil,BEC:nil,COL:0 ,OSI:0,OSA:0,AV:0,LES:1,SPR:NO,DKR:0,DKP:0,BRR:0,BRE:0 X-CID-BVR: 0 X-CID-BAS: 0,_,0,_ X-CID-FACTOR: TF_CID_SPAM_FAS, TF_CID_SPAM_FSD, TF_CID_SPAM_SNR, TF_CID_SPAM_SDM, TF_CID_SPAM_ASC X-UUID: 9e5f87382c5311eeb20a276fd37b9834-20230727 Received: from mtkmbs10n1.mediatek.inc [(172.21.101.34)] by mailgw02.mediatek.com (envelope-from ) (Generic MTA with TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 256/256) with ESMTP id 1783256558; Thu, 27 Jul 2023 16:00:14 +0800 Received: from mtkmbs11n2.mediatek.inc (172.21.101.187) by mtkmbs11n1.mediatek.inc (172.21.101.185) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.26; Thu, 27 Jul 2023 16:00:14 +0800 Received: from mtksdccf07.mediatek.inc (172.21.84.99) by mtkmbs11n2.mediatek.inc (172.21.101.73) with Microsoft SMTP Server id 15.2.1118.26 via Frontend Transport; Thu, 27 Jul 2023 16:00:14 +0800 From: Yi-De Wu To: Yingshiuan Pan , Ze-Yu Wang , Yi-De Wu , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Jonathan Corbet , Catalin Marinas , Will Deacon , Arnd 
Bergmann , Matthias Brugger , AngeloGioacchino Del Regno CC: , , , , , , David Bradil , Trilok Soni , Ivan Tseng , Jade Shih , My Chuang , Shawn Hsiao , PeiLun Suei , Liju Chen , Willix Yeh Subject: [PATCH v5 06/12] virt: geniezone: Add irqfd support Date: Thu, 27 Jul 2023 15:59:59 +0800 Message-ID: <20230727080005.14474-7-yi-de.wu@mediatek.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20230727080005.14474-1-yi-de.wu@mediatek.com> References: <20230727080005.14474-1-yi-de.wu@mediatek.com> MIME-Version: 1.0 X-MTK: N Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org From: "Yingshiuan Pan" irqfd enables other threads than vcpu threads to inject virtual interrupt through irqfd asynchronously rather through ioctl interface. This interface is necessary for VMM which creates separated thread for IO handling or uses vhost devices. Signed-off-by: Yingshiuan Pan Signed-off-by: Liju Chen Signed-off-by: Yi-De Wu --- arch/arm64/geniezone/gzvm_arch_common.h | 18 + drivers/virt/geniezone/Makefile | 3 +- drivers/virt/geniezone/gzvm_irqfd.c | 566 ++++++++++++++++++++++++ drivers/virt/geniezone/gzvm_main.c | 4 + drivers/virt/geniezone/gzvm_vcpu.c | 1 + drivers/virt/geniezone/gzvm_vm.c | 18 + include/linux/gzvm_drv.h | 26 ++ include/uapi/linux/gzvm.h | 26 ++ 8 files changed, 660 insertions(+), 2 deletions(-) create mode 100644 drivers/virt/geniezone/gzvm_irqfd.c diff --git a/arch/arm64/geniezone/gzvm_arch_common.h b/arch/arm64/geniezone/gzvm_arch_common.h index 9be9cf77faa3..051d8f49a1df 100644 --- a/arch/arm64/geniezone/gzvm_arch_common.h +++ b/arch/arm64/geniezone/gzvm_arch_common.h @@ -45,6 +45,8 @@ enum { #define MT_HVC_GZVM_ENABLE_CAP GZVM_HCALL_ID(GZVM_FUNC_ENABLE_CAP) #define MT_HVC_GZVM_INFORM_EXIT GZVM_HCALL_ID(GZVM_FUNC_INFORM_EXIT) +#define GIC_V3_NR_LRS 16 + /** * gzvm_hypcall_wrapper() - the wrapper for hvc calls * @a0-a7: arguments passed in registers 0 to 7 @@ -72,6 +74,22 @@ static inline u16 get_vcpuid_from_tuple(unsigned int tuple) return (u16)(tuple & 0xffff); } +/** + * struct gzvm_vcpu_hwstate: Sync architecture state back to host for handling + * @nr_lrs: The available LRs(list registers) in Soc. + * @__pad: add an explicit '__u32 __pad;' in the middle to make it clear + * what the actual layout is. + * @lr: The array of LRs(list registers). + * + * - Keep the same layout of hypervisor data struct. + * - Sync list registers back for acking virtual device interrupt status. + */ +struct gzvm_vcpu_hwstate { + __le32 nr_lrs; + __le32 __pad; + __le64 lr[GIC_V3_NR_LRS]; +}; + static inline unsigned int assemble_vm_vcpu_tuple(u16 vmid, u16 vcpuid) { diff --git a/drivers/virt/geniezone/Makefile b/drivers/virt/geniezone/Makefile index 8ebf2db0c970..19a835b0aac2 100644 --- a/drivers/virt/geniezone/Makefile +++ b/drivers/virt/geniezone/Makefile @@ -7,5 +7,4 @@ GZVM_DIR ?= ../../../drivers/virt/geniezone gzvm-y := $(GZVM_DIR)/gzvm_main.o $(GZVM_DIR)/gzvm_vm.o \ - $(GZVM_DIR)/gzvm_vcpu.o - + $(GZVM_DIR)/gzvm_vcpu.o $(GZVM_DIR)/gzvm_irqfd.o diff --git a/drivers/virt/geniezone/gzvm_irqfd.c b/drivers/virt/geniezone/gzvm_irqfd.c new file mode 100644 index 000000000000..b10ac3a940ee --- /dev/null +++ b/drivers/virt/geniezone/gzvm_irqfd.c @@ -0,0 +1,566 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2023 MediaTek Inc. 
+ */ + +#include +#include +#include +#include "gzvm_common.h" + +struct gzvm_irq_ack_notifier { + struct hlist_node link; + unsigned int gsi; + void (*irq_acked)(struct gzvm_irq_ack_notifier *ian); +}; + +/** + * struct gzvm_kernel_irqfd_resampler - irqfd resampler descriptor. + * @gzvm: Poiner to gzvm. + * @list: List of resampling struct _irqfd objects sharing this gsi. + * RCU list modified under gzvm->irqfds.resampler_lock. + * @notifier: gzvm irq ack notifier. + * @link: Entry in list of gzvm->irqfd.resampler_list. + * Use for sharing esamplers among irqfds on the same gsi. + * Accessed and modified under gzvm->irqfds.resampler_lock. + * + * Resampling irqfds are a special variety of irqfds used to emulate + * level triggered interrupts. The interrupt is asserted on eventfd + * trigger. On acknowledgment through the irq ack notifier, the + * interrupt is de-asserted and userspace is notified through the + * resamplefd. All resamplers on the same gsi are de-asserted + * together, so we don't need to track the state of each individual + * user. We can also therefore share the same irq source ID. + */ +struct gzvm_kernel_irqfd_resampler { + struct gzvm *gzvm; + + struct list_head list; + struct gzvm_irq_ack_notifier notifier; + + struct list_head link; +}; + +/** + * struct gzvm_kernel_irqfd: gzvm kernel irqfd descriptor. + * @gzvm: Pointer to struct gzvm. + * @wait: Wait queue entry. + * @gsi: Used for level IRQ fast-path. + * @resampler: The resampler used by this irqfd (resampler-only). + * @resamplefd: Eventfd notified on resample (resampler-only). + * @resampler_link: Entry in list of irqfds for a resampler (resampler-only). + * @eventfd: Used for setup/shutdown. + * @list: struct list_head. + * @pt: struct poll_table_struct. + * @shutdown: struct work_struct. + */ +struct gzvm_kernel_irqfd { + struct gzvm *gzvm; + wait_queue_entry_t wait; + + int gsi; + + struct gzvm_kernel_irqfd_resampler *resampler; + + struct eventfd_ctx *resamplefd; + + struct list_head resampler_link; + + struct eventfd_ctx *eventfd; + struct list_head list; + poll_table pt; + struct work_struct shutdown; +}; + +static struct workqueue_struct *irqfd_cleanup_wq; + +/** + * irqfd_set_spi(): irqfd to inject virtual interrupt. + * @gzvm: Pointer to gzvm. + * @irq_source_id: irq source id. + * @irq: This is spi interrupt number (starts from 0 instead of 32). + * @level: irq triggered level. + * @line_status: irq status. + */ +static void irqfd_set_spi(struct gzvm *gzvm, int irq_source_id, u32 irq, + int level, bool line_status) +{ + if (level) + gzvm_irqchip_inject_irq(gzvm, irq_source_id, 0, irq, level); +} + +/** + * irqfd_resampler_ack() - Notify all of the resampler irqfds using this GSI + * when IRQ de-assert once. + * @ian: Pointer to gzvm_irq_ack_notifier. + * + * Since resampler irqfds share an IRQ source ID, we de-assert once + * then notify all of the resampler irqfds using this GSI. We can't + * do multiple de-asserts or we risk racing with incoming re-asserts. 
+ */ +static void irqfd_resampler_ack(struct gzvm_irq_ack_notifier *ian) +{ + struct gzvm_kernel_irqfd_resampler *resampler; + struct gzvm *gzvm; + struct gzvm_kernel_irqfd *irqfd; + int idx; + + resampler = container_of(ian, + struct gzvm_kernel_irqfd_resampler, notifier); + gzvm = resampler->gzvm; + + irqfd_set_spi(gzvm, GZVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID, + resampler->notifier.gsi, 0, false); + + idx = srcu_read_lock(&gzvm->irq_srcu); + + list_for_each_entry_srcu(irqfd, &resampler->list, resampler_link, + srcu_read_lock_held(&gzvm->irq_srcu)) { + eventfd_signal(irqfd->resamplefd, 1); + } + + srcu_read_unlock(&gzvm->irq_srcu, idx); +} + +static void gzvm_register_irq_ack_notifier(struct gzvm *gzvm, + struct gzvm_irq_ack_notifier *ian) +{ + mutex_lock(&gzvm->irq_lock); + hlist_add_head_rcu(&ian->link, &gzvm->irq_ack_notifier_list); + mutex_unlock(&gzvm->irq_lock); +} + +static void gzvm_unregister_irq_ack_notifier(struct gzvm *gzvm, + struct gzvm_irq_ack_notifier *ian) +{ + mutex_lock(&gzvm->irq_lock); + hlist_del_init_rcu(&ian->link); + mutex_unlock(&gzvm->irq_lock); + synchronize_srcu(&gzvm->irq_srcu); +} + +static void irqfd_resampler_shutdown(struct gzvm_kernel_irqfd *irqfd) +{ + struct gzvm_kernel_irqfd_resampler *resampler = irqfd->resampler; + struct gzvm *gzvm = resampler->gzvm; + + mutex_lock(&gzvm->irqfds.resampler_lock); + + list_del_rcu(&irqfd->resampler_link); + synchronize_srcu(&gzvm->irq_srcu); + + if (list_empty(&resampler->list)) { + list_del(&resampler->link); + gzvm_unregister_irq_ack_notifier(gzvm, &resampler->notifier); + irqfd_set_spi(gzvm, GZVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID, + resampler->notifier.gsi, 0, false); + kfree(resampler); + } + + mutex_unlock(&gzvm->irqfds.resampler_lock); +} + +/** + * irqfd_shutdown() - Race-free decouple logic (ordering is critical). + * @work: Pointer to work_struct. + */ +static void irqfd_shutdown(struct work_struct *work) +{ + struct gzvm_kernel_irqfd *irqfd = + container_of(work, struct gzvm_kernel_irqfd, shutdown); + struct gzvm *gzvm = irqfd->gzvm; + u64 cnt; + + /* Make sure irqfd has been initialized in assign path. */ + synchronize_srcu(&gzvm->irq_srcu); + + /* + * Synchronize with the wait-queue and unhook ourselves to prevent + * further events. + */ + eventfd_ctx_remove_wait_queue(irqfd->eventfd, &irqfd->wait, &cnt); + + if (irqfd->resampler) { + irqfd_resampler_shutdown(irqfd); + eventfd_ctx_put(irqfd->resamplefd); + } + + /* + * It is now safe to release the object's resources + */ + eventfd_ctx_put(irqfd->eventfd); + kfree(irqfd); +} + +/** + * irqfd_is_active() - Assumes gzvm->irqfds.lock is held. + * @irqfd: Pointer to gzvm_kernel_irqfd. + * + * Return: + * * true - irqfd is active. + */ +static bool irqfd_is_active(struct gzvm_kernel_irqfd *irqfd) +{ + return list_empty(&irqfd->list) ? false : true; +} + +/** + * irqfd_deactivate() - Mark the irqfd as inactive and schedule it for removal. + * assumes gzvm->irqfds.lock is held. + * @irqfd: Pointer to gzvm_kernel_irqfd. + */ +static void irqfd_deactivate(struct gzvm_kernel_irqfd *irqfd) +{ + if (!irqfd_is_active(irqfd)) + return; + + list_del_init(&irqfd->list); + + queue_work(irqfd_cleanup_wq, &irqfd->shutdown); +} + +/** + * irqfd_wakeup() - Callback of irqfd wait queue, would be woken by writing to + * irqfd to do virtual interrupt injection. + * @wait: Pointer to wait_queue_entry_t. + * @mode: Unused. + * @sync: Unused. + * @key: Get flags about Epoll events. 
+ * + * Return: + * * 0 - Success + */ +static int irqfd_wakeup(wait_queue_entry_t *wait, unsigned int mode, int sync, + void *key) +{ + struct gzvm_kernel_irqfd *irqfd = + container_of(wait, struct gzvm_kernel_irqfd, wait); + __poll_t flags = key_to_poll(key); + struct gzvm *gzvm = irqfd->gzvm; + + if (flags & EPOLLIN) { + u64 cnt; + + eventfd_ctx_do_read(irqfd->eventfd, &cnt); + /* gzvm's irq injection is not blocked, don't need workq */ + irqfd_set_spi(gzvm, GZVM_USERSPACE_IRQ_SOURCE_ID, irqfd->gsi, + 1, false); + } + + if (flags & EPOLLHUP) { + /* The eventfd is closing, detach from GZVM */ + unsigned long iflags; + + spin_lock_irqsave(&gzvm->irqfds.lock, iflags); + + /* + * Do more check if someone deactivated the irqfd before + * we could acquire the irqfds.lock. + */ + if (irqfd_is_active(irqfd)) + irqfd_deactivate(irqfd); + + spin_unlock_irqrestore(&gzvm->irqfds.lock, iflags); + } + + return 0; +} + +static void irqfd_ptable_queue_proc(struct file *file, wait_queue_head_t *wqh, + poll_table *pt) +{ + struct gzvm_kernel_irqfd *irqfd = + container_of(pt, struct gzvm_kernel_irqfd, pt); + add_wait_queue_priority(wqh, &irqfd->wait); +} + +static int gzvm_irqfd_assign(struct gzvm *gzvm, struct gzvm_irqfd *args) +{ + struct gzvm_kernel_irqfd *irqfd, *tmp; + struct fd f; + struct eventfd_ctx *eventfd = NULL, *resamplefd = NULL; + int ret; + __poll_t events; + int idx; + + irqfd = kzalloc(sizeof(*irqfd), GFP_KERNEL_ACCOUNT); + if (!irqfd) + return -ENOMEM; + + irqfd->gzvm = gzvm; + irqfd->gsi = args->gsi; + irqfd->resampler = NULL; + + INIT_LIST_HEAD(&irqfd->list); + INIT_WORK(&irqfd->shutdown, irqfd_shutdown); + + f = fdget(args->fd); + if (!f.file) { + ret = -EBADF; + goto out; + } + + eventfd = eventfd_ctx_fileget(f.file); + if (IS_ERR(eventfd)) { + ret = PTR_ERR(eventfd); + goto fail; + } + + irqfd->eventfd = eventfd; + + if (args->flags & GZVM_IRQFD_FLAG_RESAMPLE) { + struct gzvm_kernel_irqfd_resampler *resampler; + + resamplefd = eventfd_ctx_fdget(args->resamplefd); + if (IS_ERR(resamplefd)) { + ret = PTR_ERR(resamplefd); + goto fail; + } + + irqfd->resamplefd = resamplefd; + INIT_LIST_HEAD(&irqfd->resampler_link); + + mutex_lock(&gzvm->irqfds.resampler_lock); + + list_for_each_entry(resampler, + &gzvm->irqfds.resampler_list, link) { + if (resampler->notifier.gsi == irqfd->gsi) { + irqfd->resampler = resampler; + break; + } + } + + if (!irqfd->resampler) { + resampler = kzalloc(sizeof(*resampler), + GFP_KERNEL_ACCOUNT); + if (!resampler) { + ret = -ENOMEM; + mutex_unlock(&gzvm->irqfds.resampler_lock); + goto fail; + } + + resampler->gzvm = gzvm; + INIT_LIST_HEAD(&resampler->list); + resampler->notifier.gsi = irqfd->gsi; + resampler->notifier.irq_acked = irqfd_resampler_ack; + INIT_LIST_HEAD(&resampler->link); + + list_add(&resampler->link, &gzvm->irqfds.resampler_list); + gzvm_register_irq_ack_notifier(gzvm, + &resampler->notifier); + irqfd->resampler = resampler; + } + + list_add_rcu(&irqfd->resampler_link, &irqfd->resampler->list); + synchronize_srcu(&gzvm->irq_srcu); + + mutex_unlock(&gzvm->irqfds.resampler_lock); + } + + /* + * Install our own custom wake-up handling so we are notified via + * a callback whenever someone signals the underlying eventfd + */ + init_waitqueue_func_entry(&irqfd->wait, irqfd_wakeup); + init_poll_funcptr(&irqfd->pt, irqfd_ptable_queue_proc); + + spin_lock_irq(&gzvm->irqfds.lock); + + ret = 0; + list_for_each_entry(tmp, &gzvm->irqfds.items, list) { + if (irqfd->eventfd != tmp->eventfd) + continue; + /* This fd is used for another irq already. 
*/ + pr_err("already used: gsi=%d fd=%d\n", args->gsi, args->fd); + ret = -EBUSY; + spin_unlock_irq(&gzvm->irqfds.lock); + goto fail; + } + + idx = srcu_read_lock(&gzvm->irq_srcu); + + list_add_tail(&irqfd->list, &gzvm->irqfds.items); + + spin_unlock_irq(&gzvm->irqfds.lock); + + /* + * Check if there was an event already pending on the eventfd + * before we registered, and trigger it as if we didn't miss it. + */ + events = vfs_poll(f.file, &irqfd->pt); + + /* In case there is already a pending event */ + if (events & EPOLLIN) + irqfd_set_spi(gzvm, GZVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID, + irqfd->gsi, 1, false); + + srcu_read_unlock(&gzvm->irq_srcu, idx); + + /* + * do not drop the file until the irqfd is fully initialized, otherwise + * we might race against the EPOLLHUP + */ + fdput(f); + return 0; + +fail: + if (irqfd->resampler) + irqfd_resampler_shutdown(irqfd); + + if (resamplefd && !IS_ERR(resamplefd)) + eventfd_ctx_put(resamplefd); + + if (eventfd && !IS_ERR(eventfd)) + eventfd_ctx_put(eventfd); + + fdput(f); + +out: + kfree(irqfd); + return ret; +} + +static void gzvm_notify_acked_gsi(struct gzvm *gzvm, int gsi) +{ + struct gzvm_irq_ack_notifier *gian; + + hlist_for_each_entry_srcu(gian, &gzvm->irq_ack_notifier_list, + link, srcu_read_lock_held(&gzvm->irq_srcu)) + if (gian->gsi == gsi) + gian->irq_acked(gian); +} + +void gzvm_notify_acked_irq(struct gzvm *gzvm, unsigned int gsi) +{ + int idx; + + idx = srcu_read_lock(&gzvm->irq_srcu); + gzvm_notify_acked_gsi(gzvm, gsi); + srcu_read_unlock(&gzvm->irq_srcu, idx); +} + +/** + * gzvm_irqfd_deassign() - Shutdown any irqfd's that match fd+gsi. + * @gzvm: Pointer to gzvm. + * @args: Pointer to gzvm_irqfd. + * + * Return: + * * 0 - Success. + * * Negative value - Failure. + */ +static int gzvm_irqfd_deassign(struct gzvm *gzvm, struct gzvm_irqfd *args) +{ + struct gzvm_kernel_irqfd *irqfd, *tmp; + struct eventfd_ctx *eventfd; + + eventfd = eventfd_ctx_fdget(args->fd); + if (IS_ERR(eventfd)) + return PTR_ERR(eventfd); + + spin_lock_irq(&gzvm->irqfds.lock); + + list_for_each_entry_safe(irqfd, tmp, &gzvm->irqfds.items, list) { + if (irqfd->eventfd == eventfd && irqfd->gsi == args->gsi) + irqfd_deactivate(irqfd); + } + + spin_unlock_irq(&gzvm->irqfds.lock); + eventfd_ctx_put(eventfd); + + /* + * Block until we know all outstanding shutdown jobs have completed + * so that we guarantee there will not be any more interrupts on this + * gsi once this deassign function returns. + */ + flush_workqueue(irqfd_cleanup_wq); + + return 0; +} + +int gzvm_irqfd(struct gzvm *gzvm, struct gzvm_irqfd *args) +{ + for (int i = 0; i < ARRAY_SIZE(args->pad); i++) { + if (args->pad[i]) + return -EINVAL; + } + + if (args->flags & + ~(GZVM_IRQFD_FLAG_DEASSIGN | GZVM_IRQFD_FLAG_RESAMPLE)) + return -EINVAL; + + if (args->flags & GZVM_IRQFD_FLAG_DEASSIGN) + return gzvm_irqfd_deassign(gzvm, args); + + return gzvm_irqfd_assign(gzvm, args); +} + +/** + * gzvm_vm_irqfd_init() - Initialize irqfd data structure per VM + * + * @gzvm: Pointer to struct gzvm. + * + * Return: + * * 0 - Success. + * * Negative - Failure. 
+ */ +int gzvm_vm_irqfd_init(struct gzvm *gzvm) +{ + mutex_init(&gzvm->irq_lock); + + spin_lock_init(&gzvm->irqfds.lock); + INIT_LIST_HEAD(&gzvm->irqfds.items); + INIT_LIST_HEAD(&gzvm->irqfds.resampler_list); + if (init_srcu_struct(&gzvm->irq_srcu)) + return -EINVAL; + INIT_HLIST_HEAD(&gzvm->irq_ack_notifier_list); + mutex_init(&gzvm->irqfds.resampler_lock); + + return 0; +} + +/** + * gzvm_vm_irqfd_release() - This function is called as the gzvm VM fd is being + * released. Shutdown all irqfds that still remain open. + * @gzvm: Pointer to gzvm. + */ +void gzvm_vm_irqfd_release(struct gzvm *gzvm) +{ + struct gzvm_kernel_irqfd *irqfd, *tmp; + + spin_lock_irq(&gzvm->irqfds.lock); + + list_for_each_entry_safe(irqfd, tmp, &gzvm->irqfds.items, list) + irqfd_deactivate(irqfd); + + spin_unlock_irq(&gzvm->irqfds.lock); + + /* + * Block until we know all outstanding shutdown jobs have completed. + */ + flush_workqueue(irqfd_cleanup_wq); +} + +/** + * gzvm_drv_irqfd_init() - Initialize the irqfd cleanup workqueue. + * + * Return: + * * 0 - Success. + * * Negative - Failure. + * + * Create a host-wide workqueue for issuing deferred shutdown requests + * aggregated from all vm* instances. We need our own isolated + * queue to ease flushing work items when a VM exits. + */ +int gzvm_drv_irqfd_init(void) +{ + irqfd_cleanup_wq = alloc_workqueue("gzvm-irqfd-cleanup", 0, 0); + if (!irqfd_cleanup_wq) + return -ENOMEM; + + return 0; +} + +void gzvm_drv_irqfd_exit(void) +{ + destroy_workqueue(irqfd_cleanup_wq); +} diff --git a/drivers/virt/geniezone/gzvm_main.c b/drivers/virt/geniezone/gzvm_main.c index b629b41a0cd9..d4d5d75d3660 100644 --- a/drivers/virt/geniezone/gzvm_main.c +++ b/drivers/virt/geniezone/gzvm_main.c @@ -110,11 +110,15 @@ static int gzvm_drv_probe(struct platform_device *pdev) if (ret) return ret; + ret = gzvm_drv_irqfd_init(); + if (ret) + return ret; return 0; } static int gzvm_drv_remove(struct platform_device *pdev) { + gzvm_drv_irqfd_exit(); gzvm_destroy_all_vms(); misc_deregister(&gzvm_dev); return 0; diff --git a/drivers/virt/geniezone/gzvm_vcpu.c b/drivers/virt/geniezone/gzvm_vcpu.c index e051343f2b0e..a717fc713b2e 100644 --- a/drivers/virt/geniezone/gzvm_vcpu.c +++ b/drivers/virt/geniezone/gzvm_vcpu.c @@ -227,6 +227,7 @@ int gzvm_vm_ioctl_create_vcpu(struct gzvm *gzvm, u32 cpuid) ret = -ENOMEM; goto free_vcpu; } + vcpu->hwstate = (void *)vcpu->run + PAGE_SIZE; vcpu->vcpuid = cpuid; vcpu->gzvm = gzvm; mutex_init(&vcpu->lock); diff --git a/drivers/virt/geniezone/gzvm_vm.c b/drivers/virt/geniezone/gzvm_vm.c index b1397180cd02..a93f5b0e7078 100644 --- a/drivers/virt/geniezone/gzvm_vm.c +++ b/drivers/virt/geniezone/gzvm_vm.c @@ -376,6 +376,16 @@ static long gzvm_vm_ioctl(struct file *filp, unsigned int ioctl, ret = gzvm_vm_ioctl_create_device(gzvm, argp); break; } + case GZVM_IRQFD: { + struct gzvm_irqfd data; + + if (copy_from_user(&data, argp, sizeof(data))) { + ret = -EFAULT; + goto out; + } + ret = gzvm_irqfd(gzvm, &data); + break; + } case GZVM_ENABLE_CAP: { struct gzvm_enable_cap cap; @@ -399,6 +409,7 @@ static void gzvm_destroy_vm(struct gzvm *gzvm) mutex_lock(&gzvm->lock); + gzvm_vm_irqfd_release(gzvm); gzvm_destroy_vcpus(gzvm); gzvm_arch_destroy_vm(gzvm->vm_id); @@ -444,6 +455,13 @@ static struct gzvm *gzvm_create_vm(unsigned long vm_type) gzvm->mm = current->mm; mutex_init(&gzvm->lock); + ret = gzvm_vm_irqfd_init(gzvm); + if (ret) { + pr_err("Failed to initialize irqfd\n"); + kfree(gzvm); + return ERR_PTR(ret); + } + mutex_lock(&gzvm_list_lock); list_add(&gzvm->vm_list,
&gzvm_list); mutex_unlock(&gzvm_list_lock); diff --git a/include/linux/gzvm_drv.h b/include/linux/gzvm_drv.h index d86885d46195..af7043d66567 100644 --- a/include/linux/gzvm_drv.h +++ b/include/linux/gzvm_drv.h @@ -9,6 +9,7 @@ #include #include #include +#include #define GZVM_VCPU_MMAP_SIZE PAGE_SIZE #define INVALID_VM_ID 0xffff @@ -23,6 +24,8 @@ #define ERR_NOT_SUPPORTED (-24) #define ERR_NOT_IMPLEMENTED (-27) #define ERR_FAULT (-40) +#define GZVM_USERSPACE_IRQ_SOURCE_ID 0 +#define GZVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID 1 /* * The following data structures are for data transferring between driver and @@ -66,6 +69,7 @@ struct gzvm_vcpu { /* lock of vcpu*/ struct mutex lock; struct gzvm_vcpu_run *run; + struct gzvm_vcpu_hwstate *hwstate; }; struct gzvm { @@ -75,8 +79,23 @@ struct gzvm { struct gzvm_memslot memslot[GZVM_MAX_MEM_REGION]; /* lock for list_add*/ struct mutex lock; + + struct { + /* lock for irqfds list operation */ + spinlock_t lock; + struct list_head items; + struct list_head resampler_list; + /* lock for irqfds resampler */ + struct mutex resampler_lock; + } irqfds; + struct list_head vm_list; u16 vm_id; + + struct hlist_head irq_ack_notifier_list; + struct srcu_struct irq_srcu; + /* lock for irq injection */ + struct mutex irq_lock; }; long gzvm_dev_ioctl_check_extension(struct gzvm *gzvm, unsigned long args); @@ -112,4 +131,11 @@ int gzvm_arch_create_device(u16 vm_id, struct gzvm_create_device *gzvm_dev); int gzvm_arch_inject_irq(struct gzvm *gzvm, unsigned int vcpu_idx, u32 irq_type, u32 irq, bool level); +void gzvm_notify_acked_irq(struct gzvm *gzvm, unsigned int gsi); +int gzvm_irqfd(struct gzvm *gzvm, struct gzvm_irqfd *args); +int gzvm_drv_irqfd_init(void); +void gzvm_drv_irqfd_exit(void); +int gzvm_vm_irqfd_init(struct gzvm *gzvm); +void gzvm_vm_irqfd_release(struct gzvm *gzvm); + #endif /* __GZVM_DRV_H__ */ diff --git a/include/uapi/linux/gzvm.h b/include/uapi/linux/gzvm.h index fb019d232a98..f4b16d70f035 100644 --- a/include/uapi/linux/gzvm.h +++ b/include/uapi/linux/gzvm.h @@ -275,4 +275,30 @@ struct gzvm_one_reg { #define GZVM_REG_GENERIC 0x0000000000000000ULL +#define GZVM_IRQFD_FLAG_DEASSIGN BIT(0) +/* + * GZVM_IRQFD_FLAG_RESAMPLE indicates resamplefd is valid and specifies + * the irqfd to operate in resampling mode for level triggered interrupt + * emulation. + */ +#define GZVM_IRQFD_FLAG_RESAMPLE BIT(1) + +/** + * struct gzvm_irqfd: gzvm irqfd descriptor + * @fd: File descriptor. + * @gsi: Used for level IRQ fast-path. + * @flags: FLAG_DEASSIGN or FLAG_RESAMPLE. + * @resamplefd: The file descriptor of the resampler. + * @pad: Reserved for future-proof. 
+ */ +struct gzvm_irqfd { + __u32 fd; + __u32 gsi; + __u32 flags; + __u32 resamplefd; + __u8 pad[16]; +}; + +#define GZVM_IRQFD _IOW(GZVM_IOC_MAGIC, 0x76, struct gzvm_irqfd) + #endif /* __GZVM_H__ */ From patchwork Thu Jul 27 08:00:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yi-De Wu X-Patchwork-Id: 707221 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1B85DC18E72 for ; Thu, 27 Jul 2023 08:04:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233726AbjG0IEA (ORCPT ); Thu, 27 Jul 2023 04:04:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38380 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233716AbjG0ICu (ORCPT ); Thu, 27 Jul 2023 04:02:50 -0400 Received: from mailgw01.mediatek.com (unknown [60.244.123.138]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D480126A8; Thu, 27 Jul 2023 01:00:21 -0700 (PDT) X-UUID: 9f41a47e2c5311ee9cb5633481061a41-20230727 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mediatek.com; s=dk; h=Content-Type:MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:CC:To:From; bh=1sbzAAq0pL+hANw50K0jFP/mmk99nt1Mb8BD/sWj5o0=; b=kW9K7NEQb98srDCVoluUxHXMGt+nLRIclst7FcUTWyyXpGRUmtuSnYsysfpGxraLjNWySJalikE9mlZ4GE8RL0eepmStX5BghzNideb5cP9axdngsJaiDV6X7ju9UYWetAzGlPsOuIFyyM/98pARHZ6d4tLJkWX3qwmnCqKACVs=; X-CID-P-RULE: Release_Ham X-CID-O-INFO: VERSION:1.1.29, REQID:18f482a3-6d8d-4f73-bfe3-9ccf4936a5db, IP:0, U RL:0,TC:0,Content:-25,EDM:0,RT:0,SF:95,FILE:0,BULK:0,RULE:Release_Ham,ACTI ON:release,TS:70 X-CID-INFO: VERSION:1.1.29, REQID:18f482a3-6d8d-4f73-bfe3-9ccf4936a5db, IP:0, URL :0,TC:0,Content:-25,EDM:0,RT:0,SF:95,FILE:0,BULK:0,RULE:Spam_GS981B3D,ACTI ON:quarantine,TS:70 X-CID-META: VersionHash:e7562a7, CLOUDID:73998342-d291-4e62-b539-43d7d78362ba, B ulkID:230727160017115FM6O1,BulkQuantity:0,Recheck:0,SF:19|48|38|29|28|17,T C:nil,Content:0,EDM:-3,IP:nil,URL:11|1,File:nil,Bulk:nil,QS:nil,BEC:nil,CO L:0,OSI:0,OSA:0,AV:0,LES:1,SPR:NO,DKR:0,DKP:0,BRR:0,BRE:0 X-CID-BVR: 0 X-CID-BAS: 0,_,0,_ X-CID-FACTOR: TF_CID_SPAM_ULN, TF_CID_SPAM_SNR, TF_CID_SPAM_SDM, TF_CID_SPAM_ASC, TF_CID_SPAM_FAS,TF_CID_SPAM_FSD X-UUID: 9f41a47e2c5311ee9cb5633481061a41-20230727 Received: from mtkmbs13n1.mediatek.inc [(172.21.101.193)] by mailgw01.mediatek.com (envelope-from ) (Generic MTA with TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 256/256) with ESMTP id 296318186; Thu, 27 Jul 2023 16:00:16 +0800 Received: from mtkmbs11n2.mediatek.inc (172.21.101.187) by MTKMBS14N2.mediatek.inc (172.21.101.76) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.26; Thu, 27 Jul 2023 16:00:14 +0800 Received: from mtksdccf07.mediatek.inc (172.21.84.99) by mtkmbs11n2.mediatek.inc (172.21.101.73) with Microsoft SMTP Server id 15.2.1118.26 via Frontend Transport; Thu, 27 Jul 2023 16:00:14 +0800 From: Yi-De Wu To: Yingshiuan Pan , Ze-Yu Wang , Yi-De Wu , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Jonathan Corbet , Catalin Marinas , Will Deacon , Arnd Bergmann , Matthias Brugger , AngeloGioacchino Del Regno CC: , , , , , , David Bradil , Trilok Soni , Ivan Tseng , Jade Shih , My Chuang , Shawn Hsiao , PeiLun Suei , Liju Chen , Willix Yeh Subject: [PATCH v5 07/12] virt: geniezone: Add 
ioeventfd support Date: Thu, 27 Jul 2023 16:00:00 +0800 Message-ID: <20230727080005.14474-8-yi-de.wu@mediatek.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20230727080005.14474-1-yi-de.wu@mediatek.com> References: <20230727080005.14474-1-yi-de.wu@mediatek.com> MIME-Version: 1.0 X-MTK: N Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org From: "Yingshiuan Pan" Ioeventfd leverages eventfd to provide an asynchronous notification mechanism for the VMM. The VMM can register an MMIO address and bind it to an eventfd. Once an MMIO trap occurs on the registered region, the corresponding eventfd is notified.
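As an illustration only (not part of this patch), the following userspace sketch shows how a VMM might bind an eventfd to a guest MMIO doorbell through the GZVM_IOEVENTFD ioctl added below. The VM file descriptor, the doorbell address and the 4-byte access width are assumptions made for the example, and it presumes the updated <linux/gzvm.h> uapi header is installed.

/* Hypothetical helper: register a wildcard (no datamatch) ioeventfd. */
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/gzvm.h>

static int gzvm_register_doorbell(int vm_fd, uint64_t doorbell_gpa,
				  int *out_efd)
{
	struct gzvm_ioeventfd args;
	int efd = eventfd(0, EFD_CLOEXEC);

	if (efd < 0)
		return -1;

	memset(&args, 0, sizeof(args));
	args.addr = doorbell_gpa;	/* guest-physical MMIO address to watch */
	args.len = 4;			/* only 4-byte accesses trigger the eventfd */
	args.fd = efd;
	args.flags = 0;			/* no DATAMATCH: any written value counts */

	if (ioctl(vm_fd, GZVM_IOEVENTFD, &args) < 0) {
		close(efd);
		return -1;
	}

	*out_efd = efd;			/* an I/O thread can now poll()/read() this fd */
	return 0;
}

Deassignment would presumably follow the same pattern with GZVM_IOEVENTFD_FLAG_DEASSIGN set and the same addr/len/fd, matching gzvm_deassign_ioeventfd() below.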
Signed-off-by: Yingshiuan Pan Signed-off-by: Liju Chen Signed-off-by: Yi-De Wu --- drivers/virt/geniezone/Makefile | 3 +- drivers/virt/geniezone/gzvm_ioeventfd.c | 273 ++++++++++++++++++++++++ drivers/virt/geniezone/gzvm_vcpu.c | 27 ++- drivers/virt/geniezone/gzvm_vm.c | 17 ++ include/linux/gzvm_drv.h | 12 ++ include/uapi/linux/gzvm.h | 25 +++ 6 files changed, 355 insertions(+), 2 deletions(-) create mode 100644 drivers/virt/geniezone/gzvm_ioeventfd.c diff --git a/drivers/virt/geniezone/Makefile b/drivers/virt/geniezone/Makefile index 19a835b0aac2..bc5ae49f2407 100644 --- a/drivers/virt/geniezone/Makefile +++ b/drivers/virt/geniezone/Makefile @@ -7,4 +7,5 @@ GZVM_DIR ?= ../../../drivers/virt/geniezone gzvm-y := $(GZVM_DIR)/gzvm_main.o $(GZVM_DIR)/gzvm_vm.o \ - $(GZVM_DIR)/gzvm_vcpu.o $(GZVM_DIR)/gzvm_irqfd.o + $(GZVM_DIR)/gzvm_vcpu.o $(GZVM_DIR)/gzvm_irqfd.o \ + $(GZVM_DIR)/gzvm_ioeventfd.o diff --git a/drivers/virt/geniezone/gzvm_ioeventfd.c b/drivers/virt/geniezone/gzvm_ioeventfd.c new file mode 100644 index 000000000000..8d41db16ada2 --- /dev/null +++ b/drivers/virt/geniezone/gzvm_ioeventfd.c @@ -0,0 +1,273 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2023 MediaTek Inc. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +struct gzvm_ioevent { + struct list_head list; + __u64 addr; + __u32 len; + struct eventfd_ctx *evt_ctx; + __u64 datamatch; + bool wildcard; +}; + +/** + * ioeventfd_check_collision() - Check for collisions; assumes gzvm->slots_lock is held. + * @gzvm: Pointer to gzvm. + * @p: Pointer to gzvm_ioevent. + * + * Return: + * * true - collision found + * * false - no collision + */ +static bool ioeventfd_check_collision(struct gzvm *gzvm, struct gzvm_ioevent *p) +{ + struct gzvm_ioevent *_p; + + list_for_each_entry(_p, &gzvm->ioevents, list) + if (_p->addr == p->addr && + (!_p->len || !p->len || + (_p->len == p->len && + (_p->wildcard || p->wildcard || + _p->datamatch == p->datamatch)))) + return true; + + return false; +} + +static void gzvm_ioevent_release(struct gzvm_ioevent *p) +{ + eventfd_ctx_put(p->evt_ctx); + list_del(&p->list); + kfree(p); +} + +static bool gzvm_ioevent_in_range(struct gzvm_ioevent *p, __u64 addr, int len, + const void *val) +{ + u64 _val; + + if (addr != p->addr) + /* address must be precise for a hit */ + return false; + + if (!p->len) + /* length = 0 means only look at the address, so always a hit */ + return true; + + if (len != p->len) + /* address-range must be precise for a hit */ + return false; + + if (p->wildcard) + /* all else equal, wildcard is always a hit */ + return true; + + /* otherwise, we have to actually compare the data */ + + WARN_ON_ONCE(!IS_ALIGNED((unsigned long)val, len)); + + switch (len) { + case 1: + _val = *(u8 *)val; + break; + case 2: + _val = *(u16 *)val; + break; + case 4: + _val = *(u32 *)val; + break; + case 8: + _val = *(u64 *)val; + break; + default: + return false; + } + + return _val == p->datamatch; +} + +static int gzvm_deassign_ioeventfd(struct gzvm *gzvm, + struct gzvm_ioeventfd *args) +{ + struct gzvm_ioevent *p, *tmp; + struct eventfd_ctx *evt_ctx; + int ret = -ENOENT; + bool wildcard; + + evt_ctx = eventfd_ctx_fdget(args->fd); + if (IS_ERR(evt_ctx)) + return PTR_ERR(evt_ctx); + + wildcard = !(args->flags & GZVM_IOEVENTFD_FLAG_DATAMATCH); + + mutex_lock(&gzvm->lock); + + list_for_each_entry_safe(p, tmp, &gzvm->ioevents, list) { + if (p->evt_ctx != evt_ctx || + p->addr != args->addr || + p->len != args->len || + p->wildcard != wildcard) + continue; + + if (!p->wildcard && p->datamatch != args->datamatch) + continue; + + gzvm_ioevent_release(p); + ret = 0; + break; + } + + mutex_unlock(&gzvm->lock); + + /* drop the reference taken at the beginning of this function */ + eventfd_ctx_put(evt_ctx); + + return ret; +} + +static int gzvm_assign_ioeventfd(struct gzvm *gzvm, struct gzvm_ioeventfd *args) +{ + struct eventfd_ctx *evt_ctx; + struct gzvm_ioevent *evt; + int ret; + + evt_ctx = eventfd_ctx_fdget(args->fd); + if (IS_ERR(evt_ctx)) + return PTR_ERR(evt_ctx); + + evt = kmalloc(sizeof(*evt), GFP_KERNEL); + if (!evt) + return -ENOMEM; + *evt = (struct gzvm_ioevent) { + .addr = args->addr, + .len = args->len, + .evt_ctx = evt_ctx, + }; + if (args->flags & GZVM_IOEVENTFD_FLAG_DATAMATCH) { + evt->datamatch = args->datamatch; + evt->wildcard = false; + } else { + evt->wildcard = true; + } + + if (ioeventfd_check_collision(gzvm, evt)) { + ret = -EEXIST; + goto err_free; + } + + mutex_lock(&gzvm->lock); + list_add_tail(&evt->list, &gzvm->ioevents); + mutex_unlock(&gzvm->lock); + + return 0; + +err_free: + kfree(evt); + eventfd_ctx_put(evt_ctx); + return ret; +} + +/** + * gzvm_ioeventfd_check_valid() - Check whether user arguments are valid. + * @args: Pointer to gzvm_ioeventfd. + * + * Return: + * * true if user arguments are valid. + * * false if user arguments are invalid.
+ */ +static bool gzvm_ioeventfd_check_valid(struct gzvm_ioeventfd *args) +{ + /* must be natural-word sized, or 0 to ignore length */ + switch (args->len) { + case 0: + case 1: + case 2: + case 4: + case 8: + break; + default: + return false; + } + + /* check for range overflow */ + if (args->addr + args->len < args->addr) + return false; + + /* check for extra flags that we don't understand */ + if (args->flags & ~GZVM_IOEVENTFD_VALID_FLAG_MASK) + return false; + + /* ioeventfd with no length can't be combined with DATAMATCH */ + if (!args->len && (args->flags & GZVM_IOEVENTFD_FLAG_DATAMATCH)) + return false; + + /* gzvm does not support pio bus ioeventfd */ + if (args->flags & GZVM_IOEVENTFD_FLAG_PIO) + return false; + + return true; +} + +/** + * gzvm_ioeventfd() - Register ioevent to ioevent list. + * @gzvm: Pointer to gzvm. + * @args: Pointer to gzvm_ioeventfd. + * + * Return: + * * 0 - Success. + * * Negative - Failure. + */ +int gzvm_ioeventfd(struct gzvm *gzvm, struct gzvm_ioeventfd *args) +{ + if (gzvm_ioeventfd_check_valid(args) == false) + return -EINVAL; + + if (args->flags & GZVM_IOEVENTFD_FLAG_DEASSIGN) + return gzvm_deassign_ioeventfd(gzvm, args); + return gzvm_assign_ioeventfd(gzvm, args); +} + +/** + * gzvm_ioevent_write() - Traverse this VM's registered ioeventfds to see if + * any of them need notifying. + * @vcpu: Pointer to vcpu. + * @addr: mmio address. + * @len: mmio size. + * @val: Pointer to the written data. + * + * Return: + * * true if this io has been sent to an ioeventfd's listener. + * * false if we cannot find any ioeventfd registering this mmio write. + */ +bool gzvm_ioevent_write(struct gzvm_vcpu *vcpu, __u64 addr, int len, + const void *val) +{ + struct gzvm_ioevent *e; + + list_for_each_entry(e, &vcpu->gzvm->ioevents, list) { + if (gzvm_ioevent_in_range(e, addr, len, val)) { + eventfd_signal(e->evt_ctx, 1); + return true; + } + } + return false; +} + +int gzvm_init_ioeventfd(struct gzvm *gzvm) +{ + INIT_LIST_HEAD(&gzvm->ioevents); + + return 0; +} diff --git a/drivers/virt/geniezone/gzvm_vcpu.c b/drivers/virt/geniezone/gzvm_vcpu.c index a717fc713b2e..72bd122a8be7 100644 --- a/drivers/virt/geniezone/gzvm_vcpu.c +++ b/drivers/virt/geniezone/gzvm_vcpu.c @@ -50,6 +50,30 @@ static long gzvm_vcpu_update_one_reg(struct gzvm_vcpu *vcpu, return 0; } +/** + * gzvm_vcpu_handle_mmio() - Handle mmio in kernel space. + * @vcpu: Pointer to vcpu. + * + * Return: + * * true - This mmio exit has been processed. + * * false - This mmio exit has not been processed, require userspace.
+ */ +static bool gzvm_vcpu_handle_mmio(struct gzvm_vcpu *vcpu) +{ + __u64 addr; + __u32 len; + const void *val_ptr; + + /* So far, we don't have in-kernel mmio read handler */ + if (!vcpu->run->mmio.is_write) + return false; + addr = vcpu->run->mmio.phys_addr; + len = vcpu->run->mmio.size; + val_ptr = &vcpu->run->mmio.data; + + return gzvm_ioevent_write(vcpu, addr, len, val_ptr); +} + /** * gzvm_vcpu_run() - Handle vcpu run ioctl, entry point to guest and exit * point from guest @@ -81,7 +105,8 @@ static long gzvm_vcpu_run(struct gzvm_vcpu *vcpu, void * __user argp) switch (exit_reason) { case GZVM_EXIT_MMIO: - need_userspace = true; + if (!gzvm_vcpu_handle_mmio(vcpu)) + need_userspace = true; break; /** * it's geniezone's responsibility to fill corresponding data diff --git a/drivers/virt/geniezone/gzvm_vm.c b/drivers/virt/geniezone/gzvm_vm.c index a93f5b0e7078..60bd017e41fa 100644 --- a/drivers/virt/geniezone/gzvm_vm.c +++ b/drivers/virt/geniezone/gzvm_vm.c @@ -386,6 +386,16 @@ static long gzvm_vm_ioctl(struct file *filp, unsigned int ioctl, ret = gzvm_irqfd(gzvm, &data); break; } + case GZVM_IOEVENTFD: { + struct gzvm_ioeventfd data; + + if (copy_from_user(&data, argp, sizeof(data))) { + ret = -EFAULT; + goto out; + } + ret = gzvm_ioeventfd(gzvm, &data); + break; + } case GZVM_ENABLE_CAP: { struct gzvm_enable_cap cap; @@ -462,6 +472,13 @@ static struct gzvm *gzvm_create_vm(unsigned long vm_type) return ERR_PTR(ret); } + ret = gzvm_init_ioeventfd(gzvm); + if (ret) { + pr_err("Failed to initialize ioeventfd\n"); + kfree(gzvm); + return ERR_PTR(ret); + } + mutex_lock(&gzvm_list_lock); list_add(&gzvm->vm_list, &gzvm_list); mutex_unlock(&gzvm_list_lock); diff --git a/include/linux/gzvm_drv.h b/include/linux/gzvm_drv.h index af7043d66567..d2985e4df4a0 100644 --- a/include/linux/gzvm_drv.h +++ b/include/linux/gzvm_drv.h @@ -6,6 +6,7 @@ #ifndef __GZVM_DRV_H__ #define __GZVM_DRV_H__ +#include #include #include #include @@ -89,6 +90,8 @@ struct gzvm { struct mutex resampler_lock; } irqfds; + struct list_head ioevents; + struct list_head vm_list; u16 vm_id; @@ -138,4 +141,13 @@ void gzvm_drv_irqfd_exit(void); int gzvm_vm_irqfd_init(struct gzvm *gzvm); void gzvm_vm_irqfd_release(struct gzvm *gzvm); +int gzvm_init_ioeventfd(struct gzvm *gzvm); +int gzvm_ioeventfd(struct gzvm *gzvm, struct gzvm_ioeventfd *args); +bool gzvm_ioevent_write(struct gzvm_vcpu *vcpu, __u64 addr, int len, + const void *val); +void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt); +struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr); +void add_wait_queue_priority(struct wait_queue_head *wq_head, + struct wait_queue_entry *wq_entry); + #endif /* __GZVM_DRV_H__ */ diff --git a/include/uapi/linux/gzvm.h b/include/uapi/linux/gzvm.h index f4b16d70f035..506ef975de02 100644 --- a/include/uapi/linux/gzvm.h +++ b/include/uapi/linux/gzvm.h @@ -301,4 +301,29 @@ struct gzvm_irqfd { #define GZVM_IRQFD _IOW(GZVM_IOC_MAGIC, 0x76, struct gzvm_irqfd) +enum { + gzvm_ioeventfd_flag_nr_datamatch = 0, + gzvm_ioeventfd_flag_nr_pio = 1, + gzvm_ioeventfd_flag_nr_deassign = 2, + gzvm_ioeventfd_flag_nr_max, +}; + +#define GZVM_IOEVENTFD_FLAG_DATAMATCH (1 << gzvm_ioeventfd_flag_nr_datamatch) +#define GZVM_IOEVENTFD_FLAG_PIO (1 << gzvm_ioeventfd_flag_nr_pio) +#define GZVM_IOEVENTFD_FLAG_DEASSIGN (1 << gzvm_ioeventfd_flag_nr_deassign) +#define GZVM_IOEVENTFD_VALID_FLAG_MASK ((1 << gzvm_ioeventfd_flag_nr_max) - 1) + +struct gzvm_ioeventfd { + __u64 datamatch; + /* private: legal pio/mmio address */ + __u64 addr; + /* 
private: 1, 2, 4, or 8 bytes; or 0 to ignore length */ + __u32 len; + __s32 fd; + __u32 flags; + __u8 pad[36]; +}; + +#define GZVM_IOEVENTFD _IOW(GZVM_IOC_MAGIC, 0x79, struct gzvm_ioeventfd) + #endif /* __GZVM_H__ */ From patchwork Thu Jul 27 08:00:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yi-De Wu X-Patchwork-Id: 707224 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1EA82C41513 for ; Thu, 27 Jul 2023 08:03:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233505AbjG0IDy (ORCPT ); Thu, 27 Jul 2023 04:03:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38152 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233630AbjG0ICu (ORCPT ); Thu, 27 Jul 2023 04:02:50 -0400 Received: from mailgw01.mediatek.com (unknown [60.244.123.138]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 11D114EC8; Thu, 27 Jul 2023 01:00:22 -0700 (PDT) X-UUID: 9fba05d62c5311ee9cb5633481061a41-20230727 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mediatek.com; s=dk; h=Content-Type:MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:CC:To:From; bh=JajbX1Okju+0tr5aqbH4Ib57jLX1HZ9NfnHfbV9V2II=; b=JxIXttrdTcG3y21qQTvuuuzxzO+gPA65r0P96e62b8RdeBAfIqggFYCK4DPB0uyqsil7HfHt/3hVWj5WJZR6Z5rTGMjP2rd5pqWTLXb0inG/o1DbqfITVv7xTZziO0SHIn7z93Z4sh1E+lTq0n29YRzySLqYIBuHygEemwCsv8I=; X-CID-P-RULE: Release_Ham X-CID-O-INFO: VERSION:1.1.29, REQID:b51d582e-525a-4850-b671-b546e96228d5, IP:0, U RL:0,TC:0,Content:-5,EDM:0,RT:0,SF:0,FILE:0,BULK:0,RULE:Release_Ham,ACTION :release,TS:-5 X-CID-META: VersionHash:e7562a7, CLOUDID:46a862d2-cd77-4e67-bbfd-aa4eaace762f, B ulkID:nil,BulkQuantity:0,Recheck:0,SF:102,TC:nil,Content:0,EDM:-3,IP:nil,U RL:11|1,File:nil,Bulk:nil,QS:nil,BEC:nil,COL:0,OSI:0,OSA:0,AV:0,LES:1,SPR: NO,DKR:0,DKP:0,BRR:0,BRE:0 X-CID-BVR: 0,NGT X-CID-BAS: 0,NGT,0,_ X-CID-FACTOR: TF_CID_SPAM_SNR,TF_CID_SPAM_ULN X-UUID: 9fba05d62c5311ee9cb5633481061a41-20230727 Received: from mtkmbs11n2.mediatek.inc [(172.21.101.187)] by mailgw01.mediatek.com (envelope-from ) (Generic MTA with TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 256/256) with ESMTP id 1208337025; Thu, 27 Jul 2023 16:00:16 +0800 Received: from mtkmbs11n2.mediatek.inc (172.21.101.187) by mtkmbs10n2.mediatek.inc (172.21.101.183) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.26; Thu, 27 Jul 2023 16:00:15 +0800 Received: from mtksdccf07.mediatek.inc (172.21.84.99) by mtkmbs11n2.mediatek.inc (172.21.101.73) with Microsoft SMTP Server id 15.2.1118.26 via Frontend Transport; Thu, 27 Jul 2023 16:00:15 +0800 From: Yi-De Wu To: Yingshiuan Pan , Ze-Yu Wang , Yi-De Wu , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Jonathan Corbet , Catalin Marinas , Will Deacon , Arnd Bergmann , "Matthias Brugger" , AngeloGioacchino Del Regno CC: , , , , , , "David Bradil" , Trilok Soni , "Ivan Tseng" , Jade Shih , "My Chuang" , Shawn Hsiao , PeiLun Suei , Liju Chen , Willix Yeh Subject: [PATCH v5 12/12] virt: geniezone: Add memory pin/unpin support Date: Thu, 27 Jul 2023 16:00:05 +0800 Message-ID: <20230727080005.14474-13-yi-de.wu@mediatek.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20230727080005.14474-1-yi-de.wu@mediatek.com> References: 
<20230727080005.14474-1-yi-de.wu@mediatek.com> MIME-Version: 1.0 X-MTK: N Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org From: "Jerry Wang" Implement memory pin/unpin. - use rb_tree to store pinned memory page - pin page when handling page fault - unpin page when handling relinquish or destroy Signed-off-by: Jerry Wang Signed-off-by: Liju Chen Signed-off-by: Yi-De Wu --- drivers/virt/geniezone/Makefile | 3 +- drivers/virt/geniezone/gzvm_exception.c | 32 ---- drivers/virt/geniezone/gzvm_hvc.c | 34 ++++ drivers/virt/geniezone/gzvm_mmu.c | 210 ++++++++++++++++++++++++ drivers/virt/geniezone/gzvm_vcpu.c | 4 +- drivers/virt/geniezone/gzvm_vm.c | 113 +++---------- include/linux/gzvm_drv.h | 19 ++- include/uapi/linux/gzvm.h | 5 + 8 files changed, 291 insertions(+), 129 deletions(-) create mode 100644 drivers/virt/geniezone/gzvm_hvc.c create mode 100644 drivers/virt/geniezone/gzvm_mmu.c diff --git a/drivers/virt/geniezone/Makefile b/drivers/virt/geniezone/Makefile index e1299f99df76..ca4b84630573 100644 --- a/drivers/virt/geniezone/Makefile +++ b/drivers/virt/geniezone/Makefile @@ -8,4 +8,5 @@ GZVM_DIR ?= ../../../drivers/virt/geniezone gzvm-y := $(GZVM_DIR)/gzvm_main.o $(GZVM_DIR)/gzvm_vm.o \ $(GZVM_DIR)/gzvm_vcpu.o $(GZVM_DIR)/gzvm_irqfd.o \ - $(GZVM_DIR)/gzvm_ioeventfd.o $(GZVM_DIR)/gzvm_exception.o + $(GZVM_DIR)/gzvm_ioeventfd.o $(GZVM_DIR)/gzvm_exception.o \ + $(GZVM_DIR)/gzvm_hvc.o $(GZVM_DIR)/gzvm_mmu.o diff --git a/drivers/virt/geniezone/gzvm_exception.c b/drivers/virt/geniezone/gzvm_exception.c index c2cab1472d2f..8c413815369b 100644 --- a/drivers/virt/geniezone/gzvm_exception.c +++ b/drivers/virt/geniezone/gzvm_exception.c @@ -6,38 +6,6 @@ #include #include -/** - * gzvm_handle_page_fault() - Handle guest page fault, find corresponding page - * for the faulting gpa - * @vcpu: Pointer to struct gzvm_vcpu_run of the faulting vcpu - * - * Return: - * * 0 - Success to handle guest page fault - * * -EFAULT - Failed to map phys addr to guest's GPA - */ -static int gzvm_handle_page_fault(struct gzvm_vcpu *vcpu) -{ - struct gzvm *vm = vcpu->gzvm; - int memslot_id; - u64 pfn, gfn; - int ret; - - gfn = PHYS_PFN(vcpu->run->exception.fault_gpa); - memslot_id = gzvm_find_memslot(vm, gfn); - if (unlikely(memslot_id < 0)) - return -EFAULT; - - ret = gzvm_gfn_to_pfn_memslot(&vm->memslot[memslot_id], gfn, &pfn); - if (unlikely(ret)) - return -EFAULT; - - ret = gzvm_arch_map_guest(vm->vm_id, memslot_id, pfn, gfn, 1); - if (unlikely(ret)) - return -EFAULT; - - return 0; -} - /** * gzvm_handle_guest_exception() - Handle guest exception * @vcpu: Pointer to struct gzvm_vcpu_run in userspace diff --git a/drivers/virt/geniezone/gzvm_hvc.c b/drivers/virt/geniezone/gzvm_hvc.c new file mode 100644 index 000000000000..d8854eeec04e --- /dev/null +++ b/drivers/virt/geniezone/gzvm_hvc.c @@ -0,0 +1,34 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2023 MediaTek Inc. + */ + +#include + +/** + * gzvm_handle_guest_hvc() - Handle guest hvc + * @vcpu: Pointer to struct gzvm_vcpu_run in userspace + * Return: + * * true - This hvc has been processed, no need to back to VMM. + * * false - This hvc has not been processed, require userspace. 
+ */ +bool gzvm_handle_guest_hvc(struct gzvm_vcpu *vcpu) +{ + int ret; + unsigned long ipa; + + switch (vcpu->run->hypercall.args[0]) { + case GZVM_HVC_MEM_RELINQUISH: + ipa = vcpu->run->hypercall.args[1]; + ret = gzvm_handle_relinquish(vcpu, ipa); + break; + default: + ret = false; + break; + } + + if (!ret) + return true; + else + return false; +} diff --git a/drivers/virt/geniezone/gzvm_mmu.c b/drivers/virt/geniezone/gzvm_mmu.c new file mode 100644 index 000000000000..4c2c7da7f929 --- /dev/null +++ b/drivers/virt/geniezone/gzvm_mmu.c @@ -0,0 +1,210 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2023 MediaTek Inc. + */ + +#include + +/** + * hva_to_pa_fast() - converts hva to pa in generic fast way + * @hva: Host virtual address. + * + * Return: 0 if translation error + */ +u64 hva_to_pa_fast(u64 hva) +{ + struct page *page[1]; + + u64 pfn; + + if (get_user_page_fast_only(hva, 0, page)) { + pfn = page_to_phys(page[0]); + put_page((struct page *)page); + return pfn; + } else { + return 0; + } +} + +/** + * hva_to_pa_slow() - note that this function may sleep + * @hva: Host virtual address. + * + * Return: 0 if translation error + */ +u64 hva_to_pa_slow(u64 hva) +{ + struct page *page = NULL; + u64 pfn = 0; + int npages; + + npages = get_user_pages_unlocked(hva, 1, &page, 0); + if (npages != 1) + return 0; + + if (page) { + pfn = page_to_phys(page); + put_page(page); + } + + return pfn; +} + +static u64 __gzvm_gfn_to_pfn_memslot(struct gzvm_memslot *memslot, u64 gfn) +{ + u64 hva, pa; + + hva = gzvm_gfn_to_hva_memslot(memslot, gfn); + + pa = gzvm_hva_to_pa_arch(hva); + if (pa != 0) + return PHYS_PFN(pa); + + pa = hva_to_pa_fast(hva); + if (pa) + return PHYS_PFN(pa); + + pa = hva_to_pa_slow(hva); + if (pa) + return PHYS_PFN(pa); + + return 0; +} + +/** + * gzvm_gfn_to_pfn_memslot() - Translate gfn (guest ipa) to pfn (host pa), + * result is in @pfn + * @memslot: Pointer to struct gzvm_memslot. + * @gfn: Guest frame number. + * @pfn: Host page frame number. + * + * Return: + * * 0 - Succeed + * * -EFAULT - Failed to convert + */ +int gzvm_gfn_to_pfn_memslot(struct gzvm_memslot *memslot, u64 gfn, u64 *pfn) +{ + u64 __pfn; + + if (!memslot) + return -EFAULT; + + __pfn = __gzvm_gfn_to_pfn_memslot(memslot, gfn); + if (__pfn == 0) { + *pfn = 0; + return -EFAULT; + } + + *pfn = __pfn; + + return 0; +} + +static int cmp_ppages(struct rb_node *node, const struct rb_node *parent) +{ + struct gzvm_pinned_page *a = container_of(node, struct gzvm_pinned_page, node); + struct gzvm_pinned_page *b = container_of(parent, struct gzvm_pinned_page, node); + + if (a->ipa < b->ipa) + return -1; + if (a->ipa > b->ipa) + return 1; + return 0; +} + +static int rb_ppage_cmp(const void *key, const struct rb_node *node) +{ + struct gzvm_pinned_page *p = container_of(node, struct gzvm_pinned_page, node); + phys_addr_t ipa = (phys_addr_t)key; + + return (ipa < p->ipa) ? 
-1 : (ipa > p->ipa); +} + +static int gzvm_insert_ppage(struct gzvm *vm, struct gzvm_pinned_page *ppage) +{ + if (rb_find_add(&ppage->node, &vm->pinned_pages, cmp_ppages)) + return -EEXIST; + return 0; +} + +static int pin_one_page(unsigned long hva, struct page **page) +{ + struct mm_struct *mm = current->mm; + unsigned int flags = FOLL_HWPOISON | FOLL_LONGTERM | FOLL_WRITE; + + mmap_read_lock(mm); + pin_user_pages(hva, 1, flags, page); + mmap_read_unlock(mm); + + return 0; +} + +/** + * gzvm_handle_relinquish() - Handle memory relinquish request from hypervisor + * + * @vcpu: Pointer to struct gzvm_vcpu_run in userspace + * @ipa: Start address(gpa) of a reclaimed page + */ +int gzvm_handle_relinquish(struct gzvm_vcpu *vcpu, phys_addr_t ipa) +{ + struct gzvm_pinned_page *ppage; + struct rb_node *node; + struct gzvm *vm = vcpu->gzvm; + + node = rb_find((void *)ipa, &vm->pinned_pages, rb_ppage_cmp); + + if (node) + rb_erase(node, &vm->pinned_pages); + else + return 0; + + ppage = container_of(node, struct gzvm_pinned_page, node); + unpin_user_pages_dirty_lock(&ppage->page, 1, true); + kfree(ppage); + return 0; +} + +/** + * gzvm_handle_page_fault() - Handle guest page fault, find corresponding page + * for the faulting gpa + * @vcpu: Pointer to struct gzvm_vcpu_run in userspace + */ +int gzvm_handle_page_fault(struct gzvm_vcpu *vcpu) +{ + struct gzvm *vm = vcpu->gzvm; + int memslot_id; + u64 pfn, gfn; + unsigned long hva; + struct gzvm_pinned_page *ppage = NULL; + struct page *page = NULL; + int ret; + + gfn = PHYS_PFN(vcpu->run->exception.fault_gpa); + memslot_id = gzvm_find_memslot(vm, gfn); + if (unlikely(memslot_id < 0)) + return -EFAULT; + + ret = gzvm_gfn_to_pfn_memslot(&vm->memslot[memslot_id], gfn, &pfn); + if (unlikely(ret)) + return -EFAULT; + + ret = gzvm_arch_map_guest(vm->vm_id, memslot_id, pfn, gfn, 1); + if (unlikely(ret)) + return -EFAULT; + + hva = gzvm_gfn_to_hva_memslot(&vm->memslot[memslot_id], gfn); + pin_one_page(hva, &page); + + if (!page) + return -EFAULT; + + ppage = kmalloc(sizeof(*ppage), GFP_KERNEL_ACCOUNT); + if (!ppage) + return -ENOMEM; + + ppage->page = page; + ppage->ipa = vcpu->run->exception.fault_gpa; + gzvm_insert_ppage(vm, ppage); + + return 0; +} diff --git a/drivers/virt/geniezone/gzvm_vcpu.c b/drivers/virt/geniezone/gzvm_vcpu.c index 794b24da40e2..964e18ff0d2a 100644 --- a/drivers/virt/geniezone/gzvm_vcpu.c +++ b/drivers/virt/geniezone/gzvm_vcpu.c @@ -117,7 +117,9 @@ static long gzvm_vcpu_run(struct gzvm_vcpu *vcpu, void * __user argp) need_userspace = true; break; case GZVM_EXIT_HYPERCALL: - fallthrough; + if (!gzvm_handle_guest_hvc(vcpu)) + need_userspace = true; + break; case GZVM_EXIT_DEBUG: fallthrough; case GZVM_EXIT_FAIL_ENTRY: diff --git a/drivers/virt/geniezone/gzvm_vm.c b/drivers/virt/geniezone/gzvm_vm.c index 3da5fdc141b6..6fae6bf976c8 100644 --- a/drivers/virt/geniezone/gzvm_vm.c +++ b/drivers/virt/geniezone/gzvm_vm.c @@ -16,106 +16,13 @@ static DEFINE_MUTEX(gzvm_list_lock); static LIST_HEAD(gzvm_list); -/** - * hva_to_pa_fast() - converts hva to pa in generic fast way - * @hva: Host virtual address. - * - * Return: 0 if translation error - */ -static u64 hva_to_pa_fast(u64 hva) -{ - struct page *page[1]; - - u64 pfn; - - if (get_user_page_fast_only(hva, 0, page)) { - pfn = page_to_phys(page[0]); - put_page((struct page *)page); - return pfn; - } else { - return 0; - } -} - -/** - * hva_to_pa_slow() - note that this function may sleep - * @hva: Host virtual address. 
- * - * Return: 0 if translation error - */ -static u64 hva_to_pa_slow(u64 hva) -{ - struct page *page; - int npages; - u64 pfn; - - npages = get_user_pages_unlocked(hva, 1, &page, 0); - if (npages != 1) - return 0; - - pfn = page_to_phys(page); - put_page(page); - - return pfn; -} - -static u64 gzvm_gfn_to_hva_memslot(struct gzvm_memslot *memslot, u64 gfn) +u64 gzvm_gfn_to_hva_memslot(struct gzvm_memslot *memslot, u64 gfn) { u64 offset = gfn - memslot->base_gfn; return memslot->userspace_addr + offset * PAGE_SIZE; } -static u64 __gzvm_gfn_to_pfn_memslot(struct gzvm_memslot *memslot, u64 gfn) -{ - u64 hva, pa; - - hva = gzvm_gfn_to_hva_memslot(memslot, gfn); - - pa = gzvm_hva_to_pa_arch(hva); - if (pa != 0) - return PHYS_PFN(pa); - - pa = hva_to_pa_fast(hva); - if (pa) - return PHYS_PFN(pa); - - pa = hva_to_pa_slow(hva); - if (pa) - return PHYS_PFN(pa); - - return 0; -} - -/** - * gzvm_gfn_to_pfn_memslot() - Translate gfn (guest ipa) to pfn (host pa), - * result is in @pfn - * @memslot: Pointer to struct gzvm_memslot. - * @gfn: Guest frame number. - * @pfn: Host page frame number. - * - * Return: - * * 0 - Succeed - * * -EFAULT - Failed to convert - */ -int gzvm_gfn_to_pfn_memslot(struct gzvm_memslot *memslot, u64 gfn, u64 *pfn) -{ - u64 __pfn; - - if (!memslot) - return -EFAULT; - - __pfn = __gzvm_gfn_to_pfn_memslot(memslot, gfn); - if (__pfn == 0) { - *pfn = 0; - return -EFAULT; - } - - *pfn = __pfn; - - return 0; -} - /** * gzvm_find_memslot() - Find memslot containing this @gpa * @vm: Pointer to struct gzvm @@ -454,6 +361,21 @@ static long gzvm_vm_ioctl(struct file *filp, unsigned int ioctl, return ret; } +static void gzvm_destroy_ppage(struct gzvm *gzvm) +{ + struct gzvm_pinned_page *ppage; + struct rb_node *node; + + node = rb_first(&gzvm->pinned_pages); + while (node) { + ppage = rb_entry(node, struct gzvm_pinned_page, node); + unpin_user_pages_dirty_lock(&ppage->page, 1, true); + node = rb_next(node); + rb_erase(&ppage->node, &gzvm->pinned_pages); + kfree(ppage); + } +} + static void gzvm_destroy_vm(struct gzvm *gzvm) { pr_debug("VM-%u is going to be destroyed\n", gzvm->vm_id); @@ -470,6 +392,8 @@ static void gzvm_destroy_vm(struct gzvm *gzvm) mutex_unlock(&gzvm->lock); + gzvm_destroy_ppage(gzvm); + kfree(gzvm); } @@ -505,6 +429,7 @@ static struct gzvm *gzvm_create_vm(unsigned long vm_type) gzvm->vm_id = ret; gzvm->mm = current->mm; mutex_init(&gzvm->lock); + gzvm->pinned_pages = RB_ROOT; ret = gzvm_vm_irqfd_init(gzvm); if (ret) { diff --git a/include/linux/gzvm_drv.h b/include/linux/gzvm_drv.h index d7838679c700..a0bc3b4244cc 100644 --- a/include/linux/gzvm_drv.h +++ b/include/linux/gzvm_drv.h @@ -12,6 +12,8 @@ #include #include #include +#include +#include #define GZVM_VCPU_MMAP_SIZE PAGE_SIZE #define INVALID_VM_ID 0xffff @@ -76,6 +78,12 @@ struct gzvm_vcpu { struct hrtimer gzvm_mtimer; }; +struct gzvm_pinned_page { + struct rb_node node; + struct page *page; + u64 ipa; +}; + struct gzvm { struct gzvm_vcpu *vcpus[GZVM_MAX_VCPUS]; /* userspace tied to this vm */ @@ -102,6 +110,9 @@ struct gzvm { struct srcu_struct irq_srcu; /* lock for irq injection */ struct mutex irq_lock; + + /* Use rb-tree to record pin/unpin page */ + struct rb_root pinned_pages; }; long gzvm_dev_ioctl_check_extension(struct gzvm *gzvm, unsigned long args); @@ -137,9 +148,15 @@ int gzvm_arch_inform_exit(u16 vm_id); int gzvm_arch_drv_init(void); void gzvm_arch_drv_exit(void); -int gzvm_gfn_to_pfn_memslot(struct gzvm_memslot *memslot, u64 gfn, u64 *pfn); int gzvm_find_memslot(struct gzvm *vm, u64 gpa); +int 
gzvm_gfn_to_pfn_memslot(struct gzvm_memslot *memslot, u64 gfn, u64 *pfn); +u64 gzvm_gfn_to_hva_memslot(struct gzvm_memslot *memslot, u64 gfn); +u64 hva_to_pa_fast(u64 hva); +u64 hva_to_pa_slow(u64 hva); +int gzvm_handle_page_fault(struct gzvm_vcpu *vcpu); +int gzvm_handle_relinquish(struct gzvm_vcpu *vcpu, phys_addr_t ipa); bool gzvm_handle_guest_exception(struct gzvm_vcpu *vcpu); +bool gzvm_handle_guest_hvc(struct gzvm_vcpu *vcpu); int gzvm_arch_create_device(u16 vm_id, struct gzvm_create_device *gzvm_dev); int gzvm_arch_inject_irq(struct gzvm *gzvm, unsigned int vcpu_idx, diff --git a/include/uapi/linux/gzvm.h b/include/uapi/linux/gzvm.h index a3329b713089..74b959994c4a 100644 --- a/include/uapi/linux/gzvm.h +++ b/include/uapi/linux/gzvm.h @@ -156,6 +156,11 @@ enum { GZVM_EXCEPTION_PAGE_FAULT = 0x1, }; +/* hypercall definitions of GZVM_EXIT_HYPERCALL */ +enum { + GZVM_HVC_MEM_RELINQUISH = 0xc6000009, +}; + /** * struct gzvm_vcpu_run: Same purpose as kvm_run, this struct is * shared between userspace, kernel and