From patchwork Sat Mar 4 01:06:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 659079 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D68A5C678DB for ; Sat, 4 Mar 2023 01:07:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229670AbjCDBHG (ORCPT ); Fri, 3 Mar 2023 20:07:06 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33566 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229596AbjCDBHE (ORCPT ); Fri, 3 Mar 2023 20:07:04 -0500 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B9AB161519; Fri, 3 Mar 2023 17:07:02 -0800 (PST) Received: from pps.filterd (m0279870.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 3240hP7Q014377; Sat, 4 Mar 2023 01:06:48 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=qcppdkim1; bh=lBgQ8sQd2ooWTyKgD+qhk0eX+e4plIATV17Lqcn5aqk=; b=dTZvV3FpctO2eCIguz4nqGEKuFNLz75gAuLhcESZv6ZQWZPAesNDQLGt19PBMtCBE65d qsHac/3AES2K5DTmUAhkdNcKSHEsK6hYusJemnuhTQiU3ByaasBDwb9PFI4vEg9C38PI JhY3dLMs/lAH8FerTs3vkED4w2Xo6YdW365ay3I1O+AS5DZtPiwT4j6gbKig5ylqn0bF yF+aV4uhjW3lrvgdF3VgLBK6AI5kKn8cNjDiNbeGCfQKg2q3xoKf3lRZXeH4vivpjdwK Wj+SqtPt72EBwexJSI5ct4Cj8gxvPBr7EdcsSI9BLK5VkBB2PxM0zFOACVmw5sCTbTv1 Dg== Received: from nasanppmta01.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3p3rse8by9-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 04 Mar 2023 01:06:48 +0000 Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA01.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 32416lmW025739 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 4 Mar 2023 01:06:47 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Fri, 3 Mar 2023 17:06:46 -0800 From: Elliot Berman To: Alex Elder , Srinivas Kandagatla , Elliot Berman , Prakruthi Deepak Heragu , Rob Herring , Krzysztof Kozlowski CC: Murali Nalajala , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Dmitry Baryshkov , Bjorn Andersson , "Konrad Dybcio" , Arnd Bergmann , "Greg Kroah-Hartman" , Jonathan Corbet , Bagas Sanjaya , Will Deacon , "Andy Gross" , Catalin Marinas , "Jassi Brar" , , , , , , "Rob Herring" Subject: [PATCH v11 02/26] dt-bindings: Add binding for gunyah hypervisor Date: Fri, 3 Mar 2023 17:06:08 -0800 Message-ID: <20230304010632.2127470-3-quic_eberman@quicinc.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230304010632.2127470-1-quic_eberman@quicinc.com> References: <20230304010632.2127470-1-quic_eberman@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.49.16.6] X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 
definitions=5800 signatures=585085 X-Proofpoint-GUID: 01EecaefkqRu4PEyg--kavSIItjnS7NB X-Proofpoint-ORIG-GUID: 01EecaefkqRu4PEyg--kavSIItjnS7NB X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-03-03_07,2023-03-03_01,2023-02-09_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 clxscore=1011 malwarescore=0 suspectscore=0 priorityscore=1501 impostorscore=0 adultscore=0 phishscore=0 spamscore=0 bulkscore=0 lowpriorityscore=0 mlxlogscore=999 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2212070000 definitions=main-2303040004 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org When Linux is booted as a guest under the Gunyah hypervisor, the Gunyah Resource Manager applies a devicetree overlay describing the virtual platform configuration of the guest VM, such as the message queue capability IDs for communicating with the Resource Manager. This information is not otherwise discoverable by a VM: the Gunyah hypervisor core does not provide a direct interface to discover capability IDs nor a way to communicate with RM without having already known the corresponding message queue capability ID. Add the DT bindings that Gunyah adheres for the hypervisor node and message queues. Reviewed-by: Rob Herring Signed-off-by: Elliot Berman --- .../bindings/firmware/gunyah-hypervisor.yaml | 82 +++++++++++++++++++ 1 file changed, 82 insertions(+) create mode 100644 Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml diff --git a/Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml b/Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml new file mode 100644 index 000000000000..3fc0b043ac3c --- /dev/null +++ b/Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml @@ -0,0 +1,82 @@ +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/firmware/gunyah-hypervisor.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Gunyah Hypervisor + +maintainers: + - Prakruthi Deepak Heragu + - Elliot Berman + +description: |+ + Gunyah virtual machines use this information to determine the capability IDs + of the message queues used to communicate with the Gunyah Resource Manager. + See also: https://github.com/quic/gunyah-resource-manager/blob/develop/src/vm_creation/dto_construct.c + +properties: + compatible: + const: gunyah-hypervisor + + "#address-cells": + description: Number of cells needed to represent 64-bit capability IDs. + const: 2 + + "#size-cells": + description: must be 0, because capability IDs are not memory address + ranges and do not have a size. + const: 0 + +patternProperties: + "^gunyah-resource-mgr(@.*)?": + type: object + description: + Resource Manager node which is required to communicate to Resource + Manager VM using Gunyah Message Queues. 
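As a rough illustration of how a guest driver could consume this binding, the sketch below reads the two 64-bit capability IDs out of the reg entries and the TX/RX interrupts of a gunyah-resource-mgr node. It mirrors what the resource manager driver later in this series does when it probes; the function name and error handling here are only an illustrative assumption, not part of the binding itself.

#include <linux/of.h>
#include <linux/platform_device.h>

static int example_parse_rm_node(struct platform_device *pdev)
{
	struct device_node *np = pdev->dev.of_node;
	u64 tx_capid, rx_capid;
	int tx_irq, rx_irq;
	int ret;

	/* Each reg entry is one 64-bit capability ID (#address-cells = 2, #size-cells = 0) */
	ret = of_property_read_u64_index(np, "reg", 0, &tx_capid);
	if (ret)
		return ret;
	ret = of_property_read_u64_index(np, "reg", 1, &rx_capid);
	if (ret)
		return ret;

	/* interrupts[0] is the TX message queue IRQ, interrupts[1] the RX IRQ */
	tx_irq = platform_get_irq(pdev, 0);
	if (tx_irq < 0)
		return tx_irq;
	rx_irq = platform_get_irq(pdev, 1);
	if (rx_irq < 0)
		return rx_irq;

	dev_info(&pdev->dev, "RM msgq caps: tx=%#llx rx=%#llx\n", tx_capid, rx_capid);
	return 0;
}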
+ + properties: + compatible: + const: gunyah-resource-manager + + reg: + items: + - description: Gunyah capability ID of the TX message queue + - description: Gunyah capability ID of the RX message queue + + interrupts: + items: + - description: Interrupt for the TX message queue + - description: Interrupt for the RX message queue + + additionalProperties: false + + required: + - compatible + - reg + - interrupts + +additionalProperties: false + +required: + - compatible + - "#address-cells" + - "#size-cells" + +examples: + - | + #include + + hypervisor { + #address-cells = <2>; + #size-cells = <0>; + compatible = "gunyah-hypervisor"; + + gunyah-resource-mgr@0 { + compatible = "gunyah-resource-manager"; + interrupts = , /* TX full IRQ */ + ; /* RX empty IRQ */ + reg = <0x00000000 0x00000000>, <0x00000000 0x00000001>; + /* TX, RX cap ids */ + }; + }; From patchwork Sat Mar 4 01:06:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 659078 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8B23DC6FD1B for ; Sat, 4 Mar 2023 01:07:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229737AbjCDBHH (ORCPT ); Fri, 3 Mar 2023 20:07:07 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33582 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229614AbjCDBHF (ORCPT ); Fri, 3 Mar 2023 20:07:05 -0500 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 79EBC60D71; Fri, 3 Mar 2023 17:07:04 -0800 (PST) Received: from pps.filterd (m0279868.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 323LUSmM024354; Sat, 4 Mar 2023 01:06:52 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=qcppdkim1; bh=c+R7Cm2h9YH+WF+IGm5QH89rs8g8xWEdt60LkDWmLUY=; b=RTYYnWXMQdpSdp2TJL09Jpcz5BgY+a1j30FLL4KKMVBZk9IOF26t1OornNM9vExCtekC XzFm31zjOGzFcnk1OnhaQN/cU6tCaob6FNFvWRBQCY05SAjosE/u0f3DXiWz8R5lmf1Z KzqBW27aURXgWF6+YF1OnL/1veqGxCsqxU5QVok0XYRIQCTPcmEsRTEmQcHg+ISsriQB SQsoqQJGkb2FXQfRzXMzp3MONOCGWwjWGqqlgJ2XK9hs0cO3/y48so82gnaeKu4f0CXV 9BTPHeyBx/2JMPar0VU73q1ZIPwf8W8K0OTFWiUnLhRZIIAflgjG2Jf+jlAP9NSEI5Gz Yw== Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3p3c8htrbd-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 04 Mar 2023 01:06:52 +0000 Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 32416pYk031659 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 4 Mar 2023 01:06:51 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Fri, 3 Mar 2023 17:06:50 -0800 From: Elliot Berman To: Alex Elder , Srinivas Kandagatla , Elliot Berman , Prakruthi Deepak Heragu CC: Murali Nalajala , Trilok 
Soni , Srivatsa Vaddagiri , Carl van Schaik , Dmitry Baryshkov , Bjorn Andersson , "Konrad Dybcio" , Arnd Bergmann , "Greg Kroah-Hartman" , Rob Herring , Krzysztof Kozlowski , Jonathan Corbet , Bagas Sanjaya , Will Deacon , Andy Gross , Catalin Marinas , Jassi Brar , , , , , Subject: [PATCH v11 05/26] virt: gunyah: Identify hypervisor version Date: Fri, 3 Mar 2023 17:06:11 -0800 Message-ID: <20230304010632.2127470-6-quic_eberman@quicinc.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230304010632.2127470-1-quic_eberman@quicinc.com> References: <20230304010632.2127470-1-quic_eberman@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.49.16.6] X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: dwX26VFNCPBxoep2hDYNa8iXyxQZpPOD X-Proofpoint-ORIG-GUID: dwX26VFNCPBxoep2hDYNa8iXyxQZpPOD X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-03-03_07,2023-03-03_01,2023-02-09_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0 clxscore=1015 phishscore=0 malwarescore=0 mlxlogscore=999 priorityscore=1501 impostorscore=0 mlxscore=0 bulkscore=0 suspectscore=0 adultscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2212070000 definitions=main-2303040004 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Export the version of Gunyah which is reported via the hyp_identify hypercall. Increments of the major API version indicate possibly backwards incompatible changes. Export the hypervisor identity so that Gunyah drivers can act according to the major API version. Signed-off-by: Elliot Berman Reviewed-by: Srinivas Kandagatla --- drivers/virt/Makefile | 1 + drivers/virt/gunyah/Makefile | 3 ++ drivers/virt/gunyah/gunyah.c | 57 ++++++++++++++++++++++++++++++++++++ 3 files changed, 61 insertions(+) create mode 100644 drivers/virt/gunyah/Makefile create mode 100644 drivers/virt/gunyah/gunyah.c diff --git a/drivers/virt/Makefile b/drivers/virt/Makefile index e9aa6fc96fab..a5817e2d7d71 100644 --- a/drivers/virt/Makefile +++ b/drivers/virt/Makefile @@ -12,3 +12,4 @@ obj-$(CONFIG_ACRN_HSM) += acrn/ obj-$(CONFIG_EFI_SECRET) += coco/efi_secret/ obj-$(CONFIG_SEV_GUEST) += coco/sev-guest/ obj-$(CONFIG_INTEL_TDX_GUEST) += coco/tdx-guest/ +obj-y += gunyah/ diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile new file mode 100644 index 000000000000..34f32110faf9 --- /dev/null +++ b/drivers/virt/gunyah/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0 + +obj-$(CONFIG_GUNYAH) += gunyah.o diff --git a/drivers/virt/gunyah/gunyah.c b/drivers/virt/gunyah/gunyah.c new file mode 100644 index 000000000000..4b7e6f3edaff --- /dev/null +++ b/drivers/virt/gunyah/gunyah.c @@ -0,0 +1,57 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#define pr_fmt(fmt) "gunyah: " fmt + +#include +#include +#include +#include + +static struct gh_hypercall_hyp_identify_resp gh_api; + +u16 gh_api_version(void) +{ + return FIELD_GET(GH_API_INFO_API_VERSION_MASK, gh_api.api_info); +} +EXPORT_SYMBOL_GPL(gh_api_version); + +bool gh_api_has_feature(enum gh_api_feature feature) +{ + switch (feature) { + case GH_FEATURE_DOORBELL: + case GH_FEATURE_MSGQUEUE: + case GH_FEATURE_VCPU: + case GH_FEATURE_MEMEXTENT: + return !!(gh_api.flags[0] & BIT_ULL(feature)); + default: + return false; + } +} +EXPORT_SYMBOL_GPL(gh_api_has_feature); + +static int __init gh_init(void) +{ + if (!arch_is_gh_guest()) + return -ENODEV; + + gh_hypercall_hyp_identify(&gh_api); + + pr_info("Running under Gunyah hypervisor %llx/v%u\n", + FIELD_GET(GH_API_INFO_VARIANT_MASK, gh_api.api_info), + gh_api_version()); + + /* We might move this out to individual drivers if there's ever an API version bump */ + if (gh_api_version() != GH_API_V1) { + pr_info("Unsupported Gunyah version: %u\n", gh_api_version()); + return -ENODEV; + } + + return 0; +} +arch_initcall(gh_init); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Gunyah Hypervisor Driver"); From patchwork Sat Mar 4 01:06:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 659076 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DB3CAC6FD1B for ; Sat, 4 Mar 2023 01:07:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229822AbjCDBHL (ORCPT ); Fri, 3 Mar 2023 20:07:11 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33606 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229723AbjCDBHG (ORCPT ); Fri, 3 Mar 2023 20:07:06 -0500 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C79A760D71; Fri, 3 Mar 2023 17:07:05 -0800 (PST) Received: from pps.filterd (m0279871.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 3240NDCo010557; Sat, 4 Mar 2023 01:06:53 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=qcppdkim1; bh=YsGuT96YuV6TmpZAGW8EVXUgQ910oBDQs5Ss5XxgyFk=; b=QccQw1y9PYbK9t9AXUCSaMzx9hY/iKnowqV8QwKTp6jMQ1ROXiSeExx4RlWqg6TpfKGl LZCM5Qu+M0w9euY4AXJWFgLnszE71Vlqc1PUh1pf21wKSr2ALdcRxjacKagpO/TPD+Jr YW1dFVSFGxcdoP4+1EUqfwLh3wdHbFxX58ofVR7HH2aF4co+zxPT8x9TDs/1tgCMwNGN IiQ7VAod78Dqo6ohBF4q1KxcQtF1bwl5RvfTxw6oSz612aTxRUXKQtfJtjy2FphK56Sk 0bM2SrFF+8w3iAIpNZAmv2fmd9IaxHiDyzodGjDMaBQ+2AgG0NRiNE3L883310l9nqIq 7g== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3p3ph4rpde-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 04 Mar 2023 01:06:53 +0000 Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 32416qbp018614 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 4 Mar 2023 01:06:52 GMT Received: from hu-eberman-lv.qualcomm.com 
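For context on how the gh_api_version() and gh_api_has_feature() helpers added in patch 05 above are meant to be used, here is a hedged sketch of a hypothetical client driver gating its own probe on them; the function name is made up for illustration and the rest of the probe wiring is omitted.

#include <linux/errno.h>
#include <linux/gunyah.h>

static int example_client_probe(void)
{
	/* Bail out early when not running as a Gunyah guest at all */
	if (!arch_is_gh_guest())
		return -ENODEV;

	/* Only the v1 hypercall ABI is handled by this hypothetical client */
	if (gh_api_version() != GH_API_V1)
		return -ENODEV;

	/* The hypervisor may be built without message queue support */
	if (!gh_api_has_feature(GH_FEATURE_MSGQUEUE))
		return -ENODEV;

	/* ... set up message queues here ... */
	return 0;
}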
(10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Fri, 3 Mar 2023 17:06:51 -0800 From: Elliot Berman To: Alex Elder , Srinivas Kandagatla , Elliot Berman , Prakruthi Deepak Heragu , Catalin Marinas , Will Deacon CC: Murali Nalajala , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Dmitry Baryshkov , Bjorn Andersson , "Konrad Dybcio" , Arnd Bergmann , "Greg Kroah-Hartman" , Rob Herring , Krzysztof Kozlowski , Jonathan Corbet , Bagas Sanjaya , Andy Gross , Jassi Brar , , , , , Subject: [PATCH v11 06/26] virt: gunyah: msgq: Add hypercalls to send and receive messages Date: Fri, 3 Mar 2023 17:06:12 -0800 Message-ID: <20230304010632.2127470-7-quic_eberman@quicinc.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230304010632.2127470-1-quic_eberman@quicinc.com> References: <20230304010632.2127470-1-quic_eberman@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.49.16.6] X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: 19W5F3kn0fDshEJNBOuMQ_lADrjVT29I X-Proofpoint-GUID: 19W5F3kn0fDshEJNBOuMQ_lADrjVT29I X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-03-03_07,2023-03-03_01,2023-02-09_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 spamscore=0 lowpriorityscore=0 mlxscore=0 suspectscore=0 priorityscore=1501 clxscore=1015 impostorscore=0 adultscore=0 mlxlogscore=999 bulkscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2212070000 definitions=main-2303040004 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Add hypercalls to send and receive messages on a Gunyah message queue. 
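A minimal usage sketch of the two hypercall wrappers this patch adds follows. The capability IDs, the buffer handling, and the surrounding function are assumptions for illustration only; GH_MSGQ_MAX_MSG_SIZE and enum gh_error come from the <linux/gunyah.h> declarations introduced elsewhere in the series, and in practice these calls are driven by the message queue mailbox layer rather than open-coded like this.

#include <linux/errno.h>
#include <linux/gunyah.h>
#include <linux/string.h>

static int example_msgq_echo(u64 tx_capid, u64 rx_capid)
{
	char buf[GH_MSGQ_MAX_MSG_SIZE];
	size_t recv_size;
	enum gh_error gh_error;
	bool ready;

	strscpy(buf, "ping", sizeof(buf));

	/* PUSH asks the hypervisor to alert the receiver immediately */
	gh_error = gh_hypercall_msgq_send(tx_capid, strlen(buf) + 1, buf,
					  GH_HYPERCALL_MSGQ_TX_FLAGS_PUSH, &ready);
	if (gh_error != GH_ERROR_OK)
		return -EIO;

	/* On success, recv_size is the number of bytes the hypervisor copied in */
	gh_error = gh_hypercall_msgq_recv(rx_capid, buf, sizeof(buf), &recv_size, &ready);
	if (gh_error != GH_ERROR_OK)
		return -EIO;

	return 0;
}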
Signed-off-by: Elliot Berman --- arch/arm64/gunyah/gunyah_hypercall.c | 31 ++++++++++++++++++++++++++++ include/linux/gunyah.h | 6 ++++++ 2 files changed, 37 insertions(+) diff --git a/arch/arm64/gunyah/gunyah_hypercall.c b/arch/arm64/gunyah/gunyah_hypercall.c index 0d14e767e2c8..3420d8f286a9 100644 --- a/arch/arm64/gunyah/gunyah_hypercall.c +++ b/arch/arm64/gunyah/gunyah_hypercall.c @@ -41,6 +41,8 @@ EXPORT_SYMBOL_GPL(arch_is_gh_guest); fn) #define GH_HYPERCALL_HYP_IDENTIFY GH_HYPERCALL(0x8000) +#define GH_HYPERCALL_MSGQ_SEND GH_HYPERCALL(0x801B) +#define GH_HYPERCALL_MSGQ_RECV GH_HYPERCALL(0x801C) /** * gh_hypercall_hyp_identify() - Returns build information and feature flags @@ -60,5 +62,34 @@ void gh_hypercall_hyp_identify(struct gh_hypercall_hyp_identify_resp *hyp_identi } EXPORT_SYMBOL_GPL(gh_hypercall_hyp_identify); +enum gh_error gh_hypercall_msgq_send(u64 capid, size_t size, void *buff, int tx_flags, bool *ready) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_hvc(GH_HYPERCALL_MSGQ_SEND, capid, size, (uintptr_t)buff, tx_flags, 0, &res); + + if (res.a0 == GH_ERROR_OK) + *ready = !!res.a1; + + return res.a0; +} +EXPORT_SYMBOL_GPL(gh_hypercall_msgq_send); + +enum gh_error gh_hypercall_msgq_recv(u64 capid, void *buff, size_t size, size_t *recv_size, + bool *ready) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_hvc(GH_HYPERCALL_MSGQ_RECV, capid, (uintptr_t)buff, size, 0, &res); + + if (res.a0 == GH_ERROR_OK) { + *recv_size = res.a1; + *ready = !!res.a2; + } + + return res.a0; +} +EXPORT_SYMBOL_GPL(gh_hypercall_msgq_recv); + MODULE_LICENSE("GPL"); MODULE_DESCRIPTION("Gunyah Hypervisor Hypercalls"); diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index bd080e3a6fc9..18cfbf5ee48b 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -108,4 +108,10 @@ struct gh_hypercall_hyp_identify_resp { void gh_hypercall_hyp_identify(struct gh_hypercall_hyp_identify_resp *hyp_identity); +#define GH_HYPERCALL_MSGQ_TX_FLAGS_PUSH BIT(0) + +enum gh_error gh_hypercall_msgq_send(u64 capid, size_t size, void *buff, int tx_flags, bool *ready); +enum gh_error gh_hypercall_msgq_recv(u64 capid, void *buff, size_t size, size_t *recv_size, + bool *ready); + #endif From patchwork Sat Mar 4 01:06:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 659074 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DF987C6FD1B for ; Sat, 4 Mar 2023 01:08:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229926AbjCDBH7 (ORCPT ); Fri, 3 Mar 2023 20:07:59 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34724 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229885AbjCDBH3 (ORCPT ); Fri, 3 Mar 2023 20:07:29 -0500 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 889606547E; Fri, 3 Mar 2023 17:07:11 -0800 (PST) Received: from pps.filterd (m0279869.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 3240h7nU014118; Sat, 4 Mar 2023 01:06:57 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : 
From: Elliot Berman
To: Alex Elder, Srinivas Kandagatla, Elliot Berman, Prakruthi Deepak Heragu
CC: Murali Nalajala, Trilok Soni, Srivatsa Vaddagiri, Carl van Schaik, Dmitry Baryshkov, Bjorn Andersson, Konrad Dybcio, Arnd Bergmann, Greg Kroah-Hartman, Rob Herring, Krzysztof Kozlowski, Jonathan Corbet, Bagas Sanjaya, Will Deacon, Andy Gross, Catalin Marinas, Jassi Brar
Subject: [PATCH v11 08/26] gunyah: rsc_mgr: Add resource manager RPC core
Date: Fri, 3 Mar 2023 17:06:14 -0800
Message-ID: <20230304010632.2127470-9-quic_eberman@quicinc.com>
In-Reply-To: <20230304010632.2127470-1-quic_eberman@quicinc.com>
References: <20230304010632.2127470-1-quic_eberman@quicinc.com>
List-ID: X-Mailing-List: devicetree@vger.kernel.org

The resource manager is a special virtual machine which is always running on a Gunyah system. It provides APIs for creating and destroying VMs, secure memory management, sharing/lending of memory between VMs, and setup of inter-VM communication. Calls to the resource manager are made via message queues.

This patch implements the basic probing and RPC mechanism to make those API calls. Request/response calls can be made with gh_rm_call(). Drivers can also register for notifications pushed by RM via gh_rm_notifier_register(). Specific API calls that the resource manager supports will be implemented in subsequent patches.
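To make the calling conventions concrete, below is a hedged sketch of a client of this interface. The message ID is a placeholder (real IDs arrive with the VM management RPCs later in the series), gh_rm_call() itself is internal to the resource manager module and only visible through the private rsc_mgr.h header, and the caller owns, and must kfree(), the response buffer when the call succeeds with a non-empty reply.

#include <linux/gunyah_rsc_mgr.h>
#include <linux/notifier.h>
#include <linux/printk.h>
#include <linux/slab.h>

#include "rsc_mgr.h"

static int example_rm_notifier(struct notifier_block *nb, unsigned long action, void *data)
{
	/* action carries the RM notification message ID, data the raw payload */
	pr_info("gunyah RM notification %#lx\n", action);
	return NOTIFY_OK;
}

static struct notifier_block example_nb = { .notifier_call = example_rm_notifier };

static int example_rm_request(struct gh_rm *rm, u32 example_msg_id)
{
	size_t resp_size = 0;
	void *resp = NULL;
	int ret;

	ret = gh_rm_notifier_register(rm, &example_nb);
	if (ret)
		return ret;

	/* Request with no payload; any reply payload is returned in resp */
	ret = gh_rm_call(rm, example_msg_id, NULL, 0, &resp, &resp_size);
	if (!ret && resp_size)
		kfree(resp);	/* caller owns the reply buffer */

	gh_rm_notifier_unregister(rm, &example_nb);
	return ret;
}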
Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Makefile | 3 + drivers/virt/gunyah/rsc_mgr.c | 688 +++++++++++++++++++++++++++++++++ drivers/virt/gunyah/rsc_mgr.h | 16 + include/linux/gunyah_rsc_mgr.h | 21 + 4 files changed, 728 insertions(+) create mode 100644 drivers/virt/gunyah/rsc_mgr.c create mode 100644 drivers/virt/gunyah/rsc_mgr.h create mode 100644 include/linux/gunyah_rsc_mgr.h diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index 34f32110faf9..cc864ff5abbb 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -1,3 +1,6 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_GUNYAH) += gunyah.o + +gunyah_rsc_mgr-y += rsc_mgr.o +obj-$(CONFIG_GUNYAH) += gunyah_rsc_mgr.o diff --git a/drivers/virt/gunyah/rsc_mgr.c b/drivers/virt/gunyah/rsc_mgr.c new file mode 100644 index 000000000000..67813c9a52db --- /dev/null +++ b/drivers/virt/gunyah/rsc_mgr.c @@ -0,0 +1,688 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "rsc_mgr.h" + +#define RM_RPC_API_VERSION_MASK GENMASK(3, 0) +#define RM_RPC_HEADER_WORDS_MASK GENMASK(7, 4) +#define RM_RPC_API_VERSION FIELD_PREP(RM_RPC_API_VERSION_MASK, 1) +#define RM_RPC_HEADER_WORDS FIELD_PREP(RM_RPC_HEADER_WORDS_MASK, \ + (sizeof(struct gh_rm_rpc_hdr) / sizeof(u32))) +#define RM_RPC_API (RM_RPC_API_VERSION | RM_RPC_HEADER_WORDS) + +#define RM_RPC_TYPE_CONTINUATION 0x0 +#define RM_RPC_TYPE_REQUEST 0x1 +#define RM_RPC_TYPE_REPLY 0x2 +#define RM_RPC_TYPE_NOTIF 0x3 +#define RM_RPC_TYPE_MASK GENMASK(1, 0) + +#define GH_RM_MAX_NUM_FRAGMENTS 62 +#define RM_RPC_FRAGMENTS_MASK GENMASK(7, 2) + +struct gh_rm_rpc_hdr { + u8 api; + u8 type; + __le16 seq; + __le32 msg_id; +} __packed; + +struct gh_rm_rpc_reply_hdr { + struct gh_rm_rpc_hdr hdr; + __le32 err_code; /* GH_RM_ERROR_* */ +} __packed; + +#define GH_RM_MAX_MSG_SIZE (GH_MSGQ_MAX_MSG_SIZE - sizeof(struct gh_rm_rpc_hdr)) + +/* RM Error codes */ +enum gh_rm_error { + GH_RM_ERROR_OK = 0x0, + GH_RM_ERROR_UNIMPLEMENTED = 0xFFFFFFFF, + GH_RM_ERROR_NOMEM = 0x1, + GH_RM_ERROR_NORESOURCE = 0x2, + GH_RM_ERROR_DENIED = 0x3, + GH_RM_ERROR_INVALID = 0x4, + GH_RM_ERROR_BUSY = 0x5, + GH_RM_ERROR_ARGUMENT_INVALID = 0x6, + GH_RM_ERROR_HANDLE_INVALID = 0x7, + GH_RM_ERROR_VALIDATE_FAILED = 0x8, + GH_RM_ERROR_MAP_FAILED = 0x9, + GH_RM_ERROR_MEM_INVALID = 0xA, + GH_RM_ERROR_MEM_INUSE = 0xB, + GH_RM_ERROR_MEM_RELEASED = 0xC, + GH_RM_ERROR_VMID_INVALID = 0xD, + GH_RM_ERROR_LOOKUP_FAILED = 0xE, + GH_RM_ERROR_IRQ_INVALID = 0xF, + GH_RM_ERROR_IRQ_INUSE = 0x10, + GH_RM_ERROR_IRQ_RELEASED = 0x11, +}; + +/** + * struct gh_rm_connection - Represents a complete message from resource manager + * @payload: Combined payload of all the fragments (msg headers stripped off). + * @size: Size of the payload received so far. + * @msg_id: Message ID from the header. + * @type: RM_RPC_TYPE_REPLY or RM_RPC_TYPE_NOTIF. + * @num_fragments: total number of fragments expected to be received. + * @fragments_received: fragments received so far. 
+ * @reply: Fields used for request/reply sequences + * @notification: Fields used for notifiations + */ +struct gh_rm_connection { + void *payload; + size_t size; + __le32 msg_id; + u8 type; + + u8 num_fragments; + u8 fragments_received; + + union { + /** + * @ret: Linux return code, there was an error processing connection + * @seq: Sequence ID for the main message. + * @rm_error: For request/reply sequences with standard replies + * @seq_done: Signals caller that the RM reply has been received + */ + struct { + int ret; + u16 seq; + enum gh_rm_error rm_error; + struct completion seq_done; + } reply; + + /** + * @rm: Pointer to the RM that launched the connection + * @work: Triggered when all fragments of a notification received + */ + struct { + struct gh_rm *rm; + struct work_struct work; + } notification; + }; +}; + +/** + * struct gh_rm - private data for communicating w/Gunyah resource manager + * @dev: pointer to device + * @tx_ghrsc: message queue resource to TX to RM + * @rx_ghrsc: message queue resource to RX from RM + * @msgq: mailbox instance of above + * @active_rx_connection: ongoing gh_rm_connection for which we're receiving fragments + * @last_tx_ret: return value of last mailbox tx + * @call_xarray: xarray to allocate & lookup sequence IDs for Request/Response flows + * @next_seq: next ID to allocate (for xa_alloc_cyclic) + * @cache: cache for allocating Tx messages + * @send_lock: synchronization to allow only one request to be sent at a time + * @nh: notifier chain for clients interested in RM notification messages + */ +struct gh_rm { + struct device *dev; + struct gh_resource tx_ghrsc; + struct gh_resource rx_ghrsc; + struct gh_msgq msgq; + struct mbox_client msgq_client; + struct gh_rm_connection *active_rx_connection; + int last_tx_ret; + + struct xarray call_xarray; + u32 next_seq; + + struct kmem_cache *cache; + struct mutex send_lock; + struct blocking_notifier_head nh; +}; + +/** + * gh_rm_remap_error() - Remap Gunyah resource manager errors into a Linux error code + * @gh_error: "Standard" return value from Gunyah resource manager + */ +static inline int gh_rm_remap_error(enum gh_rm_error rm_error) +{ + switch (rm_error) { + case GH_RM_ERROR_OK: + return 0; + case GH_RM_ERROR_UNIMPLEMENTED: + return -EOPNOTSUPP; + case GH_RM_ERROR_NOMEM: + return -ENOMEM; + case GH_RM_ERROR_NORESOURCE: + return -ENODEV; + case GH_RM_ERROR_DENIED: + return -EPERM; + case GH_RM_ERROR_BUSY: + return -EBUSY; + case GH_RM_ERROR_INVALID: + case GH_RM_ERROR_ARGUMENT_INVALID: + case GH_RM_ERROR_HANDLE_INVALID: + case GH_RM_ERROR_VALIDATE_FAILED: + case GH_RM_ERROR_MAP_FAILED: + case GH_RM_ERROR_MEM_INVALID: + case GH_RM_ERROR_MEM_INUSE: + case GH_RM_ERROR_MEM_RELEASED: + case GH_RM_ERROR_VMID_INVALID: + case GH_RM_ERROR_LOOKUP_FAILED: + case GH_RM_ERROR_IRQ_INVALID: + case GH_RM_ERROR_IRQ_INUSE: + case GH_RM_ERROR_IRQ_RELEASED: + return -EINVAL; + default: + return -EBADMSG; + } +} + +static int gh_rm_init_connection_payload(struct gh_rm_connection *connection, void *msg, + size_t hdr_size, size_t msg_size) +{ + size_t max_buf_size, payload_size; + struct gh_rm_rpc_hdr *hdr = msg; + + if (msg_size < hdr_size) + return -EINVAL; + + payload_size = msg_size - hdr_size; + + connection->num_fragments = FIELD_GET(RM_RPC_FRAGMENTS_MASK, hdr->type); + connection->fragments_received = 0; + + /* There's not going to be any payload, no need to allocate buffer. 
*/ + if (!payload_size && !connection->num_fragments) + return 0; + + if (connection->num_fragments > GH_RM_MAX_NUM_FRAGMENTS) + return -EINVAL; + + max_buf_size = payload_size + (connection->num_fragments * GH_RM_MAX_MSG_SIZE); + + connection->payload = kzalloc(max_buf_size, GFP_KERNEL); + if (!connection->payload) + return -ENOMEM; + + memcpy(connection->payload, msg + hdr_size, payload_size); + connection->size = payload_size; + return 0; +} + +static void gh_rm_abort_connection(struct gh_rm *rm) +{ + switch (rm->active_rx_connection->type) { + case RM_RPC_TYPE_REPLY: + rm->active_rx_connection->reply.ret = -EIO; + complete(&rm->active_rx_connection->reply.seq_done); + break; + case RM_RPC_TYPE_NOTIF: + fallthrough; + default: + kfree(rm->active_rx_connection->payload); + kfree(rm->active_rx_connection); + } + + rm->active_rx_connection = NULL; +} + +static void gh_rm_notif_work(struct work_struct *work) +{ + struct gh_rm_connection *connection = container_of(work, struct gh_rm_connection, + notification.work); + struct gh_rm *rm = connection->notification.rm; + + blocking_notifier_call_chain(&rm->nh, connection->msg_id, connection->payload); + + gh_rm_put(rm); + kfree(connection->payload); + kfree(connection); +} + +static void gh_rm_process_notif(struct gh_rm *rm, void *msg, size_t msg_size) +{ + struct gh_rm_connection *connection; + struct gh_rm_rpc_hdr *hdr = msg; + int ret; + + if (rm->active_rx_connection) + gh_rm_abort_connection(rm); + + connection = kzalloc(sizeof(*connection), GFP_KERNEL); + if (!connection) + return; + + connection->type = RM_RPC_TYPE_NOTIF; + connection->msg_id = hdr->msg_id; + + gh_rm_get(rm); + connection->notification.rm = rm; + INIT_WORK(&connection->notification.work, gh_rm_notif_work); + + ret = gh_rm_init_connection_payload(connection, msg, sizeof(*hdr), msg_size); + if (ret) { + dev_err(rm->dev, "Failed to initialize connection for notification: %d\n", ret); + gh_rm_put(rm); + kfree(connection); + return; + } + + rm->active_rx_connection = connection; +} + +static void gh_rm_process_rply(struct gh_rm *rm, void *msg, size_t msg_size) +{ + struct gh_rm_rpc_reply_hdr *reply_hdr = msg; + struct gh_rm_connection *connection; + u16 seq_id; + + seq_id = le16_to_cpu(reply_hdr->hdr.seq); + connection = xa_load(&rm->call_xarray, seq_id); + + if (!connection || connection->msg_id != reply_hdr->hdr.msg_id) + return; + + if (rm->active_rx_connection) + gh_rm_abort_connection(rm); + + if (gh_rm_init_connection_payload(connection, msg, sizeof(*reply_hdr), msg_size)) { + dev_err(rm->dev, "Failed to alloc connection buffer for sequence %d\n", seq_id); + /* Send connection complete and error the client. */ + connection->reply.ret = -ENOMEM; + complete(&connection->reply.seq_done); + return; + } + + connection->reply.rm_error = le32_to_cpu(reply_hdr->err_code); + rm->active_rx_connection = connection; +} + +static void gh_rm_process_cont(struct gh_rm *rm, struct gh_rm_connection *connection, + void *msg, size_t msg_size) +{ + struct gh_rm_rpc_hdr *hdr = msg; + size_t payload_size = msg_size - sizeof(*hdr); + + if (!rm->active_rx_connection) + return; + + /* + * hdr->fragments and hdr->msg_id preserves the value from first reply + * or notif message. To detect mishandling, check it's still intact. 
+ */ + if (connection->msg_id != hdr->msg_id || + connection->num_fragments != FIELD_GET(RM_RPC_FRAGMENTS_MASK, hdr->type)) { + gh_rm_abort_connection(rm); + return; + } + + memcpy(connection->payload + connection->size, msg + sizeof(*hdr), payload_size); + connection->size += payload_size; + connection->fragments_received++; +} + +static void gh_rm_try_complete_connection(struct gh_rm *rm) +{ + struct gh_rm_connection *connection = rm->active_rx_connection; + + if (!connection || connection->fragments_received != connection->num_fragments) + return; + + switch (connection->type) { + case RM_RPC_TYPE_REPLY: + complete(&connection->reply.seq_done); + break; + case RM_RPC_TYPE_NOTIF: + schedule_work(&connection->notification.work); + break; + default: + dev_err_ratelimited(rm->dev, "Invalid message type (%d) received\n", + connection->type); + gh_rm_abort_connection(rm); + break; + } + + rm->active_rx_connection = NULL; +} + +static void gh_rm_msgq_rx_data(struct mbox_client *cl, void *mssg) +{ + struct gh_rm *rm = container_of(cl, struct gh_rm, msgq_client); + struct gh_msgq_rx_data *rx_data = mssg; + size_t msg_size = rx_data->length; + void *msg = rx_data->data; + struct gh_rm_rpc_hdr *hdr; + + if (msg_size < sizeof(*hdr) || msg_size > GH_MSGQ_MAX_MSG_SIZE) + return; + + hdr = msg; + if (hdr->api != RM_RPC_API) { + dev_err(rm->dev, "Unknown RM RPC API version: %x\n", hdr->api); + return; + } + + switch (FIELD_GET(RM_RPC_TYPE_MASK, hdr->type)) { + case RM_RPC_TYPE_NOTIF: + gh_rm_process_notif(rm, msg, msg_size); + break; + case RM_RPC_TYPE_REPLY: + gh_rm_process_rply(rm, msg, msg_size); + break; + case RM_RPC_TYPE_CONTINUATION: + gh_rm_process_cont(rm, rm->active_rx_connection, msg, msg_size); + break; + default: + dev_err(rm->dev, "Invalid message type (%lu) received\n", + FIELD_GET(RM_RPC_TYPE_MASK, hdr->type)); + return; + } + + gh_rm_try_complete_connection(rm); +} + +static void gh_rm_msgq_tx_done(struct mbox_client *cl, void *mssg, int r) +{ + struct gh_rm *rm = container_of(cl, struct gh_rm, msgq_client); + + kmem_cache_free(rm->cache, mssg); + rm->last_tx_ret = r; +} + +static int gh_rm_send_request(struct gh_rm *rm, u32 message_id, + const void *req_buff, size_t req_buf_size, + struct gh_rm_connection *connection) +{ + size_t buf_size_remaining = req_buf_size; + const void *req_buf_curr = req_buff; + struct gh_msgq_tx_data *msg; + struct gh_rm_rpc_hdr *hdr, hdr_template; + u32 cont_fragments = 0; + size_t payload_size; + void *payload; + int ret; + + if (req_buf_size > GH_RM_MAX_NUM_FRAGMENTS * GH_RM_MAX_MSG_SIZE) { + dev_warn(rm->dev, "Limit exceeded for the number of fragments: %u\n", + cont_fragments); + dump_stack(); + return -E2BIG; + } + + if (req_buf_size) + cont_fragments = (req_buf_size - 1) / GH_RM_MAX_MSG_SIZE; + + hdr_template.api = RM_RPC_API; + hdr_template.type = FIELD_PREP(RM_RPC_TYPE_MASK, RM_RPC_TYPE_REQUEST) | + FIELD_PREP(RM_RPC_FRAGMENTS_MASK, cont_fragments); + hdr_template.seq = cpu_to_le16(connection->reply.seq); + hdr_template.msg_id = cpu_to_le32(message_id); + + ret = mutex_lock_interruptible(&rm->send_lock); + if (ret) + return ret; + + /* Consider also the 'request' packet for the loop count */ + do { + msg = kmem_cache_zalloc(rm->cache, GFP_KERNEL); + if (!msg) { + ret = -ENOMEM; + goto out; + } + + /* Fill header */ + hdr = (struct gh_rm_rpc_hdr *)msg->data; + *hdr = hdr_template; + + /* Copy payload */ + payload = hdr + 1; + payload_size = min(buf_size_remaining, GH_RM_MAX_MSG_SIZE); + memcpy(payload, req_buf_curr, payload_size); + req_buf_curr += 
payload_size; + buf_size_remaining -= payload_size; + + /* Force the last fragment to immediately alert the receiver */ + msg->push = !buf_size_remaining; + msg->length = sizeof(*hdr) + payload_size; + + ret = mbox_send_message(gh_msgq_chan(&rm->msgq), msg); + if (ret < 0) { + kmem_cache_free(rm->cache, msg); + break; + } + + if (rm->last_tx_ret) { + ret = rm->last_tx_ret; + break; + } + + hdr_template.type = FIELD_PREP(RM_RPC_TYPE_MASK, RM_RPC_TYPE_CONTINUATION) | + FIELD_PREP(RM_RPC_FRAGMENTS_MASK, cont_fragments); + } while (buf_size_remaining); + +out: + mutex_unlock(&rm->send_lock); + return ret < 0 ? ret : 0; +} + +/** + * gh_rm_call: Achieve request-response type communication with RPC + * @rm: Pointer to Gunyah resource manager internal data + * @message_id: The RM RPC message-id + * @req_buff: Request buffer that contains the payload + * @req_buf_size: Total size of the payload + * @resp_buf: Pointer to a response buffer + * @resp_buf_size: Size of the response buffer + * + * Make a request to the RM-VM and wait for reply back. For a successful + * response, the function returns the payload. The size of the payload is set in + * resp_buf_size. The resp_buf should be freed by the caller when 0 is returned + * and resp_buf_size != 0. + * + * req_buff should be not NULL for req_buf_size >0. If req_buf_size == 0, + * req_buff *can* be NULL and no additional payload is sent. + * + * Context: Process context. Will sleep waiting for reply. + * Return: 0 on success. <0 if error. + */ +int gh_rm_call(struct gh_rm *rm, u32 message_id, void *req_buff, size_t req_buf_size, + void **resp_buf, size_t *resp_buf_size) +{ + struct gh_rm_connection *connection; + u32 seq_id; + int ret; + + /* message_id 0 is reserved. req_buf_size implies req_buf is not NULL */ + if (!message_id || (!req_buff && req_buf_size) || !rm) + return -EINVAL; + + + connection = kzalloc(sizeof(*connection), GFP_KERNEL); + if (!connection) + return -ENOMEM; + + connection->type = RM_RPC_TYPE_REPLY; + connection->msg_id = cpu_to_le32(message_id); + + init_completion(&connection->reply.seq_done); + + /* Allocate a new seq number for this connection */ + ret = xa_alloc_cyclic(&rm->call_xarray, &seq_id, connection, xa_limit_16b, &rm->next_seq, + GFP_KERNEL); + if (ret < 0) + goto free; + connection->reply.seq = lower_16_bits(seq_id); + + /* Send the request to the Resource Manager */ + ret = gh_rm_send_request(rm, message_id, req_buff, req_buf_size, connection); + if (ret < 0) + goto out; + + /* Wait for response */ + ret = wait_for_completion_interruptible(&connection->reply.seq_done); + if (ret) + goto out; + + /* Check for internal (kernel) error waiting for the response */ + if (connection->reply.ret) { + ret = connection->reply.ret; + if (ret != -ENOMEM) + kfree(connection->payload); + goto out; + } + + /* Got a response, did resource manager give us an error? */ + if (connection->reply.rm_error != GH_RM_ERROR_OK) { + dev_warn(rm->dev, "RM rejected message %08x. Error: %d\n", message_id, + connection->reply.rm_error); + dump_stack(); + ret = gh_rm_remap_error(connection->reply.rm_error); + kfree(connection->payload); + goto out; + } + + /* Everything looks good, return the payload */ + if (resp_buf_size) + *resp_buf_size = connection->size; + if (connection->size && resp_buf) + *resp_buf = connection->payload; + else { + /* kfree in case RM sent us multiple fragments but never any data in + * those fragments. 
We would've allocated memory for it, but connection->size == 0 + */ + kfree(connection->payload); + } + +out: + xa_erase(&rm->call_xarray, connection->reply.seq); +free: + kfree(connection); + return ret; +} + + +int gh_rm_notifier_register(struct gh_rm *rm, struct notifier_block *nb) +{ + return blocking_notifier_chain_register(&rm->nh, nb); +} +EXPORT_SYMBOL_GPL(gh_rm_notifier_register); + +int gh_rm_notifier_unregister(struct gh_rm *rm, struct notifier_block *nb) +{ + return blocking_notifier_chain_unregister(&rm->nh, nb); +} +EXPORT_SYMBOL_GPL(gh_rm_notifier_unregister); + +struct device *gh_rm_get(struct gh_rm *rm) +{ + return get_device(rm->miscdev.this_device); +} +EXPORT_SYMBOL_GPL(gh_rm_get); + +void gh_rm_put(struct gh_rm *rm) +{ + put_device(rm->miscdev.this_device); +} +EXPORT_SYMBOL_GPL(gh_rm_put); + +static int gh_msgq_platform_probe_direction(struct platform_device *pdev, bool tx, + struct gh_resource *ghrsc) +{ + struct device_node *node = pdev->dev.of_node; + int ret; + int idx = tx ? 0 : 1; + + ghrsc->type = tx ? GH_RESOURCE_TYPE_MSGQ_TX : GH_RESOURCE_TYPE_MSGQ_RX; + + ghrsc->irq = platform_get_irq(pdev, idx); + if (ghrsc->irq < 0) { + dev_err(&pdev->dev, "Failed to get irq%d: %d\n", idx, ghrsc->irq); + return ghrsc->irq; + } + + ret = of_property_read_u64_index(node, "reg", idx, &ghrsc->capid); + if (ret) { + dev_err(&pdev->dev, "Failed to get capid%d: %d\n", idx, ret); + return ret; + } + + return 0; +} + +static int gh_rm_drv_probe(struct platform_device *pdev) +{ + struct gh_msgq_tx_data *msg; + struct gh_rm *rm; + int ret; + + rm = devm_kzalloc(&pdev->dev, sizeof(*rm), GFP_KERNEL); + if (!rm) + return -ENOMEM; + + platform_set_drvdata(pdev, rm); + rm->dev = &pdev->dev; + + mutex_init(&rm->send_lock); + BLOCKING_INIT_NOTIFIER_HEAD(&rm->nh); + xa_init_flags(&rm->call_xarray, XA_FLAGS_ALLOC); + rm->cache = kmem_cache_create("gh_rm", struct_size(msg, data, GH_MSGQ_MAX_MSG_SIZE), 0, + SLAB_HWCACHE_ALIGN, NULL); + if (!rm->cache) + return -ENOMEM; + + ret = gh_msgq_platform_probe_direction(pdev, true, &rm->tx_ghrsc); + if (ret) + goto err_cache; + + ret = gh_msgq_platform_probe_direction(pdev, false, &rm->rx_ghrsc); + if (ret) + goto err_cache; + + rm->msgq_client.dev = &pdev->dev; + rm->msgq_client.tx_block = true; + rm->msgq_client.rx_callback = gh_rm_msgq_rx_data; + rm->msgq_client.tx_done = gh_rm_msgq_tx_done; + + return gh_msgq_init(&pdev->dev, &rm->msgq, &rm->msgq_client, &rm->tx_ghrsc, &rm->rx_ghrsc); +err_cache: + kmem_cache_destroy(rm->cache); + return ret; +} + +static int gh_rm_drv_remove(struct platform_device *pdev) +{ + struct gh_rm *rm = platform_get_drvdata(pdev); + + mbox_free_channel(gh_msgq_chan(&rm->msgq)); + gh_msgq_remove(&rm->msgq); + kmem_cache_destroy(rm->cache); + + return 0; +} + +static const struct of_device_id gh_rm_of_match[] = { + { .compatible = "gunyah-resource-manager" }, + {} +}; +MODULE_DEVICE_TABLE(of, gh_rm_of_match); + +static struct platform_driver gh_rm_driver = { + .probe = gh_rm_drv_probe, + .remove = gh_rm_drv_remove, + .driver = { + .name = "gh_rsc_mgr", + .of_match_table = gh_rm_of_match, + }, +}; +module_platform_driver(gh_rm_driver); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Gunyah Resource Manager Driver"); diff --git a/drivers/virt/gunyah/rsc_mgr.h b/drivers/virt/gunyah/rsc_mgr.h new file mode 100644 index 000000000000..3665ebc7b020 --- /dev/null +++ b/drivers/virt/gunyah/rsc_mgr.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. 
All rights reserved. + */ +#ifndef __GH_RSC_MGR_PRIV_H +#define __GH_RSC_MGR_PRIV_H + +#include +#include +#include + +struct gh_rm; +int gh_rm_call(struct gh_rm *rsc_mgr, u32 message_id, void *req_buff, size_t req_buf_size, + void **resp_buf, size_t *resp_buf_size); + +#endif diff --git a/include/linux/gunyah_rsc_mgr.h b/include/linux/gunyah_rsc_mgr.h new file mode 100644 index 000000000000..deca9b3da541 --- /dev/null +++ b/include/linux/gunyah_rsc_mgr.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef _GUNYAH_RSC_MGR_H +#define _GUNYAH_RSC_MGR_H + +#include +#include +#include + +#define GH_VMID_INVAL U16_MAX + +struct gh_rm; +int gh_rm_notifier_register(struct gh_rm *rm, struct notifier_block *nb); +int gh_rm_notifier_unregister(struct gh_rm *rm, struct notifier_block *nb); +struct device *gh_rm_get(struct gh_rm *rm); +void gh_rm_put(struct gh_rm *rm); + +#endif From patchwork Sat Mar 4 01:06:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 659075 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5CBDBC64EC4 for ; Sat, 4 Mar 2023 01:08:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229506AbjCDBH6 (ORCPT ); Fri, 3 Mar 2023 20:07:58 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34740 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229874AbjCDBH2 (ORCPT ); Fri, 3 Mar 2023 20:07:28 -0500 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D5ED365054; Fri, 3 Mar 2023 17:07:10 -0800 (PST) Received: from pps.filterd (m0279871.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 323MXBj2013044; Sat, 4 Mar 2023 01:06:58 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=qcppdkim1; bh=JlyHqXP7Soq3BVBkMamSVK9ujf73myqb4KE+XUy1blU=; b=crhKsJznVTFjHCT2VrtO0DCDFRxWf09Ra70xdA5b48qlWbmkG14JJ1NXTeYzTliyGDc4 6rLICaEIDhW+07in3jznrL75rGnxXS2MOAZBgeBvHp/GBO2pBFGu/ow/0iornqdQsblg 3se9Hvzyyr7yTBVMSpygcR/4xi6I3SdgNaQPClvqbmfUPetKS5YQ+bZ0SKxpJWgmI4Jc HQduplzZQiDJE7po6FvBMfLGU27eP7miTuKDHNcyxntu6cWNUnrsjyWXYOm7LQg9Z67a 11pOgie64KXW0vjZmz+rdFTX4h+GrviTSDk1lgTz7gdBVe5Chz43UdwDe51EFrj3erJ0 qg== Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3p3ph4rpdg-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 04 Mar 2023 01:06:58 +0000 Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 32416vTp031701 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 4 Mar 2023 01:06:57 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Fri, 3 Mar 2023 17:06:56 -0800 
From: Elliot Berman To: Alex Elder , Srinivas Kandagatla , Elliot Berman , Prakruthi Deepak Heragu CC: Murali Nalajala , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Dmitry Baryshkov , Bjorn Andersson , "Konrad Dybcio" , Arnd Bergmann , "Greg Kroah-Hartman" , Rob Herring , Krzysztof Kozlowski , Jonathan Corbet , Bagas Sanjaya , Will Deacon , Andy Gross , Catalin Marinas , Jassi Brar , , , , , Subject: [PATCH v11 09/26] gunyah: rsc_mgr: Add VM lifecycle RPC Date: Fri, 3 Mar 2023 17:06:15 -0800 Message-ID: <20230304010632.2127470-10-quic_eberman@quicinc.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230304010632.2127470-1-quic_eberman@quicinc.com> References: <20230304010632.2127470-1-quic_eberman@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.49.16.6] X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: BhzwMmc3EXhrPh6P32A8rJDh1iAv9dCF X-Proofpoint-GUID: BhzwMmc3EXhrPh6P32A8rJDh1iAv9dCF X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-03-03_07,2023-03-03_01,2023-02-09_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 spamscore=0 lowpriorityscore=0 mlxscore=0 suspectscore=0 priorityscore=1501 clxscore=1015 impostorscore=0 adultscore=0 mlxlogscore=999 bulkscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2212070000 definitions=main-2303040004 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Add Gunyah Resource Manager RPC to launch an unauthenticated VM. Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Makefile | 2 +- drivers/virt/gunyah/rsc_mgr_rpc.c | 260 ++++++++++++++++++++++++++++++ include/linux/gunyah_rsc_mgr.h | 73 +++++++++ 3 files changed, 334 insertions(+), 1 deletion(-) create mode 100644 drivers/virt/gunyah/rsc_mgr_rpc.c diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index cc864ff5abbb..de29769f2f3f 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -2,5 +2,5 @@ obj-$(CONFIG_GUNYAH) += gunyah.o -gunyah_rsc_mgr-y += rsc_mgr.o +gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o obj-$(CONFIG_GUNYAH) += gunyah_rsc_mgr.o diff --git a/drivers/virt/gunyah/rsc_mgr_rpc.c b/drivers/virt/gunyah/rsc_mgr_rpc.c new file mode 100644 index 000000000000..ffcb861a31b5 --- /dev/null +++ b/drivers/virt/gunyah/rsc_mgr_rpc.c @@ -0,0 +1,260 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include +#include "rsc_mgr.h" + +/* Message IDs: VM Management */ +#define GH_RM_RPC_VM_ALLOC_VMID 0x56000001 +#define GH_RM_RPC_VM_DEALLOC_VMID 0x56000002 +#define GH_RM_RPC_VM_START 0x56000004 +#define GH_RM_RPC_VM_STOP 0x56000005 +#define GH_RM_RPC_VM_RESET 0x56000006 +#define GH_RM_RPC_VM_CONFIG_IMAGE 0x56000009 +#define GH_RM_RPC_VM_INIT 0x5600000B +#define GH_RM_RPC_VM_GET_HYP_RESOURCES 0x56000020 +#define GH_RM_RPC_VM_GET_VMID 0x56000024 + +struct gh_rm_vm_common_vmid_req { + __le16 vmid; + __le16 _padding; +} __packed; + +/* Call: VM_ALLOC */ +struct gh_rm_vm_alloc_vmid_resp { + __le16 vmid; + __le16 _padding; +} __packed; + +/* Call: VM_STOP */ +#define GH_RM_VM_STOP_FLAG_FORCE_STOP BIT(0) + +#define GH_RM_VM_STOP_REASON_FORCE_STOP 3 + +struct gh_rm_vm_stop_req { + __le16 vmid; + u8 flags; + u8 _padding; + __le32 stop_reason; +} __packed; + +/* Call: VM_CONFIG_IMAGE */ +struct gh_rm_vm_config_image_req { + __le16 vmid; + __le16 auth_mech; + __le32 mem_handle; + __le64 image_offset; + __le64 image_size; + __le64 dtb_offset; + __le64 dtb_size; +} __packed; + +/* + * Several RM calls take only a VMID as a parameter and give only standard + * response back. Deduplicate boilerplate code by using this common call. + */ +static int gh_rm_common_vmid_call(struct gh_rm *rm, u32 message_id, u16 vmid) +{ + struct gh_rm_vm_common_vmid_req req_payload = { + .vmid = cpu_to_le16(vmid), + }; + + return gh_rm_call(rm, message_id, &req_payload, sizeof(req_payload), NULL, NULL); +} + +/** + * gh_rm_alloc_vmid() - Allocate a new VM in Gunyah. Returns the VM identifier. + * @rm: Handle to a Gunyah resource manager + * @vmid: Use 0 to dynamically allocate a VM. A reserved VMID can be supplied + * to request allocation of a platform-defined VM. + * + * Returns - the allocated VMID or negative value on error + */ +int gh_rm_alloc_vmid(struct gh_rm *rm, u16 vmid) +{ + struct gh_rm_vm_common_vmid_req req_payload = { + .vmid = vmid, + }; + struct gh_rm_vm_alloc_vmid_resp *resp_payload; + size_t resp_size; + void *resp; + int ret; + + ret = gh_rm_call(rm, GH_RM_RPC_VM_ALLOC_VMID, &req_payload, sizeof(req_payload), &resp, + &resp_size); + if (ret) + return ret; + + if (!vmid) { + resp_payload = resp; + ret = le16_to_cpu(resp_payload->vmid); + kfree(resp); + } + + return ret; +} + +/** + * gh_rm_dealloc_vmid() - Dispose the VMID + * @rm: Handle to a Gunyah resource manager + * @vmid: VM identifier allocated with gh_rm_alloc_vmid + */ +int gh_rm_dealloc_vmid(struct gh_rm *rm, u16 vmid) +{ + return gh_rm_common_vmid_call(rm, GH_RM_RPC_VM_DEALLOC_VMID, vmid); +} + +/** + * gh_rm_vm_reset() - Reset the VM's resources + * @rm: Handle to a Gunyah resource manager + * @vmid: VM identifier allocated with gh_rm_alloc_vmid + * + * While tearing down the VM, request RM to clean up all the VM resources + * associated with the VM. Only after this, Linux can clean up all the + * references it maintains to resources. + */ +int gh_rm_vm_reset(struct gh_rm *rm, u16 vmid) +{ + return gh_rm_common_vmid_call(rm, GH_RM_RPC_VM_RESET, vmid); +} + +/** + * gh_rm_vm_start() - Move the VM into "ready to run" state + * @rm: Handle to a Gunyah resource manager + * @vmid: VM identifier allocated with gh_rm_alloc_vmid + * + * On VMs which use proxy scheduling, vcpu_run is needed to actually run the VM. + * On VMs which use Gunyah's scheduling, the vCPUs start executing in accordance with Gunyah + * scheduling policies. 
+ */ +int gh_rm_vm_start(struct gh_rm *rm, u16 vmid) +{ + return gh_rm_common_vmid_call(rm, GH_RM_RPC_VM_START, vmid); +} + +/** + * gh_rm_vm_stop() - Send a request to Resource Manager VM to forcibly stop a VM. + * @rm: Handle to a Gunyah resource manager + * @vmid: VM identifier allocated with gh_rm_alloc_vmid + */ +int gh_rm_vm_stop(struct gh_rm *rm, u16 vmid) +{ + struct gh_rm_vm_stop_req req_payload = { + .vmid = cpu_to_le16(vmid), + .flags = GH_RM_VM_STOP_FLAG_FORCE_STOP, + .stop_reason = cpu_to_le32(GH_RM_VM_STOP_REASON_FORCE_STOP), + }; + + return gh_rm_call(rm, GH_RM_RPC_VM_STOP, &req_payload, sizeof(req_payload), NULL, NULL); +} + +/** + * gh_rm_vm_configure() - Prepare a VM to start and provide the common + * configuration needed by RM to configure a VM + * @rm: Handle to a Gunyah resource manager + * @vmid: VM identifier allocated with gh_rm_alloc_vmid + * @auth_mechanism: Authentication mechanism used by resource manager to verify + * the virtual machine + * @mem_handle: Handle to a previously shared memparcel that contains all parts + * of the VM image subject to authentication. + * @image_offset: Start address of VM image, relative to the start of memparcel + * @image_size: Size of the VM image + * @dtb_offset: Start address of the devicetree binary with VM configuration, + * relative to start of memparcel. + * @dtb_size: Maximum size of devicetree binary. Resource manager applies + * an overlay to the DTB and dtb_size should include room for + * the overlay. + */ +int gh_rm_vm_configure(struct gh_rm *rm, u16 vmid, enum gh_rm_vm_auth_mechanism auth_mechanism, + u32 mem_handle, u64 image_offset, u64 image_size, u64 dtb_offset, u64 dtb_size) +{ + struct gh_rm_vm_config_image_req req_payload = { + .vmid = cpu_to_le16(vmid), + .auth_mech = cpu_to_le16(auth_mechanism), + .mem_handle = cpu_to_le32(mem_handle), + .image_offset = cpu_to_le64(image_offset), + .image_size = cpu_to_le64(image_size), + .dtb_offset = cpu_to_le64(dtb_offset), + .dtb_size = cpu_to_le64(dtb_size), + }; + + return gh_rm_call(rm, GH_RM_RPC_VM_CONFIG_IMAGE, &req_payload, sizeof(req_payload), + NULL, NULL); +} + +/** + * gh_rm_vm_init() - Move the VM to initialized state. + * @rm: Handle to a Gunyah resource manager + * @vmid: VM identifier + * + * RM will allocate needed resources for the VM. + */ +int gh_rm_vm_init(struct gh_rm *rm, u16 vmid) +{ + return gh_rm_common_vmid_call(rm, GH_RM_RPC_VM_INIT, vmid); +} + +/** + * gh_rm_get_hyp_resources() - Retrieve hypervisor resources (capabilities) associated with a VM + * @rm: Handle to a Gunyah resource manager + * @vmid: VMID of the other VM to get the resources of + * @resources: Set by gh_rm_get_hyp_resources and contains the returned hypervisor resources. 
+ */ +int gh_rm_get_hyp_resources(struct gh_rm *rm, u16 vmid, + struct gh_rm_hyp_resources **resources) +{ + struct gh_rm_vm_common_vmid_req req_payload = { + .vmid = cpu_to_le16(vmid), + }; + struct gh_rm_hyp_resources *resp; + size_t resp_size; + int ret; + + ret = gh_rm_call(rm, GH_RM_RPC_VM_GET_HYP_RESOURCES, + &req_payload, sizeof(req_payload), + (void **)&resp, &resp_size); + if (ret) + return ret; + + if (!resp_size) + return -EBADMSG; + + if (resp_size < struct_size(resp, entries, 0) || + resp_size != struct_size(resp, entries, le32_to_cpu(resp->n_entries))) { + kfree(resp); + return -EBADMSG; + } + + *resources = resp; + return 0; +} + +/** + * gh_rm_get_vmid() - Retrieve VMID of this virtual machine + * @rm: Handle to a Gunyah resource manager + * @vmid: Filled with the VMID of this VM + */ +int gh_rm_get_vmid(struct gh_rm *rm, u16 *vmid) +{ + static u16 cached_vmid = GH_VMID_INVAL; + size_t resp_size; + __le32 *resp; + int ret; + + if (cached_vmid != GH_VMID_INVAL) { + *vmid = cached_vmid; + return 0; + } + + ret = gh_rm_call(rm, GH_RM_RPC_VM_GET_VMID, NULL, 0, (void **)&resp, &resp_size); + if (ret) + return ret; + + *vmid = cached_vmid = lower_16_bits(le32_to_cpu(*resp)); + kfree(resp); + + return ret; +} +EXPORT_SYMBOL_GPL(gh_rm_get_vmid); diff --git a/include/linux/gunyah_rsc_mgr.h b/include/linux/gunyah_rsc_mgr.h index deca9b3da541..6a2f434e67f7 100644 --- a/include/linux/gunyah_rsc_mgr.h +++ b/include/linux/gunyah_rsc_mgr.h @@ -18,4 +18,77 @@ int gh_rm_notifier_unregister(struct gh_rm *rm, struct notifier_block *nb); struct device *gh_rm_get(struct gh_rm *rm); void gh_rm_put(struct gh_rm *rm); +struct gh_rm_vm_exited_payload { + __le16 vmid; + __le16 exit_type; + __le32 exit_reason_size; + u8 exit_reason[]; +} __packed; + +#define GH_RM_NOTIFICATION_VM_EXITED 0x56100001 + +enum gh_rm_vm_status { + GH_RM_VM_STATUS_NO_STATE = 0, + GH_RM_VM_STATUS_INIT = 1, + GH_RM_VM_STATUS_READY = 2, + GH_RM_VM_STATUS_RUNNING = 3, + GH_RM_VM_STATUS_PAUSED = 4, + GH_RM_VM_STATUS_LOAD = 5, + GH_RM_VM_STATUS_AUTH = 6, + GH_RM_VM_STATUS_INIT_FAILED = 8, + GH_RM_VM_STATUS_EXITED = 9, + GH_RM_VM_STATUS_RESETTING = 10, + GH_RM_VM_STATUS_RESET = 11, +}; + +struct gh_rm_vm_status_payload { + __le16 vmid; + u16 reserved; + u8 vm_status; + u8 os_status; + __le16 app_status; +} __packed; + +#define GH_RM_NOTIFICATION_VM_STATUS 0x56100008 + +/* RPC Calls */ +int gh_rm_alloc_vmid(struct gh_rm *rm, u16 vmid); +int gh_rm_dealloc_vmid(struct gh_rm *rm, u16 vmid); +int gh_rm_vm_reset(struct gh_rm *rm, u16 vmid); +int gh_rm_vm_start(struct gh_rm *rm, u16 vmid); +int gh_rm_vm_stop(struct gh_rm *rm, u16 vmid); + +enum gh_rm_vm_auth_mechanism { + GH_RM_VM_AUTH_NONE = 0, + GH_RM_VM_AUTH_QCOM_PIL_ELF = 1, + GH_RM_VM_AUTH_QCOM_ANDROID_PVM = 2, +}; + +int gh_rm_vm_configure(struct gh_rm *rm, u16 vmid, enum gh_rm_vm_auth_mechanism auth_mechanism, + u32 mem_handle, u64 image_offset, u64 image_size, + u64 dtb_offset, u64 dtb_size); +int gh_rm_vm_init(struct gh_rm *rm, u16 vmid); + +struct gh_rm_hyp_resource { + u8 type; + u8 reserved; + __le16 partner_vmid; + __le32 resource_handle; + __le32 resource_label; + __le64 cap_id; + __le32 virq_handle; + __le32 virq; + __le64 base; + __le64 size; +} __packed; + +struct gh_rm_hyp_resources { + __le32 n_entries; + struct gh_rm_hyp_resource entries[]; +} __packed; + +int gh_rm_get_hyp_resources(struct gh_rm *rm, u16 vmid, + struct gh_rm_hyp_resources **resources); +int gh_rm_get_vmid(struct gh_rm *rm, u16 *vmid); + #endif From patchwork Sat Mar 4 01:06:18 2023 Content-Type: 
text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 659073 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 67589C742A7 for ; Sat, 4 Mar 2023 01:08:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230041AbjCDBIC (ORCPT ); Fri, 3 Mar 2023 20:08:02 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34894 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230015AbjCDBHd (ORCPT ); Fri, 3 Mar 2023 20:07:33 -0500 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8D6C069070; Fri, 3 Mar 2023 17:07:17 -0800 (PST) Received: from pps.filterd (m0279863.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 324124Uf012689; Sat, 4 Mar 2023 01:07:03 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=qcppdkim1; bh=5voBfN4UfwS1Tsnhw5B4L1qKGGC+z5LiXjYwrWjVKeI=; b=WioLy2UQAy8mMYu9+kYmzuGpVczWdLcqVgkHRQUfVZikmaCx4Q+oycctmYi4T+a0fVYZ pa8984ufIWpFIAAUrHfs+JxQXsVLsitEBV+P+x/6k2CKmEQb6MylwxaF5FUiQbnOrdR4 H6B23dl2b1TW4XNy8BIXR2Klnq6HWcJDYvgSd3kNQhiDUnc8CH9yEfngZYZb4jrVE31+ +ZcTrEImQA998rZZGzxQKLYO335LiRkcezRbSg/FpDS/bwsrSt+1cFjJP9oNN515OYCW QuuW1ALuvU0ITnKb0mM+xxnQJSX1vifK6o/2e3p7jgQHGJUR6aEQGu7WI0KewSItIHlf vw== Received: from nasanppmta01.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3p3bsbtreh-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 04 Mar 2023 01:07:02 +0000 Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA01.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 324172SD025881 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 4 Mar 2023 01:07:02 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Fri, 3 Mar 2023 17:07:01 -0800 From: Elliot Berman To: Alex Elder , Srinivas Kandagatla , Elliot Berman , Prakruthi Deepak Heragu CC: Murali Nalajala , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Dmitry Baryshkov , Bjorn Andersson , "Konrad Dybcio" , Arnd Bergmann , "Greg Kroah-Hartman" , Rob Herring , Krzysztof Kozlowski , Jonathan Corbet , Bagas Sanjaya , Will Deacon , Andy Gross , Catalin Marinas , Jassi Brar , , , , , Subject: [PATCH v11 12/26] gunyah: vm_mgr: Add/remove user memory regions Date: Fri, 3 Mar 2023 17:06:18 -0800 Message-ID: <20230304010632.2127470-13-quic_eberman@quicinc.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230304010632.2127470-1-quic_eberman@quicinc.com> References: <20230304010632.2127470-1-quic_eberman@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.49.16.6] X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: 
623eFvYYF9RxhFnsOCc_wPZ3WEzvhNPz X-Proofpoint-ORIG-GUID: 623eFvYYF9RxhFnsOCc_wPZ3WEzvhNPz X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-03-03_07,2023-03-03_01,2023-02-09_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 malwarescore=0 mlxlogscore=999 spamscore=0 phishscore=0 impostorscore=0 suspectscore=0 bulkscore=0 clxscore=1015 adultscore=0 priorityscore=1501 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2212070000 definitions=main-2303040005 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org When launching a virtual machine, Gunyah userspace allocates memory for the guest and informs Gunyah about these memory regions through SET_USER_MEMORY_REGION ioctl. Co-developed-by: Prakruthi Deepak Heragu Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Makefile | 2 +- drivers/virt/gunyah/vm_mgr.c | 44 ++++++ drivers/virt/gunyah/vm_mgr.h | 25 ++++ drivers/virt/gunyah/vm_mgr_mm.c | 229 ++++++++++++++++++++++++++++++++ include/uapi/linux/gunyah.h | 29 ++++ 5 files changed, 328 insertions(+), 1 deletion(-) create mode 100644 drivers/virt/gunyah/vm_mgr_mm.c diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index 03951cf82023..ff8bc4925392 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -2,5 +2,5 @@ obj-$(CONFIG_GUNYAH) += gunyah.o -gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o +gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o vm_mgr_mm.o obj-$(CONFIG_GUNYAH) += gunyah_rsc_mgr.o diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index dbacf36af72d..e950274c6a53 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -18,8 +18,16 @@ static void gh_vm_free(struct work_struct *work) { struct gh_vm *ghvm = container_of(work, struct gh_vm, free_work); + struct gh_vm_mem *mapping, *tmp; int ret; + mutex_lock(&ghvm->mm_lock); + list_for_each_entry_safe(mapping, tmp, &ghvm->memory_mappings, list) { + gh_vm_mem_reclaim(ghvm, mapping); + kfree(mapping); + } + mutex_unlock(&ghvm->mm_lock); + ret = gh_rm_dealloc_vmid(ghvm->rm, ghvm->vmid); if (ret) pr_warn("Failed to deallocate vmid: %d\n", ret); @@ -47,11 +55,44 @@ static __must_check struct gh_vm *gh_vm_alloc(struct gh_rm *rm) ghvm->vmid = vmid; ghvm->rm = rm; + mutex_init(&ghvm->mm_lock); + INIT_LIST_HEAD(&ghvm->memory_mappings); INIT_WORK(&ghvm->free_work, gh_vm_free); return ghvm; } +static long gh_vm_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) +{ + struct gh_vm *ghvm = filp->private_data; + void __user *argp = (void __user *)arg; + long r; + + switch (cmd) { + case GH_VM_SET_USER_MEM_REGION: { + struct gh_userspace_memory_region region; + + if (!gh_api_has_feature(GH_FEATURE_MEMEXTENT)) + return -EOPNOTSUPP; + + if (copy_from_user(®ion, argp, sizeof(region))) + return -EFAULT; + + /* All other flag bits are reserved for future use */ + if (region.flags & ~(GH_MEM_ALLOW_READ | GH_MEM_ALLOW_WRITE | GH_MEM_ALLOW_EXEC)) + return -EINVAL; + + r = gh_vm_mem_alloc(ghvm, ®ion); + break; + } + default: + r = -ENOTTY; + break; + } + + return r; +} + static int gh_vm_release(struct inode *inode, struct file *filp) { struct gh_vm *ghvm = filp->private_data; @@ -64,6 +105,9 @@ static int gh_vm_release(struct inode *inode, struct file *filp) } static const struct file_operations gh_vm_fops = { + .owner = THIS_MODULE, + 
.unlocked_ioctl = gh_vm_ioctl, + .compat_ioctl = compat_ptr_ioctl, .release = gh_vm_release, .llseek = noop_llseek, }; diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index 4b22fbcac91c..c9f6fa5478ed 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -7,17 +7,42 @@ #define _GH_PRIV_VM_MGR_H #include +#include +#include +#include #include long gh_dev_vm_mgr_ioctl(struct gh_rm *rm, unsigned int cmd, unsigned long arg); +enum gh_vm_mem_share_type { + VM_MEM_SHARE, + VM_MEM_LEND, +}; + +struct gh_vm_mem { + struct list_head list; + enum gh_vm_mem_share_type share_type; + struct gh_rm_mem_parcel parcel; + + __u64 guest_phys_addr; + struct page **pages; + unsigned long npages; +}; + struct gh_vm { u16 vmid; struct gh_rm *rm; struct device *parent; struct work_struct free_work; + struct mutex mm_lock; + struct list_head memory_mappings; }; +int gh_vm_mem_alloc(struct gh_vm *ghvm, struct gh_userspace_memory_region *region); +void gh_vm_mem_reclaim(struct gh_vm *ghvm, struct gh_vm_mem *mapping); +int gh_vm_mem_free(struct gh_vm *ghvm, u32 label); +struct gh_vm_mem *gh_vm_mem_find_by_label(struct gh_vm *ghvm, u32 label); + #endif diff --git a/drivers/virt/gunyah/vm_mgr_mm.c b/drivers/virt/gunyah/vm_mgr_mm.c new file mode 100644 index 000000000000..db6f55cef37f --- /dev/null +++ b/drivers/virt/gunyah/vm_mgr_mm.c @@ -0,0 +1,229 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#define pr_fmt(fmt) "gh_vm_mgr: " fmt + +#include +#include + +#include + +#include "vm_mgr.h" + +static struct gh_vm_mem *__gh_vm_mem_find_by_label(struct gh_vm *ghvm, u32 label) + __must_hold(&ghvm->mm_lock) +{ + struct gh_vm_mem *mapping; + + list_for_each_entry(mapping, &ghvm->memory_mappings, list) + if (mapping->parcel.label == label) + return mapping; + + return NULL; +} + +void gh_vm_mem_reclaim(struct gh_vm *ghvm, struct gh_vm_mem *mapping) + __must_hold(&ghvm->mm_lock) +{ + int i, ret = 0; + + if (mapping->parcel.mem_handle != GH_MEM_HANDLE_INVAL) { + ret = gh_rm_mem_reclaim(ghvm->rm, &mapping->parcel); + if (ret) + pr_warn("Failed to reclaim memory parcel for label %d: %d\n", + mapping->parcel.label, ret); + } + + if (!ret) + for (i = 0; i < mapping->npages; i++) + unpin_user_page(mapping->pages[i]); + + kfree(mapping->pages); + kfree(mapping->parcel.acl_entries); + kfree(mapping->parcel.mem_entries); + + list_del(&mapping->list); +} + +struct gh_vm_mem *gh_vm_mem_find_by_label(struct gh_vm *ghvm, u32 label) +{ + struct gh_vm_mem *mapping; + int ret; + + ret = mutex_lock_interruptible(&ghvm->mm_lock); + if (ret) + return ERR_PTR(ret); + + mapping = __gh_vm_mem_find_by_label(ghvm, label); + mutex_unlock(&ghvm->mm_lock); + + return mapping ? 
: ERR_PTR(-ENODEV); +} + +int gh_vm_mem_alloc(struct gh_vm *ghvm, struct gh_userspace_memory_region *region) +{ + struct gh_vm_mem *mapping, *tmp_mapping; + struct gh_rm_mem_entry *mem_entries; + phys_addr_t curr_page, prev_page; + struct gh_rm_mem_parcel *parcel; + int i, j, pinned, ret = 0; + size_t entry_size; + u16 vmid; + + if (!region->memory_size || !PAGE_ALIGNED(region->memory_size) || + !PAGE_ALIGNED(region->userspace_addr) || !PAGE_ALIGNED(region->guest_phys_addr)) + return -EINVAL; + + if (region->guest_phys_addr + region->memory_size < region->guest_phys_addr) + return -EOVERFLOW; + + ret = mutex_lock_interruptible(&ghvm->mm_lock); + if (ret) + return ret; + + mapping = __gh_vm_mem_find_by_label(ghvm, region->label); + if (mapping) { + mutex_unlock(&ghvm->mm_lock); + return -EEXIST; + } + + mapping = kzalloc(sizeof(*mapping), GFP_KERNEL); + if (!mapping) { + mutex_unlock(&ghvm->mm_lock); + return -ENOMEM; + } + + mapping->parcel.label = region->label; + mapping->guest_phys_addr = region->guest_phys_addr; + mapping->npages = region->memory_size >> PAGE_SHIFT; + parcel = &mapping->parcel; + parcel->mem_handle = GH_MEM_HANDLE_INVAL; /* to be filled later by mem_share/mem_lend */ + parcel->mem_type = GH_RM_MEM_TYPE_NORMAL; + + /* Check for overlap */ + list_for_each_entry(tmp_mapping, &ghvm->memory_mappings, list) { + if (!((mapping->guest_phys_addr + (mapping->npages << PAGE_SHIFT) <= + tmp_mapping->guest_phys_addr) || + (mapping->guest_phys_addr >= + tmp_mapping->guest_phys_addr + (tmp_mapping->npages << PAGE_SHIFT)))) { + ret = -EEXIST; + goto free_mapping; + } + } + + list_add(&mapping->list, &ghvm->memory_mappings); + + mapping->pages = kcalloc(mapping->npages, sizeof(*mapping->pages), GFP_KERNEL); + if (!mapping->pages) { + ret = -ENOMEM; + mapping->npages = 0; /* update npages for reclaim */ + goto reclaim; + } + + pinned = pin_user_pages_fast(region->userspace_addr, mapping->npages, + FOLL_WRITE | FOLL_LONGTERM, mapping->pages); + if (pinned < 0) { + ret = pinned; + mapping->npages = 0; /* update npages for reclaim */ + goto reclaim; + } else if (pinned != mapping->npages) { + ret = -EFAULT; + mapping->npages = pinned; /* update npages for reclaim */ + goto reclaim; + } + + parcel->n_acl_entries = 2; + mapping->share_type = VM_MEM_SHARE; + parcel->acl_entries = kcalloc(parcel->n_acl_entries, sizeof(*parcel->acl_entries), + GFP_KERNEL); + if (!parcel->acl_entries) { + ret = -ENOMEM; + goto reclaim; + } + + parcel->acl_entries[0].vmid = cpu_to_le16(ghvm->vmid); + + if (region->flags & GH_MEM_ALLOW_READ) + parcel->acl_entries[0].perms |= GH_RM_ACL_R; + if (region->flags & GH_MEM_ALLOW_WRITE) + parcel->acl_entries[0].perms |= GH_RM_ACL_W; + if (region->flags & GH_MEM_ALLOW_EXEC) + parcel->acl_entries[0].perms |= GH_RM_ACL_X; + + if (mapping->share_type == VM_MEM_SHARE) { + ret = gh_rm_get_vmid(ghvm->rm, &vmid); + if (ret) + goto reclaim; + + parcel->acl_entries[1].vmid = cpu_to_le16(vmid); + /* Host assumed to have all these permissions. 
Gunyah will not + * grant new permissions if host actually had less than RWX + */ + parcel->acl_entries[1].perms |= GH_RM_ACL_R | GH_RM_ACL_W | GH_RM_ACL_X; + } + + mem_entries = kcalloc(mapping->npages, sizeof(*mem_entries), GFP_KERNEL); + if (!mem_entries) { + ret = -ENOMEM; + goto reclaim; + } + + /* reduce number of entries by combining contiguous pages into single memory entry */ + prev_page = page_to_phys(mapping->pages[0]); + mem_entries[0].ipa_base = cpu_to_le64(prev_page); + entry_size = PAGE_SIZE; + for (i = 1, j = 0; i < mapping->npages; i++) { + curr_page = page_to_phys(mapping->pages[i]); + if (curr_page - prev_page == PAGE_SIZE) { + entry_size += PAGE_SIZE; + } else { + mem_entries[j].size = cpu_to_le64(entry_size); + j++; + mem_entries[j].ipa_base = cpu_to_le64(curr_page); + entry_size = PAGE_SIZE; + } + + prev_page = curr_page; + } + mem_entries[j].size = cpu_to_le64(entry_size); + + parcel->n_mem_entries = j + 1; + parcel->mem_entries = kmemdup(mem_entries, sizeof(*mem_entries) * parcel->n_mem_entries, + GFP_KERNEL); + kfree(mem_entries); + if (!parcel->mem_entries) { + ret = -ENOMEM; + goto reclaim; + } + + mutex_unlock(&ghvm->mm_lock); + return 0; +reclaim: + gh_vm_mem_reclaim(ghvm, mapping); +free_mapping: + kfree(mapping); + mutex_unlock(&ghvm->mm_lock); + return ret; +} + +int gh_vm_mem_free(struct gh_vm *ghvm, u32 label) +{ + struct gh_vm_mem *mapping; + int ret; + + ret = mutex_lock_interruptible(&ghvm->mm_lock); + if (ret) + return ret; + + mapping = __gh_vm_mem_find_by_label(ghvm, label); + if (!mapping) + goto out; + + gh_vm_mem_reclaim(ghvm, mapping); + kfree(mapping); +out: + mutex_unlock(&ghvm->mm_lock); + return ret; +} diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index 10ba32d2b0a6..a19207e3e065 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -20,4 +20,33 @@ */ #define GH_CREATE_VM _IO(GH_IOCTL_TYPE, 0x0) /* Returns a Gunyah VM fd */ +/* + * ioctls for VM fds + */ + +#define GH_MEM_ALLOW_READ (1UL << 0) +#define GH_MEM_ALLOW_WRITE (1UL << 1) +#define GH_MEM_ALLOW_EXEC (1UL << 2) + +/** + * struct gh_userspace_memory_region - Userspace memory descripion for GH_VM_SET_USER_MEM_REGION + * @label: Unique identifer to the region. + * @flags: Flags for memory parcel behavior + * @guest_phys_addr: Location of the memory region in guest's memory space (page-aligned) + * @memory_size: Size of the region (page-aligned) + * @userspace_addr: Location of the memory region in caller (userspace)'s memory + * + * See Documentation/virt/gunyah/vm-manager.rst for further details. 
+ */ +struct gh_userspace_memory_region { + __u32 label; + __u32 flags; + __u64 guest_phys_addr; + __u64 memory_size; + __u64 userspace_addr; +}; + +#define GH_VM_SET_USER_MEM_REGION _IOW(GH_IOCTL_TYPE, 0x1, \ + struct gh_userspace_memory_region) + #endif From patchwork Sat Mar 4 01:06:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 659071 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 51CB0C7619A for ; Sat, 4 Mar 2023 01:08:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230110AbjCDBID (ORCPT ); Fri, 3 Mar 2023 20:08:03 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34792 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230006AbjCDBHd (ORCPT ); Fri, 3 Mar 2023 20:07:33 -0500 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8E46069078; Fri, 3 Mar 2023 17:07:17 -0800 (PST) Received: from pps.filterd (m0279862.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 323NdeqR017178; Sat, 4 Mar 2023 01:07:04 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=qcppdkim1; bh=MaVCwpT5GoBYhUfIWvF2v8THe49xZ8C+mZ8RNreqWGo=; b=K+BJ448nZkgGEv3RV5JB+oTGgLaKJgl0i+CVQZm/cUafjLlYjjKUbVwbLcFe7Rg9CvSm vuC/EWqrhjDeMnruajyDfU5CwaFEZ5QggmW4Veq2HhUdqzPnsGCOPuq9vrMMHNGliJnl eqKDEzAQm9UxUgGZQxmMwlAfoFmvqyjGCYOR+pANF29o/duwNWJgIO8lA1FWAWFqaNPd mK7E8pqVnBYKp9t/6oI0iWfa7HjezPp2SCbVMwr5DrhmiSV+kzXXXPsUHI5oGm/M/oo1 xceMrsssSyC1KANhODpnWmQIZbsz2N5OCMtlGAnBMO2QMS2cKBIE43Swg1nrEbw347er XA== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3p3dfrjf3x-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 04 Mar 2023 01:07:04 +0000 Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 324173gY018726 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 4 Mar 2023 01:07:03 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Fri, 3 Mar 2023 17:07:02 -0800 From: Elliot Berman To: Alex Elder , Srinivas Kandagatla , Elliot Berman , Prakruthi Deepak Heragu CC: Murali Nalajala , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Dmitry Baryshkov , Bjorn Andersson , "Konrad Dybcio" , Arnd Bergmann , "Greg Kroah-Hartman" , Rob Herring , Krzysztof Kozlowski , Jonathan Corbet , Bagas Sanjaya , Will Deacon , Andy Gross , Catalin Marinas , Jassi Brar , , , , , Subject: [PATCH v11 13/26] gunyah: vm_mgr: Add ioctls to support basic non-proxy VM boot Date: Fri, 3 Mar 2023 17:06:19 -0800 Message-ID: <20230304010632.2127470-14-quic_eberman@quicinc.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230304010632.2127470-1-quic_eberman@quicinc.com> References: 
<20230304010632.2127470-1-quic_eberman@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.49.16.6] X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: ZTwRCB7bU_EhQRaXBPnsYGMVGFJAzsIV X-Proofpoint-ORIG-GUID: ZTwRCB7bU_EhQRaXBPnsYGMVGFJAzsIV X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-03-03_07,2023-03-03_01,2023-02-09_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 phishscore=0 adultscore=0 impostorscore=0 bulkscore=0 mlxlogscore=999 clxscore=1015 priorityscore=1501 malwarescore=0 lowpriorityscore=0 spamscore=0 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2212070000 definitions=main-2303040005 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Add remaining ioctls to support non-proxy VM boot: - Gunyah Resource Manager uses the VM's devicetree to configure the virtual machine. The location of the devicetree in the guest's virtual memory can be declared via the SET_DTB_CONFIG ioctl. - Trigger start of the virtual machine with VM_START ioctl. Co-developed-by: Prakruthi Deepak Heragu Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman --- drivers/virt/gunyah/vm_mgr.c | 243 ++++++++++++++++++++++++++++++-- drivers/virt/gunyah/vm_mgr.h | 10 ++ drivers/virt/gunyah/vm_mgr_mm.c | 23 +++ include/linux/gunyah_rsc_mgr.h | 6 + include/uapi/linux/gunyah.h | 13 ++ 5 files changed, 282 insertions(+), 13 deletions(-) diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index e950274c6a53..299b9bb81edc 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -9,37 +9,118 @@ #include #include #include +#include #include #include #include "vm_mgr.h" +static int gh_vm_rm_notification_status(struct gh_vm *ghvm, void *data) +{ + struct gh_rm_vm_status_payload *payload = data; + + if (payload->vmid != ghvm->vmid) + return NOTIFY_OK; + + /* All other state transitions are synchronous to a corresponding RM call */ + if (payload->vm_status == GH_RM_VM_STATUS_RESET) { + down_write(&ghvm->status_lock); + ghvm->vm_status = payload->vm_status; + up_write(&ghvm->status_lock); + wake_up(&ghvm->vm_status_wait); + } + + return NOTIFY_DONE; +} + +static int gh_vm_rm_notification_exited(struct gh_vm *ghvm, void *data) +{ + struct gh_rm_vm_exited_payload *payload = data; + + if (payload->vmid != ghvm->vmid) + return NOTIFY_OK; + + down_write(&ghvm->status_lock); + ghvm->vm_status = GH_RM_VM_STATUS_EXITED; + up_write(&ghvm->status_lock); + + return NOTIFY_DONE; +} + +static int gh_vm_rm_notification(struct notifier_block *nb, unsigned long action, void *data) +{ + struct gh_vm *ghvm = container_of(nb, struct gh_vm, nb); + + switch (action) { + case GH_RM_NOTIFICATION_VM_STATUS: + return gh_vm_rm_notification_status(ghvm, data); + case GH_RM_NOTIFICATION_VM_EXITED: + return gh_vm_rm_notification_exited(ghvm, data); + default: + return NOTIFY_OK; + } +} + +static void gh_vm_stop(struct gh_vm *ghvm) +{ + int ret; + + down_write(&ghvm->status_lock); + if (ghvm->vm_status == GH_RM_VM_STATUS_RUNNING) { + ret = gh_rm_vm_stop(ghvm->rm, ghvm->vmid); + if (ret) + dev_warn(ghvm->parent, "Failed to stop VM: %d\n", ret); + } + + ghvm->vm_status = GH_RM_VM_STATUS_EXITED; + up_write(&ghvm->status_lock); +} + static void 
gh_vm_free(struct work_struct *work) { struct gh_vm *ghvm = container_of(work, struct gh_vm, free_work); struct gh_vm_mem *mapping, *tmp; int ret; - mutex_lock(&ghvm->mm_lock); - list_for_each_entry_safe(mapping, tmp, &ghvm->memory_mappings, list) { - gh_vm_mem_reclaim(ghvm, mapping); - kfree(mapping); - } - mutex_unlock(&ghvm->mm_lock); - - ret = gh_rm_dealloc_vmid(ghvm->rm, ghvm->vmid); - if (ret) - pr_warn("Failed to deallocate vmid: %d\n", ret); + switch (ghvm->vm_status) { + case GH_RM_VM_STATUS_RUNNING: + gh_vm_stop(ghvm); + fallthrough; + case GH_RM_VM_STATUS_INIT_FAILED: + case GH_RM_VM_STATUS_LOAD: + case GH_RM_VM_STATUS_EXITED: + mutex_lock(&ghvm->mm_lock); + list_for_each_entry_safe(mapping, tmp, &ghvm->memory_mappings, list) { + gh_vm_mem_reclaim(ghvm, mapping); + kfree(mapping); + } + mutex_unlock(&ghvm->mm_lock); + fallthrough; + case GH_RM_VM_STATUS_NO_STATE: + ret = gh_rm_dealloc_vmid(ghvm->rm, ghvm->vmid); + if (ret) + dev_warn(ghvm->parent, "Failed to deallocate vmid: %d\n", ret); + + gh_rm_notifier_unregister(ghvm->rm, &ghvm->nb); + gh_rm_put(ghvm->rm); + kfree(ghvm); + break; + default: + dev_err(ghvm->parent, "VM is unknown state: %d. VM will not be cleaned up.\n", + ghvm->vm_status); - put_gh_rm(ghvm->rm); - kfree(ghvm); + gh_rm_notifier_unregister(ghvm->rm, &ghvm->nb); + gh_rm_put(ghvm->rm); + kfree(ghvm); + break; + } } static __must_check struct gh_vm *gh_vm_alloc(struct gh_rm *rm) { struct gh_vm *ghvm; - int vmid; + int vmid, ret; vmid = gh_rm_alloc_vmid(rm, 0); if (vmid < 0) @@ -55,13 +136,130 @@ static __must_check struct gh_vm *gh_vm_alloc(struct gh_rm *rm) ghvm->vmid = vmid; ghvm->rm = rm; + init_waitqueue_head(&ghvm->vm_status_wait); + ghvm->nb.notifier_call = gh_vm_rm_notification; + ret = gh_rm_notifier_register(rm, &ghvm->nb); + if (ret) { + gh_rm_put(rm); + gh_rm_dealloc_vmid(rm, vmid); + kfree(ghvm); + return ERR_PTR(ret); + } + mutex_init(&ghvm->mm_lock); INIT_LIST_HEAD(&ghvm->memory_mappings); + init_rwsem(&ghvm->status_lock); INIT_WORK(&ghvm->free_work, gh_vm_free); + ghvm->vm_status = GH_RM_VM_STATUS_LOAD; return ghvm; } +static int gh_vm_start(struct gh_vm *ghvm) +{ + struct gh_vm_mem *mapping; + u64 dtb_offset; + u32 mem_handle; + int ret; + + down_write(&ghvm->status_lock); + if (ghvm->vm_status != GH_RM_VM_STATUS_LOAD) { + up_write(&ghvm->status_lock); + return 0; + } + + ghvm->vm_status = GH_RM_VM_STATUS_RESET; + + mutex_lock(&ghvm->mm_lock); + list_for_each_entry(mapping, &ghvm->memory_mappings, list) { + switch (mapping->share_type) { + case VM_MEM_LEND: + ret = gh_rm_mem_lend(ghvm->rm, &mapping->parcel); + break; + case VM_MEM_SHARE: + ret = gh_rm_mem_share(ghvm->rm, &mapping->parcel); + break; + } + if (ret) { + dev_warn(ghvm->parent, "Failed to %s parcel %d: %d\n", + mapping->share_type == VM_MEM_LEND ? 
"lend" : "share", + mapping->parcel.label, + ret); + goto err; + } + } + mutex_unlock(&ghvm->mm_lock); + + mapping = gh_vm_mem_find_by_addr(ghvm, ghvm->dtb_config.guest_phys_addr, + ghvm->dtb_config.size); + if (!mapping) { + dev_warn(ghvm->parent, "Failed to find the memory_handle for DTB\n"); + ret = -EINVAL; + goto err; + } + + mem_handle = mapping->parcel.mem_handle; + dtb_offset = ghvm->dtb_config.guest_phys_addr - mapping->guest_phys_addr; + + ret = gh_rm_vm_configure(ghvm->rm, ghvm->vmid, ghvm->auth, mem_handle, + 0, 0, dtb_offset, ghvm->dtb_config.size); + if (ret) { + dev_warn(ghvm->parent, "Failed to configure VM: %d\n", ret); + goto err; + } + + ret = gh_rm_vm_init(ghvm->rm, ghvm->vmid); + if (ret) { + dev_warn(ghvm->parent, "Failed to initialize VM: %d\n", ret); + goto err; + } + + ret = gh_rm_vm_start(ghvm->rm, ghvm->vmid); + if (ret) { + dev_warn(ghvm->parent, "Failed to start VM: %d\n", ret); + goto err; + } + + ghvm->vm_status = GH_RM_VM_STATUS_RUNNING; + up_write(&ghvm->status_lock); + return ret; +err: + ghvm->vm_status = GH_RM_VM_STATUS_INIT_FAILED; + /* gh_vm_free will handle releasing resources and reclaiming memory */ + up_write(&ghvm->status_lock); + return ret; +} + +static int gh_vm_ensure_started(struct gh_vm *ghvm) +{ + int ret; + + ret = down_read_interruptible(&ghvm->status_lock); + if (ret) + return ret; + + /* Unlikely because VM is typically started */ + if (unlikely(ghvm->vm_status == GH_RM_VM_STATUS_LOAD)) { + up_read(&ghvm->status_lock); + ret = gh_vm_start(ghvm); + if (ret) + goto out; + /** gh_vm_start() is guaranteed to bring status out of + * GH_RM_VM_STATUS_LOAD, thus inifitely recursive call is not + * possible + */ + return gh_vm_ensure_started(ghvm); + } + + /* Unlikely because VM is typically running */ + if (unlikely(ghvm->vm_status != GH_RM_VM_STATUS_RUNNING)) + ret = -ENODEV; + +out: + up_read(&ghvm->status_lock); + return ret; +} + static long gh_vm_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) { struct gh_vm *ghvm = filp->private_data; @@ -85,6 +283,25 @@ static long gh_vm_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) r = gh_vm_mem_alloc(ghvm, ®ion); break; } + case GH_VM_SET_DTB_CONFIG: { + struct gh_vm_dtb_config dtb_config; + + if (copy_from_user(&dtb_config, argp, sizeof(dtb_config))) + return -EFAULT; + + dtb_config.size = PAGE_ALIGN(dtb_config.size); + if (dtb_config.guest_phys_addr + dtb_config.size < dtb_config.guest_phys_addr) + return -EOVERFLOW; + + ghvm->dtb_config = dtb_config; + + r = 0; + break; + } + case GH_VM_START: { + r = gh_vm_ensure_started(ghvm); + break; + } default: r = -ENOTTY; break; diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index c9f6fa5478ed..26bcc2ae4478 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -10,6 +10,8 @@ #include #include #include +#include +#include #include @@ -34,6 +36,13 @@ struct gh_vm { u16 vmid; struct gh_rm *rm; struct device *parent; + enum gh_rm_vm_auth_mechanism auth; + struct gh_vm_dtb_config dtb_config; + + struct notifier_block nb; + enum gh_rm_vm_status vm_status; + wait_queue_head_t vm_status_wait; + struct rw_semaphore status_lock; struct work_struct free_work; struct mutex mm_lock; @@ -44,5 +53,6 @@ int gh_vm_mem_alloc(struct gh_vm *ghvm, struct gh_userspace_memory_region *regio void gh_vm_mem_reclaim(struct gh_vm *ghvm, struct gh_vm_mem *mapping); int gh_vm_mem_free(struct gh_vm *ghvm, u32 label); struct gh_vm_mem *gh_vm_mem_find_by_label(struct gh_vm *ghvm, u32 label); +struct gh_vm_mem 
*gh_vm_mem_find_by_addr(struct gh_vm *ghvm, u64 guest_phys_addr, u32 size); #endif diff --git a/drivers/virt/gunyah/vm_mgr_mm.c b/drivers/virt/gunyah/vm_mgr_mm.c index db6f55cef37f..6e1d2e8bddb7 100644 --- a/drivers/virt/gunyah/vm_mgr_mm.c +++ b/drivers/virt/gunyah/vm_mgr_mm.c @@ -47,6 +47,29 @@ void gh_vm_mem_reclaim(struct gh_vm *ghvm, struct gh_vm_mem *mapping) list_del(&mapping->list); } +struct gh_vm_mem *gh_vm_mem_find_by_addr(struct gh_vm *ghvm, u64 guest_phys_addr, u32 size) +{ + struct gh_vm_mem *mapping = NULL; + int ret; + + ret = mutex_lock_interruptible(&ghvm->mm_lock); + if (ret) + return ERR_PTR(ret); + + list_for_each_entry(mapping, &ghvm->memory_mappings, list) { + if (guest_phys_addr >= mapping->guest_phys_addr && + (guest_phys_addr + size <= mapping->guest_phys_addr + + (mapping->npages << PAGE_SHIFT))) { + goto unlock; + } + } + + mapping = NULL; +unlock: + mutex_unlock(&ghvm->mm_lock); + return mapping; +} + struct gh_vm_mem *gh_vm_mem_find_by_label(struct gh_vm *ghvm, u32 label) { struct gh_vm_mem *mapping; diff --git a/include/linux/gunyah_rsc_mgr.h b/include/linux/gunyah_rsc_mgr.h index 88a429dad09e..8b0b46f28e39 100644 --- a/include/linux/gunyah_rsc_mgr.h +++ b/include/linux/gunyah_rsc_mgr.h @@ -29,6 +29,12 @@ struct gh_rm_vm_exited_payload { #define GH_RM_NOTIFICATION_VM_EXITED 0x56100001 enum gh_rm_vm_status { + /** + * RM doesn't have a state where load partially failed because + * only Linux + */ + GH_RM_VM_STATUS_LOAD_FAILED = -1, + GH_RM_VM_STATUS_NO_STATE = 0, GH_RM_VM_STATUS_INIT = 1, GH_RM_VM_STATUS_READY = 2, diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index a19207e3e065..d6abd8605a2e 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -49,4 +49,17 @@ struct gh_userspace_memory_region { #define GH_VM_SET_USER_MEM_REGION _IOW(GH_IOCTL_TYPE, 0x1, \ struct gh_userspace_memory_region) +/** + * struct gh_vm_dtb_config - Set the location of the VM's devicetree blob + * @guest_phys_addr: Address of the VM's devicetree in guest memory. + * @size: Maximum size of the devicetree. 
+ */ +struct gh_vm_dtb_config { + __u64 guest_phys_addr; + __u64 size; +}; +#define GH_VM_SET_DTB_CONFIG _IOW(GH_IOCTL_TYPE, 0x2, struct gh_vm_dtb_config) + +#define GH_VM_START _IO(GH_IOCTL_TYPE, 0x3) + #endif From patchwork Sat Mar 4 01:06:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 659072 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2A271C76195 for ; Sat, 4 Mar 2023 01:08:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230112AbjCDBIE (ORCPT ); Fri, 3 Mar 2023 20:08:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34806 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230040AbjCDBHg (ORCPT ); Fri, 3 Mar 2023 20:07:36 -0500 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 576886B31A; Fri, 3 Mar 2023 17:07:19 -0800 (PST) Received: from pps.filterd (m0279865.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 323NjDqh003547; Sat, 4 Mar 2023 01:07:06 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=qcppdkim1; bh=Y5//UadlEdV0a+5zcX8TQ4LlO7IHHl8LDnZbN/15rkM=; b=DmMtkSJXpmDNIw3XKIRtnqA7C4mY7tEaPFGnoGc2JbevtDcmwt+dR5n7OhWpLSyHLMEz ORzm8Y8uYUqkpmdw30s2hIRLOemp25S1BFHyBro6Lx6OKwuIHUns5TVYCi1aBszHj7FC P7QUd82FmaZY0TDIoOp1znhEx/TRuzBeD9/oX0ulSLi5Evjd9NbVQu6vw5Wk0QTiBum5 f4xOu2hrSz5d32y7kTPKRbcLyV3q+LaFj508xDd5rlwL5qaOM9PCu7wFRubKdTL26WNT CUYXQokUKNS9hiCZeiJFomNv2hVE5jjM3gh8BnIkxoyG5kE1z9vmkHKvz/UELG2oRlyH pw== Received: from nasanppmta02.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3p36mybghm-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 04 Mar 2023 01:07:06 +0000 Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 324175Jt019307 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 4 Mar 2023 01:07:05 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Fri, 3 Mar 2023 17:07:04 -0800 From: Elliot Berman To: Alex Elder , Srinivas Kandagatla , Elliot Berman , Prakruthi Deepak Heragu CC: Murali Nalajala , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Dmitry Baryshkov , Bjorn Andersson , "Konrad Dybcio" , Arnd Bergmann , "Greg Kroah-Hartman" , Rob Herring , Krzysztof Kozlowski , Jonathan Corbet , Bagas Sanjaya , Will Deacon , Andy Gross , Catalin Marinas , Jassi Brar , , , , , Subject: [PATCH v11 14/26] samples: Add sample userspace Gunyah VM Manager Date: Fri, 3 Mar 2023 17:06:20 -0800 Message-ID: <20230304010632.2127470-15-quic_eberman@quicinc.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230304010632.2127470-1-quic_eberman@quicinc.com> References: <20230304010632.2127470-1-quic_eberman@quicinc.com> MIME-Version: 1.0 X-Originating-IP: 
[10.49.16.6] X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: 4aFFUlhMHjADqvlSjcmx3j-iaZqyyyzq X-Proofpoint-ORIG-GUID: 4aFFUlhMHjADqvlSjcmx3j-iaZqyyyzq X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-03-03_07,2023-03-03_01,2023-02-09_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 adultscore=0 clxscore=1015 mlxscore=0 bulkscore=0 mlxlogscore=999 suspectscore=0 priorityscore=1501 spamscore=0 malwarescore=0 lowpriorityscore=0 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2212070000 definitions=main-2303040005 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Add a sample Gunyah VMM capable of launching a non-proxy scheduled VM. Signed-off-by: Elliot Berman --- samples/Kconfig | 10 ++ samples/Makefile | 1 + samples/gunyah/.gitignore | 2 + samples/gunyah/Makefile | 6 + samples/gunyah/gunyah_vmm.c | 270 +++++++++++++++++++++++++++++++++++ samples/gunyah/sample_vm.dts | 68 +++++++++ 6 files changed, 357 insertions(+) create mode 100644 samples/gunyah/.gitignore create mode 100644 samples/gunyah/Makefile create mode 100644 samples/gunyah/gunyah_vmm.c create mode 100644 samples/gunyah/sample_vm.dts diff --git a/samples/Kconfig b/samples/Kconfig index 30ef8bd48ba3..11070bf02bd7 100644 --- a/samples/Kconfig +++ b/samples/Kconfig @@ -273,6 +273,16 @@ config SAMPLE_CORESIGHT_SYSCFG This demonstrates how a user may create their own CoreSight configurations and easily load them into the system at runtime. +config SAMPLE_GUNYAH + bool "Build example Gunyah Virtual Machine Manager" + depends on CC_CAN_LINK && HEADERS_INSTALL + depends on GUNYAH + help + Build an example Gunyah VMM userspace program capable of launching + a basic virtual machine under the Gunyah hypervisor. + This demonstrates how to create a virtual machine under the Gunyah + hypervisor. + source "samples/rust/Kconfig" endif # SAMPLES diff --git a/samples/Makefile b/samples/Makefile index 7cb632ef88ee..a65555802642 100644 --- a/samples/Makefile +++ b/samples/Makefile @@ -37,3 +37,4 @@ obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak/ obj-$(CONFIG_SAMPLE_CORESIGHT_SYSCFG) += coresight/ obj-$(CONFIG_SAMPLE_FPROBE) += fprobe/ obj-$(CONFIG_SAMPLES_RUST) += rust/ +obj-$(CONFIG_SAMPLE_GUNYAH) += gunyah/ diff --git a/samples/gunyah/.gitignore b/samples/gunyah/.gitignore new file mode 100644 index 000000000000..adc7d1589fde --- /dev/null +++ b/samples/gunyah/.gitignore @@ -0,0 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0 +/gunyah_vmm diff --git a/samples/gunyah/Makefile b/samples/gunyah/Makefile new file mode 100644 index 000000000000..faf14f9bb337 --- /dev/null +++ b/samples/gunyah/Makefile @@ -0,0 +1,6 @@ +# SPDX-License-Identifier: GPL-2.0-only + +userprogs-always-y += gunyah_vmm +dtb-y += sample_vm.dtb + +userccflags += -I usr/include diff --git a/samples/gunyah/gunyah_vmm.c b/samples/gunyah/gunyah_vmm.c new file mode 100644 index 000000000000..d0ba9c20cb13 --- /dev/null +++ b/samples/gunyah/gunyah_vmm.c @@ -0,0 +1,270 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#define __USE_GNU +#include + +#include + +struct vm_config { + int image_fd; + int dtb_fd; + int ramdisk_fd; + + uint64_t guest_base; + uint64_t guest_size; + + uint64_t image_offset; + off_t image_size; + uint64_t dtb_offset; + off_t dtb_size; + uint64_t ramdisk_offset; + off_t ramdisk_size; +}; + +static struct option options[] = { + { "help", no_argument, NULL, 'h' }, + { "image", required_argument, NULL, 'i' }, + { "dtb", required_argument, NULL, 'd' }, + { "ramdisk", optional_argument, NULL, 'r' }, + { "base", optional_argument, NULL, 'B' }, + { "size", optional_argument, NULL, 'S' }, + { "image_offset", optional_argument, NULL, 'I' }, + { "dtb_offset", optional_argument, NULL, 'D' }, + { "ramdisk_offset", optional_argument, NULL, 'R' }, + { } +}; + +static void print_help(char *cmd) +{ + printf("gunyah_vmm, a sample tool to launch Gunyah VMs\n" + "Usage: %s \n" + " --help, -h this menu\n" + " --image, -i VM image file to load (e.g. a kernel Image) [Required]\n" + " --dtb, -d Devicetree to load [Required]\n" + " --ramdisk, -r Ramdisk to load\n" + " --base, -B
Set the base address of guest's memory [Default: 0x80000000]\n" + " --size, -S The number of bytes large to make the guest's memory [Default: 0x6400000 (100 MB)]\n" + " --image_offset, -I Offset into guest memory to load the VM image file [Default: 0x10000]\n" + " --dtb_offset, -D Offset into guest memory to load the DTB [Default: 0]\n" + " --ramdisk_offset, -R Offset into guest memory to load a ramdisk [Default: 0x4600000]\n" + , cmd); +} + +int main(int argc, char **argv) +{ + int gunyah_fd, vm_fd, guest_fd; + struct gh_userspace_memory_region guest_mem_desc = { 0 }; + struct gh_vm_dtb_config dtb_config = { 0 }; + char *guest_mem; + struct vm_config config = { + /* Defaults good enough to boot static kernel and a basic ramdisk */ + .ramdisk_fd = -1, + .guest_base = 0x80000000, + .guest_size = 0x6400000, /* 100 MB */ + .image_offset = 0, + .dtb_offset = 0x45f0000, + .ramdisk_offset = 0x4600000, /* put at +70MB (30MB for ramdisk) */ + }; + struct stat st; + int opt, optidx, ret = 0; + long l; + + while ((opt = getopt_long(argc, argv, "hi:d:r:B:S:I:D:R:c:", options, &optidx)) != -1) { + switch (opt) { + case 'i': + config.image_fd = open(optarg, O_RDONLY | O_CLOEXEC); + if (config.image_fd < 0) { + perror("Failed to open image"); + return -1; + } + if (stat(optarg, &st) < 0) { + perror("Failed to stat image"); + return -1; + } + config.image_size = st.st_size; + break; + case 'd': + config.dtb_fd = open(optarg, O_RDONLY | O_CLOEXEC); + if (config.dtb_fd < 0) { + perror("Failed to open dtb"); + return -1; + } + if (stat(optarg, &st) < 0) { + perror("Failed to stat dtb"); + return -1; + } + config.dtb_size = st.st_size; + break; + case 'r': + config.ramdisk_fd = open(optarg, O_RDONLY | O_CLOEXEC); + if (config.ramdisk_fd < 0) { + perror("Failed to open ramdisk"); + return -1; + } + if (stat(optarg, &st) < 0) { + perror("Failed to stat ramdisk"); + return -1; + } + config.ramdisk_size = st.st_size; + break; + case 'B': + l = strtol(optarg, NULL, 0); + if (l == LONG_MIN) { + perror("Failed to parse base address"); + return -1; + } + config.guest_base = l; + break; + case 'S': + l = strtol(optarg, NULL, 0); + if (l == LONG_MIN) { + perror("Failed to parse memory size"); + return -1; + } + config.guest_size = l; + break; + case 'I': + l = strtol(optarg, NULL, 0); + if (l == LONG_MIN) { + perror("Failed to parse image offset"); + return -1; + } + config.image_offset = l; + break; + case 'D': + l = strtol(optarg, NULL, 0); + if (l == LONG_MIN) { + perror("Failed to parse dtb offset"); + return -1; + } + config.dtb_offset = l; + break; + case 'R': + l = strtol(optarg, NULL, 0); + if (l == LONG_MIN) { + perror("Failed to parse ramdisk offset"); + return -1; + } + config.ramdisk_offset = l; + break; + case 'h': + print_help(argv[0]); + return 0; + default: + print_help(argv[0]); + return -1; + } + } + + if (!config.image_fd || !config.dtb_fd) { + print_help(argv[0]); + return -1; + } + + if (config.image_offset + config.image_size > config.guest_size) { + fprintf(stderr, "Image offset and size puts it outside guest memory. Make image smaller or increase guest memory size.\n"); + return -1; + } + + if (config.dtb_offset + config.dtb_size > config.guest_size) { + fprintf(stderr, "DTB offset and size puts it outside guest memory. Make dtb smaller or increase guest memory size.\n"); + return -1; + } + + if (config.ramdisk_fd == -1 && + config.ramdisk_offset + config.ramdisk_size > config.guest_size) { + fprintf(stderr, "Ramdisk offset and size puts it outside guest memory. 
Make ramdisk smaller or increase guest memory size.\n"); + return -1; + } + + gunyah_fd = open("/dev/gunyah", O_RDWR | O_CLOEXEC); + if (gunyah_fd < 0) { + perror("Failed to open /dev/gunyah"); + return -1; + } + + vm_fd = ioctl(gunyah_fd, GH_CREATE_VM, 0); + if (vm_fd < 0) { + perror("Failed to create vm"); + return -1; + } + + guest_fd = memfd_create("guest_memory", MFD_CLOEXEC); + if (guest_fd < 0) { + perror("Failed to create guest memfd"); + return -1; + } + + if (ftruncate(guest_fd, config.guest_size) < 0) { + perror("Failed to grow guest memory"); + return -1; + } + + guest_mem = mmap(NULL, config.guest_size, PROT_READ | PROT_WRITE, MAP_SHARED, guest_fd, 0); + if (guest_mem == MAP_FAILED) { + perror("Not enough memory"); + return -1; + } + + if (read(config.image_fd, guest_mem + config.image_offset, config.image_size) < 0) { + perror("Failed to read image into guest memory"); + return -1; + } + + if (read(config.dtb_fd, guest_mem + config.dtb_offset, config.dtb_size) < 0) { + perror("Failed to read dtb into guest memory"); + return -1; + } + + if (config.ramdisk_fd > 0 && + read(config.ramdisk_fd, guest_mem + config.ramdisk_offset, + config.ramdisk_size) < 0) { + perror("Failed to read ramdisk into guest memory"); + return -1; + } + + guest_mem_desc.label = 0; + guest_mem_desc.flags = GH_MEM_ALLOW_READ | GH_MEM_ALLOW_WRITE | GH_MEM_ALLOW_EXEC; + guest_mem_desc.guest_phys_addr = config.guest_base; + guest_mem_desc.memory_size = config.guest_size; + guest_mem_desc.userspace_addr = (__u64)guest_mem; + + if (ioctl(vm_fd, GH_VM_SET_USER_MEM_REGION, &guest_mem_desc) < 0) { + perror("Failed to register guest memory with VM"); + return -1; + } + + dtb_config.guest_phys_addr = config.guest_base + config.dtb_offset; + dtb_config.size = config.dtb_size; + if (ioctl(vm_fd, GH_VM_SET_DTB_CONFIG, &dtb_config) < 0) { + perror("Failed to set DTB configuration for VM"); + return -1; + } + + ret = ioctl(vm_fd, GH_VM_START); + if (ret) { + perror("GH_VM_START failed"); + return -1; + } + + while (1) + sleep(10); + + return 0; +} diff --git a/samples/gunyah/sample_vm.dts b/samples/gunyah/sample_vm.dts new file mode 100644 index 000000000000..293bbc0469c8 --- /dev/null +++ b/samples/gunyah/sample_vm.dts @@ -0,0 +1,68 @@ +// SPDX-License-Identifier: BSD-3-Clause +/* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +/dts-v1/; + +/ { + #address-cells = <2>; + #size-cells = <2>; + interrupt-parent = <&intc>; + + chosen { + bootargs = "nokaslr"; + }; + + cpus { + #address-cells = <0x2>; + #size-cells = <0>; + + cpu@0 { + device_type = "cpu"; + compatible = "arm,armv8"; + reg = <0 0>; + }; + }; + + intc: interrupt-controller@3FFF0000 { + compatible = "arm,gic-v3"; + #interrupt-cells = <3>; + #address-cells = <2>; + #size-cells = <2>; + interrupt-controller; + reg = <0 0x3FFF0000 0 0x10000>, + <0 0x3FFD0000 0 0x20000>; + }; + + timer { + compatible = "arm,armv8-timer"; + always-on; + interrupts = <1 13 0x108>, + <1 14 0x108>, + <1 11 0x108>, + <1 10 0x108>; + clock-frequency = <19200000>; + }; + + gunyah-vm-config { + image-name = "linux_vm_0"; + + memory { + #address-cells = <2>; + #size-cells = <2>; + + base-address = <0 0x80000000>; + }; + + interrupts { + config = <&intc>; + }; + + vcpus { + affinity-map = < 0 >; + sched-priority = < (-1) >; + sched-timeslice = < 2000 >; + }; + }; +}; From patchwork Sat Mar 4 01:06:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 659067 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E1A61C61DA3 for ; Sat, 4 Mar 2023 01:10:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230139AbjCDBKM (ORCPT ); Fri, 3 Mar 2023 20:10:12 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38142 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229673AbjCDBJq (ORCPT ); Fri, 3 Mar 2023 20:09:46 -0500 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 905426B31A; Fri, 3 Mar 2023 17:08:49 -0800 (PST) Received: from pps.filterd (m0279863.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 323Ndesj017428; Sat, 4 Mar 2023 01:07:10 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=qcppdkim1; bh=tu6+qvW/Zhi1XKBb+7JrOFesQl2cFoGev4zTJaArKOc=; b=AmOLrPdxeAja25DoJ4/dxS/kaRzh6+EHsJd9KeAoH6e3k1oAHbdMwllnnssA9X6xvJ+k rdKuEnMYK/gVpxaVL7uTNitSfPSURVskAARSivNqGHrykDb3Dj4JtWdGBURValEyI92N X0MFW52WcI5fWtbQ+L3D6pbsrn31bsh8SmIEmSDoYiP5JGdIjKgKqCV3KibY868k5lz3 6YnqV+m1nR4Z8p5gKy92FydMj9vHpXoLHyC8up+VitR2UlDYPVzFfZGpOGe1DLs8vg9j cRKXDizu8iItb6gW4IgqIGtJpXLo5k5pPH62G89oFSaAwl61/NaNYX3oXhLBATJU0FgE CQ== Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3p3bsbtrew-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 04 Mar 2023 01:07:10 +0000 Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 324179Il032297 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 4 Mar 2023 01:07:09 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Fri, 3 Mar 2023 17:07:08 -0800 
From: Elliot Berman To: Alex Elder , Srinivas Kandagatla , Elliot Berman , Prakruthi Deepak Heragu , Jonathan Corbet CC: Murali Nalajala , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Dmitry Baryshkov , Bjorn Andersson , "Konrad Dybcio" , Arnd Bergmann , "Greg Kroah-Hartman" , Rob Herring , Krzysztof Kozlowski , Bagas Sanjaya , Will Deacon , Andy Gross , Catalin Marinas , Jassi Brar , , , , , Subject: [PATCH v11 17/26] docs: gunyah: Document Gunyah VM Manager Date: Fri, 3 Mar 2023 17:06:23 -0800 Message-ID: <20230304010632.2127470-18-quic_eberman@quicinc.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230304010632.2127470-1-quic_eberman@quicinc.com> References: <20230304010632.2127470-1-quic_eberman@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.49.16.6] X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: -Min-bMluKjxBJzwV2OMQ6tBhsnC1kRV X-Proofpoint-ORIG-GUID: -Min-bMluKjxBJzwV2OMQ6tBhsnC1kRV X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-03-03_07,2023-03-03_01,2023-02-09_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 malwarescore=0 mlxlogscore=999 spamscore=0 phishscore=0 impostorscore=0 suspectscore=0 bulkscore=0 clxscore=1015 adultscore=0 priorityscore=1501 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2212070000 definitions=main-2303040005 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Document the ioctls and usage of Gunyah VM Manager driver. Signed-off-by: Elliot Berman --- Documentation/virt/gunyah/index.rst | 1 + Documentation/virt/gunyah/vm-manager.rst | 91 ++++++++++++++++++++++++ 2 files changed, 92 insertions(+) create mode 100644 Documentation/virt/gunyah/vm-manager.rst diff --git a/Documentation/virt/gunyah/index.rst b/Documentation/virt/gunyah/index.rst index 74aa345e0a14..7058249825b1 100644 --- a/Documentation/virt/gunyah/index.rst +++ b/Documentation/virt/gunyah/index.rst @@ -7,6 +7,7 @@ Gunyah Hypervisor .. toctree:: :maxdepth: 1 + vm-manager message-queue Gunyah is a Type-1 hypervisor which is independent of any OS kernel, and runs in diff --git a/Documentation/virt/gunyah/vm-manager.rst b/Documentation/virt/gunyah/vm-manager.rst new file mode 100644 index 000000000000..1b4aa18670a3 --- /dev/null +++ b/Documentation/virt/gunyah/vm-manager.rst @@ -0,0 +1,91 @@ +.. SPDX-License-Identifier: GPL-2.0 + +======================= +Virtual Machine Manager +======================= + +The Gunyah Virtual Machine Manager is a Linux driver to support launching +virtual machines using Gunyah. It presently supports launching non-proxy +scheduled Linux-like virtual machines. + +Except for some basic information about the location of initial binaries, +most of the configuration about a Gunyah virtual machine is described in the +VM's devicetree. The devicetree is generated by userspace. Interacting with the +virtual machine is still done via the kernel and VM configuration requires some +of the corresponding functionality to be set up in the kernel. For instance, +sharing userspace memory with a VM is done via the GH_VM_SET_USER_MEM_REGION +ioctl. The VM itself is configured to use the memory region via the +devicetree. 
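For illustration, a minimal sketch of the userspace side of this flow follows. It assumes a VM file descriptor already obtained from GH_CREATE_VM; the label, size, and guest physical address are example values only, and the address must match whatever the VM's devicetree expects. The GH_VM_SET_USER_MEM_REGION section below describes the fields in detail.

.. code-block:: c

   /* Sketch only: register an anonymous, page-aligned buffer as guest memory.
    * vm_fd, the label, size, and guest physical address are illustrative. */
   #include <sys/ioctl.h>
   #include <sys/mman.h>
   #include <linux/gunyah.h>

   static int set_guest_memory(int vm_fd, size_t size /* page-aligned */)
   {
           struct gh_userspace_memory_region region = { 0 };
           void *mem;

           mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
           if (mem == MAP_FAILED)
                   return -1;

           region.label = 0;                     /* unique per VM */
           region.flags = GH_MEM_ALLOW_READ | GH_MEM_ALLOW_WRITE |
                          GH_MEM_ALLOW_EXEC;
           region.guest_phys_addr = 0x80000000;  /* must match the VM devicetree */
           region.memory_size = size;
           region.userspace_addr = (__u64)mem;

           return ioctl(vm_fd, GH_VM_SET_USER_MEM_REGION, &region);
   }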
+ +Sample Userspace VMM +==================== + +A sample userspace VMM is included in samples/gunyah/ along with a minimal +devicetree that can be used to launch a VM. To build this sample, enable +CONFIG_SAMPLE_GUNYAH. + +IOCTLs and userspace VMM flows +============================== + +The kernel exposes a char device interface at /dev/gunyah. + +To create a VM, use the GH_CREATE_VM ioctl. A successful call will return a +"Gunyah VM" file descriptor. + +/dev/gunyah API Descriptions +---------------------------- + +GH_CREATE_VM +~~~~~~~~~~~~ + +Creates a Gunyah VM. The argument is reserved for future use and must be 0. + +Gunyah VM API Descriptions +-------------------------- + +GH_VM_SET_USER_MEM_REGION +~~~~~~~~~~~~~~~~~~~~~~~~~ + +This ioctl allows the user to create or delete a memory parcel for a guest +virtual machine. Each memory region is uniquely identified by a label; +attempting to create two regions with the same label is not allowed. Labels are +unique per virtual machine. + +While the VMM is guest-agnostic and allows runtime addition of memory regions, +Linux guest virtual machines do not support accepting memory regions at runtime. +Thus, memory regions should be provided before starting the VM and the VM must +be configured to accept them at boot-up. + +The guest physical address is used by the Linux kernel to check that the requested +user regions do not overlap and to help find the corresponding memory region +for calls like GH_VM_SET_DTB_CONFIG. It must be page-aligned. + +memory_size and userspace_addr must be page-aligned. + +The flags field of gh_userspace_memory_region accepts the following bits. All +other bits must be 0 and are reserved for future use. The ioctl will return +-EINVAL if an unsupported bit is detected. + + - GH_MEM_ALLOW_READ/GH_MEM_ALLOW_WRITE/GH_MEM_ALLOW_EXEC sets read/write/exec + permissions for the guest, respectively. + +To add a memory region, call GH_VM_SET_USER_MEM_REGION with fields set as +described above. + +.. kernel-doc:: include/uapi/linux/gunyah.h + :identifiers: gh_userspace_memory_region + +GH_VM_SET_DTB_CONFIG +~~~~~~~~~~~~~~~~~~~~ + +This ioctl sets the location of the VM's devicetree blob and is used by the Gunyah +Resource Manager to allocate resources. The guest physical memory should be part +of the primary memory parcel provided to the VM prior to GH_VM_START. + +.. kernel-doc:: include/uapi/linux/gunyah.h + :identifiers: gh_vm_dtb_config + +GH_VM_START +~~~~~~~~~~~ + +This ioctl starts the VM.
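Putting these calls together, a minimal boot sequence might look like the sketch below. It assumes guest memory has already been registered as in the earlier sketch and that the kernel image and DTB have been copied into that memory; the addresses and sizes are example values taken from the sample VMM defaults, and error handling is omitted.

.. code-block:: c

   /* Sketch only: create a VM, point the resource manager at the DTB, and
    * start it. Error handling and image loading are omitted. */
   #include <fcntl.h>
   #include <sys/ioctl.h>
   #include <linux/gunyah.h>

   static int launch_vm(void)
   {
           struct gh_vm_dtb_config dtb_config = { 0 };
           int gunyah_fd, vm_fd;

           gunyah_fd = open("/dev/gunyah", O_RDWR | O_CLOEXEC);
           vm_fd = ioctl(gunyah_fd, GH_CREATE_VM, 0); /* returns a Gunyah VM fd */

           /* Guest memory and the DTB contents must be in place before this:
            * see GH_VM_SET_USER_MEM_REGION above. */
           dtb_config.guest_phys_addr = 0x80000000 + 0x45f0000; /* example DTB location */
           dtb_config.size = 0x10000;                           /* example maximum DTB size */
           ioctl(vm_fd, GH_VM_SET_DTB_CONFIG, &dtb_config);

           return ioctl(vm_fd, GH_VM_START, 0);
   }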
From patchwork Sat Mar 4 01:06:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 659069 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 413D4C64EC4 for ; Sat, 4 Mar 2023 01:08:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229745AbjCDBIc (ORCPT ); Fri, 3 Mar 2023 20:08:32 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36138 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230008AbjCDBIM (ORCPT ); Fri, 3 Mar 2023 20:08:12 -0500 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3DE5967803; Fri, 3 Mar 2023 17:07:42 -0800 (PST) Received: from pps.filterd (m0279872.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 3238mIJu007928; Sat, 4 Mar 2023 01:07:15 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=qcppdkim1; bh=/VCX+yqz/ho1pAOqIOYfJGpNKwtPtomBUY+Aoh/O+TI=; b=UnEHWhCGXHoSs7qcIhC36RmNXdsseY0gRIJDjDz0Tc0NXEZr0sNwGpwqFIXCLqgT/dBn YsHVr6QMe9ZQWwAmlvXCXD8jOCxG0UyrN2fobJyLIPeZNQD671I+ANODqIxMs9KW9nJx oZYXwJjPzHbfQ+9zpDn3lSJTw4Xm0wG2JRQ4OBqMwXakvt2tkKl53AX5B11m9m31pBGM HeAuYpi9sLV1RWx64lGpg8AMwmS0o7UlZnbULPbg+3TOC9ZSd14MEBRKx/OL85/N6bFg Dou3v5oEsZ7WZ2B0OaOFkaQY0kWu2W9MksOR/AGt961cS5O5HTmd/Be/1NZOlgosE3Sq EA== Received: from nasanppmta03.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3p3dpxjjcb-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 04 Mar 2023 01:07:15 +0000 Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA03.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 32417EnF028975 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 4 Mar 2023 01:07:14 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Fri, 3 Mar 2023 17:07:13 -0800 From: Elliot Berman To: Alex Elder , Srinivas Kandagatla , Elliot Berman , Prakruthi Deepak Heragu CC: Murali Nalajala , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Dmitry Baryshkov , Bjorn Andersson , "Konrad Dybcio" , Arnd Bergmann , "Greg Kroah-Hartman" , Rob Herring , Krzysztof Kozlowski , Jonathan Corbet , Bagas Sanjaya , Will Deacon , Andy Gross , Catalin Marinas , Jassi Brar , , , , , Subject: [PATCH v11 20/26] virt: gunyah: Add resource tickets Date: Fri, 3 Mar 2023 17:06:26 -0800 Message-ID: <20230304010632.2127470-21-quic_eberman@quicinc.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230304010632.2127470-1-quic_eberman@quicinc.com> References: <20230304010632.2127470-1-quic_eberman@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.49.16.6] X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 
X-Proofpoint-ORIG-GUID: eyLziE9DYf1WbGWtvtPkVm5QrPjY-rYr X-Proofpoint-GUID: eyLziE9DYf1WbGWtvtPkVm5QrPjY-rYr X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-03-03_07,2023-03-03_01,2023-02-09_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 malwarescore=0 spamscore=0 bulkscore=0 lowpriorityscore=0 impostorscore=0 phishscore=0 mlxlogscore=999 clxscore=1015 priorityscore=1501 mlxscore=0 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2212070000 definitions=main-2303040004 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Some VM functions need to acquire Gunyah resources. For instance, Gunyah vCPUs are exposed to the host as a resource. The Gunyah vCPU function will register a resource ticket and be able to interact with the hypervisor once the resource ticket is filled. Resource tickets are the mechanism for functions to acquire ownership of Gunyah resources. Gunyah functions can be created before the VM's resources are created and made available to Linux. A resource ticket identifies a type of resource and a label of a resource which the ticket holder is interested in. Resources are created by Gunyah as configured in the VM's devicetree configuration. Gunyah doesn't process the label and that makes it possible for userspace to create multiple resources with the same label. Resource ticket owners need to be prepared for populate to be called multiple times if userspace created multiple resources with the same label. Signed-off-by: Elliot Berman --- drivers/virt/gunyah/vm_mgr.c | 112 +++++++++++++++++++++++++++++++++- drivers/virt/gunyah/vm_mgr.h | 4 ++ include/linux/gunyah_vm_mgr.h | 14 +++++ 3 files changed, 129 insertions(+), 1 deletion(-) diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index 88db011395ec..0269bcdaf692 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -165,6 +165,74 @@ static long gh_vm_rm_function(struct gh_vm *ghvm, struct gh_fn_desc *f) return r; } +int gh_vm_add_resource_ticket(struct gh_vm *ghvm, struct gh_vm_resource_ticket *ticket) +{ + struct gh_vm_resource_ticket *iter; + struct gh_resource *ghrsc; + int ret = 0; + + mutex_lock(&ghvm->resources_lock); + list_for_each_entry(iter, &ghvm->resource_tickets, list) { + if (iter->resource_type == ticket->resource_type && iter->label == ticket->label) { + ret = -EEXIST; + goto out; + } + } + + if (!try_module_get(ticket->owner)) { + ret = -ENODEV; + goto out; + } + + list_add(&ticket->list, &ghvm->resource_tickets); + INIT_LIST_HEAD(&ticket->resources); + + list_for_each_entry(ghrsc, &ghvm->resources, list) { + if (ghrsc->type == ticket->resource_type && ghrsc->rm_label == ticket->label) { + if (!ticket->populate(ticket, ghrsc)) + list_move(&ghrsc->list, &ticket->resources); + } + } +out: + mutex_unlock(&ghvm->resources_lock); + return ret; +} +EXPORT_SYMBOL_GPL(gh_vm_add_resource_ticket); + +void gh_vm_remove_resource_ticket(struct gh_vm *ghvm, struct gh_vm_resource_ticket *ticket) +{ + struct gh_resource *ghrsc, *iter; + + mutex_lock(&ghvm->resources_lock); + list_for_each_entry_safe(ghrsc, iter, &ticket->resources, list) { + ticket->unpopulate(ticket, ghrsc); + list_move(&ghrsc->list, &ghvm->resources); + } + + module_put(ticket->owner); + list_del(&ticket->list); + mutex_unlock(&ghvm->resources_lock); +} +EXPORT_SYMBOL_GPL(gh_vm_remove_resource_ticket); + +static void gh_vm_add_resource(struct 
gh_vm *ghvm, struct gh_resource *ghrsc) +{ + struct gh_vm_resource_ticket *ticket; + + mutex_lock(&ghvm->resources_lock); + list_for_each_entry(ticket, &ghvm->resource_tickets, list) { + if (ghrsc->type == ticket->resource_type && ghrsc->rm_label == ticket->label) { + if (!ticket->populate(ticket, ghrsc)) { + list_add(&ghrsc->list, &ticket->resources); + goto found; + } + } + } + list_add(&ghrsc->list, &ghvm->resources); +found: + mutex_unlock(&ghvm->resources_lock); +} + static int gh_vm_rm_notification_status(struct gh_vm *ghvm, void *data) { struct gh_rm_vm_status_payload *payload = data; @@ -230,6 +298,8 @@ static void gh_vm_free(struct work_struct *work) { struct gh_vm *ghvm = container_of(work, struct gh_vm, free_work); struct gh_vm_function_instance *inst, *iiter; + struct gh_vm_resource_ticket *ticket, *titer; + struct gh_resource *ghrsc, *riter; struct gh_vm_mem *mapping, *tmp; int ret; @@ -246,6 +316,25 @@ static void gh_vm_free(struct work_struct *work) } mutex_unlock(&ghvm->fn_lock); + mutex_lock(&ghvm->resources_lock); + if (!list_empty(&ghvm->resource_tickets)) { + dev_warn(ghvm->parent, "Dangling resource tickets:\n"); + list_for_each_entry_safe(ticket, titer, &ghvm->resource_tickets, list) { + dev_warn(ghvm->parent, " %pS\n", ticket->populate); + gh_vm_remove_resource_ticket(ghvm, ticket); + } + } + + list_for_each_entry_safe(ghrsc, riter, &ghvm->resources, list) { + gh_rm_free_resource(ghrsc); + } + mutex_unlock(&ghvm->resources_lock); + + ret = gh_rm_vm_reset(ghvm->rm, ghvm->vmid); + if (ret) + dev_err(ghvm->parent, "Failed to reset the vm: %d\n", ret); + wait_event(ghvm->vm_status_wait, ghvm->vm_status == GH_RM_VM_STATUS_RESET); + mutex_lock(&ghvm->mm_lock); list_for_each_entry_safe(mapping, tmp, &ghvm->memory_mappings, list) { gh_vm_mem_reclaim(ghvm, mapping); @@ -329,6 +418,9 @@ static __must_check struct gh_vm *gh_vm_alloc(struct gh_rm *rm) init_rwsem(&ghvm->status_lock); INIT_WORK(&ghvm->free_work, gh_vm_free); kref_init(&ghvm->kref); + mutex_init(&ghvm->resources_lock); + INIT_LIST_HEAD(&ghvm->resources); + INIT_LIST_HEAD(&ghvm->resource_tickets); INIT_LIST_HEAD(&ghvm->functions); ghvm->vm_status = GH_RM_VM_STATUS_LOAD; @@ -338,9 +430,11 @@ static __must_check struct gh_vm *gh_vm_alloc(struct gh_rm *rm) static int gh_vm_start(struct gh_vm *ghvm) { struct gh_vm_mem *mapping; + struct gh_rm_hyp_resources *resources; + struct gh_resource *ghrsc; u64 dtb_offset; u32 mem_handle; - int ret; + int ret, i, n; down_write(&ghvm->status_lock); if (ghvm->vm_status != GH_RM_VM_STATUS_LOAD) { @@ -394,6 +488,22 @@ static int gh_vm_start(struct gh_vm *ghvm) goto err; } + ret = gh_rm_get_hyp_resources(ghvm->rm, ghvm->vmid, &resources); + if (ret) { + dev_warn(ghvm->parent, "Failed to get hypervisor resources for VM: %d\n", ret); + goto err; + } + + for (i = 0, n = le32_to_cpu(resources->n_entries); i < n; i++) { + ghrsc = gh_rm_alloc_resource(ghvm->rm, &resources->entries[i]); + if (!ghrsc) { + ret = -ENOMEM; + goto err; + } + + gh_vm_add_resource(ghvm, ghrsc); + } + ret = gh_rm_vm_start(ghvm->rm, ghvm->vmid); if (ret) { dev_warn(ghvm->parent, "Failed to start VM: %d\n", ret); diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index 7bd271bad721..18d0e1effd25 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -7,6 +7,7 @@ #define _GH_PRIV_VM_MGR_H #include +#include #include #include #include @@ -51,6 +52,9 @@ struct gh_vm { struct list_head memory_mappings; struct mutex fn_lock; struct list_head functions; + struct mutex 
resources_lock; + struct list_head resources; + struct list_head resource_tickets; }; int gh_vm_mem_alloc(struct gh_vm *ghvm, struct gh_userspace_memory_region *region); diff --git a/include/linux/gunyah_vm_mgr.h b/include/linux/gunyah_vm_mgr.h index 3825c951790a..01b1761b5923 100644 --- a/include/linux/gunyah_vm_mgr.h +++ b/include/linux/gunyah_vm_mgr.h @@ -70,4 +70,18 @@ void gh_vm_function_unregister(struct gh_vm_function *f); DECLARE_GH_VM_FUNCTION(_name, _type, _bind, _unbind); \ module_gh_vm_function(_name) +struct gh_vm_resource_ticket { + struct list_head list; /* for gh_vm's resources list */ + struct list_head resources; /* for gh_resources's list */ + enum gh_resource_type resource_type; + u32 label; + + struct module *owner; + int (*populate)(struct gh_vm_resource_ticket *ticket, struct gh_resource *ghrsc); + void (*unpopulate)(struct gh_vm_resource_ticket *ticket, struct gh_resource *ghrsc); +}; + +int gh_vm_add_resource_ticket(struct gh_vm *ghvm, struct gh_vm_resource_ticket *ticket); +void gh_vm_remove_resource_ticket(struct gh_vm *ghvm, struct gh_vm_resource_ticket *ticket); + #endif From patchwork Sat Mar 4 01:06:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 659070 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D6F8DC6FD19 for ; Sat, 4 Mar 2023 01:08:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229800AbjCDBIe (ORCPT ); Fri, 3 Mar 2023 20:08:34 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36164 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229951AbjCDBIT (ORCPT ); Fri, 3 Mar 2023 20:08:19 -0500 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4445C6781A; Fri, 3 Mar 2023 17:07:50 -0800 (PST) Received: from pps.filterd (m0279868.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 3240F8Vi019324; Sat, 4 Mar 2023 01:07:20 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=qcppdkim1; bh=DfRy3GvvyBk78GUTsKcTklgcl7kKoNmh7DRjvVFD040=; b=S79rfEXV4pdWh98610HCh6N3yO9svZkP15hDoFKYqLShwEjFgleT0rUk/YfrQmgXReSq 0VrqkIpN/s8iUmMG0VLwvs2CmMTkfP9Y3BTbRtihotXil4iU5Pdic786+yFOxu15FDSA nfTNKZVwHaIjHk1To1K7PxMlnJZZbej6zCnC5aqK/q5Pw7ZRKDJQ968IKk3GIDJ/UQlr 3RI3y+5/ktS36mI65kTqADynbIeegD8KFcFJJHiD9SHWKyQzXlv4KUxjGtcxfySqkmNG hVovQ+2WP5I0TksM2g852i4ctoIWB98iSHWRt6aEIb7DORdgKq9sOGVTty1vGQL0yCa4 ow== Received: from nasanppmta03.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3p3c8htrd2-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 04 Mar 2023 01:07:20 +0000 Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA03.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 32417JOT029048 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 4 Mar 2023 01:07:19 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft 
SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Fri, 3 Mar 2023 17:07:18 -0800 From: Elliot Berman To: Alex Elder , Srinivas Kandagatla , Elliot Berman , Prakruthi Deepak Heragu , Catalin Marinas , Will Deacon CC: Murali Nalajala , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Dmitry Baryshkov , Bjorn Andersson , "Konrad Dybcio" , Arnd Bergmann , "Greg Kroah-Hartman" , Rob Herring , Krzysztof Kozlowski , Jonathan Corbet , Bagas Sanjaya , Andy Gross , Jassi Brar , , , , , Subject: [PATCH v11 23/26] virt: gunyah: Add hypercalls for sending doorbell Date: Fri, 3 Mar 2023 17:06:29 -0800 Message-ID: <20230304010632.2127470-24-quic_eberman@quicinc.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230304010632.2127470-1-quic_eberman@quicinc.com> References: <20230304010632.2127470-1-quic_eberman@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.49.16.6] X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: IiboxbtkbH4HJrmtSGSyH0Ln5QlJ8zUO X-Proofpoint-ORIG-GUID: IiboxbtkbH4HJrmtSGSyH0Ln5QlJ8zUO X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-03-03_07,2023-03-03_01,2023-02-09_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0 clxscore=1015 phishscore=0 malwarescore=0 mlxlogscore=766 priorityscore=1501 impostorscore=0 mlxscore=0 bulkscore=0 suspectscore=0 adultscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2212070000 definitions=main-2303040004 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Gunyah doorbells allow two virtual machines to signal each other using interrupts. Add the hypercalls needed to assert the interrupt. 
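For illustration, a kernel-side caller that already holds a doorbell capability ID (normally obtained from a gh_resource handed out by the resource manager) could assert the doorbell roughly as in the sketch below; the flag bits passed are a placeholder chosen by the caller, not a constant defined by this patch.

#include <linux/errno.h>
#include <linux/types.h>
#include <linux/gunyah.h>

/* Sketch only: set flag bits on a doorbell, asserting it for the receiving VM. */
static int example_ring_doorbell(u64 capid, u64 flags_to_set)
{
        enum gh_error err;
        u64 old_flags;

        err = gh_hypercall_bell_send(capid, flags_to_set, &old_flags);
        if (err != GH_ERROR_OK)
                return -EIO;

        /* On success, old_flags is filled from the hypercall's second return value. */
        return 0;
}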
Signed-off-by: Elliot Berman --- arch/arm64/gunyah/gunyah_hypercall.c | 25 +++++++++++++++++++++++++ include/linux/gunyah.h | 3 +++ 2 files changed, 28 insertions(+) diff --git a/arch/arm64/gunyah/gunyah_hypercall.c b/arch/arm64/gunyah/gunyah_hypercall.c index f01f5cec4d23..0f1cdb706e91 100644 --- a/arch/arm64/gunyah/gunyah_hypercall.c +++ b/arch/arm64/gunyah/gunyah_hypercall.c @@ -41,6 +41,8 @@ EXPORT_SYMBOL_GPL(arch_is_gh_guest); fn) #define GH_HYPERCALL_HYP_IDENTIFY GH_HYPERCALL(0x8000) +#define GH_HYPERCALL_BELL_SEND GH_HYPERCALL(0x8012) +#define GH_HYPERCALL_BELL_SET_MASK GH_HYPERCALL(0x8015) #define GH_HYPERCALL_MSGQ_SEND GH_HYPERCALL(0x801B) #define GH_HYPERCALL_MSGQ_RECV GH_HYPERCALL(0x801C) #define GH_HYPERCALL_VCPU_RUN GH_HYPERCALL(0x8065) @@ -63,6 +65,29 @@ void gh_hypercall_hyp_identify(struct gh_hypercall_hyp_identify_resp *hyp_identi } EXPORT_SYMBOL_GPL(gh_hypercall_hyp_identify); +enum gh_error gh_hypercall_bell_send(u64 capid, u64 new_flags, u64 *old_flags) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_hvc(GH_HYPERCALL_BELL_SEND, capid, new_flags, 0, &res); + + if (res.a0 == GH_ERROR_OK) + *old_flags = res.a1; + + return res.a0; +} +EXPORT_SYMBOL_GPL(gh_hypercall_bell_send); + +enum gh_error gh_hypercall_bell_set_mask(u64 capid, u64 enable_mask, u64 ack_mask) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_hvc(GH_HYPERCALL_BELL_SET_MASK, capid, enable_mask, ack_mask, 0, &res); + + return res.a0; +} +EXPORT_SYMBOL_GPL(gh_hypercall_bell_set_mask); + enum gh_error gh_hypercall_msgq_send(u64 capid, size_t size, void *buff, int tx_flags, bool *ready) { struct arm_smccc_res res; diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index 37f1e2c822ce..63395dacc1a8 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -169,6 +169,9 @@ struct gh_hypercall_hyp_identify_resp { void gh_hypercall_hyp_identify(struct gh_hypercall_hyp_identify_resp *hyp_identity); +enum gh_error gh_hypercall_bell_send(u64 capid, u64 new_flags, u64 *old_flags); +enum gh_error gh_hypercall_bell_set_mask(u64 capid, u64 enable_mask, u64 ack_mask); + #define GH_HYPERCALL_MSGQ_TX_FLAGS_PUSH BIT(0) enum gh_error gh_hypercall_msgq_send(u64 capid, size_t size, void *buff, int tx_flags, bool *ready); From patchwork Sat Mar 4 01:06:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 659068 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 36708C678D5 for ; Sat, 4 Mar 2023 01:09:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229969AbjCDBJd (ORCPT ); Fri, 3 Mar 2023 20:09:33 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37960 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229736AbjCDBJ3 (ORCPT ); Fri, 3 Mar 2023 20:09:29 -0500 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D3B7ABB9C; Fri, 3 Mar 2023 17:08:02 -0800 (PST) Received: from pps.filterd (m0279865.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 3240J06l028516; Sat, 4 Mar 2023 01:07:22 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : 
references : mime-version : content-transfer-encoding : content-type; s=qcppdkim1; bh=vpOwzZQLrpmHaihmYhdcY2zbPs5StQEqOQFTEOCreb8=; b=R5DuSNfZHSZwnUGbomtO0mkgWDuzJsQnYZX1REN5qfI7uKLAFyeff6ZmFxiprfmakHwy u7YTUqKs+qdluMTXDrs0p5ifaus9Y7lxUJzRbS8o+fsfbLG3pliXGm4QGRVG0DFLZhJ8 6nhNqizl/9eM+PHRTm42iUXZy8L1I25ss5DbTllaRsziYRdp71IQMECMLssutgZIu/AM qiYAcakUWQ3DLF9hER3a+TM03blzsf3RLWVbqwV8EcqT62gAKC/Q59gKS288msxjHATP 8SSIemUIBOYRs7QwYpqV8vAPAKROUKd7XmuavXTMU1ftBxAhvEnSlXF/jI3bg8mvAJEt ig== Received: from nasanppmta01.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3p36mybgjf-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 04 Mar 2023 01:07:21 +0000 Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA01.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 32417LVl026133 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 4 Mar 2023 01:07:21 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Fri, 3 Mar 2023 17:07:20 -0800 From: Elliot Berman To: Alex Elder , Srinivas Kandagatla , Elliot Berman , Prakruthi Deepak Heragu , Jonathan Corbet CC: Murali Nalajala , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Dmitry Baryshkov , Bjorn Andersson , "Konrad Dybcio" , Arnd Bergmann , "Greg Kroah-Hartman" , Rob Herring , Krzysztof Kozlowski , Bagas Sanjaya , Will Deacon , Andy Gross , Catalin Marinas , Jassi Brar , , , , , Subject: [PATCH v11 24/26] virt: gunyah: Add irqfd interface Date: Fri, 3 Mar 2023 17:06:30 -0800 Message-ID: <20230304010632.2127470-25-quic_eberman@quicinc.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230304010632.2127470-1-quic_eberman@quicinc.com> References: <20230304010632.2127470-1-quic_eberman@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.49.16.6] X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: unE253sIrpOs0k8Rbcml-qsvAYKgF2b7 X-Proofpoint-ORIG-GUID: unE253sIrpOs0k8Rbcml-qsvAYKgF2b7 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-03-03_07,2023-03-03_01,2023-02-09_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 adultscore=0 clxscore=1015 mlxscore=0 bulkscore=0 mlxlogscore=999 suspectscore=0 priorityscore=1501 spamscore=0 malwarescore=0 lowpriorityscore=0 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2212070000 definitions=main-2303040005 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Enable support for creating irqfds which can raise an interrupt on a Gunyah virtual machine. irqfds are exposed to userspace as a Gunyah VM function with the name "irqfd". If the VM devicetree is not configured to create a doorbell with the corresponding label, userspace will still be able to assert the eventfd but no interrupt will be raised on the guest. 
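As a rough user-space illustration, registering an irqfd against a doorbell label could look like the sketch below. The gh_fn_desc layout and the GH_VM_ADD_FUNCTION ioctl name come from earlier patches in this series and are assumed here; only GH_FN_IRQFD and struct gh_fn_irqfd_arg are introduced by this patch.

#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/gunyah.h>

/* Sketch only: wire an eventfd to the guest doorbell with the given label. */
static int add_irqfd(int vm_fd, uint32_t doorbell_label)
{
        struct gh_fn_irqfd_arg irqfd = { 0 };
        struct gh_fn_desc desc = { 0 };
        int efd;

        efd = eventfd(0, EFD_CLOEXEC);
        if (efd < 0)
                return -1;

        irqfd.fd = efd;
        irqfd.label = doorbell_label;  /* must match a doorbell label in the guest's devicetree */
        irqfd.flags = 0;               /* or GH_IRQFD_LEVEL for level-triggered behaviour */

        desc.type = GH_FN_IRQFD;
        desc.arg_size = sizeof(irqfd);
        desc.arg = (uintptr_t)&irqfd;  /* assumed: desc.arg carries a user pointer */

        return ioctl(vm_fd, GH_VM_ADD_FUNCTION, &desc); /* assumed ioctl name */
}

Once registered, a write to the eventfd asserts the doorbell in the guest, provided the guest's devicetree actually created a doorbell with the matching label.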
Co-developed-by: Prakruthi Deepak Heragu Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman --- Documentation/virt/gunyah/vm-manager.rst | 2 +- drivers/virt/gunyah/Kconfig | 9 ++ drivers/virt/gunyah/Makefile | 1 + drivers/virt/gunyah/gunyah_irqfd.c | 164 +++++++++++++++++++++++ include/linux/gunyah.h | 5 + include/uapi/linux/gunyah.h | 30 +++++ 6 files changed, 210 insertions(+), 1 deletion(-) create mode 100644 drivers/virt/gunyah/gunyah_irqfd.c diff --git a/Documentation/virt/gunyah/vm-manager.rst b/Documentation/virt/gunyah/vm-manager.rst index 83d326b0d11f..a1dd70f0cbf6 100644 --- a/Documentation/virt/gunyah/vm-manager.rst +++ b/Documentation/virt/gunyah/vm-manager.rst @@ -124,7 +124,7 @@ the VM starts. The possible types are documented below: .. kernel-doc:: include/uapi/linux/gunyah.h - :identifiers: GH_FN_VCPU gh_fn_vcpu_arg + :identifiers: GH_FN_VCPU gh_fn_vcpu_arg GH_FN_IRQFD gh_fn_irqfd_arg Gunyah VCPU API Descriptions ---------------------------- diff --git a/drivers/virt/gunyah/Kconfig b/drivers/virt/gunyah/Kconfig index 4c1c6110b50e..2cde24d429d1 100644 --- a/drivers/virt/gunyah/Kconfig +++ b/drivers/virt/gunyah/Kconfig @@ -26,3 +26,12 @@ config GUNYAH_VCPU VMMs can also handle stage 2 faults of the vCPUs. Say Y/M here if unsure and you want to support Gunyah VMMs. + +config GUNYAH_IRQFD + tristate "Gunyah irqfd interface" + depends on GUNYAH + help + Enable kernel support for creating irqfds which can raise an interrupt + on Gunyah virtual machine. + + Say Y/M here if unsure and you want to support Gunyah VMMs. diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index 2d1b604a7b03..6cf756bfa3c2 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -7,3 +7,4 @@ gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o vm_mgr_mm.o obj-$(CONFIG_GUNYAH) += gunyah_rsc_mgr.o obj-$(CONFIG_GUNYAH_VCPU) += gunyah_vcpu.o +obj-$(CONFIG_GUNYAH_IRQFD) += gunyah_irqfd.o diff --git a/drivers/virt/gunyah/gunyah_irqfd.c b/drivers/virt/gunyah/gunyah_irqfd.c new file mode 100644 index 000000000000..38e5fe266b00 --- /dev/null +++ b/drivers/virt/gunyah/gunyah_irqfd.c @@ -0,0 +1,164 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +struct gh_irqfd { + struct gh_resource *ghrsc; + struct gh_vm_resource_ticket ticket; + struct gh_vm_function_instance *f; + + bool level; + + struct eventfd_ctx *ctx; + wait_queue_entry_t wait; + poll_table pt; +}; + +static int irqfd_wakeup(wait_queue_entry_t *wait, unsigned int mode, int sync, void *key) +{ + struct gh_irqfd *irqfd = container_of(wait, struct gh_irqfd, wait); + __poll_t flags = key_to_poll(key); + u64 enable_mask = GH_BELL_NONBLOCK; + u64 old_flags; + int ret = 0; + + if (flags & EPOLLIN) { + if (irqfd->ghrsc) { + ret = gh_hypercall_bell_send(irqfd->ghrsc->capid, enable_mask, &old_flags); + if (ret) + pr_err_ratelimited("Failed to inject interrupt %d: %d\n", + irqfd->ticket.label, ret); + } else + pr_err_ratelimited("Premature injection of interrupt\n"); + } + + return 0; +} + +static void irqfd_ptable_queue_proc(struct file *file, wait_queue_head_t *wqh, poll_table *pt) +{ + struct gh_irqfd *irq_ctx = container_of(pt, struct gh_irqfd, pt); + + add_wait_queue(wqh, &irq_ctx->wait); +} + +static int gh_irqfd_populate(struct gh_vm_resource_ticket *ticket, struct gh_resource *ghrsc) +{ + struct gh_irqfd *irqfd = container_of(ticket, struct gh_irqfd, ticket); + u64 enable_mask = GH_BELL_NONBLOCK; + u64 ack_mask = ~0; + int ret = 0; + + if (irqfd->ghrsc) { + pr_warn("irqfd%d already got a Gunyah resource. Check if multiple resources with same label were configured.\n", + irqfd->ticket.label); + return -1; + } + + irqfd->ghrsc = ghrsc; + if (irqfd->level) { + ret = gh_hypercall_bell_set_mask(irqfd->ghrsc->capid, enable_mask, ack_mask); + if (ret) + pr_warn("irq %d couldn't be set as level triggered. Might cause IRQ storm if asserted\n", + irqfd->ticket.label); + } + + return 0; +} + +static void gh_irqfd_unpopulate(struct gh_vm_resource_ticket *ticket, struct gh_resource *ghrsc) +{ + struct gh_irqfd *irqfd = container_of(ticket, struct gh_irqfd, ticket); + u64 cnt; + + eventfd_ctx_remove_wait_queue(irqfd->ctx, &irqfd->wait, &cnt); +} + +static long gh_irqfd_bind(struct gh_vm_function_instance *f) +{ + struct gh_fn_irqfd_arg *args = f->argp; + struct gh_irqfd *irqfd; + __poll_t events; + struct fd fd; + long r; + + if (f->arg_size != sizeof(*args)) + return -EINVAL; + + /* All other flag bits are reserved for future use */ + if (args->flags & ~GH_IRQFD_LEVEL) + return -EINVAL; + + irqfd = kzalloc(sizeof(*irqfd), GFP_KERNEL); + if (!irqfd) + return -ENOMEM; + + irqfd->f = f; + f->data = irqfd; + + fd = fdget(args->fd); + if (!fd.file) { + kfree(irqfd); + return -EBADF; + } + + irqfd->ctx = eventfd_ctx_fileget(fd.file); + if (IS_ERR(irqfd->ctx)) { + r = PTR_ERR(irqfd->ctx); + goto err_fdput; + } + + if (args->flags & GH_IRQFD_LEVEL) + irqfd->level = true; + + init_waitqueue_func_entry(&irqfd->wait, irqfd_wakeup); + init_poll_funcptr(&irqfd->pt, irqfd_ptable_queue_proc); + + irqfd->ticket.resource_type = GH_RESOURCE_TYPE_BELL_TX; + irqfd->ticket.label = args->label; + irqfd->ticket.owner = THIS_MODULE; + irqfd->ticket.populate = gh_irqfd_populate; + irqfd->ticket.unpopulate = gh_irqfd_unpopulate; + + r = gh_vm_add_resource_ticket(f->ghvm, &irqfd->ticket); + if (r) + goto err_ctx; + + events = vfs_poll(fd.file, &irqfd->pt); + if (events & EPOLLIN) + pr_warn("Premature injection of interrupt\n"); + fdput(fd); + + return 0; +err_ctx: + eventfd_ctx_put(irqfd->ctx); +err_fdput: + fdput(fd); + kfree(irqfd); + return r; +} + +static void gh_irqfd_unbind(struct gh_vm_function_instance *f) 
+{ + struct gh_irqfd *irqfd = f->data; + + gh_vm_remove_resource_ticket(irqfd->f->ghvm, &irqfd->ticket); + eventfd_ctx_put(irqfd->ctx); + kfree(irqfd); +} + +DECLARE_GH_VM_FUNCTION_INIT(irqfd, GH_FN_IRQFD, gh_irqfd_bind, gh_irqfd_unbind); +MODULE_DESCRIPTION("Gunyah irqfds"); +MODULE_LICENSE("GPL"); diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index 63395dacc1a8..0344b6988cfa 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -33,6 +33,11 @@ struct gh_resource { u32 rm_label; }; +/** + * Gunyah Doorbells + */ +#define GH_BELL_NONBLOCK BIT(32) + /** * Gunyah Message Queues */ diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index e52265fa5715..5617dadc1c7b 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -76,6 +76,19 @@ struct gh_vm_dtb_config { */ #define GH_FN_VCPU 1 +/** + * GH_FN_IRQFD - register eventfd to assert a Gunyah doorbell + * + * gh_fn_desc is filled with gh_fn_irqfd_arg + * + * Allows setting an eventfd to directly trigger a guest interrupt. + * irqfd.fd specifies the file descriptor to use as the eventfd. + * irqfd.label corresponds to the doorbell label used in the guest VM's devicetree. + * + * Return: 0 + */ +#define GH_FN_IRQFD 2 + #define GH_FN_MAX_ARG_SIZE 256 /** @@ -88,6 +101,23 @@ struct gh_fn_vcpu_arg { #define GH_IRQFD_LEVEL (1UL << 0) +/** + * struct gh_fn_irqfd_arg - Arguments to create an irqfd function + * @fd: an eventfd which when written to will raise a doorbell + * @label: Label of the doorbell created on the guest VM + * @flags: GH_IRQFD_LEVEL configures the corresponding doorbell to behave + * like a level triggered interrupt. + * @padding: padding bytes + */ +struct gh_fn_irqfd_arg { + __u32 fd; + __u32 label; + __u32 flags; + __u32 padding; +}; + +#define GH_IOEVENTFD_DATAMATCH (1UL << 0) + /** * struct gh_fn_desc - Arguments to create a VM function * @type: Type of the function. 
See GH_FN_* macro for supported types From patchwork Sat Mar 4 01:06:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 659066 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5B083C61DA4 for ; Sat, 4 Mar 2023 01:35:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229515AbjCDBfu (ORCPT ); Fri, 3 Mar 2023 20:35:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37016 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229484AbjCDBft (ORCPT ); Fri, 3 Mar 2023 20:35:49 -0500 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EA6E364A88; Fri, 3 Mar 2023 17:35:47 -0800 (PST) Received: from pps.filterd (m0279869.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 3240r4ow007584; Sat, 4 Mar 2023 01:07:24 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=qcppdkim1; bh=wvhDtInPmC0leqk/Kbs8lbSLrxQCbpRgDn//zaKP+3A=; b=N4QIX8FVeV//H/Tk6LCPslaXvPIyP/w97rv2LggkAF+fdV9KtVgsNEeMFK5VCWd+wx5U aJAt8n7ppAp3cAEUXdJ4O+wl7AVblYvqdHA5R7ivr5utiCGoLHSrU8nifzhybmWxrsxW iuIV5gB6gSI3iq46OXwcdp9KQlS5rT6OciIqjViCqaHcXoE+K/ixRyLy5Cr2xbhfG8md oLHR2J93mFuJHsm6O6sz6tAJe5xqtO+JBsy6Rny40zBLeTn+PBzDkUv/V0+E6Zl0q+2F QtZB6tDmhM8kBI7J6BYtf+vGv4vZbJy3II/qdS4H05jOqU5n3546zPprMqBdey53d4Iv Ow== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3p32ty430h-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 04 Mar 2023 01:07:23 +0000 Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 32417Mmd019390 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 4 Mar 2023 01:07:22 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.41; Fri, 3 Mar 2023 17:07:22 -0800 From: Elliot Berman To: Alex Elder , Srinivas Kandagatla , Elliot Berman , Prakruthi Deepak Heragu , Jonathan Corbet CC: Murali Nalajala , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Dmitry Baryshkov , Bjorn Andersson , "Konrad Dybcio" , Arnd Bergmann , "Greg Kroah-Hartman" , Rob Herring , Krzysztof Kozlowski , Bagas Sanjaya , Will Deacon , Andy Gross , Catalin Marinas , Jassi Brar , , , , , Subject: [PATCH v11 25/26] virt: gunyah: Add ioeventfd Date: Fri, 3 Mar 2023 17:06:31 -0800 Message-ID: <20230304010632.2127470-26-quic_eberman@quicinc.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230304010632.2127470-1-quic_eberman@quicinc.com> References: <20230304010632.2127470-1-quic_eberman@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.49.16.6] X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 
definitions=5800 signatures=585085 X-Proofpoint-GUID: 0ezja0ChU0czpuzouca54EhuQjAyN310 X-Proofpoint-ORIG-GUID: 0ezja0ChU0czpuzouca54EhuQjAyN310 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-03-03_07,2023-03-03_01,2023-02-09_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0 spamscore=0 mlxlogscore=999 lowpriorityscore=0 suspectscore=0 mlxscore=0 priorityscore=1501 malwarescore=0 adultscore=0 impostorscore=0 phishscore=0 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2212070000 definitions=main-2303040004 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Allow userspace to attach an ioeventfd to an mmio address within the guest. Co-developed-by: Prakruthi Deepak Heragu Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman --- Documentation/virt/gunyah/vm-manager.rst | 2 +- drivers/virt/gunyah/Kconfig | 9 ++ drivers/virt/gunyah/Makefile | 1 + drivers/virt/gunyah/gunyah_ioeventfd.c | 117 +++++++++++++++++++++++ include/uapi/linux/gunyah.h | 37 +++++++ 5 files changed, 165 insertions(+), 1 deletion(-) create mode 100644 drivers/virt/gunyah/gunyah_ioeventfd.c diff --git a/Documentation/virt/gunyah/vm-manager.rst b/Documentation/virt/gunyah/vm-manager.rst index a1dd70f0cbf6..cd41a705849f 100644 --- a/Documentation/virt/gunyah/vm-manager.rst +++ b/Documentation/virt/gunyah/vm-manager.rst @@ -124,7 +124,7 @@ the VM starts. The possible types are documented below: .. kernel-doc:: include/uapi/linux/gunyah.h - :identifiers: GH_FN_VCPU gh_fn_vcpu_arg GH_FN_IRQFD gh_fn_irqfd_arg + :identifiers: GH_FN_VCPU gh_fn_vcpu_arg GH_FN_IRQFD gh_fn_irqfd_arg GH_FN_IOEVENTFD gh_fn_ioeventfd_arg Gunyah VCPU API Descriptions ---------------------------- diff --git a/drivers/virt/gunyah/Kconfig b/drivers/virt/gunyah/Kconfig index 2cde24d429d1..bd8e31184962 100644 --- a/drivers/virt/gunyah/Kconfig +++ b/drivers/virt/gunyah/Kconfig @@ -35,3 +35,12 @@ config GUNYAH_IRQFD on Gunyah virtual machine. Say Y/M here if unsure and you want to support Gunyah VMMs. + +config GUNYAH_IOEVENTFD + tristate "Gunyah ioeventfd interface" + depends on GUNYAH + help + Enable kernel support for creating ioeventfds which can alert userspace + when a Gunyah virtual machine accesses a memory address. + + Say Y/M here if unsure and you want to support Gunyah VMMs. diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index 6cf756bfa3c2..7347b1470491 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -8,3 +8,4 @@ obj-$(CONFIG_GUNYAH) += gunyah_rsc_mgr.o obj-$(CONFIG_GUNYAH_VCPU) += gunyah_vcpu.o obj-$(CONFIG_GUNYAH_IRQFD) += gunyah_irqfd.o +obj-$(CONFIG_GUNYAH_IOEVENTFD) += gunyah_ioeventfd.o diff --git a/drivers/virt/gunyah/gunyah_ioeventfd.c b/drivers/virt/gunyah/gunyah_ioeventfd.c new file mode 100644 index 000000000000..517f55706ed9 --- /dev/null +++ b/drivers/virt/gunyah/gunyah_ioeventfd.c @@ -0,0 +1,117 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include + +#include + +struct gh_ioeventfd { + struct gh_vm_function_instance *f; + struct gh_vm_io_handler io_handler; + + struct eventfd_ctx *ctx; +}; + +static int gh_write_ioeventfd(struct gh_vm_io_handler *io_dev, u64 addr, u32 len, u64 data) +{ + struct gh_ioeventfd *iofd = container_of(io_dev, struct gh_ioeventfd, io_handler); + + eventfd_signal(iofd->ctx, 1); + return 0; +} + +static struct gh_vm_io_handler_ops io_ops = { + .write = gh_write_ioeventfd, +}; + +static long gh_ioeventfd_bind(struct gh_vm_function_instance *f) +{ + const struct gh_fn_ioeventfd_arg *args = f->argp; + struct eventfd_ctx *ctx = NULL; + struct gh_ioeventfd *iofd; + int ret; + + if (f->arg_size != sizeof(*args)) + return -EINVAL; + + /* must be natural-word sized, or 0 to ignore length */ + switch (args->len) { + case 0: + case 1: + case 2: + case 4: + case 8: + break; + default: + return -EINVAL; + } + + /* check for range overflow */ + if (args->addr + args->len < args->addr) + return -EINVAL; + + /* ioeventfd with no length can't be combined with DATAMATCH */ + if (!args->len && (args->flags & GH_IOEVENTFD_DATAMATCH)) + return -EINVAL; + + /* All other flag bits are reserved for future use */ + if (args->flags & ~GH_IOEVENTFD_DATAMATCH) + return -EINVAL; + + ctx = eventfd_ctx_fdget(args->fd); + if (IS_ERR(ctx)) + return PTR_ERR(ctx); + + iofd = kzalloc(sizeof(*iofd), GFP_KERNEL); + if (!iofd) { + ret = -ENOMEM; + goto err_eventfd; + } + + f->data = iofd; + iofd->f = f; + + iofd->ctx = ctx; + + if (args->flags & GH_IOEVENTFD_DATAMATCH) { + iofd->io_handler.datamatch = true; + iofd->io_handler.len = args->len; + iofd->io_handler.data = args->datamatch; + } + iofd->io_handler.addr = args->addr; + iofd->io_handler.ops = &io_ops; + + ret = gh_vm_add_io_handler(f->ghvm, &iofd->io_handler); + if (ret) + goto err_io_dev_add; + + return 0; + +err_io_dev_add: + kfree(iofd); +err_eventfd: + eventfd_ctx_put(ctx); + return ret; +} + +static void gh_ioevent_unbind(struct gh_vm_function_instance *f) +{ + struct gh_ioeventfd *iofd = f->data; + + eventfd_ctx_put(iofd->ctx); + gh_vm_remove_io_handler(iofd->f->ghvm, &iofd->io_handler); + kfree(iofd); +} + +DECLARE_GH_VM_FUNCTION_INIT(ioeventfd, GH_FN_IOEVENTFD, + gh_ioeventfd_bind, gh_ioevent_unbind); +MODULE_DESCRIPTION("Gunyah ioeventfds"); +MODULE_LICENSE("GPL"); diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index 5617dadc1c7b..f8482ff4cc55 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -89,6 +89,23 @@ struct gh_vm_dtb_config { */ #define GH_FN_IRQFD 2 +/** + * GH_FN_IOEVENTFD - register ioeventfd to trigger when VM faults on parameter + * + * gh_fn_desc is filled with gh_fn_ioeventfd_arg + * + * Attaches an ioeventfd to a legal mmio address within the guest. A guest write + * in the registered address will signal the provided event instead of triggering + * an exit on the GH_VCPU_RUN ioctl. + * + * If GH_IOEVENTFD_DATAMATCH flag is set, the event will be signaled only if the + * written value to the registered address is equal to datamatch in + * struct gh_fn_ioeventfd_arg. 
+ * + * Return: 0 + */ +#define GH_FN_IOEVENTFD 3 + #define GH_FN_MAX_ARG_SIZE 256 /** @@ -118,6 +135,26 @@ struct gh_fn_irqfd_arg { #define GH_IOEVENTFD_DATAMATCH (1UL << 0) +/** + * struct gh_fn_ioeventfd_arg - Arguments to create an ioeventfd function + * @datamatch: data used when GH_IOEVENTFD_DATAMATCH is set + * @addr: Address in guest memory + * @len: Length of access + * @fd: When ioeventfd is matched, this eventfd is written + * @flags: If GH_IOEVENTFD_DATAMATCH flag is set, the event will be signaled + * only if the written value to the registered address is equal to + * @datamatch + * @padding: padding bytes + */ +struct gh_fn_ioeventfd_arg { + __u64 datamatch; + __u64 addr; /* legal mmio address */ + __u32 len; /* 1, 2, 4, or 8 bytes; or 0 to ignore length */ + __s32 fd; + __u32 flags; + __u32 padding; +}; + /** * struct gh_fn_desc - Arguments to create a VM function * @type: Type of the function. See GH_FN_* macro for supported types