From patchwork Tue Jan 9 19:37:40 2024
X-Patchwork-Submitter: Elliot Berman
X-Patchwork-Id: 761175
From: Elliot Berman
Date: Tue, 9 Jan 2024 11:37:40 -0800
Subject: [PATCH v16 02/34] dt-bindings: Add binding for gunyah hypervisor
Message-ID: <20240109-gunyah-v16-2-634904bf4ce9@quicinc.com>
References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>
In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>
The Gunyah Resource Manager applies a devicetree overlay describing the
virtual platform configuration of the guest VM, such as the message queue
capability IDs for communicating with the Resource Manager. This
information is not otherwise discoverable by a VM: the Gunyah hypervisor
core does not provide a direct interface to discover capability IDs, nor a
way to communicate with the RM without already knowing the corresponding
message queue capability ID.

Add the DT bindings that Gunyah adheres to for the hypervisor node and
message queues.

Reviewed-by: Rob Herring
Signed-off-by: Elliot Berman
---
 .../bindings/firmware/gunyah-hypervisor.yaml | 82 ++++++++++++++++++++++
 1 file changed, 82 insertions(+)

diff --git a/Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml b/Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml
new file mode 100644
index 000000000000..cdeb4885a807
--- /dev/null
+++ b/Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml
@@ -0,0 +1,82 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/firmware/gunyah-hypervisor.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Gunyah Hypervisor
+
+maintainers:
+  - Prakruthi Deepak Heragu
+  - Elliot Berman
+
+description: |+
+  Gunyah virtual machines use this information to determine the capability IDs
+  of the message queues used to communicate with the Gunyah Resource Manager.
+  See also: https://github.com/quic/gunyah-resource-manager/blob/develop/src/vm_creation/dto_construct.c
+
+properties:
+  compatible:
+    const: gunyah-hypervisor
+
+  "#address-cells":
+    description: Number of cells needed to represent 64-bit capability IDs.
+    const: 2
+
+  "#size-cells":
+    description: must be 0, because capability IDs are not memory address
+      ranges and do not have a size.
+    const: 0
+
+patternProperties:
+  "^gunyah-resource-mgr(@.*)?":
+    type: object
+    description:
+      Resource Manager node which is required to communicate to Resource
+      Manager VM using Gunyah Message Queues.
+
+    properties:
+      compatible:
+        const: gunyah-resource-manager
+
+      reg:
+        items:
+          - description: Gunyah capability ID of the TX message queue
+          - description: Gunyah capability ID of the RX message queue
+
+      interrupts:
+        items:
+          - description: Interrupt for the TX message queue
+          - description: Interrupt for the RX message queue
+
+    additionalProperties: false
+
+    required:
+      - compatible
+      - reg
+      - interrupts
+
+additionalProperties: false
+
+required:
+  - compatible
+  - "#address-cells"
+  - "#size-cells"
+
+examples:
+  - |
+    #include
+
+    hypervisor {
+        #address-cells = <2>;
+        #size-cells = <0>;
+        compatible = "gunyah-hypervisor";
+
+        gunyah-resource-mgr@0 {
+            compatible = "gunyah-resource-manager";
+            interrupts = , /* TX allowed IRQ */
+                         ; /* RX requested IRQ */
+            reg = <0x00000000 0x00000000>, /* TX capability ID */
+                  <0x00000000 0x00000001>; /* RX capability ID */
+        };
+    };

From patchwork Tue Jan 9 19:37:43 2024
X-Patchwork-Submitter: Elliot Berman
X-Patchwork-Id: 761174
From: Elliot Berman
Date: Tue, 9 Jan 2024 11:37:43 -0800
Subject: [PATCH v16 05/34] virt: gunyah: Add hypervisor driver
Message-ID: <20240109-gunyah-v16-5-634904bf4ce9@quicinc.com>
References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>
In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>

Add a driver to detect when running under Gunyah. It performs a basic
identification hypercall and populates the platform bus for the resource
manager to probe.

Signed-off-by: Elliot Berman
---
 drivers/virt/Makefile        | 1 +
 drivers/virt/gunyah/Makefile | 3 +++
 drivers/virt/gunyah/gunyah.c | 52 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 56 insertions(+)

diff --git a/drivers/virt/Makefile b/drivers/virt/Makefile
index f29901bd7820..ef6a3835d078 100644
--- a/drivers/virt/Makefile
+++ b/drivers/virt/Makefile
@@ -10,3 +10,4 @@ obj-y += vboxguest/
 obj-$(CONFIG_NITRO_ENCLAVES) += nitro_enclaves/
 obj-$(CONFIG_ACRN_HSM) += acrn/
 obj-y += coco/
+obj-y += gunyah/

diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile
new file mode 100644
index 000000000000..34f32110faf9
--- /dev/null
+++ b/drivers/virt/gunyah/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0
+
+obj-$(CONFIG_GUNYAH) += gunyah.o

diff --git a/drivers/virt/gunyah/gunyah.c b/drivers/virt/gunyah/gunyah.c
new file mode 100644
index 000000000000..ef8a85f27590
--- /dev/null
+++ b/drivers/virt/gunyah/gunyah.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2023-2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include
+#include
+#include
+#include
+
+static int gunyah_probe(struct platform_device *pdev)
+{
+	struct gunyah_hypercall_hyp_identify_resp gunyah_api;
+
+	if (!arch_is_gunyah_guest())
+		return -ENODEV;
+
+	gunyah_hypercall_hyp_identify(&gunyah_api);
+
+	pr_info("Running under Gunyah hypervisor %llx/v%u\n",
+		FIELD_GET(GUNYAH_API_INFO_VARIANT_MASK, gunyah_api.api_info),
+		gunyah_api_version(&gunyah_api));
+
+	/* Might move this out to individual drivers if there's ever an API version bump */
+	if (gunyah_api_version(&gunyah_api) != GUNYAH_API_V1) {
+		pr_info("Unsupported Gunyah version: %u\n",
+			gunyah_api_version(&gunyah_api));
+		return -ENODEV;
+	}
+
+	return devm_of_platform_populate(&pdev->dev);
+}
+
+static const struct of_device_id gunyah_of_match[] = {
+	{ .compatible = "gunyah-hypervisor" },
+	{}
+};
+MODULE_DEVICE_TABLE(of, gunyah_of_match);
+
+/* clang-format off */
+static struct platform_driver gunyah_driver = {
+	.probe = gunyah_probe,
+	.driver = {
+		.name = "gunyah",
+		.of_match_table = gunyah_of_match,
+	}
+};
+/* clang-format on */
+module_platform_driver(gunyah_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Gunyah Driver");

From patchwork Tue Jan 9 19:37:44 2024
X-Patchwork-Submitter: Elliot Berman
X-Patchwork-Id: 761172
From: Elliot Berman
Date: Tue, 9 Jan 2024 11:37:44 -0800
Subject: [PATCH v16 06/34] virt: gunyah: msgq: Add hypercalls to send and receive messages
Message-ID: <20240109-gunyah-v16-6-634904bf4ce9@quicinc.com>
References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>
In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>

Add hypercalls to send and receive messages on a Gunyah message queue.

Reviewed-by: Alex Elder
Reviewed-by: Srinivas Kandagatla
Signed-off-by: Elliot Berman
---
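For illustration only, not part of this patch: a minimal sketch of how a
caller owning a TX message queue capability might drive
gunyah_hypercall_msgq_send(). It assumes a 'tx_ready' completion signalled
from the caller's TX-ready interrupt handler and the GUNYAH_ERROR_* values
from include/linux/gunyah.h; the resource manager client added later in
this series follows the same pattern.

/* needs <linux/completion.h> and <linux/gunyah.h> */
static int example_msgq_send(u64 capid, void *buf, size_t len,
			     struct completion *tx_ready)
{
	enum gunyah_error err;
	bool ready;

	do {
		/* PUSH raises the receiver's RX vIRQ immediately */
		err = gunyah_hypercall_msgq_send(capid, len, buf,
				GUNYAH_HYPERCALL_MSGQ_TX_FLAGS_PUSH, &ready);
		/* Queue full: wait for the TX IRQ before retrying */
		if (err == GUNYAH_ERROR_MSGQUEUE_FULL)
			wait_for_completion(tx_ready);
	} while (err == GUNYAH_ERROR_MSGQUEUE_FULL);

	return err == GUNYAH_ERROR_OK ? 0 : -EINVAL;
}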
 arch/arm64/gunyah/gunyah_hypercall.c | 55 ++++++++++++++++++++++++++++++++++++
 include/linux/gunyah.h               |  8 ++++++
 2 files changed, 63 insertions(+)

diff --git a/arch/arm64/gunyah/gunyah_hypercall.c b/arch/arm64/gunyah/gunyah_hypercall.c
index d44663334f38..1302e128be6e 100644
--- a/arch/arm64/gunyah/gunyah_hypercall.c
+++ b/arch/arm64/gunyah/gunyah_hypercall.c
@@ -37,6 +37,8 @@ EXPORT_SYMBOL_GPL(arch_is_gunyah_guest);
 
 /* clang-format off */
 #define GUNYAH_HYPERCALL_HYP_IDENTIFY		GUNYAH_HYPERCALL(0x8000)
+#define GUNYAH_HYPERCALL_MSGQ_SEND		GUNYAH_HYPERCALL(0x801B)
+#define GUNYAH_HYPERCALL_MSGQ_RECV		GUNYAH_HYPERCALL(0x801C)
 /* clang-format on */
 
 /**
@@ -58,5 +60,58 @@ void gunyah_hypercall_hyp_identify(
 }
 EXPORT_SYMBOL_GPL(gunyah_hypercall_hyp_identify);
 
+/**
+ * gunyah_hypercall_msgq_send() - Send a buffer on a message queue
+ * @capid: capability ID of the message queue to add message
+ * @size: Size of @buff
+ * @buff: Address of buffer to send
+ * @tx_flags: See GUNYAH_HYPERCALL_MSGQ_TX_FLAGS_*
+ * @ready: If the send was successful, ready is filled with true if more
+ *         messages can be sent on the queue. If false, then the tx IRQ will
+ *         be raised in future when send can succeed.
+ */
+enum gunyah_error gunyah_hypercall_msgq_send(u64 capid, size_t size, void *buff,
+					     u64 tx_flags, bool *ready)
+{
+	struct arm_smccc_res res;
+
+	arm_smccc_1_1_hvc(GUNYAH_HYPERCALL_MSGQ_SEND, capid, size,
+			  (uintptr_t)buff, tx_flags, 0, &res);
+
+	if (res.a0 == GUNYAH_ERROR_OK)
+		*ready = !!res.a1;
+
+	return res.a0;
+}
+EXPORT_SYMBOL_GPL(gunyah_hypercall_msgq_send);
+
+/**
+ * gunyah_hypercall_msgq_recv() - Receive a buffer from a message queue
+ * @capid: capability ID of the message queue to receive a message from
+ * @buff: Address of buffer to copy received data into
+ * @size: Size of @buff
+ * @recv_size: If the receive was successful, recv_size is filled with the
+ *             size of data received. Will be <= size.
+ * @ready: If the receive was successful, ready is filled with true if more
+ *         messages are ready to be received on the queue. If false, then the
+ *         rx IRQ will be raised in future when recv can succeed.
+ */
+enum gunyah_error gunyah_hypercall_msgq_recv(u64 capid, void *buff, size_t size,
+					     size_t *recv_size, bool *ready)
+{
+	struct arm_smccc_res res;
+
+	arm_smccc_1_1_hvc(GUNYAH_HYPERCALL_MSGQ_RECV, capid, (uintptr_t)buff,
+			  size, 0, &res);
+
+	if (res.a0 == GUNYAH_ERROR_OK) {
+		*recv_size = res.a1;
+		*ready = !!res.a2;
+	}
+
+	return res.a0;
+}
+EXPORT_SYMBOL_GPL(gunyah_hypercall_msgq_recv);
+
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Gunyah Hypervisor Hypercalls");
diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h
index 33bcbd22d39f..acd70f982425 100644
--- a/include/linux/gunyah.h
+++ b/include/linux/gunyah.h
@@ -141,4 +141,12 @@ gunyah_api_version(const struct gunyah_hypercall_hyp_identify_resp *gunyah_api)
 void gunyah_hypercall_hyp_identify(
 	struct gunyah_hypercall_hyp_identify_resp *hyp_identity);
 
+/* Immediately raise RX vIRQ on receiver VM */
+#define GUNYAH_HYPERCALL_MSGQ_TX_FLAGS_PUSH	BIT(0)
+
+enum gunyah_error gunyah_hypercall_msgq_send(u64 capid, size_t size, void *buff,
+					     u64 tx_flags, bool *ready);
+enum gunyah_error gunyah_hypercall_msgq_recv(u64 capid, void *buff, size_t size,
+					     size_t *recv_size, bool *ready);
+
 #endif

From patchwork Tue Jan 9 19:37:45 2024
X-Patchwork-Submitter: Elliot Berman
X-Patchwork-Id: 761173
From: Elliot Berman
Date: Tue, 9 Jan 2024 11:37:45 -0800
Subject: [PATCH v16 07/34] gunyah: rsc_mgr: Add resource manager RPC core
Message-ID: <20240109-gunyah-v16-7-634904bf4ce9@quicinc.com>
References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>
In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>

The resource manager is a special virtual machine which is always running
on a Gunyah system. It provides APIs for creating and destroying VMs,
secure memory management, sharing/lending of memory between VMs, and setup
of inter-VM communication. Calls to the resource manager are made via
message queues.

This patch implements the basic probing and RPC mechanism to make those
API calls. Request/response calls can be made with gunyah_rm_call().
Drivers can also register for notifications pushed by RM via
gunyah_rm_notifier_register().

Specific API calls that the resource manager supports will be implemented
in subsequent patches.
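For illustration only, not part of this patch: a rough sketch of a
hypothetical client of the RPC core. The message ID (0x1234), payload
layout, and function names prefixed with example_ are made up; the
gunyah_rm handle would come from the parent resource manager device.

static int example_notifier_call(struct notifier_block *nb,
				 unsigned long notification, void *payload)
{
	/* 'notification' is the RM notification message ID */
	return NOTIFY_OK;
}

static struct notifier_block example_nb = {
	.notifier_call = example_notifier_call,
};

static int example_rm_client(struct gunyah_rm *rm)
{
	__le16 req = cpu_to_le16(0);	/* placeholder request payload */
	void *resp = NULL;
	size_t resp_size = 0;
	int ret;

	ret = gunyah_rm_notifier_register(rm, &example_nb);
	if (ret)
		return ret;

	ret = gunyah_rm_call(rm, 0x1234, &req, sizeof(req), &resp, &resp_size);
	if (!ret && resp_size)
		kfree(resp);	/* caller owns the reply payload on success */

	gunyah_rm_notifier_unregister(rm, &example_nb);
	return ret;
}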
Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Makefile | 4 +- drivers/virt/gunyah/rsc_mgr.c | 724 ++++++++++++++++++++++++++++++++++++++++++ drivers/virt/gunyah/rsc_mgr.h | 28 ++ 3 files changed, 755 insertions(+), 1 deletion(-) diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index 34f32110faf9..c2308389f551 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -1,3 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 -obj-$(CONFIG_GUNYAH) += gunyah.o +gunyah_rsc_mgr-y += rsc_mgr.o + +obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o diff --git a/drivers/virt/gunyah/rsc_mgr.c b/drivers/virt/gunyah/rsc_mgr.c new file mode 100644 index 000000000000..a3578a0c10b4 --- /dev/null +++ b/drivers/virt/gunyah/rsc_mgr.c @@ -0,0 +1,724 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "rsc_mgr.h" + +/* clang-format off */ +#define RM_RPC_API_VERSION_MASK GENMASK(3, 0) +#define RM_RPC_HEADER_WORDS_MASK GENMASK(7, 4) +#define RM_RPC_API_VERSION FIELD_PREP(RM_RPC_API_VERSION_MASK, 1) +#define RM_RPC_HEADER_WORDS FIELD_PREP(RM_RPC_HEADER_WORDS_MASK, \ + (sizeof(struct gunyah_rm_rpc_hdr) / sizeof(u32))) +#define RM_RPC_API (RM_RPC_API_VERSION | RM_RPC_HEADER_WORDS) + +#define RM_RPC_TYPE_CONTINUATION 0x0 +#define RM_RPC_TYPE_REQUEST 0x1 +#define RM_RPC_TYPE_REPLY 0x2 +#define RM_RPC_TYPE_NOTIF 0x3 +#define RM_RPC_TYPE_MASK GENMASK(1, 0) + +#define GUNYAH_RM_MAX_NUM_FRAGMENTS 62 +#define RM_RPC_FRAGMENTS_MASK GENMASK(7, 2) +/* clang-format on */ + +struct gunyah_rm_rpc_hdr { + u8 api; + u8 type; + __le16 seq; + __le32 msg_id; +} __packed; + +struct gunyah_rm_rpc_reply_hdr { + struct gunyah_rm_rpc_hdr hdr; + __le32 err_code; /* GUNYAH_RM_ERROR_* */ +} __packed; + +#define GUNYAH_RM_MSGQ_MSG_SIZE 240 +#define GUNYAH_RM_PAYLOAD_SIZE \ + (GUNYAH_RM_MSGQ_MSG_SIZE - sizeof(struct gunyah_rm_rpc_hdr)) + +/* RM Error codes */ +enum gunyah_rm_error { + /* clang-format off */ + GUNYAH_RM_ERROR_OK = 0x0, + GUNYAH_RM_ERROR_UNIMPLEMENTED = 0xFFFFFFFF, + GUNYAH_RM_ERROR_NOMEM = 0x1, + GUNYAH_RM_ERROR_NORESOURCE = 0x2, + GUNYAH_RM_ERROR_DENIED = 0x3, + GUNYAH_RM_ERROR_INVALID = 0x4, + GUNYAH_RM_ERROR_BUSY = 0x5, + GUNYAH_RM_ERROR_ARGUMENT_INVALID = 0x6, + GUNYAH_RM_ERROR_HANDLE_INVALID = 0x7, + GUNYAH_RM_ERROR_VALIDATE_FAILED = 0x8, + GUNYAH_RM_ERROR_MAP_FAILED = 0x9, + GUNYAH_RM_ERROR_MEM_INVALID = 0xA, + GUNYAH_RM_ERROR_MEM_INUSE = 0xB, + GUNYAH_RM_ERROR_MEM_RELEASED = 0xC, + GUNYAH_RM_ERROR_VMID_INVALID = 0xD, + GUNYAH_RM_ERROR_LOOKUP_FAILED = 0xE, + GUNYAH_RM_ERROR_IRQ_INVALID = 0xF, + GUNYAH_RM_ERROR_IRQ_INUSE = 0x10, + GUNYAH_RM_ERROR_IRQ_RELEASED = 0x11, + /* clang-format on */ +}; + +/** + * struct gunyah_rm_message - Represents a complete message from resource manager + * @payload: Combined payload of all the fragments (msg headers stripped off). + * @size: Size of the payload received so far. + * @msg_id: Message ID from the header. + * @type: RM_RPC_TYPE_REPLY or RM_RPC_TYPE_NOTIF. + * @num_fragments: total number of fragments expected to be received. + * @fragments_received: fragments received so far. 
+ * @reply: Fields used for request/reply sequences + */ +struct gunyah_rm_message { + void *payload; + size_t size; + u32 msg_id; + u8 type; + + u8 num_fragments; + u8 fragments_received; + + /** + * @ret: Linux return code, there was an error processing message + * @seq: Sequence ID for the main message. + * @rm_error: For request/reply sequences with standard replies + * @seq_done: Signals caller that the RM reply has been received + */ + struct { + int ret; + u16 seq; + enum gunyah_rm_error rm_error; + struct completion seq_done; + } reply; +}; + +/** + * struct gunyah_rm - private data for communicating w/Gunyah resource manager + * @dev: pointer to RM platform device + * @tx_ghrsc: message queue resource to TX to RM + * @rx_ghrsc: message queue resource to RX from RM + * @active_rx_message: ongoing gunyah_rm_message for which we're receiving fragments + * @call_xarray: xarray to allocate & lookup sequence IDs for Request/Response flows + * @next_seq: next ID to allocate (for xa_alloc_cyclic) + * @recv_msg: cached allocation for Rx messages + * @send_msg: cached allocation for Tx messages. Must hold @send_lock to manipulate. + * @send_lock: synchronization to allow only one request to be sent at a time + * @send_ready: completed when we know Tx message queue can take more messages + * @nh: notifier chain for clients interested in RM notification messages + */ +struct gunyah_rm { + struct device *dev; + struct gunyah_resource tx_ghrsc; + struct gunyah_resource rx_ghrsc; + struct gunyah_rm_message *active_rx_message; + + struct xarray call_xarray; + u32 next_seq; + + unsigned char recv_msg[GUNYAH_RM_MSGQ_MSG_SIZE]; + unsigned char send_msg[GUNYAH_RM_MSGQ_MSG_SIZE]; + struct mutex send_lock; + struct completion send_ready; + struct blocking_notifier_head nh; +}; + +/** + * gunyah_rm_error_remap() - Remap Gunyah resource manager errors into a Linux error code + * @rm_error: "Standard" return value from Gunyah resource manager + */ +static inline int gunyah_rm_error_remap(enum gunyah_rm_error rm_error) +{ + switch (rm_error) { + case GUNYAH_RM_ERROR_OK: + return 0; + case GUNYAH_RM_ERROR_UNIMPLEMENTED: + return -EOPNOTSUPP; + case GUNYAH_RM_ERROR_NOMEM: + return -ENOMEM; + case GUNYAH_RM_ERROR_NORESOURCE: + return -ENODEV; + case GUNYAH_RM_ERROR_DENIED: + return -EPERM; + case GUNYAH_RM_ERROR_BUSY: + return -EBUSY; + case GUNYAH_RM_ERROR_INVALID: + case GUNYAH_RM_ERROR_ARGUMENT_INVALID: + case GUNYAH_RM_ERROR_HANDLE_INVALID: + case GUNYAH_RM_ERROR_VALIDATE_FAILED: + case GUNYAH_RM_ERROR_MAP_FAILED: + case GUNYAH_RM_ERROR_MEM_INVALID: + case GUNYAH_RM_ERROR_MEM_INUSE: + case GUNYAH_RM_ERROR_MEM_RELEASED: + case GUNYAH_RM_ERROR_VMID_INVALID: + case GUNYAH_RM_ERROR_LOOKUP_FAILED: + case GUNYAH_RM_ERROR_IRQ_INVALID: + case GUNYAH_RM_ERROR_IRQ_INUSE: + case GUNYAH_RM_ERROR_IRQ_RELEASED: + return -EINVAL; + default: + return -EBADMSG; + } +} + +static int gunyah_rm_init_message_payload(struct gunyah_rm_message *message, + const void *msg, size_t hdr_size, + size_t msg_size) +{ + const struct gunyah_rm_rpc_hdr *hdr = msg; + size_t max_buf_size, payload_size; + + if (msg_size < hdr_size) + return -EINVAL; + + payload_size = msg_size - hdr_size; + + message->num_fragments = FIELD_GET(RM_RPC_FRAGMENTS_MASK, hdr->type); + message->fragments_received = 0; + + /* There's not going to be any payload, no need to allocate buffer. 
*/ + if (!payload_size && !message->num_fragments) + return 0; + + if (message->num_fragments > GUNYAH_RM_MAX_NUM_FRAGMENTS) + return -EINVAL; + + max_buf_size = payload_size + + (message->num_fragments * GUNYAH_RM_PAYLOAD_SIZE); + + message->payload = kzalloc(max_buf_size, GFP_KERNEL); + if (!message->payload) + return -ENOMEM; + + memcpy(message->payload, msg + hdr_size, payload_size); + message->size = payload_size; + return 0; +} + +static void gunyah_rm_abort_message(struct gunyah_rm *rm) +{ + switch (rm->active_rx_message->type) { + case RM_RPC_TYPE_REPLY: + rm->active_rx_message->reply.ret = -EIO; + complete(&rm->active_rx_message->reply.seq_done); + break; + case RM_RPC_TYPE_NOTIF: + fallthrough; + default: + kfree(rm->active_rx_message->payload); + kfree(rm->active_rx_message); + } + + rm->active_rx_message = NULL; +} + +static inline void gunyah_rm_try_complete_message(struct gunyah_rm *rm) +{ + struct gunyah_rm_message *message = rm->active_rx_message; + + if (!message || message->fragments_received != message->num_fragments) + return; + + switch (message->type) { + case RM_RPC_TYPE_REPLY: + complete(&message->reply.seq_done); + break; + case RM_RPC_TYPE_NOTIF: + blocking_notifier_call_chain(&rm->nh, message->msg_id, + message->payload); + + kfree(message->payload); + kfree(message); + break; + default: + dev_err_ratelimited(rm->dev, + "Invalid message type (%u) received\n", + message->type); + gunyah_rm_abort_message(rm); + break; + } + + rm->active_rx_message = NULL; +} + +static void gunyah_rm_process_notif(struct gunyah_rm *rm, const void *msg, + size_t msg_size) +{ + const struct gunyah_rm_rpc_hdr *hdr = msg; + struct gunyah_rm_message *message; + int ret; + + if (rm->active_rx_message) { + dev_err(rm->dev, + "Unexpected new notification, still processing an active message"); + gunyah_rm_abort_message(rm); + } + + message = kzalloc(sizeof(*message), GFP_KERNEL); + if (!message) + return; + + message->type = RM_RPC_TYPE_NOTIF; + message->msg_id = le32_to_cpu(hdr->msg_id); + + ret = gunyah_rm_init_message_payload(message, msg, sizeof(*hdr), + msg_size); + if (ret) { + dev_err(rm->dev, + "Failed to initialize message for notification: %d\n", + ret); + kfree(message); + return; + } + + rm->active_rx_message = message; + + gunyah_rm_try_complete_message(rm); +} + +static void gunyah_rm_process_reply(struct gunyah_rm *rm, const void *msg, + size_t msg_size) +{ + const struct gunyah_rm_rpc_reply_hdr *reply_hdr = msg; + struct gunyah_rm_message *message; + u16 seq_id; + + seq_id = le16_to_cpu(reply_hdr->hdr.seq); + message = xa_load(&rm->call_xarray, seq_id); + + if (!message || message->msg_id != le32_to_cpu(reply_hdr->hdr.msg_id)) + return; + + if (rm->active_rx_message) { + dev_err(rm->dev, + "Unexpected new reply, still processing an active message"); + gunyah_rm_abort_message(rm); + } + + if (gunyah_rm_init_message_payload(message, msg, sizeof(*reply_hdr), + msg_size)) { + dev_err(rm->dev, + "Failed to alloc message buffer for sequence %d\n", + seq_id); + /* Send message complete and error the client. 
*/ + message->reply.ret = -ENOMEM; + complete(&message->reply.seq_done); + return; + } + + message->reply.rm_error = le32_to_cpu(reply_hdr->err_code); + rm->active_rx_message = message; + + gunyah_rm_try_complete_message(rm); +} + +static void gunyah_rm_process_cont(struct gunyah_rm *rm, + struct gunyah_rm_message *message, + const void *msg, size_t msg_size) +{ + const struct gunyah_rm_rpc_hdr *hdr = msg; + size_t payload_size = msg_size - sizeof(*hdr); + + if (!rm->active_rx_message) + return; + + /* + * hdr->fragments and hdr->msg_id preserves the value from first reply + * or notif message. To detect mishandling, check it's still intact. + */ + if (message->msg_id != le32_to_cpu(hdr->msg_id) || + message->num_fragments != + FIELD_GET(RM_RPC_FRAGMENTS_MASK, hdr->type)) { + gunyah_rm_abort_message(rm); + return; + } + + memcpy(message->payload + message->size, msg + sizeof(*hdr), + payload_size); + message->size += payload_size; + message->fragments_received++; + + gunyah_rm_try_complete_message(rm); +} + +static irqreturn_t gunyah_rm_rx(int irq, void *data) +{ + enum gunyah_error gunyah_error; + struct gunyah_rm_rpc_hdr *hdr; + struct gunyah_rm *rm = data; + void *msg = &rm->recv_msg[0]; + size_t len; + bool ready; + + do { + gunyah_error = gunyah_hypercall_msgq_recv(rm->rx_ghrsc.capid, + msg, + sizeof(rm->recv_msg), + &len, &ready); + if (gunyah_error != GUNYAH_ERROR_OK) { + if (gunyah_error != GUNYAH_ERROR_MSGQUEUE_EMPTY) + dev_warn(rm->dev, + "Failed to receive data: %d\n", + gunyah_error); + return IRQ_HANDLED; + } + + if (len < sizeof(*hdr)) { + dev_err_ratelimited( + rm->dev, + "Too small message received. size=%ld\n", len); + continue; + } + + hdr = msg; + if (hdr->api != RM_RPC_API) { + dev_err(rm->dev, "Unknown RM RPC API version: %x\n", + hdr->api); + return IRQ_HANDLED; + } + + switch (FIELD_GET(RM_RPC_TYPE_MASK, hdr->type)) { + case RM_RPC_TYPE_NOTIF: + gunyah_rm_process_notif(rm, msg, len); + break; + case RM_RPC_TYPE_REPLY: + gunyah_rm_process_reply(rm, msg, len); + break; + case RM_RPC_TYPE_CONTINUATION: + gunyah_rm_process_cont(rm, rm->active_rx_message, msg, + len); + break; + default: + dev_err(rm->dev, + "Invalid message type (%lu) received\n", + FIELD_GET(RM_RPC_TYPE_MASK, hdr->type)); + return IRQ_HANDLED; + } + } while (ready); + + return IRQ_HANDLED; +} + +static irqreturn_t gunyah_rm_tx(int irq, void *data) +{ + struct gunyah_rm *rm = data; + + complete_all(&rm->send_ready); + + return IRQ_HANDLED; +} + +static int gunyah_rm_msgq_send(struct gunyah_rm *rm, size_t size, bool push) + __must_hold(&rm->send_lock) +{ + const u64 tx_flags = push ? GUNYAH_HYPERCALL_MSGQ_TX_FLAGS_PUSH : 0; + enum gunyah_error gunyah_error; + void *data = &rm->send_msg[0]; + bool ready; + +again: + wait_for_completion(&rm->send_ready); + /* reinit completion before hypercall. As soon as hypercall returns, we could get the + * ready interrupt. 
This might be before we have time to reinit the completion + */ + reinit_completion(&rm->send_ready); + gunyah_error = gunyah_hypercall_msgq_send(rm->tx_ghrsc.capid, size, + data, tx_flags, &ready); + + /* Should never happen because Linux properly tracks the ready-state of the msgq */ + if (WARN_ON(gunyah_error == GUNYAH_ERROR_MSGQUEUE_FULL)) + goto again; + + if (ready) + complete_all(&rm->send_ready); + + return gunyah_error_remap(gunyah_error); +} + +static int gunyah_rm_send_request(struct gunyah_rm *rm, u32 message_id, + const void *req_buf, size_t req_buf_size, + struct gunyah_rm_message *message) +{ + size_t buf_size_remaining = req_buf_size; + const void *req_buf_curr = req_buf; + struct gunyah_rm_rpc_hdr *hdr = + (struct gunyah_rm_rpc_hdr *)&rm->send_msg[0]; + struct gunyah_rm_rpc_hdr hdr_template; + void *payload = hdr + 1; + u32 cont_fragments = 0; + size_t payload_size; + bool push; + int ret; + + if (req_buf_size > + GUNYAH_RM_MAX_NUM_FRAGMENTS * GUNYAH_RM_PAYLOAD_SIZE) { + dev_warn( + rm->dev, + "Limit (%lu bytes) exceeded for the maximum message size: %lu\n", + GUNYAH_RM_MAX_NUM_FRAGMENTS * GUNYAH_RM_PAYLOAD_SIZE, + req_buf_size); + dump_stack(); + return -E2BIG; + } + + if (req_buf_size) + cont_fragments = (req_buf_size - 1) / GUNYAH_RM_PAYLOAD_SIZE; + + hdr_template.api = RM_RPC_API; + hdr_template.type = FIELD_PREP(RM_RPC_TYPE_MASK, RM_RPC_TYPE_REQUEST) | + FIELD_PREP(RM_RPC_FRAGMENTS_MASK, cont_fragments); + hdr_template.seq = cpu_to_le16(message->reply.seq); + hdr_template.msg_id = cpu_to_le32(message_id); + + ret = mutex_lock_interruptible(&rm->send_lock); + if (ret) + return ret; + + do { + *hdr = hdr_template; + + /* Copy payload */ + payload_size = min(buf_size_remaining, GUNYAH_RM_PAYLOAD_SIZE); + memcpy(payload, req_buf_curr, payload_size); + req_buf_curr += payload_size; + buf_size_remaining -= payload_size; + + /* Only the last message should have push flag set */ + push = !buf_size_remaining; + ret = gunyah_rm_msgq_send(rm, sizeof(*hdr) + payload_size, + push); + if (ret) + break; + + hdr_template.type = + FIELD_PREP(RM_RPC_TYPE_MASK, RM_RPC_TYPE_CONTINUATION) | + FIELD_PREP(RM_RPC_FRAGMENTS_MASK, cont_fragments); + } while (buf_size_remaining); + + mutex_unlock(&rm->send_lock); + return ret; +} + +/** + * gunyah_rm_call: Achieve request-response type communication with RPC + * @rm: Pointer to Gunyah resource manager internal data + * @message_id: The RM RPC message-id + * @req_buf: Request buffer that contains the payload + * @req_buf_size: Total size of the payload + * @resp_buf: Pointer to a response buffer + * @resp_buf_size: Size of the response buffer + * + * Make a request to the Resource Manager and wait for reply back. For a successful + * response, the function returns the payload. The size of the payload is set in + * resp_buf_size. The resp_buf must be freed by the caller when 0 is returned + * and resp_buf_size != 0. + * + * req_buf should be not NULL for req_buf_size >0. If req_buf_size == 0, + * req_buf *can* be NULL and no additional payload is sent. + * + * Context: Process context. Will sleep waiting for reply. + * Return: 0 on success. <0 if error. + */ +int gunyah_rm_call(struct gunyah_rm *rm, u32 message_id, const void *req_buf, + size_t req_buf_size, void **resp_buf, size_t *resp_buf_size) +{ + struct gunyah_rm_message message = { 0 }; + u32 seq_id; + int ret; + + /* message_id 0 is reserved. 
req_buf_size implies req_buf is not NULL */ + if (!rm || !message_id || (!req_buf && req_buf_size)) + return -EINVAL; + + message.type = RM_RPC_TYPE_REPLY; + message.msg_id = message_id; + + message.reply.seq_done = + COMPLETION_INITIALIZER_ONSTACK(message.reply.seq_done); + + /* Allocate a new seq number for this message */ + ret = xa_alloc_cyclic(&rm->call_xarray, &seq_id, &message, xa_limit_16b, + &rm->next_seq, GFP_KERNEL); + if (ret < 0) + return ret; + message.reply.seq = lower_16_bits(seq_id); + + /* Send the request to the Resource Manager */ + ret = gunyah_rm_send_request(rm, message_id, req_buf, req_buf_size, + &message); + if (ret < 0) { + dev_warn(rm->dev, "Failed to send request. Error: %d\n", ret); + goto out; + } + + /* + * Wait for response. Uninterruptible because rollback based on what RM did to VM + * requires us to know how RM handled the call. + */ + wait_for_completion(&message.reply.seq_done); + + /* Check for internal (kernel) error waiting for the response */ + if (message.reply.ret) { + ret = message.reply.ret; + if (ret != -ENOMEM) + kfree(message.payload); + goto out; + } + + /* Got a response, did resource manager give us an error? */ + if (message.reply.rm_error != GUNYAH_RM_ERROR_OK) { + dev_warn(rm->dev, "RM rejected message %08x. Error: %d\n", + message_id, message.reply.rm_error); + ret = gunyah_rm_error_remap(message.reply.rm_error); + kfree(message.payload); + goto out; + } + + /* Everything looks good, return the payload */ + if (resp_buf_size) + *resp_buf_size = message.size; + + if (message.size && resp_buf) { + *resp_buf = message.payload; + } else { + /* kfree in case RM sent us multiple fragments but never any data in + * those fragments. We would've allocated memory for it, but message.size == 0 + */ + kfree(message.payload); + } + +out: + xa_erase(&rm->call_xarray, message.reply.seq); + return ret; +} + +int gunyah_rm_notifier_register(struct gunyah_rm *rm, struct notifier_block *nb) +{ + return blocking_notifier_chain_register(&rm->nh, nb); +} +EXPORT_SYMBOL_GPL(gunyah_rm_notifier_register); + +int gunyah_rm_notifier_unregister(struct gunyah_rm *rm, + struct notifier_block *nb) +{ + return blocking_notifier_chain_unregister(&rm->nh, nb); +} +EXPORT_SYMBOL_GPL(gunyah_rm_notifier_unregister); + +static int gunyah_platform_probe_capability(struct platform_device *pdev, + int idx, + struct gunyah_resource *ghrsc) +{ + int ret; + + ghrsc->irq = platform_get_irq(pdev, idx); + if (ghrsc->irq < 0) { + dev_err(&pdev->dev, "Failed to get %s irq: %d\n", + idx ? "rx" : "tx", ghrsc->irq); + return ghrsc->irq; + } + + ret = of_property_read_u64_index(pdev->dev.of_node, "reg", idx, + &ghrsc->capid); + if (ret) { + dev_err(&pdev->dev, "Failed to get %s capid: %d\n", + idx ? 
"rx" : "tx", ret); + return ret; + } + + return 0; +} + +static int gunyah_rm_probe_tx_msgq(struct gunyah_rm *rm, + struct platform_device *pdev) +{ + int ret; + + rm->tx_ghrsc.type = GUNYAH_RESOURCE_TYPE_MSGQ_TX; + ret = gunyah_platform_probe_capability(pdev, 0, &rm->tx_ghrsc); + if (ret) + return ret; + + enable_irq_wake(rm->tx_ghrsc.irq); + + return devm_request_threaded_irq(rm->dev, rm->tx_ghrsc.irq, NULL, + gunyah_rm_tx, IRQF_ONESHOT, + "gunyah_rm_tx", rm); +} + +static int gunyah_rm_probe_rx_msgq(struct gunyah_rm *rm, + struct platform_device *pdev) +{ + int ret; + + rm->rx_ghrsc.type = GUNYAH_RESOURCE_TYPE_MSGQ_RX; + ret = gunyah_platform_probe_capability(pdev, 1, &rm->rx_ghrsc); + if (ret) + return ret; + + enable_irq_wake(rm->rx_ghrsc.irq); + + return devm_request_threaded_irq(rm->dev, rm->rx_ghrsc.irq, NULL, + gunyah_rm_rx, IRQF_ONESHOT, + "gunyah_rm_rx", rm); +} + +static int gunyah_rm_probe(struct platform_device *pdev) +{ + struct gunyah_rm *rm; + int ret; + + rm = devm_kzalloc(&pdev->dev, sizeof(*rm), GFP_KERNEL); + if (!rm) + return -ENOMEM; + + platform_set_drvdata(pdev, rm); + rm->dev = &pdev->dev; + + mutex_init(&rm->send_lock); + init_completion(&rm->send_ready); + BLOCKING_INIT_NOTIFIER_HEAD(&rm->nh); + xa_init_flags(&rm->call_xarray, XA_FLAGS_ALLOC); + + ret = gunyah_rm_probe_tx_msgq(rm, pdev); + if (ret) + return ret; + /* assume RM is ready to receive messages from us */ + complete_all(&rm->send_ready); + + ret = gunyah_rm_probe_rx_msgq(rm, pdev); + if (ret) + return ret; + + return 0; +} + +static const struct of_device_id gunyah_rm_of_match[] = { + { .compatible = "gunyah-resource-manager" }, + {} +}; +MODULE_DEVICE_TABLE(of, gunyah_rm_of_match); + +static struct platform_driver gunyah_rm_driver = { + .probe = gunyah_rm_probe, + .driver = { + .name = "gunyah_rsc_mgr", + .of_match_table = gunyah_rm_of_match, + }, +}; +module_platform_driver(gunyah_rm_driver); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Gunyah Resource Manager Driver"); diff --git a/drivers/virt/gunyah/rsc_mgr.h b/drivers/virt/gunyah/rsc_mgr.h new file mode 100644 index 000000000000..21318ef25040 --- /dev/null +++ b/drivers/virt/gunyah/rsc_mgr.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ +#ifndef __GUNYAH_RSC_MGR_PRIV_H +#define __GUNYAH_RSC_MGR_PRIV_H + +#include +#include +#include + +#define GUNYAH_VMID_INVAL U16_MAX + +struct gunyah_rm; + +int gunyah_rm_notifier_register(struct gunyah_rm *rm, + struct notifier_block *nb); +int gunyah_rm_notifier_unregister(struct gunyah_rm *rm, + struct notifier_block *nb); +struct device *gunyah_rm_get(struct gunyah_rm *rm); +void gunyah_rm_put(struct gunyah_rm *rm); + + +int gunyah_rm_call(struct gunyah_rm *rsc_mgr, u32 message_id, + const void *req_buf, size_t req_buf_size, void **resp_buf, + size_t *resp_buf_size); + +#endif From patchwork Tue Jan 9 19:37:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761171 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 096D43D961; Tue, 9 Jan 2024 19:38:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="TDAArpDa" Received: from pps.filterd (m0279868.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409GaJ18010710; Tue, 9 Jan 2024 19:37:57 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=xWdZbdG7dSafWq3cabzRvyRx9NAZngj+f9YcIbeZBfY =; b=TDAArpDahUKJYT7Ji4NE1rBHgi9T9MX6/TKw57bQQmvVzjuO7uQw0AHQLOS Ru7UbgJaSCXKlurAPEVVoULnJ8dyMtzYsYRJfxwGW1YVhFS+laYSAyr1cEWK1wH7 FnWH/fBjD7+iEyaaUwUeonBaw23vg0oR0ZEkjdz57ZBxnC4Imxor1MArNGW8BoE3 i4Lx7BTIvLf/7bOvorKdKiJggh3qm0CJUl7xYCguAHA0oHGKSyUmTMnluXLj1nKd k/obzOk5EFohZVl1J4FW+JLwuFaLDd/sgPojysn7Q1gIHCKUSMZ+AbAn+ZzSZC2j 39vNCH9hRwOh7UuVwR+iRBmpwdg== Received: from nasanppmta03.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9ta0dw8-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:37:57 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA03.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409JbuQD011419 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:37:56 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:55 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:48 -0800 Subject: [PATCH v16 10/34] gunyah: vm_mgr: Add VM start/stop Precedence: bulk X-Mailing-List: devicetree@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-10-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring 
, Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: xkWEhD3cCcd4_26fH5x9GQrV4OQNq86C X-Proofpoint-ORIG-GUID: xkWEhD3cCcd4_26fH5x9GQrV4OQNq86C X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 impostorscore=0 malwarescore=0 mlxscore=0 adultscore=0 mlxlogscore=999 bulkscore=0 priorityscore=1501 lowpriorityscore=0 clxscore=1015 phishscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Add ioctl to trigger the start of a Gunyah virtual machine. Subsequent commits will provide memory to the virtual machine and add ability to interact with the resources (capabilities) of the virtual machine. Although start of the virtual machine can be done implicitly on the first vCPU run for proxy-schedule virtual machines, there is a non-trivial number of calls to Gunyah: a more precise error can be given to userspace which calls VM_START without looking at kernel logs because userspace can detect that the VM start failed instead of "couldn't run the vCPU". Co-developed-by: Prakruthi Deepak Heragu Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman --- drivers/virt/gunyah/vm_mgr.c | 198 +++++++++++++++++++++++++++++++++++++++++++ drivers/virt/gunyah/vm_mgr.h | 19 +++++ include/uapi/linux/gunyah.h | 5 ++ 3 files changed, 222 insertions(+) diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index e9dff733e35e..f6e6b5669aae 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -15,6 +15,68 @@ #include "rsc_mgr.h" #include "vm_mgr.h" +static int gunyah_vm_rm_notification_status(struct gunyah_vm *ghvm, void *data) +{ + struct gunyah_rm_vm_status_payload *payload = data; + + if (le16_to_cpu(payload->vmid) != ghvm->vmid) + return NOTIFY_OK; + + /* All other state transitions are synchronous to a corresponding RM call */ + if (payload->vm_status == GUNYAH_RM_VM_STATUS_RESET) { + down_write(&ghvm->status_lock); + ghvm->vm_status = payload->vm_status; + up_write(&ghvm->status_lock); + wake_up(&ghvm->vm_status_wait); + } + + return NOTIFY_DONE; +} + +static int gunyah_vm_rm_notification_exited(struct gunyah_vm *ghvm, void *data) +{ + struct gunyah_rm_vm_exited_payload *payload = data; + + if (le16_to_cpu(payload->vmid) != ghvm->vmid) + return NOTIFY_OK; + + down_write(&ghvm->status_lock); + ghvm->vm_status = GUNYAH_RM_VM_STATUS_EXITED; + up_write(&ghvm->status_lock); + wake_up(&ghvm->vm_status_wait); + + return NOTIFY_DONE; +} + +static int gunyah_vm_rm_notification(struct notifier_block *nb, + unsigned long action, void *data) +{ + struct gunyah_vm *ghvm = container_of(nb, struct gunyah_vm, nb); + + switch (action) { + case GUNYAH_RM_NOTIFICATION_VM_STATUS: + return gunyah_vm_rm_notification_status(ghvm, data); + case GUNYAH_RM_NOTIFICATION_VM_EXITED: + return gunyah_vm_rm_notification_exited(ghvm, data); + default: + return NOTIFY_OK; + } +} + +static void gunyah_vm_stop(struct 
gunyah_vm *ghvm) +{ + int ret; + + if (ghvm->vm_status == GUNYAH_RM_VM_STATUS_RUNNING) { + ret = gunyah_rm_vm_stop(ghvm->rm, ghvm->vmid); + if (ret) + dev_warn(ghvm->parent, "Failed to stop VM: %d\n", ret); + } + + wait_event(ghvm->vm_status_wait, + ghvm->vm_status != GUNYAH_RM_VM_STATUS_RUNNING); +} + static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) { struct gunyah_vm *ghvm; @@ -24,14 +86,148 @@ static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) return ERR_PTR(-ENOMEM); ghvm->parent = gunyah_rm_get(rm); + ghvm->vmid = GUNYAH_VMID_INVAL; ghvm->rm = rm; + init_rwsem(&ghvm->status_lock); + init_waitqueue_head(&ghvm->vm_status_wait); + ghvm->vm_status = GUNYAH_RM_VM_STATUS_NO_STATE; + return ghvm; } +static int gunyah_vm_start(struct gunyah_vm *ghvm) +{ + int ret; + + down_write(&ghvm->status_lock); + if (ghvm->vm_status != GUNYAH_RM_VM_STATUS_NO_STATE) { + up_write(&ghvm->status_lock); + return 0; + } + + ghvm->nb.notifier_call = gunyah_vm_rm_notification; + ret = gunyah_rm_notifier_register(ghvm->rm, &ghvm->nb); + if (ret) + goto err; + + ret = gunyah_rm_alloc_vmid(ghvm->rm, 0); + if (ret < 0) { + gunyah_rm_notifier_unregister(ghvm->rm, &ghvm->nb); + goto err; + } + ghvm->vmid = ret; + ghvm->vm_status = GUNYAH_RM_VM_STATUS_LOAD; + + ret = gunyah_rm_vm_configure(ghvm->rm, ghvm->vmid, ghvm->auth, 0, 0, 0, + 0, 0); + if (ret) { + dev_warn(ghvm->parent, "Failed to configure VM: %d\n", ret); + goto err; + } + + ret = gunyah_rm_vm_init(ghvm->rm, ghvm->vmid); + if (ret) { + ghvm->vm_status = GUNYAH_RM_VM_STATUS_INIT_FAILED; + dev_warn(ghvm->parent, "Failed to initialize VM: %d\n", ret); + goto err; + } + ghvm->vm_status = GUNYAH_RM_VM_STATUS_READY; + + ret = gunyah_rm_vm_start(ghvm->rm, ghvm->vmid); + if (ret) { + dev_warn(ghvm->parent, "Failed to start VM: %d\n", ret); + goto err; + } + + ghvm->vm_status = GUNYAH_RM_VM_STATUS_RUNNING; + up_write(&ghvm->status_lock); + return ret; +err: + /* gunyah_vm_free will handle releasing resources and reclaiming memory */ + up_write(&ghvm->status_lock); + return ret; +} + +static int gunyah_vm_ensure_started(struct gunyah_vm *ghvm) +{ + int ret; + + ret = down_read_interruptible(&ghvm->status_lock); + if (ret) + return ret; + + /* Unlikely because VM is typically started */ + if (unlikely(ghvm->vm_status == GUNYAH_RM_VM_STATUS_NO_STATE)) { + up_read(&ghvm->status_lock); + ret = gunyah_vm_start(ghvm); + if (ret) + return ret; + /** gunyah_vm_start() is guaranteed to bring status out of + * GUNYAH_RM_VM_STATUS_LOAD, thus infinitely recursive call is not + * possible + */ + return gunyah_vm_ensure_started(ghvm); + } + + /* Unlikely because VM is typically running */ + if (unlikely(ghvm->vm_status != GUNYAH_RM_VM_STATUS_RUNNING)) + ret = -ENODEV; + + up_read(&ghvm->status_lock); + return ret; +} + +static long gunyah_vm_ioctl(struct file *filp, unsigned int cmd, + unsigned long arg) +{ + struct gunyah_vm *ghvm = filp->private_data; + long r; + + switch (cmd) { + case GUNYAH_VM_START: { + r = gunyah_vm_ensure_started(ghvm); + break; + } + default: + r = -ENOTTY; + break; + } + + return r; +} + static int gunyah_vm_release(struct inode *inode, struct file *filp) { struct gunyah_vm *ghvm = filp->private_data; + int ret; + + /** + * We might race with a VM exit notification, but that's ok: + * gh_rm_vm_stop() will just return right away. 
+ */ + if (ghvm->vm_status == GUNYAH_RM_VM_STATUS_RUNNING) + gunyah_vm_stop(ghvm); + + if (ghvm->vm_status != GUNYAH_RM_VM_STATUS_NO_STATE && + ghvm->vm_status != GUNYAH_RM_VM_STATUS_LOAD && + ghvm->vm_status != GUNYAH_RM_VM_STATUS_RESET) { + ret = gunyah_rm_vm_reset(ghvm->rm, ghvm->vmid); + if (ret) + dev_err(ghvm->parent, "Failed to reset the vm: %d\n", + ret); + wait_event(ghvm->vm_status_wait, + ghvm->vm_status == GUNYAH_RM_VM_STATUS_RESET); + } + + if (ghvm->vm_status > GUNYAH_RM_VM_STATUS_NO_STATE) { + gunyah_rm_notifier_unregister(ghvm->rm, &ghvm->nb); + + ret = gunyah_rm_dealloc_vmid(ghvm->rm, ghvm->vmid); + if (ret) + dev_warn(ghvm->parent, + "Failed to deallocate vmid: %d\n", ret); + } gunyah_rm_put(ghvm->rm); kfree(ghvm); @@ -40,6 +236,8 @@ static int gunyah_vm_release(struct inode *inode, struct file *filp) static const struct file_operations gunyah_vm_fops = { .owner = THIS_MODULE, + .unlocked_ioctl = gunyah_vm_ioctl, + .compat_ioctl = compat_ptr_ioctl, .release = gunyah_vm_release, .llseek = noop_llseek, }; diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index 50790d402676..e6cc9aead0b6 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -7,6 +7,8 @@ #define _GUNYAH_VM_MGR_PRIV_H #include +#include +#include #include @@ -17,12 +19,29 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, /** * struct gunyah_vm - Main representation of a Gunyah Virtual machine + * @vmid: Gunyah's VMID for this virtual machine * @rm: Pointer to the resource manager struct to make RM calls * @parent: For logging + * @nb: Notifier block for RM notifications + * @vm_status: Current state of the VM, as last reported by RM + * @vm_status_wait: Wait queue for status @vm_status changes + * @status_lock: Serializing state transitions + * @auth: Authentication mechanism to be used by resource manager when + * launching the VM + * + * Members are grouped by hot path. 
*/ struct gunyah_vm { + u16 vmid; struct gunyah_rm *rm; + + struct notifier_block nb; + enum gunyah_rm_vm_status vm_status; + wait_queue_head_t vm_status_wait; + struct rw_semaphore status_lock; + struct device *parent; + enum gunyah_rm_vm_auth_mechanism auth; }; #endif diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index ac338ec4b85d..31e7f79a6c39 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -20,4 +20,9 @@ */ #define GUNYAH_CREATE_VM _IO(GUNYAH_IOCTL_TYPE, 0x0) /* Returns a Gunyah VM fd */ +/* + * ioctls for gunyah-vm fds (returned by GUNYAH_CREATE_VM) + */ +#define GUNYAH_VM_START _IO(GUNYAH_IOCTL_TYPE, 0x3) + #endif From patchwork Tue Jan 9 19:37:50 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761169 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 662A83EA60; Tue, 9 Jan 2024 19:38:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="bnceqlAr" Received: from pps.filterd (m0279871.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409IeRwr005755; Tue, 9 Jan 2024 19:37:58 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=4qiqaM4jJd8zGHPZYjmgu0l1JblGJ6sPIsRdN2U0BMc =; b=bnceqlAru+3cNJwcaW3SUuwQZe81XenMVLM7ojxK3lszLcFa1xlkT9yQg6f wg/5uHBN6NJspj68vcNUgNeKW//jAVUQOh9ca6OTkR3M6ESp2mixPzDovwar7YXE 3I5jV+1sRTXyFL8Hv0Nx4DV+yVmo6IIeg3eTEub2W5RZQnJ0pv6rEbCmP4Pj8+1I DoiPWetPG+dUQUnTmi4CgMuepnHvjOTjvhkcAdo26tZUg1B9DoRikGB2Mig9C8Cb ln4fpxDbwd9cg/8Em4p19BmwLo7tOJz346lmRnWzdFIZGNEiXmDTS+2VyXhxTfBq zCyOppkUcOSdqKVdIloy47IzoDw== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh98m8hqt-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:37:58 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jbv3C030413 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:37:57 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:56 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:50 -0800 Subject: [PATCH v16 12/34] virt: gunyah: Add resource tickets Precedence: bulk X-Mailing-List: devicetree@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-12-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , 
Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: -_URcr59O8AG3f6gWqPnDYkWBNleK7zF X-Proofpoint-GUID: -_URcr59O8AG3f6gWqPnDYkWBNleK7zF X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0 impostorscore=0 spamscore=0 mlxscore=0 priorityscore=1501 phishscore=0 malwarescore=0 mlxlogscore=999 suspectscore=0 clxscore=1015 bulkscore=0 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Some VM functions need to acquire Gunyah resources. For instance, Gunyah vCPUs are exposed to the host as a resource. The Gunyah vCPU function will register a resource ticket and be able to interact with the hypervisor once the resource ticket is filled. Resource tickets are the mechanism for functions to acquire ownership of Gunyah resources. Gunyah functions can be created before the VM's resources are created and made available to Linux. A resource ticket identifies a type of resource and a label of a resource which the ticket holder is interested in. Resources are created by Gunyah as configured in the VM's devicetree configuration. Gunyah doesn't process the label and that makes it possible for userspace to create multiple resources with the same label. Resource ticket owners need to be prepared for populate to be called multiple times if userspace created multiple resources with the same label. 
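(Not part of this patch: a minimal sketch of how a ticket owner might use this interface. The wrapper struct, callback names, and the type/label values below are made up for illustration; the real consumers are the vCPU and other VM function patches later in this series.)

#include <linux/gunyah.h>
#include <linux/module.h>

struct demo_owner {
	struct gunyah_vm_resource_ticket ticket;
	struct gunyah_resource *ghrsc;
};

static bool demo_populate(struct gunyah_vm_resource_ticket *ticket,
			  struct gunyah_resource *ghrsc)
{
	struct demo_owner *owner = container_of(ticket, struct demo_owner, ticket);

	/* May be called more than once if several resources carry the same
	 * type/label; claim only the first one.
	 */
	if (owner->ghrsc)
		return false;
	owner->ghrsc = ghrsc;
	return true;
}

static void demo_unpopulate(struct gunyah_vm_resource_ticket *ticket,
			    struct gunyah_resource *ghrsc)
{
	struct demo_owner *owner = container_of(ticket, struct demo_owner, ticket);

	/* Stop using the resource; it may be freed once this returns. */
	owner->ghrsc = NULL;
}

static int demo_bind(struct gunyah_vm *ghvm, struct demo_owner *owner)
{
	/* Type and label are placeholders here; they must match what the
	 * resource manager reports for the resource of interest.
	 */
	owner->ticket.resource_type = GUNYAH_RESOURCE_TYPE_MEM_EXTENT;
	owner->ticket.label = 0;
	owner->ticket.owner = THIS_MODULE;
	owner->ticket.populate = demo_populate;
	owner->ticket.unpopulate = demo_unpopulate;

	return gunyah_vm_add_resource_ticket(ghvm, &owner->ticket);
}

The matching teardown is gunyah_vm_remove_resource_ticket(ghvm, &owner->ticket), which calls unpopulate() for every resource the ticket currently holds before releasing the module reference.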
Reviewed-by: Alex Elder Signed-off-by: Elliot Berman --- drivers/virt/gunyah/vm_mgr.c | 128 ++++++++++++++++++++++++++++++++++++++++++- drivers/virt/gunyah/vm_mgr.h | 7 +++ include/linux/gunyah.h | 39 +++++++++++++ 3 files changed, 173 insertions(+), 1 deletion(-) diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index f6e6b5669aae..65badcf6357b 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -15,6 +15,106 @@ #include "rsc_mgr.h" #include "vm_mgr.h" +int gunyah_vm_add_resource_ticket(struct gunyah_vm *ghvm, + struct gunyah_vm_resource_ticket *ticket) +{ + struct gunyah_vm_resource_ticket *iter; + struct gunyah_resource *ghrsc, *rsc_iter; + int ret = 0; + + mutex_lock(&ghvm->resources_lock); + list_for_each_entry(iter, &ghvm->resource_tickets, vm_list) { + if (iter->resource_type == ticket->resource_type && + iter->label == ticket->label) { + ret = -EEXIST; + goto out; + } + } + + if (!try_module_get(ticket->owner)) { + ret = -ENODEV; + goto out; + } + + list_add(&ticket->vm_list, &ghvm->resource_tickets); + INIT_LIST_HEAD(&ticket->resources); + + list_for_each_entry_safe(ghrsc, rsc_iter, &ghvm->resources, list) { + if (ghrsc->type == ticket->resource_type && + ghrsc->rm_label == ticket->label) { + if (ticket->populate(ticket, ghrsc)) + list_move(&ghrsc->list, &ticket->resources); + } + } +out: + mutex_unlock(&ghvm->resources_lock); + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_vm_add_resource_ticket); + +void gunyah_vm_remove_resource_ticket(struct gunyah_vm *ghvm, + struct gunyah_vm_resource_ticket *ticket) +{ + struct gunyah_resource *ghrsc, *iter; + + mutex_lock(&ghvm->resources_lock); + list_for_each_entry_safe(ghrsc, iter, &ticket->resources, list) { + ticket->unpopulate(ticket, ghrsc); + list_move(&ghrsc->list, &ghvm->resources); + } + + module_put(ticket->owner); + list_del(&ticket->vm_list); + mutex_unlock(&ghvm->resources_lock); +} +EXPORT_SYMBOL_GPL(gunyah_vm_remove_resource_ticket); + +static void gunyah_vm_add_resource(struct gunyah_vm *ghvm, + struct gunyah_resource *ghrsc) +{ + struct gunyah_vm_resource_ticket *ticket; + + mutex_lock(&ghvm->resources_lock); + list_for_each_entry(ticket, &ghvm->resource_tickets, vm_list) { + if (ghrsc->type == ticket->resource_type && + ghrsc->rm_label == ticket->label) { + if (ticket->populate(ticket, ghrsc)) + list_add(&ghrsc->list, &ticket->resources); + else + list_add(&ghrsc->list, &ghvm->resources); + /* unconditonal -- we prevent multiple identical + * resource tickets so there will not be some other + * ticket elsewhere in the list if populate() failed. 
+ */ + goto found; + } + } + list_add(&ghrsc->list, &ghvm->resources); +found: + mutex_unlock(&ghvm->resources_lock); +} + +static void gunyah_vm_clean_resources(struct gunyah_vm *ghvm) +{ + struct gunyah_vm_resource_ticket *ticket, *titer; + struct gunyah_resource *ghrsc, *riter; + + mutex_lock(&ghvm->resources_lock); + if (!list_empty(&ghvm->resource_tickets)) { + dev_warn(ghvm->parent, "Dangling resource tickets:\n"); + list_for_each_entry_safe(ticket, titer, &ghvm->resource_tickets, + vm_list) { + dev_warn(ghvm->parent, " %pS\n", ticket->populate); + gunyah_vm_remove_resource_ticket(ghvm, ticket); + } + } + + list_for_each_entry_safe(ghrsc, riter, &ghvm->resources, list) { + gunyah_rm_free_resource(ghrsc); + } + mutex_unlock(&ghvm->resources_lock); +} + static int gunyah_vm_rm_notification_status(struct gunyah_vm *ghvm, void *data) { struct gunyah_rm_vm_status_payload *payload = data; @@ -92,13 +192,18 @@ static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) init_rwsem(&ghvm->status_lock); init_waitqueue_head(&ghvm->vm_status_wait); ghvm->vm_status = GUNYAH_RM_VM_STATUS_NO_STATE; + mutex_init(&ghvm->resources_lock); + INIT_LIST_HEAD(&ghvm->resources); + INIT_LIST_HEAD(&ghvm->resource_tickets); return ghvm; } static int gunyah_vm_start(struct gunyah_vm *ghvm) { - int ret; + struct gunyah_rm_hyp_resources *resources; + struct gunyah_resource *ghrsc; + int ret, i, n; down_write(&ghvm->status_lock); if (ghvm->vm_status != GUNYAH_RM_VM_STATUS_NO_STATE) { @@ -134,6 +239,25 @@ static int gunyah_vm_start(struct gunyah_vm *ghvm) } ghvm->vm_status = GUNYAH_RM_VM_STATUS_READY; + ret = gunyah_rm_get_hyp_resources(ghvm->rm, ghvm->vmid, &resources); + if (ret) { + dev_warn(ghvm->parent, + "Failed to get hypervisor resources for VM: %d\n", + ret); + goto err; + } + + for (i = 0, n = le32_to_cpu(resources->n_entries); i < n; i++) { + ghrsc = gunyah_rm_alloc_resource(ghvm->rm, + &resources->entries[i]); + if (!ghrsc) { + ret = -ENOMEM; + goto err; + } + + gunyah_vm_add_resource(ghvm, ghrsc); + } + ret = gunyah_rm_vm_start(ghvm->rm, ghvm->vmid); if (ret) { dev_warn(ghvm->parent, "Failed to start VM: %d\n", ret); @@ -209,6 +333,8 @@ static int gunyah_vm_release(struct inode *inode, struct file *filp) if (ghvm->vm_status == GUNYAH_RM_VM_STATUS_RUNNING) gunyah_vm_stop(ghvm); + gunyah_vm_clean_resources(ghvm); + if (ghvm->vm_status != GUNYAH_RM_VM_STATUS_NO_STATE && ghvm->vm_status != GUNYAH_RM_VM_STATUS_LOAD && ghvm->vm_status != GUNYAH_RM_VM_STATUS_RESET) { diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index e6cc9aead0b6..0d291f722885 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -26,6 +26,9 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, * @vm_status: Current state of the VM, as last reported by RM * @vm_status_wait: Wait queue for status @vm_status changes * @status_lock: Serializing state transitions + * @resource_lock: Serializing addition of resources and resource tickets + * @resources: List of &struct gunyah_resource that are associated with this VM + * @resource_tickets: List of &struct gunyah_vm_resource_ticket * @auth: Authentication mechanism to be used by resource manager when * launching the VM * @@ -39,9 +42,13 @@ struct gunyah_vm { enum gunyah_rm_vm_status vm_status; wait_queue_head_t vm_status_wait; struct rw_semaphore status_lock; + struct mutex resources_lock; + struct list_head resources; + struct list_head resource_tickets; struct device *parent; enum gunyah_rm_vm_auth_mechanism 
auth; + }; #endif diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index ede8abb1b276..001769100260 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -10,6 +10,7 @@ #include #include #include +#include #include /* Matches resource manager's resource types for VM_GET_HYP_RESOURCES RPC */ @@ -34,6 +35,44 @@ struct gunyah_resource { u32 rm_label; }; +struct gunyah_vm; + +/** + * struct gunyah_vm_resource_ticket - Represents a ticket to reserve access to VM resource(s) + * @vm_list: for @gunyah_vm->resource_tickets + * @resources: List of resource(s) associated with this ticket + * (members are from @gunyah_resource->list) + * @resource_type: Type of resource this ticket reserves + * @label: Label of the resource from resource manager this ticket reserves. + * @owner: owner of the ticket + * @populate: callback provided by the ticket owner and called when a resource is found that + * matches @resource_type and @label. Note that this callback could be called + * multiple times if userspace created mutliple resources with the same type/label. + * This callback may also have significant delay after gunyah_vm_add_resource_ticket() + * since gunyah_vm_add_resource_ticket() could be called before the VM starts. + * @unpopulate: callback provided by the ticket owner and called when the ticket owner should no + * longer use the resource provided in the argument. When unpopulate() returns, + * the ticket owner should not be able to use the resource any more as the resource + * might being freed. + */ +struct gunyah_vm_resource_ticket { + struct list_head vm_list; + struct list_head resources; + enum gunyah_resource_type resource_type; + u32 label; + + struct module *owner; + bool (*populate)(struct gunyah_vm_resource_ticket *ticket, + struct gunyah_resource *ghrsc); + void (*unpopulate)(struct gunyah_vm_resource_ticket *ticket, + struct gunyah_resource *ghrsc); +}; + +int gunyah_vm_add_resource_ticket(struct gunyah_vm *ghvm, + struct gunyah_vm_resource_ticket *ticket); +void gunyah_vm_remove_resource_ticket(struct gunyah_vm *ghvm, + struct gunyah_vm_resource_ticket *ticket); + /******************************************************************************/ /* Common arch-independent definitions for Gunyah hypercalls */ #define GUNYAH_CAPID_INVAL U64_MAX From patchwork Tue Jan 9 19:37:51 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761168 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D73843EA7B; Tue, 9 Jan 2024 19:38:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="fpc1hZF6" Received: from pps.filterd (m0279866.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409HbTnp007926; Tue, 9 Jan 2024 19:37:59 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=mor3PjSL9nlkSdArAusagsgYfSKA55S60vleqZR3Ge0 =; 
b=fpc1hZF6yiMDJBOD7UHjRhR+fqNfWYWI+PRt7PCYWjvF2fRlscjwF0+xipF 9dKTFaJrvjdSk5TECmgfhcAKVk5i+iuyWcIzeHuNlpvntdv2mB6/Ecv1PUkxRbbk nkkfnQAC8zHjRcEg9Kq0iMj9W0YEoGG0IHLG3SBtsEDetMokRo5jV8bcgjrRar2Q vGWUhqK8HMX5+AZUwy9PK7B/hR4IJGnuqCSj2UlexJUwMUpEpSykfyVJZyPSBLx1 pCFcdZuPJukAZgASN3L6AWiFvxxq52gpzswnEm4kum3k9uMZ7kM4QlLWWrJb+O+9 04An+SJWdU0ancjlK7zfS/re9jg== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9evrfhb-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:37:58 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jbwao030416 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:37:58 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:57 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:51 -0800 Subject: [PATCH v16 13/34] gunyah: vm_mgr: Add framework for VM Functions Precedence: bulk X-Mailing-List: devicetree@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-13-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: W6LFEG2ACudCKEUd5w0l_1futlHRQVbP X-Proofpoint-ORIG-GUID: W6LFEG2ACudCKEUd5w0l_1futlHRQVbP X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 bulkscore=0 mlxlogscore=999 lowpriorityscore=0 mlxscore=0 priorityscore=1501 adultscore=0 suspectscore=0 clxscore=1015 spamscore=0 phishscore=0 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Introduce a framework for Gunyah userspace to install VM functions. VM functions are optional interfaces to the virtual machine. vCPUs, ioeventfs, and irqfds are examples of such VM functions and are implemented in subsequent patches. A generic framework is implemented instead of individual ioctls to create vCPUs, irqfds, etc., in order to simplify the VM manager core implementation and allow dynamic loading of VM function modules. 
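(Not part of this patch: a rough skeleton of what a function-type module built on this framework could look like. The argument layout, callback bodies, and the type value 42 are invented for illustration; the real examples are the vCPU, ioeventfd, and irqfd functions added in later patches.)

#include <linux/gunyah.h>
#include <linux/module.h>
#include <linux/types.h>

struct demo_fn_arg {			/* layout agreed with userspace */
	__u32 label;
};

static long demo_fn_bind(struct gunyah_vm_function_instance *f)
{
	struct demo_fn_arg *arg = f->argp;

	if (f->arg_size != sizeof(*arg))
		return -EINVAL;

	/* Allocate per-instance state, register resource tickets against
	 * f->ghvm, etc., and stash it in f->data.
	 */
	f->data = NULL;
	return 0;
}

static void demo_fn_unbind(struct gunyah_vm_function_instance *f)
{
	/* Undo whatever bind() set up. */
}

static bool demo_fn_compare(const struct gunyah_vm_function_instance *f,
			    const void *arg, size_t size)
{
	const struct demo_fn_arg *a = arg, *b = f->argp;

	return size == sizeof(*a) && a->label == b->label;
}

/* 42 stands in for a value of enum gunyah_fn_type; the _type/_idx pair is
 * checked for equality at build time by MODULE_ALIAS_GUNYAH_VM_FUNCTION().
 */
DECLARE_GUNYAH_VM_FUNCTION_INIT(demo_fn, 42, 42, demo_fn_bind,
				demo_fn_unbind, demo_fn_compare);
MODULE_DESCRIPTION("Example Gunyah VM function");
MODULE_LICENSE("GPL");

The module alias generated by the macro ("ghfunc:42") is what allows the core to request_module() the right provider on demand when userspace issues GUNYAH_VM_ADD_FUNCTION for a type that has not been registered yet.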
Signed-off-by: Elliot Berman --- drivers/virt/gunyah/vm_mgr.c | 207 ++++++++++++++++++++++++++++++++++++++++++- drivers/virt/gunyah/vm_mgr.h | 10 +++ include/linux/gunyah.h | 87 +++++++++++++++++- include/uapi/linux/gunyah.h | 18 ++++ 4 files changed, 318 insertions(+), 4 deletions(-) diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index 65badcf6357b..5d4f413f7a76 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -6,15 +6,175 @@ #define pr_fmt(fmt) "gunyah_vm_mgr: " fmt #include +#include #include #include #include +#include #include #include "rsc_mgr.h" #include "vm_mgr.h" +static DEFINE_XARRAY(gunyah_vm_functions); + +static void gunyah_vm_put_function(struct gunyah_vm_function *fn) +{ + module_put(fn->mod); +} + +static struct gunyah_vm_function *gunyah_vm_get_function(u32 type) +{ + struct gunyah_vm_function *fn; + + fn = xa_load(&gunyah_vm_functions, type); + if (!fn) { + request_module("ghfunc:%d", type); + + fn = xa_load(&gunyah_vm_functions, type); + } + + if (!fn || !try_module_get(fn->mod)) + fn = ERR_PTR(-ENOENT); + + return fn; +} + +static void +gunyah_vm_remove_function_instance(struct gunyah_vm_function_instance *inst) + __must_hold(&inst->ghvm->fn_lock) +{ + inst->fn->unbind(inst); + list_del(&inst->vm_list); + gunyah_vm_put_function(inst->fn); + kfree(inst->argp); + kfree(inst); +} + +static void gunyah_vm_remove_functions(struct gunyah_vm *ghvm) +{ + struct gunyah_vm_function_instance *inst, *iiter; + + mutex_lock(&ghvm->fn_lock); + list_for_each_entry_safe(inst, iiter, &ghvm->functions, vm_list) { + gunyah_vm_remove_function_instance(inst); + } + mutex_unlock(&ghvm->fn_lock); +} + +static long gunyah_vm_add_function_instance(struct gunyah_vm *ghvm, + struct gunyah_fn_desc *f) +{ + struct gunyah_vm_function_instance *inst; + void __user *argp; + long r = 0; + + if (f->arg_size > GUNYAH_FN_MAX_ARG_SIZE) { + dev_err_ratelimited(ghvm->parent, "%s: arg_size > %d\n", + __func__, GUNYAH_FN_MAX_ARG_SIZE); + return -EINVAL; + } + + inst = kzalloc(sizeof(*inst), GFP_KERNEL); + if (!inst) + return -ENOMEM; + + inst->arg_size = f->arg_size; + if (inst->arg_size) { + inst->argp = kzalloc(inst->arg_size, GFP_KERNEL); + if (!inst->argp) { + r = -ENOMEM; + goto free; + } + + argp = u64_to_user_ptr(f->arg); + if (copy_from_user(inst->argp, argp, f->arg_size)) { + r = -EFAULT; + goto free_arg; + } + } + + inst->fn = gunyah_vm_get_function(f->type); + if (IS_ERR(inst->fn)) { + r = PTR_ERR(inst->fn); + goto free_arg; + } + + inst->ghvm = ghvm; + inst->rm = ghvm->rm; + + mutex_lock(&ghvm->fn_lock); + r = inst->fn->bind(inst); + if (r < 0) { + mutex_unlock(&ghvm->fn_lock); + gunyah_vm_put_function(inst->fn); + goto free_arg; + } + + list_add(&inst->vm_list, &ghvm->functions); + mutex_unlock(&ghvm->fn_lock); + + return r; +free_arg: + kfree(inst->argp); +free: + kfree(inst); + return r; +} + +static long gunyah_vm_rm_function_instance(struct gunyah_vm *ghvm, + struct gunyah_fn_desc *f) +{ + struct gunyah_vm_function_instance *inst, *iter; + void __user *user_argp; + void *argp __free(kfree) = NULL; + long r = 0; + + if (f->arg_size) { + argp = kzalloc(f->arg_size, GFP_KERNEL); + if (!argp) + return -ENOMEM; + + user_argp = u64_to_user_ptr(f->arg); + if (copy_from_user(argp, user_argp, f->arg_size)) + return -EFAULT; + } + + r = mutex_lock_interruptible(&ghvm->fn_lock); + if (r) + return r; + + r = -ENOENT; + list_for_each_entry_safe(inst, iter, &ghvm->functions, vm_list) { + if (inst->fn->type == f->type && + inst->fn->compare(inst, argp, 
f->arg_size)) { + gunyah_vm_remove_function_instance(inst); + r = 0; + } + } + + mutex_unlock(&ghvm->fn_lock); + return r; +} + +int gunyah_vm_function_register(struct gunyah_vm_function *fn) +{ + if (!fn->bind || !fn->unbind) + return -EINVAL; + + return xa_err(xa_store(&gunyah_vm_functions, fn->type, fn, GFP_KERNEL)); +} +EXPORT_SYMBOL_GPL(gunyah_vm_function_register); + +void gunyah_vm_function_unregister(struct gunyah_vm_function *fn) +{ + /* Expecting unregister to only come when unloading a module */ + WARN_ON(fn->mod && module_refcount(fn->mod)); + xa_erase(&gunyah_vm_functions, fn->type); +} +EXPORT_SYMBOL_GPL(gunyah_vm_function_unregister); + int gunyah_vm_add_resource_ticket(struct gunyah_vm *ghvm, struct gunyah_vm_resource_ticket *ticket) { @@ -191,7 +351,11 @@ static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) init_rwsem(&ghvm->status_lock); init_waitqueue_head(&ghvm->vm_status_wait); + kref_init(&ghvm->kref); ghvm->vm_status = GUNYAH_RM_VM_STATUS_NO_STATE; + + INIT_LIST_HEAD(&ghvm->functions); + mutex_init(&ghvm->fn_lock); mutex_init(&ghvm->resources_lock); INIT_LIST_HEAD(&ghvm->resources); INIT_LIST_HEAD(&ghvm->resource_tickets); @@ -306,6 +470,7 @@ static long gunyah_vm_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) { struct gunyah_vm *ghvm = filp->private_data; + void __user *argp = (void __user *)arg; long r; switch (cmd) { @@ -313,6 +478,24 @@ static long gunyah_vm_ioctl(struct file *filp, unsigned int cmd, r = gunyah_vm_ensure_started(ghvm); break; } + case GUNYAH_VM_ADD_FUNCTION: { + struct gunyah_fn_desc f; + + if (copy_from_user(&f, argp, sizeof(f))) + return -EFAULT; + + r = gunyah_vm_add_function_instance(ghvm, &f); + break; + } + case GUNYAH_VM_REMOVE_FUNCTION: { + struct gunyah_fn_desc f; + + if (copy_from_user(&f, argp, sizeof(f))) + return -EFAULT; + + r = gunyah_vm_rm_function_instance(ghvm, &f); + break; + } default: r = -ENOTTY; break; @@ -321,9 +504,15 @@ static long gunyah_vm_ioctl(struct file *filp, unsigned int cmd, return r; } -static int gunyah_vm_release(struct inode *inode, struct file *filp) +int __must_check gunyah_vm_get(struct gunyah_vm *ghvm) { - struct gunyah_vm *ghvm = filp->private_data; + return kref_get_unless_zero(&ghvm->kref); +} +EXPORT_SYMBOL_GPL(gunyah_vm_get); + +static void _gunyah_vm_put(struct kref *kref) +{ + struct gunyah_vm *ghvm = container_of(kref, struct gunyah_vm, kref); int ret; /** @@ -333,6 +522,7 @@ static int gunyah_vm_release(struct inode *inode, struct file *filp) if (ghvm->vm_status == GUNYAH_RM_VM_STATUS_RUNNING) gunyah_vm_stop(ghvm); + gunyah_vm_remove_functions(ghvm); gunyah_vm_clean_resources(ghvm); if (ghvm->vm_status != GUNYAH_RM_VM_STATUS_NO_STATE && @@ -357,6 +547,19 @@ static int gunyah_vm_release(struct inode *inode, struct file *filp) gunyah_rm_put(ghvm->rm); kfree(ghvm); +} + +void gunyah_vm_put(struct gunyah_vm *ghvm) +{ + kref_put(&ghvm->kref, _gunyah_vm_put); +} +EXPORT_SYMBOL_GPL(gunyah_vm_put); + +static int gunyah_vm_release(struct inode *inode, struct file *filp) +{ + struct gunyah_vm *ghvm = filp->private_data; + + gunyah_vm_put(ghvm); return 0; } diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index 0d291f722885..190a95ee8da6 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -7,6 +7,8 @@ #define _GUNYAH_VM_MGR_PRIV_H #include +#include +#include #include #include @@ -26,6 +28,10 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, * @vm_status: Current state of the VM, as last reported 
by RM * @vm_status_wait: Wait queue for status @vm_status changes * @status_lock: Serializing state transitions + * @kref: Reference counter for VM functions + * @fn_lock: Serialization addition of functions + * @functions: List of &struct gunyah_vm_function_instance that have been + * created by user for this VM. * @resource_lock: Serializing addition of resources and resource tickets * @resources: List of &struct gunyah_resource that are associated with this VM * @resource_tickets: List of &struct gunyah_vm_resource_ticket @@ -42,6 +48,10 @@ struct gunyah_vm { enum gunyah_rm_vm_status vm_status; wait_queue_head_t vm_status_wait; struct rw_semaphore status_lock; + + struct kref kref; + struct mutex fn_lock; + struct list_head functions; struct mutex resources_lock; struct list_head resources; struct list_head resource_tickets; diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index 001769100260..359cd63b4938 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -11,8 +11,93 @@ #include #include #include +#include #include +#include + +struct gunyah_vm; + +int __must_check gunyah_vm_get(struct gunyah_vm *ghvm); +void gunyah_vm_put(struct gunyah_vm *ghvm); + +struct gunyah_vm_function_instance; +/** + * struct gunyah_vm_function - Represents a function type + * @type: value from &enum gunyah_fn_type + * @name: friendly name for debug purposes + * @mod: owner of the function type + * @bind: Called when a new function of this type has been allocated. + * @unbind: Called when the function instance is being destroyed. + * @compare: Compare function instance @f's argument to the provided arg. + * Return true if they are equivalent. Used on GUNYAH_VM_REMOVE_FUNCTION. + */ +struct gunyah_vm_function { + u32 type; + const char *name; + struct module *mod; + long (*bind)(struct gunyah_vm_function_instance *f); + void (*unbind)(struct gunyah_vm_function_instance *f); + bool (*compare)(const struct gunyah_vm_function_instance *f, + const void *arg, size_t size); +}; + +/** + * struct gunyah_vm_function_instance - Represents one function instance + * @arg_size: size of user argument + * @argp: pointer to user argument + * @ghvm: Pointer to VM instance + * @rm: Pointer to resource manager for the VM instance + * @fn: The ops for the function + * @data: Private data for function + * @vm_list: for gunyah_vm's functions list + * @fn_list: for gunyah_vm_function's instances list + */ +struct gunyah_vm_function_instance { + size_t arg_size; + void *argp; + struct gunyah_vm *ghvm; + struct gunyah_rm *rm; + struct gunyah_vm_function *fn; + void *data; + struct list_head vm_list; +}; + +int gunyah_vm_function_register(struct gunyah_vm_function *f); +void gunyah_vm_function_unregister(struct gunyah_vm_function *f); + +/* Since the function identifiers were setup in a uapi header as an + * enum and we do no want to change that, the user must supply the expanded + * constant as well and the compiler checks they are the same. + * See also MODULE_ALIAS_RDMA_NETLINK. 
+ */ +#define MODULE_ALIAS_GUNYAH_VM_FUNCTION(_type, _idx) \ + static inline void __maybe_unused __chk##_idx(void) \ + { \ + BUILD_BUG_ON(_type != _idx); \ + } \ + MODULE_ALIAS("ghfunc:" __stringify(_idx)) + +#define DECLARE_GUNYAH_VM_FUNCTION(_name, _type, _bind, _unbind, _compare) \ + static struct gunyah_vm_function _name = { \ + .type = _type, \ + .name = __stringify(_name), \ + .mod = THIS_MODULE, \ + .bind = _bind, \ + .unbind = _unbind, \ + .compare = _compare, \ + } + +#define module_gunyah_vm_function(__gf) \ + module_driver(__gf, gunyah_vm_function_register, \ + gunyah_vm_function_unregister) + +#define DECLARE_GUNYAH_VM_FUNCTION_INIT(_name, _type, _idx, _bind, _unbind, \ + _compare) \ + DECLARE_GUNYAH_VM_FUNCTION(_name, _type, _bind, _unbind, _compare); \ + module_gunyah_vm_function(_name); \ + MODULE_ALIAS_GUNYAH_VM_FUNCTION(_type, _idx) + /* Matches resource manager's resource types for VM_GET_HYP_RESOURCES RPC */ enum gunyah_resource_type { /* clang-format off */ @@ -35,8 +120,6 @@ struct gunyah_resource { u32 rm_label; }; -struct gunyah_vm; - /** * struct gunyah_vm_resource_ticket - Represents a ticket to reserve access to VM resource(s) * @vm_list: for @gunyah_vm->resource_tickets diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index 31e7f79a6c39..1b7cb5fde70a 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -25,4 +25,22 @@ */ #define GUNYAH_VM_START _IO(GUNYAH_IOCTL_TYPE, 0x3) +#define GUNYAH_FN_MAX_ARG_SIZE 256 + +/** + * struct gunyah_fn_desc - Arguments to create a VM function + * @type: Type of the function. See &enum gunyah_fn_type. + * @arg_size: Size of argument to pass to the function. arg_size <= GUNYAH_FN_MAX_ARG_SIZE + * @arg: Pointer to argument given to the function. See &enum gunyah_fn_type for expected + * arguments for a function type. 
+ */ +struct gunyah_fn_desc { + __u32 type; + __u32 arg_size; + __u64 arg; +}; + +#define GUNYAH_VM_ADD_FUNCTION _IOW(GUNYAH_IOCTL_TYPE, 0x4, struct gunyah_fn_desc) +#define GUNYAH_VM_REMOVE_FUNCTION _IOW(GUNYAH_IOCTL_TYPE, 0x7, struct gunyah_fn_desc) + #endif From patchwork Tue Jan 9 19:37:55 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761166 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 25425405C5; Tue, 9 Jan 2024 19:38:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="Zo9chhnS" Received: from pps.filterd (m0279865.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409J7Wtr027734; Tue, 9 Jan 2024 19:38:01 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=ot2Rc6FWYqawtwAbAQxff+CJFZOjsl6S/t9tlgViSq4 =; b=Zo9chhnSm40/QbbTZ9T3vvSYbbmWNVbM7fMd5RB8IB6vR3+ZlIfKAGCoG6C 0uHKIVHrZ11CGbnUyXPmlnzXP7CjmEXgwvg34yHZaAgHa/At3sBI5Uq04qz33RKD jL9t4elpnYHd6+aiMR/ceFosZpLnMv2HFCH4LTuy1KfN6Kn4JQJIfc9ti3RLZB9B PCA9PBZXnKj0htcVLnfGOG3pMvxR8rbD4Jn/jENhB6khhqwsPbwWAgJiUmxfABlG 26zb18mXIeU/+xyYTSxbI68ZPe4/xLRRteAFR+eUWcslZtELzNUHiSSO049HSgjD 5Ot61cw21zAB9X/D7KBEbOLMNjQ== Received: from nasanppmta02.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9vfgdrp-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:01 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc0T0011951 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:00 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:59 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:55 -0800 Subject: [PATCH v16 17/34] gunyah: rsc_mgr: Add memory parcel RPC Precedence: bulk X-Mailing-List: devicetree@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-17-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com 
(10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: yuWyby_01twjbm-t5tFkMTfKW7_PGkc_ X-Proofpoint-ORIG-GUID: yuWyby_01twjbm-t5tFkMTfKW7_PGkc_ X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 lowpriorityscore=0 suspectscore=0 clxscore=1015 priorityscore=1501 impostorscore=0 adultscore=0 bulkscore=0 mlxlogscore=999 spamscore=0 phishscore=0 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 In a Gunyah hypervisor system using the Gunyah Resource Manager, the "standard" unit of donating, lending and sharing memory is called a memory parcel (memparcel). A memparcel is an abstraction used by the resource manager for securely managing donating, lending and sharing memory, which may be physically and virtually fragmented, without dealing directly with physical memory addresses. Memparcels are created and managed through the RM RPC functions for lending, sharing and reclaiming memory from VMs. When creating a new VM the initial VM memory containing the VM image and the VM's device tree blob must be provided as a memparcel. The memparcel must be created using the RM RPC for lending and mapping the memory to the VM. Signed-off-by: Elliot Berman --- drivers/virt/gunyah/rsc_mgr.h | 9 ++ drivers/virt/gunyah/rsc_mgr_rpc.c | 231 ++++++++++++++++++++++++++++++++++++++ include/linux/gunyah.h | 43 +++++++ 3 files changed, 283 insertions(+) diff --git a/drivers/virt/gunyah/rsc_mgr.h b/drivers/virt/gunyah/rsc_mgr.h index 52711de77bb7..ec8ad8149e8e 100644 --- a/drivers/virt/gunyah/rsc_mgr.h +++ b/drivers/virt/gunyah/rsc_mgr.h @@ -10,6 +10,7 @@ #include #define GUNYAH_VMID_INVAL U16_MAX +#define GUNYAH_MEM_HANDLE_INVAL U32_MAX struct gunyah_rm; @@ -58,6 +59,12 @@ struct gunyah_rm_vm_status_payload { __le16 app_status; } __packed; +/* RPC Calls */ +int gunyah_rm_mem_share(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *parcel); +int gunyah_rm_mem_reclaim(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *parcel); + int gunyah_rm_alloc_vmid(struct gunyah_rm *rm, u16 vmid); int gunyah_rm_dealloc_vmid(struct gunyah_rm *rm, u16 vmid); int gunyah_rm_vm_reset(struct gunyah_rm *rm, u16 vmid); @@ -99,6 +106,8 @@ struct gunyah_rm_hyp_resources { int gunyah_rm_get_hyp_resources(struct gunyah_rm *rm, u16 vmid, struct gunyah_rm_hyp_resources **resources); +int gunyah_rm_get_vmid(struct gunyah_rm *rm, u16 *vmid); + struct gunyah_resource * gunyah_rm_alloc_resource(struct gunyah_rm *rm, struct gunyah_rm_hyp_resource *hyp_resource); diff --git a/drivers/virt/gunyah/rsc_mgr_rpc.c b/drivers/virt/gunyah/rsc_mgr_rpc.c index 141ce0145e91..bc44bde990ce 100644 --- a/drivers/virt/gunyah/rsc_mgr_rpc.c +++ b/drivers/virt/gunyah/rsc_mgr_rpc.c @@ -5,6 +5,12 @@ #include "rsc_mgr.h" +/* Message IDs: Memory Management */ +#define GUNYAH_RM_RPC_MEM_LEND 0x51000012 +#define GUNYAH_RM_RPC_MEM_SHARE 0x51000013 +#define GUNYAH_RM_RPC_MEM_RECLAIM 0x51000015 +#define GUNYAH_RM_RPC_MEM_APPEND 0x51000018 + /* Message IDs: VM Management */ /* clang-format off */ #define GUNYAH_RM_RPC_VM_ALLOC_VMID 0x56000001 @@ -15,6 +21,7 @@ #define GUNYAH_RM_RPC_VM_CONFIG_IMAGE 0x56000009 #define GUNYAH_RM_RPC_VM_INIT 0x5600000B #define GUNYAH_RM_RPC_VM_GET_HYP_RESOURCES 0x56000020 +#define 
GUNYAH_RM_RPC_VM_GET_VMID 0x56000024 /* clang-format on */ struct gunyah_rm_vm_common_vmid_req { @@ -22,6 +29,48 @@ struct gunyah_rm_vm_common_vmid_req { __le16 _padding; } __packed; +/* Call: MEM_LEND, MEM_SHARE */ +#define GUNYAH_RM_MAX_MEM_ENTRIES 512 + +#define GUNYAH_MEM_SHARE_REQ_FLAGS_APPEND BIT(1) + +struct gunyah_rm_mem_share_req_header { + u8 mem_type; + u8 _padding0; + u8 flags; + u8 _padding1; + __le32 label; +} __packed; + +struct gunyah_rm_mem_share_req_acl_section { + __le32 n_entries; + struct gunyah_rm_mem_acl_entry entries[]; +} __packed; + +struct gunyah_rm_mem_share_req_mem_section { + __le16 n_entries; + __le16 _padding; + struct gunyah_rm_mem_entry entries[]; +} __packed; + +/* Call: MEM_RELEASE */ +struct gunyah_rm_mem_release_req { + __le32 mem_handle; + u8 flags; /* currently not used */ + u8 _padding0; + __le16 _padding1; +} __packed; + +/* Call: MEM_APPEND */ +#define GUNYAH_MEM_APPEND_REQ_FLAGS_END BIT(0) + +struct gunyah_rm_mem_append_req_header { + __le32 mem_handle; + u8 flags; + u8 _padding0; + __le16 _padding1; +} __packed; + /* Call: VM_ALLOC */ struct gunyah_rm_vm_alloc_vmid_resp { __le16 vmid; @@ -66,6 +115,159 @@ static int gunyah_rm_common_vmid_call(struct gunyah_rm *rm, u32 message_id, NULL, NULL); } +static int gunyah_rm_mem_append(struct gunyah_rm *rm, u32 mem_handle, + struct gunyah_rm_mem_entry *entries, + size_t n_entries) +{ + struct gunyah_rm_mem_append_req_header *req __free(kfree) = NULL; + struct gunyah_rm_mem_share_req_mem_section *mem; + int ret = 0; + size_t n; + + req = kzalloc(sizeof(*req) + struct_size(mem, entries, GUNYAH_RM_MAX_MEM_ENTRIES), + GFP_KERNEL); + if (!req) + return -ENOMEM; + + req->mem_handle = cpu_to_le32(mem_handle); + mem = (void *)(req + 1); + + while (n_entries) { + req->flags = 0; + if (n_entries > GUNYAH_RM_MAX_MEM_ENTRIES) { + n = GUNYAH_RM_MAX_MEM_ENTRIES; + } else { + req->flags |= GUNYAH_MEM_APPEND_REQ_FLAGS_END; + n = n_entries; + } + + mem->n_entries = cpu_to_le16(n); + memcpy(mem->entries, entries, sizeof(*entries) * n); + + ret = gunyah_rm_call(rm, GUNYAH_RM_RPC_MEM_APPEND, req, + sizeof(*req) + struct_size(mem, entries, n), + NULL, NULL); + if (ret) + break; + + entries += n; + n_entries -= n; + } + + return ret; +} + +/** + * gunyah_rm_mem_share() - Share memory with other virtual machines. + * @rm: Handle to a Gunyah resource manager + * @p: Information about the memory to be shared. + * + * Sharing keeps Linux's access to the memory while the memory parcel is shared. + */ +int gunyah_rm_mem_share(struct gunyah_rm *rm, struct gunyah_rm_mem_parcel *p) +{ + u32 message_id = p->n_acl_entries == 1 ? 
GUNYAH_RM_RPC_MEM_LEND : + GUNYAH_RM_RPC_MEM_SHARE; + size_t msg_size, initial_mem_entries = p->n_mem_entries, resp_size; + struct gunyah_rm_mem_share_req_acl_section *acl; + struct gunyah_rm_mem_share_req_mem_section *mem; + struct gunyah_rm_mem_share_req_header *req_header; + size_t acl_size, mem_size; + u32 *attr_section; + bool need_append = false; + __le32 *resp; + void *msg; + int ret; + + if (!p->acl_entries || !p->n_acl_entries || !p->mem_entries || + !p->n_mem_entries || p->n_acl_entries > U8_MAX || + p->mem_handle != GUNYAH_MEM_HANDLE_INVAL) + return -EINVAL; + + if (initial_mem_entries > GUNYAH_RM_MAX_MEM_ENTRIES) { + initial_mem_entries = GUNYAH_RM_MAX_MEM_ENTRIES; + need_append = true; + } + + acl_size = struct_size(acl, entries, p->n_acl_entries); + mem_size = struct_size(mem, entries, initial_mem_entries); + + /* The format of the message goes: + * request header + * ACL entries (which VMs get what kind of access to this memory parcel) + * Memory entries (list of memory regions to share) + * Memory attributes (currently unused, we'll hard-code the size to 0) + */ + msg_size = sizeof(struct gunyah_rm_mem_share_req_header) + acl_size + + mem_size + + sizeof(u32); /* for memory attributes, currently unused */ + + msg = kzalloc(msg_size, GFP_KERNEL); + if (!msg) + return -ENOMEM; + + req_header = msg; + acl = (void *)req_header + sizeof(*req_header); + mem = (void *)acl + acl_size; + attr_section = (void *)mem + mem_size; + + req_header->mem_type = p->mem_type; + if (need_append) + req_header->flags |= GUNYAH_MEM_SHARE_REQ_FLAGS_APPEND; + req_header->label = cpu_to_le32(p->label); + + acl->n_entries = cpu_to_le32(p->n_acl_entries); + memcpy(acl->entries, p->acl_entries, + flex_array_size(acl, entries, p->n_acl_entries)); + + mem->n_entries = cpu_to_le16(initial_mem_entries); + memcpy(mem->entries, p->mem_entries, + flex_array_size(mem, entries, initial_mem_entries)); + + /* Set n_entries for memory attribute section to 0 */ + *attr_section = 0; + + ret = gunyah_rm_call(rm, message_id, msg, msg_size, (void **)&resp, + &resp_size); + kfree(msg); + + if (ret) + return ret; + + p->mem_handle = le32_to_cpu(*resp); + kfree(resp); + + if (need_append) { + ret = gunyah_rm_mem_append( + rm, p->mem_handle, &p->mem_entries[initial_mem_entries], + p->n_mem_entries - initial_mem_entries); + if (ret) { + gunyah_rm_mem_reclaim(rm, p); + p->mem_handle = GUNYAH_MEM_HANDLE_INVAL; + } + } + + return ret; +} + +/** + * gunyah_rm_mem_reclaim() - Reclaim a memory parcel + * @rm: Handle to a Gunyah resource manager + * @parcel: Information about the memory to be reclaimed. + * + * RM maps the associated memory back into the stage-2 page tables of the owner VM. + */ +int gunyah_rm_mem_reclaim(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *parcel) +{ + struct gunyah_rm_mem_release_req req = { + .mem_handle = cpu_to_le32(parcel->mem_handle), + }; + + return gunyah_rm_call(rm, GUNYAH_RM_RPC_MEM_RECLAIM, &req, sizeof(req), + NULL, NULL); +} + /** * gunyah_rm_alloc_vmid() - Allocate a new VM in Gunyah. Returns the VM identifier. 
* @rm: Handle to a Gunyah resource manager @@ -236,3 +438,32 @@ int gunyah_rm_get_hyp_resources(struct gunyah_rm *rm, u16 vmid, *resources = resp; return 0; } + +/** + * gunyah_rm_get_vmid() - Retrieve VMID of this virtual machine + * @rm: Handle to a Gunyah resource manager + * @vmid: Filled with the VMID of this VM + */ +int gunyah_rm_get_vmid(struct gunyah_rm *rm, u16 *vmid) +{ + static u16 cached_vmid = GUNYAH_VMID_INVAL; + size_t resp_size; + __le32 *resp; + int ret; + + if (cached_vmid != GUNYAH_VMID_INVAL) { + *vmid = cached_vmid; + return 0; + } + + ret = gunyah_rm_call(rm, GUNYAH_RM_RPC_VM_GET_VMID, NULL, 0, + (void **)&resp, &resp_size); + if (ret) + return ret; + + *vmid = cached_vmid = lower_16_bits(le32_to_cpu(*resp)); + kfree(resp); + + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_rm_get_vmid); diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index a517c5c33a75..9065f5758c39 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -156,6 +156,49 @@ int gunyah_vm_add_resource_ticket(struct gunyah_vm *ghvm, void gunyah_vm_remove_resource_ticket(struct gunyah_vm *ghvm, struct gunyah_vm_resource_ticket *ticket); +#define GUNYAH_RM_ACL_X BIT(0) +#define GUNYAH_RM_ACL_W BIT(1) +#define GUNYAH_RM_ACL_R BIT(2) + +struct gunyah_rm_mem_acl_entry { + __le16 vmid; + u8 perms; + u8 reserved; +} __packed; + +struct gunyah_rm_mem_entry { + __le64 phys_addr; + __le64 size; +} __packed; + +enum gunyah_rm_mem_type { + GUNYAH_RM_MEM_TYPE_NORMAL = 0, + GUNYAH_RM_MEM_TYPE_IO = 1, +}; + +/* + * struct gunyah_rm_mem_parcel - Info about memory to be lent/shared/donated/reclaimed + * @mem_type: The type of memory: normal (DDR) or IO + * @label: An client-specified identifier which can be used by the other VMs to identify the purpose + * of the memory parcel. + * @n_acl_entries: Count of the number of entries in the @acl_entries array. + * @acl_entries: An array of access control entries. Each entry specifies a VM and what access + * is allowed for the memory parcel. + * @n_mem_entries: Count of the number of entries in the @mem_entries array. + * @mem_entries: An array of regions to be associated with the memory parcel. Addresses should be + * (intermediate) physical addresses from Linux's perspective. 
+ * @mem_handle: On success, filled with memory handle that RM allocates for this memory parcel + */ +struct gunyah_rm_mem_parcel { + enum gunyah_rm_mem_type mem_type; + u32 label; + size_t n_acl_entries; + struct gunyah_rm_mem_acl_entry *acl_entries; + size_t n_mem_entries; + struct gunyah_rm_mem_entry *mem_entries; + u32 mem_handle; +}; + /******************************************************************************/ /* Common arch-independent definitions for Gunyah hypercalls */ #define GUNYAH_CAPID_INVAL U64_MAX From patchwork Tue Jan 9 19:37:56 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761164 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2E5D640BF3; Tue, 9 Jan 2024 19:38:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="kvkp5i7S" Received: from pps.filterd (m0279862.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409FdruS011442; Tue, 9 Jan 2024 19:38:02 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=mCvTvHnDAvi/Z3r/fbIcWBXtbKnYIFgrJK9nQ4SRZtQ =; b=kvkp5i7ShIhFPfNEBWrulkEmXtEQ+gwLJ/1+msn+NXbqJ5ouKzdjF6fxpq1 ZJaGNz9sV8hh3IAOe4Y16AWq6r+peU2NATbyLQZZh8pSudmPOLCu05eWwABKzTMd vBuTcd4vrXtvDF7+mbTV7XT1CbUGgda+nfyLLbtwdxZlLANUnrQpSRU80Iu5V8W2 /dqd2bUeBASYSTQD/ZDxV2U5ul/PNMaOvkJF+AF2HkRotc8R8frIA+VM5QHLEJDc Ac8w0h80BDHN/bK10KN14A5SeNJ8bB1I3w+qIfE6Yg07mIz10VXz7y8MsTqAb09l W41CNyMhqnpPb7VkVHUtr8bi/Uw== Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh3me1858-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:02 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc1X8024580 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:01 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:00 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:56 -0800 Subject: [PATCH v16 18/34] virt: gunyah: Add interfaces to map memory into guest address space Precedence: bulk X-Mailing-List: devicetree@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-18-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , 
Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: JhWjMLDtmjxLfi0BZgG3YWzgoCejQ_Wn X-Proofpoint-GUID: JhWjMLDtmjxLfi0BZgG3YWzgoCejQ_Wn X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 spamscore=0 priorityscore=1501 mlxlogscore=999 lowpriorityscore=0 mlxscore=0 suspectscore=0 impostorscore=0 clxscore=1015 adultscore=0 malwarescore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Gunyah virtual machines are created with either all memory provided at VM creation using the Resource Manager memory parcel construct, or Incrementally by enabling VM demand paging. The Gunyah demand paging support is provided directly by the hypervisor and does not require the creation of resource manager memory parcels. Demand paging allows the host to map/unmap contiguous pages (folios) to a Gunyah memory extent object with the correct rights allowing its contained pages to be mapped into the Guest VM's address space. Memory extents are Gunyah's mechanism for handling system memory abstracting from the direct use of physical page numbers. Memory extents are hypervisor objects and are therefore referenced and access controlled with capabilities. When a virtual machine is configured for demand paging, 3 memory extent and 1 address space capabilities are provided to the host. The resource manager defined policy is such that memory in the "host-only" extent (the default) is private to the host. Memory in the "guest-only" extent can be used for guest private mappings, and are unmapped from the host. Memory in the "host-and-guest-shared" extent can be mapped concurrently and shared between the host and guest VMs. Implement two functions which Linux can use to move memory between the virtual machines: gunyah_provide_folio and gunyah_reclaim_folio. Memory that has been provided to the guest is tracked in a maple tree to be reclaimed later. Folios provided to the virtual machine are assumed to be owned Gunyah stack: the folio's ->private field is used for bookkeeping about whether page is mapped into virtual machine. 
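(Not part of this patch: a minimal sketch of handing a single page to the guest with the interface added here. Error handling is pared down, and any folio locking or ->private bookkeeping expected of real callers is omitted; the guest memory patches later in the series are the authoritative users.)

#include <linux/gfp.h>
#include <linux/mm.h>

#include "vm_mgr.h"

static int demo_provide_one_page(struct gunyah_vm *ghvm, u64 gpa)
{
	struct folio *folio;
	int ret;

	/* A single zero-order (PAGE_SIZE) folio; larger folios follow the
	 * same path.
	 */
	folio = folio_alloc(GFP_KERNEL | __GFP_ZERO, 0);
	if (!folio)
		return -ENOMEM;

	/* Lend the page to the guest as private, writable memory.
	 * share=true would go through the host/guest "shared" extents
	 * instead, so the host keeps access to the page.
	 */
	ret = gunyah_vm_provide_folio(ghvm, folio, gunyah_gpa_to_gfn(gpa),
				      false /* share */, true /* write */);
	if (ret)
		folio_put(folio);

	return ret;
}

Reclaiming is the mirror operation: gunyah_vm_reclaim_folio(ghvm, gfn), or gunyah_vm_reclaim_range() for a span of guest frame numbers, takes pages back from the guest; this is also what the VM release path in this patch relies on when it calls gunyah_vm_reclaim_range(ghvm, 0, U64_MAX).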
Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Makefile | 2 +- drivers/virt/gunyah/vm_mgr.c | 67 +++++++++ drivers/virt/gunyah/vm_mgr.h | 46 ++++++ drivers/virt/gunyah/vm_mgr_mem.c | 309 +++++++++++++++++++++++++++++++++++++++ 4 files changed, 423 insertions(+), 1 deletion(-) diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index 3f82af8c5ce7..f3c9507224ee 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -1,5 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 -gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o +gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o vm_mgr_mem.o obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o gunyah_vcpu.o diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index db3d1d18ccb8..26b6dce49970 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -17,6 +17,16 @@ #include "rsc_mgr.h" #include "vm_mgr.h" +#define GUNYAH_VM_ADDRSPACE_LABEL 0 +// "To" extent for memory private to guest +#define GUNYAH_VM_MEM_EXTENT_GUEST_PRIVATE_LABEL 0 +// "From" extent for memory shared with guest +#define GUNYAH_VM_MEM_EXTENT_HOST_SHARED_LABEL 1 +// "To" extent for memory shared with the guest +#define GUNYAH_VM_MEM_EXTENT_GUEST_SHARED_LABEL 3 +// "From" extent for memory private to guest +#define GUNYAH_VM_MEM_EXTENT_HOST_PRIVATE_LABEL 2 + static DEFINE_XARRAY(gunyah_vm_functions); static void gunyah_vm_put_function(struct gunyah_vm_function *fn) @@ -175,6 +185,16 @@ void gunyah_vm_function_unregister(struct gunyah_vm_function *fn) } EXPORT_SYMBOL_GPL(gunyah_vm_function_unregister); +static bool gunyah_vm_resource_ticket_populate_noop( + struct gunyah_vm_resource_ticket *ticket, struct gunyah_resource *ghrsc) +{ + return true; +} +static void gunyah_vm_resource_ticket_unpopulate_noop( + struct gunyah_vm_resource_ticket *ticket, struct gunyah_resource *ghrsc) +{ +} + int gunyah_vm_add_resource_ticket(struct gunyah_vm *ghvm, struct gunyah_vm_resource_ticket *ticket) { @@ -342,6 +362,17 @@ static void gunyah_vm_stop(struct gunyah_vm *ghvm) ghvm->vm_status != GUNYAH_RM_VM_STATUS_RUNNING); } +static inline void setup_extent_ticket(struct gunyah_vm *ghvm, + struct gunyah_vm_resource_ticket *ticket, + u32 label) +{ + ticket->resource_type = GUNYAH_RESOURCE_TYPE_MEM_EXTENT; + ticket->label = label; + ticket->populate = gunyah_vm_resource_ticket_populate_noop; + ticket->unpopulate = gunyah_vm_resource_ticket_unpopulate_noop; + gunyah_vm_add_resource_ticket(ghvm, ticket); +} + static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) { struct gunyah_vm *ghvm; @@ -365,6 +396,25 @@ static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) INIT_LIST_HEAD(&ghvm->resources); INIT_LIST_HEAD(&ghvm->resource_tickets); + mt_init(&ghvm->mm); + + ghvm->addrspace_ticket.resource_type = GUNYAH_RESOURCE_TYPE_ADDR_SPACE; + ghvm->addrspace_ticket.label = GUNYAH_VM_ADDRSPACE_LABEL; + ghvm->addrspace_ticket.populate = + gunyah_vm_resource_ticket_populate_noop; + ghvm->addrspace_ticket.unpopulate = + gunyah_vm_resource_ticket_unpopulate_noop; + gunyah_vm_add_resource_ticket(ghvm, &ghvm->addrspace_ticket); + + setup_extent_ticket(ghvm, &ghvm->host_private_extent_ticket, + GUNYAH_VM_MEM_EXTENT_HOST_PRIVATE_LABEL); + setup_extent_ticket(ghvm, &ghvm->host_shared_extent_ticket, + GUNYAH_VM_MEM_EXTENT_HOST_SHARED_LABEL); + setup_extent_ticket(ghvm, &ghvm->guest_private_extent_ticket, + GUNYAH_VM_MEM_EXTENT_GUEST_PRIVATE_LABEL); + setup_extent_ticket(ghvm, 
&ghvm->guest_shared_extent_ticket, + GUNYAH_VM_MEM_EXTENT_GUEST_SHARED_LABEL); + return ghvm; } @@ -528,6 +578,21 @@ static void _gunyah_vm_put(struct kref *kref) gunyah_vm_stop(ghvm); gunyah_vm_remove_functions(ghvm); + + /** + * If this fails, we're going to lose the memory for good; that is + * BUG_ON-worthy but not fatal (the kernel just leaks the memory). + * This call should always succeed though, because the VM is not + * running and RM will let us reclaim all the memory. + */ + WARN_ON(gunyah_vm_reclaim_range(ghvm, 0, U64_MAX)); + + gunyah_vm_remove_resource_ticket(ghvm, &ghvm->addrspace_ticket); + gunyah_vm_remove_resource_ticket(ghvm, &ghvm->host_shared_extent_ticket); + gunyah_vm_remove_resource_ticket(ghvm, &ghvm->host_private_extent_ticket); + gunyah_vm_remove_resource_ticket(ghvm, &ghvm->guest_shared_extent_ticket); + gunyah_vm_remove_resource_ticket(ghvm, &ghvm->guest_private_extent_ticket); + gunyah_vm_clean_resources(ghvm); if (ghvm->vm_status != GUNYAH_RM_VM_STATUS_NO_STATE && @@ -541,6 +606,8 @@ static void _gunyah_vm_put(struct kref *kref) ghvm->vm_status == GUNYAH_RM_VM_STATUS_RESET); } + mtree_destroy(&ghvm->mm); + if (ghvm->vm_status > GUNYAH_RM_VM_STATUS_NO_STATE) { gunyah_rm_notifier_unregister(ghvm->rm, &ghvm->nb); diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index 8c5b94101b2c..e500f6eb014e 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -8,6 +8,7 @@ #include #include +#include #include #include #include @@ -16,12 +17,42 @@ #include "rsc_mgr.h" +static inline u64 gunyah_gpa_to_gfn(u64 gpa) +{ + return gpa >> PAGE_SHIFT; +} + +static inline u64 gunyah_gfn_to_gpa(u64 gfn) +{ + return gfn << PAGE_SHIFT; +} + long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, unsigned long arg); /** * struct gunyah_vm - Main representation of a Gunyah Virtual machine * @vmid: Gunyah's VMID for this virtual machine + * @mm: A maple tree of all memory that has been mapped to a VM. + * Indices are guest frame numbers; entries are either folios or + * RM mem parcels + * @addrspace_ticket: Resource ticket to the capability for guest VM's + * address space + * @host_private_extent_ticket: Resource ticket to the capability for our + * memory extent from which to lend private + * memory to the guest + * @host_shared_extent_ticket: Resource ticket to the capability for our + * memory extent from which to share memory + * with the guest. Distinction with + * @host_private_extent_ticket needed for + * current Qualcomm platforms; on non-Qualcomm + * platforms, this is the same capability ID + * @guest_private_extent_ticket: Resource ticket to the capability for + * the guest's memory extent to lend private + * memory to + * @guest_shared_extent_ticket: Resource ticket to the capability for + * the memory extent that represents + * memory shared with the guest.
* @rm: Pointer to the resource manager struct to make RM calls * @parent: For logging * @nb: Notifier block for RM notifications @@ -43,6 +74,11 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, */ struct gunyah_vm { u16 vmid; + struct maple_tree mm; + struct gunyah_vm_resource_ticket addrspace_ticket, + host_private_extent_ticket, host_shared_extent_ticket, + guest_private_extent_ticket, guest_shared_extent_ticket; + struct gunyah_rm *rm; struct notifier_block nb; @@ -63,4 +99,14 @@ struct gunyah_vm { }; +int gunyah_vm_parcel_to_paged(struct gunyah_vm *ghvm, + struct gunyah_rm_mem_parcel *parcel, u64 gfn, + u64 nr); +int gunyah_vm_reclaim_parcel(struct gunyah_vm *ghvm, + struct gunyah_rm_mem_parcel *parcel, u64 gfn); +int gunyah_vm_provide_folio(struct gunyah_vm *ghvm, struct folio *folio, + u64 gfn, bool share, bool write); +int gunyah_vm_reclaim_folio(struct gunyah_vm *ghvm, u64 gfn); +int gunyah_vm_reclaim_range(struct gunyah_vm *ghvm, u64 gfn, u64 nr); + #endif diff --git a/drivers/virt/gunyah/vm_mgr_mem.c b/drivers/virt/gunyah/vm_mgr_mem.c new file mode 100644 index 000000000000..d3fcb4514907 --- /dev/null +++ b/drivers/virt/gunyah/vm_mgr_mem.c @@ -0,0 +1,309 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2023-2024 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#define pr_fmt(fmt) "gunyah_vm_mgr: " fmt + +#include +#include +#include + +#include "vm_mgr.h" + +#define WRITE_TAG (1 << 0) +#define SHARE_TAG (1 << 1) + +static inline struct gunyah_resource * +__first_resource(struct gunyah_vm_resource_ticket *ticket) +{ + return list_first_entry_or_null(&ticket->resources, + struct gunyah_resource, list); +} + +int gunyah_vm_parcel_to_paged(struct gunyah_vm *ghvm, + struct gunyah_rm_mem_parcel *parcel, u64 gfn, + u64 nr) +{ + struct gunyah_rm_mem_entry *entry; + unsigned long i, entry_size, tag = 0; + struct folio *folio; + pgoff_t off = 0; + int ret; + + if (parcel->n_acl_entries > 1) + tag |= SHARE_TAG; + if (parcel->acl_entries[0].perms & GUNYAH_RM_ACL_W) + tag |= WRITE_TAG; + + for (i = 0; i < parcel->n_mem_entries; i++) { + entry = &parcel->mem_entries[i]; + entry_size = PHYS_PFN(le64_to_cpu(entry->size)); + + folio = pfn_folio(PHYS_PFN(le64_to_cpu(entry->phys_addr))); + ret = mtree_insert_range(&ghvm->mm, gfn + off, gfn + off + folio_nr_pages(folio) - 1, xa_tag_pointer(folio, tag), GFP_KERNEL); + if (ret == -ENOMEM) + return ret; + BUG_ON(ret); + off += folio_nr_pages(folio); + } + + BUG_ON(off != nr); + + return 0; +} + +static inline u32 donate_flags(bool share) +{ + if (share) + return FIELD_PREP_CONST(GUNYAH_MEMEXTENT_OPTION_TYPE_MASK, + GUNYAH_MEMEXTENT_DONATE_TO_SIBLING); + else + return FIELD_PREP_CONST(GUNYAH_MEMEXTENT_OPTION_TYPE_MASK, + GUNYAH_MEMEXTENT_DONATE_TO_PROTECTED); +} + +static inline u32 reclaim_flags(bool share) +{ + if (share) + return FIELD_PREP_CONST(GUNYAH_MEMEXTENT_OPTION_TYPE_MASK, + GUNYAH_MEMEXTENT_DONATE_TO_SIBLING); + else + return FIELD_PREP_CONST(GUNYAH_MEMEXTENT_OPTION_TYPE_MASK, + GUNYAH_MEMEXTENT_DONATE_FROM_PROTECTED); +} + +int gunyah_vm_provide_folio(struct gunyah_vm *ghvm, struct folio *folio, + u64 gfn, bool share, bool write) +{ + struct gunyah_resource *guest_extent, *host_extent, *addrspace; + u32 map_flags = BIT(GUNYAH_ADDRSPACE_MAP_FLAG_PARTIAL); + u64 extent_attrs, gpa = gunyah_gfn_to_gpa(gfn); + phys_addr_t pa = PFN_PHYS(folio_pfn(folio)); + enum gunyah_pagetable_access access; + size_t size = folio_size(folio); + enum gunyah_error gunyah_error; + unsigned long tag = 0; + int 
ret; + + /* clang-format off */ + if (share) { + guest_extent = __first_resource(&ghvm->guest_shared_extent_ticket); + host_extent = __first_resource(&ghvm->host_shared_extent_ticket); + } else { + guest_extent = __first_resource(&ghvm->guest_private_extent_ticket); + host_extent = __first_resource(&ghvm->host_private_extent_ticket); + } + /* clang-format on */ + addrspace = __first_resource(&ghvm->addrspace_ticket); + + if (!addrspace || !guest_extent || !host_extent) + return -ENODEV; + + if (share) { + map_flags |= BIT(GUNYAH_ADDRSPACE_MAP_FLAG_VMMIO); + tag |= SHARE_TAG; + } else { + map_flags |= BIT(GUNYAH_ADDRSPACE_MAP_FLAG_PRIVATE); + } + + if (write) + tag |= WRITE_TAG; + + ret = mtree_insert_range(&ghvm->mm, gfn, + gfn + folio_nr_pages(folio) - 1, + xa_tag_pointer(folio, tag), GFP_KERNEL); + if (ret == -EEXIST) + return -EAGAIN; + if (ret) + return ret; + + if (share && write) + access = GUNYAH_PAGETABLE_ACCESS_RW; + else if (share && !write) + access = GUNYAH_PAGETABLE_ACCESS_R; + else if (!share && write) + access = GUNYAH_PAGETABLE_ACCESS_RWX; + else /* !share && !write */ + access = GUNYAH_PAGETABLE_ACCESS_RX; + + gunyah_error = gunyah_hypercall_memextent_donate(donate_flags(share), + host_extent->capid, + guest_extent->capid, + pa, size); + if (gunyah_error != GUNYAH_ERROR_OK) { + pr_err("Failed to donate memory for guest address 0x%016llx: %d\n", + gpa, gunyah_error); + ret = gunyah_error_remap(gunyah_error); + goto remove; + } + + extent_attrs = + FIELD_PREP_CONST(GUNYAH_MEMEXTENT_MAPPING_TYPE, + ARCH_GUNYAH_DEFAULT_MEMTYPE) | + FIELD_PREP(GUNYAH_MEMEXTENT_MAPPING_USER_ACCESS, access) | + FIELD_PREP(GUNYAH_MEMEXTENT_MAPPING_KERNEL_ACCESS, access); + gunyah_error = gunyah_hypercall_addrspace_map(addrspace->capid, + guest_extent->capid, gpa, + extent_attrs, map_flags, + pa, size); + if (gunyah_error != GUNYAH_ERROR_OK) { + pr_err("Failed to map guest address 0x%016llx: %d\n", gpa, + gunyah_error); + ret = gunyah_error_remap(gunyah_error); + goto memextent_reclaim; + } + + folio_get(folio); + if (!share) + folio_set_private(folio); + return 0; +memextent_reclaim: + gunyah_error = gunyah_hypercall_memextent_donate(reclaim_flags(share), + guest_extent->capid, + host_extent->capid, pa, + size); + if (gunyah_error != GUNYAH_ERROR_OK) + pr_err("Failed to reclaim memory donation for guest address 0x%016llx: %d\n", + gpa, gunyah_error); +remove: + mtree_erase(&ghvm->mm, gfn); + return ret; +} + +static int __gunyah_vm_reclaim_folio_locked(struct gunyah_vm *ghvm, void *entry, + u64 gfn, const bool sync) +{ + u32 map_flags = BIT(GUNYAH_ADDRSPACE_MAP_FLAG_PARTIAL); + struct gunyah_resource *guest_extent, *host_extent, *addrspace; + enum gunyah_pagetable_access access; + enum gunyah_error gunyah_error; + struct folio *folio; + bool write, share; + phys_addr_t pa; + size_t size; + int ret; + + addrspace = __first_resource(&ghvm->addrspace_ticket); + if (!addrspace) + return -ENODEV; + + share = !!(xa_pointer_tag(entry) & SHARE_TAG); + write = !!(xa_pointer_tag(entry) & WRITE_TAG); + folio = xa_untag_pointer(entry); + + if (!sync) + map_flags |= BIT(GUNYAH_ADDRSPACE_MAP_FLAG_NOSYNC); + + /* clang-format off */ + if (share) { + guest_extent = __first_resource(&ghvm->guest_shared_extent_ticket); + host_extent = __first_resource(&ghvm->host_shared_extent_ticket); + map_flags |= BIT(GUNYAH_ADDRSPACE_MAP_FLAG_VMMIO); + } else { + guest_extent = __first_resource(&ghvm->guest_private_extent_ticket); + host_extent = __first_resource(&ghvm->host_private_extent_ticket); + map_flags |= 
BIT(GUNYAH_ADDRSPACE_MAP_FLAG_PRIVATE); + } + /* clang-format on */ + + pa = PFN_PHYS(folio_pfn(folio)); + size = folio_size(folio); + + gunyah_error = gunyah_hypercall_addrspace_unmap(addrspace->capid, + guest_extent->capid, + gunyah_gfn_to_gpa(gfn), + map_flags, pa, size); + if (gunyah_error != GUNYAH_ERROR_OK) { + pr_err_ratelimited( + "Failed to unmap guest address 0x%016llx: %d\n", + gunyah_gfn_to_gpa(gfn), gunyah_error); + ret = gunyah_error_remap(gunyah_error); + goto err; + } + + gunyah_error = gunyah_hypercall_memextent_donate(reclaim_flags(share), + guest_extent->capid, + host_extent->capid, pa, + size); + if (gunyah_error != GUNYAH_ERROR_OK) { + pr_err_ratelimited( + "Failed to reclaim memory donation for guest address 0x%016llx: %d\n", + gunyah_gfn_to_gpa(gfn), gunyah_error); + ret = gunyah_error_remap(gunyah_error); + goto err; + } + + if (share && write) + access = GUNYAH_PAGETABLE_ACCESS_RW; + else if (share && !write) + access = GUNYAH_PAGETABLE_ACCESS_R; + else if (!share && write) + access = GUNYAH_PAGETABLE_ACCESS_RWX; + else /* !share && !write */ + access = GUNYAH_PAGETABLE_ACCESS_RX; + + gunyah_error = gunyah_hypercall_memextent_donate(donate_flags(share), + guest_extent->capid, + host_extent->capid, pa, + size); + if (gunyah_error != GUNYAH_ERROR_OK) { + pr_err("Failed to reclaim memory donation for guest address 0x%016llx: %d\n", + gfn << PAGE_SHIFT, gunyah_error); + ret = gunyah_error_remap(gunyah_error); + goto err; + } + + BUG_ON(mtree_erase(&ghvm->mm, gfn) != entry); + + folio_clear_private(folio); + folio_put(folio); + return 0; +err: + return ret; +} + +int gunyah_vm_reclaim_folio(struct gunyah_vm *ghvm, u64 gfn) +{ + struct folio *folio; + void *entry; + + entry = mtree_load(&ghvm->mm, gfn); + if (!entry) + return 0; + + folio = xa_untag_pointer(entry); + if (mtree_load(&ghvm->mm, gfn) != entry) + return -EAGAIN; + + return __gunyah_vm_reclaim_folio_locked(ghvm, entry, gfn, true); +} + +int gunyah_vm_reclaim_range(struct gunyah_vm *ghvm, u64 gfn, u64 nr) +{ + unsigned long next = gfn, g; + struct folio *folio; + int ret, ret2 = 0; + void *entry; + bool sync; + + mt_for_each(&ghvm->mm, entry, next, gfn + nr) { + folio = xa_untag_pointer(entry); + g = next; + sync = !!mt_find_after(&ghvm->mm, &g, gfn + nr); + + g = next - folio_nr_pages(folio); + folio_get(folio); + folio_lock(folio); + if (mtree_load(&ghvm->mm, g) == entry) + ret = __gunyah_vm_reclaim_folio_locked(ghvm, entry, g, sync); + else + ret = -EAGAIN; + folio_unlock(folio); + folio_put(folio); + if (ret && ret2 != -EAGAIN) + ret2 = ret; + } + + return ret2; +} From patchwork Tue Jan 9 19:37:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761167 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C1DFA3F8F7; Tue, 9 Jan 2024 19:38:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="oD3seSC8" Received: from pps.filterd (m0279870.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409JVgiM000985; Tue, 
9 Jan 2024 19:38:03 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=9X97Z/Sw4iF1xfdQ6zXP7tMdWV0+M1tv2BkbjOoyCL8 =; b=oD3seSC88GSY09RBXXDfdO/S1XRWcAfJVppANpFpyYsRUtAjIM8F7yWT8KY US+gIixz6fLJ7c1G3JNlZJq8Cjidaxj8qarOqbSm2sCvkoeZxubRH7KXWOI/8DSg mWkLPV9/Iwe6lKKRTuU6VeWOf2VKyHg6M03d8c961/f1r4pxGtTQelEqXSVTO1+1 cNzPnKtkB36et2dnV14TwwDsbi4/YDhRXeszMROvdFJgjrRum7i9VpGYasjivj9n 6htyi72Npv0/UUoaV7hzDa9sRY+A2tCAALm5QnKZ+lt8Ltn9xwCzhPFuiw5gvl50 PejHQOT5BRoNkYfSlzORJlZ8H7w== Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9q70eg9-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:03 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc1ZX024598 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:01 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:01 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:57 -0800 Subject: [PATCH v16 19/34] gunyah: rsc_mgr: Add platform ops on mem_lend/mem_reclaim Precedence: bulk X-Mailing-List: devicetree@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-19-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: BeJma36D3-TzHNyW40rWlmdLSG7o1asP X-Proofpoint-GUID: BeJma36D3-TzHNyW40rWlmdLSG7o1asP X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 adultscore=0 clxscore=1015 impostorscore=0 suspectscore=0 bulkscore=0 mlxlogscore=999 malwarescore=0 lowpriorityscore=0 priorityscore=1501 mlxscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 On Qualcomm platforms, there is a firmware entity which controls access to physical pages. In order to share memory with another VM, this entity needs to be informed that the guest VM should have access to the memory. 
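[Illustrative example, not part of the patch: a hypothetical platform module could hook mem_lend/mem_reclaim through the ops structure and devm_ registration helper added below. example_assign_pages()/example_unassign_pages() are invented stand-ins for whatever firmware interface the platform uses to transfer page ownership.]

/* Invented example callbacks: notify the platform firmware about parcel ownership. */
static int example_assign_pages(struct gunyah_rm *rm,
                                struct gunyah_rm_mem_parcel *mem_parcel)
{
        /* e.g. tell the firmware entity that the target VM may access these pages */
        return 0;
}

static int example_unassign_pages(struct gunyah_rm *rm,
                                  struct gunyah_rm_mem_parcel *mem_parcel)
{
        /* e.g. return the pages to the host once the parcel is reclaimed */
        return 0;
}

static const struct gunyah_rm_platform_ops example_platform_ops = {
        .pre_mem_share = example_assign_pages,
        .post_mem_reclaim = example_unassign_pages,
};

static int example_platform_probe(struct platform_device *pdev)
{
        /* The devm_ variant unregisters the ops automatically on device removal */
        return devm_gunyah_rm_register_platform_ops(&pdev->dev,
                                                    &example_platform_ops);
}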
Co-developed-by: Prakruthi Deepak Heragu Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Kconfig | 4 + drivers/virt/gunyah/Makefile | 1 + drivers/virt/gunyah/gunyah_platform_hooks.c | 115 ++++++++++++++++++++++++++++ drivers/virt/gunyah/rsc_mgr.h | 10 +++ drivers/virt/gunyah/rsc_mgr_rpc.c | 20 ++++- drivers/virt/gunyah/vm_mgr_mem.c | 32 +++++--- include/linux/gunyah.h | 37 +++++++++ 7 files changed, 206 insertions(+), 13 deletions(-) diff --git a/drivers/virt/gunyah/Kconfig b/drivers/virt/gunyah/Kconfig index 6f4c85db80b5..23ba523d25dc 100644 --- a/drivers/virt/gunyah/Kconfig +++ b/drivers/virt/gunyah/Kconfig @@ -3,6 +3,7 @@ config GUNYAH tristate "Gunyah Virtualization drivers" depends on ARM64 + select GUNYAH_PLATFORM_HOOKS help The Gunyah drivers are the helper interfaces that run in a guest VM such as basic inter-VM IPC and signaling mechanisms, and higher level @@ -10,3 +11,6 @@ config GUNYAH Say Y/M here to enable the drivers needed to interact in a Gunyah virtual environment. + +config GUNYAH_PLATFORM_HOOKS + tristate diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index f3c9507224ee..ffcde0e0ccfa 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -3,3 +3,4 @@ gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o vm_mgr_mem.o obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o gunyah_vcpu.o +obj-$(CONFIG_GUNYAH_PLATFORM_HOOKS) += gunyah_platform_hooks.o diff --git a/drivers/virt/gunyah/gunyah_platform_hooks.c b/drivers/virt/gunyah/gunyah_platform_hooks.c new file mode 100644 index 000000000000..a1f93321e5ba --- /dev/null +++ b/drivers/virt/gunyah/gunyah_platform_hooks.c @@ -0,0 +1,115 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include +#include +#include +#include + +#include "rsc_mgr.h" + +static const struct gunyah_rm_platform_ops *rm_platform_ops; +static DECLARE_RWSEM(rm_platform_ops_lock); + +int gunyah_rm_platform_pre_mem_share(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *mem_parcel) +{ + int ret = 0; + + down_read(&rm_platform_ops_lock); + if (rm_platform_ops && rm_platform_ops->pre_mem_share) + ret = rm_platform_ops->pre_mem_share(rm, mem_parcel); + up_read(&rm_platform_ops_lock); + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_rm_platform_pre_mem_share); + +int gunyah_rm_platform_post_mem_reclaim(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *mem_parcel) +{ + int ret = 0; + + down_read(&rm_platform_ops_lock); + if (rm_platform_ops && rm_platform_ops->post_mem_reclaim) + ret = rm_platform_ops->post_mem_reclaim(rm, mem_parcel); + up_read(&rm_platform_ops_lock); + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_rm_platform_post_mem_reclaim); + +int gunyah_rm_platform_pre_demand_page(struct gunyah_rm *rm, u16 vmid, + u32 flags, struct folio *folio) +{ + int ret = 0; + + down_read(&rm_platform_ops_lock); + if (rm_platform_ops && rm_platform_ops->pre_demand_page) + ret = rm_platform_ops->pre_demand_page(rm, vmid, flags, folio); + up_read(&rm_platform_ops_lock); + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_rm_platform_pre_demand_page); + +int gunyah_rm_platform_reclaim_demand_page(struct gunyah_rm *rm, u16 vmid, + u32 flags, struct folio *folio) +{ + int ret = 0; + + down_read(&rm_platform_ops_lock); + if (rm_platform_ops && rm_platform_ops->release_demand_page) + ret = rm_platform_ops->release_demand_page(rm, vmid, flags, + folio); + up_read(&rm_platform_ops_lock); + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_rm_platform_reclaim_demand_page); + +int gunyah_rm_register_platform_ops( + const struct gunyah_rm_platform_ops *platform_ops) +{ + int ret = 0; + + down_write(&rm_platform_ops_lock); + if (!rm_platform_ops) + rm_platform_ops = platform_ops; + else + ret = -EEXIST; + up_write(&rm_platform_ops_lock); + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_rm_register_platform_ops); + +void gunyah_rm_unregister_platform_ops( + const struct gunyah_rm_platform_ops *platform_ops) +{ + down_write(&rm_platform_ops_lock); + if (rm_platform_ops == platform_ops) + rm_platform_ops = NULL; + up_write(&rm_platform_ops_lock); +} +EXPORT_SYMBOL_GPL(gunyah_rm_unregister_platform_ops); + +static void _devm_gunyah_rm_unregister_platform_ops(void *data) +{ + gunyah_rm_unregister_platform_ops( + (const struct gunyah_rm_platform_ops *)data); +} + +int devm_gunyah_rm_register_platform_ops( + struct device *dev, const struct gunyah_rm_platform_ops *ops) +{ + int ret; + + ret = gunyah_rm_register_platform_ops(ops); + if (ret) + return ret; + + return devm_add_action(dev, _devm_gunyah_rm_unregister_platform_ops, + (void *)ops); +} +EXPORT_SYMBOL_GPL(devm_gunyah_rm_register_platform_ops); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Gunyah Platform Hooks"); diff --git a/drivers/virt/gunyah/rsc_mgr.h b/drivers/virt/gunyah/rsc_mgr.h index ec8ad8149e8e..68d08d3cff02 100644 --- a/drivers/virt/gunyah/rsc_mgr.h +++ b/drivers/virt/gunyah/rsc_mgr.h @@ -117,4 +117,14 @@ int gunyah_rm_call(struct gunyah_rm *rsc_mgr, u32 message_id, const void *req_buf, size_t req_buf_size, void **resp_buf, size_t *resp_buf_size); +int gunyah_rm_platform_pre_mem_share(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *mem_parcel); +int gunyah_rm_platform_post_mem_reclaim( + struct gunyah_rm *rm, struct gunyah_rm_mem_parcel *mem_parcel); + +int 
gunyah_rm_platform_pre_demand_page(struct gunyah_rm *rm, u16 vmid, + u32 flags, struct folio *folio); +int gunyah_rm_platform_reclaim_demand_page(struct gunyah_rm *rm, u16 vmid, + u32 flags, struct folio *folio); + #endif diff --git a/drivers/virt/gunyah/rsc_mgr_rpc.c b/drivers/virt/gunyah/rsc_mgr_rpc.c index bc44bde990ce..0d78613827b5 100644 --- a/drivers/virt/gunyah/rsc_mgr_rpc.c +++ b/drivers/virt/gunyah/rsc_mgr_rpc.c @@ -206,6 +206,12 @@ int gunyah_rm_mem_share(struct gunyah_rm *rm, struct gunyah_rm_mem_parcel *p) if (!msg) return -ENOMEM; + ret = gunyah_rm_platform_pre_mem_share(rm, p); + if (ret) { + kfree(msg); + return ret; + } + req_header = msg; acl = (void *)req_header + sizeof(*req_header); mem = (void *)acl + acl_size; @@ -231,8 +237,10 @@ int gunyah_rm_mem_share(struct gunyah_rm *rm, struct gunyah_rm_mem_parcel *p) &resp_size); kfree(msg); - if (ret) + if (ret) { + gunyah_rm_platform_post_mem_reclaim(rm, p); return ret; + } p->mem_handle = le32_to_cpu(*resp); kfree(resp); @@ -263,9 +271,15 @@ int gunyah_rm_mem_reclaim(struct gunyah_rm *rm, struct gunyah_rm_mem_release_req req = { .mem_handle = cpu_to_le32(parcel->mem_handle), }; + int ret; - return gunyah_rm_call(rm, GUNYAH_RM_RPC_MEM_RECLAIM, &req, sizeof(req), - NULL, NULL); + ret = gunyah_rm_call(rm, GUNYAH_RM_RPC_MEM_RECLAIM, &req, sizeof(req), + NULL, NULL); + /* Only call platform mem reclaim hooks if we reclaimed the memory */ + if (ret) + return ret; + + return gunyah_rm_platform_post_mem_reclaim(rm, parcel); } /** diff --git a/drivers/virt/gunyah/vm_mgr_mem.c b/drivers/virt/gunyah/vm_mgr_mem.c index d3fcb4514907..15610a8c6f82 100644 --- a/drivers/virt/gunyah/vm_mgr_mem.c +++ b/drivers/virt/gunyah/vm_mgr_mem.c @@ -9,6 +9,7 @@ #include #include +#include "rsc_mgr.h" #include "vm_mgr.h" #define WRITE_TAG (1 << 0) @@ -84,7 +85,7 @@ int gunyah_vm_provide_folio(struct gunyah_vm *ghvm, struct folio *folio, size_t size = folio_size(folio); enum gunyah_error gunyah_error; unsigned long tag = 0; - int ret; + int ret, tmp; /* clang-format off */ if (share) { @@ -127,6 +128,11 @@ int gunyah_vm_provide_folio(struct gunyah_vm *ghvm, struct folio *folio, else /* !share && !write */ access = GUNYAH_PAGETABLE_ACCESS_RX; + ret = gunyah_rm_platform_pre_demand_page(ghvm->rm, ghvm->vmid, access, + folio); + if (ret) + goto remove; + gunyah_error = gunyah_hypercall_memextent_donate(donate_flags(share), host_extent->capid, guest_extent->capid, @@ -135,7 +141,7 @@ int gunyah_vm_provide_folio(struct gunyah_vm *ghvm, struct folio *folio, pr_err("Failed to donate memory for guest address 0x%016llx: %d\n", gpa, gunyah_error); ret = gunyah_error_remap(gunyah_error); - goto remove; + goto platform_release; } extent_attrs = @@ -166,6 +172,14 @@ int gunyah_vm_provide_folio(struct gunyah_vm *ghvm, struct folio *folio, if (gunyah_error != GUNYAH_ERROR_OK) pr_err("Failed to reclaim memory donation for guest address 0x%016llx: %d\n", gpa, gunyah_error); +platform_release: + tmp = gunyah_rm_platform_reclaim_demand_page(ghvm->rm, ghvm->vmid, + access, folio); + if (tmp) { + pr_err("Platform failed to reclaim memory for guest address 0x%016llx: %d", + gpa, tmp); + return ret; + } remove: mtree_erase(&ghvm->mm, gfn); return ret; @@ -243,14 +257,12 @@ static int __gunyah_vm_reclaim_folio_locked(struct gunyah_vm *ghvm, void *entry, else /* !share && !write */ access = GUNYAH_PAGETABLE_ACCESS_RX; - gunyah_error = gunyah_hypercall_memextent_donate(donate_flags(share), - guest_extent->capid, - host_extent->capid, pa, - size); - if (gunyah_error != 
GUNYAH_ERROR_OK) { - pr_err("Failed to reclaim memory donation for guest address 0x%016llx: %d\n", - gfn << PAGE_SHIFT, gunyah_error); - ret = gunyah_error_remap(gunyah_error); + ret = gunyah_rm_platform_reclaim_demand_page(ghvm->rm, ghvm->vmid, + access, folio); + if (ret) { + pr_err_ratelimited( + "Platform failed to reclaim memory for guest address 0x%016llx: %d", + gunyah_gfn_to_gpa(gfn), ret); goto err; } diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index 9065f5758c39..32ce578220ca 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -199,6 +199,43 @@ struct gunyah_rm_mem_parcel { u32 mem_handle; }; +struct gunyah_rm_platform_ops { + int (*pre_mem_share)(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *mem_parcel); + int (*post_mem_reclaim)(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *mem_parcel); + + int (*pre_demand_page)(struct gunyah_rm *rm, u16 vmid, u32 flags, + struct folio *folio); + int (*release_demand_page)(struct gunyah_rm *rm, u16 vmid, u32 flags, + struct folio *folio); +}; + +#if IS_ENABLED(CONFIG_GUNYAH_PLATFORM_HOOKS) +int gunyah_rm_register_platform_ops( + const struct gunyah_rm_platform_ops *platform_ops); +void gunyah_rm_unregister_platform_ops( + const struct gunyah_rm_platform_ops *platform_ops); +int devm_gunyah_rm_register_platform_ops( + struct device *dev, const struct gunyah_rm_platform_ops *ops); +#else +static inline int gunyah_rm_register_platform_ops( + const struct gunyah_rm_platform_ops *platform_ops) +{ + return 0; +} +static inline void gunyah_rm_unregister_platform_ops( + const struct gunyah_rm_platform_ops *platform_ops) +{ +} +static inline int +devm_gunyah_rm_register_platform_ops(struct device *dev, + const struct gunyah_rm_platform_ops *ops) +{ + return 0; +} +#endif + /******************************************************************************/ /* Common arch-independent definitions for Gunyah hypercalls */ #define GUNYAH_CAPID_INVAL U64_MAX From patchwork Tue Jan 9 19:37:59 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761165 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CFDF940C0B; Tue, 9 Jan 2024 19:38:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="Lo5n72Mz" Received: from pps.filterd (m0279871.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409JEnrB008597; Tue, 9 Jan 2024 19:38:04 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=miMCDfTEk99kjZA/iq3tKBkY47MzQ24QITHB/sPM65M =; b=Lo5n72MzcMcHMEz31QFLMDK3Jse/XQsdE7yrPE0K/M0fzI5bxpM9XlSITzj dYy35XZ3bDCG51f1h9ikSgSzTR3jaKzjLgcs5tEFmQLovYCnXPJ9daFl7fRRveIE okFKpscoSnGRp+PftF9NGeusmgNoNH5T6AMSvaUNdHjzFoXD9GITNrYR5GZt/JjM jzAh9YQ2w7cEFM1AMJYVyj9DOAUvEec35fr/+nDFM3QdjJrUY6tR/GUsfxHJXdyt 6NX/L9KdXakkrrN9/4R/L3AnIqxonyaGvbOxsGt2w019xc6rFE6Ihrxp+X8xvELl JmcRF4ngOxoG6P22XBKUwkzkTXg== 
Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh98m8hr9-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:04 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc32H024616 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:03 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:02 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:59 -0800 Subject: [PATCH v16 21/34] virt: gunyah: Implement guestmemfd Precedence: bulk X-Mailing-List: devicetree@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-21-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: 66cUBx-tajMGX41hKMj19ebFqRM3YAnm X-Proofpoint-GUID: 66cUBx-tajMGX41hKMj19ebFqRM3YAnm X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0 impostorscore=0 spamscore=0 mlxscore=0 priorityscore=1501 phishscore=0 malwarescore=0 mlxlogscore=999 suspectscore=0 clxscore=1015 bulkscore=0 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Memory provided to Gunyah virtual machines are provided by a Gunyah guestmemfd. Because memory provided to virtual machines may be unmapped at stage-2 from the host (i.e. in the hypervisor's page tables for the host), special care needs to be taken to ensure that the kernel doesn't have a page mapped when it is lent to the guest. Without this tracking, a kernel panic could be induced by userspace tricking the kernel into accessing guest-private memory. Introduce the basic guestmemfd ops and ioctl. Userspace should be able to access the memory unless it is provided to the guest virtual machine: this is necessary to allow userspace to preload binaries such as the kernel Image prior to running the VM. Subsequent commits will wire up providing the memory to the guest. 
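[Illustrative example, not part of the patch: a userspace sketch of creating a guest memfd with the new ioctl. It assumes the Gunyah character device is /dev/gunyah (the same fd that accepts GUNYAH_CREATE_VM) and that <linux/gunyah.h> carries the uapi additions below.]

#include <fcntl.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/gunyah.h>

int example_create_guest_mem(size_t size)
{
        struct gunyah_create_mem_args args = {
                .flags = GHMF_CLOEXEC,          /* optionally | GHMF_ALLOW_HUGEPAGE */
                .size = size,                   /* must be page-aligned */
        };
        int gunyah_fd, gmem_fd;

        gunyah_fd = open("/dev/gunyah", O_RDWR);
        if (gunyah_fd < 0)
                return -1;

        /* Returns a new guest memfd on success; folios are allocated lazily */
        gmem_fd = ioctl(gunyah_fd, GUNYAH_CREATE_GUEST_MEM, &args);
        close(gunyah_fd);
        return gmem_fd;
}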
Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Makefile | 2 +- drivers/virt/gunyah/guest_memfd.c | 279 ++++++++++++++++++++++++++++++++++++++ drivers/virt/gunyah/vm_mgr.c | 9 ++ drivers/virt/gunyah/vm_mgr.h | 2 + include/uapi/linux/gunyah.h | 19 +++ 5 files changed, 310 insertions(+), 1 deletion(-) diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index a6c6f29b887a..c4505fce177d 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -1,6 +1,6 @@ # SPDX-License-Identifier: GPL-2.0 -gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o vm_mgr_mem.o +gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o vm_mgr_mem.o guest_memfd.o obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o gunyah_vcpu.o obj-$(CONFIG_GUNYAH_PLATFORM_HOOKS) += gunyah_platform_hooks.o diff --git a/drivers/virt/gunyah/guest_memfd.c b/drivers/virt/gunyah/guest_memfd.c new file mode 100644 index 000000000000..73a3f1368081 --- /dev/null +++ b/drivers/virt/gunyah/guest_memfd.c @@ -0,0 +1,279 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2023-2024 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#define pr_fmt(fmt) "gunyah_guest_mem: " fmt + +#include +#include +#include +#include +#include +#include + +#include + +#include "vm_mgr.h" + +static struct folio *gunyah_gmem_get_huge_folio(struct inode *inode, + pgoff_t index) +{ +#ifdef CONFIG_TRANSPARENT_HUGEPAGE + unsigned long huge_index = round_down(index, HPAGE_PMD_NR); + unsigned long flags = (unsigned long)inode->i_private; + struct address_space *mapping = inode->i_mapping; + gfp_t gfp = mapping_gfp_mask(mapping); + struct folio *folio; + + if (!(flags & GHMF_ALLOW_HUGEPAGE)) + return NULL; + + if (filemap_range_has_page(mapping, huge_index << PAGE_SHIFT, + (huge_index + HPAGE_PMD_NR - 1) + << PAGE_SHIFT)) + return NULL; + + folio = filemap_alloc_folio(gfp, HPAGE_PMD_ORDER); + if (!folio) + return NULL; + + if (filemap_add_folio(mapping, folio, huge_index, gfp)) { + folio_put(folio); + return NULL; + } + + return folio; +#else + return NULL; +#endif +} + +static struct folio *gunyah_gmem_get_folio(struct inode *inode, pgoff_t index) +{ + struct folio *folio; + + folio = gunyah_gmem_get_huge_folio(inode, index); + if (!folio) { + folio = filemap_grab_folio(inode->i_mapping, index); + if (IS_ERR_OR_NULL(folio)) + return NULL; + } + + /* + * Use the up-to-date flag to track whether or not the memory has been + * zeroed before being handed off to the guest. There is no backing + * storage for the memory, so the folio will remain up-to-date until + * it's removed. + */ + if (!folio_test_uptodate(folio)) { + unsigned long nr_pages = folio_nr_pages(folio); + unsigned long i; + + for (i = 0; i < nr_pages; i++) + clear_highpage(folio_page(folio, i)); + + folio_mark_uptodate(folio); + } + + /* + * Ignore accessed, referenced, and dirty flags. The memory is + * unevictable and there is no storage to write back to. 
+ */ + return folio; +} + +static vm_fault_t gunyah_gmem_host_fault(struct vm_fault *vmf) +{ + struct folio *folio; + + folio = gunyah_gmem_get_folio(file_inode(vmf->vma->vm_file), + vmf->pgoff); + if (!folio || folio_test_private(folio)) { + folio_unlock(folio); + folio_put(folio); + return VM_FAULT_SIGBUS; + } + + vmf->page = folio_file_page(folio, vmf->pgoff); + + return VM_FAULT_LOCKED; +} + +static const struct vm_operations_struct gunyah_gmem_vm_ops = { + .fault = gunyah_gmem_host_fault, +}; + +static int gunyah_gmem_mmap(struct file *file, struct vm_area_struct *vma) +{ + file_accessed(file); + vma->vm_ops = &gunyah_gmem_vm_ops; + return 0; +} + +/** + * gunyah_gmem_punch_hole() - try to reclaim a range of pages + * @inode: guest memfd inode + * @offset: Offset into memfd to start reclaim + * @len: length to reclaim + * + * Will try to unmap from virtual machines any folios covered by + * [offset, offset+len]. If unmapped, then tries to free those folios + * + * Returns - error code if any folio in the range couldn't be freed. + */ +static long gunyah_gmem_punch_hole(struct inode *inode, loff_t offset, + loff_t len) +{ + truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1); + + return 0; +} + +static long gunyah_gmem_allocate(struct inode *inode, loff_t offset, loff_t len) +{ + struct address_space *mapping = inode->i_mapping; + pgoff_t start, index, end; + int r; + + /* Dedicated guest is immutable by default. */ + if (offset + len > i_size_read(inode)) + return -EINVAL; + + filemap_invalidate_lock_shared(mapping); + + start = offset >> PAGE_SHIFT; + end = (offset + len) >> PAGE_SHIFT; + + r = 0; + for (index = start; index < end;) { + struct folio *folio; + + if (signal_pending(current)) { + r = -EINTR; + break; + } + + folio = gunyah_gmem_get_folio(inode, index); + if (!folio) { + r = -ENOMEM; + break; + } + + index = folio_next_index(folio); + + folio_unlock(folio); + folio_put(folio); + + /* 64-bit only, wrapping the index should be impossible. 
*/ + if (WARN_ON_ONCE(!index)) + break; + + cond_resched(); + } + + filemap_invalidate_unlock_shared(mapping); + + return r; +} + +static long gunyah_gmem_fallocate(struct file *file, int mode, loff_t offset, + loff_t len) +{ + long ret; + + if (!(mode & FALLOC_FL_KEEP_SIZE)) + return -EOPNOTSUPP; + + if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE | + FALLOC_FL_ZERO_RANGE)) + return -EOPNOTSUPP; + + if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len)) + return -EINVAL; + + if (mode & FALLOC_FL_PUNCH_HOLE) + ret = gunyah_gmem_punch_hole(file_inode(file), offset, len); + else + ret = gunyah_gmem_allocate(file_inode(file), offset, len); + + if (!ret) + file_modified(file); + return ret; +} + +static int gunyah_gmem_release(struct inode *inode, struct file *file) +{ + return 0; +} + +static const struct file_operations gunyah_gmem_fops = { + .owner = THIS_MODULE, + .llseek = generic_file_llseek, + .mmap = gunyah_gmem_mmap, + .open = generic_file_open, + .fallocate = gunyah_gmem_fallocate, + .release = gunyah_gmem_release, +}; + +static const struct address_space_operations gunyah_gmem_aops = { + .dirty_folio = noop_dirty_folio, + .migrate_folio = migrate_folio, + .error_remove_folio = generic_error_remove_folio, +}; + +int gunyah_guest_mem_create(struct gunyah_create_mem_args *args) +{ + const char *anon_name = "[gh-gmem]"; + unsigned long fd_flags = 0; + struct inode *inode; + struct file *file; + int fd, err; + + if (!PAGE_ALIGNED(args->size)) + return -EINVAL; + + if (args->flags & ~(GHMF_CLOEXEC | GHMF_ALLOW_HUGEPAGE)) + return -EINVAL; + + if (args->flags & GHMF_CLOEXEC) + fd_flags |= O_CLOEXEC; + + fd = get_unused_fd_flags(fd_flags); + if (fd < 0) + return fd; + + /* + * Use the so called "secure" variant, which creates a unique inode + * instead of reusing a single inode. Each guest_memfd instance needs + * its own inode to track the size, flags, etc. + */ + file = anon_inode_create_getfile(anon_name, &gunyah_gmem_fops, NULL, + O_RDWR, NULL); + if (IS_ERR(file)) { + err = PTR_ERR(file); + goto err_fd; + } + + file->f_flags |= O_LARGEFILE; + + inode = file->f_inode; + WARN_ON(file->f_mapping != inode->i_mapping); + + inode->i_private = (void *)(unsigned long)args->flags; + inode->i_mapping->a_ops = &gunyah_gmem_aops; + inode->i_mode |= S_IFREG; + inode->i_size = args->size; + mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER); + mapping_set_large_folios(inode->i_mapping); + mapping_set_unmovable(inode->i_mapping); + /* Unmovable mappings are supposed to be marked unevictable as well. 
*/ + WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping)); + + fd_install(fd, file); + return fd; + +err_fd: + put_unused_fd(fd); + return err; +} diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index 26b6dce49970..33751d5cddd2 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -687,6 +687,15 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, switch (cmd) { case GUNYAH_CREATE_VM: return gunyah_dev_ioctl_create_vm(rm, arg); + case GUNYAH_CREATE_GUEST_MEM: { + struct gunyah_create_mem_args args; + + if (copy_from_user(&args, (const void __user *)arg, + sizeof(args))) + return -EFAULT; + + return gunyah_guest_mem_create(&args); + } default: return -ENOTTY; } diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index e500f6eb014e..055990842959 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -109,4 +109,6 @@ int gunyah_vm_provide_folio(struct gunyah_vm *ghvm, struct folio *folio, int gunyah_vm_reclaim_folio(struct gunyah_vm *ghvm, u64 gfn); int gunyah_vm_reclaim_range(struct gunyah_vm *ghvm, u64 gfn, u64 nr); +int gunyah_guest_mem_create(struct gunyah_create_mem_args *args); + #endif diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index 46f7d3aa61d0..c5f506350364 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -20,6 +20,25 @@ */ #define GUNYAH_CREATE_VM _IO(GUNYAH_IOCTL_TYPE, 0x0) /* Returns a Gunyah VM fd */ +enum gunyah_mem_flags { + GHMF_CLOEXEC = (1UL << 0), + GHMF_ALLOW_HUGEPAGE = (1UL << 1), +}; + +/** + * struct gunyah_create_mem_args - Description of guest memory to create + * @flags: See GHMF_*. + */ +struct gunyah_create_mem_args { + __u64 flags; + __u64 size; + __u64 reserved[6]; +}; + +#define GUNYAH_CREATE_GUEST_MEM \ + _IOW(GUNYAH_IOCTL_TYPE, 0x8, \ + struct gunyah_create_mem_args) /* Returns a Gunyah memory fd */ + /* * ioctls for gunyah-vm fds (returned by GUNYAH_CREATE_VM) */ From patchwork Tue Jan 9 19:38:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761159 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A430D481B4; Tue, 9 Jan 2024 19:38:32 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="m8NlPsuH" Received: from pps.filterd (m0279863.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409IocPs003466; Tue, 9 Jan 2024 19:38:04 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=J86hk5ZNAM4yco10H3q8d/nCtmnOiD8TudkIuvK3R54 =; b=m8NlPsuHzRxSYJTrWZTsm5UF20FIEirZ/YLEqL6Fveo7ObkscWnw7VdGTtk AlFpsxYSQg8+so5UojOofAtUKOs+Apmokcw8aGT5kugiXn6zbsF5d1yk2i+mHqop wyBaSQwF/bTtHbcnsy3NPqasl/rlP+hE32CH9Jb8kIs3wJbemymdkB3Zp1diqnNG eK4vqgc+OfCKKjfkxcORbhIVP65uiKpd4ML1tEpUTL/KoDUN26QnHjbGWl7zYjep dsHGen41zqM1JHuvPbDCDWG4dneNUv4Z3LGfQGmUw+arR4YNzw8SCh+Vo1fjph5L 
HbaH14JWpo7cashONpnSIRHKiJg== Received: from nasanppmta01.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9bmggd9-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:04 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA01.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc3JS003494 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:03 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:02 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:00 -0800 Subject: [PATCH v16 22/34] virt: gunyah: Add ioctl to bind guestmem to VMs Precedence: bulk X-Mailing-List: devicetree@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-22-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: crWTMvyVI_IKN5KiiZJuwdyyipi-0F48 X-Proofpoint-GUID: crWTMvyVI_IKN5KiiZJuwdyyipi-0F48 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 impostorscore=0 mlxscore=0 clxscore=1015 spamscore=0 priorityscore=1501 malwarescore=0 mlxlogscore=999 adultscore=0 bulkscore=0 suspectscore=0 lowpriorityscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 A maple tree is used to maintain a map from guest address ranges to a guestmemfd that provides the memory for that range of memory for the guest. The mapping of guest address range to guestmemfd is called a binding. Implement an ioctl to add/remove bindings to the virtual machine. The binding determines whether the memory is shared (host retains access) or lent (host loses access). 
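[Illustrative example, not part of the patch: a userspace sketch of binding a guest memfd into a VM with the new GUNYAH_VM_MAP_MEM ioctl. The field and flag names follow their use in the kernel code below; the exact struct gunyah_map_mem_args layout is the one defined in the uapi hunk of this patch, and the guest address used here is an arbitrary placeholder.]

#include <sys/ioctl.h>
#include <linux/gunyah.h>

int example_bind_guest_mem(int vm_fd, int gmem_fd, unsigned long size)
{
        struct gunyah_map_mem_args args = {
                .guest_mem_fd = gmem_fd,
                .flags = GUNYAH_MEM_ALLOW_RWX | GUNYAH_MEM_DEFAULT_ACCESS,
                .guest_addr = 0x80000000,       /* guest physical address of the binding */
                .offset = 0,                    /* page-aligned offset into the guest memfd */
                .size = size,                   /* page-aligned length */
        };

        /* Setting GUNYAH_MEM_UNMAP in flags makes the same call remove a binding */
        return ioctl(vm_fd, GUNYAH_VM_MAP_MEM, &args);
}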
Signed-off-by: Elliot Berman --- drivers/virt/gunyah/guest_memfd.c | 394 +++++++++++++++++++++++++++++++++++++- drivers/virt/gunyah/vm_mgr.c | 20 ++ drivers/virt/gunyah/vm_mgr.h | 9 + include/uapi/linux/gunyah.h | 41 ++++ 4 files changed, 455 insertions(+), 9 deletions(-) diff --git a/drivers/virt/gunyah/guest_memfd.c b/drivers/virt/gunyah/guest_memfd.c index 73a3f1368081..71686f1946da 100644 --- a/drivers/virt/gunyah/guest_memfd.c +++ b/drivers/virt/gunyah/guest_memfd.c @@ -16,6 +16,51 @@ #include "vm_mgr.h" +/** + * struct gunyah_gmem_binding - Represents a binding of guestmem to a Gunyah VM + * @gfn: Guest address to place acquired folios + * @ghvm: Pointer to Gunyah VM in this binding + * @i_off: offset into the guestmem to grab folios from + * @file: Pointer to guest_memfd + * @i_entry: list entry for inode->i_private_list + * @flags: Access flags for the binding + * @nr: Number of pages covered by this binding + */ +struct gunyah_gmem_binding { + u64 gfn; + struct gunyah_vm *ghvm; + + pgoff_t i_off; + struct file *file; + struct list_head i_entry; + + u32 flags; + unsigned long nr; +}; + +static inline pgoff_t gunyah_gfn_to_off(struct gunyah_gmem_binding *b, u64 gfn) +{ + return gfn - b->gfn + b->i_off; +} + +static inline u64 gunyah_off_to_gfn(struct gunyah_gmem_binding *b, pgoff_t off) +{ + return off - b->i_off + b->gfn; +} + +static inline bool gunyah_guest_mem_is_lend(struct gunyah_vm *ghvm, u32 flags) +{ + u8 access = flags & GUNYAH_MEM_ACCESS_MASK; + + if (access == GUNYAH_MEM_FORCE_LEND) + return true; + else if (access == GUNYAH_MEM_FORCE_SHARE) + return false; + + /* RM requires all VMs to be protected (isolated) */ + return true; +} + static struct folio *gunyah_gmem_get_huge_folio(struct inode *inode, pgoff_t index) { @@ -83,17 +128,55 @@ static struct folio *gunyah_gmem_get_folio(struct inode *inode, pgoff_t index) return folio; } +/** + * gunyah_gmem_launder_folio() - Tries to unmap one folio from virtual machine(s) + * @folio: The folio to unmap + * + * Returns - 0 if the folio has been reclaimed from any virtual machine(s) that + * folio was mapped into. 
+ */ +static int gunyah_gmem_launder_folio(struct folio *folio) +{ + struct address_space *const mapping = folio->mapping; + struct gunyah_gmem_binding *b; + pgoff_t index = folio_index(folio); + int ret = 0; + u64 gfn; + + filemap_invalidate_lock_shared(mapping); + list_for_each_entry(b, &mapping->i_private_list, i_entry) { + /* if the mapping doesn't cover this folio: skip */ + if (b->i_off > index || index > b->i_off + b->nr) + continue; + + gfn = gunyah_off_to_gfn(b, index); + ret = gunyah_vm_reclaim_folio(b->ghvm, gfn); + if (WARN_RATELIMIT(ret, "failed to reclaim gfn: %08llx %d\n", + gfn, ret)) + break; + } + filemap_invalidate_unlock_shared(mapping); + + return ret; +} + static vm_fault_t gunyah_gmem_host_fault(struct vm_fault *vmf) { struct folio *folio; folio = gunyah_gmem_get_folio(file_inode(vmf->vma->vm_file), vmf->pgoff); - if (!folio || folio_test_private(folio)) { + if (!folio) + return VM_FAULT_SIGBUS; + + /* If the folio is lent to a VM, try to reclim it */ + if (folio_test_private(folio) && gunyah_gmem_launder_folio(folio)) { folio_unlock(folio); folio_put(folio); return VM_FAULT_SIGBUS; } + /* gunyah_gmem_launder_folio should clear the private bit if it returns 0 */ + BUG_ON(folio_test_private(folio)); vmf->page = folio_file_page(folio, vmf->pgoff); @@ -106,9 +189,36 @@ static const struct vm_operations_struct gunyah_gmem_vm_ops = { static int gunyah_gmem_mmap(struct file *file, struct vm_area_struct *vma) { - file_accessed(file); - vma->vm_ops = &gunyah_gmem_vm_ops; - return 0; + struct address_space *const mapping = file->f_mapping; + struct gunyah_gmem_binding *b; + int ret = 0; + u64 gfn, nr; + + filemap_invalidate_lock_shared(mapping); + list_for_each_entry(b, &mapping->i_private_list, i_entry) { + if (!gunyah_guest_mem_is_lend(b->ghvm, b->flags)) + continue; + + /* if the binding doesn't cover this vma: skip */ + if (vma->vm_pgoff + vma_pages(vma) < b->i_off) + continue; + if (vma->vm_pgoff > b->i_off + b->nr) + continue; + + gfn = gunyah_off_to_gfn(b, vma->vm_pgoff); + nr = gunyah_off_to_gfn(b, vma->vm_pgoff + vma_pages(vma)) - gfn; + ret = gunyah_vm_reclaim_range(b->ghvm, gfn, nr); + if (ret) + break; + } + filemap_invalidate_unlock_shared(mapping); + + if (!ret) { + file_accessed(file); + vma->vm_ops = &gunyah_gmem_vm_ops; + } + + return ret; } /** @@ -125,9 +235,7 @@ static int gunyah_gmem_mmap(struct file *file, struct vm_area_struct *vma) static long gunyah_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len) { - truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1); - - return 0; + return invalidate_inode_pages2_range(inode->i_mapping, offset, offset + len - 1); } static long gunyah_gmem_allocate(struct inode *inode, loff_t offset, loff_t len) @@ -204,6 +312,12 @@ static long gunyah_gmem_fallocate(struct file *file, int mode, loff_t offset, static int gunyah_gmem_release(struct inode *inode, struct file *file) { + /** + * each binding increments refcount on file, so we shouldn't be here + * if i_private_list not empty. 
+ */ + BUG_ON(!list_empty(&inode->i_mapping->i_private_list)); + return 0; } @@ -216,10 +330,26 @@ static const struct file_operations gunyah_gmem_fops = { .release = gunyah_gmem_release, }; +static bool gunyah_gmem_release_folio(struct folio *folio, gfp_t gfp_flags) +{ + /* should return true if released; launder folio returns 0 if freed */ + return !gunyah_gmem_launder_folio(folio); +} + +static int gunyah_gmem_remove_folio(struct address_space *mapping, + struct folio *folio) +{ + if (mapping != folio->mapping) + return -EINVAL; + + return gunyah_gmem_launder_folio(folio); +} + static const struct address_space_operations gunyah_gmem_aops = { .dirty_folio = noop_dirty_folio, - .migrate_folio = migrate_folio, - .error_remove_folio = generic_error_remove_folio, + .release_folio = gunyah_gmem_release_folio, + .launder_folio = gunyah_gmem_launder_folio, + .error_remove_folio = gunyah_gmem_remove_folio, }; int gunyah_guest_mem_create(struct gunyah_create_mem_args *args) @@ -267,6 +397,7 @@ int gunyah_guest_mem_create(struct gunyah_create_mem_args *args) mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER); mapping_set_large_folios(inode->i_mapping); mapping_set_unmovable(inode->i_mapping); + mapping_set_release_always(inode->i_mapping); /* Unmovable mappings are supposed to be marked unevictable as well. */ WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping)); @@ -277,3 +408,248 @@ int gunyah_guest_mem_create(struct gunyah_create_mem_args *args) put_unused_fd(fd); return err; } + +void gunyah_gmem_remove_binding(struct gunyah_gmem_binding *b) +{ + WARN_ON(gunyah_vm_reclaim_range(b->ghvm, b->gfn, b->nr)); + mtree_erase(&b->ghvm->bindings, b->gfn); + list_del(&b->i_entry); + fput(b->file); + kfree(b); +} + +static inline unsigned long gunyah_gmem_page_mask(struct file *file) +{ + unsigned long gmem_flags = (unsigned long)file_inode(file)->i_private; + + if (gmem_flags & GHMF_ALLOW_HUGEPAGE) { +#if IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) + return HPAGE_PMD_MASK; +#else + return ULONG_MAX; +#endif + } + + return PAGE_MASK; +} + +static int gunyah_gmem_init_binding(struct gunyah_vm *ghvm, struct file *file, + struct gunyah_map_mem_args *args, + struct gunyah_gmem_binding *binding) +{ + const unsigned long page_mask = ~gunyah_gmem_page_mask(file); + + if (args->flags & ~(GUNYAH_MEM_ALLOW_RWX | GUNYAH_MEM_ACCESS_MASK)) + return -EINVAL; + + if (args->guest_addr & page_mask) + return -EINVAL; + + if (args->offset & page_mask) + return -EINVAL; + + if (args->size & page_mask) + return -EINVAL; + + binding->gfn = gunyah_gpa_to_gfn(args->guest_addr); + binding->ghvm = ghvm; + binding->i_off = args->offset >> PAGE_SHIFT; + binding->file = file; + binding->flags = args->flags; + binding->nr = args->size >> PAGE_SHIFT; + + return 0; +} + +static int gunyah_gmem_trim_binding(struct gunyah_gmem_binding *b, + unsigned long start_delta, + unsigned long end_delta) +{ + struct gunyah_vm *ghvm = b->ghvm; + int ret; + + down_write(&ghvm->bindings_lock); + if (!start_delta && !end_delta) { + ret = gunyah_vm_reclaim_range(ghvm, b->gfn, b->nr); + if (ret) + goto unlock; + gunyah_gmem_remove_binding(b); + } else if (start_delta && !end_delta) { + /* shrink the beginning */ + ret = gunyah_vm_reclaim_range(ghvm, b->gfn, + b->gfn + start_delta); + if (ret) + goto unlock; + mtree_erase(&ghvm->bindings, b->gfn); + b->gfn += start_delta; + b->i_off += start_delta; + b->nr -= start_delta; + ret = mtree_insert_range(&ghvm->bindings, b->gfn, + b->gfn + b->nr - 1, b, GFP_KERNEL); + } else if (!start_delta && end_delta) { + /* 
Shrink the end */ + ret = gunyah_vm_reclaim_range(ghvm, b->gfn + b->nr - end_delta, + b->gfn + b->nr); + if (ret) + goto unlock; + mtree_erase(&ghvm->bindings, b->gfn); + b->nr -= end_delta; + ret = mtree_insert_range(&ghvm->bindings, b->gfn, + b->gfn + b->nr - 1, b, GFP_KERNEL); + } else { + /* TODO: split the mapping into 2 */ + ret = -EINVAL; + } + +unlock: + up_write(&ghvm->bindings_lock); + return ret; +} + +static int gunyah_gmem_remove_mapping(struct gunyah_vm *ghvm, struct file *file, + struct gunyah_map_mem_args *args) +{ + struct inode *inode = file_inode(file); + struct gunyah_gmem_binding *b = NULL; + unsigned long start_delta, end_delta; + struct gunyah_gmem_binding remove; + int ret; + + ret = gunyah_gmem_init_binding(ghvm, file, args, &remove); + if (ret) + return ret; + + ret = -ENOENT; + filemap_invalidate_lock(inode->i_mapping); + list_for_each_entry(b, &inode->i_mapping->i_private_list, i_entry) { + if (b->ghvm != remove.ghvm || b->flags != remove.flags || + WARN_ON(b->file != remove.file)) + continue; + /** + * Test if the binding to remove is within this binding + * [gfn b nr] + * [gfn remove nr] + */ + if (b->gfn > remove.gfn) + continue; + if (b->gfn + b->nr < remove.gfn + remove.nr) + continue; + + /** + * We found the binding! + * Compute the delta in gfn start and make sure the offset + * into guest memfd matches. + */ + start_delta = remove.gfn - b->gfn; + if (remove.i_off - b->i_off != start_delta) + continue; + end_delta = remove.gfn + remove.nr - b->gfn - b->nr; + + ret = gunyah_gmem_trim_binding(b, start_delta, end_delta); + break; + } + + filemap_invalidate_unlock(inode->i_mapping); + return ret; +} + +static bool gunyah_gmem_binding_allowed_overlap(struct gunyah_gmem_binding *a, + struct gunyah_gmem_binding *b) +{ + /* assumes we are operating on the same file, check to be sure */ + BUG_ON(a->file != b->file); + + /** + * Gunyah only guarantees we can share a page with one VM and + * doesn't (currently) allow us to share same page with multiple VMs, + * regardless whether host can also access. + * Gunyah supports, but Linux hasn't implemented mapping same page + * into 2 separate addresses in guest's address space. This doesn't + * seem reasonable today, but we could do it later. + * All this to justify: check that the `a` region doesn't overlap with + * `b` region w.r.t. file offsets. 
+ */ + if (a->i_off + a->nr < b->i_off) + return false; + if (a->i_off > b->i_off + b->nr) + return false; + + return true; +} + +static int gunyah_gmem_add_mapping(struct gunyah_vm *ghvm, struct file *file, + struct gunyah_map_mem_args *args) +{ + struct gunyah_gmem_binding *b, *tmp = NULL; + struct inode *inode = file_inode(file); + int ret; + + b = kzalloc(sizeof(*b), GFP_KERNEL); + if (!b) + return -ENOMEM; + + ret = gunyah_gmem_init_binding(ghvm, file, args, b); + if (ret) + return ret; + + filemap_invalidate_lock(inode->i_mapping); + list_for_each_entry(tmp, &inode->i_mapping->i_private_list, i_entry) { + if (!gunyah_gmem_binding_allowed_overlap(b, tmp)) { + ret = -EEXIST; + goto unlock; + } + } + + ret = mtree_insert_range(&ghvm->bindings, b->gfn, b->gfn + b->nr - 1, + b, GFP_KERNEL); + if (ret) + goto unlock; + + list_add(&b->i_entry, &inode->i_mapping->i_private_list); + +unlock: + filemap_invalidate_unlock(inode->i_mapping); + return ret; +} + +int gunyah_gmem_modify_mapping(struct gunyah_vm *ghvm, + struct gunyah_map_mem_args *args) +{ + u8 access = args->flags & GUNYAH_MEM_ACCESS_MASK; + struct file *file; + int ret = -EINVAL; + + file = fget(args->guest_mem_fd); + if (!file) + return -EINVAL; + + if (file->f_op != &gunyah_gmem_fops) + goto err_file; + + if (args->flags & ~(GUNYAH_MEM_ALLOW_RWX | GUNYAH_MEM_UNMAP | GUNYAH_MEM_ACCESS_MASK)) + goto err_file; + + /* VM needs to have some permissions to the memory */ + if (!(args->flags & GUNYAH_MEM_ALLOW_RWX)) + goto err_file; + + if (access != GUNYAH_MEM_DEFAULT_ACCESS && + access != GUNYAH_MEM_FORCE_LEND && access != GUNYAH_MEM_FORCE_SHARE) + goto err_file; + + if (!PAGE_ALIGNED(args->guest_addr) || !PAGE_ALIGNED(args->offset) || + !PAGE_ALIGNED(args->size)) + goto err_file; + + if (args->flags & GUNYAH_MEM_UNMAP) { + args->flags &= ~GUNYAH_MEM_UNMAP; + ret = gunyah_gmem_remove_mapping(ghvm, file, args); + } else { + ret = gunyah_gmem_add_mapping(ghvm, file, args); + } + +err_file: + if (ret) + fput(file); + return ret; +} diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index 33751d5cddd2..be2061aa0a06 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -397,6 +397,8 @@ static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) INIT_LIST_HEAD(&ghvm->resource_tickets); mt_init(&ghvm->mm); + mt_init(&ghvm->bindings); + init_rwsem(&ghvm->bindings_lock); ghvm->addrspace_ticket.resource_type = GUNYAH_RESOURCE_TYPE_ADDR_SPACE; ghvm->addrspace_ticket.label = GUNYAH_VM_ADDRSPACE_LABEL; @@ -551,6 +553,14 @@ static long gunyah_vm_ioctl(struct file *filp, unsigned int cmd, r = gunyah_vm_rm_function_instance(ghvm, &f); break; } + case GUNYAH_VM_MAP_MEM: { + struct gunyah_map_mem_args args; + + if (copy_from_user(&args, argp, sizeof(args))) + return -EFAULT; + + return gunyah_gmem_modify_mapping(ghvm, &args); + } default: r = -ENOTTY; break; @@ -568,6 +578,8 @@ EXPORT_SYMBOL_GPL(gunyah_vm_get); static void _gunyah_vm_put(struct kref *kref) { struct gunyah_vm *ghvm = container_of(kref, struct gunyah_vm, kref); + struct gunyah_gmem_binding *b; + unsigned long idx = 0; int ret; /** @@ -579,6 +591,13 @@ static void _gunyah_vm_put(struct kref *kref) gunyah_vm_remove_functions(ghvm); + down_write(&ghvm->bindings_lock); + mt_for_each(&ghvm->bindings, b, idx, ULONG_MAX) { + gunyah_gmem_remove_binding(b); + } + up_write(&ghvm->bindings_lock); + WARN_ON(!mtree_empty(&ghvm->bindings)); + mtree_destroy(&ghvm->bindings); /** * If this fails, we're going to lose the memory for good and 
is * BUG_ON-worthy, but not unrecoverable (we just lose memory). @@ -606,6 +625,7 @@ static void _gunyah_vm_put(struct kref *kref) ghvm->vm_status == GUNYAH_RM_VM_STATUS_RESET); } + WARN_ON(!mtree_empty(&ghvm->mm)); mtree_destroy(&ghvm->mm); if (ghvm->vm_status > GUNYAH_RM_VM_STATUS_NO_STATE) { diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index 055990842959..518d05eeb642 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -36,6 +36,9 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, * @mm: A maple tree of all memory that has been mapped to a VM. * Indices are guest frame numbers; entries are either folios or * RM mem parcels + * @bindings: A maple tree of guest memfd bindings. Indices are guest frame + * numbers; entries are &struct gunyah_gmem_binding + * @bindings_lock: For serialization to @bindings * @addrspace_ticket: Resource ticket to the capability for guest VM's * address space * @host_private_extent_ticket: Resource ticket to the capability for our @@ -75,6 +78,8 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, struct gunyah_vm { u16 vmid; struct maple_tree mm; + struct maple_tree bindings; + struct rw_semaphore bindings_lock; struct gunyah_vm_resource_ticket addrspace_ticket, host_private_extent_ticket, host_shared_extent_ticket, guest_private_extent_ticket, guest_shared_extent_ticket; @@ -110,5 +115,9 @@ int gunyah_vm_reclaim_folio(struct gunyah_vm *ghvm, u64 gfn); int gunyah_vm_reclaim_range(struct gunyah_vm *ghvm, u64 gfn, u64 nr); int gunyah_guest_mem_create(struct gunyah_create_mem_args *args); +int gunyah_gmem_modify_mapping(struct gunyah_vm *ghvm, + struct gunyah_map_mem_args *args); +struct gunyah_gmem_binding; +void gunyah_gmem_remove_binding(struct gunyah_gmem_binding *binding); #endif diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index c5f506350364..1af4c5ae6bc3 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -87,6 +87,47 @@ struct gunyah_fn_desc { #define GUNYAH_VM_ADD_FUNCTION _IOW(GUNYAH_IOCTL_TYPE, 0x4, struct gunyah_fn_desc) #define GUNYAH_VM_REMOVE_FUNCTION _IOW(GUNYAH_IOCTL_TYPE, 0x7, struct gunyah_fn_desc) +/** + * enum gunyah_map_flags- Possible flags on &struct gunyah_map_mem_args + * @GUNYAH_MEM_DEFAULT_SHARE: Use default host access for the VM type + * @GUNYAH_MEM_FORCE_LEND: Force unmapping the memory once the guest starts to use + * @GUNYAH_MEM_FORCE_SHARE: Allow host to continue accessing memory when guest starts to use + * @GUNYAH_MEM_ALLOW_READ: Allow guest to read memory + * @GUNYAH_MEM_ALLOW_WRITE: Allow guest to write to the memory + * @GUNYAH_MEM_ALLOW_EXEC: Allow guest to execute instructions in the memory + */ +enum gunyah_map_flags { + GUNYAH_MEM_DEFAULT_ACCESS = 0, + GUNYAH_MEM_FORCE_LEND = 1, + GUNYAH_MEM_FORCE_SHARE = 2, +#define GUNYAH_MEM_ACCESS_MASK 0x7 + + GUNYAH_MEM_ALLOW_READ = 1UL << 4, + GUNYAH_MEM_ALLOW_WRITE = 1UL << 5, + GUNYAH_MEM_ALLOW_EXEC = 1UL << 6, + GUNYAH_MEM_ALLOW_RWX = + (GUNYAH_MEM_ALLOW_READ | GUNYAH_MEM_ALLOW_WRITE | GUNYAH_MEM_ALLOW_EXEC), + + GUNYAH_MEM_UNMAP = 1UL << 8, +}; + +/** + * struct gunyah_map_mem_args - Description to provide guest memory into a VM + * @guest_addr: Location in guest address space to place the memory + * @flags: See &enum gunyah_map_flags. 
+ * @guest_mem_fd: File descriptor created by GUNYAH_CREATE_GUEST_MEM + * @offset: Offset into the guest memory file + */ +struct gunyah_map_mem_args { + __u64 guest_addr; + __u32 flags; + __u32 guest_mem_fd; + __u64 offset; + __u64 size; +}; + +#define GUNYAH_VM_MAP_MEM _IOW(GUNYAH_IOCTL_TYPE, 0x9, struct gunyah_map_mem_args) + /* * ioctls for vCPU fds */ From patchwork Tue Jan 9 19:38:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761163 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 40381446D0; Tue, 9 Jan 2024 19:38:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="JsCn3DAO" Received: from pps.filterd (m0279872.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409Gxr33016339; Tue, 9 Jan 2024 19:38:06 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=+Qmhckol3LBZQwJot12EAf5bVy2Rf/uHyT+4To7C3J0 =; b=JsCn3DAOL5dGuQbg9AbPFtybTs8AbU3IU2fM3psZaVJd0bp4eXfrdU2bh5F lxw5tC2mrLFKnWXpebotu98uqHEyO28xAVT9DcwzD1HByxVeCsTcn158gnjTsqPI fO/kAuTTIY0qLBYlspE2V740EKQb8L9O4Z+Ln1iP35goGy0SAPdXxBc3a+Zgkndi 8Nd1mk0nUBzI0bpD4aZ2OFndYd5KgxMeV0Q/orT+c35hx31z/eYBDxi3Sp0kejxy lWsY9Xcr5DXWB39cvJg+5T91iDLwf9SSTcV1SWGNUrdb9Si5PYM2xfgRt+GOeoLW 32vY+VLeVYgS3dzO5Z0vR5rmJNA== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh85t0pnu-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:06 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc5Co030549 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:05 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:04 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:02 -0800 Subject: [PATCH v16 24/34] virt: gunyah: Share guest VM dtb configuration to Gunyah Precedence: bulk X-Mailing-List: devicetree@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-24-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman 
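As a rough illustration of the GUNYAH_VM_MAP_MEM interface introduced above: the sketch below shows how userspace might bind part of a guest memfd into the VM's address space with read/write access. It assumes vm_fd comes from GUNYAH_CREATE_VM and guest_mem_fd from GUNYAH_CREATE_GUEST_MEM; the helper name is illustrative and not part of the patch.

#include <sys/ioctl.h>
#include <linux/gunyah.h>

/* Bind "size" bytes at "offset" of the guest memfd to guest_addr (R/W). */
static int map_guest_memory(int vm_fd, int guest_mem_fd,
			    __u64 guest_addr, __u64 offset, __u64 size)
{
	struct gunyah_map_mem_args args = {
		.guest_addr = guest_addr,
		/* default share/lend behavior; guest may read and write */
		.flags = GUNYAH_MEM_DEFAULT_ACCESS |
			 GUNYAH_MEM_ALLOW_READ | GUNYAH_MEM_ALLOW_WRITE,
		.guest_mem_fd = guest_mem_fd,
		.offset = offset,
		.size = size,
	};

	/* guest_addr, offset and size must all be page aligned */
	return ioctl(vm_fd, GUNYAH_VM_MAP_MEM, &args);
}

Removing the binding later would use the same ioctl with GUNYAH_MEM_UNMAP set in flags, per gunyah_gmem_modify_mapping() above.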
X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: vUg_t-T6NwaXapDllJE2l2cgFSriSpBt X-Proofpoint-GUID: vUg_t-T6NwaXapDllJE2l2cgFSriSpBt X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 bulkscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 clxscore=1015 priorityscore=1501 adultscore=0 impostorscore=0 phishscore=0 spamscore=0 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Gunyah Resource Manager sets up a virtual machine based on a device tree which lives in guest memory. Resource manager requires this memory to be provided as a memory parcel for it to read and manipulate. Construct a memory parcel, lend it to the virtual machine, and inform resource manager about the device tree location (the memory parcel ID and offset into the memory parcel). Signed-off-by: Elliot Berman --- drivers/virt/gunyah/vm_mgr.c | 49 ++++++++++++++++++++++++++++++++++++++++++-- drivers/virt/gunyah/vm_mgr.h | 13 +++++++++++- include/uapi/linux/gunyah.h | 14 +++++++++++++ 3 files changed, 73 insertions(+), 3 deletions(-) diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index be2061aa0a06..4379b5ba151e 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -445,8 +445,27 @@ static int gunyah_vm_start(struct gunyah_vm *ghvm) ghvm->vmid = ret; ghvm->vm_status = GUNYAH_RM_VM_STATUS_LOAD; - ret = gunyah_rm_vm_configure(ghvm->rm, ghvm->vmid, ghvm->auth, 0, 0, 0, - 0, 0); + ghvm->dtb.parcel_start = ghvm->dtb.config.guest_phys_addr >> PAGE_SHIFT; + ghvm->dtb.parcel_pages = ghvm->dtb.config.size >> PAGE_SHIFT; + /* RM requires the DTB parcel to be lent to guard against malicious + * modifications while starting VM. Force it so. 
+ */ + ghvm->dtb.parcel.n_acl_entries = 1; + ret = gunyah_gmem_share_parcel(ghvm, &ghvm->dtb.parcel, + &ghvm->dtb.parcel_start, + &ghvm->dtb.parcel_pages); + if (ret) { + dev_warn(ghvm->parent, + "Failed to allocate parcel for DTB: %d\n", ret); + goto err; + } + + ret = gunyah_rm_vm_configure(ghvm->rm, ghvm->vmid, ghvm->auth, + ghvm->dtb.parcel.mem_handle, 0, 0, + ghvm->dtb.config.guest_phys_addr - + (ghvm->dtb.parcel_start + << PAGE_SHIFT), + ghvm->dtb.config.size); if (ret) { dev_warn(ghvm->parent, "Failed to configure VM: %d\n", ret); goto err; @@ -485,6 +504,8 @@ static int gunyah_vm_start(struct gunyah_vm *ghvm) goto err; } + WARN_ON(gunyah_vm_parcel_to_paged(ghvm, &ghvm->dtb.parcel, ghvm->dtb.parcel_start, ghvm->dtb.parcel_pages)); + ghvm->vm_status = GUNYAH_RM_VM_STATUS_RUNNING; up_write(&ghvm->status_lock); return ret; @@ -531,6 +552,21 @@ static long gunyah_vm_ioctl(struct file *filp, unsigned int cmd, long r; switch (cmd) { + case GUNYAH_VM_SET_DTB_CONFIG: { + struct gunyah_vm_dtb_config dtb_config; + + if (copy_from_user(&dtb_config, argp, sizeof(dtb_config))) + return -EFAULT; + + if (overflows_type(dtb_config.guest_phys_addr + dtb_config.size, + u64)) + return -EOVERFLOW; + + ghvm->dtb.config = dtb_config; + + r = 0; + break; + } case GUNYAH_VM_START: { r = gunyah_vm_ensure_started(ghvm); break; @@ -589,6 +625,15 @@ static void _gunyah_vm_put(struct kref *kref) if (ghvm->vm_status == GUNYAH_RM_VM_STATUS_RUNNING) gunyah_vm_stop(ghvm); + if (ghvm->vm_status == GUNYAH_RM_VM_STATUS_LOAD) { + ret = gunyah_gmem_reclaim_parcel(ghvm, &ghvm->dtb.parcel, + ghvm->dtb.parcel_start, + ghvm->dtb.parcel_pages); + if (ret) + dev_err(ghvm->parent, + "Failed to reclaim DTB parcel: %d\n", ret); + } + gunyah_vm_remove_functions(ghvm); down_write(&ghvm->bindings_lock); diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index a79c11f1c3a5..b2ab2f1bda3a 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -72,6 +72,13 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, * @resource_tickets: List of &struct gunyah_vm_resource_ticket * @auth: Authentication mechanism to be used by resource manager when * launching the VM + * @dtb: For tracking dtb configuration when launching the VM + * @dtb.config: Location of the DTB in the guest memory + * @dtb.parcel_start: Guest frame number where the memory parcel that we lent to + * VM (DTB could start in middle of folio; we lend entire + * folio; parcel_start is start of the folio) + * @dtb.parcel_pages: Number of pages lent for the memory parcel + * @dtb.parcel: Data for resource manager to lend the parcel * * Members are grouped by hot path. */ @@ -101,7 +108,11 @@ struct gunyah_vm { struct device *parent; enum gunyah_rm_vm_auth_mechanism auth; - + struct { + struct gunyah_vm_dtb_config config; + u64 parcel_start, parcel_pages; + struct gunyah_rm_mem_parcel parcel; + } dtb; }; int gunyah_vm_parcel_to_paged(struct gunyah_vm *ghvm, diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index 1af4c5ae6bc3..a89d9bedf3e5 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -42,6 +42,20 @@ struct gunyah_create_mem_args { /* * ioctls for gunyah-vm fds (returned by GUNYAH_CREATE_VM) */ + +/** + * struct gunyah_vm_dtb_config - Set the location of the VM's devicetree blob + * @guest_phys_addr: Address of the VM's devicetree in guest memory. + * @size: Maximum size of the devicetree including space for overlays. 
+ * Resource manager applies an overlay to the DTB and dtb_size should + * include room for the overlay. A page of memory is typicaly plenty. + */ +struct gunyah_vm_dtb_config { + __u64 guest_phys_addr; + __u64 size; +}; +#define GUNYAH_VM_SET_DTB_CONFIG _IOW(GUNYAH_IOCTL_TYPE, 0x2, struct gunyah_vm_dtb_config) + #define GUNYAH_VM_START _IO(GUNYAH_IOCTL_TYPE, 0x3) /** From patchwork Tue Jan 9 19:38:04 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761161 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A38A647A5C; Tue, 9 Jan 2024 19:38:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="AiafqSiO" Received: from pps.filterd (m0279871.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409HCxfL023212; Tue, 9 Jan 2024 19:38:07 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=20kzljP3XXAHh/+L/ahQqYgdL0v1bIu2bkQnV2tNRHs =; b=AiafqSiO0HLbkygtBVeLwn7mIJx2BHAqg5lrO/Svn163jQjHcOC+xb+nDCU esyZXhvHXB+EesuvfQpViRbZglB1ZkjSAI6NwKbyZMXejIqF1jU1hCC2z2tKdFE6 0fpLOKx1un1Yv2B/InIyqai+BVTvbqaCN+hpyR3SoCAAT7GMVx5MjkJwcoZxTOeL aezOfD8xmns8GLptJ2YxwyKx3dz/jXSJrinbMVNNM6qUhjDS8Ayf0pyobLMA6wHp rEzS5vPq7faeJMINDxn2XwhJyEkGhTOfdUhw827Bv+sUYFXBlE4W+LIXCl6/xuct KMdyls6U815cKX5NN/fsHp+m7Fg== Received: from nasanppmta01.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh98m8hrr-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:07 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA01.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc6SY003533 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:06 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:05 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:04 -0800 Subject: [PATCH v16 26/34] mm/interval_tree: Export iter_first/iter_next Precedence: bulk X-Mailing-List: devicetree@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-26-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman 
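Tying the preceding patch together: a minimal, illustrative userspace sequence would write the DTB into guest memory that was already mapped with GUNYAH_VM_MAP_MEM, point the driver at it with GUNYAH_VM_SET_DTB_CONFIG, and then issue GUNYAH_VM_START. The helper name below is an assumption for the sketch, not part of the patch.

#include <sys/ioctl.h>
#include <linux/gunyah.h>

/* Tell the driver where the DTB lives in guest memory, then boot the VM. */
static int set_dtb_and_start(int vm_fd, __u64 dtb_guest_addr, __u64 dtb_size)
{
	struct gunyah_vm_dtb_config dtb_config = {
		.guest_phys_addr = dtb_guest_addr,
		/* size should leave room for the overlay RM applies */
		.size = dtb_size,
	};
	int ret;

	ret = ioctl(vm_fd, GUNYAH_VM_SET_DTB_CONFIG, &dtb_config);
	if (ret)
		return ret;

	return ioctl(vm_fd, GUNYAH_VM_START);
}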
X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: PfQLvqRZVKjSZRqkcN_pgu1hNdjTZdHL X-Proofpoint-GUID: PfQLvqRZVKjSZRqkcN_pgu1hNdjTZdHL X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0 impostorscore=0 spamscore=0 mlxscore=0 priorityscore=1501 phishscore=0 malwarescore=0 mlxlogscore=819 suspectscore=0 clxscore=1015 bulkscore=0 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Export vma_interval_tree_iter_first and vma_interval_tree_iter_next for vma_interval_tree_foreach. Signed-off-by: Elliot Berman --- mm/interval_tree.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/mm/interval_tree.c b/mm/interval_tree.c index 32e390c42c53..faa50767496c 100644 --- a/mm/interval_tree.c +++ b/mm/interval_tree.c @@ -24,6 +24,9 @@ INTERVAL_TREE_DEFINE(struct vm_area_struct, shared.rb, unsigned long, shared.rb_subtree_last, vma_start_pgoff, vma_last_pgoff, /* empty */, vma_interval_tree) +EXPORT_SYMBOL_GPL(vma_interval_tree_iter_first); +EXPORT_SYMBOL_GPL(vma_interval_tree_iter_next); + /* Insert node immediately after prev in the interval tree */ void vma_interval_tree_insert_after(struct vm_area_struct *node, struct vm_area_struct *prev, From patchwork Tue Jan 9 19:38:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761158 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2A2E5481D3; Tue, 9 Jan 2024 19:38:32 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="nSo95nxl" Received: from pps.filterd (m0279869.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409GcqSe032231; Tue, 9 Jan 2024 19:38:09 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=R9oS16D14s/n6UKLdxi8B+QIGsh3AN4EVEIgfOWN57I =; b=nSo95nxloAHcDQp3UMRYG+9ys11AoQKKhL68yJYaywuAYLuP+o701wulvxE 5jOvyWxF1RGW0y/s4KXMTQaW0eOng9wMZR8v6zujGGy0XBaneVKoIhdizu2rpytR LqxdMw2BcXUqumcnvXR8QWy6tuw6891MbMrLmny4wLyjDan+vbodY7iAfd1ccKpx bwnoPLxwRaPaNasLlZ4Wp9ByQ/AXlXKbNMiX0kPyMfo8MxPqvLRpxd8Ke3M90rQ2 aDj6oOPVh6RZhtWM21keb68fDjH/W+2JpJDa059W4LKRlrQ6buIACkJQQKrRg0m5 dy783prZo25BQzaVDsnLQ16VZdQ== Received: from nasanppmta02.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9u9gdd6-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:09 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com 
[10.46.141.250]) by NASANPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc86C012062 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:08 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:07 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:07 -0800 Subject: [PATCH v16 29/34] virt: gunyah: Allow userspace to initialize context of primary vCPU Precedence: bulk X-Mailing-List: devicetree@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-29-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: r2Jg0yOIgZRWvcYW8LkdYcE94GP5WljK X-Proofpoint-ORIG-GUID: r2Jg0yOIgZRWvcYW8LkdYcE94GP5WljK X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 impostorscore=0 bulkscore=0 adultscore=0 suspectscore=0 phishscore=0 clxscore=1015 lowpriorityscore=0 malwarescore=0 spamscore=0 mlxlogscore=999 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 RM provides APIs to fill boot context the initial registers upon starting the vCPU. Most importantly, this allows userspace to set the initial PC for the primary vCPU when the VM starts to run. 
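For example (illustrative only, not part of this patch), userspace could program the primary vCPU's entry point with the new ioctl roughly as follows, assuming vm_fd was returned by GUNYAH_CREATE_VM and entry_gpa is the guest physical address of the entry point:

#include <sys/ioctl.h>
#include <linux/gunyah.h>

/* Must be issued before GUNYAH_VM_START, while the VM has no state yet. */
static int set_primary_vcpu_pc(int vm_fd, __u64 entry_gpa)
{
	struct gunyah_vm_boot_context ctx = {
		/* PC is a single register, so its index within REG_SET_PC is 0 */
		.reg = GUNYAH_VM_BOOT_CONTEXT_REG(REG_SET_PC, 0),
		.value = entry_gpa,
	};

	return ioctl(vm_fd, GUNYAH_VM_SET_BOOT_CONTEXT, &ctx);
}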
Signed-off-by: Elliot Berman --- drivers/virt/gunyah/vm_mgr.c | 77 ++++++++++++++++++++++++++++++++++++++++++++ drivers/virt/gunyah/vm_mgr.h | 2 ++ include/uapi/linux/gunyah.h | 23 +++++++++++++ 3 files changed, 102 insertions(+) diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index 3b767eeeb7c2..1f3d29749174 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -395,6 +395,7 @@ static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) mutex_init(&ghvm->resources_lock); INIT_LIST_HEAD(&ghvm->resources); INIT_LIST_HEAD(&ghvm->resource_tickets); + xa_init(&ghvm->boot_context); mt_init(&ghvm->mm); mt_init(&ghvm->bindings); @@ -420,6 +421,66 @@ static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) return ghvm; } +static long gunyah_vm_set_boot_context(struct gunyah_vm *ghvm, + struct gunyah_vm_boot_context *boot_ctx) +{ + u8 reg_set, reg_index; /* to check values are reasonable */ + int ret; + + reg_set = (boot_ctx->reg >> GUNYAH_VM_BOOT_CONTEXT_REG_SHIFT) & 0xff; + reg_index = boot_ctx->reg & 0xff; + + switch (reg_set) { + case REG_SET_X: + if (reg_index > 31) + return -EINVAL; + break; + case REG_SET_PC: + if (reg_index) + return -EINVAL; + break; + case REG_SET_SP: + if (reg_index > 2) + return -EINVAL; + break; + default: + return -EINVAL; + } + + ret = down_read_interruptible(&ghvm->status_lock); + if (ret) + return ret; + + if (ghvm->vm_status != GUNYAH_RM_VM_STATUS_NO_STATE) { + ret = -EINVAL; + goto out; + } + + ret = xa_err(xa_store(&ghvm->boot_context, boot_ctx->reg, + (void *)boot_ctx->value, GFP_KERNEL)); +out: + up_read(&ghvm->status_lock); + return ret; +} + +static inline int gunyah_vm_fill_boot_context(struct gunyah_vm *ghvm) +{ + unsigned long reg_set, reg_index, id; + void *entry; + int ret; + + xa_for_each(&ghvm->boot_context, id, entry) { + reg_set = (id >> GUNYAH_VM_BOOT_CONTEXT_REG_SHIFT) & 0xff; + reg_index = id & 0xff; + ret = gunyah_rm_vm_set_boot_context( + ghvm->rm, ghvm->vmid, reg_set, reg_index, (u64)entry); + if (ret) + return ret; + } + + return 0; +} + static int gunyah_vm_start(struct gunyah_vm *ghvm) { struct gunyah_rm_hyp_resources *resources; @@ -496,6 +557,13 @@ static int gunyah_vm_start(struct gunyah_vm *ghvm) } ghvm->vm_status = GUNYAH_RM_VM_STATUS_READY; + ret = gunyah_vm_fill_boot_context(ghvm); + if (ret) { + dev_warn(ghvm->parent, "Failed to setup boot context: %d\n", + ret); + goto err; + } + ret = gunyah_rm_get_hyp_resources(ghvm->rm, ghvm->vmid, &resources); if (ret) { dev_warn(ghvm->parent, @@ -614,6 +682,14 @@ static long gunyah_vm_ioctl(struct file *filp, unsigned int cmd, return gunyah_gmem_modify_mapping(ghvm, &args); } + case GUNYAH_VM_SET_BOOT_CONTEXT: { + struct gunyah_vm_boot_context boot_ctx; + + if (copy_from_user(&boot_ctx, argp, sizeof(boot_ctx))) + return -EFAULT; + + return gunyah_vm_set_boot_context(ghvm, &boot_ctx); + } default: r = -ENOTTY; break; @@ -699,6 +775,7 @@ static void _gunyah_vm_put(struct kref *kref) "Failed to deallocate vmid: %d\n", ret); } + xa_destroy(&ghvm->boot_context); gunyah_rm_put(ghvm->rm); kfree(ghvm); } diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index 474ac866d237..4a436c3e435c 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -79,6 +79,7 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, * folio; parcel_start is start of the folio) * @dtb.parcel_pages: Number of pages lent for the memory parcel * @dtb.parcel: Data for resource manager 
to lend the parcel + * @boot_context: Requested initial boot context to set when launching the VM * * Members are grouped by hot path. */ @@ -113,6 +114,7 @@ struct gunyah_vm { u64 parcel_start, parcel_pages; struct gunyah_rm_mem_parcel parcel; } dtb; + struct xarray boot_context; }; int gunyah_vm_parcel_to_paged(struct gunyah_vm *ghvm, diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index a89d9bedf3e5..574116f54472 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -142,6 +142,29 @@ struct gunyah_map_mem_args { #define GUNYAH_VM_MAP_MEM _IOW(GUNYAH_IOCTL_TYPE, 0x9, struct gunyah_map_mem_args) +enum gunyah_vm_boot_context_reg { + REG_SET_X = 0, + REG_SET_PC = 1, + REG_SET_SP = 2, +}; + +#define GUNYAH_VM_BOOT_CONTEXT_REG_SHIFT 8 +#define GUNYAH_VM_BOOT_CONTEXT_REG(reg, idx) (((reg & 0xff) << GUNYAH_VM_BOOT_CONTEXT_REG_SHIFT) |\ + (idx & 0xff)) + +/** + * struct gunyah_vm_boot_context - Set an initial register for the VM + * @reg: Register to set. See GUNYAH_VM_BOOT_CONTEXT_REG_* macros + * @reserved: reserved for alignment + * @value: value to fill in the register + */ +struct gunyah_vm_boot_context { + __u32 reg; + __u32 reserved; + __u64 value; +}; +#define GUNYAH_VM_SET_BOOT_CONTEXT _IOW(GUNYAH_IOCTL_TYPE, 0xa, struct gunyah_vm_boot_context) + /* * ioctls for vCPU fds */ From patchwork Tue Jan 9 19:38:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761162 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AB7C54776D; Tue, 9 Jan 2024 19:38:29 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="claOXthq" Received: from pps.filterd (m0279862.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409Ik2LF020971; Tue, 9 Jan 2024 19:38:09 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=O0LTS8afF6938T/lQfplnM+8ELHnWjzwe6A6+1vfdHw =; b=claOXthqjhDnvl4cB5HE5bAjfN2ymsPboQB9NJGN3MQOcOErbzT1GIMudKB hNfHsz+SVvtgg3h2/7Eie/+9wleuSNBTiQE/pAebqiF1Hq4ZbB7udWqqI/tzg9gr CVcG5Z53O3tiA+E2k+2nK2KwKPP0C96pUkwlsCMMuzdfW9tCUP9E0AexLk5EKn5k jRWhYtIJhwmfmqA/BsWXkGJPbUJpqFYwaQqMW/3xvaaL8fSWabygVNSLRb46bDg/ TzbMjxBlVjsZL2aGHKmAJ0x7ar1iawUEVonUh4BtDLm/aeVEPbcIXfjTV8tOBXas k6+R1O4FGr3KNj7yKRblVaRYcZw== Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh3me185q-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:09 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc9DV024695 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:09 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com 
(10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:08 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:08 -0800 Subject: [PATCH v16 30/34] virt: gunyah: Add hypercalls for sending doorbell Precedence: bulk X-Mailing-List: devicetree@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-30-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: 6mB1HQ53TyGWybWApgoJAF8XcYPAkg4i X-Proofpoint-GUID: 6mB1HQ53TyGWybWApgoJAF8XcYPAkg4i X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 spamscore=0 priorityscore=1501 mlxlogscore=605 lowpriorityscore=0 mlxscore=0 suspectscore=0 impostorscore=0 clxscore=1015 adultscore=0 malwarescore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Gunyah doorbells allow a virtual machine to signal another using interrupts. Add the hypercalls needed to assert the interrupt. 
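For illustration (not part of this patch), a kernel-side user that already holds a doorbell capability ID from the resource manager could assert it roughly as below; the helper name and the choice of a single bit are assumptions for the sketch:

#include <linux/bits.h>
#include <linux/errno.h>
#include <linux/gunyah.h>

/* Set bit "bit" on the doorbell identified by capid to signal the receiver. */
static int example_ring_doorbell(u64 capid, unsigned int bit)
{
	enum gunyah_error gunyah_error;
	u64 old_flags;

	gunyah_error = gunyah_hypercall_bell_send(capid, BIT_ULL(bit), &old_flags);
	if (gunyah_error != GUNYAH_ERROR_OK)
		return -EIO;

	/* old_flags reports the bits that were already pending before this send */
	return 0;
}

gunyah_hypercall_bell_set_mask() configures which doorbell bits raise the receiver's interrupt and which bits are automatically acknowledged when that interrupt is acked.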
Reviewed-by: Alex Elder Signed-off-by: Elliot Berman --- arch/arm64/gunyah/gunyah_hypercall.c | 38 ++++++++++++++++++++++++++++++++++++ include/linux/gunyah.h | 5 +++++ 2 files changed, 43 insertions(+) diff --git a/arch/arm64/gunyah/gunyah_hypercall.c b/arch/arm64/gunyah/gunyah_hypercall.c index 38403dc28c66..3c2672d683ae 100644 --- a/arch/arm64/gunyah/gunyah_hypercall.c +++ b/arch/arm64/gunyah/gunyah_hypercall.c @@ -37,6 +37,8 @@ EXPORT_SYMBOL_GPL(arch_is_gunyah_guest); /* clang-format off */ #define GUNYAH_HYPERCALL_HYP_IDENTIFY GUNYAH_HYPERCALL(0x8000) +#define GUNYAH_HYPERCALL_BELL_SEND GUNYAH_HYPERCALL(0x8012) +#define GUNYAH_HYPERCALL_BELL_SET_MASK GUNYAH_HYPERCALL(0x8015) #define GUNYAH_HYPERCALL_MSGQ_SEND GUNYAH_HYPERCALL(0x801B) #define GUNYAH_HYPERCALL_MSGQ_RECV GUNYAH_HYPERCALL(0x801C) #define GUNYAH_HYPERCALL_ADDRSPACE_MAP GUNYAH_HYPERCALL(0x802B) @@ -64,6 +66,42 @@ void gunyah_hypercall_hyp_identify( } EXPORT_SYMBOL_GPL(gunyah_hypercall_hyp_identify); +/** + * gunyah_hypercall_bell_send() - Assert a gunyah doorbell + * @capid: capability ID of the doorbell + * @new_flags: bits to set on the doorbell + * @old_flags: Filled with the bits set before the send call if return value is GUNYAH_ERROR_OK + */ +enum gunyah_error gunyah_hypercall_bell_send(u64 capid, u64 new_flags, u64 *old_flags) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_hvc(GUNYAH_HYPERCALL_BELL_SEND, capid, new_flags, 0, &res); + + if (res.a0 == GUNYAH_ERROR_OK && old_flags) + *old_flags = res.a1; + + return res.a0; +} +EXPORT_SYMBOL_GPL(gunyah_hypercall_bell_send); + +/** + * gunyah_hypercall_bell_set_mask() - Set masks on a Gunyah doorbell + * @capid: capability ID of the doorbell + * @enable_mask: which bits trigger the receiver interrupt + * @ack_mask: which bits are automatically acknowledged when the receiver + * interrupt is ack'd + */ +enum gunyah_error gunyah_hypercall_bell_set_mask(u64 capid, u64 enable_mask, u64 ack_mask) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_hvc(GUNYAH_HYPERCALL_BELL_SET_MASK, capid, enable_mask, ack_mask, 0, &res); + + return res.a0; +} +EXPORT_SYMBOL_GPL(gunyah_hypercall_bell_set_mask); + /** * gunyah_hypercall_msgq_send() - Send a buffer on a message queue * @capid: capability ID of the message queue to add message diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index 32ce578220ca..67cb9350ab9e 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -346,6 +346,11 @@ gunyah_api_version(const struct gunyah_hypercall_hyp_identify_resp *gunyah_api) void gunyah_hypercall_hyp_identify( struct gunyah_hypercall_hyp_identify_resp *hyp_identity); +enum gunyah_error gunyah_hypercall_bell_send(u64 capid, u64 new_flags, + u64 *old_flags); +enum gunyah_error gunyah_hypercall_bell_set_mask(u64 capid, u64 enable_mask, + u64 ack_mask); + /* Immediately raise RX vIRQ on receiver VM */ #define GUNYAH_HYPERCALL_MSGQ_TX_FLAGS_PUSH BIT(0) From patchwork Tue Jan 9 19:38:12 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761160 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 00507481C7; Tue, 9 Jan 2024 19:38:32 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: 
smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="meB4l/v4" Received: from pps.filterd (m0279872.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409IPgvb009679; Tue, 9 Jan 2024 19:38:13 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=9IXiRln2Duun1O80Yl8EsvifJ9A2nUavg62PHcy0yVQ =; b=meB4l/v4NDsiJs4wRPv7TjhWtf35qjgjlmEZdoe6KtorlH+6+aiyx81qcU+ 2PSK4XQz5UnVSN7jdEk3INrccSurEXaoUxv6HwXV4Wah+dTHkNcoBGW9Vt1tUYTn D2ystJCaVc519dsmgvFk6EuCPq7DBthg36dkc45Bl06AFbeeR9wVfuQNr3KbslTq bbOqGJJ92LXThkRfDH4ZqQ/f1fPny87Fohj//lhFl6ybLPMoUFkX4+F7nnGa8yq/ agQE6NwlTYsZ8lCvPa6yFPR//EvHF5c7jYZR+kKK+EBBCLDnXnWXyd8EgCxS1Bbc 5zZpkb70VQJq9PwSBRJrKT+xo6Q== Received: from nasanppmta01.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh85t0pp9-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:12 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA01.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409JcBhQ003587 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:11 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:10 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:12 -0800 Subject: [PATCH v16 34/34] MAINTAINERS: Add Gunyah hypervisor drivers section Precedence: bulk X-Mailing-List: devicetree@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-34-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: kzM0F7NTMKnUvtJCJV2bCf88yMAs95Pq X-Proofpoint-GUID: kzM0F7NTMKnUvtJCJV2bCf88yMAs95Pq X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 bulkscore=0 malwarescore=0 suspectscore=0 mlxlogscore=744 clxscore=1015 priorityscore=1501 adultscore=0 impostorscore=0 phishscore=0 spamscore=0 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Add myself and Prakruthi as maintainers of Gunyah hypervisor drivers. 
Reviewed-by: Alex Elder Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman --- MAINTAINERS | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index fa67e2624723..64f70ef1ef91 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -9306,6 +9306,18 @@ L: linux-efi@vger.kernel.org S: Maintained F: block/partitions/efi.* +GUNYAH HYPERVISOR DRIVER +M: Elliot Berman +M: Prakruthi Deepak Heragu +L: linux-arm-msm@vger.kernel.org +S: Supported +F: Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml +F: Documentation/virt/gunyah/ +F: arch/arm64/gunyah/ +F: drivers/virt/gunyah/ +F: include/linux/gunyah*.h +K: gunyah + HABANALABS PCI DRIVER M: Oded Gabbay L: dri-devel@lists.freedesktop.org