From patchwork Fri Jul 2 09:58:46 2021
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 469500
From: Shameer Kolothum
Subject: [RFC v2 1/4] hisi-acc-vfio-pci: add new vfio_pci driver for HiSilicon ACC devices
Date: Fri, 2 Jul 2021 10:58:46 +0100
Message-ID: <20210702095849.1610-2-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210702095849.1610-1-shameerali.kolothum.thodi@huawei.com>
References: <20210702095849.1610-1-shameerali.kolothum.thodi@huawei.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Add a vendor-specific vfio_pci driver for HiSilicon ACC devices. This
will be extended in follow-up patches to add support for the vfio live
migration feature.

Signed-off-by: Shameer Kolothum
---
 drivers/vfio/pci/Kconfig             |   9 +++
 drivers/vfio/pci/Makefile            |   2 +
 drivers/vfio/pci/hisi_acc_vfio_pci.c | 100 +++++++++++++++++++++++++++
 3 files changed, 111 insertions(+)
 create mode 100644 drivers/vfio/pci/hisi_acc_vfio_pci.c
--
2.17.1

diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig
index 9cdef46dd299..709807c28153 100644
--- a/drivers/vfio/pci/Kconfig
+++ b/drivers/vfio/pci/Kconfig
@@ -57,3 +57,12 @@ config MLX5_VFIO_PCI
 	  framework.
 
 	  If you don't know what to do here, say N.
+
+config HISI_ACC_VFIO_PCI
+	tristate "VFIO support for HiSilicon ACC devices"
+	depends on ARM64 && VFIO_PCI_CORE
+	help
+	  This provides generic PCI support for HiSilicon devices using the VFIO
+	  framework.
+
+	  If you don't know what to do here, say N.
diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
index a0df9c2a4bd9..d1de3e81921f 100644
--- a/drivers/vfio/pci/Makefile
+++ b/drivers/vfio/pci/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_VFIO_PCI_CORE) += vfio-pci-core.o
 obj-$(CONFIG_VFIO_PCI) += vfio-pci.o
 obj-$(CONFIG_MLX5_VFIO_PCI) += mlx5-vfio-pci.o
+obj-$(CONFIG_HISI_ACC_VFIO_PCI) += hisi-acc-vfio-pci.o
 
 vfio-pci-core-y := vfio_pci_core.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o
 vfio-pci-core-$(CONFIG_S390) += vfio_pci_zdev.o
@@ -11,3 +12,4 @@ vfio-pci-y := vfio_pci.o
 vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
 
 mlx5-vfio-pci-y := mlx5_vfio_pci.o
+hisi-acc-vfio-pci-y := hisi_acc_vfio_pci.o
diff --git a/drivers/vfio/pci/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisi_acc_vfio_pci.c
new file mode 100644
index 000000000000..a9e173098ab5
--- /dev/null
+++ b/drivers/vfio/pci/hisi_acc_vfio_pci.c
@@ -0,0 +1,100 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, HiSilicon Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+static int hisi_acc_vfio_pci_open(struct vfio_device *core_vdev)
+{
+	struct vfio_pci_core_device *vdev =
+		container_of(core_vdev, struct vfio_pci_core_device, vdev);
+	int ret;
+
+	lockdep_assert_held(&core_vdev->reflck->lock);
+
+	ret = vfio_pci_core_enable(vdev);
+	if (ret)
+		return ret;
+
+	vfio_pci_core_finish_enable(vdev);
+
+	return 0;
+}
+
+static const struct vfio_device_ops hisi_acc_vfio_pci_ops = {
+	.name		= "hisi-acc-vfio-pci",
+	.open		= hisi_acc_vfio_pci_open,
+	.release	= vfio_pci_core_release,
+	.ioctl		= vfio_pci_core_ioctl,
+	.read		= vfio_pci_core_read,
+	.write		= vfio_pci_core_write,
+	.mmap		= vfio_pci_core_mmap,
+	.request	= vfio_pci_core_request,
+	.match		= vfio_pci_core_match,
+	.reflck_attach	= vfio_pci_core_reflck_attach,
+};
+
+static int hisi_acc_vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct vfio_pci_core_device *vdev;
+	int ret;
+
+	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
+	if (!vdev)
+		return -ENOMEM;
+
+	ret = vfio_pci_core_register_device(vdev, pdev, &hisi_acc_vfio_pci_ops);
+	if (ret)
+		goto out_free;
+
+	dev_set_drvdata(&pdev->dev, vdev);
+
+	return 0;
+
+out_free:
+	kfree(vdev);
+	return ret;
+}
+
+static void hisi_acc_vfio_pci_remove(struct pci_dev *pdev)
+{
+	struct vfio_pci_core_device *vdev = dev_get_drvdata(&pdev->dev);
+
+	vfio_pci_core_unregister_device(vdev);
+	kfree(vdev);
+}
+
+static const struct pci_device_id hisi_acc_vfio_pci_table[] = {
+	{ PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_HUAWEI, 0xa256) }, /* SEC VF */
+	{ PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_HUAWEI, 0xa259) }, /* HPRE VF */
+	{ PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_HUAWEI, 0xa251) }, /* ZIP VF */
+	{ 0, }
+};
+
+MODULE_DEVICE_TABLE(pci, hisi_acc_vfio_pci_table);
+
+static struct pci_driver hisi_acc_vfio_pci_driver = {
+	.name			= "hisi-acc-vfio-pci",
+	.id_table		= hisi_acc_vfio_pci_table,
+	.probe			= hisi_acc_vfio_pci_probe,
+	.remove			= hisi_acc_vfio_pci_remove,
+#ifdef CONFIG_PCI_IOV
+	.sriov_configure	= vfio_pci_core_sriov_configure,
+#endif
+	.err_handler		= &vfio_pci_core_err_handlers,
+};
+
+module_pci_driver(hisi_acc_vfio_pci_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Liu Longfang ");
+MODULE_AUTHOR("Shameer Kolothum ");
+MODULE_DESCRIPTION("HiSilicon VFIO PCI - Generic VFIO PCI driver for HiSilicon ACC device family");
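For readers unfamiliar with how such a variant driver is consumed, the following is
a minimal userspace sketch (illustrative only, not part of this patch) of the standard
VFIO container/group/device flow that ends up on a device fd served by
hisi-acc-vfio-pci. The IOMMU group number (42) and the VF address (0000:75:00.1)
are placeholders.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

int main(void)
{
	struct vfio_group_status gstatus = { .argsz = sizeof(gstatus) };
	struct vfio_device_info dinfo = { .argsz = sizeof(dinfo) };
	int container, group, device;

	container = open("/dev/vfio/vfio", O_RDWR);
	if (container < 0 || ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION)
		return 1;

	group = open("/dev/vfio/42", O_RDWR);	/* placeholder IOMMU group */
	if (group < 0)
		return 1;

	ioctl(group, VFIO_GROUP_GET_STATUS, &gstatus);
	if (!(gstatus.flags & VFIO_GROUP_FLAGS_VIABLE))
		return 1;

	/* Attach the group to a container and select the Type1 IOMMU backend */
	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

	/* This fd is now served by the hisi-acc-vfio-pci variant driver */
	device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:75:00.1");
	if (device < 0)
		return 1;

	ioctl(device, VFIO_DEVICE_GET_INFO, &dinfo);
	printf("regions: %u, irqs: %u\n", dinfo.num_regions, dinfo.num_irqs);

	return 0;
}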
From patchwork Fri Jul 2 09:58:47 2021
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 469501
From: Shameer Kolothum
Subject: [RFC v2 2/4] hisi_acc_vfio_pci: Override ioctl method to limit BAR2 region size
Date: Fri, 2 Jul 2021 10:58:47 +0100
Message-ID: <20210702095849.1610-3-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210702095849.1610-1-shameerali.kolothum.thodi@huawei.com>
References: <20210702095849.1610-1-shameerali.kolothum.thodi@huawei.com>
X-Mailing-List: linux-crypto@vger.kernel.org

The HiSilicon ACC VF device BAR2 region consists of both functional
register space and migration control register space. From a security
point of view, it is not advisable to expose the migration control
region to the Guest. Hence, hide the migration region and report only
the functional register space to the Guest.
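For illustration (not part of this patch): with the ioctl override below in place, a
userspace query of BAR2 sees only the functional half of the region, and any mapping
is naturally limited to that size. The sketch assumes 'device' is a VFIO device fd
obtained through the usual group/container flow.

#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

static void *map_acc_bar2(int device)
{
	struct vfio_region_info info = { .argsz = sizeof(info) };
	void *mmio;

	info.index = VFIO_PCI_BAR2_REGION_INDEX;
	if (ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &info))
		return NULL;

	/* info.size is now half of the physical BAR: 32K out of 64K */
	printf("BAR2 size reported: 0x%llx\n", (unsigned long long)info.size);

	if (!(info.flags & VFIO_REGION_INFO_FLAG_MMAP))
		return NULL;

	/* Map only what the driver exposes; the migration half stays hidden */
	mmio = mmap(NULL, info.size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    device, info.offset);

	return mmio == MAP_FAILED ? NULL : mmio;
}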
Signed-off-by: Shameer Kolothum
---
 drivers/vfio/pci/hisi_acc_vfio_pci.c | 42 +++++++++++++++++++++++++++-
 1 file changed, 41 insertions(+), 1 deletion(-)
--
2.17.1

diff --git a/drivers/vfio/pci/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisi_acc_vfio_pci.c
index a9e173098ab5..d57cf6d7adaf 100644
--- a/drivers/vfio/pci/hisi_acc_vfio_pci.c
+++ b/drivers/vfio/pci/hisi_acc_vfio_pci.c
@@ -12,6 +12,46 @@
 #include
 #include
 
+static long hisi_acc_vfio_pci_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
+				    unsigned long arg)
+{
+	struct vfio_pci_core_device *vdev =
+		container_of(core_vdev, struct vfio_pci_core_device, vdev);
+
+	if (cmd == VFIO_DEVICE_GET_REGION_INFO) {
+		struct pci_dev *pdev = vdev->pdev;
+		struct vfio_region_info info;
+		unsigned long minsz;
+
+		minsz = offsetofend(struct vfio_region_info, offset);
+
+		if (copy_from_user(&info, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (info.argsz < minsz)
+			return -EINVAL;
+
+		if (info.index == VFIO_PCI_BAR2_REGION_INDEX) {
+			info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
+
+			/*
+			 * ACC VF dev BAR2 region(64K) consists of both functional
+			 * register space and migration control register space.
+			 * Report only the first 32K(functional region) to Guest.
+			 */
+			info.size = pci_resource_len(pdev, info.index) / 2;
+
+			info.flags = VFIO_REGION_INFO_FLAG_READ |
+					VFIO_REGION_INFO_FLAG_WRITE |
+					VFIO_REGION_INFO_FLAG_MMAP;
+
+			return copy_to_user((void __user *)arg, &info, minsz) ?
+					    -EFAULT : 0;
+		}
+	}
+	return vfio_pci_core_ioctl(core_vdev, cmd, arg);
+}
+
 static int hisi_acc_vfio_pci_open(struct vfio_device *core_vdev)
 {
 	struct vfio_pci_core_device *vdev =
@@ -33,7 +73,7 @@ static const struct vfio_device_ops hisi_acc_vfio_pci_ops = {
 	.name		= "hisi-acc-vfio-pci",
 	.open		= hisi_acc_vfio_pci_open,
 	.release	= vfio_pci_core_release,
-	.ioctl		= vfio_pci_core_ioctl,
+	.ioctl		= hisi_acc_vfio_pci_ioctl,
 	.read		= vfio_pci_core_read,
 	.write		= vfio_pci_core_write,
 	.mmap		= vfio_pci_core_mmap,
From patchwork Fri Jul 2 09:58:48 2021
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 469502
From: Shameer Kolothum
Subject: [RFC v2 3/4] crypto: hisilicon/qm - Export mailbox functions for common use
Date: Fri, 2 Jul 2021 10:58:48 +0100
Message-ID: <20210702095849.1610-4-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210702095849.1610-1-shameerali.kolothum.thodi@huawei.com>
References: <20210702095849.1610-1-shameerali.kolothum.thodi@huawei.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Longfang Liu

Export the QM mailbox functions so that they can be used from the
HiSilicon ACC vfio live migration driver introduced in a follow-up
patch.
Signed-off-by: Longfang Liu
Signed-off-by: Shameer Kolothum
---
 drivers/crypto/hisilicon/qm.c | 8 +++++---
 drivers/crypto/hisilicon/qm.h | 4 ++++
 2 files changed, 9 insertions(+), 3 deletions(-)
--
2.17.1

diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
index ce439a0c66c9..87fc0199705e 100644
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
@@ -492,7 +492,7 @@ static bool qm_qp_avail_state(struct hisi_qm *qm, struct hisi_qp *qp,
 }
 
 /* return 0 mailbox ready, -ETIMEDOUT hardware timeout */
-static int qm_wait_mb_ready(struct hisi_qm *qm)
+int qm_wait_mb_ready(struct hisi_qm *qm)
 {
 	u32 val;
 
@@ -500,6 +500,7 @@ static int qm_wait_mb_ready(struct hisi_qm *qm)
 					 val, !((val >> QM_MB_BUSY_SHIFT) & 0x1), POLL_PERIOD,
 					 POLL_TIMEOUT);
 }
+EXPORT_SYMBOL_GPL(qm_wait_mb_ready);
 
 /* 128 bit should be written to hardware at one time to trigger a mailbox */
 static void qm_mb_write(struct hisi_qm *qm, const void *src)
@@ -523,8 +524,8 @@ static void qm_mb_write(struct hisi_qm *qm, const void *src)
 		     : "memory");
 }
 
-static int qm_mb(struct hisi_qm *qm, u8 cmd, dma_addr_t dma_addr, u16 queue,
-		 bool op)
+int qm_mb(struct hisi_qm *qm, u8 cmd, dma_addr_t dma_addr, u16 queue,
+	  bool op)
 {
 	struct qm_mailbox mailbox;
 	int ret = 0;
@@ -563,6 +564,7 @@ static int qm_mb(struct hisi_qm *qm, u8 cmd, dma_addr_t dma_addr, u16 queue,
 	atomic64_inc(&qm->debug.dfx.mb_err_cnt);
 	return ret;
 }
+EXPORT_SYMBOL_GPL(qm_mb);
 
 static void qm_db_v1(struct hisi_qm *qm, u16 qn, u8 cmd, u16 index, u8 priority)
 {
diff --git a/drivers/crypto/hisilicon/qm.h b/drivers/crypto/hisilicon/qm.h
index acefdf8b3a50..18b010d5452d 100644
--- a/drivers/crypto/hisilicon/qm.h
+++ b/drivers/crypto/hisilicon/qm.h
@@ -396,6 +396,10 @@ pci_ers_result_t hisi_qm_dev_slot_reset(struct pci_dev *pdev);
 void hisi_qm_reset_prepare(struct pci_dev *pdev);
 void hisi_qm_reset_done(struct pci_dev *pdev);
 
+int qm_wait_mb_ready(struct hisi_qm *qm);
+int qm_mb(struct hisi_qm *qm, u8 cmd, dma_addr_t dma_addr, u16 queue,
+	  bool op);
+
 struct hisi_acc_sgl_pool;
 struct hisi_acc_hw_sgl *hisi_acc_sg_buf_map_to_hw_sgl(struct device *dev,
 	struct scatterlist *sgl, struct hisi_acc_sgl_pool *pool,
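As an illustration of what the export enables (not part of this series), the sketch
below shows how an external module could drive the QM mailbox through the newly
exported helpers. It mirrors the qm_get_sqc() helper added in patch 4/4: issue
QM_MB_CMD_SQC_BT as a read operation (op == 1) and collect the 64-bit result from
the mailbox data registers; the command value and register offsets are the ones
used in that patch.

#include <linux/io.h>
#include <linux/types.h>
#include "qm.h"		/* provides hisi_qm, qm_mb(), qm_wait_mb_ready() */

#define QM_MB_CMD_SQC_BT	0x4	/* command/offsets as used in patch 4/4 */
#define QM_MB_CMD_DATA_ADDR_L	0x304
#define QM_MB_CMD_DATA_ADDR_H	0x308

static int example_read_sqc_base(struct hisi_qm *qm, u64 *sqc_dma)
{
	int ret;

	/* Bail out if the mailbox is still busy with a previous command */
	if (qm_wait_mb_ready(qm))
		return -EBUSY;

	/* op == 1 requests a read; the result lands in the data registers */
	ret = qm_mb(qm, QM_MB_CMD_SQC_BT, 0, 0, 1);
	if (ret)
		return ret;

	*sqc_dma = readl(qm->io_base + QM_MB_CMD_DATA_ADDR_L) |
		   ((u64)readl(qm->io_base + QM_MB_CMD_DATA_ADDR_H) << 32);

	return 0;
}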
From patchwork Fri Jul 2 09:58:49 2021
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 469503
From: Shameer Kolothum
Subject: [RFC v2 4/4] hisi_acc_vfio_pci: Add support for vfio live migration
Date: Fri, 2 Jul 2021 10:58:49 +0100
Message-ID: <20210702095849.1610-5-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210702095849.1610-1-shameerali.kolothum.thodi@huawei.com>
References: <20210702095849.1610-1-shameerali.kolothum.thodi@huawei.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Longfang Liu

VMs assigned HiSilicon ACC VF devices can now perform live migration,
provided the VF devices are bound to the hisi-acc-vfio-pci driver.
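For context (illustrative, not part of the patch), this is roughly the save-side
handshake a userspace driver performs against the migration region this patch adds,
using the vfio_device_migration_info layout from linux/vfio.h. 'device' is an open
VFIO device fd and 'mig_off' is the file offset of the migration region, assumed to
have been discovered beforehand via VFIO_DEVICE_GET_REGION_INFO and its region-type
capability chain; only the stop-and-copy phase is shown.

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <linux/vfio.h>

#define MIG_FIELD(f)	offsetof(struct vfio_device_migration_info, f)

static int save_device_state(int device, off_t mig_off, int out_fd)
{
	uint32_t state = VFIO_DEVICE_STATE_SAVING;	/* stop-and-copy phase */
	uint64_t pending, data_offset, data_size;
	char *buf;

	/* Writing device_state ends up in hisi_acc_vf_set_device_state() */
	if (pwrite(device, &state, sizeof(state),
		   mig_off + MIG_FIELD(device_state)) != sizeof(state))
		return -1;

	for (;;) {
		pread(device, &pending, sizeof(pending),
		      mig_off + MIG_FIELD(pending_bytes));
		if (!pending)
			break;

		pread(device, &data_offset, sizeof(data_offset),
		      mig_off + MIG_FIELD(data_offset));
		pread(device, &data_size, sizeof(data_size),
		      mig_off + MIG_FIELD(data_size));

		buf = malloc(data_size);
		if (!buf)
			return -1;

		/* Reads of the data area are served by hisi_acc_vf_data_transfer() */
		pread(device, buf, data_size, mig_off + data_offset);
		write(out_fd, buf, data_size);
		free(buf);
	}

	return 0;
}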
Signed-off-by: Longfang Liu Signed-off-by: Shameer Kolothum --- drivers/vfio/pci/Kconfig | 10 +- drivers/vfio/pci/hisi_acc_vfio_pci.c | 1023 +++++++++++++++++++++++++- drivers/vfio/pci/hisi_acc_vfio_pci.h | 144 ++++ 3 files changed, 1170 insertions(+), 7 deletions(-) create mode 100644 drivers/vfio/pci/hisi_acc_vfio_pci.h -- 2.17.1 diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig index 709807c28153..25dec1bb7741 100644 --- a/drivers/vfio/pci/Kconfig +++ b/drivers/vfio/pci/Kconfig @@ -59,10 +59,14 @@ config MLX5_VFIO_PCI If you don't know what to do here, say N. config HISI_ACC_VFIO_PCI - tristate "VFIO support for HiSilicon ACC devices" + tristate "VFIO live migration support for HiSilicon ACC devices" depends on ARM64 && VFIO_PCI_CORE + select CRYPTO_DEV_HISI_QM + depends on PCI && PCI_MSI + depends on UACCE || UACCE=n + depends on ACPI help - This provides generic PCI support for HiSilicon devices using the VFIO - framework. + This provides live migration support for HiSilicon ACC devices + using the VFIO framework. If you don't know what to do here, say N. diff --git a/drivers/vfio/pci/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisi_acc_vfio_pci.c index d57cf6d7adaf..f7906a151def 100644 --- a/drivers/vfio/pci/hisi_acc_vfio_pci.c +++ b/drivers/vfio/pci/hisi_acc_vfio_pci.c @@ -12,6 +12,1015 @@ #include #include +#include "hisi_acc_vfio_pci.h" + +/* return 0 VM acc device ready, -ETIMEDOUT hardware timeout */ +static int qm_wait_dev_ready(struct hisi_qm *qm) +{ + u32 val; + + return readl_relaxed_poll_timeout(qm->io_base + QM_VF_STATE, + val, !(val & 0x1), MB_POLL_PERIOD_US, + MB_POLL_TIMEOUT_US); +} + +/* + * Each state Reg is checked 100 times, + * with a delay of 100 microseconds after each check + */ +static u32 acc_check_reg_state(struct hisi_qm *qm, u32 regs) +{ + int check_times = 0; + u32 state; + + state = readl(qm->io_base + regs); + while (state && check_times < ERROR_CHECK_TIMEOUT) { + udelay(CHECK_DELAY_TIME); + state = readl(qm->io_base + regs); + check_times++; + } + + return state; +} + +/* Check the PF's RAS state and Function INT state */ +static int qm_check_int_state(struct acc_vf_migration *acc_vf_dev) +{ + struct hisi_qm *vfqm = acc_vf_dev->vf_qm; + struct hisi_qm *qm = acc_vf_dev->pf_qm; + struct pci_dev *vf_pdev = acc_vf_dev->vf_dev; + struct device *dev = &qm->pdev->dev; + u32 state; + + /* Check RAS state */ + state = acc_check_reg_state(qm, QM_ABNORMAL_INT_STATUS); + if (state) { + dev_err(dev, "failed to check QM RAS state!\n"); + return -EBUSY; + } + + /* Check Function Communication state between PF and VF */ + state = acc_check_reg_state(vfqm, QM_IFC_INT_STATUS); + if (state) { + dev_err(dev, "failed to check QM IFC INT state!\n"); + return -EBUSY; + } + state = acc_check_reg_state(vfqm, QM_IFC_INT_SET_V); + if (state) { + dev_err(dev, "failed to check QM IFC INT SET state!\n"); + return -EBUSY; + } + + /* Check submodule task state */ + switch (vf_pdev->device) { + case HISI_SEC_VF_DEV_ID: + state = acc_check_reg_state(qm, SEC_CORE_INT_STATUS); + if (state) { + dev_err(dev, "failed to check QM SEC Core INT state!\n"); + return -EBUSY; + } + return 0; + case HISI_HPRE_VF_DEV_ID: + state = acc_check_reg_state(qm, HPRE_HAC_INT_STATUS); + if (state) { + dev_err(dev, "failed to check QM HPRE HAC INT state!\n"); + return -EBUSY; + } + return 0; + case HISI_ZIP_VF_DEV_ID: + state = acc_check_reg_state(qm, HZIP_CORE_INT_STATUS); + if (state) { + dev_err(dev, "failed to check QM ZIP Core INT state!\n"); + return -EBUSY; + } + return 0; + default: + 
dev_err(dev, "failed to detect acc module type!\n"); + return -EINVAL; + } +} + +static int qm_read_reg(struct hisi_qm *qm, u32 reg_addr, + u32 *data, u8 nums) +{ + int i; + + if (nums < 1 || nums > QM_REGS_MAX_LEN) + return -EINVAL; + + for (i = 0; i < nums; i++) { + data[i] = readl(qm->io_base + reg_addr); + reg_addr += QM_REG_ADDR_OFFSET; + } + + return 0; +} + +static int qm_write_reg(struct hisi_qm *qm, u32 reg, + u32 *data, u8 nums) +{ + int i; + + if (nums < 1 || nums > QM_REGS_MAX_LEN) + return -EINVAL; + + for (i = 0; i < nums; i++) + writel(data[i], qm->io_base + reg + i * QM_REG_ADDR_OFFSET); + + return 0; +} + +static int qm_get_vft(struct hisi_qm *qm, u32 *base) +{ + u64 sqc_vft; + u32 qp_num; + int ret; + + ret = qm_mb(qm, QM_MB_CMD_SQC_VFT_V2, 0, 0, 1); + if (ret) + return ret; + + sqc_vft = readl(qm->io_base + QM_MB_CMD_DATA_ADDR_L) | + ((u64)readl(qm->io_base + QM_MB_CMD_DATA_ADDR_H) << + QM_XQC_ADDR_OFFSET); + *base = QM_SQC_VFT_BASE_MASK_V2 & (sqc_vft >> QM_SQC_VFT_BASE_SHIFT_V2); + qp_num = (QM_SQC_VFT_NUM_MASK_V2 & + (sqc_vft >> QM_SQC_VFT_NUM_SHIFT_V2)) + 1; + + return qp_num; +} + +static int qm_get_sqc(struct hisi_qm *qm, u64 *addr) +{ + int ret; + + ret = qm_mb(qm, QM_MB_CMD_SQC_BT, 0, 0, 1); + if (ret) + return ret; + + *addr = readl(qm->io_base + QM_MB_CMD_DATA_ADDR_L) | + ((u64)readl(qm->io_base + QM_MB_CMD_DATA_ADDR_H) << + QM_XQC_ADDR_OFFSET); + + return 0; +} + +static int qm_get_cqc(struct hisi_qm *qm, u64 *addr) +{ + int ret; + + ret = qm_mb(qm, QM_MB_CMD_CQC_BT, 0, 0, 1); + if (ret) + return ret; + + *addr = readl(qm->io_base + QM_MB_CMD_DATA_ADDR_L) | + ((u64)readl(qm->io_base + QM_MB_CMD_DATA_ADDR_H) << + QM_XQC_ADDR_OFFSET); + + return 0; +} + +static int qm_rw_regs_read(struct hisi_qm *qm, struct acc_vf_data *vf_data) +{ + struct device *dev = &qm->pdev->dev; + int ret; + + ret = qm_read_reg(qm, QM_VF_AEQ_INT_MASK, &vf_data->aeq_int_mask, 1); + if (ret) { + dev_err(dev, "failed to read QM_VF_AEQ_INT_MASK\n"); + return ret; + } + + ret = qm_read_reg(qm, QM_VF_EQ_INT_MASK, &vf_data->eq_int_mask, 1); + if (ret) { + dev_err(dev, "failed to read QM_VF_EQ_INT_MASK\n"); + return ret; + } + + ret = qm_read_reg(qm, QM_IFC_INT_SOURCE_V, + &vf_data->ifc_int_source, 1); + if (ret) { + dev_err(dev, "failed to read QM_IFC_INT_SOURCE_V\n"); + return ret; + } + + ret = qm_read_reg(qm, QM_IFC_INT_MASK, &vf_data->ifc_int_mask, 1); + if (ret) { + dev_err(dev, "failed to read QM_IFC_INT_MASK\n"); + return ret; + } + + ret = qm_read_reg(qm, QM_IFC_INT_SET_V, &vf_data->ifc_int_set, 1); + if (ret) { + dev_err(dev, "failed to read QM_IFC_INT_SET_V\n"); + return ret; + } + + ret = qm_read_reg(qm, QM_QUE_ISO_CFG_V, &vf_data->que_iso_cfg, 1); + if (ret) { + dev_err(dev, "failed to read QM_QUE_ISO_CFG_V\n"); + return ret; + } + + ret = qm_read_reg(qm, QM_PAGE_SIZE, &vf_data->page_size, 1); + if (ret) { + dev_err(dev, "failed to read QM_PAGE_SIZE\n"); + return ret; + } + + ret = qm_read_reg(qm, QM_VF_STATE, &vf_data->vf_state, 1); + if (ret) { + dev_err(dev, "failed to read QM_VF_STATE\n"); + return ret; + } + + /* QM_EQC_DW has 7 regs */ + ret = qm_read_reg(qm, QM_EQC_DW0, vf_data->qm_eqc_dw, 7); + if (ret) { + dev_err(dev, "failed to read QM_EQC_DW\n"); + return ret; + } + + /* QM_AEQC_DW has 7 regs */ + ret = qm_read_reg(qm, QM_AEQC_DW0, vf_data->qm_aeqc_dw, 7); + if (ret) { + dev_err(dev, "failed to read QM_AEQC_DW\n"); + return ret; + } + + return 0; +} + +static int qm_rw_regs_write(struct hisi_qm *qm, struct acc_vf_data *vf_data) +{ + struct device *dev = &qm->pdev->dev; + 
int ret; + + /* check VF state */ + if (unlikely(qm_wait_mb_ready(qm))) { + dev_err(&qm->pdev->dev, "QM device is not ready to write\n"); + return -EBUSY; + } + + ret = qm_write_reg(qm, QM_VF_AEQ_INT_MASK, &vf_data->aeq_int_mask, 1); + if (ret) { + dev_err(dev, "failed to write QM_VF_AEQ_INT_MASK\n"); + return ret; + } + + ret = qm_write_reg(qm, QM_VF_EQ_INT_MASK, &vf_data->eq_int_mask, 1); + if (ret) { + dev_err(dev, "failed to write QM_VF_EQ_INT_MASK\n"); + return ret; + } + + ret = qm_write_reg(qm, QM_IFC_INT_SOURCE_V, + &vf_data->ifc_int_source, 1); + if (ret) { + dev_err(dev, "failed to write QM_IFC_INT_SOURCE_V\n"); + return ret; + } + + ret = qm_write_reg(qm, QM_IFC_INT_MASK, &vf_data->ifc_int_mask, 1); + if (ret) { + dev_err(dev, "failed to write QM_IFC_INT_MASK\n"); + return ret; + } + + ret = qm_write_reg(qm, QM_IFC_INT_SET_V, &vf_data->ifc_int_set, 1); + if (ret) { + dev_err(dev, "failed to write QM_IFC_INT_SET_V\n"); + return ret; + } + + ret = qm_write_reg(qm, QM_QUE_ISO_CFG_V, &vf_data->que_iso_cfg, 1); + if (ret) { + dev_err(dev, "failed to write QM_QUE_ISO_CFG_V\n"); + return ret; + } + + ret = qm_write_reg(qm, QM_PAGE_SIZE, &vf_data->page_size, 1); + if (ret) { + dev_err(dev, "failed to write QM_PAGE_SIZE\n"); + return ret; + } + + ret = qm_write_reg(qm, QM_VF_STATE, &vf_data->vf_state, 1); + if (ret) { + dev_err(dev, "failed to write QM_VF_STATE\n"); + return ret; + } + + /* QM_EQC_DW has 7 regs */ + ret = qm_write_reg(qm, QM_EQC_DW0, vf_data->qm_eqc_dw, 7); + if (ret) { + dev_err(dev, "failed to write QM_EQC_DW\n"); + return ret; + } + + /* QM_AEQC_DW has 7 regs */ + ret = qm_write_reg(qm, QM_AEQC_DW0, vf_data->qm_aeqc_dw, 7); + if (ret) { + dev_err(dev, "failed to write QM_AEQC_DW\n"); + return ret; + } + + return 0; +} + +static void qm_db(struct hisi_qm *qm, u16 qn, u8 cmd, + u16 index, u8 priority) +{ + u64 doorbell; + u64 dbase; + u16 randata = 0; + + if (cmd == QM_DOORBELL_CMD_SQ || cmd == QM_DOORBELL_CMD_CQ) + dbase = QM_DOORBELL_SQ_CQ_BASE_V2; + else + dbase = QM_DOORBELL_EQ_AEQ_BASE_V2; + + doorbell = qn | ((u64)cmd << QM_DB_CMD_SHIFT_V2) | + ((u64)randata << QM_DB_RAND_SHIFT_V2) | + ((u64)index << QM_DB_INDEX_SHIFT_V2) | + ((u64)priority << QM_DB_PRIORITY_SHIFT_V2); + + writeq(doorbell, qm->io_base + dbase); +} + +static int vf_migration_data_store(struct hisi_qm *qm, + struct acc_vf_migration *acc_vf_dev) +{ + struct acc_vf_data *vf_data = acc_vf_dev->vf_data; + struct device *dev = &qm->pdev->dev; + int ret; + + ret = qm_rw_regs_read(qm, vf_data); + if (ret) + return -EINVAL; + + /* Every reg is 32 bit, the dma address is 64 bit. */ + vf_data->eqe_dma = vf_data->qm_eqc_dw[2]; + vf_data->eqe_dma <<= QM_XQC_ADDR_OFFSET; + vf_data->eqe_dma |= vf_data->qm_eqc_dw[1]; + vf_data->aeqe_dma = vf_data->qm_aeqc_dw[2]; + vf_data->aeqe_dma <<= QM_XQC_ADDR_OFFSET; + vf_data->aeqe_dma |= vf_data->qm_aeqc_dw[1]; + + /* Through SQC_BT/CQC_BT to get sqc and cqc address */ + ret = qm_get_sqc(qm, &vf_data->sqc_dma); + if (ret) { + dev_err(dev, "failed to read SQC addr!\n"); + return -EINVAL; + } + + ret = qm_get_cqc(qm, &vf_data->cqc_dma); + if (ret) { + dev_err(dev, "failed to read CQC addr!\n"); + return -EINVAL; + } + + return 0; +} + +static void qm_dev_cmd_init(struct hisi_qm *qm) +{ + /* Clear VF communication status registers. */ + writel(0x1, qm->io_base + QM_IFC_INT_SOURCE_V); + + /* Enable pf and vf communication. 
*/ + writel(0x0, qm->io_base + QM_IFC_INT_MASK); +} + +static void vf_qm_fun_restart(struct hisi_qm *qm, + struct acc_vf_migration *acc_vf_dev) +{ + struct acc_vf_data *vf_data = acc_vf_dev->vf_data; + struct device *dev = &qm->pdev->dev; + int i; + + /* Check if we need to restart VF */ + if (vf_data->vf_state == VF_PREPARE) { + dev_info(dev, "No need to restart VF\n"); + return; + } + + for (i = 0; i < qm->qp_num; i++) + qm_db(qm, i, QM_DOORBELL_CMD_SQ, 0, 1); +} + +static int vf_migration_data_recover(struct hisi_qm *qm, + struct acc_vf_migration *acc_vf_dev) +{ + struct device *dev = &qm->pdev->dev; + struct acc_vf_data *vf_data = acc_vf_dev->vf_data; + struct vfio_device_migration_info *mig_ctl = acc_vf_dev->mig_ctl; + int ret; + + if (!mig_ctl->data_size) + return 0; + + qm->eqe_dma = vf_data->eqe_dma; + qm->aeqe_dma = vf_data->aeqe_dma; + qm->sqc_dma = vf_data->sqc_dma; + qm->cqc_dma = vf_data->cqc_dma; + + qm->qp_base = vf_data->qp_base; + qm->qp_num = vf_data->qp_num; + + ret = qm_rw_regs_write(qm, vf_data); + if (ret) { + dev_err(dev, "Set VF regs failed\n"); + return ret; + } + + ret = qm_mb(qm, QM_MB_CMD_SQC_BT, qm->sqc_dma, 0, 0); + if (ret) { + dev_err(dev, "Set sqc failed\n"); + return ret; + } + + ret = qm_mb(qm, QM_MB_CMD_CQC_BT, qm->cqc_dma, 0, 0); + if (ret) { + dev_err(dev, "Set cqc failed\n"); + return ret; + } + + qm_dev_cmd_init(qm); + + return 0; +} + +static int vf_qm_cache_wb(struct hisi_qm *qm) +{ + unsigned int val; + + writel(0x1, qm->io_base + QM_CACHE_WB_START); + if (readl_relaxed_poll_timeout(qm->io_base + QM_CACHE_WB_DONE, + val, val & BIT(0), MB_POLL_PERIOD_US, + MB_POLL_TIMEOUT_US)) { + dev_err(&qm->pdev->dev, "vf QM writeback sqc cache fail\n"); + return -EINVAL; + } + + return 0; +} + +static int vf_qm_func_stop(struct hisi_qm *qm) +{ + return qm_mb(qm, QM_MB_CMD_PAUSE_QM, 0, 0, 0); +} + +static int pf_qm_get_qp_num(struct hisi_qm *qm, int vf_id, u32 *rbase) +{ + unsigned int val; + u64 sqc_vft; + u32 qp_num; + int ret; + + ret = readl_relaxed_poll_timeout(qm->io_base + QM_VFT_CFG_RDY, val, + val & BIT(0), MB_POLL_PERIOD_US, + MB_POLL_TIMEOUT_US); + if (ret) + return ret; + + writel(0x1, qm->io_base + QM_VFT_CFG_OP_WR); + /* 0 mean SQC VFT */ + writel(0x0, qm->io_base + QM_VFT_CFG_TYPE); + writel(vf_id, qm->io_base + QM_VFT_CFG); + + writel(0x0, qm->io_base + QM_VFT_CFG_RDY); + writel(0x1, qm->io_base + QM_VFT_CFG_OP_ENABLE); + + ret = readl_relaxed_poll_timeout(qm->io_base + QM_VFT_CFG_RDY, val, + val & BIT(0), MB_POLL_PERIOD_US, + MB_POLL_TIMEOUT_US); + if (ret) + return ret; + + sqc_vft = readl(qm->io_base + QM_VFT_CFG_DATA_L) | + ((u64)readl(qm->io_base + QM_VFT_CFG_DATA_H) << + QM_XQC_ADDR_OFFSET); + *rbase = QM_SQC_VFT_BASE_MASK_V2 & + (sqc_vft >> QM_SQC_VFT_BASE_SHIFT_V2); + qp_num = (QM_SQC_VFT_NUM_MASK_V2 & + (sqc_vft >> QM_SQC_VFT_NUM_SHIFT_V2)) + 1; + + return qp_num; +} + +static int pf_qm_state_pre_save(struct hisi_qm *qm, + struct acc_vf_migration *acc_vf_dev) +{ + struct acc_vf_data *vf_data = acc_vf_dev->vf_data; + struct vfio_device_migration_info *mig_ctl = acc_vf_dev->mig_ctl; + struct device *dev = &qm->pdev->dev; + int vf_id = acc_vf_dev->vf_id; + int ret; + + /* save device id */ + vf_data->dev_id = acc_vf_dev->vf_dev->device; + + /* vf qp num save from PF */ + ret = pf_qm_get_qp_num(qm, vf_id, &qm->qp_base); + if (ret <= 0) { + dev_err(dev, "failed to get vft qp nums!\n"); + return -EINVAL; + } + qm->qp_num = ret; + vf_data->qp_base = qm->qp_base; + vf_data->qp_num = qm->qp_num; + + /* vf isolation state save from PF */ + ret = 
qm_read_reg(qm, QM_QUE_ISO_CFG, &vf_data->que_iso_cfg, 1); + if (ret) { + dev_err(dev, "failed to read QM_QUE_ISO_CFG!\n"); + return ret; + } + + mig_ctl->data_size = QM_MATCH_SIZE; + mig_ctl->pending_bytes = mig_ctl->data_size; + + return 0; +} + +static int vf_qm_state_save(struct hisi_qm *qm, + struct acc_vf_migration *acc_vf_dev) +{ + struct device *dev = &acc_vf_dev->vf_dev->dev; + struct vfio_device_migration_info *mig_ctl = acc_vf_dev->mig_ctl; + int ret; + + mig_ctl->data_size = 0; + mig_ctl->pending_bytes = 0; + + if (unlikely(qm_wait_dev_ready(qm))) { + dev_info(&qm->pdev->dev, "QM device not ready, no data to transfer\n"); + return 0; + } + + /* First stop the ACC vf function */ + ret = vf_qm_func_stop(qm); + if (ret) { + dev_err(dev, "failed to stop QM VF function!\n"); + return ret; + } + + ret = qm_check_int_state(acc_vf_dev); + if (ret) { + dev_err(dev, "failed to check QM INT state!\n"); + goto state_error; + } + + ret = vf_qm_cache_wb(qm); + if (ret) { + dev_err(dev, "failed to writeback QM Cache!\n"); + goto state_error; + } + + ret = vf_migration_data_store(qm, acc_vf_dev); + if (ret) { + dev_err(dev, "failed to get and store migration data!\n"); + goto state_error; + } + + mig_ctl->data_size = sizeof(struct acc_vf_data); + mig_ctl->pending_bytes = mig_ctl->data_size; + + return 0; + +state_error: + vf_qm_fun_restart(qm, acc_vf_dev); + return ret; +} + +static int vf_qm_state_resume(struct hisi_qm *qm, + struct acc_vf_migration *acc_vf_dev) +{ + struct device *dev = &acc_vf_dev->vf_dev->dev; + int ret; + + /* recover data to VF */ + ret = vf_migration_data_recover(qm, acc_vf_dev); + if (ret) { + dev_err(dev, "failed to recover the VF!\n"); + return ret; + } + + /* restart all destination VF's QP */ + vf_qm_fun_restart(qm, acc_vf_dev); + + return 0; +} + +/* + * HiSilicon ACC VF dev MMIO space contains both the functional register + * space and the migration control register space. We hide the migration + * control space from the Guest. But to successfully complete the live + * migration, we still need access to the functional MMIO space assigned + * to the Guest. To avoid any potential security issues, we need to be + * careful not to access this region while the Guest vCPUs are running. + * + * Hence check the device state before we map the region. 
+ */ +static int hisi_acc_vf_ioremap(struct acc_vf_migration *acc_vf_dev, u32 state) +{ + struct hisi_qm *vfqm = acc_vf_dev->vf_qm; + struct pci_dev *pdev = acc_vf_dev->vf_dev; + + if (state == (VFIO_DEVICE_STATE_SAVING | VFIO_DEVICE_STATE_RUNNING)) + return -EINVAL; + + if (vfqm->io_base) + return 0; + + vfqm->io_base = ioremap(vfqm->phys_base, + pci_resource_len(pdev, VFIO_PCI_BAR2_REGION_INDEX)); + if (!vfqm->io_base) + return -EIO; + + return 0; +} + +static void hisi_acc_vf_iounmap(struct acc_vf_migration *acc_vf_dev) +{ + struct hisi_qm *vfqm = acc_vf_dev->vf_qm; + + if (!vfqm->io_base) + return; + + iounmap(vfqm->io_base); + vfqm->io_base = NULL; +} + +static int hisi_acc_vf_set_device_state(struct acc_vf_migration *acc_vf_dev, + u32 state) +{ + struct vfio_device_migration_info *mig_ctl = acc_vf_dev->mig_ctl; + struct hisi_qm *pfqm = acc_vf_dev->pf_qm; + struct hisi_qm *vfqm = acc_vf_dev->vf_qm; + int ret = 0; + + if (state == mig_ctl->device_state) + return 0; + + switch (state) { + case VFIO_DEVICE_STATE_RUNNING: + if (mig_ctl->device_state == VFIO_DEVICE_STATE_RESUMING) { + ret = hisi_acc_vf_ioremap(acc_vf_dev, state); + if (ret) + return ret; + + ret = vf_qm_state_resume(vfqm, acc_vf_dev); + if (ret) + goto out; + } + break; + case VFIO_DEVICE_STATE_SAVING | VFIO_DEVICE_STATE_RUNNING: + ret = pf_qm_state_pre_save(pfqm, acc_vf_dev); + if (ret) + return ret; + + break; + case VFIO_DEVICE_STATE_SAVING: + ret = hisi_acc_vf_ioremap(acc_vf_dev, state); + if (ret) + return ret; + + ret = vf_qm_state_save(vfqm, acc_vf_dev); + if (ret) + goto out; + + break; + case VFIO_DEVICE_STATE_STOP: + ret = hisi_acc_vf_ioremap(acc_vf_dev, state); + if (ret) + return ret; + + vf_qm_fun_restart(vfqm, acc_vf_dev); + break; + case VFIO_DEVICE_STATE_RESUMING: + break; + default: + return -EFAULT; + } + + mig_ctl->device_state = state; + +out: + hisi_acc_vf_iounmap(acc_vf_dev); + return ret; +} + +static int hisi_acc_vf_match_check(struct acc_vf_migration *acc_vf_dev) +{ + struct vfio_device_migration_info *mig_ctl = acc_vf_dev->mig_ctl; + struct acc_vf_data *vf_data = acc_vf_dev->vf_data; + struct hisi_qm *qm = acc_vf_dev->vf_qm; + struct device *dev = &qm->pdev->dev; + u32 que_iso_state; + int ret; + + /* + * Check we are in the correct dev state and have enough data to + * perform the check. 
+ */ + if (mig_ctl->device_state != VFIO_DEVICE_STATE_RESUMING || + mig_ctl->data_size != QM_MATCH_SIZE) + return 0; + + /* vf acc dev type check */ + if (vf_data->dev_id != acc_vf_dev->vf_dev->device) { + dev_err(dev, "failed to match VF devices\n"); + return -EINVAL; + } + + ret = hisi_acc_vf_ioremap(acc_vf_dev, mig_ctl->device_state); + if (ret) + return ret; + + /* vf qp num check */ + ret = qm_get_vft(qm, &qm->qp_base); + if (ret <= 0) { + dev_err(dev, "failed to get vft qp nums\n"); + ret = -EINVAL; + goto out; + } + qm->qp_num = ret; + + if (vf_data->qp_num != qm->qp_num) { + dev_err(dev, "failed to match VF qp num\n"); + ret = -EINVAL; + goto out; + } + + /* vf isolation state check */ + ret = qm_read_reg(qm, QM_QUE_ISO_CFG_V, &que_iso_state, 1); + if (ret) { + dev_err(dev, "failed to read QM_QUE_ISO_CFG_V\n"); + goto out; + } + if (vf_data->que_iso_cfg != que_iso_state) { + dev_err(dev, "failed to match isolation state\n"); + ret = -EINVAL; + goto out; + } + + /* clear the VF match data size */ + mig_ctl->pending_bytes = 0; + mig_ctl->data_size = 0; + +out: + hisi_acc_vf_iounmap(acc_vf_dev); + return ret; +} + +static int hisi_acc_vf_data_transfer(struct acc_vf_migration *acc_vf_dev, + char __user *buf, size_t count, u64 pos, + bool iswrite) +{ + struct vfio_device_migration_info *mig_ctl = acc_vf_dev->mig_ctl; + void *data_addr = acc_vf_dev->vf_data; + int ret = 0; + + if (!count || pos < mig_ctl->data_offset || pos > MIGRATION_REGION_SZ) + return -EINVAL; + + data_addr += pos - mig_ctl->data_offset; + if (iswrite) { + ret = copy_from_user(data_addr, buf, count); + if (ret) + return -EFAULT; + + mig_ctl->pending_bytes += count; + } else { + ret = copy_to_user(buf, data_addr, count); + if (ret) + return -EFAULT; + + mig_ctl->pending_bytes -= count; + } + + return count; +} + +static size_t hisi_acc_vf_migrn_rw(struct vfio_pci_core_device *vdev, + char __user *buf, size_t count, loff_t *ppos, + bool iswrite) +{ + unsigned int index = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - + VFIO_PCI_NUM_REGIONS; + struct vfio_pci_region *region = &vdev->region[index]; + struct acc_vf_migration *acc_vf_dev; + struct vfio_device_migration_info *mig_ctl; + u64 pos = *ppos & VFIO_PCI_OFFSET_MASK; + struct device *dev; + u32 device_state; + int ret; + + if (!region) + return -EINVAL; + + acc_vf_dev = region->data; + dev = &acc_vf_dev->vf_dev->dev; + mig_ctl = acc_vf_dev->mig_ctl; + + if (pos >= region->size) + return -EINVAL; + + count = min(count, (size_t)(region->size - pos)); + + switch (pos) { + case VDM_OFFSET(device_state): + if (count != sizeof(mig_ctl->device_state)) + return -EFAULT; + + if (iswrite) { + ret = copy_from_user(&device_state, buf, count); + if (ret) + return -EFAULT; + + ret = hisi_acc_vf_set_device_state(acc_vf_dev, device_state); + if (ret) + return ret; + } else { + ret = copy_to_user(buf, &mig_ctl->device_state, count); + if (ret) + return -EFAULT; + } + + return count; + case VDM_OFFSET(reserved): + return 0; + case VDM_OFFSET(pending_bytes): + if (count != sizeof(mig_ctl->pending_bytes)) + return -EINVAL; + + if (iswrite) + return -EFAULT; + + ret = copy_to_user(buf, &mig_ctl->pending_bytes, count); + if (ret) + return -EFAULT; + + return count; + case VDM_OFFSET(data_offset): + if (count != sizeof(mig_ctl->data_offset)) + return -EINVAL; + + if (iswrite) { + ret = copy_from_user(&mig_ctl->data_offset, buf, count); + if (ret) + return -EFAULT; + } else { + ret = copy_to_user(buf, &mig_ctl->data_offset, count); + if (ret) + return -EFAULT; + } + + return count; + case 
VDM_OFFSET(data_size): + if (count != sizeof(mig_ctl->data_size)) + return -EINVAL; + + if (iswrite) { + ret = copy_from_user(&mig_ctl->data_size, buf, count); + if (ret) + return -EFAULT; + + /* Check whether the src and dst VF's match */ + ret = hisi_acc_vf_match_check(acc_vf_dev); + if (ret) + return ret; + } else { + ret = copy_to_user(buf, &mig_ctl->data_size, count); + if (ret) + return -EFAULT; + } + + return count; + default: + /* Transfer data */ + return hisi_acc_vf_data_transfer(acc_vf_dev, buf, + count, pos, iswrite); + } +} + +static void hisi_acc_vfio_pci_uninit(struct acc_vf_migration *acc_vf_dev) +{ + kfree(acc_vf_dev->mig_ctl); + kfree(acc_vf_dev->vf_qm); +} + +static void hisi_acc_vf_migrn_release(struct vfio_pci_core_device *vdev, + struct vfio_pci_region *region) +{ + struct acc_vf_migration *acc_vf_dev = region->data; + + hisi_acc_vfio_pci_uninit(acc_vf_dev); + kfree(acc_vf_dev); +} + +static const struct vfio_pci_regops hisi_acc_vfio_pci_regops = { + .rw = hisi_acc_vf_migrn_rw, + .release = hisi_acc_vf_migrn_release, +}; + +static int hisi_acc_vf_dev_init(struct pci_dev *pdev, struct hisi_qm *pf_qm, + struct acc_vf_migration *acc_vf_dev) +{ + struct vfio_device_migration_info *mig_ctl; + struct hisi_qm *vf_qm; + + vf_qm = kzalloc(sizeof(*vf_qm), GFP_KERNEL); + if (!vf_qm) + return -ENOMEM; + + vf_qm->dev_name = pf_qm->dev_name; + vf_qm->fun_type = QM_HW_VF; + vf_qm->phys_base = pci_resource_start(pdev, VFIO_PCI_BAR2_REGION_INDEX); + vf_qm->pdev = pdev; + mutex_init(&vf_qm->mailbox_lock); + + acc_vf_dev->vf_qm = vf_qm; + acc_vf_dev->pf_qm = pf_qm; + + /* the data region must follow migration info */ + mig_ctl = kzalloc(MIGRATION_REGION_SZ, GFP_KERNEL); + if (!mig_ctl) + goto init_qm_error; + + acc_vf_dev->mig_ctl = mig_ctl; + + acc_vf_dev->vf_data = (void *)(mig_ctl + 1); + + mig_ctl->device_state = VFIO_DEVICE_STATE_RUNNING; + mig_ctl->data_offset = sizeof(*mig_ctl); + mig_ctl->data_size = sizeof(struct acc_vf_data); + + return 0; + +init_qm_error: + kfree(vf_qm); + return -ENOMEM; +} + +static int hisi_acc_vfio_pci_init(struct vfio_pci_core_device *vdev) +{ + struct acc_vf_migration *acc_vf_dev; + struct pci_dev *pdev = vdev->pdev; + struct pci_dev *pf_dev, *vf_dev; + struct hisi_qm *pf_qm; + int vf_id, ret; + + pf_dev = pdev->physfn; + vf_dev = pdev; + + pf_qm = pci_get_drvdata(pf_dev); + if (!pf_qm) { + pr_err("HiSi ACC qm driver not loaded\n"); + return -EINVAL; + } + + if (pf_qm->ver < QM_HW_V3) { + dev_err(&pdev->dev, + "Migration not supported, hw version: 0x%x\n", + pf_qm->ver); + return -ENODEV; + } + + vf_id = PCI_FUNC(vf_dev->devfn); + acc_vf_dev = kzalloc(sizeof(*acc_vf_dev), GFP_KERNEL); + if (!acc_vf_dev) + return -ENOMEM; + + acc_vf_dev->vf_id = vf_id; + acc_vf_dev->pf_dev = pf_dev; + acc_vf_dev->vf_dev = vf_dev; + + ret = hisi_acc_vf_dev_init(pdev, pf_qm, acc_vf_dev); + if (ret) { + kfree(acc_vf_dev); + return -ENOMEM; + } + + ret = vfio_pci_register_dev_region(vdev, VFIO_REGION_TYPE_MIGRATION, + VFIO_REGION_SUBTYPE_MIGRATION, + &hisi_acc_vfio_pci_regops, + MIGRATION_REGION_SZ, + VFIO_REGION_INFO_FLAG_READ | + VFIO_REGION_INFO_FLAG_WRITE, + acc_vf_dev); + if (ret) + goto out; + + return 0; +out: + hisi_acc_vfio_pci_uninit(acc_vf_dev); + kfree(acc_vf_dev); + return ret; +} + static long hisi_acc_vfio_pci_ioctl(struct vfio_device *core_vdev, unsigned int cmd, unsigned long arg) { @@ -64,6 +1073,12 @@ static int hisi_acc_vfio_pci_open(struct vfio_device *core_vdev) if (ret) return ret; + ret = hisi_acc_vfio_pci_init(vdev); + if (ret) { + 
vfio_pci_core_disable(vdev); + return ret; + } + vfio_pci_core_finish_enable(vdev); return 0; @@ -113,9 +1128,9 @@ static void hisi_acc_vfio_pci_remove(struct pci_dev *pdev) } static const struct pci_device_id hisi_acc_vfio_pci_table[] = { - { PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_HUAWEI, 0xa256) }, /* SEC VF */ - { PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_HUAWEI, 0xa259) }, /* HPRE VF */ - { PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_HUAWEI, 0xa251) }, /* ZIP VF */ + { PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_HUAWEI, HISI_SEC_VF_DEV_ID) }, + { PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_HUAWEI, HISI_HPRE_VF_DEV_ID) }, + { PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_HUAWEI, HISI_ZIP_VF_DEV_ID) }, { 0, } }; @@ -137,4 +1152,4 @@ module_pci_driver(hisi_acc_vfio_pci_driver); MODULE_LICENSE("GPL v2"); MODULE_AUTHOR("Liu Longfang "); MODULE_AUTHOR("Shameer Kolothum "); -MODULE_DESCRIPTION("HiSilicon VFIO PCI - Generic VFIO PCI driver for HiSilicon ACC device family"); +MODULE_DESCRIPTION("HiSilicon VFIO PCI - VFIO PCI driver with live migration support for HiSilicon ACC device family"); diff --git a/drivers/vfio/pci/hisi_acc_vfio_pci.h b/drivers/vfio/pci/hisi_acc_vfio_pci.h new file mode 100644 index 000000000000..f3f8c38d6f1a --- /dev/null +++ b/drivers/vfio/pci/hisi_acc_vfio_pci.h @@ -0,0 +1,144 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2021 HiSilicon Ltd. */ + +#ifndef HISI_ACC_VFIO_PCI_H +#define HISI_ACC_VFIO_PCI_H + +#include "../../drivers/crypto/hisilicon/qm.h" + +#define VDM_OFFSET(x) offsetof(struct vfio_device_migration_info, x) +#define MIGRATION_REGION_SZ (sizeof(struct acc_vf_data) + \ + sizeof(struct vfio_device_migration_info)) + +#define HISI_SEC_VF_DEV_ID 0xa256 +#define HISI_HPRE_VF_DEV_ID 0xa259 +#define HISI_ZIP_VF_DEV_ID 0xa251 + +#define MB_POLL_PERIOD_US 10 +#define MB_POLL_TIMEOUT_US 1000 +#define QM_CACHE_WB_START 0x204 +#define QM_CACHE_WB_DONE 0x208 +#define QM_MB_CMD_PAUSE_QM 0xe +#define QM_ABNORMAL_INT_STATUS 0x100008 +#define QM_IFC_INT_STATUS 0x0028 +#define SEC_CORE_INT_STATUS 0x301008 +#define HPRE_HAC_INT_STATUS 0x301800 +#define HZIP_CORE_INT_STATUS 0x3010AC +#define QM_QUE_ISO_CFG 0x301154 + +#define QM_VFT_CFG_RDY 0x10006c +#define QM_VFT_CFG_OP_WR 0x100058 +#define QM_VFT_CFG_TYPE 0x10005c +#define QM_VFT_CFG 0x100060 +#define QM_VFT_CFG_OP_ENABLE 0x100054 +#define QM_VFT_CFG_DATA_L 0x100064 +#define QM_VFT_CFG_DATA_H 0x100068 + +#define ERROR_CHECK_TIMEOUT 100 +#define CHECK_DELAY_TIME 100 + +#define QM_SQC_VFT_BASE_SHIFT_V2 28 +#define QM_SQC_VFT_BASE_MASK_V2 GENMASK(15, 0) +#define QM_SQC_VFT_NUM_SHIFT_V2 45 +#define QM_SQC_VFT_NUM_MASK_V2 GENMASK(9, 0) + +/* mailbox */ +#define QM_MB_CMD_SQC_BT 0x4 +#define QM_MB_CMD_CQC_BT 0x5 +#define QM_MB_CMD_SQC_VFT_V2 0x6 + +#define QM_MB_CMD_SEND_BASE 0x300 +#define QM_MB_BUSY_SHIFT 13 +#define QM_MB_OP_SHIFT 14 +#define QM_MB_CMD_DATA_ADDR_L 0x304 +#define QM_MB_CMD_DATA_ADDR_H 0x308 +#define QM_MB_MAX_WAIT_CNT 6000 + +/* doorbell */ +#define QM_DOORBELL_CMD_SQ 0 +#define QM_DOORBELL_CMD_CQ 1 +#define QM_DOORBELL_SQ_CQ_BASE_V2 0x1000 +#define QM_DOORBELL_EQ_AEQ_BASE_V2 0x2000 +#define QM_DB_CMD_SHIFT_V2 12 +#define QM_DB_RAND_SHIFT_V2 16 +#define QM_DB_INDEX_SHIFT_V2 32 +#define QM_DB_PRIORITY_SHIFT_V2 48 + +/* RW regs */ +#define QM_REGS_MAX_LEN 7 +#define QM_REG_ADDR_OFFSET 0x0004 + +#define QM_XQC_ADDR_OFFSET 32U +#define QM_VF_AEQ_INT_MASK 0x0004 +#define QM_VF_EQ_INT_MASK 0x000c +#define QM_IFC_INT_SOURCE_V 0x0020 +#define QM_IFC_INT_MASK 0x0024 +#define 
QM_IFC_INT_SET_V 0x002c +#define QM_QUE_ISO_CFG_V 0x0030 +#define QM_PAGE_SIZE 0x0034 +#define QM_VF_STATE 0x0060 + +#define QM_EQC_DW0 0X8000 +#define QM_AEQC_DW0 0X8020 + +#define QM_MATCH_SIZE 32L + +enum vf_state { + VF_READY = 0x0, + VF_NOT_READY, + VF_PREPARE, +}; + +struct acc_vf_data { + /* QM match information */ + u32 qp_num; + u32 dev_id; + u32 que_iso_cfg; + u32 qp_base; + /* QM reserved 4 match information */ + u32 qm_rsv_state[4]; + + /* QM RW regs */ + u32 aeq_int_mask; + u32 eq_int_mask; + u32 ifc_int_source; + u32 ifc_int_mask; + u32 ifc_int_set; + u32 page_size; + u32 vf_state; + + /* + * QM_VF_MB has 4 regs don't need to migration + * mailbox regs writeback value will cause + * hardware to perform command operations + */ + + /* QM_EQC_DW has 7 regs */ + u32 qm_eqc_dw[7]; + + /* QM_AEQC_DW has 7 regs */ + u32 qm_aeqc_dw[7]; + + /* QM reserved 5 regs */ + u32 qm_rsv_regs[5]; + + /* qm memory init information */ + dma_addr_t eqe_dma; + dma_addr_t aeqe_dma; + dma_addr_t sqc_dma; + dma_addr_t cqc_dma; +}; + +struct acc_vf_migration { + struct pci_dev *pf_dev; + struct pci_dev *vf_dev; + struct hisi_qm *pf_qm; + struct hisi_qm *vf_qm; + int vf_id; + + struct vfio_device_migration_info *mig_ctl; + struct acc_vf_data *vf_data; +}; + +#endif /* HISI_ACC_VFIO_PCI_H */ +
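And the matching destination-side sketch (illustrative, not part of the series):
the driver is put into RESUMING, each received chunk is written to the data area
followed by its data_size (a QM_MATCH_SIZE-sized first chunk triggers
hisi_acc_vf_match_check()), and the final transition to RUNNING lets
vf_qm_state_resume() reprogram the VF. 'device' and 'mig_off' are assumed as in
the save-side sketch above.

#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>
#include <linux/vfio.h>

#define MIG_FIELD(f)	offsetof(struct vfio_device_migration_info, f)

static int start_resume(int device, off_t mig_off)
{
	uint32_t state = VFIO_DEVICE_STATE_RESUMING;

	return pwrite(device, &state, sizeof(state),
		      mig_off + MIG_FIELD(device_state)) == sizeof(state) ? 0 : -1;
}

static int restore_chunk(int device, off_t mig_off, const void *buf, uint64_t size)
{
	uint64_t data_offset;

	/* The driver reports where the data area starts within the region */
	if (pread(device, &data_offset, sizeof(data_offset),
		  mig_off + MIG_FIELD(data_offset)) != sizeof(data_offset))
		return -1;

	if (pwrite(device, buf, size, mig_off + data_offset) != (ssize_t)size)
		return -1;

	/*
	 * Writing data_size marks the chunk complete; a QM_MATCH_SIZE chunk
	 * additionally runs the source/destination compatibility check.
	 */
	if (pwrite(device, &size, sizeof(size),
		   mig_off + MIG_FIELD(data_size)) != sizeof(size))
		return -1;

	return 0;
}

static int finish_resume(int device, off_t mig_off)
{
	uint32_t state = VFIO_DEVICE_STATE_RUNNING;

	/* RESUMING -> RUNNING restores the saved state and restarts the VF QPs */
	return pwrite(device, &state, sizeof(state),
		      mig_off + MIG_FIELD(device_state)) == sizeof(state) ? 0 : -1;
}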