From patchwork Tue Sep 20 06:12:39 2016
X-Patchwork-Submitter: Vijay Kilari
X-Patchwork-Id: 76582
From: vijay.kilari@gmail.com
To: marc.zyngier@arm.com, christoffer.dall@linaro.org, peter.maydell@linaro.org
Subject: [PATCH v6 1/7] arm/arm64: vgic-new:
 Implement support for userspace access
Date: Tue, 20 Sep 2016 11:42:39 +0530
Message-Id: <1474351965-11586-2-git-send-email-vijay.kilari@gmail.com>
In-Reply-To: <1474351965-11586-1-git-send-email-vijay.kilari@gmail.com>
References: <1474351965-11586-1-git-send-email-vijay.kilari@gmail.com>
Cc: p.fedin@samsung.com, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, Vijaya Kumar K

From: Vijaya Kumar K

Userspace reads and writes of some registers, such as ISPENDR and
ICPENDR, require special handling compared to guest accesses to the
same registers. Refer to Documentation/virtual/kvm/devices/arm-vgic-v3.txt
for the userspace handling of the ISPENDR and ICPENDR registers.
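
As a rough sketch of the userspace side (illustrative only, not part of
this patch): assuming the arm64 uapi headers expose
KVM_DEV_ARM_VGIC_GRP_DIST_REGS and the attr layout described in
arm-vgic-v3.txt (mpidr in bits 63:32, register offset in bits 31:0), a
VMM could save the latched pending state of 32 SPIs roughly like this.
The helper names are made up for the example and error handling is
omitted.

  /* Illustration only -- not part of this patch. */
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  #define GICD_ISPENDR	0x0200	/* offset of GICD_ISPENDR<0> */

  static uint32_t vgic_v3_dist_read(int vgic_fd, uint64_t mpidr, uint32_t offset)
  {
  	uint32_t val = 0;
  	struct kvm_device_attr attr = {
  		.group = KVM_DEV_ARM_VGIC_GRP_DIST_REGS,
  		.attr  = (mpidr << 32) | offset,
  		.addr  = (uint64_t)(uintptr_t)&val,
  	};

  	/* For GICD_ISPENDR this ends up in vgic_v3_uaccess_read_pending(). */
  	ioctl(vgic_fd, KVM_GET_DEVICE_ATTR, &attr);
  	return val;
  }

  /* For level-triggered interrupts this returns the latched soft_pending
   * state, not the live line level, which is saved separately as
   * described in arm-vgic-v3.txt.
   */
  static uint32_t save_spi_pending(int vgic_fd, uint64_t mpidr, unsigned int spi)
  {
  	return vgic_v3_dist_read(vgic_fd, mpidr, GICD_ISPENDR + 4 * (spi / 32));
  }
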
Add infrastructure to support guest and userspace reads and writes for
the required registers. Also move vgic_uaccess() from vgic-mmio-v2.c to
vgic-mmio.c.

Signed-off-by: Vijaya Kumar K
---
 virt/kvm/arm/vgic/vgic-mmio-v2.c | 25 ----------
 virt/kvm/arm/vgic/vgic-mmio-v3.c | 98 ++++++++++++++++++++++++++++++++--------
 virt/kvm/arm/vgic/vgic-mmio.c    | 78 ++++++++++++++++++++++++++++----
 virt/kvm/arm/vgic/vgic-mmio.h    | 19 ++++++++
 4 files changed, 169 insertions(+), 51 deletions(-)

--
1.9.1

diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c
index b44b359..0b32f40 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v2.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c
@@ -406,31 +406,6 @@ int vgic_v2_has_attr_regs(struct kvm_device *dev, struct kvm_device_attr *attr)
 	return -ENXIO;
 }
 
-/*
- * When userland tries to access the VGIC register handlers, we need to
- * create a usable struct vgic_io_device to be passed to the handlers and we
- * have to set up a buffer similar to what would have happened if a guest MMIO
- * access occurred, including doing endian conversions on BE systems.
- */
-static int vgic_uaccess(struct kvm_vcpu *vcpu, struct vgic_io_device *dev,
-			bool is_write, int offset, u32 *val)
-{
-	unsigned int len = 4;
-	u8 buf[4];
-	int ret;
-
-	if (is_write) {
-		vgic_data_host_to_mmio_bus(buf, len, *val);
-		ret = kvm_io_gic_ops.write(vcpu, &dev->dev, offset, len, buf);
-	} else {
-		ret = kvm_io_gic_ops.read(vcpu, &dev->dev, offset, len, buf);
-		if (!ret)
-			*val = vgic_data_mmio_bus_to_host(buf, len);
-	}
-
-	return ret;
-}
-
 int vgic_v2_cpuif_uaccess(struct kvm_vcpu *vcpu, bool is_write,
 			  int offset, u32 *val)
 {
diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
index 0d3c76a..ce2708d 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
@@ -209,6 +209,62 @@ static unsigned long vgic_mmio_read_v3_idregs(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static unsigned long vgic_v3_uaccess_read_pending(struct kvm_vcpu *vcpu,
+						  gpa_t addr, unsigned int len)
+{
+	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+	u32 value = 0;
+	int i;
+
+	/*
+	 * A level-triggered interrupt's pending state is latched in both
+	 * "soft_pending" and "line_level" variables. Userspace will save
+	 * and restore soft_pending and line_level separately.
+	 * Refer to Documentation/virtual/kvm/devices/arm-vgic-v3.txt for
+	 * the handling of ISPENDR and ICPENDR.
+	 */
+	for (i = 0; i < len * 8; i++) {
+		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
+
+		if (irq->config == VGIC_CONFIG_LEVEL && irq->soft_pending)
+			value |= (1U << i);
+		if (irq->config == VGIC_CONFIG_EDGE && irq->pending)
+			value |= (1U << i);
+
+		vgic_put_irq(vcpu->kvm, irq);
+	}
+
+	return value;
+}
+
+static void vgic_v3_uaccess_write_pending(struct kvm_vcpu *vcpu,
+					  gpa_t addr, unsigned int len,
+					  unsigned long val)
+{
+	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+	int i;
+
+	for (i = 0; i < len * 8; i++) {
+		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
+
+		spin_lock(&irq->irq_lock);
+		if (test_bit(i, &val)) {
+			irq->pending = true;
+			irq->soft_pending = true;
+			vgic_queue_irq_unlock(vcpu->kvm, irq);
+		} else {
+			irq->soft_pending = false;
+			if (irq->config == VGIC_CONFIG_EDGE ||
+			    (irq->config == VGIC_CONFIG_LEVEL &&
+			     !irq->line_level))
+				irq->pending = false;
+			spin_unlock(&irq->irq_lock);
+		}
+
+		vgic_put_irq(vcpu->kvm, irq);
+	}
+}
+
 /* We want to avoid outer shareable. */
 u64 vgic_sanitise_shareability(u64 field)
 {
@@ -358,7 +414,7 @@ static void vgic_mmio_write_pendbase(struct kvm_vcpu *vcpu,
  * We take some special care here to fix the calculation of the register
  * offset.
  */
-#define REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(off, rd, wr, bpi, acc) \
+#define REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(off, rd, wr, ur, uw, bpi, acc) \
 	{								\
 		.reg_offset = off,					\
 		.bits_per_irq = bpi,					\
@@ -373,6 +429,8 @@ static void vgic_mmio_write_pendbase(struct kvm_vcpu *vcpu,
 		.access_flags = acc,					\
 		.read = rd,						\
 		.write = wr,						\
+		.uaccess_read = ur,					\
+		.uaccess_write = uw,					\
 	}
 
 static const struct vgic_register_region vgic_v3_dist_registers[] = {
@@ -380,40 +438,42 @@ static const struct vgic_register_region vgic_v3_dist_registers[] = {
 		vgic_mmio_read_v3_misc, vgic_mmio_write_v3_misc, 16,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_IGROUPR,
-		vgic_mmio_read_rao, vgic_mmio_write_wi, 1,
+		vgic_mmio_read_rao, vgic_mmio_write_wi, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ISENABLER,
-		vgic_mmio_read_enable, vgic_mmio_write_senable, 1,
+		vgic_mmio_read_enable, vgic_mmio_write_senable, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ICENABLER,
-		vgic_mmio_read_enable, vgic_mmio_write_cenable, 1,
+		vgic_mmio_read_enable, vgic_mmio_write_cenable, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ISPENDR,
-		vgic_mmio_read_pending, vgic_mmio_write_spending, 1,
+		vgic_mmio_read_pending, vgic_mmio_write_spending,
+		vgic_v3_uaccess_read_pending, vgic_v3_uaccess_write_pending, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ICPENDR,
-		vgic_mmio_read_pending, vgic_mmio_write_cpending, 1,
+		vgic_mmio_read_pending, vgic_mmio_write_cpending,
+		vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ISACTIVER,
-		vgic_mmio_read_active, vgic_mmio_write_sactive, 1,
+		vgic_mmio_read_active, vgic_mmio_write_sactive, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ICACTIVER,
-		vgic_mmio_read_active, vgic_mmio_write_cactive, 1,
+		vgic_mmio_read_active, vgic_mmio_write_cactive, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_IPRIORITYR,
-		vgic_mmio_read_priority, vgic_mmio_write_priority, 8,
-		VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
+		vgic_mmio_read_priority, vgic_mmio_write_priority, NULL, NULL,
+		8, VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ITARGETSR,
-		vgic_mmio_read_raz, vgic_mmio_write_wi, 8,
+		vgic_mmio_read_raz, vgic_mmio_write_wi, NULL, NULL, 8,
 		VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ICFGR,
-		vgic_mmio_read_config, vgic_mmio_write_config, 2,
+		vgic_mmio_read_config, vgic_mmio_write_config, NULL, NULL, 2,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_IGRPMODR,
-		vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
+		vgic_mmio_read_raz, vgic_mmio_write_wi, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_IROUTER,
-		vgic_mmio_read_irouter, vgic_mmio_write_irouter, 64,
+		vgic_mmio_read_irouter, vgic_mmio_write_irouter, NULL, NULL, 64,
 		VGIC_ACCESS_64bit | VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_LENGTH(GICD_IDREGS,
 		vgic_mmio_read_v3_idregs, vgic_mmio_write_wi, 48,
@@ -451,11 +511,13 @@ static const struct vgic_register_region vgic_v3_sgibase_registers[] = {
 	REGISTER_DESC_WITH_LENGTH(GICR_ICENABLER0,
 		vgic_mmio_read_enable, vgic_mmio_write_cenable, 4,
 		VGIC_ACCESS_32bit),
-	REGISTER_DESC_WITH_LENGTH(GICR_ISPENDR0,
-		vgic_mmio_read_pending, vgic_mmio_write_spending, 4,
+	REGISTER_DESC_WITH_LENGTH_UACCESS(GICR_ISPENDR0,
+		vgic_mmio_read_pending, vgic_mmio_write_spending,
+		vgic_v3_uaccess_read_pending, vgic_v3_uaccess_write_pending, 4,
 		VGIC_ACCESS_32bit),
-	REGISTER_DESC_WITH_LENGTH(GICR_ICPENDR0,
-		vgic_mmio_read_pending, vgic_mmio_write_cpending, 4,
+	REGISTER_DESC_WITH_LENGTH_UACCESS(GICR_ICPENDR0,
+		vgic_mmio_read_pending, vgic_mmio_write_cpending,
+		vgic_mmio_read_raz, vgic_mmio_write_wi, 4,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_LENGTH(GICR_ISACTIVER0,
 		vgic_mmio_read_active, vgic_mmio_write_sactive, 4,
diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index e18b30d..31f85df 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -468,6 +468,73 @@ static bool check_region(const struct vgic_register_region *region,
 	return false;
 }
 
+static const struct vgic_register_region *
+vgic_get_mmio_region(struct vgic_io_device *iodev, gpa_t addr, int len)
+{
+	const struct vgic_register_region *region;
+
+	region = vgic_find_mmio_region(iodev->regions, iodev->nr_regions,
+				       addr - iodev->base_addr);
+	if (!region || !check_region(region, addr, len))
+		return NULL;
+
+	return region;
+}
+
+static int vgic_uaccess_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
+			     gpa_t addr, u32 *val)
+{
+	struct vgic_io_device *iodev = kvm_to_vgic_iodev(dev);
+	const struct vgic_register_region *region;
+	struct kvm_vcpu *r_vcpu;
+
+	region = vgic_get_mmio_region(iodev, addr, sizeof(u32));
+	if (!region) {
+		*val = 0;
+		return 0;
+	}
+
+	r_vcpu = iodev->redist_vcpu ? iodev->redist_vcpu : vcpu;
+	if (region->uaccess_read)
+		*val = region->uaccess_read(r_vcpu, addr, sizeof(u32));
+	else
+		*val = region->read(r_vcpu, addr, sizeof(u32));
+
+	return 0;
+}
+
+static int vgic_uaccess_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
+			      gpa_t addr, const u32 *val)
+{
+	struct vgic_io_device *iodev = kvm_to_vgic_iodev(dev);
+	const struct vgic_register_region *region;
+	struct kvm_vcpu *r_vcpu;
+
+	region = vgic_get_mmio_region(iodev, addr, sizeof(u32));
+	if (!region)
+		return 0;
+
+	r_vcpu = iodev->redist_vcpu ? iodev->redist_vcpu : vcpu;
+	if (region->uaccess_write)
+		region->uaccess_write(r_vcpu, addr, sizeof(u32), *val);
+	else
+		region->write(r_vcpu, addr, sizeof(u32), *val);
+
+	return 0;
+}
+
+/*
+ * Userland access to VGIC registers.
+ */
+int vgic_uaccess(struct kvm_vcpu *vcpu, struct vgic_io_device *dev,
+		 bool is_write, int offset, u32 *val)
+{
+	if (is_write)
+		return vgic_uaccess_write(vcpu, &dev->dev, offset, val);
+	else
+		return vgic_uaccess_read(vcpu, &dev->dev, offset, val);
+}
+
 static int dispatch_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
 			      gpa_t addr, int len, void *val)
 {
@@ -475,9 +542,8 @@ static int dispatch_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
 	const struct vgic_register_region *region;
 	unsigned long data = 0;
 
-	region = vgic_find_mmio_region(iodev->regions, iodev->nr_regions,
-				       addr - iodev->base_addr);
-	if (!region || !check_region(region, addr, len)) {
+	region = vgic_get_mmio_region(iodev, addr, len);
+	if (!region) {
 		memset(val, 0, len);
 		return 0;
 	}
@@ -508,14 +574,10 @@ static int dispatch_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
 	const struct vgic_register_region *region;
 	unsigned long data = vgic_data_mmio_bus_to_host(val, len);
 
-	region = vgic_find_mmio_region(iodev->regions, iodev->nr_regions,
-				       addr - iodev->base_addr);
+	region = vgic_get_mmio_region(iodev, addr, len);
 	if (!region)
 		return 0;
 
-	if (!check_region(region, addr, len))
-		return 0;
-
 	switch (iodev->iodev_type) {
 	case IODEV_CPUIF:
 		region->write(vcpu, addr, len, data);
diff --git a/virt/kvm/arm/vgic/vgic-mmio.h b/virt/kvm/arm/vgic/vgic-mmio.h
index 4c34d39..97e6df7 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.h
+++ b/virt/kvm/arm/vgic/vgic-mmio.h
@@ -34,6 +34,10 @@ struct vgic_register_region {
 			      gpa_t addr, unsigned int len,
 			      unsigned long val);
 	};
+	unsigned long (*uaccess_read)(struct kvm_vcpu *vcpu, gpa_t addr,
+				      unsigned int len);
+	void (*uaccess_write)(struct kvm_vcpu *vcpu, gpa_t addr,
+			      unsigned int len, unsigned long val);
 };
 
 extern struct kvm_io_device_ops kvm_io_gic_ops;
@@ -86,6 +90,18 @@ extern struct kvm_io_device_ops kvm_io_gic_ops;
 		.write = wr,						\
 	}
 
+#define REGISTER_DESC_WITH_LENGTH_UACCESS(off, rd, wr, urd, uwr, length, acc) \
+	{								\
+		.reg_offset = off,					\
+		.bits_per_irq = 0,					\
+		.len = length,						\
+		.access_flags = acc,					\
+		.read = rd,						\
+		.write = wr,						\
+		.uaccess_read = urd,					\
+		.uaccess_write = uwr,					\
+	}
+
 int kvm_vgic_register_mmio_region(struct kvm *kvm, struct kvm_vcpu *vcpu,
 				  struct vgic_register_region *reg_desc,
 				  struct vgic_io_device *region,
@@ -158,6 +174,9 @@ void vgic_mmio_write_config(struct kvm_vcpu *vcpu, gpa_t addr,
 			    unsigned int len, unsigned long val);
 
+int vgic_uaccess(struct kvm_vcpu *vcpu, struct vgic_io_device *dev,
+		 bool is_write, int offset, u32 *val);
+
 unsigned int vgic_v2_init_dist_iodev(struct vgic_io_device *dev);
 
 unsigned int vgic_v3_init_dist_iodev(struct vgic_io_device *dev);
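
In the other direction, the semantics this patch gives a userspace write
of GICD_ISPENDR differ from a guest write: the value is treated as the
full latched pending state (clear bits drop soft_pending, and pending
too unless the line level still holds the interrupt up), and GICD_ICPENDR
is RAZ/WI for userspace. A minimal sketch of the restore path, with the
same assumed attr encoding and invented helper names as in the earlier
example:

  /* Illustration only -- not part of this patch.  A userspace write of
   * GICD_ISPENDR lands in vgic_v3_uaccess_write_pending().
   */
  static void vgic_v3_dist_write(int vgic_fd, uint64_t mpidr, uint32_t offset,
  			       uint32_t val)
  {
  	struct kvm_device_attr attr = {
  		.group = KVM_DEV_ARM_VGIC_GRP_DIST_REGS,
  		.attr  = (mpidr << 32) | offset,
  		.addr  = (uint64_t)(uintptr_t)&val,
  	};

  	ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);
  }

  static void restore_spi_pending(int vgic_fd, uint64_t mpidr, unsigned int spi,
  				uint32_t saved)
  {
  	vgic_v3_dist_write(vgic_fd, mpidr, GICD_ISPENDR + 4 * (spi / 32), saved);
  }

Since GICD_ICPENDR reads as zero and ignores writes from userspace in
this patch, the single ISPENDR write above restores the whole latched
state; the line level is restored through the separate interface
described in arm-vgic-v3.txt.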