From patchwork Wed Jun 25 09:28:48 2014
X-Patchwork-Submitter: Marc Zyngier <marc.zyngier@arm.com>
X-Patchwork-Id: 32465
From: Marc Zyngier <marc.zyngier@arm.com>
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Cc: Will Deacon, Catalin Marinas, Thomas Gleixner, eric.auger@linaro.org,
	Christoffer Dall
Subject: [RFC PATCH 7/9] KVM: arm: vgic: allow dynamic mapping of physical/virtual interrupts
Date: Wed, 25 Jun 2014 10:28:48 +0100
Message-Id: <1403688530-23273-8-git-send-email-marc.zyngier@arm.com>
X-Mailer: git-send-email 1.8.3.4
In-Reply-To: <1403688530-23273-1-git-send-email-marc.zyngier@arm.com>
References: <1403688530-23273-1-git-send-email-marc.zyngier@arm.com>

In order to feed physical interrupts to a guest, we need to be able to
establish the virtual-physical mapping between the two worlds. As we
try to keep the injection interface simple, we find out what the
physical interrupt is (if any) only when we actually build the LR.

The mapping is kept in an rb-tree, indexed by virtual interrupt number.
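For illustration, here is a minimal sketch of how a caller might drive the
new mapping interface. The caller, the function name example_forward_irq()
and the choice of interrupt numbers are hypothetical; only
vgic_map_phys_irq(), vgic_get_phys_irq() and vgic_unmap_phys_irq() (and
their return values) come from this patch:

	/*
	 * Hypothetical usage sketch: map a physical interrupt to the
	 * virtual interrupt the guest will see, let the vgic pick the
	 * mapping up via vgic_get_phys_irq() when it builds the LR,
	 * and remove the mapping when forwarding is no longer needed.
	 */
	static int example_forward_irq(struct kvm_vcpu *vcpu,
				       int virt_irq, int phys_irq)
	{
		int ret;

		/* -EEXIST if virt_irq is already mapped, -ENOMEM on allocation failure */
		ret = vgic_map_phys_irq(vcpu, virt_irq, phys_irq);
		if (ret)
			return ret;

		/* ... inject virt_irq as usual; the LR will carry the HW bit ... */

		/* drop the mapping again; -ENOENT if it was never established */
		return vgic_unmap_phys_irq(vcpu, virt_irq, phys_irq);
	}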
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 include/kvm/arm_vgic.h             | 13 ++++++
 include/linux/irqchip/arm-gic-v3.h |  3 ++
 include/linux/irqchip/arm-gic.h    |  1 +
 virt/kvm/arm/vgic-v2.c             | 14 +++++-
 virt/kvm/arm/vgic-v3.c             | 22 +++++++++-
 virt/kvm/arm/vgic.c                | 88 ++++++++++++++++++++++++++++++++++++++
 6 files changed, 138 insertions(+), 3 deletions(-)

diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 82e00a5..5f61dfa 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -134,6 +134,12 @@ struct vgic_vm_ops {
 	int	(*vgic_init)(struct kvm *kvm, const struct vgic_params *params);
 };
 
+struct irq_phys_map {
+	struct rb_node		node;
+	u32			virt_irq;
+	u32			phys_irq;
+};
+
 struct vgic_dist {
 #ifdef CONFIG_KVM_ARM_VGIC
 	spinlock_t		lock;
@@ -190,6 +196,8 @@ struct vgic_dist {
 	unsigned long		irq_pending_on_cpu;
 
 	struct vgic_vm_ops	vm_ops;
+
+	struct rb_root		irq_phys_map;
 #endif
 };
 
@@ -237,6 +245,8 @@ struct vgic_cpu {
 		struct vgic_v2_cpu_if	vgic_v2;
 		struct vgic_v3_cpu_if	vgic_v3;
 	};
+
+	struct rb_root	irq_phys_map;
 #endif
 };
 
@@ -265,6 +275,9 @@ void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg);
 int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu);
 bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		      struct kvm_exit_mmio *mmio);
+int vgic_map_phys_irq(struct kvm_vcpu *vcpu, int virt_irq, int phys_irq);
+int vgic_get_phys_irq(struct kvm_vcpu *vcpu, int virt_irq);
+int vgic_unmap_phys_irq(struct kvm_vcpu *vcpu, int virt_irq, int phys_irq);
 
 #define irqchip_in_kernel(k)	(!!((k)->arch.vgic.in_kernel))
 #define vgic_initialized(k)	((k)->arch.vgic.ready)
diff --git a/include/linux/irqchip/arm-gic-v3.h b/include/linux/irqchip/arm-gic-v3.h
index 0e74c19..7753d18 100644
--- a/include/linux/irqchip/arm-gic-v3.h
+++ b/include/linux/irqchip/arm-gic-v3.h
@@ -210,9 +210,12 @@
 #define ICH_LR_EOI			(1UL << 41)
 #define ICH_LR_GROUP			(1UL << 60)
+#define ICH_LR_HW			(1UL << 61)
 #define ICH_LR_STATE			(3UL << 62)
 #define ICH_LR_PENDING_BIT		(1UL << 62)
 #define ICH_LR_ACTIVE_BIT		(1UL << 63)
+#define ICH_LR_PHYS_ID_SHIFT		32
+#define ICH_LR_PHYS_ID_MASK		(0x3ffUL << ICH_LR_PHYS_ID_SHIFT)
 
 #define ICH_MISR_EOI			(1 << 0)
 #define ICH_MISR_U			(1 << 1)
 
diff --git a/include/linux/irqchip/arm-gic.h b/include/linux/irqchip/arm-gic.h
index ffe3911..18c4e29 100644
--- a/include/linux/irqchip/arm-gic.h
+++ b/include/linux/irqchip/arm-gic.h
@@ -64,6 +64,7 @@
 #define GICH_LR_PENDING_BIT	(1 << 28)
 #define GICH_LR_ACTIVE_BIT	(1 << 29)
 #define GICH_LR_EOI		(1 << 19)
+#define GICH_LR_HW		(1 << 31)
 
 #define GICH_VMCR_CTRL_SHIFT	0
 #define GICH_VMCR_CTRL_MASK	(0x21f << GICH_VMCR_CTRL_SHIFT)
diff --git a/virt/kvm/arm/vgic-v2.c b/virt/kvm/arm/vgic-v2.c
index 4091078..6764d44 100644
--- a/virt/kvm/arm/vgic-v2.c
+++ b/virt/kvm/arm/vgic-v2.c
@@ -58,7 +58,9 @@ static struct vgic_lr vgic_v2_get_lr(const struct kvm_vcpu *vcpu, int lr)
 static void vgic_v2_set_lr(struct kvm_vcpu *vcpu, int lr,
 			   struct vgic_lr lr_desc)
 {
-	u32 lr_val = (lr_desc.source << GICH_LR_PHYSID_CPUID_SHIFT) | lr_desc.irq;
+	u32 lr_val;
+
+	lr_val = lr_desc.irq;
 
 	if (lr_desc.state & LR_STATE_PENDING)
 		lr_val |= GICH_LR_PENDING_BIT;
@@ -67,6 +69,16 @@ static void vgic_v2_set_lr(struct kvm_vcpu *vcpu, int lr,
 	if (lr_desc.state & LR_EOI_INT)
 		lr_val |= GICH_LR_EOI;
 
+	if (lr_desc.irq < VGIC_NR_SGIS) {
+		lr_val |= (lr_desc.source << GICH_LR_PHYSID_CPUID_SHIFT);
+	} else {
+		int phys_irq = vgic_get_phys_irq(vcpu, lr_desc.irq);
+		if (phys_irq >= 0) {
+			lr_val |= ((u32)phys_irq) << GICH_LR_PHYSID_CPUID_SHIFT;
+			lr_val |= GICH_LR_HW;
+		}
+	}
+
 	vcpu->arch.vgic_cpu.vgic_v2.vgic_lr[lr] = lr_val;
 }
 
diff --git a/virt/kvm/arm/vgic-v3.c b/virt/kvm/arm/vgic-v3.c
index d26d12f..41dee6c 100644
--- a/virt/kvm/arm/vgic-v3.c
+++ b/virt/kvm/arm/vgic-v3.c
@@ -116,6 +116,15 @@ static void vgic_v3_on_v3_set_lr(struct kvm_vcpu *vcpu, int lr,
 
 	lr_val |= sync_lr_val(lr_desc.state);
 
+	if (lr_desc.irq >= VGIC_NR_SGIS) {
+		int phys_irq;
+		phys_irq = vgic_get_phys_irq(vcpu, lr_desc.irq);
+		if (phys_irq >= 0) {
+			lr_val |= ((u64)phys_irq) << ICH_LR_PHYS_ID_SHIFT;
+			lr_val |= ICH_LR_HW;
+		}
+	}
+
 	vcpu->arch.vgic_cpu.vgic_v3.vgic_lr[LR_INDEX(lr)] = lr_val;
 }
 
@@ -126,10 +135,19 @@ static void vgic_v2_on_v3_set_lr(struct kvm_vcpu *vcpu, int lr,
 
 	lr_val = lr_desc.irq;
 
-	lr_val |= (u32)lr_desc.source << GICH_LR_PHYSID_CPUID_SHIFT;
-
 	lr_val |= sync_lr_val(lr_desc.state);
 
+	if (lr_desc.irq < VGIC_NR_SGIS) {
+		lr_val |= (u32)lr_desc.source << GICH_LR_PHYSID_CPUID_SHIFT;
+	} else {
+		int phys_irq;
+		phys_irq = vgic_get_phys_irq(vcpu, lr_desc.irq);
+		if (phys_irq >= 0) {
+			lr_val |= ((u64)phys_irq) << ICH_LR_PHYS_ID_SHIFT;
+			lr_val |= ICH_LR_HW;
+		}
+	}
+
 	vcpu->arch.vgic_cpu.vgic_v3.vgic_lr[LR_INDEX(lr)] = lr_val;
 }
 
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index e3c7189..c404682c 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -24,6 +24,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
@@ -1163,6 +1164,93 @@ static irqreturn_t vgic_maintenance_handler(int irq, void *data)
 	return IRQ_HANDLED;
 }
 
+static struct rb_root *vgic_get_irq_phys_map(struct kvm_vcpu *vcpu,
+					     int virt_irq)
+{
+	if (virt_irq < VGIC_NR_PRIVATE_IRQS)
+		return &vcpu->arch.vgic_cpu.irq_phys_map;
+	else
+		return &vcpu->kvm->arch.vgic.irq_phys_map;
+}
+
+int vgic_map_phys_irq(struct kvm_vcpu *vcpu, int virt_irq, int phys_irq)
+{
+	struct rb_root *root = vgic_get_irq_phys_map(vcpu, virt_irq);
+	struct rb_node **new = &root->rb_node, *parent = NULL;
+	struct irq_phys_map *new_map;
+
+	/* Boilerplate rb_tree code */
+	while (*new) {
+		struct irq_phys_map *this;
+
+		this = container_of(*new, struct irq_phys_map, node);
+		parent = *new;
+		if (this->virt_irq < virt_irq)
+			new = &(*new)->rb_left;
+		else if (this->virt_irq > virt_irq)
+			new = &(*new)->rb_right;
+		else
+			return -EEXIST;
+	}
+
+	new_map = kzalloc(sizeof(*new_map), GFP_KERNEL);
+	if (!new_map)
+		return -ENOMEM;
+
+	new_map->virt_irq = virt_irq;
+	new_map->phys_irq = phys_irq;
+
+	rb_link_node(&new_map->node, parent, new);
+	rb_insert_color(&new_map->node, root);
+
+	return 0;
+}
+
+static struct irq_phys_map *vgic_irq_map_search(struct kvm_vcpu *vcpu,
+						int virt_irq)
+{
+	struct rb_root *root = vgic_get_irq_phys_map(vcpu, virt_irq);
+	struct rb_node *node = root->rb_node;
+
+	while (node) {
+		struct irq_phys_map *this;
+
+		this = container_of(node, struct irq_phys_map, node);
+
+		if (this->virt_irq < virt_irq)
+			node = node->rb_left;
+		else if (this->virt_irq > virt_irq)
+			node = node->rb_right;
+		else
+			return this;
+	}
+
+	return NULL;
+}
+
+int vgic_get_phys_irq(struct kvm_vcpu *vcpu, int virt_irq)
+{
+	struct irq_phys_map *map = vgic_irq_map_search(vcpu, virt_irq);
+
+	if (map)
+		return map->phys_irq;
+
+	return -ENOENT;
+}
+
+int vgic_unmap_phys_irq(struct kvm_vcpu *vcpu, int virt_irq, int phys_irq)
+{
+	struct irq_phys_map *map = vgic_irq_map_search(vcpu, virt_irq);
+
+	if (map && map->phys_irq == phys_irq) {
+		rb_erase(&map->node, vgic_get_irq_phys_map(vcpu, virt_irq));
+		kfree(map);
+		return 0;
+	}
+
+	return -ENOENT;
+}
+
 static void vgic_vcpu_free_maps(struct vgic_cpu *vgic_cpu)
 {
 	kfree(vgic_cpu->pending_shared);