From patchwork Fri Apr 15 17:11:22 2016
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 65948
From: Andre Przywara
To: Christoffer Dall, Marc Zyngier
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    kvm@vger.kernel.org, Eric Auger
Subject: [PATCH 11/45] KVM: arm/arm64: vgic-new: Implement kvm_vgic_vcpu_pending_irq
Date: Fri, 15 Apr 2016 18:11:22 +0100
Message-Id: <1460740316-8755-12-git-send-email-andre.przywara@arm.com>
In-Reply-To: <1460740316-8755-1-git-send-email-andre.przywara@arm.com>
References: <1460740316-8755-1-git-send-email-andre.przywara@arm.com>

From: Eric Auger

Tell KVM whether a particular VCPU has an IRQ that needs
handling in the guest. This is used to decide whether a VCPU is runnable.

Signed-off-by: Eric Auger
Signed-off-by: Andre Przywara

Changelog RFC..v1:
- return false if distributor is disabled
- add vgic_kick_vcpus() implementation

---
 include/kvm/vgic/vgic.h  |  2 ++
 virt/kvm/arm/vgic/vgic.c | 40 ++++++++++++++++++++++++++++++++++++++++
 virt/kvm/arm/vgic/vgic.h |  1 +
 3 files changed, 43 insertions(+)

--
2.7.3

diff --git a/include/kvm/vgic/vgic.h b/include/kvm/vgic/vgic.h
index 97c919c..664004f 100644
--- a/include/kvm/vgic/vgic.h
+++ b/include/kvm/vgic/vgic.h
@@ -184,6 +184,8 @@ struct vgic_cpu {
 int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int intid,
 			bool level);
 
+int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu);
+
 #define irqchip_in_kernel(k)	(!!((k)->arch.vgic.in_kernel))
 #define vgic_initialized(k)	(false)
 #define vgic_ready(k)		((k)->arch.vgic.ready)
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 13280b0..b1dd8d1 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -507,3 +507,43 @@ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
 	vgic_flush_lr_state(vcpu);
 	spin_unlock(&vcpu->arch.vgic_cpu.ap_list_lock);
 }
+
+int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
+{
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+	struct vgic_irq *irq;
+	bool pending = false;
+
+	if (!vcpu->kvm->arch.vgic.enabled)
+		return false;
+
+	spin_lock(&vgic_cpu->ap_list_lock);
+
+	list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
+		spin_lock(&irq->irq_lock);
+		pending = irq->pending && irq->enabled;
+		spin_unlock(&irq->irq_lock);
+
+		if (pending)
+			break;
+	}
+
+	spin_unlock(&vgic_cpu->ap_list_lock);
+
+	return pending;
+}
+
+void vgic_kick_vcpus(struct kvm *kvm)
+{
+	struct kvm_vcpu *vcpu;
+	int c;
+
+	/*
+	 * We've injected an interrupt, time to find out who deserves
+	 * a good kick...
+	 */
+	kvm_for_each_vcpu(c, vcpu, kvm) {
+		if (kvm_vgic_vcpu_pending_irq(vcpu))
+			kvm_vcpu_kick(vcpu);
+	}
+}
diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h
index 81b1a20..0c92cda 100644
--- a/virt/kvm/arm/vgic/vgic.h
+++ b/virt/kvm/arm/vgic/vgic.h
@@ -21,6 +21,7 @@ struct vgic_irq *vgic_get_irq(struct kvm *kvm, struct kvm_vcpu *vcpu,
 			      u32 intid);
 bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq);
+void vgic_kick_vcpus(struct kvm *kvm);
 
 void vgic_v2_process_maintenance(struct kvm_vcpu *vcpu);
 void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu);
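
[Not part of the patch above: for review context, a minimal sketch of how the
arch layer could consume kvm_vgic_vcpu_pending_irq() when deciding whether a
VCPU is runnable. The hook name, the power_off/pause fields and the placement
in arch/arm/kvm/arm.c are assumptions for illustration, not something this
patch introduces or changes.]

/*
 * Sketch only, assuming the usual kvm_arch_vcpu_runnable() hook;
 * field names below are illustrative assumptions.
 */
int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
{
	/* Runnable when the new VGIC reports a pending, enabled IRQ ... */
	bool has_irq = kvm_vgic_vcpu_pending_irq(vcpu);

	/* ... and the VCPU is neither powered off nor paused. */
	return has_irq && !vcpu->arch.power_off && !vcpu->arch.pause;
}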