From patchwork Fri Sep 4 19:40:50 2015
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 53145
From: Christoffer Dall <christoffer.dall@linaro.org>
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 8/8] arm/arm64: KVM: Support edge-triggered forwarded interrupts
Date: Fri, 4 Sep 2015 21:40:50 +0200
Message-Id: <1441395650-19663-9-git-send-email-christoffer.dall@linaro.org>
In-Reply-To: <1441395650-19663-1-git-send-email-christoffer.dall@linaro.org>
References: <1441395650-19663-1-git-send-email-christoffer.dall@linaro.org>
Cc: Marc Zyngier, Christoffer Dall <christoffer.dall@linaro.org>, kvm@vger.kernel.org

We mark edge-triggered interrupts with the HW bit set as queued to
prevent the VGIC code from injecting LRs with both the Active and
Pending bits set at the same time while also setting the HW bit,
because the hardware does not support this.
However, this means that we must also clear the queued flag when we
sync back an LR where the state on the physical distributor went from
active to inactive because the guest deactivated the interrupt. At this
point we must also check if the interrupt is pending on the
distributor, and tell the VGIC to queue it again if it is.

Since these actions on the sync path are extremely close to those for
level-triggered interrupts, rename process_level_irq to
process_queued_irq, allowing it to cater for both cases.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 virt/kvm/arm/vgic.c | 40 ++++++++++++++++++++++------------------
 1 file changed, 22 insertions(+), 18 deletions(-)

diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index f4ea950..5942ce9 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1322,13 +1322,10 @@ epilog:
 	}
 }
 
-static int process_level_irq(struct kvm_vcpu *vcpu, int lr, struct vgic_lr vlr)
+static int process_queued_irq(struct kvm_vcpu *vcpu,
+				   int lr, struct vgic_lr vlr)
 {
-	int level_pending = 0;
-
-	vlr.state = 0;
-	vlr.hwirq = 0;
-	vgic_set_lr(vcpu, lr, vlr);
+	int pending = 0;
 
 	/*
 	 * If the IRQ was EOIed (called from vgic_process_maintenance) or it
@@ -1344,26 +1341,35 @@ static int process_level_irq(struct kvm_vcpu *vcpu, int lr, struct vgic_lr vlr)
 		vgic_dist_irq_clear_soft_pend(vcpu, vlr.irq);
 
 	/*
-	 * Tell the gic to start sampling the line of this interrupt again.
+	 * Tell the gic to start sampling this interrupt again.
 	 */
 	vgic_irq_clear_queued(vcpu, vlr.irq);
 
 	/* Any additional pending interrupt? */
-	if (vgic_dist_irq_get_level(vcpu, vlr.irq)) {
-		vgic_cpu_irq_set(vcpu, vlr.irq);
-		level_pending = 1;
+	if (vgic_irq_is_edge(vcpu, vlr.irq)) {
+		BUG_ON(!(vlr.state & LR_HW));
+		pending = vgic_dist_irq_is_pending(vcpu, vlr.irq);
 	} else {
-		vgic_dist_irq_clear_pending(vcpu, vlr.irq);
-		vgic_cpu_irq_clear(vcpu, vlr.irq);
+		if (vgic_dist_irq_get_level(vcpu, vlr.irq)) {
+			vgic_cpu_irq_set(vcpu, vlr.irq);
+			pending = 1;
+		} else {
+			vgic_dist_irq_clear_pending(vcpu, vlr.irq);
+			vgic_cpu_irq_clear(vcpu, vlr.irq);
+		}
 	}
 
 	/*
 	 * Despite being EOIed, the LR may not have
 	 * been marked as empty.
 	 */
+	vlr.state = 0;
+	vlr.hwirq = 0;
+	vgic_set_lr(vcpu, lr, vlr);
+
 	vgic_sync_lr_elrsr(vcpu, lr, vlr);
 
-	return level_pending;
+	return pending;
 }
 
 static bool vgic_process_maintenance(struct kvm_vcpu *vcpu)
@@ -1400,7 +1406,7 @@ static bool vgic_process_maintenance(struct kvm_vcpu *vcpu)
 					     vlr.irq - VGIC_NR_PRIVATE_IRQS);
 
 			spin_lock(&dist->lock);
-			level_pending |= process_level_irq(vcpu, lr, vlr);
+			level_pending |= process_queued_irq(vcpu, lr, vlr);
 			spin_unlock(&dist->lock);
 		}
 	}
@@ -1422,7 +1428,7 @@ static bool vgic_process_maintenance(struct kvm_vcpu *vcpu)
 /*
  * Save the physical active state, and reset it to inactive.
  *
- * Return true if there's a pending level triggered interrupt line to queue.
+ * Return true if there's a pending forwarded interrupt to queue.
  */
 static bool vgic_sync_hwirq(struct kvm_vcpu *vcpu, int lr, struct vgic_lr vlr)
 {
@@ -1456,9 +1462,7 @@ static bool vgic_sync_hwirq(struct kvm_vcpu *vcpu, int lr, struct vgic_lr vlr)
 		return false;
 	}
 
-	/* Mapped edge-triggered interrupts not yet supported. */
-	WARN_ON(vgic_irq_is_edge(vcpu, vlr.irq));
-	return process_level_irq(vcpu, lr, vlr);
+	return process_queued_irq(vcpu, lr, vlr);
 }
 
 /* Sync back the VGIC state after a guest run */
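For readers less familiar with the LR sync path, below is a self-contained,
user-space sketch of the decision that process_queued_irq() makes in the
patch above: on sync, clear the queued flag, then re-queue an edge-triggered
forwarded interrupt only if a new edge has been latched as pending, while a
level-triggered interrupt is re-queued as long as its line stays asserted.
The struct fields and helper names are simplified stand-ins invented for
illustration, not the kernel's actual VGIC data structures.

/*
 * Illustrative model only; the names below are simplified stand-ins
 * for the VGIC state touched by process_queued_irq().
 */
#include <stdbool.h>
#include <stdio.h>

struct model_irq {
	bool edge;		/* edge-triggered forwarded (HW) vs level-triggered */
	bool dist_pending;	/* pending state latched in the distributor */
	bool line_level;	/* current level of the input line */
	bool queued;		/* "already occupies an LR" flag */
};

/* Returns true if the interrupt must be queued into an LR again. */
static bool sync_queued_irq(struct model_irq *irq)
{
	/* The guest deactivated the interrupt: let it be sampled again. */
	irq->queued = false;

	if (irq->edge) {
		/* Edge case: re-queue only if a new edge was latched. */
		return irq->dist_pending;
	}

	/* Level case: re-queue while the line is still asserted. */
	if (irq->line_level)
		return true;

	irq->dist_pending = false;
	return false;
}

int main(void)
{
	struct model_irq lvl  = { .edge = false, .line_level = true,   .queued = true };
	struct model_irq edge = { .edge = true,  .dist_pending = true, .queued = true };

	printf("level irq re-queued: %d\n", sync_queued_irq(&lvl));  /* prints 1 */
	printf("edge irq re-queued:  %d\n", sync_queued_irq(&edge)); /* prints 1 */
	return 0;
}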