From patchwork Thu Dec 12 19:55:47 2013
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 22309
From: Christoffer Dall <christoffer.dall@linaro.org>
To: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: linaro-kernel@lists.linaro.org, patches@linaro.org, Christoffer Dall <christoffer.dall@linaro.org>
Subject: [PATCH 08/10] KVM: arm-vgic: Support unqueueing of LRs to the dist
Date: Thu, 12 Dec 2013 11:55:47 -0800
Message-Id: <1386878149-13397-9-git-send-email-christoffer.dall@linaro.org>
X-Mailer: git-send-email 1.8.4.3
In-Reply-To: <1386878149-13397-1-git-send-email-christoffer.dall@linaro.org>
References: <1386878149-13397-1-git-send-email-christoffer.dall@linaro.org>

To properly access the VGIC state from user space it is very impractical
to have to loop through all the LRs in all register access functions.
Instead, support moving all pending state from the LRs to the distributor,
but leave LRs holding active state alone.

Note that to accurately present the active and pending state to VCPUs
reading these distributor registers from a live VM, we would have to stop
all VCPUs other than the calling VCPU, ask each of them to unqueue its LR
state onto the distributor, and add fields to track active state on the
distributor side as well.  We don't have any users of such functionality
yet and there are other inaccuracies in the GIC emulation, so don't
provide accurate synchronized access to this state just yet.  However,
when the time comes, having this function should help.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
Changelog[v4]:
 - Reworked vgic_unqueue_irqs to explicitly check for the active bit and
   to not use __test_and_clear_bit.
Changelog[v3]:
 - New patch in series

 virt/kvm/arm/vgic.c | 86 +++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 81 insertions(+), 5 deletions(-)

diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 88599b5..8067e76 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -589,6 +589,78 @@ static bool handle_mmio_sgi_reg(struct kvm_vcpu *vcpu,
 	return false;
 }
 
+#define LR_CPUID(lr)	\
+	(((lr) & GICH_LR_PHYSID_CPUID) >> GICH_LR_PHYSID_CPUID_SHIFT)
+#define LR_IRQID(lr)	\
+	((lr) & GICH_LR_VIRTUALID)
+
+static void vgic_retire_lr(int lr_nr, int irq, struct vgic_cpu *vgic_cpu)
+{
+	clear_bit(lr_nr, vgic_cpu->lr_used);
+	vgic_cpu->vgic_lr[lr_nr] &= ~GICH_LR_STATE;
+	vgic_cpu->vgic_irq_lr_map[irq] = LR_EMPTY;
+}
+
+/**
+ * vgic_unqueue_irqs - move pending IRQs from LRs to the distributor
+ * @vcpu: Pointer to the VCPU whose LRs should be unqueued
+ *
+ * Move any pending IRQs that have already been assigned to LRs back to the
+ * emulated distributor state so that the complete emulated state can be read
+ * from the main emulation structures without investigating the LRs.
+ *
+ * Note that IRQs in the active state in the LRs get their pending state moved
+ * to the distributor but the active state stays in the LRs, because we don't
+ * track the active state on the distributor side.
+ */
+static void vgic_unqueue_irqs(struct kvm_vcpu *vcpu)
+{
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+	int vcpu_id = vcpu->vcpu_id;
+	int i, irq, source_cpu;
+	u32 *lr;
+
+	for_each_set_bit(i, vgic_cpu->lr_used, vgic_cpu->nr_lr) {
+		lr = &vgic_cpu->vgic_lr[i];
+		irq = LR_IRQID(*lr);
+		source_cpu = LR_CPUID(*lr);
+
+		/*
+		 * There are three options for the state bits:
+		 *
+		 * 01: pending
+		 * 10: active
+		 * 11: pending and active
+		 *
+		 * If the LR holds only an active interrupt (not pending) then
+		 * just leave it alone.
+		 */
+		if ((*lr & GICH_LR_STATE) == GICH_LR_ACTIVE_BIT)
+			continue;
+
+		/*
+		 * If the interrupt was only pending (not "active" or "pending
+		 * and active"), then the pending state is moved to the
+		 * distributor; the LR then no longer holds any useful
+		 * information and can be marked as free for other use.
+		 */
+		if ((*lr & GICH_LR_STATE) == GICH_LR_PENDING_BIT)
+			vgic_retire_lr(i, irq, vgic_cpu);
+
+		/*
+		 * Finally, reestablish the pending state on the distributor
+		 * and the CPU interface.  It may have already been pending,
+		 * but that is fine; then we are only setting a few bits that
+		 * were already set.
+		 */
+		vgic_dist_irq_set(vcpu, irq);
+		if (irq < VGIC_NR_SGIS)
+			dist->irq_sgi_sources[vcpu_id][irq] |= 1 << source_cpu;
+		vgic_update_state(vcpu->kvm);
+	}
+}
+
 static bool handle_mmio_sgi_clear(struct kvm_vcpu *vcpu,
 				  struct kvm_exit_mmio *mmio,
 				  phys_addr_t offset)
@@ -848,8 +920,6 @@ static void vgic_update_state(struct kvm *kvm)
 	}
 }
 
-#define LR_CPUID(lr)	\
-	(((lr) & GICH_LR_PHYSID_CPUID) >> GICH_LR_PHYSID_CPUID_SHIFT)
 #define MK_LR_PEND(src, irq)	\
 	(GICH_LR_PENDING_BIT | ((src) << GICH_LR_PHYSID_CPUID_SHIFT) | (irq))
 
@@ -871,9 +941,7 @@ static void vgic_retire_disabled_irqs(struct kvm_vcpu *vcpu)
 		int irq = vgic_cpu->vgic_lr[lr] & GICH_LR_VIRTUALID;
 
 		if (!vgic_irq_is_enabled(vcpu, irq)) {
-			vgic_cpu->vgic_irq_lr_map[irq] = LR_EMPTY;
-			clear_bit(lr, vgic_cpu->lr_used);
-			vgic_cpu->vgic_lr[lr] &= ~GICH_LR_STATE;
+			vgic_retire_lr(lr, irq, vgic_cpu);
 			if (vgic_irq_is_active(vcpu, irq))
 				vgic_irq_clear_active(vcpu, irq);
 		}
@@ -1675,6 +1743,14 @@ static int vgic_attr_regs_access(struct kvm_device *dev,
 		}
 	}
 
+	/*
+	 * Move all pending IRQs from the LRs on all VCPUs so the pending
+	 * state can be properly represented in the register state accessible
+	 * through this API.
+	 */
+	kvm_for_each_vcpu(c, tmp_vcpu, dev->kvm)
+		vgic_unqueue_irqs(tmp_vcpu);
+
 	offset -= r->base;
 	r->handle_mmio(vcpu, &mmio, offset);
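
For reference, a hypothetical user-space sketch (not part of this patch) of the
access path that triggers the unqueueing above: reading a distributor register
through the KVM device attribute API lands in vgic_attr_regs_access(), which
with this patch flushes pending LR state into the distributor first.  The
attribute group and offset encoding used below (KVM_DEV_ARM_VGIC_GRP_DIST_REGS,
offset passed in 'attr') are assumptions based on the rest of this series
rather than anything defined in this patch, so treat it as illustrative only.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#ifndef KVM_DEV_ARM_VGIC_GRP_DIST_REGS
#define KVM_DEV_ARM_VGIC_GRP_DIST_REGS	1	/* assumed value, defined elsewhere in this series */
#endif

/* Read one emulated distributor register from the VGIC device fd. */
static int vgic_read_dist_reg(int vgic_dev_fd, uint64_t offset, uint32_t *val)
{
	struct kvm_device_attr attr = {
		.group = KVM_DEV_ARM_VGIC_GRP_DIST_REGS,
		.attr  = offset,	/* register offset; the real encoding also carries a cpuid field */
		.addr  = (uint64_t)(unsigned long)val,	/* user buffer the kernel fills in */
	};

	/*
	 * In the kernel this reaches vgic_attr_regs_access(); with this patch
	 * applied, every VCPU's pending LR state is unqueued to the
	 * distributor before the register value is computed, so 'val' also
	 * reflects interrupts that were still sitting in list registers.
	 */
	return ioctl(vgic_dev_fd, KVM_GET_DEVICE_ATTR, &attr);
}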