From patchwork Tue Dec 17 05:30:00 2013
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 22561
From: Christoffer Dall <christoffer.dall@linaro.org>
To: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: linaro-kernel@lists.linaro.org, patches@linaro.org, Christoffer Dall
Subject: [PATCH v5 09/10] KVM: arm-vgic: Add GICD_SPENDSGIR and GICD_CPENDSGIR handlers
Date: Mon, 16 Dec 2013 21:30:00 -0800
Message-Id: <1387258201-8738-10-git-send-email-christoffer.dall@linaro.org>
X-Mailer: git-send-email 1.8.5
In-Reply-To: <1387258201-8738-1-git-send-email-christoffer.dall@linaro.org>
References: <1387258201-8738-1-git-send-email-christoffer.dall@linaro.org>

Handle MMIO accesses to these two registers, supporting both the case
where a VM reads or writes either register and the case where user
space reads/writes these registers to save/restore the VGIC state.

Note that the added complexity compared to simple set/clear enable
registers stems from the bookkeeping of source cpu ids. It may be
possible to change the underlying data structure to reduce the
complexity, but since this code is not in the critical path at all,
this will do.

Also note that reading this register from a live guest will not be
accurate compared to real hardware, because some state may be living
in the CPU LRs, and the only way to give a consistent read would be to
force stop all the VCPUs and request them to unqueue the LR state onto
the distributor. Until we have an actual user that reads this register
from a live guest, we can live with the difference.

Acked-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
 virt/kvm/arm/vgic.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 66 insertions(+), 4 deletions(-)

Changelog[v3]:
 - Renamed read/write SGI set/clear functions
 - Rely on unqueuing of interrupts from LRs instead of reading LRs
   directly
 - Deduplicate code
Changelog[v2]:
 - Use struct kvm_exit_mmio accessors for ->data field.
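For readers unfamiliar with the layout: each GICD_{S,C}PENDSGIRn word
covers four SGIs, one byte per SGI, and each set bit within a byte
names a source CPU that has the SGI pending for this VCPU. The
following is a minimal standalone sketch of the read path (not kernel
code); sgi_sources[] is a hypothetical stand-in for the kernel's
dist->irq_sgi_sources[vcpu_id][] bookkeeping:

#include <stdio.h>
#include <stdint.h>

static uint8_t sgi_sources[16];	/* 16 SGIs, one source byte each */

/* Emulate a 32-bit read of GICD_SPENDSGIRn at byte offset 'offset' */
static uint32_t read_pend_sgi_word(unsigned offset)
{
	unsigned min_sgi = offset & ~0x3;	/* one SGI per byte */
	uint32_t reg = 0;
	unsigned sgi;

	for (sgi = min_sgi; sgi <= min_sgi + 3; sgi++)
		reg |= (uint32_t)sgi_sources[sgi] << (8 * (sgi - min_sgi));
	return reg;
}

int main(void)
{
	sgi_sources[5] = 0x03;	/* SGI 5 pending from CPUs 0 and 1 */
	/* The word at offset 4 covers SGIs 4-7; expect 0x00000300 */
	printf("GICD_SPENDSGIR1 = 0x%08x\n", read_pend_sgi_word(4));
	return 0;
}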
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index d08ba28..e59aaa4 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -663,18 +663,80 @@ static void vgic_unqueue_irqs(struct kvm_vcpu *vcpu)
 	}
 }
 
-static bool handle_mmio_sgi_clear(struct kvm_vcpu *vcpu,
-				  struct kvm_exit_mmio *mmio,
-				  phys_addr_t offset)
+/* Handle reads of GICD_CPENDSGIRn and GICD_SPENDSGIRn */
+static bool read_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
+					struct kvm_exit_mmio *mmio,
+					phys_addr_t offset)
 {
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+	int sgi;
+	int min_sgi = (offset & ~0x3);
+	int max_sgi = min_sgi + 3;
+	int vcpu_id = vcpu->vcpu_id;
+	u32 reg = 0;
+
+	/* Copy source SGIs from distributor side */
+	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
+		int shift = 8 * (sgi - min_sgi);
+		reg |= (u32)dist->irq_sgi_sources[vcpu_id][sgi] << shift;
+	}
+
+	mmio_data_write(mmio, ~0, reg);
 	return false;
 }
 
+static bool write_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
+					 struct kvm_exit_mmio *mmio,
+					 phys_addr_t offset, bool set)
+{
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+	int sgi;
+	int min_sgi = (offset & ~0x3);
+	int max_sgi = min_sgi + 3;
+	int vcpu_id = vcpu->vcpu_id;
+	u32 reg;
+	bool updated = false;
+
+	reg = mmio_data_read(mmio, ~0);
+
+	/* Set or clear pending SGIs on the distributor */
+	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
+		u8 mask = reg >> (8 * (sgi - min_sgi));
+		if (set) {
+			if ((dist->irq_sgi_sources[vcpu_id][sgi] & mask) != mask)
+				updated = true;
+			dist->irq_sgi_sources[vcpu_id][sgi] |= mask;
+		} else {
+			if (dist->irq_sgi_sources[vcpu_id][sgi] & mask)
+				updated = true;
+			dist->irq_sgi_sources[vcpu_id][sgi] &= ~mask;
+		}
+	}
+
+	if (updated)
+		vgic_update_state(vcpu->kvm);
+
+	return updated;
+}
+
 static bool handle_mmio_sgi_set(struct kvm_vcpu *vcpu,
 				struct kvm_exit_mmio *mmio,
 				phys_addr_t offset)
 {
-	return false;
+	if (!mmio->is_write)
+		return read_set_clear_sgi_pend_reg(vcpu, mmio, offset);
+	else
+		return write_set_clear_sgi_pend_reg(vcpu, mmio, offset, true);
+}
+
+static bool handle_mmio_sgi_clear(struct kvm_vcpu *vcpu,
+				  struct kvm_exit_mmio *mmio,
+				  phys_addr_t offset)
+{
+	if (!mmio->is_write)
+		return read_set_clear_sgi_pend_reg(vcpu, mmio, offset);
+	else
+		return write_set_clear_sgi_pend_reg(vcpu, mmio, offset, false);
 }
 
 /*
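As a sanity check of the write semantics above (SPENDSGIR writes OR in
source bits, CPENDSGIR writes clear them, and the handler only reports
an update when distributor state actually changed), here is a hedged
standalone sketch; write_sgi_word() is a made-up stand-in for
write_set_clear_sgi_pend_reg() operating on a local array, not the
kernel function:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static uint8_t sgi_sources[16];	/* stand-in for dist->irq_sgi_sources */

static bool write_sgi_word(unsigned offset, uint32_t reg, bool set)
{
	unsigned min_sgi = offset & ~0x3, sgi;
	bool updated = false;

	for (sgi = min_sgi; sgi <= min_sgi + 3; sgi++) {
		uint8_t mask = reg >> (8 * (sgi - min_sgi));
		if (set) {
			if ((sgi_sources[sgi] & mask) != mask)
				updated = true;
			sgi_sources[sgi] |= mask;
		} else {
			if (sgi_sources[sgi] & mask)
				updated = true;
			sgi_sources[sgi] &= ~mask;
		}
	}
	return updated;
}

int main(void)
{
	/* SPENDSGIR-style write: mark SGI 1 pending from CPU 2 */
	assert(write_sgi_word(0, 0x4 << 8, true));
	assert(sgi_sources[1] == 0x04);
	/* The same write again changes nothing, so no state update */
	assert(!write_sgi_word(0, 0x4 << 8, true));
	/* CPENDSGIR-style write: clear the source bit again */
	assert(write_sgi_word(0, 0x4 << 8, false));
	assert(sgi_sources[1] == 0);
	return 0;
}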