From patchwork Tue Jun 11 04:51:18 2013
X-Patchwork-Submitter: Christoffer Dall <christoffer.dall@linaro.org>
X-Patchwork-Id: 17781
From: Christoffer Dall <christoffer.dall@linaro.org>
To: linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Cc: patches@linaro.org, linaro-kernel@lists.linaro.org, Christoffer Dall
Subject: [PATCH 6/7] KVM: arm-vgic: Add GICD_SPENDSGIR and GICD_CPENDSGIR handlers
Date: Mon, 10 Jun 2013 21:51:18 -0700
Message-Id: <1370926279-32532-7-git-send-email-christoffer.dall@linaro.org>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1370926279-32532-1-git-send-email-christoffer.dall@linaro.org>
References: <1370926279-32532-1-git-send-email-christoffer.dall@linaro.org>

Handle MMIO accesses to these two registers, supporting both the case
where a VM reads/writes either of the registers directly and the case
where user space reads/writes them to save/restore the VGIC state.

Note that the added complexity compared to simple set/clear enable
registers stems from the bookkeeping of source CPU IDs. It may be
possible to simplify this by changing the underlying data structure,
but since this code is not in the critical path at all, that is left
as an interesting exercise for the reader.
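For illustration: the layout these handlers decode is one byte per SGI,
with bit n of each byte marking CPU n as a pending source, so one 32-bit
word covers four SGIs. Below is a minimal stand-alone C sketch of the
read side only; the sgi_sources[] array and the read_sgi_pend_word()
name are illustrative stand-ins for the kernel's
dist->irq_sgi_sources[] bookkeeping, not kernel code:

/*
 * Stand-alone model of the GICD_SPENDSGIR/GICD_CPENDSGIR read layout:
 * one byte per SGI, each bit naming a source CPU.
 */
#include <stdio.h>
#include <stdint.h>

#define NR_SGIS 16

static uint8_t sgi_sources[NR_SGIS];	/* bit n set => CPU n is a pending source */

/* Build the 32-bit register value for the word containing 'offset' */
static uint32_t read_sgi_pend_word(unsigned int offset)
{
	unsigned int min_sgi = offset & ~0x3u;	/* word covers SGIs min_sgi..min_sgi+3 */
	uint32_t reg = 0;

	for (unsigned int sgi = min_sgi; sgi <= min_sgi + 3; sgi++)
		reg |= (uint32_t)sgi_sources[sgi] << (8 * (sgi - min_sgi));
	return reg;
}

int main(void)
{
	sgi_sources[5] = 0x03;	/* SGI 5 pending from CPUs 0 and 1 */
	sgi_sources[6] = 0x04;	/* SGI 6 pending from CPU 2 */

	/* The word at offset 4 covers SGIs 4-7: expect 0x00040300 */
	printf("SPENDSGIR word @4 = 0x%08x\n", read_sgi_pend_word(4));
	return 0;
}

Running this prints 0x00040300 for the word at offset 4, which is the
byte-per-SGI packing the handlers below rely on.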
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 virt/kvm/arm/vgic.c | 114 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 112 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 4821ce9..62d0fec 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -591,18 +591,128 @@ static bool handle_mmio_sgi_reg(struct kvm_vcpu *vcpu,
 	return false;
 }
 
+static void read_sgi_set_clear(struct kvm_vcpu *vcpu,
+			       struct kvm_exit_mmio *mmio,
+			       phys_addr_t offset)
+{
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+	int i, sgi, cpu;
+	int min_sgi = (offset & ~0x3);
+	int max_sgi = min_sgi + 3;
+	int vcpu_id = vcpu->vcpu_id;
+	u32 lr, reg = 0;
+
+	/* Copy source SGIs from distributor side */
+	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
+		int shift = 8 * (sgi - min_sgi);
+		reg |= (u32)dist->irq_sgi_sources[vcpu_id][sgi] << shift;
+	}
+
+	/* Copy source SGIs already on LRs */
+	for_each_set_bit(i, vgic_cpu->lr_used, vgic_cpu->nr_lr) {
+		lr = vgic_cpu->vgic_lr[i];
+		sgi = lr & GICH_LR_VIRTUALID;
+		cpu = (lr & GICH_LR_PHYSID_CPUID) >> GICH_LR_PHYSID_CPUID_SHIFT;
+		if (sgi >= min_sgi && sgi <= max_sgi) {
+			if (lr & GICH_LR_STATE)
+				reg |= (1 << cpu) << (8 * (sgi - min_sgi));
+		}
+	}
+
+	memcpy(mmio->data, &reg, sizeof(reg));
+}
+
 static bool handle_mmio_sgi_clear(struct kvm_vcpu *vcpu,
 				  struct kvm_exit_mmio *mmio,
 				  phys_addr_t offset)
 {
-	return false;
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+	int i, sgi, cpu;
+	int min_sgi = (offset & ~0x3);
+	int max_sgi = min_sgi + 3;
+	int vcpu_id = vcpu->vcpu_id;
+	u32 *lr, reg;
+	bool updated = false;
+
+	if (!mmio->is_write) {
+		read_sgi_set_clear(vcpu, mmio, offset);
+		return false;
+	}
+
+	memcpy(&reg, mmio->data, sizeof(reg));
+
+	/* Clear pending SGIs on distributor side */
+	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
+		u8 mask = reg >> (8 * (sgi - min_sgi));
+		if (dist->irq_sgi_sources[vcpu_id][sgi] & mask)
+			updated = true;
+		dist->irq_sgi_sources[vcpu_id][sgi] &= ~mask;
+	}
+
+	/* Clear SGIs already on LRs */
+	for_each_set_bit(i, vgic_cpu->lr_used, vgic_cpu->nr_lr) {
+		lr = &vgic_cpu->vgic_lr[i];
+		sgi = *lr & GICH_LR_VIRTUALID;
+		cpu = (*lr & GICH_LR_PHYSID_CPUID) >> GICH_LR_PHYSID_CPUID_SHIFT;
+
+		if (sgi >= min_sgi && sgi <= max_sgi) {
+			if (reg & ((1 << cpu) << (8 * (sgi - min_sgi)))) {
+				if (*lr & GICH_LR_PENDING_BIT)
+					updated = true;
+				*lr &= ~GICH_LR_PENDING_BIT;
+			}
+		}
+	}
+
+	return updated;
 }
 
 static bool handle_mmio_sgi_set(struct kvm_vcpu *vcpu,
 				struct kvm_exit_mmio *mmio,
 				phys_addr_t offset)
 {
-	return false;
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+	int i, sgi, cpu;
+	int min_sgi = (offset & ~0x3);
+	int max_sgi = min_sgi + 3;
+	int vcpu_id = vcpu->vcpu_id;
+	u32 *lr, reg;
+	bool updated = false;
+
+	if (!mmio->is_write) {
+		read_sgi_set_clear(vcpu, mmio, offset);
+		return false;
+	}
+
+	memcpy(&reg, mmio->data, sizeof(reg));
+
+	/* Set pending SGIs on distributor side */
+	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
+		u8 mask = reg >> (8 * (sgi - min_sgi));
+		if ((dist->irq_sgi_sources[vcpu_id][sgi] & mask) != mask)
+			updated = true;
+		dist->irq_sgi_sources[vcpu_id][sgi] |= mask;
+	}
+
+	/* Set SGIs already on LRs to pending */
+	for_each_set_bit(i, vgic_cpu->lr_used, vgic_cpu->nr_lr) {
+		lr = &vgic_cpu->vgic_lr[i];
+		sgi = *lr & GICH_LR_VIRTUALID;
+		cpu = (*lr & GICH_LR_PHYSID_CPUID) >> GICH_LR_PHYSID_CPUID_SHIFT;
+
+		if (sgi >= min_sgi && sgi <= max_sgi) {
+			if (reg & ((1 << cpu) << (8 * (sgi - min_sgi)))) {
+				if (!(*lr & GICH_LR_PENDING_BIT))
+					updated = true;
+				*lr |= GICH_LR_PENDING_BIT;
+			}
+		}
+	}
+
+	return updated;
 }
 
 /*
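
The write side of GICD_CPENDSGIR is write-one-to-clear on the
distributor-side bookkeeping. A matching stand-alone sketch of just
that half, under the same assumptions as the earlier model
(sgi_sources[] and write_sgi_clear_word() are illustrative names, not
the kernel's):

/*
 * Stand-alone model of the distributor-side CPENDSGIR write path:
 * each byte written clears the matching source-CPU bits for one SGI.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define NR_SGIS 16

static uint8_t sgi_sources[NR_SGIS];

/* Apply a 32-bit CPENDSGIR write at 'offset'; returns true if state changed */
static bool write_sgi_clear_word(unsigned int offset, uint32_t reg)
{
	unsigned int min_sgi = offset & ~0x3u;
	bool updated = false;

	for (unsigned int sgi = min_sgi; sgi <= min_sgi + 3; sgi++) {
		uint8_t mask = reg >> (8 * (sgi - min_sgi));

		if (sgi_sources[sgi] & mask)
			updated = true;
		sgi_sources[sgi] &= ~mask;	/* W1C: written bits are cleared */
	}
	return updated;
}

int main(void)
{
	sgi_sources[5] = 0x07;	/* SGI 5 pending from CPUs 0, 1 and 2 */

	/* Clear only CPU 1 as a source of SGI 5 (byte 1 of the word at offset 4) */
	bool changed = write_sgi_clear_word(4, 0x02u << 8);

	printf("changed=%d, SGI5 sources now 0x%02x\n", changed, sgi_sources[5]);
	return 0;
}

Running it reports changed=1 and leaves SGI 5 with sources 0x05, i.e.
only the written source bit is cleared, mirroring the distributor-side
loop in handle_mmio_sgi_clear() above (the LR handling has no analogue
in this model).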