From patchwork Thu Mar 15 20:30:24 2018
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 131850
From: Andre Przywara <andre.przywara@linaro.org>
To: Stefano Stabellini, Julien Grall
Date: Thu, 15 Mar 2018 20:30:24 +0000
Message-Id: <20180315203050.19791-20-andre.przywara@linaro.org>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20180315203050.19791-1-andre.przywara@linaro.org>
References: <20180315203050.19791-1-andre.przywara@linaro.org>
Cc: xen-devel@lists.xenproject.org
Subject: [Xen-devel] [PATCH v2 19/45] ARM: new VGIC: Add IRQ sync/flush framework

Implement the framework for syncing IRQs between our emulation and the
list registers, which represent the guest's view of IRQs. This is done
in vgic_sync_from_lrs() and vgic_sync_to_lrs(), which get called on
guest exit and entry, respectively. The code talking to the actual
GICv2/v3 hardware is added in the following patches.

This is based on Linux commit 0919e84c0fc1, written by Marc Zyngier.

Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
---
Changelog v1 ... v2:
- make functions void
- do underflow setting directly (no v2/v3 indirection)
- fix injection of multiple SGIs (as in the later Linux bugfix)
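As a rough illustration of how this framework is meant to be driven
(the actual wiring into Xen's guest entry/exit path only lands with
later patches in this series), callers are expected to look roughly
like the sketch below. The two hook names are made up purely for
illustration and are not part of this patch:

    /* Hypothetical hook run after the guest has exited. */
    void example_guest_exit_hook(struct vcpu *v)
    {
        /* Fold the LR contents back into our struct vgic_irq bookkeeping. */
        vgic_sync_from_lrs(v);
    }

    /* Hypothetical hook run right before entering the guest. */
    void example_guest_entry_hook(void)
    {
        /*
         * vgic_sync_to_lrs() operates on 'current' and expects interrupts
         * to be disabled (it ASSERTs on !local_irq_is_enabled()).
         */
        vgic_sync_to_lrs();
    }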
 xen/arch/arm/vgic/vgic.c | 232 +++++++++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/vgic/vgic.h |   2 +
 2 files changed, 234 insertions(+)

diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
index 7306a80dd3..e82d498766 100644
--- a/xen/arch/arm/vgic/vgic.c
+++ b/xen/arch/arm/vgic/vgic.c
@@ -399,6 +399,238 @@ void vgic_inject_irq(struct domain *d, struct vcpu *vcpu, unsigned int intid,
     return;
 }
 
+/**
+ * vgic_prune_ap_list() - Remove non-relevant interrupts from the ap_list
+ *
+ * @vcpu: The VCPU of which the ap_list should be pruned.
+ *
+ * Go over the list of interrupts on a VCPU's ap_list, and prune those that
+ * we won't have to consider in the near future.
+ * This removes interrupts that have been successfully handled by the guest,
+ * or that have otherwise become obsolete (not pending anymore).
+ * Also this moves interrupts between VCPUs, if their affinity has changed.
+ */
+static void vgic_prune_ap_list(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic;
+    struct vgic_irq *irq, *tmp;
+    unsigned long flags;
+
+retry:
+    spin_lock_irqsave(&vgic_cpu->ap_list_lock, flags);
+
+    list_for_each_entry_safe( irq, tmp, &vgic_cpu->ap_list_head, ap_list )
+    {
+        struct vcpu *target_vcpu, *vcpuA, *vcpuB;
+
+        spin_lock(&irq->irq_lock);
+
+        BUG_ON(vcpu != irq->vcpu);
+
+        target_vcpu = vgic_target_oracle(irq);
+
+        if ( !target_vcpu )
+        {
+            /*
+             * We don't need to process this interrupt any
+             * further, move it off the list.
+             */
+            list_del(&irq->ap_list);
+            irq->vcpu = NULL;
+            spin_unlock(&irq->irq_lock);
+
+            /*
+             * This vgic_put_irq call matches the
+             * vgic_get_irq_kref in vgic_queue_irq_unlock,
+             * where we added the LPI to the ap_list. As
+             * we remove the irq from the list, we also
+             * drop the refcount.
+             */
+            vgic_put_irq(vcpu->domain, irq);
+            continue;
+        }
+
+        if ( target_vcpu == vcpu )
+        {
+            /* We're on the right CPU */
+            spin_unlock(&irq->irq_lock);
+            continue;
+        }
+
+        /* This interrupt looks like it has to be migrated. */
+
+        spin_unlock(&irq->irq_lock);
+        spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);
+
+        /*
+         * Ensure locking order by always locking the smallest
+         * ID first.
+         */
+        if ( vcpu->vcpu_id < target_vcpu->vcpu_id )
+        {
+            vcpuA = vcpu;
+            vcpuB = target_vcpu;
+        }
+        else
+        {
+            vcpuA = target_vcpu;
+            vcpuB = vcpu;
+        }
+
+        spin_lock_irqsave(&vcpuA->arch.vgic.ap_list_lock, flags);
+        spin_lock(&vcpuB->arch.vgic.ap_list_lock);
+        spin_lock(&irq->irq_lock);
+
+        /*
+         * If the affinity has been preserved, move the
+         * interrupt around. Otherwise, it means things have
+         * changed while the interrupt was unlocked, and we
+         * need to replay this.
+         *
+         * In all cases, we cannot trust the list not to have
+         * changed, so we restart from the beginning.
+         */
+        if ( target_vcpu == vgic_target_oracle(irq) )
+        {
+            struct vgic_cpu *new_cpu = &target_vcpu->arch.vgic;
+
+            list_del(&irq->ap_list);
+            irq->vcpu = target_vcpu;
+            list_add_tail(&irq->ap_list, &new_cpu->ap_list_head);
+        }
+
+        spin_unlock(&irq->irq_lock);
+        spin_unlock(&vcpuB->arch.vgic.ap_list_lock);
+        spin_unlock_irqrestore(&vcpuA->arch.vgic.ap_list_lock, flags);
+        goto retry;
+    }
+
+    spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);
+}
+
+static void vgic_fold_lr_state(struct vcpu *vcpu)
+{
+}
+
+/* Requires the irq_lock to be held. */
+static void vgic_populate_lr(struct vcpu *vcpu,
+                             struct vgic_irq *irq, int lr)
+{
+    ASSERT(spin_is_locked(&irq->irq_lock));
+}
+
+static void vgic_set_underflow(struct vcpu *vcpu)
+{
+    ASSERT(vcpu == current);
+
+    gic_hw_ops->update_hcr_status(GICH_HCR_UIE, 1);
+}
+
+/* Requires the ap_list_lock to be held. */
+static int compute_ap_list_depth(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic;
+    struct vgic_irq *irq;
+    int count = 0;
+
+    ASSERT(spin_is_locked(&vgic_cpu->ap_list_lock));
+
+    list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list)
+    {
+        spin_lock(&irq->irq_lock);
+        /* GICv2 SGIs can count for more than one... */
+        if ( vgic_irq_is_sgi(irq->intid) && irq->source )
+            count += hweight8(irq->source);
+        else
+            count++;
+        spin_unlock(&irq->irq_lock);
+    }
+    return count;
+}
+
+/* Requires the VCPU's ap_list_lock to be held. */
+static void vgic_flush_lr_state(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic;
+    struct vgic_irq *irq;
+    int count = 0;
+
+    ASSERT(spin_is_locked(&vgic_cpu->ap_list_lock));
+
+    if ( compute_ap_list_depth(vcpu) > gic_get_nr_lrs() )
+        vgic_sort_ap_list(vcpu);
+
+    list_for_each_entry( irq, &vgic_cpu->ap_list_head, ap_list )
+    {
+        spin_lock(&irq->irq_lock);
+
+        if ( likely(vgic_target_oracle(irq) == vcpu) )
+            vgic_populate_lr(vcpu, irq, count++);
+
+        spin_unlock(&irq->irq_lock);
+
+        if ( count == gic_get_nr_lrs() )
+        {
+            if ( !list_is_last(&irq->ap_list, &vgic_cpu->ap_list_head) )
+                vgic_set_underflow(vcpu);
+            break;
+        }
+    }
+
+    vcpu->arch.vgic.used_lrs = count;
+}
+
+/**
+ * vgic_sync_from_lrs() - Update VGIC state from hardware after a guest's run.
+ * @vcpu: the VCPU for which to transfer from the LRs to the IRQ list.
+ *
+ * Sync back the hardware VGIC state after the guest has run, into our
+ * VGIC emulation structures. It reads the LRs and updates the respective
+ * struct vgic_irq, taking level/edge into account.
+ * This is the high level function which takes care of the conditions,
+ * and also bails out early if there were no interrupts queued.
+ * Was: kvm_vgic_sync_hwstate()
+ */
+void vgic_sync_from_lrs(struct vcpu *vcpu)
+{
+    /* An empty ap_list_head implies used_lrs == 0 */
+    if ( list_empty(&vcpu->arch.vgic.ap_list_head) )
+        return;
+
+    vgic_fold_lr_state(vcpu);
+
+    vgic_prune_ap_list(vcpu);
+}
+
+/**
+ * vgic_sync_to_lrs() - flush emulation state into the hardware on guest entry
+ *
+ * Before we enter a guest, we have to translate the virtual GIC state of a
+ * VCPU into the GIC virtualization hardware registers, namely the LRs.
+ * This is the high level function which takes care of the conditions and
+ * the locking, and also bails out early if there are no interrupts queued.
+ * Was: kvm_vgic_flush_hwstate()
+ */
+void vgic_sync_to_lrs(void)
+{
+    /*
+     * If there are no virtual interrupts active or pending for this
+     * VCPU, then there is no work to do and we can bail out without
+     * taking any lock. There is a potential race with someone injecting
+     * interrupts to the VCPU, but it is a benign race as the VCPU will
+     * either observe the new interrupt before or after doing this check,
+     * and introducing an additional synchronization mechanism doesn't
+     * change this.
+     */
+    if ( list_empty(&current->arch.vgic.ap_list_head) )
+        return;
+
+    ASSERT(!local_irq_is_enabled());
+
+    spin_lock(&current->arch.vgic.ap_list_lock);
+    vgic_flush_lr_state(current);
+    spin_unlock(&current->arch.vgic.ap_list_lock);
+}
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/vgic/vgic.h b/xen/arch/arm/vgic/vgic.h
index f9e2eeb2d6..f530cfa078 100644
--- a/xen/arch/arm/vgic/vgic.h
+++ b/xen/arch/arm/vgic/vgic.h
@@ -17,6 +17,8 @@
 #ifndef __XEN_ARM_VGIC_VGIC_H__
 #define __XEN_ARM_VGIC_VGIC_H__
 
+#define vgic_irq_is_sgi(intid) ((intid) < VGIC_NR_SGIS)
+
 static inline bool irq_is_pending(struct vgic_irq *irq)
 {
     if ( irq->config == VGIC_CONFIG_EDGE )