From patchwork Mon Mar 5 16:03:49 2018
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 130698
From: Andre Przywara
To: Julien Grall, Stefano Stabellini
Date: Mon, 5 Mar 2018 16:03:49 +0000
Message-Id: <20180305160415.16760-32-andre.przywara@linaro.org>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20180305160415.16760-1-andre.przywara@linaro.org>
References: <20180305160415.16760-1-andre.przywara@linaro.org>
Cc: xen-devel@lists.xenproject.org
Subject: [Xen-devel] [PATCH 31/57] ARM: new VGIC: Add IRQ sync/flush framework

Implement the framework for syncing IRQs between our emulation and the
list registers, which represent the guest's view of IRQs. This is done
in kvm_vgic_flush_hwstate and kvm_vgic_sync_hwstate, which get called
on guest entry and exit, respectively.
The code talking to the actual GICv2/v3 hardware is added in the
following patches.

This is based on Linux commit 0919e84c0fc1, written by Marc Zyngier.

Signed-off-by: Andre Przywara
---
Changelog RFC ... v1:
- extend comments
- adapt to former changes
- remove gic_clear_lrs()

 xen/arch/arm/vgic/vgic.c | 239 +++++++++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/vgic/vgic.h |   2 +
 2 files changed, 241 insertions(+)

diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
index efa6c67cb7..8e5215a00d 100644
--- a/xen/arch/arm/vgic/vgic.c
+++ b/xen/arch/arm/vgic/vgic.c
@@ -400,6 +400,245 @@ int vgic_inject_irq(struct domain *d, struct vcpu *vcpu, unsigned int intid,
     return 0;
 }
 
+/**
+ * vgic_prune_ap_list() - Remove non-relevant interrupts from the ap_list
+ *
+ * @vcpu: The VCPU of which the ap_list should be pruned.
+ *
+ * Go over the list of interrupts on a VCPU's ap_list, and prune those that
+ * we won't have to consider in the near future.
+ * This removes interrupts that have been successfully handled by the guest,
+ * or that have otherwise become obsolete (not pending anymore).
+ * This also moves interrupts between VCPUs, if their affinity has changed.
+ */
+static void vgic_prune_ap_list(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic;
+    struct vgic_irq *irq, *tmp;
+    unsigned long flags;
+
+retry:
+    spin_lock_irqsave(&vgic_cpu->ap_list_lock, flags);
+
+    list_for_each_entry_safe( irq, tmp, &vgic_cpu->ap_list_head, ap_list )
+    {
+        struct vcpu *target_vcpu, *vcpuA, *vcpuB;
+
+        spin_lock(&irq->irq_lock);
+
+        BUG_ON(vcpu != irq->vcpu);
+
+        target_vcpu = vgic_target_oracle(irq);
+
+        if ( !target_vcpu )
+        {
+            /*
+             * We don't need to process this interrupt any
+             * further, move it off the list.
+             */
+            list_del(&irq->ap_list);
+            irq->vcpu = NULL;
+            spin_unlock(&irq->irq_lock);
+
+            /*
+             * This vgic_put_irq call matches the
+             * vgic_get_irq_kref in vgic_queue_irq_unlock,
+             * where we added the LPI to the ap_list. As
+             * we remove the irq from the list, we also
+             * drop the refcount.
+             */
+            vgic_put_irq(vcpu->domain, irq);
+            continue;
+        }
+
+        if ( target_vcpu == vcpu )
+        {
+            /* We're on the right CPU */
+            spin_unlock(&irq->irq_lock);
+            continue;
+        }
+
+        /* This interrupt looks like it has to be migrated. */
+
+        spin_unlock(&irq->irq_lock);
+        spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);
+
+        /*
+         * Ensure locking order by always locking the smallest
+         * ID first.
+         */
+        if ( vcpu->vcpu_id < target_vcpu->vcpu_id )
+        {
+            vcpuA = vcpu;
+            vcpuB = target_vcpu;
+        }
+        else
+        {
+            vcpuA = target_vcpu;
+            vcpuB = vcpu;
+        }
+
+        spin_lock_irqsave(&vcpuA->arch.vgic.ap_list_lock, flags);
+        spin_lock(&vcpuB->arch.vgic.ap_list_lock);
+        spin_lock(&irq->irq_lock);
+
+        /*
+         * If the affinity has been preserved, move the
+         * interrupt around. Otherwise, it means things have
+         * changed while the interrupt was unlocked, and we
+         * need to replay this.
+         *
+         * In all cases, we cannot trust the list not to have
+         * changed, so we restart from the beginning.
+         */
+        if ( target_vcpu == vgic_target_oracle(irq) )
+        {
+            struct vgic_cpu *new_cpu = &target_vcpu->arch.vgic;
+
+            list_del(&irq->ap_list);
+            irq->vcpu = target_vcpu;
+            list_add_tail(&irq->ap_list, &new_cpu->ap_list_head);
+        }
+
+        spin_unlock(&irq->irq_lock);
+        spin_unlock(&vcpuB->arch.vgic.ap_list_lock);
+        spin_unlock_irqrestore(&vcpuA->arch.vgic.ap_list_lock, flags);
+        goto retry;
+    }
+
+    spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);
+}
+
+static inline void vgic_fold_lr_state(struct vcpu *vcpu)
+{
+}
+
+/* Requires the irq_lock to be held. */
+static inline void vgic_populate_lr(struct vcpu *vcpu,
+                                    struct vgic_irq *irq, int lr)
+{
+    ASSERT(spin_is_locked(&irq->irq_lock));
+}
+
+static inline void vgic_set_underflow(struct vcpu *vcpu)
+{
+}
+
+/* Requires the ap_list_lock to be held. */
+static int compute_ap_list_depth(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic;
+    struct vgic_irq *irq;
+    int count = 0;
+
+    ASSERT(spin_is_locked(&vgic_cpu->ap_list_lock));
+
+    list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list)
+    {
+        spin_lock(&irq->irq_lock);
+        /* GICv2 SGIs can count for more than one... */
+        if ( vgic_irq_is_sgi(irq->intid) && irq->source )
+            count += hweight8(irq->source);
+        else
+            count++;
+        spin_unlock(&irq->irq_lock);
+    }
+    return count;
+}
+
+/* Requires the VCPU's ap_list_lock to be held. */
+static void vgic_flush_lr_state(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic;
+    struct vgic_irq *irq;
+    int count = 0;
+
+    ASSERT(spin_is_locked(&vgic_cpu->ap_list_lock));
+
+    if ( compute_ap_list_depth(vcpu) > gic_get_nr_lrs() )
+        vgic_sort_ap_list(vcpu);
+
+    list_for_each_entry( irq, &vgic_cpu->ap_list_head, ap_list )
+    {
+        spin_lock(&irq->irq_lock);
+
+        if ( unlikely(vgic_target_oracle(irq) != vcpu) )
+            goto next;
+
+        /*
+         * If we get an SGI with multiple sources, try to get
+         * them all in at once.
+         */
+        do
+        {
+            vgic_populate_lr(vcpu, irq, count++);
+        } while ( irq->source && count < gic_get_nr_lrs() );
+
+next:
+        spin_unlock(&irq->irq_lock);
+
+        if ( count == gic_get_nr_lrs() )
+        {
+            if ( !list_is_last(&irq->ap_list, &vgic_cpu->ap_list_head) )
+                vgic_set_underflow(vcpu);
+            break;
+        }
+    }
+
+    vcpu->arch.vgic.used_lrs = count;
+}
+
+/**
+ * vgic_sync_from_lrs() - Update VGIC state from hardware after a guest's run.
+ * @vcpu: the VCPU for which to transfer from the LRs to the IRQ list.
+ *
+ * Sync back the hardware VGIC state after the guest has run, into our
+ * VGIC emulation structures. It reads the LRs and updates the respective
+ * struct vgic_irq, taking level/edge into account.
+ * This is the high level function which takes care of the conditions,
+ * and also bails out early if there were no interrupts queued.
+ * Was: kvm_vgic_sync_hwstate()
+ */
+void vgic_sync_from_lrs(struct vcpu *vcpu)
+{
+    /* An empty ap_list_head implies used_lrs == 0 */
+    if ( list_empty(&vcpu->arch.vgic.ap_list_head) )
+        return;
+
+    vgic_fold_lr_state(vcpu);
+
+    vgic_prune_ap_list(vcpu);
+}
+
+/**
+ * vgic_sync_to_lrs() - flush emulation state into the hardware on guest entry
+ *
+ * Before we enter a guest, we have to translate the virtual GIC state of a
+ * VCPU into the GIC virtualization hardware registers, namely the LRs.
+ * This is the high level function which takes care of the conditions and
+ * the locking, and also bails out early if there are no interrupts queued.
+ * Was: kvm_vgic_flush_hwstate()
+ */
+void vgic_sync_to_lrs(void)
+{
+    /*
+     * If there are no virtual interrupts active or pending for this
+     * VCPU, then there is no work to do and we can bail out without
+     * taking any lock. There is a potential race with someone injecting
+     * interrupts to the VCPU, but it is a benign race as the VCPU will
+     * either observe the new interrupt before or after doing this check,
+     * and introducing an additional synchronization mechanism doesn't
+     * change this.
+     */
+    if ( list_empty(&current->arch.vgic.ap_list_head) )
+        return;
+
+    ASSERT(!local_irq_is_enabled());
+
+    spin_lock(&current->arch.vgic.ap_list_lock);
+    vgic_flush_lr_state(current);
+    spin_unlock(&current->arch.vgic.ap_list_lock);
+}
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/vgic/vgic.h b/xen/arch/arm/vgic/vgic.h
index 3430955d9f..a495116cb7 100644
--- a/xen/arch/arm/vgic/vgic.h
+++ b/xen/arch/arm/vgic/vgic.h
@@ -17,6 +17,8 @@
 #ifndef __XEN_ARM_VGIC_VGIC_H__
 #define __XEN_ARM_VGIC_VGIC_H__
 
+#define vgic_irq_is_sgi(intid) ((intid) < VGIC_NR_SGIS)
+
 static inline bool irq_is_pending(struct vgic_irq *irq)
 {
     if ( irq->config == VGIC_CONFIG_EDGE )
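
As a usage note, not part of the patch itself: the two entry points above
are meant to bracket a guest's execution in the hypervisor's trap path,
with vgic_sync_from_lrs() run on every trap out of the guest and
vgic_sync_to_lrs() run just before returning to it. The plain-C sketch
below shows one plausible wiring; the hook names mirror Xen/ARM's trap
path of that era, but the exact call sites and surrounding code here are
assumptions for illustration only.

    /*
     * Illustrative sketch, NOT part of this patch: where the sync/flush
     * entry points could sit in the guest entry/exit path.
     */

    /* On any trap from the guest: fold the LRs back into our emulation. */
    static void enter_hypervisor_head(struct cpu_user_regs *regs)
    {
        /* ... other entry work (timers, state save, ...) ... */

        /* Reads the LRs, updates struct vgic_irq, prunes the ap_list. */
        vgic_sync_from_lrs(current);
    }

    /* Right before resuming the guest: fill the LRs from the ap_list. */
    static void leave_hypervisor_tail(void)
    {
        /* vgic_sync_to_lrs() asserts that interrupts are disabled. */
        local_irq_disable();

        /* ... pending softirq/IRQ checks ... */

        /* Takes the ap_list_lock and populates the LRs for 'current'. */
        vgic_sync_to_lrs();
    }

This ordering matters: folding on entry first lets vgic_prune_ap_list()
drop or migrate interrupts the guest has finished with, so the flush on
the way back in only has live interrupts to fit into the limited LRs.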