From patchwork Thu Mar 15 20:30:21 2018
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 131839
From: Andre Przywara
To: Stefano Stabellini, Julien Grall
Cc: xen-devel@lists.xenproject.org
Date: Thu, 15 Mar 2018 20:30:21 +0000
Message-Id: <20180315203050.19791-17-andre.przywara@linaro.org>
In-Reply-To: <20180315203050.19791-1-andre.przywara@linaro.org>
References: <20180315203050.19791-1-andre.przywara@linaro.org>
X-Mailer: git-send-email 2.14.1
Subject: [Xen-devel] [PATCH v2 16/45] ARM: new VGIC: Implement virtual IRQ injection

Provide a vgic_queue_irq_unlock() function which decides whether a
given IRQ needs to be queued to a VCPU's ap_list.
This should be called whenever an IRQ becomes pending or enabled,
whether as a result of a hardware IRQ injection, from devices emulated
by Xen (like the architected timer), or from MMIO accesses to the
distributor emulation.
It also provides the functions needed to inject an IRQ into a guest.
Since this is the first code that uses our locking mechanism, we add
some (hopefully) clear documentation of our locking strategy and
requirements along with this patch.

This is based on Linux commit 81eeb95ddbab, written by Christoffer Dall.

Signed-off-by: Andre Przywara
Reviewed-by: Julien Grall
---
Changelog v1 ... v2:
- rework validate_injection()
- add comments
- make vgic_inject_irq a void function
- fix comment

 xen/arch/arm/vgic/vgic.c | 226 +++++++++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/vgic/vgic.h |  10 +++
 2 files changed, 236 insertions(+)

diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
index d9d285c361..20d48ac6f5 100644
--- a/xen/arch/arm/vgic/vgic.c
+++ b/xen/arch/arm/vgic/vgic.c
@@ -17,10 +17,36 @@
 #include
 #include
+#include
 #include
 #include "vgic.h"

+/*
+ * Locking order is always:
+ *   vgic->lock
+ *     vgic_cpu->ap_list_lock
+ *       vgic->lpi_list_lock
+ *         desc->lock
+ *           vgic_irq->irq_lock
+ *
+ * If you need to take multiple locks, always take the upper lock first,
+ * then the lower ones, e.g. first take the ap_list_lock, then the irq_lock.
+ * If you are already holding a lock and need to take a higher one, you
+ * have to drop the lower ranking lock first and re-acquire it after having
+ * taken the upper one.
+ *
+ * When taking more than one ap_list_lock at the same time, always take the
+ * lowest numbered VCPU's ap_list_lock first, so:
+ *   vcpuX->vcpu_id < vcpuY->vcpu_id:
+ *     spin_lock(vcpuX->arch.vgic.ap_list_lock);
+ *     spin_lock(vcpuY->arch.vgic.ap_list_lock);
+ *
+ * Since the VGIC must support injecting virtual interrupts from ISRs, we have
+ * to use the spin_lock_irqsave/spin_unlock_irqrestore versions of outer
+ * spinlocks for any lock that may be taken while injecting an interrupt.
+ */
+
 /*
  * Iterate over the VM's list of mapped LPIs to find the one with a
  * matching interrupt ID and return a reference to the IRQ structure.
@@ -114,6 +140,206 @@ void vgic_put_irq(struct domain *d, struct vgic_irq *irq)
     xfree(irq);
 }

+/**
+ * vgic_target_oracle() - compute the target vcpu for an irq
+ * @irq:    The irq to route. Must be already locked.
+ *
+ * Based on the current state of the interrupt (enabled, pending,
+ * active, vcpu and target_vcpu), compute the next vcpu this should be
+ * given to. Return NULL if this shouldn't be injected at all.
+ *
+ * Requires the IRQ lock to be held.
+ *
+ * Returns: The pointer to the virtual CPU this interrupt should be injected
+ *          to. Will be NULL if this IRQ does not need to be injected.
+ */
+static struct vcpu *vgic_target_oracle(struct vgic_irq *irq)
+{
+    ASSERT(spin_is_locked(&irq->irq_lock));
+
+    /* If the interrupt is active, it must stay on the current vcpu */
+    if ( irq->active )
+        return irq->vcpu ? : irq->target_vcpu;
+
+    /*
+     * If the IRQ is not active but enabled and pending, we should direct
+     * it to its configured target VCPU.
+     * If the distributor is disabled, pending interrupts shouldn't be
+     * forwarded.
+     */
+    if ( irq->enabled && irq_is_pending(irq) )
+    {
+        if ( unlikely(irq->target_vcpu &&
+                      !irq->target_vcpu->domain->arch.vgic.enabled) )
+            return NULL;
+
+        return irq->target_vcpu;
+    }
+
+    /*
+     * If neither active nor pending and enabled, then this IRQ should not
+     * be queued to any VCPU.
+     */
+    return NULL;
+}
+
+/*
+ * Only valid injection if changing level for level-triggered IRQs or for a
+ * rising edge.
+ */
+static bool vgic_validate_injection(struct vgic_irq *irq, bool level)
+{
+    /* For edge interrupts we only care about a rising edge. */
+    if ( irq->config == VGIC_CONFIG_EDGE )
+        return level;
+
+    /* For level interrupts we have to act when the line level changes. */
+    return irq->line_level != level;
+}
+
+/**
+ * vgic_queue_irq_unlock() - Queue an IRQ to a VCPU, to be injected to a guest.
+ * @d:        The domain the virtual IRQ belongs to.
+ * @irq:      A pointer to the vgic_irq of the virtual IRQ, with the lock held.
+ * @flags:    The flags used when having grabbed the IRQ lock.
+ *
+ * Check whether an IRQ needs to (and can) be queued to a VCPU's ap list.
+ * Do the queuing if necessary, taking the right locks in the right order.
+ *
+ * Needs to be entered with the IRQ lock already held, but will return
+ * with all locks dropped.
+ */
+void vgic_queue_irq_unlock(struct domain *d, struct vgic_irq *irq,
+                           unsigned long flags)
+{
+    struct vcpu *vcpu;
+
+    ASSERT(spin_is_locked(&irq->irq_lock));
+
+retry:
+    vcpu = vgic_target_oracle(irq);
+    if ( irq->vcpu || !vcpu )
+    {
+        /*
+         * If this IRQ is already on a VCPU's ap_list, then it
+         * cannot be moved or modified and there is no more work for
+         * us to do.
+         *
+         * Otherwise, if the irq is not pending and enabled, it does
+         * not need to be inserted into an ap_list and there is also
+         * no more work for us to do.
+         */
+        spin_unlock_irqrestore(&irq->irq_lock, flags);
+
+        /*
+         * We have to kick the VCPU here, because we could be
+         * queueing an edge-triggered interrupt for which we
+         * get no EOI maintenance interrupt. In that case,
+         * while the IRQ is already on the VCPU's AP list, the
+         * VCPU could have EOI'ed the original interrupt and
+         * won't see this one until it exits for some other
+         * reason.
+         */
+        if ( vcpu )
+            vcpu_kick(vcpu);
+
+        return;
+    }
+
+    /*
+     * We must unlock the irq lock to take the ap_list_lock where
+     * we are going to insert this new pending interrupt.
+     */
+    spin_unlock_irqrestore(&irq->irq_lock, flags);
+
+    /* someone can do stuff here, which we re-check below */
+
+    spin_lock_irqsave(&vcpu->arch.vgic.ap_list_lock, flags);
+    spin_lock(&irq->irq_lock);
+
+    /*
+     * Did something change behind our backs?
+     *
+     * There are two cases:
+     * 1) The irq lost its pending state or was disabled behind our
+     *    backs and/or it was queued to another VCPU's ap_list.
+     * 2) Someone changed the affinity on this irq behind our
+     *    backs and we are now holding the wrong ap_list_lock.
+     *
+     * In both cases, drop the locks and retry.
+     */
+
+    if ( unlikely(irq->vcpu || vcpu != vgic_target_oracle(irq)) )
+    {
+        spin_unlock(&irq->irq_lock);
+        spin_unlock_irqrestore(&vcpu->arch.vgic.ap_list_lock, flags);
+
+        spin_lock_irqsave(&irq->irq_lock, flags);
+        goto retry;
+    }
+
+    /*
+     * Grab a reference to the irq to reflect the fact that it is
+     * now in the ap_list.
+     */
+    vgic_get_irq_kref(irq);
+    list_add_tail(&irq->ap_list, &vcpu->arch.vgic.ap_list_head);
+    irq->vcpu = vcpu;
+
+    spin_unlock(&irq->irq_lock);
+    spin_unlock_irqrestore(&vcpu->arch.vgic.ap_list_lock, flags);
+
+    vcpu_kick(vcpu);
+
+    return;
+}
+
+/**
+ * vgic_inject_irq() - Inject an IRQ from a device to the vgic
+ * @d:       The domain pointer
+ * @vcpu:    The vCPU for private IRQs (PPIs, SGIs). Ignored for SPIs and LPIs.
+ * @intid:   The INTID to inject a new state to.
+ * @level:   Edge-triggered:  true:  to trigger the interrupt
+ *                            false: to ignore the call
+ *           Level-sensitive: true:  raise the input signal
+ *                            false: lower the input signal
+ *
+ * Injects an instance of the given virtual IRQ into a domain.
+ * The VGIC is not concerned with devices being active-LOW or active-HIGH for
+ * level-sensitive interrupts. You can think of the level parameter as 1
+ * being HIGH and 0 being LOW and all devices being active-HIGH.
+ */
+void vgic_inject_irq(struct domain *d, struct vcpu *vcpu, unsigned int intid,
+                     bool level)
+{
+    struct vgic_irq *irq;
+    unsigned long flags;
+
+    irq = vgic_get_irq(d, vcpu, intid);
+    if ( !irq )
+        return;
+
+    spin_lock_irqsave(&irq->irq_lock, flags);
+
+    if ( !vgic_validate_injection(irq, level) )
+    {
+        /* Nothing to see here, move along... */
+        spin_unlock_irqrestore(&irq->irq_lock, flags);
+        vgic_put_irq(d, irq);
+        return;
+    }
+
+    if ( irq->config == VGIC_CONFIG_LEVEL )
+        irq->line_level = level;
+    else
+        irq->pending_latch = true;
+
+    vgic_queue_irq_unlock(d, irq, flags);
+    vgic_put_irq(d, irq);
+
+    return;
+}
+
 /*
  * Local variables:
  * mode: C

diff --git a/xen/arch/arm/vgic/vgic.h b/xen/arch/arm/vgic/vgic.h
index a3befd386b..f9e2eeb2d6 100644
--- a/xen/arch/arm/vgic/vgic.h
+++ b/xen/arch/arm/vgic/vgic.h
@@ -17,9 +17,19 @@
 #ifndef __XEN_ARM_VGIC_VGIC_H__
 #define __XEN_ARM_VGIC_VGIC_H__

+static inline bool irq_is_pending(struct vgic_irq *irq)
+{
+    if ( irq->config == VGIC_CONFIG_EDGE )
+        return irq->pending_latch;
+    else
+        return irq->pending_latch || irq->line_level;
+}
+
 struct vgic_irq *vgic_get_irq(struct domain *d, struct vcpu *vcpu, u32 intid);
 void vgic_put_irq(struct domain *d, struct vgic_irq *irq);
+void vgic_queue_irq_unlock(struct domain *d, struct vgic_irq *irq,
+                           unsigned long flags);

 static inline void vgic_get_irq_kref(struct vgic_irq *irq)
 {
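
As a usage illustration only (not part of the patch), here is a minimal sketch of how an
emulated, level-sensitive per-CPU device such as the virtual arch timer might drive the new
vgic_inject_irq() interface described above. The PPI number and the helper names are
assumptions made purely for the example.

    /*
     * Usage sketch, not part of this patch: an emulated level-sensitive
     * device raising and lowering its interrupt line. The PPI number (27)
     * and the example_* helpers are hypothetical.
     */
    #define EXAMPLE_VTIMER_PPI  27

    static void example_vtimer_raise(struct vcpu *v)
    {
        /* Device output line goes high: latch the vIRQ and queue it if needed. */
        vgic_inject_irq(v->domain, v, EXAMPLE_VTIMER_PPI, true);
    }

    static void example_vtimer_lower(struct vcpu *v)
    {
        /* Device output line goes low again: only the line level is updated. */
        vgic_inject_irq(v->domain, v, EXAMPLE_VTIMER_PPI, false);
    }

For an edge-triggered interrupt the "false" call would be ignored by vgic_validate_injection(),
while the "true" call sets pending_latch and lets vgic_queue_irq_unlock() decide whether the
IRQ has to be added to a VCPU's ap_list.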