From patchwork Thu Mar 15 20:30:31 2018
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 131878
From: Andre Przywara
To: Stefano Stabellini, Julien Grall
Cc: xen-devel@lists.xenproject.org
Date: Thu, 15 Mar 2018 20:30:31 +0000
Message-Id: <20180315203050.19791-27-andre.przywara@linaro.org>
In-Reply-To: <20180315203050.19791-1-andre.przywara@linaro.org>
References: <20180315203050.19791-1-andre.przywara@linaro.org>
X-Mailer: git-send-email 2.14.1
Subject: [Xen-devel] [PATCH v2 26/45] ARM: new VGIC: Add PENDING registers handlers

The pending register handlers are shared between the v2 and v3 emulation,
so their implementation goes into vgic-mmio.c, where the v3 emulation can
easily reference it later as well.

For level-triggered interrupts the real line level is unaffected by this
write, so we keep this state separate and combine it with the device's
level to get the actual pending state.

Hardware-mapped IRQs need some special handling, as their hardware state
has to be coordinated with the virtual pending bit to avoid hanging or
masked interrupts.

This is based on Linux commit 96b298000db4, written by Andre Przywara.

Signed-off-by: Andre Przywara
Reviewed-by: Julien Grall
---
Changelog v1 ... v2:
- ASSERT on h/w IRQ and vIRQ staying in sync

 xen/arch/arm/vgic/vgic-mmio-v2.c |   4 +-
 xen/arch/arm/vgic/vgic-mmio.c    | 125 +++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/vgic/vgic-mmio.h    |  11 ++++
 3 files changed, 138 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/vgic/vgic-mmio-v2.c b/xen/arch/arm/vgic/vgic-mmio-v2.c
index 7efd1c4eb4..a48c554040 100644
--- a/xen/arch/arm/vgic/vgic-mmio-v2.c
+++ b/xen/arch/arm/vgic/vgic-mmio-v2.c
@@ -95,10 +95,10 @@ static const struct vgic_register_region vgic_v2_dist_registers[] = {
         vgic_mmio_read_enable, vgic_mmio_write_cenable, 1,
         VGIC_ACCESS_32bit),
     REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ISPENDR,
-        vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
+        vgic_mmio_read_pending, vgic_mmio_write_spending, 1,
         VGIC_ACCESS_32bit),
     REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ICPENDR,
-        vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
+        vgic_mmio_read_pending, vgic_mmio_write_cpending, 1,
         VGIC_ACCESS_32bit),
     REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ISACTIVER,
         vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
diff --git a/xen/arch/arm/vgic/vgic-mmio.c b/xen/arch/arm/vgic/vgic-mmio.c
index 99e1adb1ea..15183c112c 100644
--- a/xen/arch/arm/vgic/vgic-mmio.c
+++ b/xen/arch/arm/vgic/vgic-mmio.c
@@ -156,6 +156,131 @@ void vgic_mmio_write_cenable(struct vcpu *vcpu,
     }
 }
 
+unsigned long vgic_mmio_read_pending(struct vcpu *vcpu,
+                                     paddr_t addr, unsigned int len)
+{
+    uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
+    uint32_t value = 0;
+    unsigned int i;
+
+    /* Loop over all IRQs affected by this read */
+    for ( i = 0; i < len * 8; i++ )
+    {
+        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
+
+        if ( irq_is_pending(irq) )
+            value |= (1U << i);
+
+        vgic_put_irq(vcpu->domain, irq);
+    }
+
+    return value;
+}
+
+void vgic_mmio_write_spending(struct vcpu *vcpu,
+                              paddr_t addr, unsigned int len,
+                              unsigned long val)
+{
+    uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
+    unsigned int i;
+    unsigned long flags;
+    irq_desc_t *desc;
+
+    for_each_set_bit( i, &val, len * 8 )
+    {
+        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
+
+        spin_lock_irqsave(&irq->irq_lock, flags);
+        irq->pending_latch = true;
+
+        /* To observe the locking order, just take the irq_desc pointer here. */
+        if ( irq->hw )
+            desc = irq_to_desc(irq->hwintid);
+        else
+            desc = NULL;
+
+        vgic_queue_irq_unlock(vcpu->domain, irq, flags);
+
+        /*
+         * When the VM sets the pending state for a HW interrupt on the virtual
+         * distributor we set the active state on the physical distributor,
+         * because the virtual interrupt can become active and then the guest
+         * can deactivate it.
+         */
+        if ( desc )
+        {
+            spin_lock_irqsave(&desc->lock, flags);
+            spin_lock(&irq->irq_lock);
+
+            /* This h/w IRQ should still be assigned to the virtual IRQ. */
+            ASSERT(irq->hw && desc->irq == irq->hwintid);
+
+            gic_set_active_state(desc, true);
+
+            spin_unlock(&irq->irq_lock);
+            spin_unlock_irqrestore(&desc->lock, flags);
+        }
+
+        vgic_put_irq(vcpu->domain, irq);
+    }
+}
+
+void vgic_mmio_write_cpending(struct vcpu *vcpu,
+                              paddr_t addr, unsigned int len,
+                              unsigned long val)
+{
+    uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
+    unsigned int i;
+    unsigned long flags;
+    irq_desc_t *desc;
+
+    for_each_set_bit( i, &val, len * 8 )
+    {
+        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
+
+        spin_lock_irqsave(&irq->irq_lock, flags);
+        irq->pending_latch = false;
+
+        /* To observe the locking order, just take the irq_desc pointer here. */
+        if ( irq->hw )
+            desc = irq_to_desc(irq->hwintid);
+        else
+            desc = NULL;
+
+        spin_unlock_irqrestore(&irq->irq_lock, flags);
+
+        /*
+         * We don't want the guest to effectively mask the physical
+         * interrupt by doing a write to SPENDR followed by a write to
+         * CPENDR for HW interrupts, so we clear the active state on
+         * the physical side if the virtual interrupt is not active.
+         * This may lead to taking an additional interrupt on the
+         * host, but that should not be a problem as the worst that
+         * can happen is an additional vgic injection. We also clear
+         * the pending state to maintain proper semantics for edge HW
+         * interrupts.
+         */
+        if ( desc )
+        {
+            spin_lock_irqsave(&desc->lock, flags);
+            spin_lock(&irq->irq_lock);
+
+            /* This h/w IRQ should still be assigned to the virtual IRQ. */
+            ASSERT(irq->hw && desc->irq == irq->hwintid);
+
+            gic_set_pending_state(desc, false);
+            if ( !irq->active )
+                gic_set_active_state(desc, false);
+
+            spin_unlock(&irq->irq_lock);
+            spin_unlock_irqrestore(&desc->lock, flags);
+        }
+
+
+        vgic_put_irq(vcpu->domain, irq);
+    }
+}
+
 static int match_region(const void *key, const void *elt)
 {
     const unsigned int offset = (unsigned long)key;
diff --git a/xen/arch/arm/vgic/vgic-mmio.h b/xen/arch/arm/vgic/vgic-mmio.h
index a2cebd77f4..5c927f28b0 100644
--- a/xen/arch/arm/vgic/vgic-mmio.h
+++ b/xen/arch/arm/vgic/vgic-mmio.h
@@ -97,6 +97,17 @@ void vgic_mmio_write_cenable(struct vcpu *vcpu,
                              paddr_t addr, unsigned int len,
                              unsigned long val);
 
+unsigned long vgic_mmio_read_pending(struct vcpu *vcpu,
+                                     paddr_t addr, unsigned int len);
+
+void vgic_mmio_write_spending(struct vcpu *vcpu,
+                              paddr_t addr, unsigned int len,
+                              unsigned long val);
+
+void vgic_mmio_write_cpending(struct vcpu *vcpu,
+                              paddr_t addr, unsigned int len,
+                              unsigned long val);
+
 unsigned int vgic_v2_init_dist_iodev(struct vgic_io_device *dev);
 
 #endif
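
For reference, the "combine it with the device's level" step mentioned above is
not part of this patch: the read handler calls irq_is_pending(), which is
defined elsewhere in the new VGIC code rather than here. A minimal sketch of
how such a helper can combine the latched pending state with the line level,
assuming the vgic_irq fields config, pending_latch and line_level used by the
new VGIC, might look like this:

static inline bool irq_is_pending(struct vgic_irq *irq)
{
    /* Edge IRQs are pending purely by virtue of the latched state ... */
    if ( irq->config == VGIC_CONFIG_EDGE )
        return irq->pending_latch;

    /*
     * ... while level IRQs are pending if either an ISPENDR write latched
     * them or the device still asserts its interrupt line.
     */
    return irq->pending_latch || irq->line_level;
}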