From patchwork Mon Mar 5 16:03:56 2018
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 130666
From: Andre Przywara
To: Julien Grall, Stefano Stabellini
Cc: xen-devel@lists.xenproject.org
Date: Mon, 5 Mar 2018 16:03:56 +0000
Message-Id: <20180305160415.16760-39-andre.przywara@linaro.org>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20180305160415.16760-1-andre.przywara@linaro.org>
References: <20180305160415.16760-1-andre.przywara@linaro.org>
Subject: [Xen-devel] [PATCH 38/57] ARM: new VGIC: Add PENDING registers handlers

The pending register handlers are shared between the v2 and v3
emulation, so their implementation goes into vgic-mmio.c, where the
v3 emulation can easily reference it later as well.

For level-triggered interrupts the real line level is unaffected by
such a write, so we keep the latched state separate and combine it
with the device's line level to get the actual pending state.

Hardware-mapped IRQs need some special handling, as their state on the
physical distributor has to be coordinated with the virtual pending
bit to avoid hung or masked interrupts.

This is based on Linux commit 96b298000db4, written by Andre Przywara.

Signed-off-by: Andre Przywara
---
Changelog RFC ... v1:
- propagate SET/CLEAR_PENDING requests to hardware
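As a note for reviewers: the "combine with the device's line level"
logic mentioned above is not part of this patch, but lives in the
irq_is_pending() helper that vgic_mmio_read_pending() calls. A minimal
sketch, assuming the vgic_irq fields (config, line_level,
pending_latch) introduced earlier in this series, looks roughly like
this:

    /*
     * Sketch only: an edge IRQ is pending iff its latch is set
     * (via ISPENDR or an incoming edge); a level IRQ is pending if
     * either the latch or the device's real line level is high.
     */
    static inline bool irq_is_pending(struct vgic_irq *irq)
    {
        if ( irq->config == VGIC_CONFIG_EDGE )
            return irq->pending_latch;
        else
            return irq->pending_latch || irq->line_level;
    }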
 xen/arch/arm/vgic/vgic-mmio-v2.c |   4 +-
 xen/arch/arm/vgic/vgic-mmio.c    | 125 +++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/vgic/vgic-mmio.h    |  11 ++++
 3 files changed, 138 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/vgic/vgic-mmio-v2.c b/xen/arch/arm/vgic/vgic-mmio-v2.c
index 3dd983f885..efdd73301d 100644
--- a/xen/arch/arm/vgic/vgic-mmio-v2.c
+++ b/xen/arch/arm/vgic/vgic-mmio-v2.c
@@ -86,10 +86,10 @@ static const struct vgic_register_region vgic_v2_dist_registers[] = {
         vgic_mmio_read_enable, vgic_mmio_write_cenable, 1,
         VGIC_ACCESS_32bit),
     REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ISPENDR,
-        vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
+        vgic_mmio_read_pending, vgic_mmio_write_spending, 1,
         VGIC_ACCESS_32bit),
     REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ICPENDR,
-        vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
+        vgic_mmio_read_pending, vgic_mmio_write_cpending, 1,
         VGIC_ACCESS_32bit),
     REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ISACTIVER,
         vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
diff --git a/xen/arch/arm/vgic/vgic-mmio.c b/xen/arch/arm/vgic/vgic-mmio.c
index f8f0252eff..2e939d5e39 100644
--- a/xen/arch/arm/vgic/vgic-mmio.c
+++ b/xen/arch/arm/vgic/vgic-mmio.c
@@ -156,6 +156,131 @@ void vgic_mmio_write_cenable(struct vcpu *vcpu,
     }
 }
 
+unsigned long vgic_mmio_read_pending(struct vcpu *vcpu,
+                                     paddr_t addr, unsigned int len)
+{
+    uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
+    uint32_t value = 0;
+    unsigned int i;
+
+    /* Loop over all IRQs affected by this read */
+    for ( i = 0; i < len * 8; i++ )
+    {
+        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
+
+        if ( irq_is_pending(irq) )
+            value |= (1U << i);
+
+        vgic_put_irq(vcpu->domain, irq);
+    }
+
+    return value;
+}
+
+void vgic_mmio_write_spending(struct vcpu *vcpu,
+                              paddr_t addr, unsigned int len,
+                              unsigned long val)
+{
+    uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
+    unsigned int i;
+    unsigned long flags;
+    irq_desc_t *desc;
+
+    for_each_set_bit( i, &val, len * 8 )
+    {
+        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
+
+        spin_lock_irqsave(&irq->irq_lock, flags);
+        irq->pending_latch = true;
+
+        /* To observe the locking order, just take the irq_desc pointer here. */
+        if ( irq->hw )
+            desc = irq_to_desc(irq->hwintid);
+        else
+            desc = NULL;
+
+        vgic_queue_irq_unlock(vcpu->domain, irq, flags);
+
+        /*
+         * When the VM sets the pending state for a HW interrupt on the virtual
+         * distributor we set the active state on the physical distributor,
+         * because the virtual interrupt can become active and then the guest
+         * can deactivate it.
+         */
+        if ( desc )
+        {
+            spin_lock_irqsave(&desc->lock, flags);
+            spin_lock(&irq->irq_lock);
+
+            /* Is this h/w IRQ still assigned to the virtual IRQ? */
+            if ( irq->hw && desc->irq == irq->hwintid )
+                gic_set_active_state(desc, true);
+
+            spin_unlock(&irq->irq_lock);
+            spin_unlock_irqrestore(&desc->lock, flags);
+        }
+
+        vgic_put_irq(vcpu->domain, irq);
+    }
+}
+
+void vgic_mmio_write_cpending(struct vcpu *vcpu,
+                              paddr_t addr, unsigned int len,
+                              unsigned long val)
+{
+    uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
+    unsigned int i;
+    unsigned long flags;
+    irq_desc_t *desc;
+
+    for_each_set_bit( i, &val, len * 8 )
+    {
+        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
+
+        spin_lock_irqsave(&irq->irq_lock, flags);
+        irq->pending_latch = false;
+
+        /* To observe the locking order, just take the irq_desc pointer here. */
+        if ( irq->hw )
+            desc = irq_to_desc(irq->hwintid);
+        else
+            desc = NULL;
+
+        spin_unlock_irqrestore(&irq->irq_lock, flags);
+
+        /*
+         * We don't want the guest to effectively mask the physical
+         * interrupt by doing a write to SPENDR followed by a write to
+         * CPENDR for HW interrupts, so we clear the active state on
+         * the physical side if the virtual interrupt is not active.
+         * This may lead to taking an additional interrupt on the
+         * host, but that should not be a problem as the worst that
+         * can happen is an additional vgic injection. We also clear
+         * the pending state to maintain proper semantics for edge HW
+         * interrupts.
+         */
+        if ( desc )
+        {
+            spin_lock_irqsave(&desc->lock, flags);
+            spin_lock(&irq->irq_lock);
+
+            /* Is this h/w IRQ still assigned to the virtual IRQ? */
+            if ( irq->hw && desc->irq == irq->hwintid )
+            {
+                gic_set_pending_state(desc, false);
+                if ( !irq->active )
+                    gic_set_active_state(desc, false);
+            }
+
+            spin_unlock(&irq->irq_lock);
+            spin_unlock_irqrestore(&desc->lock, flags);
+        }
+
+
+        vgic_put_irq(vcpu->domain, irq);
+    }
+}
+
 static int match_region(const void *key, const void *elt)
 {
     const unsigned int offset = (unsigned long)key;
diff --git a/xen/arch/arm/vgic/vgic-mmio.h b/xen/arch/arm/vgic/vgic-mmio.h
index 2ddcbbf58d..4465f3b7e5 100644
--- a/xen/arch/arm/vgic/vgic-mmio.h
+++ b/xen/arch/arm/vgic/vgic-mmio.h
@@ -107,6 +107,17 @@ void vgic_mmio_write_cenable(struct vcpu *vcpu,
                              paddr_t addr, unsigned int len,
                              unsigned long val);
 
+unsigned long vgic_mmio_read_pending(struct vcpu *vcpu,
+                                     paddr_t addr, unsigned int len);
+
+void vgic_mmio_write_spending(struct vcpu *vcpu,
+                              paddr_t addr, unsigned int len,
+                              unsigned long val);
+
+void vgic_mmio_write_cpending(struct vcpu *vcpu,
+                              paddr_t addr, unsigned int len,
+                              unsigned long val);
+
 unsigned int vgic_v2_init_dist_iodev(struct vgic_io_device *dev);
 
 #endif
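For illustration of the address decoding above (standard GICv2
register layout, nothing specific to this patch): with one bit per
interrupt, VGIC_ADDR_TO_INTID(addr, 1) resolves to "byte offset into
the register group, times 8". A 32-bit guest write of 0x8 to
GICD_ICPENDR1 (4 bytes into the ICPENDR group) therefore gives
intid = 32 and len = 4; for_each_set_bit() then visits only bit 3,
so the handler clears the pending latch of virtual IRQ 32 + 3 = 35.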