From patchwork Thu Jul 3 16:53:12 2014
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 33057
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Thu, 3 Jul 2014 17:53:12 +0100
Message-ID: <1404406394-18231-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com, Stefano Stabellini
Subject: [Xen-devel] [PATCH v7 5/6] xen/arm: physical irq follow virtual irq
List-Id: xen-devel@lists.xen.org
Migrate physical irqs to the same physical cpu that is running the vcpu
expected to receive the irqs. That is done when enabling irqs, when the
guest writes to GICD_ITARGETSR and when Xen migrates a vcpu to a
different pcpu.

In case of virq migration, if the virq is inflight and in a GICH_LR
register already, delay migrating the corresponding physical irq until
the virq is EOIed by the guest and the MIGRATING flag has been cleared.
This way we make sure that the pcpu running the old vcpu gets
interrupted with a new irq of the same kind, clearing the GICH_LR
sooner.

Introduce a new arch specific function, arch_move_irqs, that is empty
on x86 and implements the vgic irq migration code on ARM.
arch_move_irqs is going to be called from sched.c.

Signed-off-by: Stefano Stabellini
Acked-by: Jan Beulich

---

Changes in v7:
- remove checks at the top of gic_irq_set_affinity, add assert instead;
- move irq_set_affinity to irq.c;
- delay setting the affinity of the physical irq when the virq is
  MIGRATING until the virq is EOIed by the guest;
- do not set the affinity of MIGRATING irqs from arch_move_irqs.

Changes in v6:
- use vgic_get_target_vcpu instead of _vgic_get_target_vcpu in
  arch_move_irqs.

Changes in v5:
- prettify vgic_move_irqs;
- rename vgic_move_irqs to arch_move_irqs;
- introduce helper function irq_set_affinity.
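The three cases the virq-migration logic distinguishes (not inflight; pending but still on the lr_queue; already loaded in a GICH_LR) can be modelled as a small decision function. This is only an illustrative sketch of the policy described above; the names are made up and do not belong to Xen's API:

```c
#include <stdbool.h>

/* Illustrative model of the per-virq migration decision (not Xen code). */
enum migrate_action {
    MOVE_NOW,          /* not inflight: retarget the physical irq immediately */
    REQUEUE_AND_MOVE,  /* pending but not yet in a GICH_LR: retarget and
                          re-inject on the new vcpu */
    DEFER_UNTIL_EOI    /* already in a GICH_LR: set MIGRATING and retarget
                          only when the guest EOIs the virq */
};

enum migrate_action decide_migration(bool inflight, bool on_lr_queue)
{
    if ( !inflight )
        return MOVE_NOW;
    if ( on_lr_queue )
        return REQUEUE_AND_MOVE;
    return DEFER_UNTIL_EOI;
}
```

The deferred case is what keeps the old pcpu responsible for the virq until the guest EOIs it, so the stale GICH_LR entry is cleared promptly.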
---
 xen/arch/arm/gic-v2.c     | 17 +++++++++++++++--
 xen/arch/arm/gic.c        |  1 +
 xen/arch/arm/irq.c        |  6 ++++++
 xen/arch/arm/vgic.c       | 21 +++++++++++++++++++++
 xen/include/asm-arm/gic.h |  1 +
 xen/include/asm-arm/irq.h |  2 ++
 xen/include/asm-x86/irq.h |  2 ++
 7 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 695c232..c3d2853 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -532,9 +532,22 @@ static void gicv2_guest_irq_end(struct irq_desc *desc)
     /* Deactivation happens in maintenance interrupt / via GICV */
 }
 
-static void gicv2_irq_set_affinity(struct irq_desc *desc, const cpumask_t *mask)
+static void gicv2_irq_set_affinity(struct irq_desc *desc, const cpumask_t *cpu_mask)
 {
-    BUG();
+    volatile unsigned char *bytereg;
+    unsigned int mask;
+
+    ASSERT(!cpumask_empty(cpu_mask));
+
+    spin_lock(&gicv2.lock);
+
+    mask = gicv2_cpu_mask(cpu_mask);
+
+    /* Set target CPU mask (RAZ/WI on uniprocessor) */
+    bytereg = (unsigned char *) (GICD + GICD_ITARGETSR);
+    bytereg[desc->irq] = mask;
+
+    spin_unlock(&gicv2.lock);
 }
 
 /* XXX different for level vs edge */
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index be97261..37b08c2 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -397,6 +397,7 @@ static void gic_update_one_lr(struct vcpu *v, int i)
         /* vgic_get_target_vcpu takes the rank lock, ensuring
          * consistency with other itarget changes.
          */
         v_target = vgic_get_target_vcpu(v, irq);
+        irq_set_affinity(p->desc, cpumask_of(v_target->processor));
         vgic_vcpu_inject_irq(v_target, irq);
         spin_lock(&v->arch.vgic.lock);
     }
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 49ca467..7150c7a 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -134,6 +134,12 @@ static inline struct domain *irq_get_domain(struct irq_desc *desc)
     return desc->action->dev_id;
 }
 
+void irq_set_affinity(struct irq_desc *desc, const cpumask_t *cpu_mask)
+{
+    if ( desc != NULL )
+        desc->handler->set_affinity(desc, cpu_mask);
+}
+
 int request_irq(unsigned int irq, unsigned int irqflags,
                 void (*handler)(int, void *, struct cpu_user_regs *),
                 const char *devname, void *dev_id)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index b4493a3..69d3040 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -399,6 +399,7 @@ static void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int ir
 
     if ( list_empty(&p->inflight) )
     {
+        irq_set_affinity(p->desc, cpumask_of(new->processor));
         spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
         return;
     }
@@ -407,6 +408,7 @@ static void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int ir
     {
         list_del_init(&p->lr_queue);
         list_del_init(&p->inflight);
+        irq_set_affinity(p->desc, cpumask_of(new->processor));
         spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
         vgic_vcpu_inject_irq(new, irq);
         return;
@@ -422,6 +424,24 @@ static void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int ir
     spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
 }
 
+void arch_move_irqs(struct vcpu *v)
+{
+    const cpumask_t *cpu_mask = cpumask_of(v->processor);
+    struct domain *d = v->domain;
+    struct pending_irq *p;
+    struct vcpu *v_target;
+    int i;
+
+    for ( i = 32; i < d->arch.vgic.nr_lines; i++ )
+    {
+        v_target = vgic_get_target_vcpu(v, i);
+        p = irq_to_pending(v_target, i);
+
+        if ( v_target == v && !test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
+            irq_set_affinity(p->desc, cpu_mask);
+    }
+}
+
 static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 {
     const unsigned long mask = r;
@@ -477,6 +497,7 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
         }
         if ( p->desc != NULL )
         {
+            irq_set_affinity(p->desc, cpumask_of(v_target->processor));
             spin_lock_irqsave(&p->desc->lock, flags);
             p->desc->handler->enable(p->desc);
             spin_unlock_irqrestore(&p->desc->lock, flags);
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 839d053..6deb4bd 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -318,6 +318,7 @@ struct gic_hw_operations {
 void register_gic_ops(const struct gic_hw_operations *ops);
 
 struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq);
+void arch_move_irqs(struct vcpu *v);
 
 #endif /* __ASSEMBLY__ */
 
 #endif
diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
index e567f71..dc282f0 100644
--- a/xen/include/asm-arm/irq.h
+++ b/xen/include/asm-arm/irq.h
@@ -48,6 +48,8 @@ int irq_set_spi_type(unsigned int spi, unsigned int type);
 
 int platform_get_irq(const struct dt_device_node *device, int index);
 
+void irq_set_affinity(struct irq_desc *desc, const cpumask_t *cpu_mask);
+
 #endif /* _ASM_HW_IRQ_H */
 /*
  * Local variables:
diff --git a/xen/include/asm-x86/irq.h b/xen/include/asm-x86/irq.h
index 9066d38..d3c55f3 100644
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -197,4 +197,6 @@ void cleanup_domain_irq_mapping(struct domain *);
 
 bool_t cpu_has_pending_apic_eoi(void);
 
+static inline void arch_move_irqs(struct vcpu *v) { }
+
 #endif /* _ASM_HW_IRQ_H */