From patchwork Fri Aug  8 17:13:48 2014
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 35153
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com, Stefano Stabellini
Date: Fri, 8 Aug 2014 18:13:48 +0100
Message-ID: <1407518033-10694-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
Subject: [Xen-devel] [PATCH v11 05/10] xen/arm: physical irq follow virtual irq

Migrate physical irqs to the same physical cpu that is running the vcpu
expected to receive the irqs. That is done when enabling irqs, when the
guest writes to GICD_ITARGETSR and when Xen migrates a vcpu to a
different pcpu.

In case of virq migration, if the virq is inflight and in a GICH_LR
register already, delay migrating the corresponding physical irq until
the virq is EOIed by the guest and the MIGRATING flag has been cleared.
This way we make sure that the pcpu running the old vcpu gets
interrupted with a new irq of the same kind, clearing the GICH_LR
sooner.
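To make the deferred case easier to follow, here is a small standalone
model of the flow described above (struct model_irq, model_migrate and
model_eoi are stand-ins invented for this sketch only; the real logic
lives in vgic_migrate_irq and gic_update_one_lr in the diff below):

#include <stdbool.h>
#include <stdio.h>

/* Models the GIC_IRQ_GUEST_MIGRATING bit in pending_irq->status. */
#define MIGRATING  (1u << 0)

struct model_irq {
    unsigned int status;  /* models pending_irq->status */
    bool in_lr;           /* models "inflight and in a GICH_LR" */
    int pcpu;             /* models the physical irq affinity */
};

/* Models vgic_migrate_irq: retarget now if possible, else defer. */
static void model_migrate(struct model_irq *p, int new_pcpu)
{
    if (!p->in_lr)
        p->pcpu = new_pcpu;       /* not in an LR: migrate immediately */
    else
        p->status |= MIGRATING;   /* in an LR: defer until guest EOI */
}

/* Models gic_update_one_lr on guest EOI: finish a deferred move. */
static void model_eoi(struct model_irq *p, int new_pcpu)
{
    p->in_lr = false;
    if (p->status & MIGRATING) {
        p->status &= ~MIGRATING;
        p->pcpu = new_pcpu;
    }
}

int main(void)
{
    struct model_irq p = { .status = 0, .in_lr = true, .pcpu = 0 };

    model_migrate(&p, 1);  /* virq still in an LR: only sets MIGRATING */
    printf("after migrate: pcpu=%d migrating=%u\n", p.pcpu, p.status);

    model_eoi(&p, 1);      /* guest EOI completes the migration */
    printf("after EOI:     pcpu=%d migrating=%u\n", p.pcpu, p.status);
    return 0;
}

Running the model prints pcpu=0 until the EOI, matching the ordering
the patch enforces.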
Introduce a new arch specific function, arch_move_irqs, that is empty
on x86 and implements the vgic irq migration code on ARM.
arch_move_irqs is going to be called from sched.c.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Jan Beulich
Acked-by: Julien Grall

---
Changes in v10:
- fix for loop over vgic.nr_lines.

Changes in v9:
- move arch_move_irqs declaration to irq.h.

Changes in v7:
- remove checks at the top of gic_irq_set_affinity, add assert instead;
- move irq_set_affinity to irq.c;
- delay setting the affinity of the physical irq when the virq is
  MIGRATING until the virq is EOIed by the guest;
- do not set the affinity of MIGRATING irqs from arch_move_irqs.

Changes in v6:
- use vgic_get_target_vcpu instead of _vgic_get_target_vcpu in
  arch_move_irqs.

Changes in v5:
- prettify vgic_move_irqs;
- rename vgic_move_irqs to arch_move_irqs;
- introduce helper function irq_set_affinity.
---
 xen/arch/arm/gic-v2.c     | 15 +++++++++++++--
 xen/arch/arm/gic.c        |  6 +++++-
 xen/arch/arm/irq.c        |  6 ++++++
 xen/arch/arm/vgic.c       | 21 +++++++++++++++++++++
 xen/include/asm-arm/irq.h |  3 +++
 xen/include/asm-x86/irq.h |  2 ++
 6 files changed, 50 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 1305542..da60a41 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -569,9 +569,20 @@ static void gicv2_guest_irq_end(struct irq_desc *desc)
     /* Deactivation happens in maintenance interrupt / via GICV */
 }
 
-static void gicv2_irq_set_affinity(struct irq_desc *desc, const cpumask_t *mask)
+static void gicv2_irq_set_affinity(struct irq_desc *desc, const cpumask_t *cpu_mask)
 {
-    BUG();
+    unsigned int mask;
+
+    ASSERT(!cpumask_empty(cpu_mask));
+
+    spin_lock(&gicv2.lock);
+
+    mask = gicv2_cpu_mask(cpu_mask);
+
+    /* Set target CPU mask (RAZ/WI on uniprocessor) */
+    writeb_gicd(mask, GICD_ITARGETSR + desc->irq);
+
+    spin_unlock(&gicv2.lock);
 }
 
 /* XXX different for level vs edge */

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index f5c7c91..2aa9500 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -382,7 +382,11 @@ static void gic_update_one_lr(struct vcpu *v, int i)
             gic_raise_guest_irq(v, irq, p->priority);
         else {
             list_del_init(&p->inflight);
-            clear_bit(GIC_IRQ_GUEST_MIGRATING, &p->status);
+            if ( test_and_clear_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
+            {
+                struct vcpu *v_target = vgic_get_target_vcpu(v, irq);
+                irq_set_affinity(p->desc, cpumask_of(v_target->processor));
+            }
         }
     }
 }

diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 49ca467..7150c7a 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -134,6 +134,12 @@ static inline struct domain *irq_get_domain(struct irq_desc *desc)
     return desc->action->dev_id;
 }
 
+void irq_set_affinity(struct irq_desc *desc, const cpumask_t *cpu_mask)
+{
+    if ( desc != NULL )
+        desc->handler->set_affinity(desc, cpu_mask);
+}
+
 int request_irq(unsigned int irq, unsigned int irqflags,
                 void (*handler)(int, void *, struct cpu_user_regs *),
                 const char *devname, void *dev_id)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 4344c36..731d84d 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -182,6 +182,7 @@ void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
 
     if ( list_empty(&p->inflight) )
     {
+        irq_set_affinity(p->desc, cpumask_of(new->processor));
         spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
         return;
     }
@@ -190,6 +191,7 @@
     {
         list_del_init(&p->lr_queue);
         list_del_init(&p->inflight);
+        irq_set_affinity(p->desc, cpumask_of(new->processor));
         spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
         vgic_vcpu_inject_irq(new, irq);
         return;
@@ -202,6 +204,24 @@
     spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
 }
 
+void arch_move_irqs(struct vcpu *v)
+{
+    const cpumask_t *cpu_mask = cpumask_of(v->processor);
+    struct domain *d = v->domain;
+    struct pending_irq *p;
+    struct vcpu *v_target;
+    int i;
+
+    for ( i = 32; i < (d->arch.vgic.nr_lines + 32); i++ )
+    {
+        v_target = vgic_get_target_vcpu(v, i);
+        p = irq_to_pending(v_target, i);
+
+        if ( v_target == v && !test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
+            irq_set_affinity(p->desc, cpu_mask);
+    }
+}
+
 void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 {
     struct domain *d = v->domain;
@@ -259,6 +279,7 @@
         }
         if ( p->desc != NULL )
         {
+            irq_set_affinity(p->desc, cpumask_of(v_target->processor));
             spin_lock_irqsave(&p->desc->lock, flags);
             p->desc->handler->enable(p->desc);
             spin_unlock_irqrestore(&p->desc->lock, flags);

diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
index e567f71..e877334 100644
--- a/xen/include/asm-arm/irq.h
+++ b/xen/include/asm-arm/irq.h
@@ -42,12 +42,15 @@ void init_secondary_IRQ(void);
 
 int route_irq_to_guest(struct domain *d, unsigned int irq,
                        const char *devname);
+void arch_move_irqs(struct vcpu *v);
 
 /* Set IRQ type for an SPI */
 int irq_set_spi_type(unsigned int spi, unsigned int type);
 
 int platform_get_irq(const struct dt_device_node *device, int index);
 
+void irq_set_affinity(struct irq_desc *desc, const cpumask_t *cpu_mask);
+
 #endif /* _ASM_HW_IRQ_H */
 /*
  * Local variables:

diff --git a/xen/include/asm-x86/irq.h b/xen/include/asm-x86/irq.h
index 9066d38..d3c55f3 100644
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -197,4 +197,6 @@ void cleanup_domain_irq_mapping(struct domain *);
 
 bool_t cpu_has_pending_apic_eoi(void);
 
+static inline void arch_move_irqs(struct vcpu *v) { }
+
 #endif /* _ASM_HW_IRQ_H */
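A note on the single-byte write in gicv2_irq_set_affinity above: on a
GICv2 the GICD_ITARGETSR registers are byte-accessible, one target byte
per interrupt with each bit selecting a CPU interface, which is why a
single writeb_gicd(mask, GICD_ITARGETSR + desc->irq) retargets exactly
one irq. A standalone sketch of the offset/mask arithmetic follows
(cpu_mask_of is an invented stand-in for gicv2_cpu_mask, which computes
the real per-cpu mask):

#include <stdint.h>
#include <stdio.h>

#define GICD_ITARGETSR 0x800u   /* GICv2: one target byte per interrupt */

/* Invented stand-in for gicv2_cpu_mask(): bit n = CPU interface n. */
static uint8_t cpu_mask_of(unsigned int pcpu)
{
    return (uint8_t)(1u << pcpu);
}

int main(void)
{
    unsigned int irq = 34;   /* an SPI, as iterated by arch_move_irqs */
    unsigned int pcpu = 2;

    /* Mirrors writeb_gicd(mask, GICD_ITARGETSR + desc->irq) above. */
    printf("writeb 0x%02x -> distributor offset 0x%03x\n",
           (unsigned int)cpu_mask_of(pcpu), GICD_ITARGETSR + irq);
    return 0;
}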