From patchwork Tue Apr 8 15:12:41 2014
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 28004
From: Stefano Stabellini
Date: Tue, 8 Apr 2014 16:12:41 +0100
Message-ID: <1396969969-18973-4-git-send-email-stefano.stabellini@eu.citrix.com>
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v7 04/12] xen/arm: support HW interrupts, do not request maintenance_interrupts
If the irq to be injected is a hardware irq (p->desc != NULL), set
GICH_LR_HW. Do not set GICH_LR_MAINTENANCE_IRQ.

Remove the code to EOI a physical interrupt on behalf of the guest
because it has become unnecessary.

Introduce a new function, gic_clear_lrs, that goes over the GICH_LR
registers, clears the invalid ones and frees the corresponding
interrupts from the inflight queue if appropriate. Add the interrupt to
lr_pending if GIC_IRQ_GUEST_PENDING is still set.

Call gic_clear_lrs on entry to the hypervisor to make sure that Xen's
calculation of the highest priority interrupt currently inflight is
correct and not based on stale data.

In vgic_vcpu_inject_irq, if the target is a vcpu running on another
pcpu, we already send an SGI to the other pcpu so that it picks up the
new IRQ to inject. Now also send an SGI to the other pcpu even if the
IRQ is already inflight, so that it can clear the LR corresponding to
the previous injection as well as inject the new interrupt.

Signed-off-by: Stefano Stabellini
Acked-by: Ian Campbell

---

Changes in v7:
- move enter_hypervisor_head before the first use to avoid a forward
  declaration;
- improve in-code comments;
- rename gic_clear_one_lr to gic_update_one_lr.

Changes in v6:
- remove the double spin_lock on the vgic.lock introduced in v5.
Changes in v5:
- do not rename virtual_irq to irq;
- replace "const long unsigned int" with "const unsigned long";
- remove useless "& GICH_LR_PHYSICAL_MASK" in gic_set_lr;
- add a comment in maintenance_interrupts to explain its new purpose;
- introduce gic_clear_one_lr.

Changes in v4:
- merged patch #3 and #4 into a single patch.

Changes in v2:
- remove the EOI code, now unnecessary;
- do not assume physical IRQ == virtual IRQ;
- refactor gic_set_lr.
---
 xen/arch/arm/gic.c        | 137 +++++++++++++++++++++------------------
 xen/arch/arm/traps.c      |   9 +++
 xen/arch/arm/vgic.c       |   3 +-
 xen/include/asm-arm/gic.h |   1 +
 4 files changed, 75 insertions(+), 75 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index a7b29d8..b8b1452 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -68,6 +68,8 @@ static DEFINE_PER_CPU(u8, gic_cpu_id);
 /* Maximum cpu interface per GIC */
 #define NR_GIC_CPU_IF 8
 
+static void gic_update_one_lr(struct vcpu *v, int i);
+
 static unsigned int gic_cpu_mask(const cpumask_t *cpumask)
 {
     unsigned int cpu;
@@ -626,16 +628,18 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
 static inline void gic_set_lr(int lr, struct pending_irq *p,
         unsigned int state)
 {
-    int maintenance_int = GICH_LR_MAINTENANCE_IRQ;
+    uint32_t lr_reg;
 
     BUG_ON(lr >= nr_lrs);
     BUG_ON(lr < 0);
     BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
 
-    GICH[GICH_LR + lr] = state |
-        maintenance_int |
-        ((p->priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
+    lr_reg = state | ((p->priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
         ((p->irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
+    if ( p->desc != NULL )
+        lr_reg |= GICH_LR_HW | (p->desc->irq << GICH_LR_PHYSICAL_SHIFT);
+
+    GICH[GICH_LR + lr] = lr_reg;
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
     clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
@@ -695,6 +699,56 @@ out:
     return;
 }
 
+static void gic_update_one_lr(struct vcpu *v, int i)
+{
+    struct pending_irq *p;
+    uint32_t lr;
+    int irq;
+    bool_t inflight;
+
+    ASSERT(spin_is_locked(&v->arch.vgic.lock));
+
+    lr = GICH[GICH_LR + i];
+    if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
+    {
+        inflight = 0;
+        GICH[GICH_LR + i] = 0;
+        clear_bit(i, &this_cpu(lr_mask));
+
+        irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
+        spin_lock(&gic.lock);
+        p = irq_to_pending(v, irq);
+        if ( p->desc != NULL )
+            p->desc->status &= ~IRQ_INPROGRESS;
+        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
+             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
+        {
+            inflight = 1;
+            gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
+        }
+        spin_unlock(&gic.lock);
+        if ( !inflight )
+            list_del_init(&p->inflight);
+    }
+}
+
+void gic_clear_lrs(struct vcpu *v)
+{
+    int i = 0;
+    unsigned long flags;
+
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+    while ((i = find_next_bit((const unsigned long *) &this_cpu(lr_mask),
+                              nr_lrs, i)) < nr_lrs) {
+        gic_update_one_lr(v, i);
+        i++;
+    }
+
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+}
+
 static void gic_restore_pending_irqs(struct vcpu *v)
 {
     int i;
@@ -893,77 +947,14 @@ int gicv_setup(struct domain *d)
 }
 
-static void gic_irq_eoi(void *info)
-{
-    int virq = (uintptr_t) info;
-    GICC[GICC_DIR] = virq;
-}
-
 static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
-    int i = 0, virq, pirq = -1;
-    uint32_t lr;
-    struct vcpu *v = current;
-    uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
-
-    while ((i = find_next_bit((const long unsigned int *) &eisr,
-                              64, i)) < 64) {
-        struct pending_irq *p, *p2;
-        int cpu;
-        bool_t inflight;
-
-        cpu = -1;
-        inflight = 0;
-
-        spin_lock_irq(&gic.lock);
-        lr = GICH[GICH_LR + i];
-        virq = lr & GICH_LR_VIRTUAL_MASK;
-        GICH[GICH_LR + i] = 0;
-        clear_bit(i, &this_cpu(lr_mask));
-
-        p = irq_to_pending(v, virq);
-        if ( p->desc != NULL ) {
-            p->desc->status &= ~IRQ_INPROGRESS;
-            /* Assume only one pcpu needs to EOI the irq */
-            cpu = p->desc->arch.eoi_cpu;
-            pirq = p->desc->irq;
-        }
-        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
-             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
-        {
-            inflight = 1;
-            gic_add_to_lr_pending(v, p);
-        }
-
-        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
-
-        if ( !list_empty(&v->arch.vgic.lr_pending) ) {
-            p2 = list_entry(v->arch.vgic.lr_pending.next, typeof(*p2), lr_queue);
-            gic_set_lr(i, p2, GICH_LR_PENDING);
-            list_del_init(&p2->lr_queue);
-            set_bit(i, &this_cpu(lr_mask));
-        }
-        spin_unlock_irq(&gic.lock);
-
-        if ( !inflight )
-        {
-            spin_lock_irq(&v->arch.vgic.lock);
-            list_del_init(&p->inflight);
-            spin_unlock_irq(&v->arch.vgic.lock);
-        }
-
-        if ( p->desc != NULL ) {
-            /* this is not racy because we can't receive another irq of the
-             * same type until we EOI it. */
-            if ( cpu == smp_processor_id() )
-                gic_irq_eoi((void*)(uintptr_t)pirq);
-            else
-                on_selected_cpus(cpumask_of(cpu),
-                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
-        }
-
-        i++;
-    }
+    /*
+     * This is a dummy interrupt handler.
+     * Receiving the interrupt is going to cause gic_inject to be called
+     * on return to guest that is going to clear the old LRs and inject
+     * new interrupts.
+     */
 }
 
 void gic_dump_info(struct vcpu *v)
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 21c7b26..38b38a2 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1539,10 +1539,17 @@ bad_data_abort:
     inject_dabt_exception(regs, info.gva, hsr.len);
 }
 
+static void enter_hypervisor_head(void)
+{
+    gic_clear_lrs(current);
+}
+
 asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
 {
     union hsr hsr = { .bits = READ_SYSREG32(ESR_EL2) };
 
+    enter_hypervisor_head();
+
     switch (hsr.ec) {
     case HSR_EC_WFI_WFE:
         if ( !check_conditional_instr(regs, hsr) )
@@ -1620,11 +1627,13 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
 
 asmlinkage void do_trap_irq(struct cpu_user_regs *regs)
 {
+    enter_hypervisor_head();
     gic_interrupt(regs, 0);
 }
 
 asmlinkage void do_trap_fiq(struct cpu_user_regs *regs)
 {
+    enter_hypervisor_head();
     gic_interrupt(regs, 1);
 }
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index aab490c..566f0ff 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -701,8 +701,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
         if ( (irq != current->domain->arch.evtchn_irq) ||
              (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
             set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
-        return;
+        goto out;
     }
 
     /* vcpu offline */
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 6fce5c2..ebb90c6 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -220,6 +220,7 @@ extern unsigned int gic_number_lines(void);
 /* IRQ translation function for the device tree */
 int gic_irq_xlate(const u32 *intspec, unsigned int intsize,
                   unsigned int *out_hwirq, unsigned int *out_type);
+void gic_clear_lrs(struct vcpu *v);
 
 #endif /* __ASSEMBLY__ */
 #endif