From patchwork Thu May 22 12:32:21 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 30615
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Thu, 22 May 2014 13:32:21 +0100
Message-ID: <1400761950-25035-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com, Stefano Stabellini
Subject: [Xen-devel] [PATCH v8 04/13] xen/arm: support HW interrupts, do not request maintenance_interrupts

If the irq to be injected is a hardware irq (p->desc != NULL), set
GICH_LR_HW. Do not set GICH_LR_MAINTENANCE_IRQ.

Remove the code to EOI a physical interrupt on behalf of the guest,
because it has become unnecessary.

Introduce a new function, gic_clear_lrs, that goes over the GICH_LR
registers, clears the invalid ones and frees the corresponding
interrupts from the inflight queue if appropriate.
Add the interrupt to lr_pending if the GIC_IRQ_GUEST_PENDING bit is
still set.

Call gic_clear_lrs on entry to the hypervisor if we are coming from
guest mode, so that Xen's calculation of the highest priority interrupt
currently inflight is accurate and not based on stale data.

In vgic_vcpu_inject_irq, if the target is a vcpu running on another
pcpu, we are already sending an SGI to the other pcpu so that it picks
up the new IRQ to inject. Now also send an SGI to the other pcpu even
if the IRQ is already inflight, so that it can clear the LR
corresponding to the previous injection as well as inject the new
interrupt.

Signed-off-by: Stefano Stabellini
Acked-by: Ian Campbell
Acked-by: Julien Grall
---
Changes in v8:
- do not clear LRs for the idle domain;
- do not clear LRs on hypervisor entry if we are not coming from guest mode;
- rename lr_reg to lr_val;
- remove double spin_lock in gic_update_one_lr.

Changes in v7:
- move enter_hypervisor_head before the first use to avoid a forward
  declaration;
- improve in-code comments;
- rename gic_clear_one_lr to gic_update_one_lr.

Changes in v6:
- remove double spin_lock on the vgic.lock introduced in v5.

Changes in v5:
- do not rename virtual_irq to irq;
- replace "const long unsigned int" with "const unsigned long";
- remove useless "& GICH_LR_PHYSICAL_MASK" in gic_set_lr;
- add a comment in maintenance_interrupt to explain its new purpose;
- introduce gic_clear_one_lr.

Changes in v4:
- merged patch #3 and #4 into a single patch.

Changes in v2:
- remove the EOI code, now unnecessary;
- do not assume physical IRQ == virtual IRQ;
- refactor gic_set_lr.
---
 xen/arch/arm/gic.c        |  133 ++++++++++++++++++++-------------------
 xen/arch/arm/traps.c      |   10 ++++
 xen/arch/arm/vgic.c       |    3 +-
 xen/include/asm-arm/gic.h |    1 +
 4 files changed, 72 insertions(+), 75 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 6b21945..b73bee3 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -66,6 +66,8 @@ static DEFINE_PER_CPU(u8, gic_cpu_id);
 /* Maximum cpu interface per GIC */
 #define NR_GIC_CPU_IF 8
 
+static void gic_update_one_lr(struct vcpu *v, int i);
+
 static unsigned int gic_cpu_mask(const cpumask_t *cpumask)
 {
     unsigned int cpu;
@@ -543,16 +545,18 @@ void gic_disable_cpu(void)
 static inline void gic_set_lr(int lr, struct pending_irq *p,
         unsigned int state)
 {
-    int maintenance_int = GICH_LR_MAINTENANCE_IRQ;
+    uint32_t lr_val;
 
     BUG_ON(lr >= nr_lrs);
     BUG_ON(lr < 0);
     BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
 
-    GICH[GICH_LR + lr] = state |
-        maintenance_int |
-        ((p->priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
+    lr_val = state | ((p->priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
         ((p->irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
+    if ( p->desc != NULL )
+        lr_val |= GICH_LR_HW | (p->desc->irq << GICH_LR_PHYSICAL_SHIFT);
+
+    GICH[GICH_LR + lr] = lr_val;
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
     clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
@@ -612,6 +616,52 @@ out:
     return;
 }
 
+static void gic_update_one_lr(struct vcpu *v, int i)
+{
+    struct pending_irq *p;
+    uint32_t lr;
+    int irq;
+
+    ASSERT(spin_is_locked(&v->arch.vgic.lock));
+
+    lr = GICH[GICH_LR + i];
+    if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
+    {
+        GICH[GICH_LR + i] = 0;
+        clear_bit(i, &this_cpu(lr_mask));
+
+        irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
+        p = irq_to_pending(v, irq);
+        if ( p->desc != NULL )
+            p->desc->status &= ~IRQ_INPROGRESS;
+        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
+             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
+            gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
+        else
+            list_del_init(&p->inflight);
+    }
+}
+
+void gic_clear_lrs(struct vcpu *v)
+{
+    int i = 0;
+    unsigned long flags;
+
+    if ( is_idle_vcpu(v) )
+        return;
+
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+    while ((i = find_next_bit((const unsigned long *) &this_cpu(lr_mask),
+                              nr_lrs, i)) < nr_lrs) {
+        gic_update_one_lr(v, i);
+        i++;
+    }
+
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+}
+
 static void gic_restore_pending_irqs(struct vcpu *v)
 {
     int i;
@@ -767,77 +817,14 @@ int gicv_setup(struct domain *d)
 }
 
-static void gic_irq_eoi(void *info)
-{
-    int virq = (uintptr_t) info;
-    GICC[GICC_DIR] = virq;
-}
-
 static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
-    int i = 0, virq, pirq = -1;
-    uint32_t lr;
-    struct vcpu *v = current;
-    uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
-
-    while ((i = find_next_bit((const long unsigned int *) &eisr,
-                              64, i)) < 64) {
-        struct pending_irq *p, *p2;
-        int cpu;
-        bool_t inflight;
-
-        cpu = -1;
-        inflight = 0;
-
-        spin_lock_irq(&gic.lock);
-        lr = GICH[GICH_LR + i];
-        virq = lr & GICH_LR_VIRTUAL_MASK;
-        GICH[GICH_LR + i] = 0;
-        clear_bit(i, &this_cpu(lr_mask));
-
-        p = irq_to_pending(v, virq);
-        if ( p->desc != NULL ) {
-            p->desc->status &= ~IRQ_INPROGRESS;
-            /* Assume only one pcpu needs to EOI the irq */
-            cpu = p->desc->arch.eoi_cpu;
-            pirq = p->desc->irq;
-        }
-        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
-             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
-        {
-            inflight = 1;
-            gic_add_to_lr_pending(v, p);
-        }
-
-        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
-
-        if ( !list_empty(&v->arch.vgic.lr_pending) ) {
-            p2 = list_entry(v->arch.vgic.lr_pending.next, typeof(*p2), lr_queue);
-            gic_set_lr(i, p2, GICH_LR_PENDING);
-            list_del_init(&p2->lr_queue);
-            set_bit(i, &this_cpu(lr_mask));
-        }
-        spin_unlock_irq(&gic.lock);
-
-        if ( !inflight )
-        {
-            spin_lock_irq(&v->arch.vgic.lock);
-            list_del_init(&p->inflight);
-            spin_unlock_irq(&v->arch.vgic.lock);
-        }
-
-        if ( p->desc != NULL ) {
-            /* this is not racy because we can't receive another irq of the
-             * same type until we EOI it. */
-            if ( cpu == smp_processor_id() )
-                gic_irq_eoi((void*)(uintptr_t)pirq);
-            else
-                on_selected_cpus(cpumask_of(cpu),
-                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
-        }
-
-        i++;
-    }
+    /*
+     * This is a dummy interrupt handler.
+     * Receiving the interrupt is going to cause gic_inject to be called
+     * on return to guest that is going to clear the old LRs and inject
+     * new interrupts.
+     */
 }
 
 void gic_dump_info(struct vcpu *v)
 {
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 03a3da6..a4bdaaa 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1658,10 +1658,18 @@ bad_data_abort:
     inject_dabt_exception(regs, info.gva, hsr.len);
 }
 
+static void enter_hypervisor_head(struct cpu_user_regs *regs)
+{
+    if ( guest_mode(regs) )
+        gic_clear_lrs(current);
+}
+
 asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
 {
     union hsr hsr = { .bits = READ_SYSREG32(ESR_EL2) };
 
+    enter_hypervisor_head(regs);
+
     switch (hsr.ec) {
     case HSR_EC_WFI_WFE:
         if ( !check_conditional_instr(regs, hsr) )
@@ -1750,11 +1758,13 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
 }
 
 asmlinkage void do_trap_irq(struct cpu_user_regs *regs)
 {
+    enter_hypervisor_head(regs);
     gic_interrupt(regs, 0);
 }
 
 asmlinkage void do_trap_fiq(struct cpu_user_regs *regs)
 {
+    enter_hypervisor_head(regs);
     gic_interrupt(regs, 1);
 }
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 9838ce5..d5b3a4b 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -720,8 +720,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
         if ( (irq != current->domain->arch.evtchn_irq) ||
              (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
             set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
-        return;
+        goto out;
     }
 
     /* vcpu offline */
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index b1b4fd5..92a8916 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -219,6 +219,7 @@ extern unsigned int gic_number_lines(void);
 /* IRQ translation function for the device tree */
 int gic_irq_xlate(const u32 *intspec, unsigned int intsize,
                   unsigned int *out_hwirq, unsigned int *out_type);
+void gic_clear_lrs(struct vcpu *v);
 
 #endif /* __ASSEMBLY__ */
 #endif
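
To complement the gic_set_lr and gic_update_one_lr hunks above, here is a
minimal standalone sketch of the GICv2 list-register (GICH_LR) encoding they
rely on. The field layout follows the GICv2 architecture (virtual ID in bits
[9:0], physical ID in bits [19:10], priority in bits [27:23], pending/active
state in bits [29:28], HW in bit 31); the constant names mirror Xen's gic.h,
but the program itself is illustrative and not part of the patch.

#include <stdint.h>
#include <stdio.h>

/* GICv2 GICH_LR field layout; names mirror Xen's gic.h */
#define GICH_LR_VIRTUAL_SHIFT   0
#define GICH_LR_VIRTUAL_MASK    0x3ffU
#define GICH_LR_PHYSICAL_SHIFT  10
#define GICH_LR_PRIORITY_SHIFT  23
#define GICH_LR_PENDING         (1U << 28)
#define GICH_LR_ACTIVE          (1U << 29)
#define GICH_LR_HW              (1U << 31)

/*
 * Build an LR value the way gic_set_lr does after this patch: state,
 * 5-bit priority (Xen stores 8-bit priorities, hence the >> 3) and the
 * virtual ID; for a hardware interrupt also set the HW bit and the
 * physical ID, so that the guest's EOI deactivates the physical line
 * directly and no maintenance interrupt is needed.
 */
static uint32_t encode_lr(uint32_t state, uint8_t priority,
                          uint32_t virq, uint32_t pirq, int hw)
{
    uint32_t lr_val = state |
        ((uint32_t)(priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
        ((virq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);

    if ( hw )
        lr_val |= GICH_LR_HW | (pirq << GICH_LR_PHYSICAL_SHIFT);
    return lr_val;
}

int main(void)
{
    /* hypothetical hardware SPI: virtual IRQ 34 backed by physical IRQ 34 */
    uint32_t lr = encode_lr(GICH_LR_PENDING, 0xa0, 34, 34, 1);

    /* the check gic_update_one_lr performs: an LR that is neither
     * pending nor active has been EOIed by the guest and can be recycled */
    if ( !(lr & (GICH_LR_PENDING | GICH_LR_ACTIVE)) )
        printf("LR free\n");
    else
        printf("vIRQ %u still in flight\n",
               (unsigned)((lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK));
    return 0;
}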
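
gic_clear_lrs visits only the occupied LRs by iterating over the set bits of
the per-cpu lr_mask with find_next_bit. Below is a small plain-C illustration
of that loop shape, using __builtin_ctzl as a stand-in for Xen's
find_next_bit; the mask value and nr_lrs are made up for the example.

#include <stdio.h>

/* stand-in for Xen's find_next_bit() on a single word: index of the
 * first set bit >= start, or nbits if there is none */
static int next_bit(unsigned long mask, int start, int nbits)
{
    if ( start >= nbits )
        return nbits;
    mask &= ~0UL << start;
    return mask ? __builtin_ctzl(mask) : nbits;
}

int main(void)
{
    unsigned long lr_mask = 0x25; /* pretend LRs 0, 2 and 5 are occupied */
    int nr_lrs = 8;               /* hypothetical number of list registers */
    int i = 0;

    /* same shape as the loop in gic_clear_lrs: visit only occupied LRs,
     * skipping empty slots instead of scanning all nr_lrs of them */
    while ( (i = next_bit(lr_mask, i, nr_lrs)) < nr_lrs )
    {
        printf("would call gic_update_one_lr(v, %d)\n", i);
        i++;
    }
    return 0;
}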