Message ID: 1400761950-25035-9-git-send-email-stefano.stabellini@eu.citrix.com
State: New
On 22/05/14 13:32, Stefano Stabellini wrote:
> Set GICH_LR_PENDING in the corresponding GICH_LR to inject a second irq
> while the first one is still active.
> If the first irq is already pending (not active), clear
> GIC_IRQ_GUEST_QUEUED because the guest doesn't need a second
> notification. If the irq has already been EOI'ed then just clear the
> GICH_LR right away and move the interrupt to lr_pending so that it is
> going to be reinjected by gic_restore_pending_irqs on return to guest.
>
> If the target cpu is not the current cpu, then set GIC_IRQ_GUEST_QUEUED
> and send an SGI. The target cpu is going to be interrupted and call
> gic_clear_lrs, that is going to take the same actions.
>
> Do not call vgic_vcpu_inject_irq from gic_inject if
> evtchn_upcall_pending is set. If we remove that call, we don't need to
> special case evtchn_irq in vgic_vcpu_inject_irq anymore.
> We need to force the first injection of evtchn_irq (call
> gic_vcpu_inject_irq) from vgic_enable_irqs because evtchn_upcall_pending
> is already set by common code on vcpu creation.

If you only need it the first time, why can't you call vgic_vcpu_inject_irq
with the evtchn IRQ when the VCPU is turned on?

This would remove every hack with this IRQ in the GIC code.

> ---
>  xen/arch/arm/gic.c        | 48 +++++++++++++++++++++++++++++++++++++--------
>  xen/arch/arm/vgic.c       | 11 +++++++----
>  xen/include/asm-arm/gic.h |  1 +
>  3 files changed, 48 insertions(+), 12 deletions(-)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 89d7025..a6fe566 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -66,6 +66,8 @@ static DEFINE_PER_CPU(u8, gic_cpu_id);
>  /* Maximum cpu interface per GIC */
>  #define NR_GIC_CPU_IF 8
>
> +#undef GIC_DEBUG
> +

Did you intend to keep the debug in the final patch?

Regards,
On Thu, 22 May 2014, Julien Grall wrote:
> > while the first one is still active.
> > If the first irq is already pending (not active), clear
> > GIC_IRQ_GUEST_QUEUED because the guest doesn't need a second
> > notification. If the irq has already been EOI'ed then just clear the
> > GICH_LR right away and move the interrupt to lr_pending so that it is
> > going to be reinjected by gic_restore_pending_irqs on return to guest.
> >
> > If the target cpu is not the current cpu, then set GIC_IRQ_GUEST_QUEUED
> > and send an SGI. The target cpu is going to be interrupted and call
> > gic_clear_lrs, that is going to take the same actions.
> >
> > Do not call vgic_vcpu_inject_irq from gic_inject if
> > evtchn_upcall_pending is set. If we remove that call, we don't need to
> > special case evtchn_irq in vgic_vcpu_inject_irq anymore.
> > We need to force the first injection of evtchn_irq (call
> > gic_vcpu_inject_irq) from vgic_enable_irqs because evtchn_upcall_pending
> > is already set by common code on vcpu creation.
>
> If you only need it the first time, why can't you call vgic_vcpu_inject_irq
> with the evtchn IRQ when the VCPU is turned on?
>
> This would remove every hack with this IRQ in the GIC code.

In principle it sounds nice, but in practice it is difficult and risks
being racy. In vgic_vcpu_inject_irq we have:

    /* vcpu offline */
    if ( test_bit(_VPF_down, &v->pause_flags) )
    {
        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
        return;
    }

So we can only inject the irq once the vcpu is properly up, which is
certainly later than vcpu_initialise.

> > ---
> >  xen/arch/arm/gic.c        | 48 +++++++++++++++++++++++++++++++++++++--------
> >  xen/arch/arm/vgic.c       | 11 +++++++----
> >  xen/include/asm-arm/gic.h |  1 +
> >  3 files changed, 48 insertions(+), 12 deletions(-)
> >
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index 89d7025..a6fe566 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -66,6 +66,8 @@ static DEFINE_PER_CPU(u8, gic_cpu_id);
> >  /* Maximum cpu interface per GIC */
> >  #define NR_GIC_CPU_IF 8
> >
> > +#undef GIC_DEBUG
> > +
>
> Did you intend to keep the debug in the final patch?

Yes, I think it is useful.
On 22/05/14 18:39, Stefano Stabellini wrote:
> On Thu, 22 May 2014, Julien Grall wrote:
>>> while the first one is still active.
>>> If the first irq is already pending (not active), clear
>>> GIC_IRQ_GUEST_QUEUED because the guest doesn't need a second
>>> notification. If the irq has already been EOI'ed then just clear the
>>> GICH_LR right away and move the interrupt to lr_pending so that it is
>>> going to be reinjected by gic_restore_pending_irqs on return to guest.
>>>
>>> If the target cpu is not the current cpu, then set GIC_IRQ_GUEST_QUEUED
>>> and send an SGI. The target cpu is going to be interrupted and call
>>> gic_clear_lrs, that is going to take the same actions.
>>>
>>> Do not call vgic_vcpu_inject_irq from gic_inject if
>>> evtchn_upcall_pending is set. If we remove that call, we don't need to
>>> special case evtchn_irq in vgic_vcpu_inject_irq anymore.
>>> We need to force the first injection of evtchn_irq (call
>>> gic_vcpu_inject_irq) from vgic_enable_irqs because evtchn_upcall_pending
>>> is already set by common code on vcpu creation.
>>
>> If you only need it the first time, why can't you call vgic_vcpu_inject_irq
>> with the evtchn IRQ when the VCPU is turned on?
>>
>> This would remove every hack with this IRQ in the GIC code.
>
> In principle it sounds nice, but in practice it is difficult and risks
> being racy. In vgic_vcpu_inject_irq we have:
>
>     /* vcpu offline */
>     if ( test_bit(_VPF_down, &v->pause_flags) )
>     {
>         spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
>         return;
>     }
>
> So we can only inject the irq once the vcpu is properly up, which is
> certainly later than vcpu_initialise.

If we call vgic_vcpu_inject_irq right before vcpu_wake (once the _VPF_down
flag has been cleared) we won't have any race condition. This can be done in
both arch/arm/vpsci.c and common/domain.c (VCPUOP_up). It may require an
arch-specific function, something like arch_vcpu_prepare_up.

Regards,
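[A minimal sketch of the hook being proposed above, for illustration only: the
name arch_vcpu_prepare_up and the idea of calling it from VCPUOP_up / the vPSCI
CPU_ON path, after _VPF_down is cleared and before vcpu_wake, are assumptions
taken from the message, not code from this series.]

/* Hypothetical arch hook: common code would call it while bringing a VCPU
 * online, after clearing _VPF_down and before vcpu_wake(), so the
 * "vcpu offline" check in vgic_vcpu_inject_irq() no longer drops the
 * injection. */
void arch_vcpu_prepare_up(struct vcpu *v)
{
    /* evtchn_upcall_pending is set by common code at vcpu creation, so
     * injecting the event channel interrupt once here would remove the
     * need to special case it in the GIC/VGIC code. */
    if ( vcpu_info(v, evtchn_upcall_pending) )
        vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq);
}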
On Thu, 2014-05-22 at 13:32 +0100, Stefano Stabellini wrote:
> @@ -626,19 +644,36 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>      ASSERT(spin_is_locked(&v->arch.vgic.lock));
>
>      lr = GICH[GICH_LR + i];
> -    if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
> +    irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
> +    p = irq_to_pending(v, irq);
> +    if ( lr & GICH_LR_ACTIVE )
>      {
> +        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
> +             test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status) )
> +        {
> +            if ( p->desc == NULL )
> +                GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
> +            else
> +                gdprintk(XENLOG_WARNING, "unable to inject hw irq=%d into d%dv%d: already active in LR%d\n",
> +                         irq, v->domain->domain_id, v->vcpu_id, i);

How common is this? Or should it never actually happen in reality?

> +        }
> +    } else if ( lr & GICH_LR_PENDING ) {
> +        int q __attribute__ ((unused)) = test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
> +#ifdef GIC_DEBUG
> +        if ( q )

        if ( test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status) )
#ifdef GIC_DEBUG

> +            gdprintk(XENLOG_DEBUG, "trying to inject irq=%d into d%dv%d, when it is already pending in LR%d\n",
> +                     irq, v->domain->domain_id, v->vcpu_id, i);

#else
            ; /* Nothing to do */
#endif

would be nicer than attribute unused IMHO.

As it's a XENLOG_DEBUG this wouldn't be all that bad by default BTW,
given that the already active case isn't even conditional.

Ian.
On Fri, 6 Jun 2014, Ian Campbell wrote:
> On Thu, 2014-05-22 at 13:32 +0100, Stefano Stabellini wrote:
> > @@ -626,19 +644,36 @@ static void gic_update_one_lr(struct vcpu *v, int i)
> >      ASSERT(spin_is_locked(&v->arch.vgic.lock));
> >
> >      lr = GICH[GICH_LR + i];
> > -    if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
> > +    irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
> > +    p = irq_to_pending(v, irq);
> > +    if ( lr & GICH_LR_ACTIVE )
> >      {
> > +        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
> > +             test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status) )
> > +        {
> > +            if ( p->desc == NULL )
> > +                GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
> > +            else
> > +                gdprintk(XENLOG_WARNING, "unable to inject hw irq=%d into d%dv%d: already active in LR%d\n",
> > +                         irq, v->domain->domain_id, v->vcpu_id, i);
>
> How common is this? Or should it never actually happen in reality?

It should never happen.

> > +        }
> > +    } else if ( lr & GICH_LR_PENDING ) {
> > +        int q __attribute__ ((unused)) = test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
> > +#ifdef GIC_DEBUG
> > +        if ( q )
>
>         if ( test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status) )
> #ifdef GIC_DEBUG
>
> > +            gdprintk(XENLOG_DEBUG, "trying to inject irq=%d into d%dv%d, when it is already pending in LR%d\n",
> > +                     irq, v->domain->domain_id, v->vcpu_id, i);
>
> #else
>             ; /* Nothing to do */
> #endif
>
> would be nicer than attribute unused IMHO.
>
> As it's a XENLOG_DEBUG this wouldn't be all that bad by default BTW,
> given that the already active case isn't even conditional.

This case can actually happen though.
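[For reference, Ian's proposed restructuring of the GICH_LR_PENDING branch
gathered into one readable piece; a sketch of the suggestion only, not
necessarily what was ultimately committed.]

    } else if ( lr & GICH_LR_PENDING ) {
        /* Clear QUEUED unconditionally: a second notification is not
         * needed while the interrupt is still pending in the LR. */
        if ( test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status) )
#ifdef GIC_DEBUG
            gdprintk(XENLOG_DEBUG, "trying to inject irq=%d into d%dv%d, when it is already pending in LR%d\n",
                     irq, v->domain->domain_id, v->vcpu_id, i);
#else
            ; /* Nothing to do */
#endif
    }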
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 89d7025..a6fe566 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -66,6 +66,8 @@ static DEFINE_PER_CPU(u8, gic_cpu_id);
 /* Maximum cpu interface per GIC */
 #define NR_GIC_CPU_IF 8
 
+#undef GIC_DEBUG
+
 static void gic_update_one_lr(struct vcpu *v, int i);
 
 static unsigned int gic_cpu_mask(const cpumask_t *cpumask)
@@ -592,6 +594,22 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
     spin_unlock_irqrestore(&gic.lock, flags);
 }
 
+void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
+{
+    struct pending_irq *n = irq_to_pending(v, virtual_irq);
+
+    if ( list_empty(&n->lr_queue) )
+    {
+        if ( v == current )
+            gic_update_one_lr(v, n->lr);
+    }
+#ifdef GIC_DEBUG
+    else
+        gdprintk(XENLOG_DEBUG, "trying to inject irq=%u into d%dv%d, when it is still lr_pending\n",
+                 virtual_irq, v->domain->domain_id, v->vcpu_id);
+#endif
+}
+
 void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
         unsigned int priority)
 {
@@ -626,19 +644,36 @@ static void gic_update_one_lr(struct vcpu *v, int i)
     ASSERT(spin_is_locked(&v->arch.vgic.lock));
 
     lr = GICH[GICH_LR + i];
-    if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
+    irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
+    p = irq_to_pending(v, irq);
+    if ( lr & GICH_LR_ACTIVE )
     {
+        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
+             test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status) )
+        {
+            if ( p->desc == NULL )
+                GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
+            else
+                gdprintk(XENLOG_WARNING, "unable to inject hw irq=%d into d%dv%d: already active in LR%d\n",
+                         irq, v->domain->domain_id, v->vcpu_id, i);
+        }
+    } else if ( lr & GICH_LR_PENDING ) {
+        int q __attribute__ ((unused)) = test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
+#ifdef GIC_DEBUG
+        if ( q )
+            gdprintk(XENLOG_DEBUG, "trying to inject irq=%d into d%dv%d, when it is already pending in LR%d\n",
+                     irq, v->domain->domain_id, v->vcpu_id, i);
+#endif
+    } else {
         GICH[GICH_LR + i] = 0;
         clear_bit(i, &this_cpu(lr_mask));
 
-        irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
-        p = irq_to_pending(v, irq);
         if ( p->desc != NULL )
             p->desc->status &= ~IRQ_INPROGRESS;
         clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
         p->lr = GIC_INVALID_LR;
-        if ( test_bit(GIC_IRQ_GUEST_QUEUED, &p->status) &&
-             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
+        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
+             test_bit(GIC_IRQ_GUEST_QUEUED, &p->status) )
             gic_raise_guest_irq(v, irq, p->priority);
         else
             list_del_init(&p->inflight);
@@ -704,9 +739,6 @@ int gic_events_need_delivery(void)
 
 void gic_inject(void)
 {
-    if ( vcpu_info(current, evtchn_upcall_pending) )
-        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
-
     gic_restore_pending_irqs(current);
 
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index b44937d..13c5c87 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -404,7 +404,11 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
         irq = i + (32 * n);
         p = irq_to_pending(v, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+        if ( irq == v->domain->arch.evtchn_irq &&
+             vcpu_info(current, evtchn_upcall_pending) &&
+             list_empty(&p->inflight) )
+            vgic_vcpu_inject_irq(v, irq);
+        else if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
             gic_raise_guest_irq(v, irq, p->priority);
         if ( p->desc != NULL )
         {
@@ -717,9 +721,8 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 
     if ( !list_empty(&n->inflight) )
     {
-        if ( (irq != current->domain->arch.evtchn_irq) ||
-             (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
-            set_bit(GIC_IRQ_GUEST_QUEUED, &n->status);
+        set_bit(GIC_IRQ_GUEST_QUEUED, &n->status);
+        gic_raise_inflight_irq(v, irq);
         goto out;
     }
 
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 15f94eb..0c4c583 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -182,6 +182,7 @@ extern int gic_events_need_delivery(void);
 extern void __cpuinit init_maintenance_interrupt(void);
 extern void gic_raise_guest_irq(struct vcpu *v, unsigned int irq,
         unsigned int priority);
+extern void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq);
 extern void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq);
 
 /* Accept an interrupt from the GIC and dispatch its handler */
Set GICH_LR_PENDING in the corresponding GICH_LR to inject a second irq
while the first one is still active.
If the first irq is already pending (not active), clear
GIC_IRQ_GUEST_QUEUED because the guest doesn't need a second
notification. If the irq has already been EOI'ed then just clear the
GICH_LR right away and move the interrupt to lr_pending so that it is
going to be reinjected by gic_restore_pending_irqs on return to guest.

If the target cpu is not the current cpu, then set GIC_IRQ_GUEST_QUEUED
and send an SGI. The target cpu is going to be interrupted and call
gic_clear_lrs, that is going to take the same actions.

Do not call vgic_vcpu_inject_irq from gic_inject if
evtchn_upcall_pending is set. If we remove that call, we don't need to
special case evtchn_irq in vgic_vcpu_inject_irq anymore.
We need to force the first injection of evtchn_irq (call
gic_vcpu_inject_irq) from vgic_enable_irqs because evtchn_upcall_pending
is already set by common code on vcpu creation.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v8:
- do not unify the inflight and non-inflight code paths in
  vgic_vcpu_inject_irq: it creates many buggy corner cases. Introduce
  gic_raise_inflight_irq instead;
- add warnings for cases of lost interrupts.

Changes in v7:
- remove warning printk "Changing priority of an inflight interrupt is
  not supported".

Changes in v3:
- do not use the PENDING and ACTIVE state for HW interrupts;
- unify the inflight and non-inflight code paths in
  vgic_vcpu_inject_irq.

---
 xen/arch/arm/gic.c        | 48 +++++++++++++++++++++++++++++++++++++--------
 xen/arch/arm/vgic.c       | 11 +++++++----
 xen/include/asm-arm/gic.h |  1 +
 3 files changed, 48 insertions(+), 12 deletions(-)