Message ID | 1406223192-26267-7-git-send-email-stefano.stabellini@eu.citrix.com
---|---
State | New
On Tue, 5 Aug 2014, Jan Beulich wrote:
> >>> On 04.08.14 at 22:29, <stefano.stabellini@eu.citrix.com> wrote:
> > On Mon, 4 Aug 2014, Jan Beulich wrote:
> >> No, you're right. So coming back to your suspicion above: Nothing
> >> prevents a HVM guest to also call VCPUOP_register_vcpu_info on
> >> the boot CPU (and in fact such an asymmetry would seem pretty
> >> odd); old-style HVM guests with PV drivers (built from
> >> unmodified_drivers/) don't call VCPUOP_register_vcpu_info at all.
> >> But in the end if what you say is true there would be a case where
> >> x86 is also broken, just that there doesn't appear to be a kernel
> >> utilizing this case. Since especially for HVM guests we shouldn't be
> >> making assumptions in the hypervisor on guest behavior, shouldn't
> >> your patch at least try to address that case then at once?
> >
> > The most logical thing to do would be to implement arch_evtchn_inject on
> > x86 as:
> >
> > void arch_evtchn_inject(struct vcpu *v)
> > {
> >     if ( has_hvm_container_vcpu(v) )
> >         hvm_assert_evtchn_irq(v);
> > }
> >
> > however it is very difficult to test because:
> > - the !xen_have_vector_callback code path doesn't work properly on a
> >   modern Linux kernel;
> > - going all the way back to 2.6.37, !xen_have_vector_callback works but
> >   then calling xen_vcpu_setup on vcpu0 doesn't work anyway. I don't know
> >   exactly why, but I don't think that the reason has anything to do with
> >   the problem we are discussing here. In fact, simply calling on vcpu0 a
> >   hypercall that only sets evtchn_upcall_pending and then calls
> >   arch_evtchn_inject works as expected.
> >
> > I know we are not just dealing with Linux guests, but given all this I
> > am not sure how useful it would actually be to provide the
> > implementation of arch_evtchn_inject on x86. What do you think?
>
> I think having the implementation you suggest above is in any
> event better than just an empty one. And with that I would also
> suggest that its declaration be moved to a common header.

OK
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 299ae7e..474eebd 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -263,20 +263,10 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
         v_target = d->arch.vgic.handler->get_target_vcpu(v, irq);
         p = irq_to_pending(v_target, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        /* We need to force the first injection of evtchn_irq because
-         * evtchn_upcall_pending is already set by common code on vcpu
-         * creation. */
-        if ( irq == v_target->domain->arch.evtchn_irq &&
-             vcpu_info(current, evtchn_upcall_pending) &&
-             list_empty(&p->inflight) )
-            vgic_vcpu_inject_irq(v_target, irq);
-        else {
-            unsigned long flags;
-            spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
-            if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-                gic_raise_guest_irq(v_target, irq, p->priority);
-            spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
-        }
+        spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
+        if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+            gic_raise_guest_irq(v_target, irq, p->priority);
+        spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
         if ( p->desc != NULL )
         {
             irq_set_affinity(p->desc, cpumask_of(v_target->processor));
@@ -432,6 +422,11 @@ void vgic_vcpu_inject_spi(struct domain *d, unsigned int irq)
     vgic_vcpu_inject_irq(v, irq);
 }
 
+void arch_evtchn_inject(struct vcpu *v)
+{
+    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/domain.c b/xen/common/domain.c
index cd64aea..05d0049 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1058,6 +1058,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
     vcpu_info(v, evtchn_upcall_pending) = 1;
     for ( i = 0; i < BITS_PER_EVTCHN_WORD(d); i++ )
         set_bit(i, &vcpu_info(v, evtchn_pending_sel));
+    arch_evtchn_inject(v);
 
     return 0;
 }
diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
index 5330dfe..8c77427 100644
--- a/xen/include/asm-arm/event.h
+++ b/xen/include/asm-arm/event.h
@@ -56,6 +56,8 @@ static inline int arch_virq_is_global(int virq)
     return 1;
 }
 
+void arch_evtchn_inject(struct vcpu *v);
+
 #endif
 /*
  * Local variables:
diff --git a/xen/include/asm-x86/event.h b/xen/include/asm-x86/event.h
index a82062e..3c1a9d1 100644
--- a/xen/include/asm-x86/event.h
+++ b/xen/include/asm-x86/event.h
@@ -44,4 +44,6 @@ static inline int arch_virq_is_global(uint32_t virq)
     return 1;
 }
 
+static inline void arch_evtchn_inject(struct vcpu *v) { }
+
 #endif
evtchn_upcall_pending is already set by common code at vcpu creation,
therefore on ARM we also need to call vgic_vcpu_inject_irq for it.
Currently we do that from vgic_enable_irqs as a workaround.

Do this properly by introducing an appropriate arch specific hook:
arch_evtchn_inject. arch_evtchn_inject is called by map_vcpu_info to
inject the evtchn irq into the guest. On ARM it is implemented by
calling vgic_vcpu_inject_irq; on x86 it is unneeded.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: JBeulich@suse.com

---
Changes in v9:
- use an arch hook.

Changes in v2:
- coding style fix;
- add comment;
- return an error if arch_set_info_guest is called without VGCF_online.
---
 xen/arch/arm/vgic.c         | 23 +++++++++--------------
 xen/common/domain.c         |  1 +
 xen/include/asm-arm/event.h |  2 ++
 xen/include/asm-x86/event.h |  2 ++
 4 files changed, 14 insertions(+), 14 deletions(-)