Message ID: 20180305160415.16760-18-andre.przywara@linaro.org
State: New
Series: New VGIC(-v2) implementation
Hi Andre,

On 05/03/18 16:03, Andre Przywara wrote:
> If we change something in a vCPU that affects its runnability or
> otherwise needs the vCPU's attention, we might need to tell the
> scheduler about it.
> We are using this in one place (vIRQ injection) at the moment, but will
> need this at more places soon.
> So let's factor out this functionality in the new kick_vcpu() function
> and make this available to the whole Xen arch code.
>
> Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
> ---
> Changelog RFC ... v1:
> - new patch
>
>  xen/arch/arm/smp.c        | 14 ++++++++++++++
>  xen/arch/arm/vgic.c       | 10 ++--------
>  xen/include/asm-arm/smp.h |  3 +++
>  3 files changed, 19 insertions(+), 8 deletions(-)
>
> diff --git a/xen/arch/arm/smp.c b/xen/arch/arm/smp.c
> index 62f57f0ba2..381a4786a2 100644
> --- a/xen/arch/arm/smp.c
> +++ b/xen/arch/arm/smp.c
> @@ -4,6 +4,8 @@
>  #include <asm/page.h>
>  #include <asm/gic.h>
>  #include <asm/flushtlb.h>
> +#include <xen/perfc.h>
> +#include <xen/sched.h>
>
>  void flush_tlb_mask(const cpumask_t *mask)
>  {
> @@ -32,6 +34,18 @@ void smp_send_call_function_mask(const cpumask_t *mask)
>      }
>  }
>
> +void kick_vcpu(struct vcpu *vcpu)

Can we name it vcpu_kick? This is to match the x86 side, and it seems we already have a prototype in events.h.

Also, IMHO this belongs in domain.c, as it deals with vCPUs; smp.c is more for dealing with pCPUs.
> +{
> +    bool running = vcpu->is_running;
> +
> +    vcpu_unblock(vcpu);
> +    if ( running && vcpu != current )
> +    {
> +        perfc_incr(vgic_cross_cpu_intr_inject);
> +        smp_send_event_check_mask(cpumask_of(vcpu->processor));
> +    }
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 3c77d5fef6..e44de9ea95 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -530,7 +530,6 @@ int vgic_inject_irq(struct domain *d, struct vcpu *v, unsigned int virq,
>      uint8_t priority;
>      struct pending_irq *iter, *n;
>      unsigned long flags;
> -    bool running;
>
>      /*
>       * For edge triggered interrupts we always ignore a "falling edge".
> @@ -590,14 +589,9 @@ int vgic_inject_irq(struct domain *d, struct vcpu *v, unsigned int virq,
>          list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs);
>  out:
>      spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +
>      /* we have a new higher priority irq, inject it into the guest */
> -    running = v->is_running;
> -    vcpu_unblock(v);
> -    if ( running && v != current )
> -    {
> -        perfc_incr(vgic_cross_cpu_intr_inject);
> -        smp_send_event_check_mask(cpumask_of(v->processor));
> -    }
> +    kick_vcpu(v);
>
>      return 0;
> }
> diff --git a/xen/include/asm-arm/smp.h b/xen/include/asm-arm/smp.h
> index 3c122681d7..7c8ef75789 100644
> --- a/xen/include/asm-arm/smp.h
> +++ b/xen/include/asm-arm/smp.h
> @@ -28,6 +28,9 @@ extern void init_secondary(void);
>  extern void smp_init_cpus(void);
>  extern void smp_clear_cpu_maps (void);
>  extern int smp_get_max_cpus (void);
> +
> +void kick_vcpu(struct vcpu *vcpu);
> +
>  #endif
>
>  /*

Cheers,