
[Xen-devel] ARM Generic Timer interrupt

Message ID alpine.DEB.2.02.1405271203470.4779@kaball.uk.xensource.com
State: New

Commit Message

Stefano Stabellini May 27, 2014, 12:11 p.m. UTC
On Mon, 26 May 2014, Oleksandr Tyshchenko wrote:
> Hello, all.
> 
> I am trying to run QNX as a domU under Xen (4.4.0) on an OMAP5 platform (ARM32).
> For this purpose I have made some changes to the original TI OMAP5 BSP.
> I have also created a QNX IFS loader for the Xen domain builder.
> 
> Currently, dom0 (the Linux kernel) loads QNX as a domU. QNX boots without crashing,
> and I have a console (I left only one hw block in QNX - the UART,
> as I need it for debug output while HVC is not implemented).
> 
> While bringing it up I encountered the following problem.
> 1. QNX (our BSP) uses the ARM Generic Timer as its system timer:
> QNX doesn't mask/unmask the timer interrupt. Of course QNX doesn't know that the interrupt mask may be set by someone else
> and that it has to be cleared for the timer interrupt to occur again.
> 
> 2. Xen handles the firing of the ARM Generic Timer:
> Before injecting the IRQ into the guest, Xen sets the interrupt mask for the virtual timer.
> 
> .../xen/arch/arm/time.c
> 
> static void vtimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
> {
>     current->arch.virt_timer.ctl = READ_SYSREG32(CNTV_CTL_EL0);
>     WRITE_SYSREG32(current->arch.virt_timer.ctl | CNTx_CTL_MASK, CNTV_CTL_EL0);
>     vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq, 1);
> }
> 
> As a result, when running under Xen we don't get timer interrupts in QNX.
> I have changed the interrupt handler in QNX to unmask the timer interrupt before returning.

Hi Oleksandr,
thanks for reporting this issue and for the good analysis of the
problem, as always.


> But I have a question:
> Should the hypervisor mask the virtual timer IRQ on its own?
> It is a guest's resource and the guest itself should decide what to do.
> For example, I see that the Linux kernel (3.8) sets and clears the timer interrupt mask by itself.

In principle I agree with you that the vtimer is a guest resource.
However in practice if we don't mask the irq we can easily get into an
interrupt storm situation: if the guest doesn't handle the interrupt
immediately we could keep receiving the vtimer irq in the hypervisor and
busy loop around it. We really need to mask it somehow.

So the real question is: how are we going to mask the vtimer irq?
We could:
1) write CNTx_CTL_MASK to the ctl register, masking it at the timer
level
2) write to GICD_ICENABLER, deactivating it at the GICD level

Given that 1) didn't sound right to me, I tried 2) first but I had
issues with the ARM emulator at the time.  And as further
confirmation that deactivating it is not how ARM intended the vtimer
to be used, Linux and KVM do 1) too.

But I don't like the idea of having to modify the vtimer handler in QNX,
so I have hacked together this patch, that would do 2) on top of my
maintenance interrupt series. Unfortunately it needs to ask for a
maintenance interrupt for the vtimer interrupt, so I can't say I am
completely convinced that it is the right thing to do.

What do people think about this?

Comments

Julien Grall May 27, 2014, 3 p.m. UTC | #1
On 05/27/2014 01:11 PM, Stefano Stabellini wrote:
> Given that 1) didn't sound right to me, I tried 2) first but I had
> issues with the ARM emulator at the time.  And as further
> confirmation that deactivating it is not how ARM intended the vtimer
> to be used, Linux and KVM do 1) too.

I suspect you had issues on the emulator because the VCPU can EOI the timer
IRQ on another CPU.

If so, you will disable the vtimer interrupt forever on this CPU.

> But I don't like the idea of having to modify the vtimer handler in QNX,
> so I have hacked together this patch, that would do 2) on top of my
> maintenance interrupt series. Unfortunately it needs to ask for a
> maintenance interrupt for the vtimer interrupt, so I can't say I am
> completely convinced that it is the right thing to do.
> 
> What do people think about this?

Solution 2) seems very hackish. Moreover, this IRQ will be fired very
often.

It may be better either to let QNX use the physical timer (AFAIK it
works out of the box), or to modify it to support the virtual timer.

Regards,
Stefano Stabellini May 27, 2014, 4:05 p.m. UTC | #2
On Tue, 27 May 2014, Julien Grall wrote:
> On 05/27/2014 01:11 PM, Stefano Stabellini wrote:
> > Given that 1) didn't sound right to me, I tried 2) first but I had
> > issues with the ARM emulator at the time.  And as further
> > confirmation that deactivating it is not how ARM intended the vtimer
> > to be used, Linux and KVM do 1) too.
> 
> I suspect you had issues on the emulator because the VCPU can EOI the timer
> IRQ on another CPU.
>
> If so, you will disable the vtimer interrupt forever on this CPU.

I don't think so (unless the vcpu is migrated).
Keep in mind that the vtimer is a PPI.


> > But I don't like the idea of having to modify the vtimer handler in QNX,
> > so I have hacked together this patch, that would do 2) on top of my
> > maintenance interrupt series. Unfortunately it needs to ask for a
> > maintenance interrupt for the vtimer interrupt, so I can't say I am
> > completely convinced that it is the right thing to do.
> > 
> > What do people think about this?
> 
> Solution 2) seems very hackish. Moreover, this IRQ will be fired very
> often.
>
> It may be better either to let QNX use the physical timer (AFAIK it
> works out of the box), or to modify it to support the virtual timer.

I agree.
Julien Grall May 27, 2014, 4:12 p.m. UTC | #3
On 05/27/2014 05:05 PM, Stefano Stabellini wrote:
> On Tue, 27 May 2014, Julien Grall wrote:
>> On 05/27/2014 01:11 PM, Stefano Stabellini wrote:
>>> Given that 1) didn't sound right to me, I tried 2) first but I had
>>> issues with the ARM emulator at the time.  And as further
>>> confirmation that deactivating it is not how ARM intended the vtimer
>>> to be used, Linux and KVM do 1) too.
>>
>> I suspect you had issues on the emulator because the VCPU can EOI the timer
>> IRQ on another CPU.
>>
>> If so, you will disable the vtimer interrupt forever on this CPU.
> 
> I don't think so (unless the vcpu is migrated).
> Keep in mind that the vtimer is a PPI.

Sorry, I meant during VCPU migration.

Regards,
Oleksandr Tyshchenko May 27, 2014, 4:53 p.m. UTC | #4
OK. Thank you for the clarifications. I would prefer to keep the modified
interrupt handler for the virtual timer in QNX.
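
For reference, here is a minimal sketch of what such a guest-side workaround
could look like on ARM32, assuming bare-metal style CP15 accessors for
CNTV_CTL; the names below are made up for illustration and do not match the
actual QNX BSP code:

#include <stdint.h>

#define CNTV_CTL_ENABLE  (1u << 0)   /* timer enabled */
#define CNTV_CTL_IMASK   (1u << 1)   /* interrupt masked */
#define CNTV_CTL_ISTATUS (1u << 2)   /* interrupt condition met (read-only) */

static inline uint32_t read_cntv_ctl(void)
{
    uint32_t val;
    asm volatile("mrc p15, 0, %0, c14, c3, 1" : "=r" (val));
    return val;
}

static inline void write_cntv_ctl(uint32_t val)
{
    asm volatile("mcr p15, 0, %0, c14, c3, 1" :: "r" (val));
    asm volatile("isb" ::: "memory");
}

/* Hypothetical guest virtual timer IRQ handler. */
void vtimer_irq_handler(void)
{
    uint32_t ctl = read_cntv_ctl();

    /* ... run the OS tick work and program the next deadline via
     * CNTV_TVAL/CNTV_CVAL here ... */

    /* Clear IMASK (which the hypervisor set when injecting the IRQ)
     * so that the next expiry raises the interrupt again. */
    write_cntv_ctl((ctl | CNTV_CTL_ENABLE) & ~CNTV_CTL_IMASK);
}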


On Tue, May 27, 2014 at 7:12 PM, Julien Grall <julien.grall@linaro.org> wrote:

> On 05/27/2014 05:05 PM, Stefano Stabellini wrote:
> > On Tue, 27 May 2014, Julien Grall wrote:
> >> On 05/27/2014 01:11 PM, Stefano Stabellini wrote:
> >>> Given that 1) didn't sound right to me, I tried 2) first but I had
> >>> issues with the ARM emulator at the time.  And as further
> >>> confirmation that deactivating it is not how ARM intended the
> >>> vtimer to be used, Linux and KVM do 1) too.
> >>
> >> I suspect you had issues on the emulator because the VCPU can EOI the timer
> >> IRQ on another CPU.
> >>
> >> If so, you will disable the vtimer interrupt forever on this CPU.
> >
> > I don't think so (unless the vcpu is migrated).
> > Keep in mind that the vtimer is a PPI.
>
> Sorry, I meant during VCPU migration.
>
> Regards,
>
> --
> Julien Grall
>
Ian Campbell May 28, 2014, 10:10 a.m. UTC | #5
On Tue, 2014-05-27 at 13:11 +0100, Stefano Stabellini wrote:
> > But I have a question:
> > Should the hypervisor mask the virtual timer IRQ on its own?
> > It is a guest's resource and the guest itself should decide what to do.
> > For example, I see that the Linux kernel (3.8) sets and clears the timer interrupt mask by itself.
> 
> In principle I agree with you that the vtimer is a guest resource.
> However in practice if we don't mask the irq we can easily get into an
> interrupt storm situation: if the guest doesn't handle the interrupt
> immediately we could keep receiving the vtimer irq in the hypervisor and
> busy loop around it.

Do we not do a priority drop on the interrupt when we receive it, so we
won't get any more interrupts from the timer until it acks the
interrupt?

Just to be clear: The behaviour of the physical timer is not that it is
automatically masked when it fires?

Ian.
Julien Grall May 28, 2014, 11:32 a.m. UTC | #6
On 05/28/2014 11:10 AM, Ian Campbell wrote:
> On Tue, 2014-05-27 at 13:11 +0100, Stefano Stabellini wrote:
>>> But I have a question:
>>> Should the hypervisor mask the virtual timer IRQ on its own?
>>> It is a guest's resource and the guest itself should decide what to do.
>>> For example, I see that the Linux kernel (3.8) sets and clears the timer interrupt mask by itself.
>>
>> In principle I agree with you that the vtimer is a guest resource.
>> However in practice if we don't mask the irq we can easily get into an
>> interrupt storm situation: if the guest doesn't handle the interrupt
>> immediately we could keep receiving the vtimer irq in the hypervisor and
>> busy loop around it.
> 
> Do we not do a priority drop on the interrupt when we receive it, so we
> won't get any more interrupts from the timer until it acks the
> interrupt?

The timer interrupt is acked directly by Xen. We can't wait for the guest
VCPU to EOI the interrupt, because the guest may have moved to another
pCPU by then.

If we did, we would lose the timer interrupt on this pCPU forever.

Regards,
Ian Campbell May 28, 2014, 11:34 a.m. UTC | #7
On Wed, 2014-05-28 at 12:32 +0100, Julien Grall wrote:
> On 05/28/2014 11:10 AM, Ian Campbell wrote:
> > On Tue, 2014-05-27 at 13:11 +0100, Stefano Stabellini wrote:
> >>> But I have a question:
> >>> Should the hypervisor mask the virtual timer IRQ on its own?
> >>> It is a guest's resource and the guest itself should decide what to do.
> >>> For example, I see that the Linux kernel (3.8) sets and clears the timer interrupt mask by itself.
> >>
> >> In principle I agree with you that the vtimer is a guest resource.
> >> However in practice if we don't mask the irq we can easily get into an
> >> interrupt storm situation: if the guest doesn't handle the interrupt
> >> immediately we could keep receiving the vtimer irq in the hypervisor and
> >> busy loop around it.
> > 
> > Do we not do a priority drop on the interrupt when we receive it, so we
> > won't get any more interrupts from the timer until it acks the
> > interrupt?
> 
> The timer interrupt is acked directly by Xen. We can't wait for the guest
> VCPU to EOI the interrupt, because the guest may have moved to another
> pCPU by then.

Surely we can arrange to handle that though. The way we currently handle
the timer stuff always seemed suboptimal to me.

> 
> If we did, we would lose the timer interrupt on this pCPU forever.
> 
> Regards,
>
Julien Grall May 28, 2014, 11:37 a.m. UTC | #8
On 05/28/2014 12:34 PM, Ian Campbell wrote:
> On Wed, 2014-05-28 at 12:32 +0100, Julien Grall wrote:
>> On 05/28/2014 11:10 AM, Ian Campbell wrote:
>>> On Tue, 2014-05-27 at 13:11 +0100, Stefano Stabellini wrote:
>>>>> But I have a question:
>>>>> Should the hypervisor mask the virtual timer IRQ on its own?
>>>>> It is a guest's resource and the guest itself should decide what to do.
>>>>> For example, I see that the Linux kernel (3.8) sets and clears the timer interrupt mask by itself.
>>>>
>>>> In principle I agree with you that the vtimer is a guest resource.
>>>> However in practice if we don't mask the irq we can easily get into an
>>>> interrupt storm situation: if the guest doesn't handle the interrupt
>>>> immediately we could keep receiving the vtimer irq in the hypervisor and
>>>> busy loop around it.
>>>
>>> Do we not do a priority drop on the interrupt when we receive it, so we
>>> won't get any more interrupts from the timer until it acks the
>>> interrupt?
>>
>> The timer interrupt is acked directly by Xen. We can't wait for the guest
>> VCPU to EOI the interrupt, because the guest may have moved to another
>> pCPU by then.
> 
> Surely we can arrange to handle that though. The way we currently handle
> the timer stuff always seemed suboptimal to me.

If so, there would be no need to request a maintenance interrupt (see
Stefano's patch); we would handle the EOI during the context switch instead.

This will be slower than the current solution (which is also used by
KVM). I think it's fine to make a special case for the virt timer in
the guest OS.

Regards,
Stefano Stabellini May 28, 2014, 11:51 a.m. UTC | #9
On Wed, 28 May 2014, Ian Campbell wrote:
> On Wed, 2014-05-28 at 12:32 +0100, Julien Grall wrote:
> > On 05/28/2014 11:10 AM, Ian Campbell wrote:
> > > On Tue, 2014-05-27 at 13:11 +0100, Stefano Stabellini wrote:
> > >>> But I have a question:
> > >>> Should the hypervisor mask the virtual timer IRQ on its own?
> > >>> It is a guest's resource and the guest itself should decide what to do.
> > >>> For example, I see that the Linux kernel (3.8) sets and clears the timer interrupt mask by itself.
> > >>
> > >> In principle I agree with you that the vtimer is a guest resource.
> > >> However in practice if we don't mask the irq we can easily get into an
> > >> interrupt storm situation: if the guest doesn't handle the interrupt
> > >> immediately we could keep receiving the vtimer irq in the hypervisor and
> > >> busy loop around it.
> > > 
> > > Do we not do a priority drop on the interrupt when we receive it, so we
> > > won't get any more interrupts from the timer until it acks the
> > > interrupt?
> > 
> > The timer interrupt is acked directly by Xen. We can't wait for the guest
> > VCPU to EOI the interrupt, because the guest may have moved to another
> > pCPU by then.
> 
> Surely we can arrange to handle that though. The way we currently handle
> the timer stuff always seemed suboptimal to me.

Aside from vcpu migration that we can obviously handle correctly, in
order to avoid the current "hack" we would need to introduce 2 vtimer
special cases in vgic.c and gic.c. Also even though I don't have any
numbers to prove it, I suspect that activating/deactivating the vtimer
irq at the GICD level all the time might be slower than just masking it
at the vtimer level.

So the tradeoff is: worse, slower hypervisor code but correctness of the
interface, or faster, leaner hypervisor code but a slightly worse guest
interface?
I don't know what the right answer is, but I am leaning toward the
second option.
Ian Campbell May 28, 2014, 11:54 a.m. UTC | #10
On Wed, 2014-05-28 at 12:51 +0100, Stefano Stabellini wrote:
> On Wed, 28 May 2014, Ian Campbell wrote:
> > On Wed, 2014-05-28 at 12:32 +0100, Julien Grall wrote:
> > > On 05/28/2014 11:10 AM, Ian Campbell wrote:
> > > > On Tue, 2014-05-27 at 13:11 +0100, Stefano Stabellini wrote:
> > > >>> But I have a question:
> > > >>> Should the hypervisor mask the virtual timer IRQ on its own?
> > > >>> It is a guest's resource and the guest itself should decide what to do.
> > > >>> For example, I see that the Linux kernel (3.8) sets and clears the timer interrupt mask by itself.
> > > >>
> > > >> In principle I agree with you that the vtimer is a guest resource.
> > > >> However in practice if we don't mask the irq we can easily get into an
> > > >> interrupt storm situation: if the guest doesn't handle the interrupt
> > > >> immediately we could keep receiving the vtimer irq in the hypervisor and
> > > >> busy loop around it.
> > > > 
> > > > Do we not do a priority drop on the interrupt when we receive it, so we
> > > > won't get any more interrupts from the timer until it acks the
> > > > interrupt?
> > > 
> > > The timer interrupt is acked directly by Xen. We can't wait for the guest
> > > VCPU to EOI the interrupt, because the guest may have moved to another
> > > pCPU by then.
> > 
> > Surely we can arrange to handle that though. The way we currently handle
> > the timer stuff always seemed suboptimal to me.
> 
> Aside from vcpu migration that we can obviously handle correctly, in
> order to avoid the current "hack" we would need to introduce 2 vtimer
> special cases in vgic.c and gic.c. Also even though I don't have any
> numbers to prove it, I suspect that activating/deactivating the vtimer
> irq at the GICD level all the time might be slower than just masking it
> at the vtimer level.
> 
> So the tradeoff is: worse, slower hypervisor code but correctness of the
> interface, or faster, leaner hypervisor code but a slightly worse guest
> interface?
> I don't know what the right answer is, but I am leaning toward the
> second option.

Adding requirements to the guest's use of the vtimer to make our lives
easier certainly seems to make sense, but it would be best if this were
backed up by some standard from ARM or someone so that guests do it
consistently.

I was mostly objecting to the proposed patch which seemed to be choosing
the worst of both options above...

Ian.
Stefano Stabellini May 28, 2014, 12:11 p.m. UTC | #11
On Wed, 28 May 2014, Ian Campbell wrote:
> On Wed, 2014-05-28 at 12:51 +0100, Stefano Stabellini wrote:
> > On Wed, 28 May 2014, Ian Campbell wrote:
> > > On Wed, 2014-05-28 at 12:32 +0100, Julien Grall wrote:
> > > > On 05/28/2014 11:10 AM, Ian Campbell wrote:
> > > > > On Tue, 2014-05-27 at 13:11 +0100, Stefano Stabellini wrote:
> > > > >>> But I have a question:
> > > > >>> Should the hypervisor mask the virtual timer IRQ on its own?
> > > > >>> It is a guest's resource and the guest itself should decide what to do.
> > > > >>> For example, I see that the Linux kernel (3.8) sets and clears the timer interrupt mask by itself.
> > > > >>
> > > > >> In principle I agree with you that the vtimer is a guest resource.
> > > > >> However in practice if we don't mask the irq we can easily get into an
> > > > >> interrupt storm situation: if the guest doesn't handle the interrupt
> > > > >> immediately we could keep receiving the vtimer irq in the hypervisor and
> > > > >> busy loop around it.
> > > > > 
> > > > > Do we not do a priority drop on the interrupt when we receive it, so we
> > > > > won't get any more interrupts from the timer until it acks the
> > > > > interrupt?
> > > > 
> > > > The timer interrupt is acked directly by Xen. We can't wait for the guest
> > > > VCPU to EOI the interrupt, because the guest may have moved to another
> > > > pCPU by then.
> > > 
> > > Surely we can arrange to handle that though. The way we currently handle
> > > the timer stuff always seemed suboptimal to me.
> > 
> > Aside from vcpu migration that we can obviously handle correctly, in
> > order to avoid the current "hack" we would need to introduce 2 vtimer
> > special cases in vgic.c and gic.c. Also even though I don't have any
> > numbers to prove it, I suspect that activating/deactivating the vtimer
> > irq at the GICD level all the time might be slower than just masking it
> > at the vtimer level.
> > 
> > So the tradeoff is: worse, slower hypervisor code but correctness of the
> > interface, or faster, leaner hypervisor code but a slightly worse guest
> > interface?
> > I don't know what the right answer is, but I am leaning toward the
> > second option.
> 
> Adding requirements to the guest's use of the vtimer to make our lives
> easier certainly seems to make sense, but it would be best if this were
> backed up by some standard from ARM or someone so that guests do it
> consistently.
> 
> I was mostly objecting to the proposed patch which seemed to be choosing
> the worst of both options above...

I see.

The patch uses GICD_ICENABLER to disable the timer because after the
desc->handler->end call, the vtimer can fire again unless properly
masked/deactivated. At the same time we need to call end otherwise we
won't be receiving other interrupts in Xen.

I guess an alternative implementation would introduce a special case in
do_IRQ, avoid calling desc->handler->end for the vtimer and only do
GICC[GICC_EOIR] = vtimer_irq to lower the priority, similarly to
IRQ_GUEST. We would then EOI the irq after receiving the maintenance
interrupt, but not in the maintenance interrupt handler, because we need
to EOI the maintenance interrupt first.   In addition it also needs
another special case in gic_save_state to EOI the vtimer interrupt if
the guest hasn't yet done so on vcpu save. And yet another special case
in gic_restore_state to deactivate the interrupt using GICD_ICENABLER,
because since the guest might not have handled it yet, it could fire
again continuously and we are not protected by the missing EOI anymore.
It doesn't sound better overall.
Ian Campbell May 28, 2014, 12:20 p.m. UTC | #12
On Wed, 2014-05-28 at 13:11 +0100, Stefano Stabellini wrote:

> The patch uses GICD_ICENABLER to disable the timer because after the
> desc->handler->end call, the vtimer can fire again unless properly
> masked/deactivated. At the same time we need to call end otherwise we
> won't be receiving other interrupts in Xen.
> 
> I guess an alternative implementation would introduce a special case in
> do_IRQ, avoid calling desc->handler->end for the vtimer and only do
> GICC[GICC_EOIR] = vtimer_irq to lower the priority, similarly to
> IRQ_GUEST.

I was thinking that while the guest was running we would treat the
vtimer IRQ exactly like IRQ_GUEST.

Then when we deschedule the vcpu if it has an unacked vtimer IRQ then we
would ACK it and record that we have done so. When the vcpu is
rescheduled we would then need to mask the vtimer IRQ (in the gicd) and
setup the LRs with a virtual interrupt such that we get a maintenance
interrupt when the guest does EOI the interrupt. At that point we unmask
and resume treating timers like IRQ_GUEST for the remainder of the
timeslice.

The benefit is that much of the time we can treat the vtimer like any
other h/w IRQ and avoid traps altogether.
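
Very roughly, the scheme could look like the following pseudocode; all
helper names here are invented for illustration and are not real Xen
functions:

/* On deschedule: if the vtimer IRQ fired but the guest has not EOIed
 * it yet, EOI/deactivate it ourselves and remember that we did so,
 * otherwise this pCPU would never see the vtimer IRQ again. */
void on_vcpu_deschedule(struct vcpu *v)
{
    if ( vtimer_irq_active(v) )
    {
        gic_eoi_vtimer_irq();
        v->vtimer_eoi_pending = true;
    }
}

/* On reschedule: the IRQ is no longer active at the GIC, so it could
 * fire again immediately; mask it at the GICD level and request a
 * maintenance interrupt for when the guest EOIs its virtual copy. */
void on_vcpu_reschedule(struct vcpu *v)
{
    if ( v->vtimer_eoi_pending )
    {
        gicd_disable_vtimer_ppi();
        set_lr_with_maintenance_irq(v, vtimer_virq(v));
    }
}

/* Maintenance interrupt: the guest has EOIed the vtimer interrupt, so
 * unmask it and go back to treating it like any other hardware IRQ. */
void on_maintenance_irq(struct vcpu *v)
{
    if ( v->vtimer_eoi_pending )
    {
        v->vtimer_eoi_pending = false;
        gicd_enable_vtimer_ppi();
    }
}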

>  We would then EOI the irq after receiving the maintenance
> interrupt, but not in the maintenance interrupt handler, because we need
> to EOI the maintenance interrupt first.   In addition it also needs
> another special case in gic_save_state to EOI the vtimer interrupt if
> the guest hasn't yet done so on vcpu save. And yet another special case
> in gic_restore_state to deactivate the interrupt using GICD_ICENABLER,
> because since the guest might not have handled it yet, it could fire
> again continuously and we are not protected by the missing EOI anymore.
> It doesn't sound better overall.

Better than what?

Obviously none of this is better than just mandating that guests expect
the vtimer to be masked when it fires, like we do today...

Ian.
Stefano Stabellini May 28, 2014, 12:33 p.m. UTC | #13
On Wed, 28 May 2014, Ian Campbell wrote:
> On Wed, 2014-05-28 at 13:11 +0100, Stefano Stabellini wrote:
> 
> > The patch uses GICD_ICENABLER to disable the timer because after the
> > desc->handler->end call, the vtimer can fire again unless properly
> > masked/deactivated. At the same time we need to call end otherwise we
> > won't be receiving other interrupts in Xen.
> > 
> > I guess an alternative implementation would introduce a special case in
> > do_IRQ, avoid calling desc->handler->end for the vtimer and only do
> > GICC[GICC_EOIR] = vtimer_irq to lower the priority, similarly to
> > IRQ_GUEST.
> 
> I was thinking that while the guest was running we would treat the
> vtimer IRQ exactly like IRQ_GUEST.
> 
> Then when we deschedule the vcpu if it has an unacked vtimer IRQ then we
> would ACK it and record that we have done so. When the vcpu is the
> rescheduled we would then need to mask the vtimer IRQ (in the gicd) and
> setup the LRs with a virtual interrupt such that we get a maintenance
> interrupt when the guest does EOI the interrupt. At that point we unmask
> and resume treating timers like IRQ_GUEST for the remainder of the
> timeslice.
> 
> The benefit is that much of the time we can treat the vtimer like any
> other h/w IRQ and avoid traps altogether.

yeah, this is exactly what I was trying to describe below


> >  We would then EOI the irq after receiving the maintenance
> > interrupt, but not in the maintenance interrupt handler, because we need
> > to EOI the maintenance interrupt first.   In addition it also needs
> > another special case in gic_save_state to EOI the vtimer interrupt if
> > the guest hasn't yet done so on vcpu save. And yet another special case
> > in gic_restore_state to deactivate the interrupt using GICD_ICENABLER,
> > because since the guest might not have handled it yet, it could fire
> > again continuously and we are not protected by the missing EOI anymore.
> > It doesn't sound better overall.
> 
> Better than what?
> 
> Obviously none of this is better than just mandating that guests expect
> the vtimer to be masked when it fires, like we do today...

True.
If we really had to go down this route I would still prefer my hacked-up
patch because I think it is easier to understand (at least to me).
But in any case I would advise to keep things as they are.
Ian Campbell May 28, 2014, 12:36 p.m. UTC | #14
On Wed, 2014-05-28 at 13:33 +0100, Stefano Stabellini wrote:
> But in any case I would advise to keep things as they are.

Ack. I wonder if there is some forum where we should raise the issue of
specifying the expected hypervisor behaviour wrt vtimer mask? It's one
thing to deviate from the behaviour of the ptimer, it's another for all
hypervisors to do it differently...

Ian.
Stefano Stabellini May 28, 2014, 1:21 p.m. UTC | #15
On Wed, 28 May 2014, Ian Campbell wrote:
> On Wed, 2014-05-28 at 13:33 +0100, Stefano Stabellini wrote:
> > But in any case I would advise to keep things as they are.
> 
> Ack. I wonder if there is some forum where we should raise the issue of
> specifying the expected hypervisor behaviour wrt vtimer mask? It's one
> thing to deviate from the behaviour of the ptimer, it's another for all
> hypervisors to do it differently...

The generic timer spec is not clear enough about the mask bit in the
control register to be able to say that the behaviour we are using is
actually not supported I think.

I can open a discussion within Linaro about this.
Oleksandr Tyshchenko May 29, 2014, 8:38 a.m. UTC | #16
Very interesting. For now I will leave it as is, but I will keep Stefano's
patch in mind. I will follow up on this topic. Thank you.


On Wed, May 28, 2014 at 4:21 PM, Stefano Stabellini <
stefano.stabellini@eu.citrix.com> wrote:

> On Wed, 28 May 2014, Ian Campbell wrote:
> > On Wed, 2014-05-28 at 13:33 +0100, Stefano Stabellini wrote:
> > > But in any case I would advise to keep things as they are.
> >
> > Ack. I wonder if there is some forum where we should raise the issue of
> > specifying the expected hypervisor behaviour wrt vtimer mask? It's one
> > thing to deviate from the behaviour of the ptimer, it's another for all
> > hypervisors to do it differently...
>
> The generic timer spec is not clear enough about the mask bit in the
> control register to be able to say that the behaviour we are using is
> actually not supported I think.
>
> I can open a discussion within Linaro about this.
>

Patch

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 08ae23b..36e2ec0 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -559,6 +559,8 @@  static inline void gic_set_lr(int lr, struct pending_irq *p,
         ((p->irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
     if ( p->desc != NULL )
         lr_val |= GICH_LR_HW | (p->desc->irq << GICH_LR_PHYSICAL_SHIFT);
+    else if ( p->irq == timer_get_irq(TIMER_VIRT_PPI) )
+        lr_val |= GICH_LR_MAINTENANCE_IRQ;
 
     GICH[GICH_LR + lr] = lr_val;
 
@@ -636,6 +638,16 @@  void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
     gic_add_to_lr_pending(v, irq_to_pending(v, virtual_irq));
 }
 
+void gic_disable_vtimer(void)
+{
+        GICD[GICD_ICENABLER] = (1 << timer_get_irq(TIMER_VIRT_PPI));
+}
+
+void gic_enable_vtimer(void)
+{
+        GICD[GICD_ISENABLER] = (1 << timer_get_irq(TIMER_VIRT_PPI));
+}
+
 static void gic_update_one_lr(struct vcpu *v, int i)
 {
     struct pending_irq *p;
@@ -676,6 +688,8 @@  static void gic_update_one_lr(struct vcpu *v, int i)
         clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
         clear_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
         p->lr = GIC_INVALID_LR;
+        if ( timer_get_irq(TIMER_VIRT_PPI) == irq )
+            gic_enable_vtimer();
         if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
              test_bit(GIC_IRQ_GUEST_QUEUED, &p->status) )
             gic_raise_guest_irq(v, irq, p->priority);
diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index 6e8d1f3..f9a6e7e 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -214,7 +214,6 @@  static void timer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 static void vtimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
     current->arch.virt_timer.ctl = READ_SYSREG32(CNTV_CTL_EL0);
-    WRITE_SYSREG32(current->arch.virt_timer.ctl | CNTx_CTL_MASK, CNTV_CTL_EL0);
     vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq);
 }
 
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 0f0ba1d..0624aa9 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -757,6 +757,11 @@  void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 
     if ( !list_empty(&n->inflight) )
     {
+        if ( timer_get_irq(TIMER_VIRT_PPI) == irq )
+        {
+            gic_disable_vtimer();
+            goto out;
+        }
         set_bit(GIC_IRQ_GUEST_QUEUED, &n->status);
         gic_raise_inflight_irq(v, irq);
         goto out;
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index bf6fb1e..08f3c08 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -227,6 +227,9 @@  int gic_irq_xlate(const u32 *intspec, unsigned int intsize,
                   unsigned int *out_hwirq, unsigned int *out_type);
 void gic_clear_lrs(struct vcpu *v);
 
+extern void gic_disable_vtimer(void);
+extern void gic_enable_vtimer(void);
+
 #endif /* __ASSEMBLY__ */
 #endif