From patchwork Tue Jun 11 15:16:59 2013
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 17799
From: Andre Przywara <andre.przywara@linaro.org>
To: christoffer.dall@linaro.org, marc.zyngier@arm.com
Cc: peter.maydell@linaro.org, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, patches@linaro.org, Andre Przywara
Subject: [PATCH v2] ARM/KVM: save and restore generic timer registers
Date: Tue, 11 Jun 2013 17:16:59 +0200
Message-Id: <1370963819-26165-1-git-send-email-andre.przywara@linaro.org>

For migration to work we need to save (and later restore) the state of
each core's virtual generic timer. Since this is per VCPU, we can use
the [gs]et_one_reg ioctl and export the three needed registers
(control, counter, compare value). Though they live in cp15 space, we
don't use the existing list, since they need special accessor functions
and also the arch timer is optional.
Changes from v1:
- move code out of coproc.c and into guest.c and arch_timer.c
- present the registers with their native CP15 addresses, but without
  using space in the VCPU's cp15 array
- do the user space copying in the accessor functions

Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
---
 arch/arm/include/asm/kvm_host.h |  5 ++++
 arch/arm/include/uapi/asm/kvm.h | 16 ++++++++++
 arch/arm/kvm/arch_timer.c       | 65 +++++++++++++++++++++++++++++++++++++++++
 arch/arm/kvm/guest.c            | 26 ++++++++++++++++-
 4 files changed, 111 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 57cb786..1096e33 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -224,4 +224,9 @@ static inline int kvm_arch_dev_ioctl_check_extension(long ext)
 int kvm_perf_init(void);
 int kvm_perf_teardown(void);
 
+int kvm_arm_num_timer_regs(void);
+int kvm_arm_copy_timer_indices(struct kvm_vcpu *, u64 __user *);
+int kvm_arm_timer_get_reg(struct kvm_vcpu *, const struct kvm_one_reg *);
+int kvm_arm_timer_set_reg(struct kvm_vcpu *, const struct kvm_one_reg *);
+
 #endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
index c1ee007..e3b0115 100644
--- a/arch/arm/include/uapi/asm/kvm.h
+++ b/arch/arm/include/uapi/asm/kvm.h
@@ -118,6 +118,22 @@ struct kvm_arch_memory_slot {
 #define KVM_REG_ARM_32_CRN_MASK         0x0000000000007800
 #define KVM_REG_ARM_32_CRN_SHIFT        11
 
+#define KVM_REG_ARM_32_CP15     (KVM_REG_ARM | KVM_REG_SIZE_U32 | \
+                                 (15ULL << KVM_REG_ARM_COPROC_SHIFT))
+#define KVM_REG_ARM_64_CP15     (KVM_REG_ARM | KVM_REG_SIZE_U64 | \
+                                 (15ULL << KVM_REG_ARM_COPROC_SHIFT))
+#define KVM_REG_ARM_TIMER_CTL   (KVM_REG_ARM_32_CP15 | \
+                                 ( 3ULL << KVM_REG_ARM_CRM_SHIFT) | \
+                                 (14ULL << KVM_REG_ARM_32_CRN_SHIFT) | \
+                                 ( 0ULL << KVM_REG_ARM_OPC1_SHIFT) | \
+                                 ( 1ULL << KVM_REG_ARM_32_OPC2_SHIFT))
+#define KVM_REG_ARM_TIMER_CNT   (KVM_REG_ARM_64_CP15 | \
+                                 (14ULL << KVM_REG_ARM_CRM_SHIFT) | \
+                                 ( 1ULL << KVM_REG_ARM_OPC1_SHIFT))
+#define KVM_REG_ARM_TIMER_CVAL  (KVM_REG_ARM_64_CP15 | \
+                                 (14ULL << KVM_REG_ARM_CRM_SHIFT) | \
+                                 ( 3ULL << KVM_REG_ARM_OPC1_SHIFT))
+
 /* Normal registers are mapped as coprocessor 16. */
 #define KVM_REG_ARM_CORE                (0x0010 << KVM_REG_ARM_COPROC_SHIFT)
 #define KVM_REG_ARM_CORE_REG(name)      (offsetof(struct kvm_regs, name) / 4)
diff --git a/arch/arm/kvm/arch_timer.c b/arch/arm/kvm/arch_timer.c
index c55b608..8d709eb 100644
--- a/arch/arm/kvm/arch_timer.c
+++ b/arch/arm/kvm/arch_timer.c
@@ -18,6 +18,7 @@
 #include <...>
 #include <...>
+#include <linux/uaccess.h>
 #include <...>
 #include <...>
 #include <...>
@@ -171,6 +172,70 @@ static void kvm_timer_init_interrupt(void *info)
        enable_percpu_irq(timer_irq.irq, 0);
 }
 
+int kvm_arm_num_timer_regs(void)
+{
+       return 3;
+}
+
+int kvm_arm_copy_timer_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
+{
+       if (put_user(KVM_REG_ARM_TIMER_CTL, uindices))
+               return -EFAULT;
+       uindices++;
+       if (put_user(KVM_REG_ARM_TIMER_CNT, uindices))
+               return -EFAULT;
+       uindices++;
+       if (put_user(KVM_REG_ARM_TIMER_CVAL, uindices))
+               return -EFAULT;
+
+       return 0;
+}
+
+int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+       struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+       void __user *uaddr = (void __user *)(long)reg->addr;
+       u64 val;
+       int ret;
+
+       ret = copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id));
+       if (ret != 0)
+               return ret;
+
+       switch (reg->id) {
+       case KVM_REG_ARM_TIMER_CTL:
+               timer->cntv_ctl = val;
+               break;
+       case KVM_REG_ARM_TIMER_CNT:
+               vcpu->kvm->arch.timer.cntvoff = kvm_phys_timer_read() - val;
+               break;
+       case KVM_REG_ARM_TIMER_CVAL:
+               timer->cntv_cval = val;
+               break;
+       }
+
+       return 0;
+}
+
+int kvm_arm_timer_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+       struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+       void __user *uaddr = (void __user *)(long)reg->addr;
+       u64 val;
+
+       switch (reg->id) {
+       case KVM_REG_ARM_TIMER_CTL:
+               val = timer->cntv_ctl;
+               break;
+       case KVM_REG_ARM_TIMER_CNT:
+               val = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
+               break;
+       case KVM_REG_ARM_TIMER_CVAL:
+               val = timer->cntv_cval;
+               break;
+       }
+       return copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id));
+}
+
 static int kvm_timer_cpu_notify(struct notifier_block *self,
                                unsigned long action, void *cpu)
diff --git a/arch/arm/kvm/guest.c b/arch/arm/kvm/guest.c
index 152d036..a50ffb6 100644
--- a/arch/arm/kvm/guest.c
+++ b/arch/arm/kvm/guest.c
@@ -121,7 +121,8 @@ static unsigned long num_core_regs(void)
  */
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
 {
-       return num_core_regs() + kvm_arm_num_coproc_regs(vcpu);
+       return num_core_regs() + kvm_arm_num_coproc_regs(vcpu)
+               + kvm_arm_num_timer_regs();
 }
 
 /**
@@ -133,6 +134,7 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 {
        unsigned int i;
        const u64 core_reg = KVM_REG_ARM | KVM_REG_SIZE_U32 | KVM_REG_ARM_CORE;
+       int ret;
 
        for (i = 0; i < sizeof(struct kvm_regs)/sizeof(u32); i++) {
                if (put_user(core_reg | i, uindices))
@@ -140,9 +142,25 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
                uindices++;
        }
 
+       ret = kvm_arm_copy_timer_indices(vcpu, uindices);
+       if (ret)
+               return ret;
+       uindices += kvm_arm_num_timer_regs();
+
        return kvm_arm_copy_coproc_indices(vcpu, uindices);
 }
 
+static bool is_timer_reg(u64 index)
+{
+       switch (index) {
+       case KVM_REG_ARM_TIMER_CTL:
+       case KVM_REG_ARM_TIMER_CNT:
+       case KVM_REG_ARM_TIMER_CVAL:
+               return true;
+       }
+       return false;
+}
+
 int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
        /* We currently use nothing arch-specific in upper 32 bits */
@@ -153,6 +171,9 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
        if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE)
                return get_core_reg(vcpu, reg);
 
+       if (is_timer_reg(reg->id))
+               return kvm_arm_timer_get_reg(vcpu, reg);
+
        return kvm_arm_coproc_get_reg(vcpu, reg);
 }
 
@@ -166,6 +187,9 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
        if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE)
                return set_core_reg(vcpu, reg);
 
+       if (is_timer_reg(reg->id))
+               return kvm_arm_timer_set_reg(vcpu, reg);
+
        return kvm_arm_coproc_set_reg(vcpu, reg);
 }