From patchwork Wed Jul  9 13:55:12 2014
X-Patchwork-Submitter: Alex Bennée <alex.bennee@linaro.org>
X-Patchwork-Id: 33315
From: Alex Bennée <alex.bennee@linaro.org>
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: kvm@vger.kernel.org, Christoffer Dall, Marc Zyngier, Catalin Marinas,
 Will Deacon, Gleb Natapov, Paolo Bonzini,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH] arm64: KVM: export current vcpu->pause state via pseudo regs
Date: Wed, 9 Jul 2014 14:55:12 +0100
Message-Id: <1404914112-7298-1-git-send-email-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.0.1
MIME-Version: 1.0

To cleanly restore an SMP VM we need to ensure that the current pause
state of each vcpu is correctly recorded. Things could otherwise get
confused if a vCPU that was paused before its state was captured starts
running as soon as the migration restore completes.

I've done this by exposing a register (currently only 1 bit used) via
the GET/SET_ONE_REG logic, which passes the state between KVM and the
VM controller (e.g. QEMU).
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 arch/arm64/include/uapi/asm/kvm.h |  8 +++++
 arch/arm64/kvm/guest.c            | 61 ++++++++++++++++++++++++++++++++++++++-
 2 files changed, 68 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index eaf54a3..8990e6e 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -148,6 +148,14 @@ struct kvm_arch_memory_slot {
 #define KVM_REG_ARM_TIMER_CNT	ARM64_SYS_REG(3, 3, 14, 3, 2)
 #define KVM_REG_ARM_TIMER_CVAL	ARM64_SYS_REG(3, 3, 14, 0, 2)
 
+/* Power state (PSCI), not real registers */
+#define KVM_REG_ARM_PSCI	(0x0014 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_REG_ARM_PSCI_REG(n) \
+	(KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_PSCI | \
+	 (n & ~KVM_REG_ARM_COPROC_MASK))
+#define KVM_REG_ARM_PSCI_STATE	KVM_REG_ARM_PSCI_REG(0)
+#define NUM_KVM_PSCI_REGS	1
+
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR	0
 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS	1
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 205f0d8..31d6439 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -189,6 +189,54 @@ static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 }
 
 /**
+ * PSCI State
+ *
+ * These are not real registers as they do not actually exist in the
+ * hardware but represent the current power state of the vCPU.
+ */
+
+static bool is_psci_reg(u64 index)
+{
+	switch (index) {
+	case KVM_REG_ARM_PSCI_STATE:
+		return true;
+	}
+	return false;
+}
+
+static int copy_psci_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
+{
+	if (put_user(KVM_REG_ARM_PSCI_STATE, uindices))
+		return -EFAULT;
+	return 0;
+}
+
+static int set_psci_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	void __user *uaddr = (void __user *)(long)reg->addr;
+	u64 val;
+	int ret;
+
+	ret = copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id));
+	if (ret != 0)
+		return -EFAULT;
+
+	vcpu->arch.pause = (val & 0x1) ? false : true;
+	return 0;
+}
+
+static int get_psci_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	void __user *uaddr = (void __user *)(long)reg->addr;
+	u64 val;
+
+	/* currently we only use one bit */
+	val = vcpu->arch.pause ? 0 : 1;
+	return copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)) ? -EFAULT : 0;
+}
+
+
+/**
  * kvm_arm_num_regs - how many registers do we present via KVM_GET_ONE_REG
  *
  * This is for all registers.
@@ -196,7 +244,7 @@ static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
 {
 	return num_core_regs() + kvm_arm_num_sys_reg_descs(vcpu)
-		+ NUM_TIMER_REGS;
+		+ NUM_TIMER_REGS + NUM_KVM_PSCI_REGS;
 }
 
 /**
@@ -221,6 +269,11 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 		return ret;
 	uindices += NUM_TIMER_REGS;
 
+	ret = copy_psci_indices(vcpu, uindices);
+	if (ret)
+		return ret;
+	uindices += NUM_KVM_PSCI_REGS;
+
 	return kvm_arm_copy_sys_reg_indices(vcpu, uindices);
 }
 
@@ -237,6 +290,9 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if (is_timer_reg(reg->id))
 		return get_timer_reg(vcpu, reg);
 
+	if (is_psci_reg(reg->id))
+		return get_psci_reg(vcpu, reg);
+
 	return kvm_arm_sys_reg_get_reg(vcpu, reg);
 }
 
@@ -253,6 +309,9 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if (is_timer_reg(reg->id))
 		return set_timer_reg(vcpu, reg);
 
+	if (is_psci_reg(reg->id))
+		return set_psci_reg(vcpu, reg);
+
 	return kvm_arm_sys_reg_set_reg(vcpu, reg);
 }