From patchwork Fri Sep 25 21:41:17 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Auger Eric
X-Patchwork-Id: 54179
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, eric.auger@linaro.org, christoffer.dall@linaro.org,
	marc.zyngier@arm.com, drjones@redhat.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, patches@linaro.org
Subject: [PATCH v3 4/4] KVM: arm/arm64: implement kvm_arm_[halt, resume]_guest
Date: Fri, 25 Sep 2015 23:41:17 +0200
Message-Id: <1443217277-13173-5-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1443217277-13173-1-git-send-email-eric.auger@linaro.org>
References: <1443217277-13173-1-git-send-email-eric.auger@linaro.org>

We introduce the kvm_arm_halt_guest and kvm_arm_resume_guest functions.
They will be used for the IRQ forwarding state change.

Halt is synchronous and prevents the guest from being re-entered. We use
the same mechanism put in place for the former PSCI pause, now renamed
power_off. A new flag, pause, is introduced in the arch vcpu state; it is
only meant to be used by those functions.

Signed-off-by: Eric Auger <eric.auger@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

---

v2 -> v3:
- change the comment associated to the pause flag into:
  "Don't run the guest (internal implementation need)"
- add Christoffer's R-b

v1 -> v2:
- check pause in kvm_arch_vcpu_runnable
- we cannot use kvm_vcpu_block since the latter would exit on IRQ/FIQ,
  which is not what we want
---
 arch/arm/include/asm/kvm_host.h   |  3 +++
 arch/arm/kvm/arm.c                | 35 +++++++++++++++++++++++++++++++----
 arch/arm64/include/asm/kvm_host.h |  3 +++
 3 files changed, 37 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 1931a25..efbb812 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -128,6 +128,9 @@ struct kvm_vcpu_arch {
 	/* vcpu power-off state */
 	bool power_off;
 
+	/* Don't run the guest (internal implementation need) */
+	bool pause;
+
 	/* IO related fields */
 	struct kvm_decode mmio_decode;
 
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 4eb59e3..0308092 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -353,7 +353,7 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
 {
 	return ((!!v->arch.irq_lines || kvm_vgic_vcpu_pending_irq(v))
-		&& !v->arch.power_off);
+		&& !v->arch.power_off && !v->arch.pause);
 }
 
 /* Just ensure a guest exit from a particular CPU */
@@ -479,11 +479,38 @@ bool kvm_arch_intc_initialized(struct kvm *kvm)
 	return vgic_initialized(kvm);
 }
 
+static void kvm_arm_halt_guest(struct kvm *kvm) __maybe_unused;
+static void kvm_arm_resume_guest(struct kvm *kvm) __maybe_unused;
+
+static void kvm_arm_halt_guest(struct kvm *kvm)
+{
+	int i;
+	struct kvm_vcpu *vcpu;
+
+	kvm_for_each_vcpu(i, vcpu, kvm)
+		vcpu->arch.pause = true;
+	force_vm_exit(cpu_all_mask);
+}
+
+static void kvm_arm_resume_guest(struct kvm *kvm)
+{
+	int i;
+	struct kvm_vcpu *vcpu;
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		wait_queue_head_t *wq = kvm_arch_vcpu_wq(vcpu);
+
+		vcpu->arch.pause = false;
+		wake_up_interruptible(wq);
+	}
+}
+
 static void vcpu_sleep(struct kvm_vcpu *vcpu)
 {
 	wait_queue_head_t *wq = kvm_arch_vcpu_wq(vcpu);
 
-	wait_event_interruptible(*wq, !vcpu->arch.power_off);
+	wait_event_interruptible(*wq, ((!vcpu->arch.power_off) &&
+				       (!vcpu->arch.pause)));
 }
 
 static int kvm_vcpu_initialized(struct kvm_vcpu *vcpu)
@@ -533,7 +560,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 		update_vttbr(vcpu->kvm);
 
-		if (vcpu->arch.power_off)
+		if (vcpu->arch.power_off || vcpu->arch.pause)
 			vcpu_sleep(vcpu);
 
 		/*
@@ -561,7 +588,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		}
 
 		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm) ||
-			vcpu->arch.power_off) {
+			vcpu->arch.power_off || vcpu->arch.pause) {
 			local_irq_enable();
 			kvm_timer_sync_hwstate(vcpu);
 			kvm_vgic_sync_hwstate(vcpu);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d89ec1b..d19dfd6 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -151,6 +151,9 @@ struct kvm_vcpu_arch {
 	/* vcpu power-off state */
 	bool power_off;
 
+	/* Don't run the guest (internal implementation need) */
+	bool pause;
+
 	/* IO related fields */
 	struct kvm_decode mmio_decode;
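
For context, a minimal sketch of how a caller in the follow-up IRQ forwarding
series might bracket a forwarding state change with the two new helpers. The
caller name kvm_arm_set_fwd_state(), the kvm_fwd_irq type and the
do_change_fwd_state() helper are hypothetical, used only to illustrate the
intended pairing; in this patch the helpers are still static and marked
__maybe_unused, so a later patch would have to expose them before such a
caller could exist.

/*
 * Hypothetical caller, not part of this patch: bracket a forwarding
 * state change with the new helpers.  All names except
 * kvm_arm_halt_guest()/kvm_arm_resume_guest() are illustrative only.
 */
static int kvm_arm_set_fwd_state(struct kvm *kvm, struct kvm_fwd_irq *fwd)
{
	int ret;

	/* Synchronous: no VCPU can re-enter the guest after this returns. */
	kvm_arm_halt_guest(kvm);

	/* The forwarding state can now be changed safely. */
	ret = do_change_fwd_state(fwd);		/* placeholder */

	/* Clear the pause flags and wake up VCPUs sleeping in vcpu_sleep(). */
	kvm_arm_resume_guest(kvm);

	return ret;
}

The property the sketch relies on is the one stated in the commit message:
kvm_arm_halt_guest() is synchronous, so once it returns no VCPU can re-enter
the guest until kvm_arm_resume_guest() clears the pause flags and wakes the
VCPUs sleeping in vcpu_sleep().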