From patchwork Thu May 23 10:34:57 2019
X-Patchwork-Submitter: Sudeep Holla
X-Patchwork-Id: 164969
From: Sudeep Holla <sudeep.holla@arm.com>
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Sudeep Holla, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Christoffer Dall, Marc Zyngier, James Morse, Suzuki K Pouloze,
    Catalin Marinas, Will Deacon, Julien Thierry
Subject: [PATCH v2 10/15] arm64: KVM: enable conditional save/restore full SPE profiling buffer controls
Date: Thu, 23 May 2019 11:34:57 +0100
Message-Id: <20190523103502.25925-11-sudeep.holla@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190523103502.25925-1-sudeep.holla@arm.com>
References: <20190523103502.25925-1-sudeep.holla@arm.com>

Now that we can save/restore the full SPE controls, we can enable this
if SPE is set up and ready to use in KVM. SPE is supported in KVM only
if all the CPUs in the system support it. However, to support
heterogeneous systems, we need to move the check for whether the host
supports SPE and do a partial save/restore.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
---
(Reviewer note: a simplified, standalone sketch of the full-vs-partial
save decision is appended after the diff.)

 arch/arm64/kvm/hyp/debug-sr.c | 33 ++++++++++++++++-----------------
 include/kvm/arm_spe.h         |  3 +++
 2 files changed, 19 insertions(+), 17 deletions(-)

-- 
2.17.1

diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index a4e6eaf5934f..cd0a7571abc1 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -67,18 +67,13 @@
 }
 
 static void __hyp_text
-__debug_save_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
+__debug_save_spe_context(struct kvm_cpu_context *ctxt, bool full_ctxt)
 {
 	u64 reg;
 
 	/* Clear pmscr in case of early return */
 	ctxt->sys_regs[PMSCR_EL1] = 0;
 
-	/* SPE present on this CPU? */
-	if (!cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1),
-						  ID_AA64DFR0_PMSVER_SHIFT))
-		return;
-
 	/* Yes; is it owned by higher EL? */
 	reg = read_sysreg_s(SYS_PMBIDR_EL1);
 	if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT))
@@ -114,7 +109,7 @@ __debug_save_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
 }
 
 static void __hyp_text
-__debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
+__debug_restore_spe_context(struct kvm_cpu_context *ctxt, bool full_ctxt)
 {
 	if (!ctxt->sys_regs[PMSCR_EL1])
 		return;
@@ -182,11 +177,14 @@ void __hyp_text __debug_restore_guest_context(struct kvm_vcpu *vcpu)
 	struct kvm_guest_debug_arch *host_dbg;
 	struct kvm_guest_debug_arch *guest_dbg;
 
+	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+	guest_ctxt = &vcpu->arch.ctxt;
+
+	__debug_restore_spe_context(guest_ctxt, kvm_arm_spe_v1_ready(vcpu));
+
 	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
 		return;
 
-	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
-	guest_ctxt = &vcpu->arch.ctxt;
 	host_dbg = &vcpu->arch.host_debug_state.regs;
 	guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr);
 
@@ -204,8 +202,7 @@ void __hyp_text __debug_restore_host_context(struct kvm_vcpu *vcpu)
 	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
 	guest_ctxt = &vcpu->arch.ctxt;
 
-	if (!has_vhe())
-		__debug_restore_spe_nvhe(host_ctxt, false);
+	__debug_restore_spe_context(host_ctxt, kvm_arm_spe_v1_ready(vcpu));
 
 	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
 		return;
@@ -221,19 +218,21 @@ void __hyp_text __debug_restore_host_context(struct kvm_vcpu *vcpu)
 
 void __hyp_text __debug_save_host_context(struct kvm_vcpu *vcpu)
 {
-	/*
-	 * Non-VHE: Disable and flush SPE data generation
-	 * VHE: The vcpu can run, but it can't hide.
-	 */
 	struct kvm_cpu_context *host_ctxt;
 
 	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
-	if (!has_vhe())
-		__debug_save_spe_nvhe(host_ctxt, false);
+	if (cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1),
+						 ID_AA64DFR0_PMSVER_SHIFT))
+		__debug_save_spe_context(host_ctxt, kvm_arm_spe_v1_ready(vcpu));
 }
 
 void __hyp_text __debug_save_guest_context(struct kvm_vcpu *vcpu)
 {
+	bool kvm_spe_ready = kvm_arm_spe_v1_ready(vcpu);
+
+	/* SPE present on this vCPU? */
+	if (kvm_spe_ready)
+		__debug_save_spe_context(&vcpu->arch.ctxt, kvm_spe_ready);
 }
 
 u32 __hyp_text __kvm_get_mdcr_el2(void)
diff --git a/include/kvm/arm_spe.h b/include/kvm/arm_spe.h
index 2440ff02f747..fdcb0df1e0fd 100644
--- a/include/kvm/arm_spe.h
+++ b/include/kvm/arm_spe.h
@@ -18,6 +18,8 @@ struct kvm_spe {
 
 #ifdef CONFIG_KVM_ARM_SPE
 
+#define kvm_arm_spe_v1_ready(v)	((v)->arch.spe.ready)
+
 static inline bool kvm_arm_support_spe_v1(void)
 {
 	u64 dfr0 = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
@@ -27,6 +29,7 @@ static inline bool kvm_arm_support_spe_v1(void)
 }
 #else
 
+#define kvm_arm_spe_v1_ready(v)	(false)
 #define kvm_arm_support_spe_v1()	(false)
 
 #endif /* CONFIG_KVM_ARM_SPE */
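
For reviewers: below is a minimal, standalone sketch of the decision the
hyp save paths now make, assuming plain C stand-ins for the KVM
structures and helpers. struct vcpu_model, save_spe_context() and the
two debug_save_*() wrappers are hypothetical names used only for
illustration; they are not part of this patch or of the kernel API.

	/*
	 * Illustrative model only -- not kernel code. It mirrors the logic
	 * in __debug_save_host_context() / __debug_save_guest_context():
	 * the host SPE context is saved whenever the CPU implements SPE
	 * (the ID_AA64DFR0_EL1.PMSVer check), and the *full* controls are
	 * saved only when the vcpu's SPE device is ready
	 * (kvm_arm_spe_v1_ready()).
	 */
	#include <stdbool.h>
	#include <stdio.h>

	struct vcpu_model {
		bool cpu_has_spe;	/* stand-in for the PMSVer field check  */
		bool spe_ready;		/* stand-in for kvm_arm_spe_v1_ready(v) */
	};

	static void save_spe_context(bool full_ctxt)
	{
		printf("saving %s SPE context\n", full_ctxt ? "full" : "partial");
	}

	static void debug_save_host_context(const struct vcpu_model *v)
	{
		if (v->cpu_has_spe)			/* SPE present on this CPU?  */
			save_spe_context(v->spe_ready);	/* full only if guest-ready  */
	}

	static void debug_save_guest_context(const struct vcpu_model *v)
	{
		if (v->spe_ready)			/* SPE exposed to this vCPU? */
			save_spe_context(true);
	}

	int main(void)
	{
		struct vcpu_model v = { .cpu_has_spe = true, .spe_ready = false };

		debug_save_host_context(&v);	/* partial save: host-only SPE   */
		debug_save_guest_context(&v);	/* no-op: guest SPE not ready    */
		return 0;
	}

Compiled with any C compiler, this prints a single "partial" save for
the host-only configuration and skips the guest save, which is the
behaviour the hunks above are intended to produce on a heterogeneous
system where the guest's SPE device is not ready.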