From patchwork Thu May 23 10:34:56 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sudeep Holla
X-Patchwork-Id: 164976
From: Sudeep Holla
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Sudeep Holla, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Christoffer Dall, Marc Zyngier, James Morse, Suzuki K Pouloze,
	Catalin Marinas, Will Deacon, Julien Thierry
Subject: [PATCH v2 09/15] arm64: KVM: add support to save/restore SPE profiling buffer controls
Date: Thu, 23 May 2019 11:34:56 +0100
Message-Id: <20190523103502.25925-10-sudeep.holla@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190523103502.25925-1-sudeep.holla@arm.com>
References: <20190523103502.25925-1-sudeep.holla@arm.com>

Currently, since we don't support SPE profiling in guests, we just save
PMSCR_EL1, flush the profiling buffers and disable sampling. However, in
order to support simultaneous sampling both in the host and in guests, we
need to save and restore the complete context of the SPE profiling buffer
controls.

Let's add support for that and keep it disabled for now. We can enable it
conditionally, but only if guests are allowed to use SPE.

Signed-off-by: Sudeep Holla
---
 arch/arm64/kvm/hyp/debug-sr.c | 44 ++++++++++++++++++++++++++++-------
 1 file changed, 35 insertions(+), 9 deletions(-)

-- 
2.17.1

diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index a2714a5eb3e9..a4e6eaf5934f 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -66,7 +66,8 @@
 	default:	write_debug(ptr[0], reg, 0);			\
 	}
 
-static void __hyp_text __debug_save_spe_nvhe(struct kvm_cpu_context *ctxt)
+static void __hyp_text
+__debug_save_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
 {
 	u64 reg;
 
@@ -83,22 +84,37 @@ static void __hyp_text __debug_save_spe_nvhe(struct kvm_cpu_context *ctxt)
 	if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT))
 		return;
 
-	/* No; is the host actually using the thing? */
-	reg = read_sysreg_s(SYS_PMBLIMITR_EL1);
-	if (!(reg & BIT(SYS_PMBLIMITR_EL1_E_SHIFT)))
+	/* Save the control register and disable data generation */
+	ctxt->sys_regs[PMSCR_EL1] = read_sysreg_el1_s(SYS_PMSCR);
+
+	if (!ctxt->sys_regs[PMSCR_EL1])
 		return;
 
-	/* Yes; save the control register and disable data generation */
-	ctxt->sys_regs[PMSCR_EL1] = read_sysreg_el1_s(SYS_PMSCR);
 	write_sysreg_el1_s(0, SYS_PMSCR);
 	isb();
 
 	/* Now drain all buffered data to memory */
 	psb_csync();
 	dsb(nsh);
+
+	if (!full_ctxt)
+		return;
+
+	ctxt->sys_regs[PMBLIMITR_EL1] = read_sysreg_s(SYS_PMBLIMITR_EL1);
+	write_sysreg_s(0, SYS_PMBLIMITR_EL1);
+	isb();
+
+	ctxt->sys_regs[PMSICR_EL1] = read_sysreg_s(SYS_PMSICR_EL1);
+	ctxt->sys_regs[PMSIRR_EL1] = read_sysreg_s(SYS_PMSIRR_EL1);
+	ctxt->sys_regs[PMSFCR_EL1] = read_sysreg_s(SYS_PMSFCR_EL1);
+	ctxt->sys_regs[PMSEVFR_EL1] = read_sysreg_s(SYS_PMSEVFR_EL1);
+	ctxt->sys_regs[PMSLATFR_EL1] = read_sysreg_s(SYS_PMSLATFR_EL1);
+	ctxt->sys_regs[PMBPTR_EL1] = read_sysreg_s(SYS_PMBPTR_EL1);
+	ctxt->sys_regs[PMBSR_EL1] = read_sysreg_s(SYS_PMBSR_EL1);
 }
 
-static void __hyp_text __debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt)
+static void __hyp_text
+__debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
 {
 	if (!ctxt->sys_regs[PMSCR_EL1])
 		return;
@@ -107,6 +123,16 @@ static void __hyp_text __debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt)
 	isb();
 
 	/* Re-enable data generation */
+	if (full_ctxt) {
+		write_sysreg_s(ctxt->sys_regs[PMBPTR_EL1], SYS_PMBPTR_EL1);
+		write_sysreg_s(ctxt->sys_regs[PMBLIMITR_EL1], SYS_PMBLIMITR_EL1);
+		write_sysreg_s(ctxt->sys_regs[PMSFCR_EL1], SYS_PMSFCR_EL1);
+		write_sysreg_s(ctxt->sys_regs[PMSEVFR_EL1], SYS_PMSEVFR_EL1);
+		write_sysreg_s(ctxt->sys_regs[PMSLATFR_EL1], SYS_PMSLATFR_EL1);
+		write_sysreg_s(ctxt->sys_regs[PMSIRR_EL1], SYS_PMSIRR_EL1);
+		write_sysreg_s(ctxt->sys_regs[PMSICR_EL1], SYS_PMSICR_EL1);
+		write_sysreg_s(ctxt->sys_regs[PMBSR_EL1], SYS_PMBSR_EL1);
+	}
 	write_sysreg_el1_s(ctxt->sys_regs[PMSCR_EL1], SYS_PMSCR);
 }
 
@@ -179,7 +205,7 @@ void __hyp_text __debug_restore_host_context(struct kvm_vcpu *vcpu)
 	guest_ctxt = &vcpu->arch.ctxt;
 
 	if (!has_vhe())
-		__debug_restore_spe_nvhe(host_ctxt);
+		__debug_restore_spe_nvhe(host_ctxt, false);
 
 	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
 		return;
@@ -203,7 +229,7 @@ void __hyp_text __debug_save_host_context(struct kvm_vcpu *vcpu)
 	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
 
 	if (!has_vhe())
-		__debug_save_spe_nvhe(host_ctxt);
+		__debug_save_spe_nvhe(host_ctxt, false);
 }
 
 void __hyp_text __debug_save_guest_context(struct kvm_vcpu *vcpu)
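
For illustration only, here is a minimal sketch of how the new full_ctxt
parameter could eventually be exercised on the guest path once guests are
allowed to use SPE. This is not part of the patch: __debug_save_spe_guest()
and kvm_arm_spe_v1_ready() are hypothetical names standing in for whatever
guest-side wiring and "guest may use SPE" check end up being used; only
__debug_save_spe_nvhe(), has_vhe() and vcpu->arch.ctxt come from the code
above.

static void __hyp_text __debug_save_spe_guest(struct kvm_vcpu *vcpu)
{
	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;

	/*
	 * Hypothetical sketch: request the full SPE buffer-control
	 * context for the guest only when the guest is allowed to use
	 * SPE (kvm_arm_spe_v1_ready() is a placeholder for that check).
	 * The host paths above keep passing false for now.
	 */
	if (!has_vhe())
		__debug_save_spe_nvhe(guest_ctxt, kvm_arm_spe_v1_ready(vcpu));
}

The restore side would mirror this by passing the same predicate to
__debug_restore_spe_nvhe() for the guest context.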