From patchwork Mon Jul  6 02:17:41 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Shannon Zhao
X-Patchwork-Id: 50683
From: shannon.zhao@linaro.org
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 11/18] KVM: ARM64: Add reset and access handlers for
	PMCNTENSET_EL0 and PMCNTENCLR_EL0 register
Date: Mon, 6 Jul 2015 10:17:41 +0800
Message-Id: <1436149068-3784-12-git-send-email-shannon.zhao@linaro.org>
X-Mailer: git-send-email 1.9.5.msysgit.1
In-Reply-To: <1436149068-3784-1-git-send-email-shannon.zhao@linaro.org>
References: <1436149068-3784-1-git-send-email-shannon.zhao@linaro.org>
Cc: kvm@vger.kernel.org, marc.zyngier@arm.com, will.deacon@arm.com,
	linux-arm-kernel@lists.infradead.org, zhaoshenglong@huawei.com,
	alex.bennee@linaro.org, christoffer.dall@linaro.org,
	shannon.zhao@linaro.org
From: Shannon Zhao <shannon.zhao@linaro.org>

Since the reset value of PMCNTENSET_EL0 and PMCNTENCLR_EL0 is UNKNOWN,
use reset_unknown for their reset handlers. Add access handlers which
emulate writing and reading the PMCNTENSET_EL0 and PMCNTENCLR_EL0
registers. When writing to PMCNTENSET_EL0, call perf_event_enable to
enable the perf event. When writing to PMCNTENCLR_EL0, call
perf_event_disable to disable the perf event.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 56 +++++++++++++++++++++++++++++++++++++++++++++--
 include/kvm/arm_pmu.h     |  4 ++++
 virt/kvm/arm/pmu.c        | 41 ++++++++++++++++++++++++++++++++++
 3 files changed, 99 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 29883df..c14ec8d 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -392,6 +392,58 @@ static bool access_pmccntr(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+/* PMCNTENSET_EL0 accessor. */
+static bool access_pmcntenset(struct kvm_vcpu *vcpu,
+			      const struct sys_reg_params *p,
+			      const struct sys_reg_desc *r)
+{
+	unsigned long val;
+
+	if (p->is_write) {
+		val = *vcpu_reg(vcpu, p->Rt);
+		if (!p->is_aarch32)
+			vcpu_sys_reg(vcpu, r->reg) |= val;
+		else
+			vcpu_cp15(vcpu, r->reg) |= val & 0xffffffffUL;
+
+		kvm_pmu_enable_counter(vcpu, val);
+	} else {
+		if (!p->is_aarch32)
+			val = vcpu_sys_reg(vcpu, r->reg);
+		else
+			val = vcpu_cp15(vcpu, r->reg);
+		*vcpu_reg(vcpu, p->Rt) = val;
+	}
+
+	return true;
+}
+
+/* PMCNTENCLR_EL0 accessor. */
+static bool access_pmcntenclr(struct kvm_vcpu *vcpu,
+			      const struct sys_reg_params *p,
+			      const struct sys_reg_desc *r)
+{
+	unsigned long val;
+
+	if (p->is_write) {
+		val = *vcpu_reg(vcpu, p->Rt);
+		if (!p->is_aarch32)
+			vcpu_sys_reg(vcpu, r->reg) |= val;
+		else
+			vcpu_cp15(vcpu, r->reg) |= val & 0xffffffffUL;
+
+		kvm_pmu_disable_counter(vcpu, val);
+	} else {
+		if (!p->is_aarch32)
+			val = vcpu_sys_reg(vcpu, r->reg);
+		else
+			val = vcpu_cp15(vcpu, r->reg);
+		*vcpu_reg(vcpu, p->Rt) = val;
+	}
+
+	return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
@@ -586,10 +638,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  access_pmcr, reset_pmcr_el0, PMCR_EL0, },
 	/* PMCNTENSET_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
-	  trap_raz_wi },
+	  access_pmcntenset, reset_unknown, PMCNTENSET_EL0 },
 	/* PMCNTENCLR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010),
-	  trap_raz_wi },
+	  access_pmcntenclr, reset_unknown, PMCNTENCLR_EL0 },
 	/* PMOVSCLR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
 	  trap_raz_wi },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 40ab4a0..2cfd9be 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -49,6 +49,8 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, unsigned long select_idx,
 			       unsigned long val);
 unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
 					unsigned long select_idx);
+void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, unsigned long val);
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, unsigned long val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, unsigned long data,
 				    unsigned long select_idx);
 void kvm_pmu_init(struct kvm_vcpu *vcpu);
@@ -61,6 +63,8 @@ unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
 {
 	return 0;
 }
+void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, unsigned long val) {}
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, unsigned long val) {}
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, unsigned long data,
 				    unsigned long select_idx) {}
 static inline void kvm_pmu_init(struct kvm_vcpu *vcpu) {}
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 361fa51..cf59998 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -134,6 +134,47 @@ unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
 }
 
 /**
+ * kvm_pmu_enable_counter - enable selected PMU counter
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMCNTENSET_EL0 register
+ *
+ * Call perf_event_enable to start counting the perf event
+ */
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, unsigned long val)
+{
+	int select_idx = find_first_bit(&val, 32);
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	if (pmc->perf_event) {
+		local64_set(&pmc->perf_event->count, 0);
+		perf_event_enable(pmc->perf_event);
+		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
+			printk("kvm: fail to enable event\n");
+	}
+	pmc->enable = true;
+}
+
+/**
+ * kvm_pmu_disable_counter - disable selected PMU counter
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMCNTENCLR_EL0 register
+ *
+ * Call perf_event_disable to stop counting the perf event
+ */
+void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, unsigned long val)
+{
+	int select_idx = find_first_bit(&val, 32);
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	if (pmc->perf_event)
+		perf_event_disable(pmc->perf_event);
+
+	pmc->enable = false;
+}
+
+/**
  * kvm_pmu_find_hw_event - find hardware event
  * @pmu: The pmu pointer
  * @event_select: The number of selected event type
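For readers unfamiliar with the architectural SET/CLR register pairs the patch emulates, the intended behaviour can be sketched as a standalone userspace model (illustrative only, not part of the patch; `pmu_model`, `write_pmcntenset` and `write_pmcntenclr` are hypothetical names — in the patch itself the state lives in vcpu_sys_reg()/vcpu_cp15() and enabling/disabling is forwarded to perf):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical model of the PMCNTENSET_EL0/PMCNTENCLR_EL0 pair: both
 * registers act on one underlying counter-enable mask.  Writing a 1
 * bit to the SET register enables that counter; writing a 1 bit to
 * the CLR register disables it; 0 bits are ignored in both cases, so
 * the other counters are left untouched.
 */
struct pmu_model {
	uint32_t enable_mask;		/* bit n set => counter n enabled */
};

static void write_pmcntenset(struct pmu_model *pmu, uint32_t val)
{
	pmu->enable_mask |= val;	/* 1 bits enable, 0 bits ignored */
}

static void write_pmcntenclr(struct pmu_model *pmu, uint32_t val)
{
	pmu->enable_mask &= ~val;	/* 1 bits disable, 0 bits ignored */
}
```

The write-1-to-act convention lets a guest toggle a single counter without a read-modify-write of the whole mask.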