From patchwork Fri Sep 11 08:55:01 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Shannon Zhao
X-Patchwork-Id: 53409
From: Shannon Zhao
Subject: [PATCH v2 08/22] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
Date: Fri, 11 Sep 2015 16:55:01 +0800
Message-ID: <1441961715-11688-9-git-send-email-zhaoshenglong@huawei.com>
X-Mailer: git-send-email 1.9.0.msysgit.0
In-Reply-To: <1441961715-11688-1-git-send-email-zhaoshenglong@huawei.com>
References: <1441961715-11688-1-git-send-email-zhaoshenglong@huawei.com>
MIME-Version: 1.0
Cc: wei@redhat.com, kvm@vger.kernel.org, marc.zyngier@arm.com,
	will.deacon@arm.com, peter.huangpeng@huawei.com,
	linux-arm-kernel@lists.infradead.org, zhaoshenglong@huawei.com,
	alex.bennee@linaro.org, christoffer.dall@linaro.org,
	shannon.zhao@linaro.org

From: Shannon Zhao

When we use tools such as perf on the host, perf passes the event type and
the id within that event type category to the kernel, and the kernel maps
them to a hardware event number which it writes to the PMEVTYPER<n>_EL0
register. When we trap and emulate guest accesses to the PMU registers, we
do the reverse: we take the hardware event number and map it back to the
event type and id, then call the perf_event kernel API to create an event
for it.
Signed-off-by: Shannon Zhao
---
 arch/arm64/include/asm/pmu.h |   2 +
 arch/arm64/kvm/Makefile      |   1 +
 include/kvm/arm_pmu.h        |  15 +++
 virt/kvm/arm/pmu.c           | 240 +++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 258 insertions(+)
 create mode 100644 virt/kvm/arm/pmu.c

diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
index 95681e6..42e7093 100644
--- a/arch/arm64/include/asm/pmu.h
+++ b/arch/arm64/include/asm/pmu.h
@@ -33,6 +33,8 @@
 #define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
 #define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
 #define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug */
+/* Determines which PMCCNTR_EL0 bit generates an overflow */
+#define ARMV8_PMCR_LC		(1 << 6)
 #define ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
 #define ARMV8_PMCR_N_MASK	0x1f
 #define ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */

diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index f90f4aa..78db4ee 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -27,3 +27,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
 kvm-$(CONFIG_KVM_ARM_HOST) += vgic-v3-switch.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
+kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 64af88a..387ec6f 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -36,4 +36,19 @@ struct kvm_pmu {
 #endif
 };
 
+#ifdef CONFIG_KVM_ARM_PMU
+unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
+					unsigned long select_idx);
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, unsigned long data,
+				    unsigned long select_idx);
+#else
+static inline unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
+						      unsigned long select_idx)
+{
+	return 0;
+}
+static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
+				unsigned long data, unsigned long select_idx) {}
+#endif
+
 #endif

diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
new file mode 100644
index 0000000..0c7fe5c
--- /dev/null
+++ b/virt/kvm/arm/pmu.c
@@ -0,0 +1,240 @@
+/*
+ * Copyright (C) 2015 Linaro Ltd.
+ * Author: Shannon Zhao
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/cpu.h>
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <linux/perf_event.h>
+#include <asm/kvm_emulate.h>
+#include <kvm/arm_pmu.h>
+
+/* PMU HW events mapping. */
+static struct kvm_pmu_hw_event_map {
+	unsigned eventsel;
+	unsigned event_type;
+} kvm_pmu_hw_events[] = {
+	[0] = { 0x11, PERF_COUNT_HW_CPU_CYCLES },
+	[1] = { 0x08, PERF_COUNT_HW_INSTRUCTIONS },
+	[2] = { 0x04, PERF_COUNT_HW_CACHE_REFERENCES },
+	[3] = { 0x03, PERF_COUNT_HW_CACHE_MISSES },
+	[4] = { 0x10, PERF_COUNT_HW_BRANCH_MISSES },
+};
+
+/* PMU HW cache events mapping. */
+static struct kvm_pmu_hw_cache_event_map {
+	unsigned eventsel;
+	unsigned cache_type;
+	unsigned cache_op;
+	unsigned cache_result;
+} kvm_pmu_hw_cache_events[] = {
+	[0] = { 0x12, PERF_COUNT_HW_CACHE_BPU, PERF_COUNT_HW_CACHE_OP_READ,
+		PERF_COUNT_HW_CACHE_RESULT_ACCESS },
+	[1] = { 0x12, PERF_COUNT_HW_CACHE_BPU, PERF_COUNT_HW_CACHE_OP_WRITE,
+		PERF_COUNT_HW_CACHE_RESULT_ACCESS },
+};
+
+static void kvm_pmu_set_evttyper(struct kvm_vcpu *vcpu, unsigned long idx,
+				 unsigned long val)
+{
+	if (!vcpu_mode_is_32bit(vcpu))
+		vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + idx) = val;
+	else
+		vcpu_cp15(vcpu, c14_PMEVTYPER0 + idx) = val;
+}
+
+static unsigned long kvm_pmu_get_evttyper(struct kvm_vcpu *vcpu,
+					  unsigned long idx)
+{
+	if (!vcpu_mode_is_32bit(vcpu))
+		return vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + idx)
+		       & ARMV8_EVTYPE_EVENT;
+	else
+		return vcpu_cp15(vcpu, c14_PMEVTYPER0 + idx)
+		       & ARMV8_EVTYPE_EVENT;
+}
+
+/**
+ * kvm_pmu_stop_counter - stop PMU counter for the selected counter
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ *
+ * If this counter has been configured to monitor some event, disable and
+ * release it.
+ */
+static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu,
+				 unsigned long select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	if (pmc->perf_event) {
+		perf_event_disable(pmc->perf_event);
+		perf_event_release_kernel(pmc->perf_event);
+		pmc->perf_event = NULL;
+	}
+	kvm_pmu_set_evttyper(vcpu, select_idx, ARMV8_EVTYPE_EVENT);
+}
+
+/**
+ * kvm_pmu_get_counter_value - get PMU counter value
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
+					unsigned long select_idx)
+{
+	u64 enabled, running;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	unsigned long counter;
+
+	if (!vcpu_mode_is_32bit(vcpu))
+		counter = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + select_idx);
+	else
+		counter = vcpu_cp15(vcpu, c14_PMEVCNTR0 + select_idx);
+
+	if (pmc->perf_event)
+		counter += perf_event_read_value(pmc->perf_event,
+						 &enabled, &running);
+
+	return counter;
+}
+
+/**
+ * kvm_pmu_find_hw_event - find hardware event
+ * @pmu: The pmu pointer
+ * @event_select: The number of the selected event type
+ *
+ * Based on the number of the selected event type, find out whether it belongs
+ * to PERF_TYPE_HARDWARE. If so, return the corresponding event id.
+ */
+static unsigned kvm_pmu_find_hw_event(struct kvm_pmu *pmu,
+				      unsigned long event_select)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(kvm_pmu_hw_events); i++)
+		if (kvm_pmu_hw_events[i].eventsel == event_select)
+			return kvm_pmu_hw_events[i].event_type;
+
+	return PERF_COUNT_HW_MAX;
+}
+
+/**
+ * kvm_pmu_find_hw_cache_event - find hardware cache event
+ * @pmu: The pmu pointer
+ * @event_select: The number of the selected event type
+ *
+ * Based on the number of the selected event type, find out whether it belongs
+ * to PERF_TYPE_HW_CACHE. If so, return the corresponding event id.
+ */
+static unsigned kvm_pmu_find_hw_cache_event(struct kvm_pmu *pmu,
+					    unsigned long event_select)
+{
+	int i;
+	unsigned config;
+
+	for (i = 0; i < ARRAY_SIZE(kvm_pmu_hw_cache_events); i++)
+		if (kvm_pmu_hw_cache_events[i].eventsel == event_select) {
+			config = (kvm_pmu_hw_cache_events[i].cache_type & 0xff)
+				| ((kvm_pmu_hw_cache_events[i].cache_op & 0xff) << 8)
+				| ((kvm_pmu_hw_cache_events[i].cache_result & 0xff) << 16);
+			return config;
+		}
+
+	return PERF_COUNT_HW_CACHE_MAX;
+}
+
+/**
+ * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * @vcpu: The vcpu pointer
+ * @data: The data guest writes to PMXEVTYPER_EL0
+ * @select_idx: The number of the selected counter
+ *
+ * When the guest OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC
+ * to count an event with the given hardware event number. Here we call the
+ * perf_event API to emulate this action and create a kernel perf event for it.
+ */
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, unsigned long data,
+				    unsigned long select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	struct perf_event *event;
+	struct perf_event_attr attr;
+	unsigned config, type = PERF_TYPE_RAW;
+	unsigned int new_eventsel, old_eventsel;
+	u64 counter;
+	int overflow_bit, pmcr_lc;
+
+	old_eventsel = kvm_pmu_get_evttyper(vcpu, select_idx);
+	new_eventsel = data & ARMV8_EVTYPE_EVENT;
+	if (new_eventsel == old_eventsel) {
+		if (pmc->perf_event)
+			local64_set(&pmc->perf_event->count, 0);
+		return;
+	}
+
+	kvm_pmu_stop_counter(vcpu, select_idx);
+	kvm_pmu_set_evttyper(vcpu, select_idx, data);
+
+	config = kvm_pmu_find_hw_event(pmu, new_eventsel);
+	if (config != PERF_COUNT_HW_MAX) {
+		type = PERF_TYPE_HARDWARE;
+	} else {
+		config = kvm_pmu_find_hw_cache_event(pmu, new_eventsel);
+		if (config != PERF_COUNT_HW_CACHE_MAX)
+			type = PERF_TYPE_HW_CACHE;
+	}
+
+	if (type == PERF_TYPE_RAW)
+		config = new_eventsel;
+
+	memset(&attr, 0, sizeof(struct perf_event_attr));
+	attr.type = type;
+	attr.size = sizeof(attr);
+	attr.pinned = 1;
+	attr.disabled = 1;
+	attr.exclude_user = data & ARMV8_EXCLUDE_EL0 ? 1 : 0;
+	attr.exclude_kernel = data & ARMV8_EXCLUDE_EL1 ? 1 : 0;
+	attr.exclude_host = 1; /* Don't count host events */
+	attr.config = config;
+
+	overflow_bit = 31; /* Generic counters are 32-bit registers */
+	if (new_eventsel == 0x11) {
+		/*
+		 * The cycle counter overflows on an increment that changes
+		 * PMCCNTR[63] or PMCCNTR[31] from 1 to 0, according to the
+		 * value of ARMV8_PMCR_LC.
+		 */
+		if (!vcpu_mode_is_32bit(vcpu))
+			pmcr_lc = vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_LC;
+		else
+			pmcr_lc = vcpu_cp15(vcpu, c9_PMCR) & ARMV8_PMCR_LC;
+
+		overflow_bit = pmcr_lc ? 63 : 31;
+	}
+	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
+	/* The initial sample period (overflow count) of the event. */
+	attr.sample_period = (-counter) & (((u64)1 << overflow_bit) - 1);
+
+	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
+	if (IS_ERR(event)) {
+		printk_once("kvm: pmu event creation failed %ld\n",
+			    PTR_ERR(event));
+		return;
+	}
+	pmc->perf_event = event;
+}