From patchwork Mon Jul  6 02:17:37 2015
X-Patchwork-Submitter: Shannon Zhao <shannon.zhao@linaro.org>
X-Patchwork-Id: 50679
From: shannon.zhao@linaro.org
To: kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, marc.zyngier@arm.com, will.deacon@arm.com,
 linux-arm-kernel@lists.infradead.org, zhaoshenglong@huawei.com,
 alex.bennee@linaro.org, christoffer.dall@linaro.org, shannon.zhao@linaro.org
Subject: [PATCH 07/18] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
Date: Mon, 6 Jul 2015 10:17:37 +0800
Message-Id: <1436149068-3784-8-git-send-email-shannon.zhao@linaro.org>
X-Mailer: git-send-email 1.9.5.msysgit.1
In-Reply-To: <1436149068-3784-1-git-send-email-shannon.zhao@linaro.org>
References: <1436149068-3784-1-git-send-email-shannon.zhao@linaro.org>
From: Shannon Zhao <shannon.zhao@linaro.org>

When we use tools like perf on the host, perf passes the event type and
the id within that type category to the kernel, and the kernel maps them
to a hardware event number which it writes to the PMU PMEVTYPER_EL0
register. When trapping and emulating guest accesses to the PMU
registers, we do the reverse: we get the event number and map it back to
the event type and id.

First check whether the event type is the same as the one already set.
If not, stop the counter monitoring the current event and look up the id
in the event type map. Configure the perf_event attr according to the
bits of data, setting exclude_host to 1 for the guest, then call the
perf_event API to create the corresponding event and save the event
pointer.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 include/kvm/arm_pmu.h |   4 ++
 virt/kvm/arm/pmu.c    | 173 ++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 177 insertions(+)

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 27d14ca..1050b24 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -45,9 +45,13 @@ struct kvm_pmu {
 
 #ifdef CONFIG_KVM_ARM_PMU
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, unsigned long data,
+				    unsigned long select_idx);
 void kvm_pmu_init(struct kvm_vcpu *vcpu);
 #else
 static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
+			unsigned long data, unsigned long select_idx) {}
 static inline void kvm_pmu_init(struct kvm_vcpu *vcpu) {}
 #endif
 
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index dc252d0..50a3c82 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -18,8 +18,68 @@
 #include <linux/cpu.h>
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
+#include <linux/perf_event.h>
 #include <kvm/arm_pmu.h>
 
+/* PMU HW events mapping. */
+static struct kvm_pmu_hw_event_map {
+	unsigned eventsel;
+	unsigned event_type;
+} kvm_pmu_hw_events[] = {
+	[0] = { 0x11, PERF_COUNT_HW_CPU_CYCLES },
+	[1] = { 0x08, PERF_COUNT_HW_INSTRUCTIONS },
+	[2] = { 0x04, PERF_COUNT_HW_CACHE_REFERENCES },
+	[3] = { 0x03, PERF_COUNT_HW_CACHE_MISSES },
+	[4] = { 0x10, PERF_COUNT_HW_BRANCH_MISSES },
+};
+
+/* PMU HW cache events mapping. */
+static struct kvm_pmu_hw_cache_event_map {
+	unsigned eventsel;
+	unsigned cache_type;
+	unsigned cache_op;
+	unsigned cache_result;
+} kvm_pmu_hw_cache_events[] = {
+	[0] = { 0x04, PERF_COUNT_HW_CACHE_L1D, PERF_COUNT_HW_CACHE_OP_READ,
+		PERF_COUNT_HW_CACHE_RESULT_ACCESS },
+	[1] = { 0x03, PERF_COUNT_HW_CACHE_L1D, PERF_COUNT_HW_CACHE_OP_READ,
+		PERF_COUNT_HW_CACHE_RESULT_MISS },
+	[2] = { 0x04, PERF_COUNT_HW_CACHE_L1D, PERF_COUNT_HW_CACHE_OP_WRITE,
+		PERF_COUNT_HW_CACHE_RESULT_ACCESS },
+	[3] = { 0x03, PERF_COUNT_HW_CACHE_L1D, PERF_COUNT_HW_CACHE_OP_WRITE,
+		PERF_COUNT_HW_CACHE_RESULT_MISS },
+	[4] = { 0x12, PERF_COUNT_HW_CACHE_BPU, PERF_COUNT_HW_CACHE_OP_READ,
+		PERF_COUNT_HW_CACHE_RESULT_ACCESS },
+	[5] = { 0x10, PERF_COUNT_HW_CACHE_BPU, PERF_COUNT_HW_CACHE_OP_READ,
+		PERF_COUNT_HW_CACHE_RESULT_MISS },
+	[6] = { 0x12, PERF_COUNT_HW_CACHE_BPU, PERF_COUNT_HW_CACHE_OP_WRITE,
+		PERF_COUNT_HW_CACHE_RESULT_ACCESS },
+	[7] = { 0x10, PERF_COUNT_HW_CACHE_BPU, PERF_COUNT_HW_CACHE_OP_WRITE,
+		PERF_COUNT_HW_CACHE_RESULT_MISS },
+};
+
+/**
+ * kvm_pmu_stop_counter - stop PMU counter for the selected counter
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ *
+ * If this counter has been configured to monitor some event, disable and
+ * release it.
+ */
+static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu,
+				 unsigned long select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	if (pmc->perf_event) {
+		perf_event_disable(pmc->perf_event);
+		perf_event_release_kernel(pmc->perf_event);
+	}
+	pmc->perf_event = NULL;
+	pmc->eventsel = 0xff;
+}
+
 /**
  * kvm_pmu_vcpu_reset - reset pmu state for cpu
  * @vcpu: The vcpu pointer
@@ -27,12 +87,125 @@
  */
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
 {
+	int i;
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	for (i = 0; i < ARMV8_MAX_COUNTERS; i++)
+		kvm_pmu_stop_counter(vcpu, i);
+
 	pmu->overflow_status = 0;
 	pmu->irq_pending = false;
 }
 
 /**
+ * kvm_pmu_find_hw_event - find hardware event
+ * @pmu: The pmu pointer
+ * @event_select: The number of selected event type
+ *
+ * Based on the number of selected event type, find out whether it belongs to
+ * PERF_TYPE_HARDWARE. If so, return the corresponding event id.
+ */
+static unsigned kvm_pmu_find_hw_event(struct kvm_pmu *pmu,
+				      unsigned long event_select)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(kvm_pmu_hw_events); i++)
+		if (kvm_pmu_hw_events[i].eventsel == event_select)
+			break;
+
+	if (i == ARRAY_SIZE(kvm_pmu_hw_events))
+		return PERF_COUNT_HW_MAX;
+
+	return kvm_pmu_hw_events[i].event_type;
+}
+
+/**
+ * kvm_pmu_find_hw_cache_event - find hardware cache event
+ * @pmu: The pmu pointer
+ * @event_select: The number of selected event type
+ *
+ * Based on the number of selected event type, find out whether it belongs to
+ * PERF_TYPE_HW_CACHE. If so, return the corresponding event id.
+ */
+static unsigned kvm_pmu_find_hw_cache_event(struct kvm_pmu *pmu,
+					    unsigned long event_select)
+{
+	int i;
+	unsigned config;
+
+	for (i = 0; i < ARRAY_SIZE(kvm_pmu_hw_cache_events); i++)
+		if (kvm_pmu_hw_cache_events[i].eventsel == event_select)
+			break;
+
+	if (i == ARRAY_SIZE(kvm_pmu_hw_cache_events))
+		return PERF_COUNT_HW_CACHE_MAX;
+
+	config = (kvm_pmu_hw_cache_events[i].cache_type & 0xff)
+		 | ((kvm_pmu_hw_cache_events[i].cache_op & 0xff) << 8)
+		 | ((kvm_pmu_hw_cache_events[i].cache_result & 0xff) << 16);
+
+	return config;
+}
+
+/**
+ * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * @vcpu: The vcpu pointer
+ * @data: The data guest writes to PMXEVTYPER_EL0
+ * @select_idx: The number of selected counter
+ *
+ * First check whether the event type is the same as the one already set.
+ * If not, stop the counter monitoring the current event and look up the id
+ * in the event type map. Configure the perf_event attr according to the
+ * bits of data, setting exclude_host to 1 for the guest, then call the
+ * perf_event API to create the corresponding event and save the pointer.
+ */
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, unsigned long data,
+				    unsigned long select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	struct perf_event *event;
+	struct perf_event_attr attr = {};
+	unsigned config, type = PERF_TYPE_RAW;
+
+	if ((data & ARMV8_EVTYPE_EVENT) == pmc->eventsel)
+		return;
+
+	kvm_pmu_stop_counter(vcpu, select_idx);
+	pmc->eventsel = data & ARMV8_EVTYPE_EVENT;
+
+	config = kvm_pmu_find_hw_event(pmu, pmc->eventsel);
+	if (config != PERF_COUNT_HW_MAX) {
+		type = PERF_TYPE_HARDWARE;
+	} else {
+		config = kvm_pmu_find_hw_cache_event(pmu, pmc->eventsel);
+		if (config != PERF_COUNT_HW_CACHE_MAX)
+			type = PERF_TYPE_HW_CACHE;
+	}
+
+	if (type == PERF_TYPE_RAW)
+		config = pmc->eventsel;
+
+	attr.type = type;
+	attr.size = sizeof(attr);
+	attr.pinned = true;
+	attr.exclude_user = data & ARMV8_EXCLUDE_EL0 ? 1 : 0;
+	attr.exclude_kernel = data & ARMV8_EXCLUDE_EL1 ? 1 : 0;
+	attr.exclude_hv = data & ARMV8_INCLUDE_EL2 ? 0 : 1;
+	attr.exclude_host = 1;
+	attr.config = config;
+	attr.sample_period = (-pmc->counter) & (((u64)1 << 32) - 1);
+
+	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
+	if (IS_ERR(event)) {
+		kvm_err("kvm: pmu event creation failed %ld\n",
+			PTR_ERR(event));
+		return;
+	}
+	pmc->perf_event = event;
+}
+
+/**
  * kvm_pmu_init - Initialize global PMU state for per vcpu
  * @vcpu: The vcpu pointer
  *
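
A note on the config value built by kvm_pmu_find_hw_cache_event() above:
it follows the generic perf cache-event encoding documented in
perf_event_open(2), i.e. id | (op << 8) | (result << 16). The short
userspace sketch below illustrates the same packing for the L1D
read-miss entry (ARMv8 event number 0x03, entry [1] of
kvm_pmu_hw_cache_events); hw_cache_config() is a hypothetical helper
used for illustration only and is not part of this patch.

#include <stdio.h>
#include <linux/perf_event.h>

/*
 * Pack a PERF_TYPE_HW_CACHE config the same way
 * kvm_pmu_find_hw_cache_event() does: id | (op << 8) | (result << 16).
 * Hypothetical illustration, not part of this patch.
 */
static unsigned int hw_cache_config(unsigned int id, unsigned int op,
				    unsigned int result)
{
	return (id & 0xff) | ((op & 0xff) << 8) | ((result & 0xff) << 16);
}

int main(void)
{
	/* L1D read misses: the map entry for ARMv8 event number 0x03. */
	unsigned int config = hw_cache_config(PERF_COUNT_HW_CACHE_L1D,
					      PERF_COUNT_HW_CACHE_OP_READ,
					      PERF_COUNT_HW_CACHE_RESULT_MISS);

	printf("attr.type = PERF_TYPE_HW_CACHE, attr.config = %#x\n", config);
	return 0;
}

Likewise, attr.sample_period = (-pmc->counter) & (((u64)1 << 32) - 1)
arms the host perf event with exactly the number of counts remaining
before the emulated 32-bit guest counter wraps, so the host event
overflows at the same point the guest counter would.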