From patchwork Mon Oct 27 12:06:35 2014
X-Patchwork-Submitter: Mark Rutland <mark.rutland@arm.com>
X-Patchwork-Id: 39608
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland <mark.rutland@arm.com>, pawel.moll@arm.com, will.deacon@arm.com, punit.agrawal@arm.com, sboyd@codeaurora.org, drew.richardson@arm.com
Subject: [PATCHv2 5/9] arm: perf: limit size of accounting data
Date: Mon, 27 Oct 2014 12:06:35 +0000
Message-Id: <1414411599-1938-6-git-send-email-mark.rutland@arm.com>
In-Reply-To: <1414411599-1938-1-git-send-email-mark.rutland@arm.com>
References: <1414411599-1938-1-git-send-email-mark.rutland@arm.com>

Commit 3fc2c83087 (ARM: perf: remove event limit from pmu_hw_events) got
rid of the upper limit on the number of events an arm_pmu could handle,
but introduced additional complexity and placed a burden on each PMU
driver to allocate accounting data somehow. So far this has not generally
been useful, as the only users of arm_pmu are the CPU backend and the CCI
driver.

Now that the CCI driver plugs into the perf subsystem directly, we can
remove some of the complexities that get in the way of supporting
heterogeneous CPU PMUs.
This patch restores the original limits on pmu_hw_events fields such that
the pmu_hw_events data can be allocated as a contiguous block. This will
simplify dynamic pmu_hw_events allocation in later patches.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
---
 arch/arm/include/asm/pmu.h       | 4 ++--
 arch/arm/kernel/perf_event.c     | 4 +---
 arch/arm/kernel/perf_event_cpu.c | 4 ----
 3 files changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/arm/include/asm/pmu.h b/arch/arm/include/asm/pmu.h
index ff39290..3d7e30b 100644
--- a/arch/arm/include/asm/pmu.h
+++ b/arch/arm/include/asm/pmu.h
@@ -68,13 +68,13 @@ struct pmu_hw_events {
 	/*
 	 * The events that are active on the PMU for the given index.
 	 */
-	struct perf_event **events;
+	struct perf_event *events[ARMPMU_MAX_HWEVENTS];
 
 	/*
 	 * A 1 bit for an index indicates that the counter is being used for
 	 * an event. A 0 means that the counter can be used.
 	 */
-	unsigned long *used_mask;
+	DECLARE_BITMAP(used_mask, ARMPMU_MAX_HWEVENTS);
 
 	/*
 	 * Hardware lock to serialize accesses to PMU registers. Needed for the
diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
index 7ffb267..8648107 100644
--- a/arch/arm/kernel/perf_event.c
+++ b/arch/arm/kernel/perf_event.c
@@ -275,14 +275,12 @@ validate_group(struct perf_event *event)
 {
 	struct perf_event *sibling, *leader = event->group_leader;
 	struct pmu_hw_events fake_pmu;
-	DECLARE_BITMAP(fake_used_mask, ARMPMU_MAX_HWEVENTS);
 
 	/*
 	 * Initialise the fake PMU. We only need to populate the
 	 * used_mask for the purposes of validation.
 	 */
-	memset(fake_used_mask, 0, sizeof(fake_used_mask));
-	fake_pmu.used_mask = fake_used_mask;
+	memset(&fake_pmu.used_mask, 0, sizeof(fake_pmu.used_mask));
 
 	if (!validate_event(&fake_pmu, leader))
 		return -EINVAL;
diff --git a/arch/arm/kernel/perf_event_cpu.c b/arch/arm/kernel/perf_event_cpu.c
index 64adf397..1528d3c 100644
--- a/arch/arm/kernel/perf_event_cpu.c
+++ b/arch/arm/kernel/perf_event_cpu.c
@@ -36,8 +36,6 @@
 static struct arm_pmu *cpu_pmu;
 
 static DEFINE_PER_CPU(struct arm_pmu *, percpu_pmu);
-static DEFINE_PER_CPU(struct perf_event * [ARMPMU_MAX_HWEVENTS], hw_events);
-static DEFINE_PER_CPU(unsigned long [BITS_TO_LONGS(ARMPMU_MAX_HWEVENTS)], used_mask);
 static DEFINE_PER_CPU(struct pmu_hw_events, cpu_hw_events);
 
 /*
@@ -172,8 +170,6 @@ static void cpu_pmu_init(struct arm_pmu *cpu_pmu)
 	int cpu;
 	for_each_possible_cpu(cpu) {
 		struct pmu_hw_events *events = &per_cpu(cpu_hw_events, cpu);
-		events->events = per_cpu(hw_events, cpu);
-		events->used_mask = per_cpu(used_mask, cpu);
 		raw_spin_lock_init(&events->pmu_lock);
 		per_cpu(percpu_pmu, cpu) = cpu_pmu;
 	}
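
Not part of the patch itself: the sketch below is a standalone userspace C
illustration of why the fixed-size fields matter. With the events array and
used_mask embedded in struct pmu_hw_events, all of a PMU's accounting data
fits in one contiguous allocation, rather than requiring separate arrays to
be wired up through pointers for each CPU. ARMPMU_MAX_HWEVENTS, BITS_TO_LONGS
and the struct layout mirror the kernel definitions; the calloc() in main()
is a hypothetical stand-in for the dynamic per-cpu allocation that later
patches perform in the kernel.

/*
 * Standalone sketch (userspace C, not kernel code) of the effect of the
 * fixed-size pmu_hw_events fields. DECLARE_BITMAP(used_mask, n) in the
 * kernel expands to an unsigned long array of BITS_TO_LONGS(n) entries,
 * which is spelled out directly here.
 */
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

#define ARMPMU_MAX_HWEVENTS	32
#define BITS_PER_LONG		(sizeof(unsigned long) * CHAR_BIT)
#define BITS_TO_LONGS(nr)	(((nr) + BITS_PER_LONG - 1) / BITS_PER_LONG)

struct perf_event;			/* opaque for this sketch */

struct pmu_hw_events {
	/* Fixed-size members: no external arrays to hook up per CPU. */
	struct perf_event *events[ARMPMU_MAX_HWEVENTS];
	unsigned long used_mask[BITS_TO_LONGS(ARMPMU_MAX_HWEVENTS)];
};

int main(void)
{
	int ncpus = 4;	/* pretend CPU count */

	/* One contiguous, zeroed allocation covers every CPU's accounting data. */
	struct pmu_hw_events *hw_events = calloc(ncpus, sizeof(*hw_events));

	if (!hw_events)
		return 1;

	printf("per-cpu block: %zu bytes, total: %zu bytes\n",
	       sizeof(*hw_events), (size_t)ncpus * sizeof(*hw_events));

	free(hw_events);
	return 0;
}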