From patchwork Tue Oct 21 13:11:22 2014
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 39144
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland <mark.rutland@arm.com>, pawel.moll@arm.com,
 sboyd@codeaurora.org, punit.agrawal@arm.com, will.deacon@arm.com,
 drew.richardson@arm.com
Subject: [PATCH 6/8] arm: perf: fold percpu_pmu into pmu_hw_events
Date: Tue, 21 Oct 2014 14:11:22 +0100
Message-Id: <1413897084-19715-7-git-send-email-mark.rutland@arm.com>
In-Reply-To: <1413897084-19715-1-git-send-email-mark.rutland@arm.com>
References: <1413897084-19715-1-git-send-email-mark.rutland@arm.com>

Currently the percpu_pmu pointers used as percpu_irq dev_id values are
defined separately from the other per-cpu accounting data, which makes
dynamically allocating the data (as will be required for systems with
heterogeneous CPUs) difficult.

This patch moves the percpu_pmu pointers into pmu_hw_events (which is
itself allocated per cpu), which will allow for easier dynamic
allocation. Both percpu and regular IRQs are requested using percpu_pmu
pointers as tokens, freeing us from having to know whether an IRQ is
percpu within the handler, and thus avoiding a radix tree lookup on the
handler path.
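To make the convention concrete: with dev_id always pointing at a
struct arm_pmu * slot, the dispatcher needs exactly one dereference.
A minimal standalone sketch (the demo_* names are illustrative only,
not code from this patch):

#include <linux/interrupt.h>

/* Illustrative stand-in for struct arm_pmu; demo_* names are made up. */
struct demo_pmu {
	irqreturn_t (*handle_irq)(int irq, void *dev);
};

/*
 * dev always arrives as a struct demo_pmu ** -- either the current
 * CPU's instance (percpu IRQ) or a per_cpu_ptr() that was passed to
 * request_irq() (regular IRQ) -- so a single dereference suffices and
 * irq_is_percpu() is never consulted here.
 */
static irqreturn_t demo_dispatch_irq(int irq, void *dev)
{
	struct demo_pmu *pmu = *(struct demo_pmu **)dev;

	return pmu->handle_irq(irq, pmu);
}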
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
---
 arch/arm/include/asm/pmu.h       |  6 ++++++
 arch/arm/kernel/perf_event.c     | 14 +++++++++-----
 arch/arm/kernel/perf_event_cpu.c | 14 ++++++------
 3 files changed, 23 insertions(+), 11 deletions(-)

diff --git a/arch/arm/include/asm/pmu.h b/arch/arm/include/asm/pmu.h
index f273dd2..cc01498 100644
--- a/arch/arm/include/asm/pmu.h
+++ b/arch/arm/include/asm/pmu.h
@@ -81,6 +81,12 @@ struct pmu_hw_events {
 	 * read/modify/write sequences.
 	 */
 	raw_spinlock_t		pmu_lock;
+
+	/*
+	 * When using percpu IRQs, we need a percpu dev_id. Place it here as we
+	 * already have to allocate this struct per cpu.
+	 */
+	struct arm_pmu		*percpu_pmu;
 };
 
 struct arm_pmu {
diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
index 6bfa229..70ea8bd 100644
--- a/arch/arm/kernel/perf_event.c
+++ b/arch/arm/kernel/perf_event.c
@@ -304,17 +304,21 @@ static irqreturn_t armpmu_dispatch_irq(int irq, void *dev)
 	int ret;
 	u64 start_clock, finish_clock;
 
-	if (irq_is_percpu(irq))
-		dev = *(void **)dev;
-	armpmu = dev;
+	/*
+	 * we request the IRQ with a (possibly percpu) struct arm_pmu**, but
+	 * the handlers expect a struct arm_pmu*. The percpu_irq framework will
+	 * do any necessary shifting, we just need to perform the first
+	 * dereference.
+	 */
+	armpmu = *(void **)dev;
 	plat_device = armpmu->plat_device;
 	plat = dev_get_platdata(&plat_device->dev);
 
 	start_clock = sched_clock();
 	if (plat && plat->handle_irq)
-		ret = plat->handle_irq(irq, dev, armpmu->handle_irq);
+		ret = plat->handle_irq(irq, armpmu, armpmu->handle_irq);
 	else
-		ret = armpmu->handle_irq(irq, dev);
+		ret = armpmu->handle_irq(irq, armpmu);
 	finish_clock = sched_clock();
 
 	perf_sample_event_took(finish_clock - start_clock);
diff --git a/arch/arm/kernel/perf_event_cpu.c b/arch/arm/kernel/perf_event_cpu.c
index 13fab83..cd95388 100644
--- a/arch/arm/kernel/perf_event_cpu.c
+++ b/arch/arm/kernel/perf_event_cpu.c
@@ -35,7 +35,6 @@
 
 /* Set at runtime when we know what CPU type we are. */
 static struct arm_pmu *cpu_pmu;
-static DEFINE_PER_CPU(struct arm_pmu *, percpu_pmu);
 static DEFINE_PER_CPU(struct pmu_hw_events, cpu_hw_events);
 
 /*
@@ -85,20 +84,21 @@ static void cpu_pmu_free_irq(struct arm_pmu *cpu_pmu)
 {
 	int i, irq, irqs;
 	struct platform_device *pmu_device = cpu_pmu->plat_device;
+	struct pmu_hw_events __percpu *hw_events = cpu_pmu->hw_events;
 
 	irqs = min(pmu_device->num_resources, num_possible_cpus());
 
 	irq = platform_get_irq(pmu_device, 0);
 	if (irq >= 0 && irq_is_percpu(irq)) {
 		on_each_cpu(cpu_pmu_disable_percpu_irq, &irq, 1);
-		free_percpu_irq(irq, &percpu_pmu);
+		free_percpu_irq(irq, &hw_events->percpu_pmu);
 	} else {
 		for (i = 0; i < irqs; ++i) {
 			if (!cpumask_test_and_clear_cpu(i, &cpu_pmu->active_irqs))
 				continue;
 			irq = platform_get_irq(pmu_device, i);
 			if (irq >= 0)
-				free_irq(irq, cpu_pmu);
+				free_irq(irq, per_cpu_ptr(&hw_events->percpu_pmu, i));
 		}
 	}
 }
@@ -107,6 +107,7 @@ static int cpu_pmu_request_irq(struct arm_pmu *cpu_pmu, irq_handler_t handler)
 {
 	int i, err, irq, irqs;
 	struct platform_device *pmu_device = cpu_pmu->plat_device;
+	struct pmu_hw_events __percpu *hw_events = cpu_pmu->hw_events;
 
 	if (!pmu_device)
 		return -ENODEV;
@@ -119,7 +120,8 @@ static int cpu_pmu_request_irq(struct arm_pmu *cpu_pmu, irq_handler_t handler)
 
 	irq = platform_get_irq(pmu_device, 0);
 	if (irq >= 0 && irq_is_percpu(irq)) {
-		err = request_percpu_irq(irq, handler, "arm-pmu", &percpu_pmu);
+		err = request_percpu_irq(irq, handler, "arm-pmu",
+					 &hw_events->percpu_pmu);
 		if (err) {
 			pr_err("unable to request IRQ%d for ARM PMU counters\n",
 				irq);
@@ -146,7 +148,7 @@ static int cpu_pmu_request_irq(struct arm_pmu *cpu_pmu, irq_handler_t handler)
 
 			err = request_irq(irq, handler,
 					  IRQF_NOBALANCING | IRQF_NO_THREAD, "arm-pmu",
-					  cpu_pmu);
+					  per_cpu_ptr(&hw_events->percpu_pmu, i));
 			if (err) {
 				pr_err("unable to request IRQ%d for ARM PMU counters\n",
 					irq);
@@ -166,7 +168,7 @@ static void cpu_pmu_init(struct arm_pmu *cpu_pmu)
 	for_each_possible_cpu(cpu) {
 		struct pmu_hw_events *events = &per_cpu(cpu_hw_events, cpu);
 		raw_spin_lock_init(&events->pmu_lock);
-		per_cpu(percpu_pmu, cpu) = cpu_pmu;
+		events->percpu_pmu = cpu_pmu;
 	}
 
 	cpu_pmu->hw_events = &cpu_hw_events;
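
For completeness, a sketch of the request-side wiring under the same
convention; the demo_* names are again hypothetical, but the
request_percpu_irq()/request_irq() calls mirror those in
cpu_pmu_request_irq() above:

#include <linux/interrupt.h>
#include <linux/irqdesc.h>
#include <linux/percpu.h>

struct demo_pmu;

struct demo_hw_events {
	struct demo_pmu *percpu_pmu;	/* dev_id token, one slot per cpu */
};

static int demo_request(int irq, irq_handler_t handler,
			struct demo_hw_events __percpu *hw_events, int cpu)
{
	/* percpu IRQ: the core passes each handler its own CPU's slot */
	if (irq_is_percpu(irq))
		return request_percpu_irq(irq, handler, "demo-pmu",
					  &hw_events->percpu_pmu);

	/* regular IRQ: hand over one specific CPU's slot explicitly */
	return request_irq(irq, handler, IRQF_NOBALANCING, "demo-pmu",
			   per_cpu_ptr(&hw_events->percpu_pmu, cpu));
}

Either way the handler receives a pointer to that CPU's percpu_pmu
slot, which is why the dispatcher can perform one unconditional
dereference rather than branching on irq_is_percpu().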