From patchwork Wed Jun 16 19:28:59 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Aman Priyadarshi
X-Patchwork-Id: 461414
From: Aman Priyadarshi
To: Greg Kroah-Hartman
CC: Marc Zyngier, Will Deacon, Alexander Graf, Mark Rutland, Ali Saidi
Subject: [PATCH] arm64: perf: Disable PMU while processing counter overflows
Date: Wed, 16 Jun 2021 21:28:59 +0200
Message-ID: <20210616192859.21708-1-apeureka@amazon.de>
X-Mailer: git-send-email 2.17.1
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Suzuki K Poulose

[ Upstream commit 3cce50dfec4a5b0414c974190940f47dd32c6dee ]

The arm64 PMU updates the event counters and reprograms the counters in
the overflow IRQ handler without disabling the PMU. This could
potentially cause skews for group counters, where the overflowed
counters may lose some event counts while they are being reprogrammed.
To prevent this, disable the PMU while we process the counter overflows
and enable it right back when we are done.

This patch also moves the PMU stop/start routines to avoid a forward
declaration.

Suggested-by: Mark Rutland
Cc: Will Deacon
Acked-by: Mark Rutland
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
Signed-off-by: Aman Priyadarshi
Cc: stable@vger.kernel.org
---
 arch/arm64/kernel/perf_event.c | 50 +++++++++++++++++++---------------
 1 file changed, 28 insertions(+), 22 deletions(-)
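As background for the diff below, here is a minimal, stand-alone sketch of
the pattern the handler adopts: freeze the whole PMU, drain and re-arm each
counter, then resume counting. This is a toy user-space model, not kernel
code; fake_pmu, pmu_stop(), pmu_start(), pmu_event() and handle_overflow()
are hypothetical names invented for illustration only.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_COUNTERS 2

struct fake_pmu {
	bool enabled;			/* models the PMCR.E enable bit */
	uint64_t counter[NUM_COUNTERS];
};

/* Models armv8pmu_stop()/armv8pmu_start(): toggle the global enable. */
static void pmu_stop(struct fake_pmu *pmu)  { pmu->enabled = false; }
static void pmu_start(struct fake_pmu *pmu) { pmu->enabled = true; }

/* An "event" only lands while the PMU is enabled. */
static void pmu_event(struct fake_pmu *pmu, int idx)
{
	if (pmu->enabled)
		pmu->counter[idx]++;
}

/*
 * Models the overflow handler: with the PMU frozen, every group
 * member is drained and re-armed against the same stopped state,
 * so no counter keeps running while its siblings are being reset.
 */
static void handle_overflow(struct fake_pmu *pmu)
{
	pmu_stop(pmu);
	for (int idx = 0; idx < NUM_COUNTERS; idx++) {
		printf("counter %d: drained %llu events\n", idx,
		       (unsigned long long)pmu->counter[idx]);
		pmu->counter[idx] = 0;	/* "reprogram" the counter */
		pmu_event(pmu, idx);	/* ignored uniformly: PMU is stopped */
	}
	pmu_start(pmu);
}

int main(void)
{
	struct fake_pmu pmu = { .enabled = true, .counter = { 5, 9 } };
	handle_overflow(&pmu);
	return 0;
}

The trade-off mirrored here is a short blind window while the PMU is
stopped; the change accepts that in exchange for group members staying
consistent with one another.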
--
2.17.1

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 53df84b2a07f..4ee1228d29eb 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -670,6 +670,28 @@ static void armv8pmu_disable_event(struct perf_event *event)
 	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
+static void armv8pmu_start(struct arm_pmu *cpu_pmu)
+{
+	unsigned long flags;
+	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+
+	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	/* Enable all counters */
+	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
+	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+}
+
+static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
+{
+	unsigned long flags;
+	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+
+	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	/* Disable all counters */
+	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
+	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+}
+
 static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
 {
 	u32 pmovsr;
@@ -695,6 +717,11 @@ static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
 	 */
 	regs = get_irq_regs();
 
+	/*
+	 * Stop the PMU while processing the counter overflows
+	 * to prevent skews in group events.
+	 */
+	armv8pmu_stop(cpu_pmu);
 	for (idx = 0; idx < cpu_pmu->num_events; ++idx) {
 		struct perf_event *event = cpuc->events[idx];
 		struct hw_perf_event *hwc;
@@ -719,6 +746,7 @@ static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
 		if (perf_event_overflow(event, &data, regs))
 			cpu_pmu->disable(event);
 	}
+	armv8pmu_start(cpu_pmu);
 
 	/*
 	 * Handle the pending perf events.
@@ -732,28 +760,6 @@ static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
 	return IRQ_HANDLED;
 }
 
-static void armv8pmu_start(struct arm_pmu *cpu_pmu)
-{
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-	/* Enable all counters */
-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
-}
-
-static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
-{
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-	/* Disable all counters */
-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
-}
-
 static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 				  struct perf_event *event)
 {
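A note on the code motion in the hunks above: armv8pmu_start() and
armv8pmu_stop() are moved verbatim, not modified. Each still only toggles
the ARMV8_PMU_PMCR_E bit in PMCR while holding the per-CPU pmu_lock;
placing them above armv8pmu_handle_irq() simply lets the handler call them
without forward declarations.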