From patchwork Mon Dec 22 09:39:45 2014
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 42509
From: Daniel Thompson <daniel.thompson@linaro.org>
To: Russell King, Will Deacon
Cc: Daniel Thompson, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, Peter Zijlstra, Paul Mackerras,
    Ingo Molnar, Arnaldo Carvalho de Melo, patches@linaro.org,
    linaro-kernel@lists.linaro.org, John Stultz, Sumit Semwal
Subject: [PATCH 3.19-rc1 v3] arm: perf: Prevent wraparound during overflow
Date: Mon, 22 Dec 2014 09:39:45 +0000
Message-Id: <1419241185-31317-1-git-send-email-daniel.thompson@linaro.org>
In-Reply-To: <1416412346-8759-1-git-send-email-daniel.thompson@linaro.org>
References: <1416412346-8759-1-git-send-email-daniel.thompson@linaro.org>
X-Mailer: git-send-email 1.9.3

If the overflow threshold for a counter is set above or near the
0xffffffff boundary then the kernel may lose track of the overflow,
causing only events that occur *after* the overflow to be recorded.
Specifically, the problem occurs when the value of the performance
counter overtakes its original programmed value due to wraparound.

Typical solutions to this problem are either to avoid programming
values likely to be overtaken or to treat the overflow bit as the 33rd
bit of the counter. It's somewhat fiddly to refactor the code to
correctly handle the 33rd bit during irqsave sections (context
switches, for example), so instead we take the simpler approach of
avoiding values likely to be overtaken.

We set the limit to half of max_period because this matches the limit
imposed in __hw_perf_event_init(). This causes a doubling of the
interrupt rate for large threshold values; however, even with a very
fast counter ticking at 4GHz the interrupt rate would only be ~1Hz.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Acked-by: Will Deacon
---

Notes:
    v3:
    * Rebased on 3.19-rc1 and dropped the arm64 patches (which are
      already upstream).

    v2:
    * Remove the redundant cast to s64 (Will Deacon).

 arch/arm/kernel/perf_event.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

-- 
1.9.3

diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
index f7c65adaa428..557e128e4df0 100644
--- a/arch/arm/kernel/perf_event.c
+++ b/arch/arm/kernel/perf_event.c
@@ -116,8 +116,14 @@ int armpmu_event_set_period(struct perf_event *event)
 		ret = 1;
 	}
 
-	if (left > (s64)armpmu->max_period)
-		left = armpmu->max_period;
+	/*
+	 * Limit the maximum period to prevent the counter value
+	 * from overtaking the one we are about to program. In
+	 * effect we are reducing max_period to account for
+	 * interrupt latency (and we are being very conservative).
+	 */
+	if (left > (armpmu->max_period >> 1))
+		left = armpmu->max_period >> 1;
 
 	local64_set(&hwc->prev_count, (u64)-left);
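
[Editor's illustration, not part of the patch: the standalone sketch
below models the failure mode the commit message describes. The names
(MAX_PERIOD, simulate) and the fixed 16-tick interrupt latency are
hypothetical, and the 32-bit up-counter programmed with (u64)-left is a
deliberately simplified stand-in for the real PMU hardware.]

#include <stdint.h>
#include <stdio.h>

/* Illustrative 32-bit counter limit; plays the role of max_period. */
#define MAX_PERIOD 0xffffffffULL

/*
 * Program the counter with (u64)-left, as armpmu_event_set_period()
 * does, advance it 'left' ticks to the overflow point plus a few
 * further ticks of interrupt latency, and test whether it has
 * overtaken the value that was originally programmed into it.
 */
static void simulate(uint64_t left, uint32_t latency_ticks)
{
	uint32_t start = (uint32_t)-left;	/* programmed start value */
	uint32_t counter = start + (uint32_t)left + latency_ticks;

	/*
	 * Once the wrapped counter passes its start value, a handler
	 * comparing against prev_count can no longer tell that an
	 * overflow (plus latency_ticks further events) occurred.
	 */
	printf("left=0x%llx latency=%u overtaken=%s\n",
	       (unsigned long long)left, latency_ticks,
	       counter >= start ? "yes" : "no");
}

int main(void)
{
	simulate(MAX_PERIOD, 16);	/* threshold near 0xffffffff: lost */
	simulate(MAX_PERIOD >> 1, 16);	/* capped at max_period >> 1: safe */
	return 0;
}

Running the sketch prints "overtaken=yes" for the near-maximum period
and "overtaken=no" once the period is capped, which illustrates the
headroom the patch gains by limiting periods to max_period >> 1.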