From patchwork Wed Nov 19 15:52:26 2014
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 41188
From: Daniel Thompson <daniel.thompson@linaro.org>
To: Will Deacon, Russell King
Cc: Daniel Thompson, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Peter Zijlstra, Paul Mackerras, Ingo Molnar, Arnaldo Carvalho de Melo, patches@linaro.org, linaro-kernel@lists.linaro.org, John Stultz, Sumit Semwal
Subject: [PATCH] arm: perf: Prevent wraparound during overflow
Date: Wed, 19 Nov 2014 15:52:26 +0000
Message-Id: <1416412346-8759-1-git-send-email-daniel.thompson@linaro.org>

If the overflow threshold for a counter is set above or near the
0xffffffff boundary then the kernel may lose track of the overflow,
causing only events that occur *after* the overflow to be recorded.
Specifically, the problem occurs when the value of the performance
counter overtakes its originally programmed value due to wraparound.

Typical solutions to this problem are either to avoid programming in
values likely to be overtaken or to treat the overflow bit as the 33rd
bit of the counter. It's somewhat fiddly to refactor the code to
correctly handle the 33rd bit during irqsave sections (context
switches, for example), so instead we take the simpler approach of
avoiding values likely to be overtaken.
We set the limit to half of max_period because this matches the limit
imposed in __hw_perf_event_init(). This causes a doubling of the
interrupt rate for large threshold values; however, even with a very
fast counter ticking at 4GHz the interrupt rate would only be ~1Hz.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
---

Notes:
    There is similar code in the arm64 tree which retains the
    assumptions of the original arm code regarding 32-bit wide
    performance counters. If this patch doesn't get beaten up during
    review I'll also share a similar patch for arm64.

 arch/arm/kernel/perf_event.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

-- 
1.9.3

diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
index 266cba46db3e..b50a770f8c99 100644
--- a/arch/arm/kernel/perf_event.c
+++ b/arch/arm/kernel/perf_event.c
@@ -115,8 +115,14 @@ int armpmu_event_set_period(struct perf_event *event)
 		ret = 1;
 	}
 
-	if (left > (s64)armpmu->max_period)
-		left = armpmu->max_period;
+	/*
+	 * Limit the maximum period to prevent the counter value
+	 * from overtaking the one we are about to program. In
+	 * effect we are reducing max_period to account for
+	 * interrupt latency (and we are being very conservative).
+	 */
+	if (left > (s64)(armpmu->max_period >> 1))
+		left = armpmu->max_period >> 1;
 
 	local64_set(&hwc->prev_count, (u64)-left);