From patchwork Fri Nov 21 16:24:26 2014
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 41327
From: Daniel Thompson <daniel.thompson@linaro.org>
To: Russell King, Will Deacon, Catalin Marinas
Cc: Daniel Thompson, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Peter Zijlstra, Paul Mackerras, Ingo Molnar, Arnaldo Carvalho de Melo, patches@linaro.org, linaro-kernel@lists.linaro.org, John Stultz, Sumit Semwal
Subject: [PATCH v2 1/2] arm: perf: Prevent wraparound during overflow
Date: Fri, 21 Nov 2014 16:24:26 +0000
Message-Id: <1416587067-3220-2-git-send-email-daniel.thompson@linaro.org>
In-Reply-To: <1416587067-3220-1-git-send-email-daniel.thompson@linaro.org>
References: <1416412346-8759-1-git-send-email-daniel.thompson@linaro.org> <1416587067-3220-1-git-send-email-daniel.thompson@linaro.org>

If the overflow threshold for a counter is set above or near the
0xffffffff boundary then the kernel may lose track of the overflow,
causing only events that occur *after* the overflow to be recorded.
Specifically, the problem occurs when the value of the performance
counter overtakes its original programmed value due to wraparound.

Typical solutions to this problem are either to avoid programming in
values likely to be overtaken or to treat the overflow bit as the 33rd
bit of the counter.
It's somewhat fiddly to refactor the code to correctly handle the 33rd
bit during irqsave sections (context switches, for example), so instead
we take the simpler approach of avoiding values likely to be overtaken.

We set the limit to half of max_period because this matches the limit
imposed in __hw_perf_event_init(). This causes a doubling of the
interrupt rate for large threshold values; however, even with a very
fast counter ticking at 4GHz the interrupt rate would only be ~1Hz.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
---
 arch/arm/kernel/perf_event.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
index 266cba46db3e..ab68833c1e31 100644
--- a/arch/arm/kernel/perf_event.c
+++ b/arch/arm/kernel/perf_event.c
@@ -115,8 +115,14 @@ int armpmu_event_set_period(struct perf_event *event)
 		ret = 1;
 	}
 
-	if (left > (s64)armpmu->max_period)
-		left = armpmu->max_period;
+	/*
+	 * Limit the maximum period to prevent the counter value
+	 * from overtaking the one we are about to program. In
+	 * effect we are reducing max_period to account for
+	 * interrupt latency (and we are being very conservative).
+	 */
+	if (left > (armpmu->max_period >> 1))
+		left = armpmu->max_period >> 1;
 
 	local64_set(&hwc->prev_count, (u64)-left);