From patchwork Fri Jan 23 00:09:17 2015
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 43545
From: John Stultz <john.stultz@linaro.org>
To: Linux Kernel Mailing List
Cc: John Stultz, Dave Jones, Linus Torvalds, Thomas Gleixner,
	Richard Cochran, Prarit Bhargava, Stephen Boyd, Ingo Molnar,
	Peter Zijlstra
Subject: [PATCH 02/12] clocksource: Simplify logic around clocksource wrapping safety margins
Date: Thu, 22 Jan 2015 16:09:17 -0800
Message-Id: <1421971767-17707-3-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1421971767-17707-1-git-send-email-john.stultz@linaro.org>
References: <1421971767-17707-1-git-send-email-john.stultz@linaro.org>

The clocksource logic has a number of places where we try to include a
safety margin. Most of these are 12.5% safety margins, but they are
inconsistently applied and sometimes applied on top of each other.

Additionally, the previous patch corrected an issue where we
unintentionally created, in effect, a 50% safety margin, which these
12.5% margins were then added on top of.

So to simplify the logic here, this patch removes the various 12.5%
margins and consolidates the margin handling in one place:
clocks_calc_max_nsecs(). Additionally, Linus prefers a 50% safety
margin, as it allows bad clock values to be caught more easily.

This should really have no net effect, since the issue corrected
earlier meant margins greater than 50% were already being used without
problems.

Cc: Dave Jones
Cc: Linus Torvalds
Cc: Thomas Gleixner
Cc: Richard Cochran
Cc: Prarit Bhargava
Cc: Stephen Boyd
Cc: Ingo Molnar
Cc: Peter Zijlstra
Acked-by: Stephen Boyd (for sched_clock.c bit)
Signed-off-by: John Stultz
---
(A small userspace sketch illustrating the before/after margin
arithmetic is appended below the diff.)

 kernel/time/clocksource.c | 26 ++++++++++++--------------
 kernel/time/sched_clock.c |  4 ++--
 2 files changed, 14 insertions(+), 16 deletions(-)

diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
index c14cd03..e837ffd1 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -545,6 +545,9 @@ static u32 clocksource_max_adjustment(struct clocksource *cs)
  * @shift: cycle to nanosecond divisor (power of two)
  * @maxadj: maximum adjustment value to mult (~11%)
  * @mask: bitmask for two's complement subtraction of non 64 bit counters
+ *
+ * NOTE: This function includes a safety margin of 50%, so that bad clock values
+ * can be detected.
 */
 u64 clocks_calc_max_nsecs(u32 mult, u32 shift, u32 maxadj, u64 mask)
 {
@@ -566,11 +569,14 @@ u64 clocks_calc_max_nsecs(u32 mult, u32 shift, u32 maxadj, u64 mask)
 	max_cycles = min(max_cycles, mask);
 	max_nsecs = clocksource_cyc2ns(max_cycles, mult - maxadj, shift);
 
+	/* Return 50% of the actual maximum, so we can detect bad values */
+	max_nsecs >>= 1;
+
 	return max_nsecs;
 }
 
 /**
- * clocksource_max_deferment - Returns max time the clocksource can be deferred
+ * clocksource_max_deferment - Returns max time the clocksource should be deferred
  * @cs: Pointer to clocksource
  *
  */
@@ -580,13 +586,7 @@ static u64 clocksource_max_deferment(struct clocksource *cs)
 
 	max_nsecs = clocks_calc_max_nsecs(cs->mult, cs->shift, cs->maxadj,
 					  cs->mask);
-	/*
-	 * To ensure that the clocksource does not wrap whilst we are idle,
-	 * limit the time the clocksource can be deferred by 12.5%. Please
-	 * note a margin of 12.5% is used because this can be computed with
-	 * a shift, versus say 10% which would require division.
-	 */
-	return max_nsecs - (max_nsecs >> 3);
+	return max_nsecs;
 }
 
 #ifndef CONFIG_ARCH_USES_GETTIMEOFFSET
@@ -735,10 +735,9 @@ void __clocksource_updatefreq_scale(struct clocksource *cs, u32 scale, u32 freq)
 	 * conversion precision. 10 minutes is still a reasonable
 	 * amount. That results in a shift value of 24 for a
 	 * clocksource with mask >= 40bit and f >= 4GHz. That maps to
-	 * ~ 0.06ppm granularity for NTP. We apply the same 12.5%
-	 * margin as we do in clocksource_max_deferment()
+	 * ~ 0.06ppm granularity for NTP.
 	 */
-	sec = (cs->mask - (cs->mask >> 3));
+	sec = cs->mask;
 	do_div(sec, freq);
 	do_div(sec, scale);
 	if (!sec)
@@ -750,9 +749,8 @@ void __clocksource_updatefreq_scale(struct clocksource *cs, u32 scale, u32 freq)
 			       NSEC_PER_SEC / scale, sec * scale);
 
 	/*
-	 * for clocksources that have large mults, to avoid overflow.
-	 * Since mult may be adjusted by ntp, add an safety extra margin
-	 *
+	 * Ensure clocksources that have large mults don't overflow
+	 * when adjusted.
 	 */
 	cs->maxadj = clocksource_max_adjustment(cs);
 	while ((cs->mult + cs->maxadj < cs->mult)
diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index 01d2d15..c794b84 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -125,9 +125,9 @@ void __init sched_clock_register(u64 (*read)(void), int bits,
 
 	new_mask = CLOCKSOURCE_MASK(bits);
 
-	/* calculate how many ns until we wrap */
+	/* calculate how many ns until we risk wrapping */
 	wrap = clocks_calc_max_nsecs(new_mult, new_shift, 0, new_mask);
-	new_wrap_kt = ns_to_ktime(wrap - (wrap >> 3));
+	new_wrap_kt = ns_to_ktime(wrap);
 
 	/* update epoch for new counter and update epoch_ns from old counter*/
 	new_epoch = read();
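
To make the margin change concrete, here is a small, self-contained
userspace sketch. It is not part of the patch, and the mask/mult/shift
values are made-up examples (roughly a 32-bit counter at 24 MHz) rather
than values taken from any real clocksource:

/*
 * Illustrative userspace sketch only -- not kernel code and not part of
 * the patch above.  It compares the old scheme, where each caller shaved
 * 12.5% off the true wrap time, with the new scheme, where
 * clocks_calc_max_nsecs() itself returns 50% of the true maximum.
 */
#include <stdint.h>
#include <stdio.h>

/* (cycles * mult) >> shift, like the kernel's clocksource_cyc2ns() */
static uint64_t cyc2ns(uint64_t cycles, uint32_t mult, uint32_t shift)
{
	return (cycles * mult) >> shift; /* no overflow for these example values */
}

int main(void)
{
	uint64_t mask  = 0xffffffffULL;	/* hypothetical 32-bit counter */
	uint32_t mult  = 0xa6aaaaab;	/* ~41.67 ns/cycle at shift 26 (~24 MHz) */
	uint32_t shift = 26;
	/* ~11% of mult, mirroring clocksource_max_adjustment() */
	uint32_t maxadj = (uint32_t)(((uint64_t)mult * 11) / 100);

	/* largest cycle count that cannot overflow 64 bits, capped by the mask */
	uint64_t max_cycles = UINT64_MAX / (mult + maxadj);
	if (max_cycles > mask)
		max_cycles = mask;

	/* worst-case conversion, i.e. with the maximum adjustment subtracted */
	uint64_t max_nsecs = cyc2ns(max_cycles, mult - maxadj, shift);

	/* old behaviour: callers applied a 12.5% margin on top of this value */
	uint64_t old_limit = max_nsecs - (max_nsecs >> 3);
	/* new behaviour: a single 50% margin inside clocks_calc_max_nsecs() */
	uint64_t new_limit = max_nsecs >> 1;

	printf("true max: %llu ns, old 12.5%% margin: %llu ns, new 50%% margin: %llu ns\n",
	       (unsigned long long)max_nsecs,
	       (unsigned long long)old_limit,
	       (unsigned long long)new_limit);
	return 0;
}

For these example values the true wrap limit works out to roughly 160
seconds, so the old scheme would defer up to about 140 seconds while the
new one stops at about 80; the tighter limit is deliberate, since (as the
commit message notes) it makes bad clock values easier to catch.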