From patchwork Thu Jun 1 03:07:57 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 100812
Delivered-To: patches@linaro.org
From: John Stultz
To: lkml
Cc: John Stultz, Thomas Gleixner, Ingo Molnar, Miroslav Lichvar,
    Richard Cochran, Prarit Bhargava, Stephen Boyd, Kevin Brodsky,
    Will Deacon, Daniel Mentz, stable #4.8+
Subject: [PATCH 2/3 v2] time: Fix CLOCK_MONOTONIC_RAW sub-nanosecond accounting
Date: Wed, 31 May 2017 20:07:57 -0700
Message-Id: <1496286478-13584-3-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1496286478-13584-1-git-send-email-john.stultz@linaro.org>
References: <1496286478-13584-1-git-send-email-john.stultz@linaro.org>

Due to how the MONOTONIC_RAW accumulation logic was handled, there is
the potential for a 1ns discontinuity when we do accumulations. This
small discontinuity has for the most part gone unnoticed, but since
ARM64 enabled CLOCK_MONOTONIC_RAW in its vDSO clock_gettime
implementation, we've seen failures with the inconsistency-check test
in kselftest.

This patch addresses the issue by using the same sub-ns accumulation
handling that CLOCK_MONOTONIC uses, which avoids the issue for
in-kernel users.

Since the ARM64 vDSO implementation has its own clock_gettime
calculation logic, this patch reduces the frequency of errors, but
failures are still seen. The ARM64 vDSO will need to be updated to
include the sub-nanosecond xtime_nsec values in its calculation for
this issue to be completely fixed.
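To make the failure mode concrete, here is a minimal userspace sketch of
the truncation problem (not kernel code; the mult/shift and cycle values
below are made up purely for illustration). A reader that samples the
clock just before an accumulation truncates the full product once, while
the old accumulation path truncates the per-interval nanoseconds and the
leftover delta separately, so a read straddling the accumulation can
appear to go backwards by 1ns:

/*
 * Illustrative only: SHIFT/MULT and the cycle counts are made-up values,
 * not a real clocksource's.
 */
#include <stdint.h>
#include <stdio.h>

#define SHIFT 8
#define MULT  385u	/* ~1.504 ns per cycle, in 8-bit fixed point */

int main(void)
{
	const uint64_t cycle_interval = 1000;	/* cycles per accumulation step */
	const uint64_t delta_after    = 3;	/* cycles left over after it    */
	const uint64_t delta_before   = cycle_interval + delta_after;

	/* Read taken just before accumulation: one multiply, one truncation. */
	uint64_t before = (delta_before * MULT) >> SHIFT;

	/* Old scheme: raw_interval is pre-truncated, then the leftover delta
	 * is truncated again at read time -- two truncations lose up to 1ns. */
	uint64_t raw_interval = (cycle_interval * MULT) >> SHIFT;
	uint64_t after_old    = raw_interval + ((delta_after * MULT) >> SHIFT);

	/* New scheme: carry the shifted (sub-ns) remainder across the
	 * accumulation and truncate only once at read time.                 */
	uint64_t xtime_nsec = cycle_interval * MULT;	/* shifted nanoseconds */
	uint64_t after_new  = (xtime_nsec + delta_after * MULT) >> SHIFT;

	printf("read just before accumulation: %llu ns\n",
	       (unsigned long long)before);		/* 1508               */
	printf("read just after, old scheme:  %llu ns\n",
	       (unsigned long long)after_old);		/* 1507 -> went back   */
	printf("read just after, new scheme:  %llu ns\n",
	       (unsigned long long)after_new);		/* 1508 -> monotonic   */
	return 0;
}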
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Miroslav Lichvar
Cc: Richard Cochran
Cc: Prarit Bhargava
Cc: Stephen Boyd
Cc: Kevin Brodsky
Cc: Will Deacon
Cc: Daniel Mentz
Cc: stable #4.8+
Tested-by: Daniel Mentz
Signed-off-by: John Stultz
---
v2: Address Ingo's style feedback
---
 include/linux/timekeeper_internal.h |  4 ++--
 kernel/time/timekeeping.c           | 19 ++++++++++---------
 2 files changed, 12 insertions(+), 11 deletions(-)

--
2.7.4

diff --git a/include/linux/timekeeper_internal.h b/include/linux/timekeeper_internal.h
index 110f453..528cc86 100644
--- a/include/linux/timekeeper_internal.h
+++ b/include/linux/timekeeper_internal.h
@@ -58,7 +58,7 @@ struct tk_read_base {
  *			interval.
  * @xtime_remainder:	Shifted nano seconds left over when rounding
  *			@cycle_interval
- * @raw_interval:	Raw nano seconds accumulated per NTP interval.
+ * @raw_interval:	Shifted raw nano seconds accumulated per NTP interval.
  * @ntp_error:		Difference between accumulated time and NTP time in ntp
  *			shifted nano seconds.
  * @ntp_error_shift:	Shift conversion between clock shifted nano seconds and
@@ -100,7 +100,7 @@ struct timekeeper {
 	u64			cycle_interval;
 	u64			xtime_interval;
 	s64			xtime_remainder;
-	u32			raw_interval;
+	u64			raw_interval;
 	/* The ntp_tick_length() value currently being used.
 	 * This cached copy ensures we consistently apply the tick
 	 * length for an entire tick, as ntp_tick_length may change
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 797c73e..8eaa95c 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -282,7 +282,7 @@ static void tk_setup_internals(struct timekeeper *tk, struct clocksource *clock)
 	/* Go back from cycles -> shifted ns */
 	tk->xtime_interval = interval * clock->mult;
 	tk->xtime_remainder = ntpinterval - tk->xtime_interval;
-	tk->raw_interval = (interval * clock->mult) >> clock->shift;
+	tk->raw_interval = interval * clock->mult;
 
 	/* if changing clocks, convert xtime_nsec shift units */
 	if (old_clock) {
@@ -1994,7 +1994,7 @@ static u64 logarithmic_accumulation(struct timekeeper *tk, u64 offset,
 				    u32 shift, unsigned int *clock_set)
 {
 	u64 interval = tk->cycle_interval << shift;
-	u64 raw_nsecs;
+	u64 snsec_per_sec;
 
 	/* If the offset is smaller than a shifted interval, do nothing */
 	if (offset < interval)
@@ -2009,14 +2009,15 @@ static u64 logarithmic_accumulation(struct timekeeper *tk, u64 offset,
 	*clock_set |= accumulate_nsecs_to_secs(tk);
 
 	/* Accumulate raw time */
-	raw_nsecs = (u64)tk->raw_interval << shift;
-	raw_nsecs += tk->raw_time.tv_nsec;
-	if (raw_nsecs >= NSEC_PER_SEC) {
-		u64 raw_secs = raw_nsecs;
-		raw_nsecs = do_div(raw_secs, NSEC_PER_SEC);
-		tk->raw_time.tv_sec += raw_secs;
+	tk->tkr_raw.xtime_nsec += (u64)tk->raw_time.tv_nsec << tk->tkr_raw.shift;
+	tk->tkr_raw.xtime_nsec += tk->raw_interval << shift;
+	snsec_per_sec = (u64)NSEC_PER_SEC << tk->tkr_raw.shift;
+	while (tk->tkr_raw.xtime_nsec >= snsec_per_sec) {
+		tk->tkr_raw.xtime_nsec -= snsec_per_sec;
+		tk->raw_time.tv_sec++;
 	}
-	tk->raw_time.tv_nsec = raw_nsecs;
+	tk->raw_time.tv_nsec = tk->tkr_raw.xtime_nsec >> tk->tkr_raw.shift;
+	tk->tkr_raw.xtime_nsec -= (u64)tk->raw_time.tv_nsec << tk->tkr_raw.shift;
 
 	/* Accumulate error between NTP and clock interval */
 	tk->ntp_error += tk->ntp_tick << shift;
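As the commit message notes, the ARM64 vDSO computes MONOTONIC_RAW on its
own and will need a matching change. Below is a hedged sketch of what such
a reader has to do once the timekeeper carries sub-nanosecond state: add
the shifted xtime_nsec remainder before the single final shift. The struct
layout, field names, and values here are simplified stand-ins for
illustration, not the real vdso_data layout or the actual ARM64 fix.

/*
 * Hypothetical sketch only: names and layout are simplified stand-ins,
 * not the actual ARM64 vdso_data or timekeeper definitions.
 */
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ull

struct raw_snapshot {
	uint64_t cycle_last;	/* counter value at the last update            */
	uint64_t xtime_sec;	/* whole seconds accumulated so far            */
	uint64_t xtime_nsec;	/* nanoseconds << shift, keeps the sub-ns part */
	uint32_t mult;
	uint32_t shift;
};

/* Fold the shifted remainder in before the single final truncation, the
 * same ordering the fixed in-kernel accumulation now preserves.          */
static void read_raw(const struct raw_snapshot *s, uint64_t now_cycles,
		     uint64_t *sec, uint64_t *nsec)
{
	uint64_t delta = now_cycles - s->cycle_last;
	uint64_t ns = (s->xtime_nsec + delta * s->mult) >> s->shift;

	*sec  = s->xtime_sec + ns / NSEC_PER_SEC;
	*nsec = ns % NSEC_PER_SEC;
}

int main(void)
{
	/* Made-up snapshot values purely to exercise the helper. */
	struct raw_snapshot s = {
		.cycle_last = 1000, .xtime_sec = 5,
		.xtime_nsec = 232,	/* ~0.906ns carried as 232/256 */
		.mult = 385, .shift = 8,
	};
	uint64_t sec, nsec;

	read_raw(&s, 1003, &sec, &nsec);
	printf("raw time: %llu.%09llu\n",
	       (unsigned long long)sec, (unsigned long long)nsec);
	return 0;
}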