From patchwork Tue Feb 28 00:29:32 2012
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 6959
From: John Stultz
To: lkml
Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran
Subject: [PATCH 5/7] time: Shadow cycle_last in timekeeper structure
Date: Mon, 27 Feb 2012 16:29:32 -0800
Message-Id: <1330388974-27793-6-git-send-email-john.stultz@linaro.org>
In-Reply-To: <1330388974-27793-1-git-send-email-john.stultz@linaro.org>
References: <1330388974-27793-1-git-send-email-john.stultz@linaro.org>

The clocksource cycle_last value is problematic for working on shadow
copies of the timekeeper, because the clocksource is global. Since it is
mostly used only for timekeeping, move cycle_last into the timekeeper.

Unfortunately there are some uses of cycle_last outside of timekeeping
(such as tsc_read, which uses it to make sure we haven't switched to a
core whose TSC is behind the last read), so we keep the clocksource's
cycle_last value updated as well.

CC: Thomas Gleixner
CC: Eric Dumazet
CC: Richard Cochran
Signed-off-by: John Stultz
---
 kernel/time/timekeeping.c |   23 ++++++++++++++---------
 1 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 6c36d19..ebfb037 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -29,7 +29,8 @@ struct timekeeper {
 	u32	mult;
 	/* The shift value of the current clocksource. */
 	int	shift;
-
+	/* cycle value at last accumulation point */
+	cycle_t cycle_last;
 	/* Number of clock cycles in one NTP interval. */
 	cycle_t cycle_interval;
 	/* Number of clock shifted nano seconds in one NTP interval. */
@@ -138,7 +139,8 @@ static void timekeeper_setup_internals(struct clocksource *clock)
 	u64 tmp, ntpinterval;
 
 	timekeeper.clock = clock;
-	clock->cycle_last = clock->read(clock);
+	timekeeper.cycle_last = clock->read(clock);
+	clock->cycle_last = timekeeper.cycle_last;
 
 	/* Do the ns -> cycle conversion first, using original mult */
 	tmp = NTP_INTERVAL_LENGTH;
@@ -184,7 +186,7 @@ static inline s64 timekeeping_get_ns(void)
 	cycle_now = clock->read(clock);
 
 	/* calculate the delta since the last update_wall_time: */
-	cycle_delta = (cycle_now - clock->cycle_last) & clock->mask;
+	cycle_delta = (cycle_now - timekeeper.cycle_last) & clock->mask;
 
 	nsec = cycle_delta * timekeeper.mult + timekeeper.xtime_nsec;
 	return nsec >> timekeeper.shift;
@@ -200,7 +202,7 @@ static inline s64 timekeeping_get_ns_raw(void)
 	cycle_now = clock->read(clock);
 
 	/* calculate the delta since the last update_wall_time: */
-	cycle_delta = (cycle_now - clock->cycle_last) & clock->mask;
+	cycle_delta = (cycle_now - timekeeper.cycle_last) & clock->mask;
 
 	/* return delta convert to nanoseconds. */
 	return clocksource_cyc2ns(cycle_delta, clock->mult, clock->shift);
@@ -248,8 +250,9 @@ static void timekeeping_forward_now(void)
 
 	clock = timekeeper.clock;
 	cycle_now = clock->read(clock);
-	cycle_delta = (cycle_now - clock->cycle_last) & clock->mask;
-	clock->cycle_last = cycle_now;
+	cycle_delta = (cycle_now - timekeeper.cycle_last) & clock->mask;
+	timekeeper.cycle_last = cycle_now;
+	timekeeper.clock->cycle_last = cycle_now;
 
 	timekeeper.xtime_nsec += cycle_delta * timekeeper.mult;
 
@@ -749,7 +752,8 @@ static void timekeeping_resume(void)
 		__timekeeping_inject_sleeptime(&ts);
 	}
 	/* re-base the last cycle value */
-	timekeeper.clock->cycle_last = timekeeper.clock->read(timekeeper.clock);
+	timekeeper.cycle_last = timekeeper.clock->read(timekeeper.clock);
+	timekeeper.clock->cycle_last = timekeeper.cycle_last;
 	timekeeper.ntp_error = 0;
 	timekeeping_suspended = 0;
 
@@ -1016,7 +1020,7 @@ static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
 
 	/* Accumulate one shifted interval */
 	offset -= tk->cycle_interval << shift;
-	tk->clock->cycle_last += tk->cycle_interval << shift;
+	tk->cycle_last += tk->cycle_interval << shift;
 
 	tk->xtime_nsec += tk->xtime_interval << shift;
 	while (tk->xtime_nsec >= nsecps) {
@@ -1070,7 +1074,7 @@ static void update_wall_time(void)
 #ifdef CONFIG_ARCH_USES_GETTIMEOFFSET
 	offset = tk.cycle_interval;
 #else
-	offset = (clock->read(clock) - clock->cycle_last) & clock->mask;
+	offset = (clock->read(clock) - tk.cycle_last) & clock->mask;
 #endif
 
 	/*
@@ -1143,6 +1147,7 @@ static void update_wall_time(void)
 
 	timekeeper = tk;
 
+	timekeeper.clock->cycle_last = timekeeper.cycle_last;
 	timekeeping_update(&timekeeper, false);
 
 out:
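
For reviewers who want the shape of the change without the surrounding file, below is a
minimal, self-contained userspace sketch of the pattern the patch moves toward. The types
and helpers (fake_read, accumulate, the simplified struct layouts) are hypothetical
stand-ins, not the kernel's actual code: timekeeping accumulates against its own shadowed
cycle_last, and the globally visible clocksource copy is only mirrored back when the
shadow copy is committed, so outside users of clocksource->cycle_last still see a
coherent value.

/* Minimal sketch (assumed simplified types, not kernel code). */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t cycle_t;

struct clocksource {
	cycle_t cycle_last;		/* still visible to outside users */
	cycle_t (*read)(void);
};

struct timekeeper {
	struct clocksource *clock;
	cycle_t cycle_last;		/* shadowed copy used by timekeeping itself */
	cycle_t cycle_interval;
};

static cycle_t fake_counter;
static cycle_t fake_read(void) { return fake_counter; }

static struct clocksource cs = { .read = fake_read };
static struct timekeeper timekeeper = { .clock = &cs, .cycle_interval = 100 };

/* Accumulate whole intervals on a local copy; nothing global changes yet. */
static void accumulate(struct timekeeper *tk, cycle_t *offset)
{
	while (*offset >= tk->cycle_interval) {
		*offset -= tk->cycle_interval;
		tk->cycle_last += tk->cycle_interval;	/* only the shadow moves */
	}
}

/* Commit: copy the shadow back, then mirror cycle_last into the clocksource. */
static void update_wall_time(void)
{
	struct timekeeper tk = timekeeper;	/* shadow copy */
	cycle_t offset = tk.clock->read() - tk.cycle_last;

	accumulate(&tk, &offset);

	timekeeper = tk;
	timekeeper.clock->cycle_last = timekeeper.cycle_last;	/* keep outside users in sync */
}

int main(void)
{
	fake_counter = 350;
	update_wall_time();
	/* Both print 300: three full intervals accumulated, copies stay consistent. */
	printf("tk.cycle_last=%llu cs.cycle_last=%llu\n",
	       (unsigned long long)timekeeper.cycle_last,
	       (unsigned long long)cs.cycle_last);
	return 0;
}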