From patchwork Mon Sep 17 22:05:01 2012
From: John Stultz
To: linux-kernel
Cc: John Stultz, Tony Luck, Paul Mackerras, Benjamin Herrenschmidt,
	Andy Lutomirski, Martin Schwidefsky, Paul Turner, Steven Rostedt,
	Richard Cochran, Prarit Bhargava, Thomas Gleixner
Subject: [PATCH 6/6][RFC] time: Convert x86_64 to using new update_vsyscall
Date: Mon, 17 Sep 2012 18:05:01 -0400
Message-Id: <1347919501-64534-7-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1347919501-64534-1-git-send-email-john.stultz@linaro.org>
References: <1347919501-64534-1-git-send-email-john.stultz@linaro.org>

Switch x86_64 to using sub-ns precise vsyscall

Cc: Tony Luck
Cc: Paul Mackerras
Cc: Benjamin Herrenschmidt
Cc: Andy Lutomirski
Cc: Martin Schwidefsky
Cc: Paul Turner
Cc: Steven Rostedt
Cc: Richard Cochran
Cc: Prarit Bhargava
Cc: Thomas Gleixner
Signed-off-by: John Stultz
---
 arch/x86/Kconfig               |    2 +-
 arch/x86/include/asm/vgtod.h   |    4 ++--
 arch/x86/kernel/vsyscall_64.c  |   47 ++++++++++++++++++++++++----------------
 arch/x86/vdso/vclock_gettime.c |   22 ++++++++++++-------
 4 files changed, 45 insertions(+), 30 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index e3f3c1a..8ec3a1a 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -93,7 +93,7 @@ config X86
 	select GENERIC_CLOCKEVENTS
 	select ARCH_CLOCKSOURCE_DATA if X86_64
 	select GENERIC_CLOCKEVENTS_BROADCAST if X86_64 || (X86_32 && X86_LOCAL_APIC)
-	select GENERIC_TIME_VSYSCALL_OLD if X86_64
+	select GENERIC_TIME_VSYSCALL if X86_64
 	select KTIME_SCALAR if X86_32
 	select GENERIC_STRNCPY_FROM_USER
 	select GENERIC_STRNLEN_USER
diff --git a/arch/x86/include/asm/vgtod.h b/arch/x86/include/asm/vgtod.h
index 8b38be2..46e24d3 100644
--- a/arch/x86/include/asm/vgtod.h
+++ b/arch/x86/include/asm/vgtod.h
@@ -17,8 +17,8 @@ struct vsyscall_gtod_data {
 
 	/* open coded 'struct timespec' */
 	time_t		wall_time_sec;
-	u32		wall_time_nsec;
-	u32		monotonic_time_nsec;
+	u64		wall_time_snsec;
+	u64		monotonic_time_snsec;
 	time_t		monotonic_time_sec;
 
 	struct timezone sys_tz;
diff --git a/arch/x86/kernel/vsyscall_64.c b/arch/x86/kernel/vsyscall_64.c
index 77dde29..3a3e8c9 100644
--- a/arch/x86/kernel/vsyscall_64.c
+++ b/arch/x86/kernel/vsyscall_64.c
@@ -82,32 +82,41 @@ void update_vsyscall_tz(void)
 	vsyscall_gtod_data.sys_tz = sys_tz;
 }
 
-void update_vsyscall_old(struct timespec *wall_time, struct timespec *wtm,
-			struct clocksource *clock, u32 mult)
+void update_vsyscall(struct timekeeper *tk)
 {
-	struct timespec monotonic;
+	struct vsyscall_gtod_data *vdata = &vsyscall_gtod_data;
 
-	write_seqcount_begin(&vsyscall_gtod_data.seq);
+	write_seqcount_begin(&vdata->seq);
 
 	/* copy vsyscall data */
-	vsyscall_gtod_data.clock.vclock_mode = clock->archdata.vclock_mode;
-	vsyscall_gtod_data.clock.cycle_last = clock->cycle_last;
-	vsyscall_gtod_data.clock.mask = clock->mask;
-	vsyscall_gtod_data.clock.mult = mult;
-	vsyscall_gtod_data.clock.shift = clock->shift;
-
-	vsyscall_gtod_data.wall_time_sec = wall_time->tv_sec;
-	vsyscall_gtod_data.wall_time_nsec = wall_time->tv_nsec;
+	vdata->clock.vclock_mode = tk->clock->archdata.vclock_mode;
+	vdata->clock.cycle_last = tk->clock->cycle_last;
+	vdata->clock.mask = tk->clock->mask;
+	vdata->clock.mult = tk->mult;
+	vdata->clock.shift = tk->shift;
+
+	vdata->wall_time_sec = tk->xtime_sec;
+	vdata->wall_time_snsec = tk->xtime_nsec;
+
+	vdata->monotonic_time_sec = tk->xtime_sec
+					+ tk->wall_to_monotonic.tv_sec;
+	vdata->monotonic_time_snsec = tk->xtime_nsec
+					+ (tk->wall_to_monotonic.tv_nsec
+						<< tk->shift);
+	while (vdata->monotonic_time_snsec >=
+				(((u64)NSEC_PER_SEC) << tk->shift)) {
+		vdata->monotonic_time_snsec -=
+				((u64)NSEC_PER_SEC) << tk->shift;
+		vdata->monotonic_time_sec++;
+	}
 
-	monotonic = timespec_add(*wall_time, *wtm);
-	vsyscall_gtod_data.monotonic_time_sec = monotonic.tv_sec;
-	vsyscall_gtod_data.monotonic_time_nsec = monotonic.tv_nsec;
+	vdata->wall_time_coarse.tv_sec = tk->xtime_sec;
+	vdata->wall_time_coarse.tv_nsec = (long)(tk->xtime_nsec >> tk->shift);
 
-	vsyscall_gtod_data.wall_time_coarse = __current_kernel_time();
-	vsyscall_gtod_data.monotonic_time_coarse =
-		timespec_add(vsyscall_gtod_data.wall_time_coarse, *wtm);
+	vdata->monotonic_time_coarse = timespec_add(vdata->wall_time_coarse,
+							tk->wall_to_monotonic);
 
-	write_seqcount_end(&vsyscall_gtod_data.seq);
+	write_seqcount_end(&vdata->seq);
 }
 
 static void warn_bad_vsyscall(const char *level, struct pt_regs *regs,
diff --git a/arch/x86/vdso/vclock_gettime.c b/arch/x86/vdso/vclock_gettime.c
index 885eff4..4df6c37 100644
--- a/arch/x86/vdso/vclock_gettime.c
+++ b/arch/x86/vdso/vclock_gettime.c
@@ -80,7 +80,7 @@ notrace static long vdso_fallback_gtod(struct timeval *tv, struct timezone *tz)
 }
 
 
-notrace static inline long vgetns(void)
+notrace static inline u64 vgetsns(void)
 {
 	long v;
 	cycles_t cycles;
@@ -91,21 +91,24 @@ notrace static inline long vgetns(void)
 	else
 		return 0;
 	v = (cycles - gtod->clock.cycle_last) & gtod->clock.mask;
-	return (v * gtod->clock.mult) >> gtod->clock.shift;
+	return v * gtod->clock.mult;
 }
 
 /* Code size doesn't matter (vdso is 4k anyway) and this is faster. */
 notrace static int __always_inline do_realtime(struct timespec *ts)
 {
-	unsigned long seq, ns;
+	unsigned long seq;
+	u64 ns;
 	int mode;
 
+	ts->tv_nsec = 0;
 	do {
 		seq = read_seqcount_begin(&gtod->seq);
 		mode = gtod->clock.vclock_mode;
 		ts->tv_sec = gtod->wall_time_sec;
-		ts->tv_nsec = gtod->wall_time_nsec;
-		ns = vgetns();
+		ns = gtod->wall_time_snsec;
+		ns += vgetsns();
+		ns >>= gtod->clock.shift;
 	} while (unlikely(read_seqcount_retry(&gtod->seq, seq)));
 
 	timespec_add_ns(ts, ns);
@@ -114,15 +117,18 @@ notrace static int __always_inline do_realtime(struct timespec *ts)
 
 notrace static int do_monotonic(struct timespec *ts)
 {
-	unsigned long seq, ns;
+	unsigned long seq;
+	u64 ns;
 	int mode;
 
+	ts->tv_nsec = 0;
 	do {
 		seq = read_seqcount_begin(&gtod->seq);
 		mode = gtod->clock.vclock_mode;
 		ts->tv_sec = gtod->monotonic_time_sec;
-		ts->tv_nsec = gtod->monotonic_time_nsec;
-		ns = vgetns();
+		ns = gtod->monotonic_time_snsec;
+		ns += vgetsns();
+		ns >>= gtod->clock.shift;
 	} while (unlikely(read_seqcount_retry(&gtod->seq, seq)));
 
 	timespec_add_ns(ts, ns);
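
For readers following the arithmetic above: the timekeeper stores nanoseconds in a
"shifted" form (nanoseconds << shift), so the vDSO can add the raw cycles * mult
product to xtime_nsec and shift down only once at the end, rather than truncating
the sub-nanosecond remainder on every read. Below is a minimal stand-alone sketch
of that math, for illustration only: it is not kernel code, and the clocksource
parameters (mult, shift, the cycle delta, the starting time) are invented values.

/*
 * Stand-alone illustration of the "shifted nanoseconds" (snsec) arithmetic.
 * NOT kernel code: mult/shift model a hypothetical ~1 GHz counter
 * (mult = 2^22, shift = 22, so cycles * mult >> shift == cycles in ns).
 */
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL

int main(void)
{
	uint32_t mult = 4194304;	/* example value only */
	uint32_t shift = 22;		/* example value only */

	/* Timekeeper-style state: seconds plus nanoseconds kept shifted left */
	uint64_t xtime_sec = 1347919501;
	uint64_t xtime_snsec = 123456789ULL << shift;	/* 123456789 ns, shifted */

	/* Cycle delta since cycle_last, as vgetsns() would read from the counter */
	uint64_t delta_cycles = 1000;

	/*
	 * Mirror of do_realtime(): accumulate in the shifted domain, then
	 * convert to plain nanoseconds with a single shift at the end, so
	 * sub-nanosecond remainders are not thrown away early.
	 */
	uint64_t snsec = xtime_snsec + delta_cycles * (uint64_t)mult;
	uint64_t nsec = snsec >> shift;

	/* Normalize into a sec/nsec pair, like the loop in update_vsyscall() */
	uint64_t sec = xtime_sec;
	while (nsec >= NSEC_PER_SEC) {
		nsec -= NSEC_PER_SEC;
		sec++;
	}

	printf("%llu.%09llu\n",
	       (unsigned long long)sec, (unsigned long long)nsec);
	return 0;
}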