From patchwork Thu Mar 26 19:23:23 2015
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 46376
From: John Stultz
To: lkml
Cc: Daniel Thompson, Russell King, Will Deacon, Catalin Marinas,
 Thomas Gleixner, Stephen Boyd, Peter Zijlstra, Ingo Molnar,
 John Stultz
Subject: [PATCH 2/5] sched_clock: Optimize cache line usage
Date: Thu, 26 Mar 2015 12:23:23 -0700
Message-Id: <1427397806-20889-3-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1427397806-20889-1-git-send-email-john.stultz@linaro.org>
References: <1427397806-20889-1-git-send-email-john.stultz@linaro.org>

From: Daniel Thompson

Currently sched_clock(), a very hot code path, is not optimized to
minimise its cache profile. In particular:

  1. cd is not ____cacheline_aligned,

  2. struct clock_data does not distinguish between hotpath and
     coldpath data, reducing locality of reference in the hotpath,

  3. Some hotpath data is missing from struct clock_data and is marked
     __read_mostly (which more or less guarantees it will not share a
     cache line with cd).

This patch corrects these problems by extracting all hotpath data into
a separate structure and using ____cacheline_aligned to ensure the
hotpath uses a single (64 byte) cache line.

Cc: Russell King
Cc: Will Deacon
Cc: Catalin Marinas
Cc: Daniel Thompson
Cc: Thomas Gleixner
Cc: Stephen Boyd
Cc: Peter Zijlstra
Cc: Ingo Molnar
Reviewed-by: Stephen Boyd
Acked-by: Peter Zijlstra (Intel)
Signed-off-by: Daniel Thompson
Signed-off-by: John Stultz
---
 kernel/time/sched_clock.c | 112 +++++++++++++++++++++++++++++++---------------
 1 file changed, 77 insertions(+), 35 deletions(-)

diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index 1751e95..872e068 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -18,28 +18,59 @@
 #include <linux/seqlock.h>
 #include <linux/bitops.h>
 
-struct clock_data {
-	ktime_t wrap_kt;
+/**
+ * struct clock_read_data - data required to read from sched_clock
+ *
+ * @epoch_ns:		sched_clock value at last update
+ * @epoch_cyc:		Clock cycle value at last update
+ * @sched_clock_mask:	Bitmask for two's complement subtraction of non 64bit
+ *			clocks
+ * @read_sched_clock:	Current clock source (or dummy source when suspended)
+ * @mult:		Multiplier for scaled math conversion
+ * @shift:		Shift value for scaled math conversion
+ * @suspended:		Flag to indicate if the clock is suspended (stopped)
+ *
+ * Care must be taken when updating this structure; it is read by
+ * some very hot code paths. It occupies <=48 bytes and, when combined
+ * with the seqcount used to synchronize access, comfortably fits into
+ * a 64 byte cache line.
+ */
+struct clock_read_data {
 	u64 epoch_ns;
 	u64 epoch_cyc;
-	seqcount_t seq;
-	unsigned long rate;
+	u64 sched_clock_mask;
+	u64 (*read_sched_clock)(void);
 	u32 mult;
 	u32 shift;
 	bool suspended;
 };
 
+/**
+ * struct clock_data - all data needed for sched_clock (including
+ *                     registration of a new clock source)
+ *
+ * @seq:		Sequence counter for protecting updates.
+ * @read_data:		Data required to read from sched_clock.
+ * @wrap_kt:		Duration for which clock can run before wrapping
+ * @rate:		Tick rate of the registered clock
+ * @actual_read_sched_clock: Registered clock read function
+ *
+ * The ordering of this structure has been chosen to optimize cache
+ * performance. In particular seq and read_data (combined) should fit
+ * into a single 64 byte cache line.
+ */
+struct clock_data {
+	seqcount_t seq;
+	struct clock_read_data read_data;
+	ktime_t wrap_kt;
+	unsigned long rate;
+};
+
 static struct hrtimer sched_clock_timer;
 static int irqtime = -1;
 
 core_param(irqtime, irqtime, int, 0400);
 
-static struct clock_data cd = {
-	.mult	= NSEC_PER_SEC / HZ,
-};
-
-static u64 __read_mostly sched_clock_mask;
-
 static u64 notrace jiffy_sched_clock_read(void)
 {
 	/*
@@ -49,7 +80,10 @@ static u64 notrace jiffy_sched_clock_read(void)
 	return (u64)(jiffies - INITIAL_JIFFIES);
 }
 
-static u64 __read_mostly (*read_sched_clock)(void) = jiffy_sched_clock_read;
+static struct clock_data cd ____cacheline_aligned = {
+	.read_data = { .mult = NSEC_PER_SEC / HZ,
+		       .read_sched_clock = jiffy_sched_clock_read, },
+};
 
 static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift)
 {
@@ -60,15 +94,16 @@ unsigned long long notrace sched_clock(void)
 {
 	u64 cyc, res;
 	unsigned long seq;
+	struct clock_read_data *rd = &cd.read_data;
 
 	do {
 		seq = raw_read_seqcount_begin(&cd.seq);
 
-		res = cd.epoch_ns;
-		if (!cd.suspended) {
-			cyc = read_sched_clock();
-			cyc = (cyc - cd.epoch_cyc) & sched_clock_mask;
-			res += cyc_to_ns(cyc, cd.mult, cd.shift);
+		res = rd->epoch_ns;
+		if (!rd->suspended) {
+			cyc = rd->read_sched_clock();
+			cyc = (cyc - rd->epoch_cyc) & rd->sched_clock_mask;
+			res += cyc_to_ns(cyc, rd->mult, rd->shift);
 		}
 	} while (read_seqcount_retry(&cd.seq, seq));
 
@@ -83,16 +118,17 @@ static void notrace update_sched_clock(void)
 	unsigned long flags;
 	u64 cyc;
 	u64 ns;
+	struct clock_read_data *rd = &cd.read_data;
 
-	cyc = read_sched_clock();
-	ns = cd.epoch_ns +
-	     cyc_to_ns((cyc - cd.epoch_cyc) & sched_clock_mask,
-		       cd.mult, cd.shift);
+	cyc = rd->read_sched_clock();
+	ns = rd->epoch_ns +
+	     cyc_to_ns((cyc - rd->epoch_cyc) & rd->sched_clock_mask,
+		       rd->mult, rd->shift);
 
 	raw_local_irq_save(flags);
 	raw_write_seqcount_begin(&cd.seq);
-	cd.epoch_ns = ns;
-	cd.epoch_cyc = cyc;
+	rd->epoch_ns = ns;
+	rd->epoch_cyc = cyc;
 	raw_write_seqcount_end(&cd.seq);
 	raw_local_irq_restore(flags);
 }
@@ -111,6 +147,7 @@ void __init sched_clock_register(u64 (*read)(void), int bits,
 	u32 new_mult, new_shift;
 	unsigned long r;
 	char r_unit;
+	struct clock_read_data *rd = &cd.read_data;
 
 	if (cd.rate > rate)
 		return;
@@ -129,17 +166,18 @@ void __init sched_clock_register(u64 (*read)(void), int bits,
 
 	/* update epoch for new counter and update epoch_ns from old counter*/
 	new_epoch = read();
-	cyc = read_sched_clock();
-	ns = cd.epoch_ns + cyc_to_ns((cyc - cd.epoch_cyc) & sched_clock_mask,
-			  cd.mult, cd.shift);
+	cyc = rd->read_sched_clock();
+	ns = rd->epoch_ns +
+	     cyc_to_ns((cyc - rd->epoch_cyc) & rd->sched_clock_mask,
+		       rd->mult, rd->shift);
 
 	raw_write_seqcount_begin(&cd.seq);
-	read_sched_clock = read;
-	sched_clock_mask = new_mask;
-	cd.mult = new_mult;
-	cd.shift = new_shift;
-	cd.epoch_cyc = new_epoch;
-	cd.epoch_ns = ns;
+	rd->read_sched_clock = read;
+	rd->sched_clock_mask = new_mask;
+	rd->mult = new_mult;
+	rd->shift = new_shift;
+	rd->epoch_cyc = new_epoch;
+	rd->epoch_ns = ns;
 	raw_write_seqcount_end(&cd.seq);
 
 	r = rate;
@@ -171,7 +209,7 @@ void __init sched_clock_postinit(void)
 	 * If no sched_clock function has been provided at that point,
 	 * make it the final one one.
 	 */
-	if (read_sched_clock == jiffy_sched_clock_read)
+	if (cd.read_data.read_sched_clock == jiffy_sched_clock_read)
 		sched_clock_register(jiffy_sched_clock_read, BITS_PER_LONG, HZ);
 
 	update_sched_clock();
@@ -187,17 +225,21 @@ void __init sched_clock_postinit(void)
 
 static int sched_clock_suspend(void)
 {
+	struct clock_read_data *rd = &cd.read_data;
+
 	update_sched_clock();
 	hrtimer_cancel(&sched_clock_timer);
-	cd.suspended = true;
+	rd->suspended = true;
 	return 0;
 }
 
 static void sched_clock_resume(void)
 {
-	cd.epoch_cyc = read_sched_clock();
+	struct clock_read_data *rd = &cd.read_data;
+
+	rd->epoch_cyc = rd->read_sched_clock();
 	hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL);
-	cd.suspended = false;
+	rd->suspended = false;
}
 
 static struct syscore_ops sched_clock_ops = {
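
[Editor's note: for readers unfamiliar with the layout this patch describes, below
is a minimal standalone userspace sketch of the same pattern: the hot read data
grouped with its sequence counter in a single 64-byte-aligned structure, read under
a seqcount-style retry loop. It is an illustrative analogue, not kernel code; all
names (clock_state, read_data, sample_ns, read_counter) are hypothetical, and the
GCC/Clang builtins stand in for the kernel's raw_read_seqcount_begin()/
read_seqcount_retry() helpers.]

#include <stdint.h>
#include <stdio.h>

/*
 * Hot-path read data, kept together so one cache line fetch serves the
 * whole read (mirrors the role of struct clock_read_data).
 */
struct read_data {
	uint64_t epoch_ns;	/* clock value at last update */
	uint64_t epoch_cyc;	/* raw counter value at last update */
	uint64_t mask;		/* wrap mask for narrow counters */
	uint32_t mult, shift;	/* scaled-math conversion factors */
};

/*
 * Sequence counter first, hot data second: together they occupy (and,
 * thanks to the alignment, exclusively own) one 64-byte line -- the
 * effect the patch gets from ____cacheline_aligned on cd.
 */
struct clock_state {
	unsigned int seq;	/* even = stable, odd = writer in flight */
	struct read_data rd;
} __attribute__((aligned(64)));

static struct clock_state cs = {
	.rd = { .mask = ~0ULL, .mult = 1, .shift = 0 },
};

/* Stand-in for a hardware counter read. */
static uint64_t read_counter(void)
{
	return 12345;
}

/*
 * Seqcount-style lockless read: compute the sample, then retry if a
 * writer bumped (or was in the middle of bumping) the sequence number.
 */
static uint64_t sample_ns(void)
{
	unsigned int seq;
	uint64_t cyc, ns;

	do {
		seq = __atomic_load_n(&cs.seq, __ATOMIC_ACQUIRE);
		cyc = (read_counter() - cs.rd.epoch_cyc) & cs.rd.mask;
		ns = cs.rd.epoch_ns + ((cyc * cs.rd.mult) >> cs.rd.shift);
	} while ((seq & 1) ||
		 __atomic_load_n(&cs.seq, __ATOMIC_ACQUIRE) != seq);

	return ns;
}

int main(void)
{
	/* sizeof(struct clock_state) prints 64: seq + hot data fit one line. */
	printf("ns=%llu, hot line size=%zu bytes\n",
	       (unsigned long long)sample_ns(), sizeof(struct clock_state));
	return 0;
}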