From patchwork Fri Jan 30 19:03:19 2015
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 44060
From: Daniel Thompson <daniel.thompson@linaro.org>
To: Thomas Gleixner, John Stultz
Cc: Daniel Thompson, linux-kernel@vger.kernel.org, patches@linaro.org,
    linaro-kernel@lists.linaro.org, Sumit Semwal, Stephen Boyd,
    Steven Rostedt, Russell King, Will Deacon, Catalin Marinas
Subject: [PATCH v3 1/4] sched_clock: Match scope of read and write seqcounts
Date: Fri, 30 Jan 2015 19:03:19 +0000
Message-Id: <1422644602-11953-2-git-send-email-daniel.thompson@linaro.org>
In-Reply-To: <1422644602-11953-1-git-send-email-daniel.thompson@linaro.org>
References: <1421859236-19782-1-git-send-email-daniel.thompson@linaro.org>
 <1422644602-11953-1-git-send-email-daniel.thompson@linaro.org>

Currently the scope of the raw_write_seqcount_begin/end() critical
section in sched_clock_register() far exceeds the scope of the read
section in sched_clock(). This gives the impression of safety during
cursory review but achieves little. This is likely to be a latent issue
at present because sched_clock_register() is typically called before
interrupts are enabled; however it does risk bugs being needlessly
introduced as the code evolves.
This patch fixes the problem by increasing the scope of the read locking
performed by sched_clock() to cover all data modified by
sched_clock_register(). We also improve clarity by moving writes to
struct clock_data that do not impact sched_clock() outside of the
critical section.

Signed-off-by: Daniel Thompson
Cc: Russell King
Cc: Will Deacon
Cc: Catalin Marinas
---
 kernel/time/sched_clock.c | 25 +++++++++++--------------
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index 01d2d15aa662..3d21a8719444 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -58,23 +58,21 @@ static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift)
 
 unsigned long long notrace sched_clock(void)
 {
-	u64 epoch_ns;
-	u64 epoch_cyc;
-	u64 cyc;
+	u64 cyc, res;
 	unsigned long seq;
 
-	if (cd.suspended)
-		return cd.epoch_ns;
-
 	do {
 		seq = raw_read_seqcount_begin(&cd.seq);
-		epoch_cyc = cd.epoch_cyc;
-		epoch_ns = cd.epoch_ns;
+
+		res = cd.epoch_ns;
+		if (!cd.suspended) {
+			cyc = read_sched_clock();
+			cyc = (cyc - cd.epoch_cyc) & sched_clock_mask;
+			res += cyc_to_ns(cyc, cd.mult, cd.shift);
+		}
 	} while (read_seqcount_retry(&cd.seq, seq));
 
-	cyc = read_sched_clock();
-	cyc = (cyc - epoch_cyc) & sched_clock_mask;
-	return epoch_ns + cyc_to_ns(cyc, cd.mult, cd.shift);
+	return res;
 }
 
 /*
@@ -124,10 +122,11 @@ void __init sched_clock_register(u64 (*read)(void), int bits,
 	clocks_calc_mult_shift(&new_mult, &new_shift, rate, NSEC_PER_SEC, 3600);
 
 	new_mask = CLOCKSOURCE_MASK(bits);
+	cd.rate = rate;
 
 	/* calculate how many ns until we wrap */
 	wrap = clocks_calc_max_nsecs(new_mult, new_shift, 0, new_mask);
-	new_wrap_kt = ns_to_ktime(wrap - (wrap >> 3));
+	cd.wrap_kt = ns_to_ktime(wrap - (wrap >> 3));
 
 	/* update epoch for new counter and update epoch_ns from old counter*/
 	new_epoch = read();
@@ -138,8 +137,6 @@ void __init sched_clock_register(u64 (*read)(void), int bits,
 	raw_write_seqcount_begin(&cd.seq);
 	read_sched_clock = read;
 	sched_clock_mask = new_mask;
-	cd.rate = rate;
-	cd.wrap_kt = new_wrap_kt;
 	cd.mult = new_mult;
 	cd.shift = new_shift;
 	cd.epoch_cyc = new_epoch;
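
The rationale above hinges on the seqcount retry pattern: a reader only
sees a consistent snapshot if every access to the protected data sits
inside the begin/retry loop, which is exactly what the patch enforces in
sched_clock(). As a minimal standalone sketch of that pattern (not
kernel code — the counter and helper names here are illustrative
stand-ins for raw_read_seqcount_begin()/read_seqcount_retry(), and the
single-threaded harness only models the happy path):

```c
#include <assert.h>
#include <stdint.h>

/* Even counter value: data stable; odd: a write is in progress. */
static unsigned seq;
static uint64_t epoch_ns;	/* datum protected by the seqcount */

static unsigned read_begin(void)  { return seq; }
/* Retry if a write was in flight (odd) or completed since we began. */
static int read_retry(unsigned s) { return (s & 1) || s != seq; }

static void write_begin(void) { seq++; }	/* even -> odd  */
static void write_end(void)   { seq++; }	/* odd  -> even */

/* Reader: all accesses to protected data stay inside the retry loop,
 * mirroring the patch's restructuring of sched_clock(). */
static uint64_t read_epoch(void)
{
	uint64_t v;
	unsigned s;

	do {
		s = read_begin();
		v = epoch_ns;
	} while (read_retry(s));
	return v;
}

/* Writer: mirrors sched_clock_register()'s write-side critical section. */
static void write_epoch(uint64_t v)
{
	write_begin();
	epoch_ns = v;
	write_end();
}
```

In the real kernel helpers the counter increments also imply memory
barriers; this sketch omits them, which is why it only illustrates the
control flow, not the memory-ordering guarantees.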