From patchwork Mon Mar 2 15:56:40 2015
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 45293
From: Daniel Thompson <daniel.thompson@linaro.org>
To: Thomas Gleixner, John Stultz
Cc: Daniel Thompson, linux-kernel@vger.kernel.org, patches@linaro.org,
 linaro-kernel@lists.linaro.org, Sumit Semwal, Stephen Boyd,
 Steven Rostedt, Russell King, Will Deacon, Catalin Marinas
Subject: [PATCH v5 1/5] sched_clock: Match scope of read and write seqcounts
Date: Mon, 2 Mar 2015 15:56:40 +0000
Message-Id: <1425311804-3392-2-git-send-email-daniel.thompson@linaro.org>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1425311804-3392-1-git-send-email-daniel.thompson@linaro.org>
References: <1421859236-19782-1-git-send-email-daniel.thompson@linaro.org>
 <1425311804-3392-1-git-send-email-daniel.thompson@linaro.org>

Currently the scope of the raw_write_seqcount_begin/end() in
sched_clock_register() far exceeds the scope of the read section in
sched_clock(). This gives the impression of safety during cursory review
but achieves little.

Note that this is likely to be a latent issue at present because
sched_clock_register() is typically called before we enable interrupts;
however, the issue does risk bugs being needlessly introduced as the
code evolves.
This patch fixes the problem by increasing the scope of the read locking
performed by sched_clock() to cover all data modified by
sched_clock_register(). We also improve clarity by moving writes to
struct clock_data that do not affect sched_clock() outside of the
critical section.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Russell King
Cc: Will Deacon
Cc: Catalin Marinas
Reviewed-by: Stephen Boyd
---
 kernel/time/sched_clock.c | 25 +++++++++++--------------
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index 01d2d15aa662..3d21a8719444 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -58,23 +58,21 @@ static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift)
 
 unsigned long long notrace sched_clock(void)
 {
-	u64 epoch_ns;
-	u64 epoch_cyc;
-	u64 cyc;
+	u64 cyc, res;
 	unsigned long seq;
 
-	if (cd.suspended)
-		return cd.epoch_ns;
-
 	do {
 		seq = raw_read_seqcount_begin(&cd.seq);
-		epoch_cyc = cd.epoch_cyc;
-		epoch_ns = cd.epoch_ns;
+
+		res = cd.epoch_ns;
+		if (!cd.suspended) {
+			cyc = read_sched_clock();
+			cyc = (cyc - cd.epoch_cyc) & sched_clock_mask;
+			res += cyc_to_ns(cyc, cd.mult, cd.shift);
+		}
 	} while (read_seqcount_retry(&cd.seq, seq));
 
-	cyc = read_sched_clock();
-	cyc = (cyc - epoch_cyc) & sched_clock_mask;
-	return epoch_ns + cyc_to_ns(cyc, cd.mult, cd.shift);
+	return res;
 }
 
 /*
@@ -124,10 +122,11 @@ void __init sched_clock_register(u64 (*read)(void), int bits,
 	clocks_calc_mult_shift(&new_mult, &new_shift, rate, NSEC_PER_SEC, 3600);
 
 	new_mask = CLOCKSOURCE_MASK(bits);
+	cd.rate = rate;
 
 	/* calculate how many ns until we wrap */
 	wrap = clocks_calc_max_nsecs(new_mult, new_shift, 0, new_mask);
-	new_wrap_kt = ns_to_ktime(wrap - (wrap >> 3));
+	cd.wrap_kt = ns_to_ktime(wrap - (wrap >> 3));
 
 	/* update epoch for new counter and update epoch_ns from old counter*/
 	new_epoch = read();
@@ -138,8 +137,6 @@ void __init sched_clock_register(u64 (*read)(void), int bits,
 	raw_write_seqcount_begin(&cd.seq);
 	read_sched_clock = read;
 	sched_clock_mask = new_mask;
-	cd.rate = rate;
-	cd.wrap_kt = new_wrap_kt;
 	cd.mult = new_mult;
 	cd.shift = new_shift;
 	cd.epoch_cyc = new_epoch;