From patchwork Thu Jul 18 17:24:48 2013
X-Patchwork-Submitter: Nicolas Pitre
X-Patchwork-Id: 18422
Date: Thu, 18 Jul 2013 13:24:48 -0400 (EDT)
From: Nicolas Pitre
To: Dave Martin , Russell King - ARM Linux
cc: linux-arm-kernel@lists.infradead.org, Jon Medhurst ,
    lorenzo.pieralisi@arm.com, pawel.moll@arm.com, patches@linaro.org,
    sudeep.karkadanagesha@arm.com, achin.gupta@arm.com, Olof Johansson
Subject: Re: [PATCH 1/4] ARM: vexpress/dcscb: fix cache disabling sequences
In-Reply-To: <20130718150408.GB2655@localhost.localdomain>
References: <1374118116-16836-1-git-send-email-nicolas.pitre@linaro.org>
    <1374118116-16836-2-git-send-email-nicolas.pitre@linaro.org>
    <20130718150408.GB2655@localhost.localdomain>

[ added Russell for his opinion on the patch below ]

On Thu, 18 Jul 2013, Dave Martin wrote:

> On Wed, Jul 17, 2013 at 11:28:33PM -0400, Nicolas Pitre wrote:
> > Unlike real A15/A7's, the RTSM simulation doesn't appear to hit the
> > cache when the CTRL.C bit is cleared.  Let's ensure there is no memory
> > access within the disable and flush cache sequence, including to the
> > stack.
> > 
> > Signed-off-by: Nicolas Pitre
> > ---
> >  arch/arm/mach-vexpress/dcscb.c | 58 +++++++++++++++++++++++++++---------------
> >  1 file changed, 37 insertions(+), 21 deletions(-)
> > 
> > diff --git a/arch/arm/mach-vexpress/dcscb.c b/arch/arm/mach-vexpress/dcscb.c
> > index 16d57a8a9d..9f01c04d58 100644
> > --- a/arch/arm/mach-vexpress/dcscb.c
> > +++ b/arch/arm/mach-vexpress/dcscb.c
> > @@ -136,14 +136,29 @@ static void dcscb_power_down(void)
> >  		/*
> >  		 * Flush all cache levels for this cluster.
> >  		 *
> > -		 * A15/A7 can hit in the cache with SCTLR.C=0, so we don't need
> > -		 * a preliminary flush here for those CPUs.  At least, that's
> > -		 * the theory -- without the extra flush, Linux explodes on
> > -		 * RTSM (to be investigated).
> > +		 * To do so we do:
> > +		 * - Clear the CTLR.C bit to prevent further cache allocations
> 
> SCTLR

Fixed.

> > +		 * - Flush the whole cache
> > +		 * - Disable local coherency by clearing the ACTLR "SMP" bit
> > +		 *
> > +		 * Let's do it in the safest possible way i.e. with
> > +		 * no memory access within the following sequence
> > +		 * including the stack.
> >  		 */
> > -		flush_cache_all();
> > -		set_cr(get_cr() & ~CR_C);
> > -		flush_cache_all();
> > +		asm volatile(
> > +		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
> > +		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
> > +		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
> > +		"isb	\n\t"
> > +		"bl	v7_flush_dcache_all \n\t"
> > +		"clrex	\n\t"
> > +		"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t"
> > +		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
> > +		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
> > +		"isb	\n\t"
> > +		"dsb	"
> > +		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
> > +		      "r9","r10","r11","lr","memory");
> 
> Along with the TC2 support, we now have 4 copies of this code sequence.
> 
> This is basically the A15/A7 native "exit coherency and flush and
> disable some levels of dcache" operation, whose only parameter is which
> cache levels to flush.
> 
> That's a big mouthful -- we can probably come up with a better name --
> but we've pretty much concluded that there is no way to break this
> operation apart into bitesize pieces.  Nonetheless, any native
> powerdown sequence for these processors will need to do this, or
> something closely related.
> 
> Is it worth consolidating, or is that premature?

It is probably worth consolidating.  What about this:

commit 390cf8b9b83eeeebdfef51912f5003a6a9b84115
Author: Nicolas Pitre
Date:   Thu Jul 18 13:12:48 2013 -0400

    ARM: cacheflush: consolidate single-CPU ARMv7 cache disabling code

    This code is becoming duplicated in many places.  So let's consolidate
    it into a handy macro that is known to be right and available for reuse.

    Signed-off-by: Nicolas Pitre

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 17d0ae8672..8a76933e80 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -436,4 +436,33 @@ static inline void __sync_cache_range_r(volatile void *p, size_t size)
 #define sync_cache_w(ptr) __sync_cache_range_w(ptr, sizeof *(ptr))
 #define sync_cache_r(ptr) __sync_cache_range_r(ptr, sizeof *(ptr))
 
+/*
+ * Disabling cache access for one CPU in an ARMv7 SMP system is tricky.
+ * To do so we must:
+ *
+ * - Clear the SCTLR.C bit to prevent further cache allocations
+ * - Flush the desired level of cache
+ * - Clear the ACTLR "SMP" bit to disable local coherency
+ *
+ * ... and so without any intervening memory access in between those steps,
+ * not even to the stack.
+ *
+ * The clobber list is dictated by the call to v7_flush_dcache_*.
+ */
+#define v7_disable_flush_cache(level) \
+	asm volatile( \
+	"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t" \
+	"bic	r0, r0, #"__stringify(CR_C)" \n\t" \
+	"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t" \
+	"isb	\n\t" \
+	"bl	v7_flush_dcache_"__stringify(level)" \n\t" \
+	"clrex	\n\t" \
+	"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t" \
+	"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t" \
+	"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t" \
+	"isb	\n\t" \
+	"dsb	" \
+	: : : "r0","r1","r2","r3","r4","r5","r6","r7", \
+	      "r9","r10","r11","lr","memory" )
+
 #endif
diff --git a/arch/arm/mach-vexpress/dcscb.c b/arch/arm/mach-vexpress/dcscb.c
index 85fffa702f..145d8237d5 100644
--- a/arch/arm/mach-vexpress/dcscb.c
+++ b/arch/arm/mach-vexpress/dcscb.c
@@ -133,32 +133,8 @@ static void dcscb_power_down(void)
 	if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
 		arch_spin_unlock(&dcscb_lock);
 
-		/*
-		 * Flush all cache levels for this cluster.
-		 *
-		 * To do so we do:
-		 * - Clear the SCTLR.C bit to prevent further cache allocations
-		 * - Flush the whole cache
-		 * - Clear the ACTLR "SMP" bit to disable local coherency
-		 *
-		 * Let's do it in the safest possible way i.e. with
-		 * no memory access within the following sequence
-		 * including to the stack.
-		 */
-		asm volatile(
-		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
-		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
-		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
-		"isb	\n\t"
-		"bl	v7_flush_dcache_all \n\t"
-		"clrex	\n\t"
-		"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t"
-		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
-		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
-		"isb	\n\t"
-		"dsb	"
-		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
-		      "r9","r10","r11","lr","memory");
+		/* Flush all cache levels for this cluster. */
+		v7_disable_flush_cache(all);
 
 		/*
 		 * This is a harmless no-op.  On platforms with a real
@@ -177,24 +153,8 @@ static void dcscb_power_down(void)
 	} else {
 		arch_spin_unlock(&dcscb_lock);
 
-		/*
-		 * Flush the local CPU cache.
-		 * Let's do it in the safest possible way as above.
-		 */
-		asm volatile(
-		"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
-		"bic	r0, r0, #"__stringify(CR_C)" \n\t"
-		"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
-		"isb	\n\t"
-		"bl	v7_flush_dcache_louis \n\t"
-		"clrex	\n\t"
-		"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t"
-		"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
-		"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
-		"isb	\n\t"
-		"dsb	"
-		: : : "r0","r1","r2","r3","r4","r5","r6","r7",
-		      "r9","r10","r11","lr","memory");
+		/* Flush the local CPU cache. */
+		v7_disable_flush_cache(louis);
 	}
 
 	__mcpm_cpu_down(cpu, cluster);
diff --git a/arch/arm/mach-vexpress/tc2_pm.c b/arch/arm/mach-vexpress/tc2_pm.c
index dfb55d45b6..fd8bc2d931 100644
--- a/arch/arm/mach-vexpress/tc2_pm.c
+++ b/arch/arm/mach-vexpress/tc2_pm.c
@@ -134,26 +134,7 @@ static void tc2_pm_down(u64 residency)
 			: : "r" (0x400) );
 	}
 
-	/*
-	 * We need to disable and flush the whole (L1 and L2) cache.
-	 * Let's do it in the safest possible way i.e. with
-	 * no memory access within the following sequence
-	 * including the stack.
-	 */
-	asm volatile(
-	"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
-	"bic	r0, r0, #"__stringify(CR_C)" \n\t"
-	"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
-	"isb	\n\t"
-	"bl	v7_flush_dcache_all \n\t"
-	"clrex	\n\t"
-	"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t"
-	"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
-	"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
-	"isb	\n\t"
-	"dsb	"
-	: : : "r0","r1","r2","r3","r4","r5","r6","r7",
-	      "r9","r10","r11","lr","memory");
+	v7_disable_flush_cache(all);
 
 	cci_disable_port_by_cpu(mpidr);
 
@@ -169,24 +150,7 @@ static void tc2_pm_down(u64 residency)
 	arch_spin_unlock(&tc2_pm_lock);
 
-	/*
-	 * We need to disable and flush only the L1 cache.
-	 * Let's do it in the safest possible way as above.
-	 */
-	asm volatile(
-	"mrc	p15, 0, r0, c1, c0, 0	@ get CR \n\t"
-	"bic	r0, r0, #"__stringify(CR_C)" \n\t"
-	"mcr	p15, 0, r0, c1, c0, 0	@ set CR \n\t"
-	"isb	\n\t"
-	"bl	v7_flush_dcache_louis \n\t"
-	"clrex	\n\t"
-	"mrc	p15, 0, r0, c1, c0, 1	@ get AUXCR \n\t"
-	"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t"
-	"mcr	p15, 0, r0, c1, c0, 1	@ set AUXCR \n\t"
-	"isb	\n\t"
-	"dsb	"
-	: : : "r0","r1","r2","r3","r4","r5","r6","r7",
-	      "r9","r10","r11","lr","memory");
+	v7_disable_flush_cache(louis);
 }
 
 	__mcpm_cpu_down(cpu, cluster);