From patchwork Thu Mar 20 15:46:00 2014
X-Patchwork-Submitter: Ian Campbell
X-Patchwork-Id: 26731
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Mar 2014 15:46:00 +0000
Message-ID: <1395330365-9901-12-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1395330336.3104.12.camel@kazak.uk.xensource.com>
References: <1395330336.3104.12.camel@kazak.uk.xensource.com>
X-Mailer: git-send-email 1.7.10.4
Cc: julien.grall@linaro.org, tim@xen.org, Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 12/17] xen: arm64: reinstate hard tabs in system.h cmpxchg
X-BeenThere: xen-devel@lists.xen.org

These functions are from Linux and the intention was to keep the
formatting the same to make resyncing easier.
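
For anyone reading the header for the first time, a minimal sketch of how
the cmpxchg() macro defined below is typically consumed may help; the
counter and the function name here are hypothetical and are not taken from
the Xen tree.  cmpxchg(ptr, o, n) returns the value that was previously at
*ptr, so a caller retries until the compare succeeds:

    /* Illustrative sketch only: atomically add 'delta' to '*counter'
     * using the cmpxchg() macro from this header.  Real Xen code would
     * normally use the atomic_*() helpers rather than open-coding this. */
    static inline unsigned long counter_add_return(volatile unsigned long *counter,
                                                   unsigned long delta)
    {
        unsigned long old, new;

        do {
            old = *counter;      /* snapshot the current value */
            new = old + delta;   /* value we would like to install */
            /* If cmpxchg() does not return 'old', another CPU updated the
             * counter between our read and the store-exclusive: retry. */
        } while (cmpxchg(counter, old, new) != old);

        return new;
    }

xchg(ptr, x) is the simpler cousin: it unconditionally installs x and
returns the previous value.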
Signed-off-by: Ian Campbell
Acked-by: Julien Grall
---
 xen/include/asm-arm/arm64/system.h | 196 ++++++++++++++++++------------------
 1 file changed, 98 insertions(+), 98 deletions(-)

diff --git a/xen/include/asm-arm/arm64/system.h b/xen/include/asm-arm/arm64/system.h
index 0db96e0..9fa698b 100644
--- a/xen/include/asm-arm/arm64/system.h
+++ b/xen/include/asm-arm/arm64/system.h
@@ -6,7 +6,7 @@ extern void __bad_xchg(volatile void *, int);
 
 static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
 {
-    unsigned long ret, tmp;
+	unsigned long ret, tmp;
 
 	switch (size) {
 	case 1:
@@ -15,8 +15,8 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 		" stlxrb %w1, %w3, %2\n"
 		" cbnz %w1, 1b\n"
 			: "=&r" (ret), "=&r" (tmp), "+Q" (*(u8 *)ptr)
-            : "r" (x)
-            : "cc", "memory");
+			: "r" (x)
+			: "cc", "memory");
 		break;
 	case 2:
 		asm volatile("// __xchg2\n"
@@ -24,8 +24,8 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 		" stlxrh %w1, %w3, %2\n"
 		" cbnz %w1, 1b\n"
 			: "=&r" (ret), "=&r" (tmp), "+Q" (*(u16 *)ptr)
-            : "r" (x)
-            : "cc", "memory");
+			: "r" (x)
+			: "cc", "memory");
 		break;
 	case 4:
 		asm volatile("// __xchg4\n"
@@ -33,8 +33,8 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 		" stlxr %w1, %w3, %2\n"
 		" cbnz %w1, 1b\n"
 			: "=&r" (ret), "=&r" (tmp), "+Q" (*(u32 *)ptr)
-            : "r" (x)
-            : "cc", "memory");
+			: "r" (x)
+			: "cc", "memory");
 		break;
 	case 8:
 		asm volatile("// __xchg8\n"
@@ -42,12 +42,12 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 		" stlxr %w1, %3, %2\n"
 		" cbnz %w1, 1b\n"
 			: "=&r" (ret), "=&r" (tmp), "+Q" (*(u64 *)ptr)
-            : "r" (x)
-            : "cc", "memory");
-        break;
-    default:
-        __bad_xchg(ptr, size), ret = 0;
-        break;
+			: "r" (x)
+			: "cc", "memory");
+		break;
+	default:
+		__bad_xchg(ptr, size), ret = 0;
+		break;
 	}
 
 	smp_mb();
@@ -55,107 +55,107 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 }
 
 #define xchg(ptr,x) \
-        ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
+	((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
 
 extern void __bad_cmpxchg(volatile void *ptr, int size);
 
 static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
-                                      unsigned long new, int size)
+				      unsigned long new, int size)
 {
-    unsigned long oldval = 0, res;
-
-    switch (size) {
-    case 1:
-        do {
-            asm volatile("// __cmpxchg1\n"
-            " ldxrb %w1, %2\n"
-            " mov %w0, #0\n"
-            " cmp %w1, %w3\n"
-            " b.ne 1f\n"
-            " stxrb %w0, %w4, %2\n"
-            "1:\n"
-                : "=&r" (res), "=&r" (oldval), "+Q" (*(u8 *)ptr)
-                : "Ir" (old), "r" (new)
-                : "cc");
-        } while (res);
-        break;
-
-    case 2:
-        do {
-            asm volatile("// __cmpxchg2\n"
-            " ldxrh %w1, %2\n"
-            " mov %w0, #0\n"
-            " cmp %w1, %w3\n"
-            " b.ne 1f\n"
-            " stxrh %w0, %w4, %2\n"
-            "1:\n"
-                : "=&r" (res), "=&r" (oldval), "+Q" (*(u16 *)ptr)
-                : "Ir" (old), "r" (new)
-                : "cc");
-        } while (res);
-        break;
-
-    case 4:
-        do {
-            asm volatile("// __cmpxchg4\n"
-            " ldxr %w1, %2\n"
-            " mov %w0, #0\n"
-            " cmp %w1, %w3\n"
-            " b.ne 1f\n"
-            " stxr %w0, %w4, %2\n"
-            "1:\n"
-                : "=&r" (res), "=&r" (oldval), "+Q" (*(u32 *)ptr)
-                : "Ir" (old), "r" (new)
-                : "cc");
-        } while (res);
-        break;
-
-    case 8:
-        do {
-            asm volatile("// __cmpxchg8\n"
-            " ldxr %1, %2\n"
-            " mov %w0, #0\n"
-            " cmp %1, %3\n"
-            " b.ne 1f\n"
-            " stxr %w0, %4, %2\n"
-            "1:\n"
-                : "=&r" (res), "=&r" (oldval), "+Q" (*(u64 *)ptr)
-                : "Ir" (old), "r" (new)
-                : "cc");
-        } while (res);
-        break;
-
-    default:
+	unsigned long oldval = 0, res;
+
+	switch (size) {
+	case 1:
+		do {
+			asm volatile("// __cmpxchg1\n"
+			" ldxrb %w1, %2\n"
+			" mov %w0, #0\n"
+			" cmp %w1, %w3\n"
+			" b.ne 1f\n"
+			" stxrb %w0, %w4, %2\n"
+			"1:\n"
+				: "=&r" (res), "=&r" (oldval), "+Q" (*(u8 *)ptr)
+				: "Ir" (old), "r" (new)
+				: "cc");
+		} while (res);
+		break;
+
+	case 2:
+		do {
+			asm volatile("// __cmpxchg2\n"
+			" ldxrh %w1, %2\n"
+			" mov %w0, #0\n"
+			" cmp %w1, %w3\n"
+			" b.ne 1f\n"
+			" stxrh %w0, %w4, %2\n"
+			"1:\n"
+				: "=&r" (res), "=&r" (oldval), "+Q" (*(u16 *)ptr)
+				: "Ir" (old), "r" (new)
+				: "cc");
+		} while (res);
+		break;
+
+	case 4:
+		do {
+			asm volatile("// __cmpxchg4\n"
+			" ldxr %w1, %2\n"
+			" mov %w0, #0\n"
+			" cmp %w1, %w3\n"
+			" b.ne 1f\n"
+			" stxr %w0, %w4, %2\n"
+			"1:\n"
+				: "=&r" (res), "=&r" (oldval), "+Q" (*(u32 *)ptr)
+				: "Ir" (old), "r" (new)
+				: "cc");
+		} while (res);
+		break;
+
+	case 8:
+		do {
+			asm volatile("// __cmpxchg8\n"
+			" ldxr %1, %2\n"
+			" mov %w0, #0\n"
+			" cmp %1, %3\n"
+			" b.ne 1f\n"
+			" stxr %w0, %4, %2\n"
+			"1:\n"
+				: "=&r" (res), "=&r" (oldval), "+Q" (*(u64 *)ptr)
+				: "Ir" (old), "r" (new)
+				: "cc");
+		} while (res);
+		break;
+
+	default:
 		__bad_cmpxchg(ptr, size);
 		oldval = 0;
-    }
+	}
 
-    return oldval;
+	return oldval;
 }
 
 static inline unsigned long __cmpxchg_mb(volatile void *ptr, unsigned long old,
-                                         unsigned long new, int size)
+					 unsigned long new, int size)
 {
-    unsigned long ret;
+	unsigned long ret;
 
-    smp_mb();
-    ret = __cmpxchg(ptr, old, new, size);
-    smp_mb();
+	smp_mb();
+	ret = __cmpxchg(ptr, old, new, size);
+	smp_mb();
 
-    return ret;
+	return ret;
 }
 
-#define cmpxchg(ptr,o,n) \
-        ((__typeof__(*(ptr)))__cmpxchg_mb((ptr), \
-                                          (unsigned long)(o), \
-                                          (unsigned long)(n), \
-                                          sizeof(*(ptr))))
-
-#define cmpxchg_local(ptr,o,n) \
-        ((__typeof__(*(ptr)))__cmpxchg((ptr), \
-                                       (unsigned long)(o), \
-                                       (unsigned long)(n), \
-                                       sizeof(*(ptr))))
+#define cmpxchg(ptr,o,n) \
+	((__typeof__(*(ptr)))__cmpxchg_mb((ptr), \
+					  (unsigned long)(o), \
+					  (unsigned long)(n), \
+					  sizeof(*(ptr))))
+
+#define cmpxchg_local(ptr,o,n) \
+	((__typeof__(*(ptr)))__cmpxchg((ptr), \
+				       (unsigned long)(o), \
+				       (unsigned long)(n), \
+				       sizeof(*(ptr))))
 
 /* Uses uimm4 as a bitmask to select the clearing of one or more of
  * the DAIF exception mask bits:
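
For background on the comment the last hunk ends in: the uimm4 immediate
accepted by "msr daifclr, #imm" and "msr daifset, #imm" is a bitmask over
the PSTATE exception mask bits, with D = 8, A = 4, I = 2 and F = 1.
A hypothetical pair of helpers written in the same style (the names below
are illustrative, not the header's actual definitions) would look like:

    /* Unmask IRQs only: clear the PSTATE.I bit (value 2 in the uimm4 mask). */
    static inline void example_local_irq_enable(void)
    {
        asm volatile ( "msr daifclr, #2" : : : "memory" );
    }

    /* Mask IRQs only: set the PSTATE.I bit. */
    static inline void example_local_irq_disable(void)
    {
        asm volatile ( "msr daifset, #2" : : : "memory" );
    }

Clearing a bit unmasks (allows delivery of) that exception class; setting it
masks it again.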