From patchwork Wed Mar 26 13:38:39 2014
X-Patchwork-Submitter: Ian Campbell
X-Patchwork-Id: 27123
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: julien.grall@linaro.org, tim@xen.org, Ian Campbell,
    stefano.stabellini@eu.citrix.com
Date: Wed, 26 Mar 2014 13:38:39 +0000
Message-ID: <1395841133-2223-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1395841009.12547.11.camel@kazak.uk.xensource.com>
References: <1395841009.12547.11.camel@kazak.uk.xensource.com>
Subject: [Xen-devel] [PATCH v2 04/17] xen: arm32: replace hard tabs in atomics.h

This file is from Linux and the intention was to keep the formatting the
same to make resyncing easier. Put the hard tabs back and adjust the emacs
magic to reflect the desired use of whitespace. Adjust the 64-bit emacs
magic too.
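Concretely, the "emacs magic" is the file-local variables comment at the
bottom of each header; after this patch both files end with roughly the
following block (see the hunks below), telling Emacs to edit them BSD-style
with an 8-column offset and hard tabs:

/*
 * Local variables:
 * mode: C
 * c-file-style: "BSD"
 * c-basic-offset: 8
 * indent-tabs-mode: t
 * End:
 */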
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
 xen/include/asm-arm/arm32/atomic.h | 166 ++++++++++++++++++------------------
 xen/include/asm-arm/arm64/atomic.h |   4 +-
 2 files changed, 85 insertions(+), 85 deletions(-)

diff --git a/xen/include/asm-arm/arm32/atomic.h b/xen/include/asm-arm/arm32/atomic.h
index 523c745..3f024d4 100644
--- a/xen/include/asm-arm/arm32/atomic.h
+++ b/xen/include/asm-arm/arm32/atomic.h
@@ -18,122 +18,122 @@
  */
 static inline void atomic_add(int i, atomic_t *v)
 {
-    unsigned long tmp;
-    int result;
-
-    __asm__ __volatile__("@ atomic_add\n"
-"1: ldrex %0, [%3]\n"
-" add %0, %0, %4\n"
-" strex %1, %0, [%3]\n"
-" teq %1, #0\n"
-" bne 1b"
-    : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-    : "r" (&v->counter), "Ir" (i)
-    : "cc");
+	unsigned long tmp;
+	int result;
+
+	__asm__ __volatile__("@ atomic_add\n"
+"1:	ldrex	%0, [%3]\n"
+"	add	%0, %0, %4\n"
+"	strex	%1, %0, [%3]\n"
+"	teq	%1, #0\n"
+"	bne	1b"
+	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
+	: "r" (&v->counter), "Ir" (i)
+	: "cc");
 }
 
 static inline int atomic_add_return(int i, atomic_t *v)
 {
-    unsigned long tmp;
-    int result;
+	unsigned long tmp;
+	int result;
 
-    smp_mb();
+	smp_mb();
 
-    __asm__ __volatile__("@ atomic_add_return\n"
-"1: ldrex %0, [%3]\n"
-" add %0, %0, %4\n"
-" strex %1, %0, [%3]\n"
-" teq %1, #0\n"
-" bne 1b"
-    : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-    : "r" (&v->counter), "Ir" (i)
-    : "cc");
+	__asm__ __volatile__("@ atomic_add_return\n"
+"1:	ldrex	%0, [%3]\n"
+"	add	%0, %0, %4\n"
+"	strex	%1, %0, [%3]\n"
+"	teq	%1, #0\n"
+"	bne	1b"
+	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
+	: "r" (&v->counter), "Ir" (i)
+	: "cc");
 
-    smp_mb();
+	smp_mb();
 
-    return result;
+	return result;
 }
 
 static inline void atomic_sub(int i, atomic_t *v)
 {
-    unsigned long tmp;
-    int result;
-
-    __asm__ __volatile__("@ atomic_sub\n"
-"1: ldrex %0, [%3]\n"
-" sub %0, %0, %4\n"
-" strex %1, %0, [%3]\n"
-" teq %1, #0\n"
-" bne 1b"
-    : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-    : "r" (&v->counter), "Ir" (i)
-    : "cc");
+	unsigned long tmp;
+	int result;
+
+	__asm__ __volatile__("@ atomic_sub\n"
+"1:	ldrex	%0, [%3]\n"
+"	sub	%0, %0, %4\n"
+"	strex	%1, %0, [%3]\n"
+"	teq	%1, #0\n"
+"	bne	1b"
+	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
+	: "r" (&v->counter), "Ir" (i)
+	: "cc");
 }
 
 static inline int atomic_sub_return(int i, atomic_t *v)
 {
-    unsigned long tmp;
-    int result;
+	unsigned long tmp;
+	int result;
 
-    smp_mb();
+	smp_mb();
 
-    __asm__ __volatile__("@ atomic_sub_return\n"
-"1: ldrex %0, [%3]\n"
-" sub %0, %0, %4\n"
-" strex %1, %0, [%3]\n"
-" teq %1, #0\n"
-" bne 1b"
-    : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-    : "r" (&v->counter), "Ir" (i)
-    : "cc");
+	__asm__ __volatile__("@ atomic_sub_return\n"
+"1:	ldrex	%0, [%3]\n"
+"	sub	%0, %0, %4\n"
+"	strex	%1, %0, [%3]\n"
+"	teq	%1, #0\n"
+"	bne	1b"
+	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
+	: "r" (&v->counter), "Ir" (i)
+	: "cc");
 
-    smp_mb();
+	smp_mb();
 
-    return result;
+	return result;
 }
 
 static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
 {
-    unsigned long oldval, res;
+	unsigned long oldval, res;
 
-    smp_mb();
+	smp_mb();
 
-    do {
-        __asm__ __volatile__("@ atomic_cmpxchg\n"
-        "ldrex %1, [%3]\n"
-        "mov %0, #0\n"
-        "teq %1, %4\n"
-        "strexeq %0, %5, [%3]\n"
-        : "=&r" (res), "=&r" (oldval), "+Qo" (ptr->counter)
-        : "r" (&ptr->counter), "Ir" (old), "r" (new)
-        : "cc");
-    } while (res);
+	do {
+		__asm__ __volatile__("@ atomic_cmpxchg\n"
+		"ldrex	%1, [%3]\n"
+		"mov	%0, #0\n"
+		"teq	%1, %4\n"
+		"strexeq %0, %5, [%3]\n"
+		: "=&r" (res), "=&r" (oldval), "+Qo" (ptr->counter)
+		: "r" (&ptr->counter), "Ir" (old), "r" (new)
+		: "cc");
+	} while (res);
 
-    smp_mb();
+	smp_mb();
 
-    return oldval;
+	return oldval;
 }
 
 static inline void atomic_clear_mask(unsigned long mask, unsigned long *addr)
 {
-    unsigned long tmp, tmp2;
-
-    __asm__ __volatile__("@ atomic_clear_mask\n"
-"1: ldrex %0, [%3]\n"
-" bic %0, %0, %4\n"
-" strex %1, %0, [%3]\n"
-" teq %1, #0\n"
-" bne 1b"
-    : "=&r" (tmp), "=&r" (tmp2), "+Qo" (*addr)
-    : "r" (addr), "Ir" (mask)
-    : "cc");
+	unsigned long tmp, tmp2;
+
+	__asm__ __volatile__("@ atomic_clear_mask\n"
+"1:	ldrex	%0, [%3]\n"
+"	bic	%0, %0, %4\n"
+"	strex	%1, %0, [%3]\n"
+"	teq	%1, #0\n"
+"	bne	1b"
+	: "=&r" (tmp), "=&r" (tmp2), "+Qo" (*addr)
+	: "r" (addr), "Ir" (mask)
+	: "cc");
 }
 
-#define atomic_inc(v) atomic_add(1, v)
-#define atomic_dec(v) atomic_sub(1, v)
+#define atomic_inc(v)	atomic_add(1, v)
+#define atomic_dec(v)	atomic_sub(1, v)
 
-#define atomic_inc_and_test(v) (atomic_add_return(1, v) == 0)
-#define atomic_dec_and_test(v) (atomic_sub_return(1, v) == 0)
+#define atomic_inc_and_test(v)	(atomic_add_return(1, v) == 0)
+#define atomic_dec_and_test(v)	(atomic_sub_return(1, v) == 0)
 #define atomic_inc_return(v) (atomic_add_return(1, v))
 #define atomic_dec_return(v) (atomic_sub_return(1, v))
 #define atomic_sub_and_test(i, v) (atomic_sub_return(i, v) == 0)
@@ -145,7 +145,7 @@ static inline void atomic_clear_mask(unsigned long mask, unsigned long *addr)
  * Local variables:
  * mode: C
  * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
+ * c-basic-offset: 8
+ * indent-tabs-mode: t
  * End:
  */
diff --git a/xen/include/asm-arm/arm64/atomic.h b/xen/include/asm-arm/arm64/atomic.h
index a279755..b04e6d5 100644
--- a/xen/include/asm-arm/arm64/atomic.h
+++ b/xen/include/asm-arm/arm64/atomic.h
@@ -157,7 +157,7 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)
  * Local variables:
  * mode: C
  * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
+ * c-basic-offset: 8
+ * indent-tabs-mode: t
  * End:
  */