From patchwork Fri Sep 5 23:24:20 2014
X-Patchwork-Submitter: Behan Webster
X-Patchwork-Id: 36916
From: behanw@converseincode.com
To: anderson@redhat.com, catalin.marinas@arm.com, cl@linux.com,
	cov@codeaurora.org, jays.lee@samsung.com, msalter@redhat.com,
	sandeepa.prabhu@linaro.org, srivatsa.bhat@linux.vnet.ibm.com,
	steve.capper@linaro.org, sudeep.karkadanagesha@arm.com,
	takahiro.akashi@linaro.org, Vijaya.Kumar@caviumnetworks.com,
	will.deacon@arm.com
Cc: a.p.zijlstra@chello.nl, acme@kernel.org, akpm@linux-foundation.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	lorenzo.pieralisi@arm.com, marc.zyngier@arm.com, Matthew.Leach@arm.com,
	mingo@redhat.com, olof@lixom.net, paulus@samba.org,
	Mark Charlebois, Behan Webster
Subject: [PATCH] arm64: LLVMLinux: Fix inline arm64 assembly for use with clang
Date: Fri, 5 Sep 2014 16:24:20 -0700
Message-Id: <1409959460-15989-1-git-send-email-behanw@converseincode.com>

From: Mark Charlebois

Fix variable types for 64-bit inline assembly. This patch now works with
both gcc and clang.
Signed-off-by: Mark Charlebois
Signed-off-by: Behan Webster
---
 arch/arm64/include/asm/arch_timer.h | 26 +++++++++++++++-----------
 arch/arm64/include/asm/uaccess.h    |  2 +-
 arch/arm64/kernel/debug-monitors.c  |  8 ++++----
 arch/arm64/kernel/perf_event.c      | 34 +++++++++++++++++-----------------
 arch/arm64/mm/mmu.c                 |  2 +-
 5 files changed, 38 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/include/asm/arch_timer.h b/arch/arm64/include/asm/arch_timer.h
index 9400596..c1f87e0 100644
--- a/arch/arm64/include/asm/arch_timer.h
+++ b/arch/arm64/include/asm/arch_timer.h
@@ -37,19 +37,23 @@ void arch_timer_reg_write_cp15(int access, enum arch_timer_reg reg, u32 val)
 	if (access == ARCH_TIMER_PHYS_ACCESS) {
 		switch (reg) {
 		case ARCH_TIMER_REG_CTRL:
-			asm volatile("msr cntp_ctl_el0, %0" : : "r" (val));
+			asm volatile("msr cntp_ctl_el0, %0"
+				: : "r" ((u64)val));
 			break;
 		case ARCH_TIMER_REG_TVAL:
-			asm volatile("msr cntp_tval_el0, %0" : : "r" (val));
+			asm volatile("msr cntp_tval_el0, %0"
+				: : "r" ((u64)val));
 			break;
 		}
 	} else if (access == ARCH_TIMER_VIRT_ACCESS) {
 		switch (reg) {
 		case ARCH_TIMER_REG_CTRL:
-			asm volatile("msr cntv_ctl_el0, %0" : : "r" (val));
+			asm volatile("msr cntv_ctl_el0, %0"
+				: : "r" ((u64)val));
 			break;
 		case ARCH_TIMER_REG_TVAL:
-			asm volatile("msr cntv_tval_el0, %0" : : "r" (val));
+			asm volatile("msr cntv_tval_el0, %0"
+				: : "r" ((u64)val));
 			break;
 		}
 	}
@@ -60,7 +64,7 @@ void arch_timer_reg_write_cp15(int access, enum arch_timer_reg reg, u32 val)
 static __always_inline
 u32 arch_timer_reg_read_cp15(int access, enum arch_timer_reg reg)
 {
-	u32 val;
+	u64 val;
 
 	if (access == ARCH_TIMER_PHYS_ACCESS) {
 		switch (reg) {
@@ -82,26 +86,26 @@ u32 arch_timer_reg_read_cp15(int access, enum arch_timer_reg reg)
 		}
 	}
 
-	return val;
+	return (u32)val;
 }
 
 static inline u32 arch_timer_get_cntfrq(void)
 {
-	u32 val;
+	u64 val;
 	asm volatile("mrs %0, cntfrq_el0" : "=r" (val));
-	return val;
+	return (u32)val;
 }
 
 static inline u32 arch_timer_get_cntkctl(void)
 {
-	u32 cntkctl;
+	u64 cntkctl;
 	asm volatile("mrs %0, cntkctl_el1" : "=r" (cntkctl));
-	return cntkctl;
+	return (u32)cntkctl;
 }
 
 static inline void arch_timer_set_cntkctl(u32 cntkctl)
 {
-	asm volatile("msr cntkctl_el1, %0" : : "r" (cntkctl));
+	asm volatile("msr cntkctl_el1, %0" : : "r" ((u64)cntkctl));
 }
 
 static inline void arch_counter_set_user_access(void)
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 3bf8f4e..104719b 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -93,7 +93,7 @@ static inline void set_fs(mm_segment_t fs)
 	__chk_user_ptr(addr);						\
 	asm("adds %1, %1, %3; ccmp %1, %4, #2, cc; cset %0, ls"		\
 		: "=&r" (flag), "=&r" (roksum)				\
-		: "1" (addr), "Ir" (size),				\
+		: "1" (addr), "r" ((u64)size),				\
 		  "r" (current_thread_info()->addr_limit)		\
 		: "cc");						\
 	flag;								\
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index b056369..695a18f 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -43,15 +43,15 @@ static void mdscr_write(u32 mdscr)
 {
 	unsigned long flags;
 	local_dbg_save(flags);
-	asm volatile("msr mdscr_el1, %0" :: "r" (mdscr));
+	asm volatile("msr mdscr_el1, %0" : : "r" ((u64)mdscr));
 	local_dbg_restore(flags);
 }
 
 static u32 mdscr_read(void)
 {
-	u32 mdscr;
+	u64 mdscr;
 	asm volatile("mrs %0, mdscr_el1" : "=r" (mdscr));
-	return mdscr;
+	return (u32)mdscr;
 }
 
 /*
@@ -127,7 +127,7 @@ void disable_debug_monitors(enum debug_el el)
  */
 static void clear_os_lock(void *unused)
 {
-	asm volatile("msr oslar_el1, %0" : : "r" (0));
+	asm volatile("msr oslar_el1, %0" : : "r" ((u64)0));
 }
 
 static int os_lock_notify(struct notifier_block *self,
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index baf5afb..f2e399c 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -844,16 +844,16 @@ static const unsigned armv8_pmuv3_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
 static inline u32 armv8pmu_pmcr_read(void)
 {
-	u32 val;
+	u64 val;
 	asm volatile("mrs %0, pmcr_el0" : "=r" (val));
-	return val;
+	return (u32)val;
 }
 
 static inline void armv8pmu_pmcr_write(u32 val)
 {
 	val &= ARMV8_PMCR_MASK;
 	isb();
-	asm volatile("msr pmcr_el0, %0" :: "r" (val));
+	asm volatile("msr pmcr_el0, %0" : : "r" ((u64)val));
 }
 
 static inline int armv8pmu_has_overflowed(u32 pmovsr)
@@ -893,7 +893,7 @@ static inline int armv8pmu_select_counter(int idx)
 	}
 
 	counter = ARMV8_IDX_TO_COUNTER(idx);
-	asm volatile("msr pmselr_el0, %0" :: "r" (counter));
+	asm volatile("msr pmselr_el0, %0" : : "r" ((u64)counter));
 	isb();
 
 	return idx;
@@ -901,7 +901,7 @@ static inline int armv8pmu_select_counter(int idx)
 
 static inline u32 armv8pmu_read_counter(int idx)
 {
-	u32 value = 0;
+	u64 value = 0;
 
 	if (!armv8pmu_counter_valid(idx))
 		pr_err("CPU%u reading wrong counter %d\n",
@@ -911,7 +911,7 @@ static inline u32 armv8pmu_read_counter(int idx)
 	else if (armv8pmu_select_counter(idx) == idx)
 		asm volatile("mrs %0, pmxevcntr_el0" : "=r" (value));
 
-	return value;
+	return (u32)value;
 }
 
 static inline void armv8pmu_write_counter(int idx, u32 value)
@@ -920,16 +920,16 @@ static inline void armv8pmu_write_counter(int idx, u32 value)
 		pr_err("CPU%u writing wrong counter %d\n",
 			smp_processor_id(), idx);
 	else if (idx == ARMV8_IDX_CYCLE_COUNTER)
-		asm volatile("msr pmccntr_el0, %0" :: "r" (value));
+		asm volatile("msr pmccntr_el0, %0" : : "r" ((u64)value));
 	else if (armv8pmu_select_counter(idx) == idx)
-		asm volatile("msr pmxevcntr_el0, %0" :: "r" (value));
+		asm volatile("msr pmxevcntr_el0, %0" : : "r" ((u64)value));
 }
 
 static inline void armv8pmu_write_evtype(int idx, u32 val)
 {
 	if (armv8pmu_select_counter(idx) == idx) {
 		val &= ARMV8_EVTYPE_MASK;
-		asm volatile("msr pmxevtyper_el0, %0" :: "r" (val));
+		asm volatile("msr pmxevtyper_el0, %0" : : "r" ((u64)val));
 	}
 }
 
@@ -944,7 +944,7 @@ static inline int armv8pmu_enable_counter(int idx)
 	}
 
 	counter = ARMV8_IDX_TO_COUNTER(idx);
-	asm volatile("msr pmcntenset_el0, %0" :: "r" (BIT(counter)));
+	asm volatile("msr pmcntenset_el0, %0" : : "r" ((u64)BIT(counter)));
 
 	return idx;
 }
@@ -959,7 +959,7 @@ static inline int armv8pmu_disable_counter(int idx)
 	}
 
 	counter = ARMV8_IDX_TO_COUNTER(idx);
-	asm volatile("msr pmcntenclr_el0, %0" :: "r" (BIT(counter)));
+	asm volatile("msr pmcntenclr_el0, %0" : : "r" ((u64)BIT(counter)));
 
 	return idx;
 }
@@ -974,7 +974,7 @@ static inline int armv8pmu_enable_intens(int idx)
 	}
 
 	counter = ARMV8_IDX_TO_COUNTER(idx);
-	asm volatile("msr pmintenset_el1, %0" :: "r" (BIT(counter)));
+	asm volatile("msr pmintenset_el1, %0" : : "r" ((u64)BIT(counter)));
 
 	return idx;
 }
@@ -989,17 +989,17 @@ static inline int armv8pmu_disable_intens(int idx)
 	}
 
 	counter = ARMV8_IDX_TO_COUNTER(idx);
-	asm volatile("msr pmintenclr_el1, %0" :: "r" (BIT(counter)));
+	asm volatile("msr pmintenclr_el1, %0" : : "r" ((u64)BIT(counter)));
 	isb();
 	/* Clear the overflow flag in case an interrupt is pending. */
-	asm volatile("msr pmovsclr_el0, %0" :: "r" (BIT(counter)));
+	asm volatile("msr pmovsclr_el0, %0" : : "r" ((u64)BIT(counter)));
 	isb();
 
 	return idx;
 }
 
 static inline u32 armv8pmu_getreset_flags(void)
 {
-	u32 value;
+	u64 value;
 
 	/* Read */
 	asm volatile("mrs %0, pmovsclr_el0" : "=r" (value));
@@ -1008,7 +1008,7 @@ static inline u32 armv8pmu_getreset_flags(void)
 	value &= ARMV8_OVSR_MASK;
 	asm volatile("msr pmovsclr_el0, %0" :: "r" (value));
 
-	return value;
+	return (u32)value;
 }
 
 static void armv8pmu_enable_event(struct hw_perf_event *hwc, int idx)
@@ -1217,7 +1217,7 @@ static void armv8pmu_reset(void *info)
 	armv8pmu_pmcr_write(ARMV8_PMCR_P | ARMV8_PMCR_C);
 
 	/* Disable access from userspace. */
-	asm volatile("msr pmuserenr_el0, %0" :: "r" (0));
+	asm volatile("msr pmuserenr_el0, %0" : : "r" ((u64)0));
 }
 
 static int armv8_pmuv3_map_event(struct perf_event *event)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index c555672..6894ef3 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -94,7 +94,7 @@ static int __init early_cachepolicy(char *p)
 	 */
 	asm volatile(
 	"	mrs	%0, mair_el1\n"
-	"	bfi	%0, %1, #%2, #8\n"
+	"	bfi	%0, %1, %2, #8\n"
 	"	msr	mair_el1, %0\n"
 	"	isb\n"
 	: "=&r" (tmp)