From patchwork Tue Aug 16 10:45:09 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Punit Agrawal
X-Patchwork-Id: 74007
Delivered-To: patch@linaro.org
From: Punit Agrawal
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland, Christoffer Dall, Marc Zyngier, Steven Rostedt,
	Ingo Molnar, Will Deacon, Catalin Marinas, Punit Agrawal
Subject: [RFC PATCH 4/7] arm64: tlbflush.h:
 add __tlbi() macro
Date: Tue, 16 Aug 2016 11:45:09 +0100
Message-Id: <1471344312-26685-5-git-send-email-punit.agrawal@arm.com>
X-Mailer: git-send-email 2.8.1
In-Reply-To: <1471344312-26685-1-git-send-email-punit.agrawal@arm.com>
References: <1471344312-26685-1-git-send-email-punit.agrawal@arm.com>
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Mark Rutland

As with dsb() and isb(), add a __tlbi() helper so that we can avoid
distracting asm boilerplate every time we want a TLBI. As some TLBI
operations take an argument while others do not, some preprocessor logic
is used to handle these two cases with different assembly blocks.

The existing tlbflush.h code is moved over to use the helper.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Marc Zyngier
Cc: Will Deacon
[ rename helper to __tlbi, update commit log ]
Signed-off-by: Punit Agrawal
---
 arch/arm64/include/asm/tlbflush.h | 31 +++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

-- 
2.8.1

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index b460ae2..d57a0be 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -25,6 +25,21 @@
 #include

 /*
+ * Raw TLBI operations. Drivers and most kernel code should use the TLB
+ * management routines below in preference to these. Where necessary, these can
+ * be used to avoid asm() boilerplate.
+ *
+ * Can be used as __tlbi(op) or __tlbi(op, arg), depending on whether a
+ * particular TLBI op takes an argument or not. The macros below handle invoking
+ * the asm with or without the register argument as appropriate.
+ */
+#define TLBI_0(op, arg)		asm ("tlbi " #op)
+#define TLBI_1(op, arg)		asm ("tlbi " #op ", %0" : : "r" (arg))
+#define TLBI_N(op, arg, n, ...)	TLBI_##n(op, arg)
+
+#define __tlbi(op, ...)		TLBI_N(op, ##__VA_ARGS__, 1, 0)
+
+/*
  * TLB Management
  * ==============
  *
@@ -66,7 +81,7 @@
 static inline void local_flush_tlb_all(void)
 {
 	dsb(nshst);
-	asm("tlbi vmalle1");
+	__tlbi(vmalle1);
 	dsb(nsh);
 	isb();
 }
@@ -74,7 +89,7 @@ static inline void local_flush_tlb_all(void)
 static inline void flush_tlb_all(void)
 {
 	dsb(ishst);
-	asm("tlbi vmalle1is");
+	__tlbi(vmalle1is);
 	dsb(ish);
 	isb();
 }
@@ -84,7 +99,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 	unsigned long asid = ASID(mm) << 48;

 	dsb(ishst);
-	asm("tlbi aside1is, %0" : : "r" (asid));
+	__tlbi(aside1is, asid);
 	dsb(ish);
 }
@@ -94,7 +109,7 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 	unsigned long addr = uaddr >> 12 | (ASID(vma->vm_mm) << 48);

 	dsb(ishst);
-	asm("tlbi vale1is, %0" : : "r" (addr));
+	__tlbi(vale1is, addr);
 	dsb(ish);
 }
@@ -122,9 +137,9 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ishst);
 	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) {
 		if (last_level)
-			asm("tlbi vale1is, %0" : : "r"(addr));
+			__tlbi(vale1is, addr);
 		else
-			asm("tlbi vae1is, %0" : : "r"(addr));
+			__tlbi(vae1is, addr);
 	}
 	dsb(ish);
 }
@@ -149,7 +164,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
 	dsb(ishst);
 	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
-		asm("tlbi vaae1is, %0" : : "r"(addr));
+		__tlbi(vaae1is, addr);
 	dsb(ish);
 	isb();
 }
@@ -163,7 +178,7 @@ static inline void __flush_tlb_pgtable(struct mm_struct *mm,
 {
 	unsigned long addr = uaddr >> 12 | (ASID(mm) << 48);

-	asm("tlbi vae1is, %0" : : "r" (addr));
+	__tlbi(vae1is, addr);
 	dsb(ish);
 }