From patchwork Thu Aug 30 16:15:39 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 145540
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org,
	npiggin@gmail.com, catalin.marinas@arm.com,
	linux-arm-kernel@lists.infradead.org, Will Deacon
Subject: [PATCH 05/12] arm64: tlbflush: Allow stride to be specified for __flush_tlb_range()
Date: Thu, 30 Aug 2018 17:15:39 +0100
Message-Id: <1535645747-9823-6-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
References: <1535645747-9823-1-git-send-email-will.deacon@arm.com>

When we are unmapping intermediate page-table entries or huge pages, we
don't need to issue a TLBI instruction for every PAGE_SIZE chunk in the
VA range being unmapped.

Allow the invalidation stride to be passed to __flush_tlb_range(), and
adjust our "just nuke the ASID" heuristic to take this into account.

Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/tlb.h      |  2 +-
 arch/arm64/include/asm/tlbflush.h | 15 +++++++++------
 2 files changed, 10 insertions(+), 7 deletions(-)

-- 
2.1.4

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index a3233167be60..1e1f68ce28f4 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -53,7 +53,7 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 	 * the __(pte|pmd|pud)_free_tlb() functions, so last level
 	 * TLBI is sufficient here.
 	 */
-	__flush_tlb_range(&vma, tlb->start, tlb->end, true);
+	__flush_tlb_range(&vma, tlb->start, tlb->end, PAGE_SIZE, true);
 }
 
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index ddbf1718669d..37ccdb246b20 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -149,25 +149,28 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
  * This is meant to avoid soft lock-ups on large TLB flushing ranges and not
  * necessarily a performance improvement.
  */
-#define MAX_TLB_RANGE	(1024UL << PAGE_SHIFT)
+#define MAX_TLBI_OPS	1024UL
 
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
-				     bool last_level)
+				     unsigned long stride, bool last_level)
 {
 	unsigned long asid = ASID(vma->vm_mm);
 	unsigned long addr;
 
-	if ((end - start) > MAX_TLB_RANGE) {
+	if ((end - start) > (MAX_TLBI_OPS * stride)) {
 		flush_tlb_mm(vma->vm_mm);
 		return;
 	}
 
+	/* Convert the stride into units of 4k */
+	stride >>= 12;
+
 	start = __TLBI_VADDR(start, asid);
 	end = __TLBI_VADDR(end, asid);
 
 	dsb(ishst);
-	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) {
+	for (addr = start; addr < end; addr += stride) {
 		if (last_level) {
 			__tlbi(vale1is, addr);
 			__tlbi_user(vale1is, addr);
@@ -186,14 +189,14 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
 	 * We cannot use leaf-only invalidation here, since we may be invalidating
 	 * table entries as part of collapsing hugepages or moving page tables.
 	 */
-	__flush_tlb_range(vma, start, end, false);
+	__flush_tlb_range(vma, start, end, PAGE_SIZE, false);
 }
 
 static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
 	unsigned long addr;
 
-	if ((end - start) > MAX_TLB_RANGE) {
+	if ((end - start) > (MAX_TLBI_OPS * PAGE_SIZE)) {
 		flush_tlb_all();
 		return;
 	}
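
For illustration, the effect of the new stride argument on the early-out
heuristic can be modelled in plain C: a minimal userspace sketch, assuming
a 4k granule, with PAGE_SIZE, PMD_SIZE and MAX_TLBI_OPS hard-coded rather
than taken from kernel headers. The helper would_nuke_asid() is hypothetical
and only mirrors the "(end - start) > (MAX_TLBI_OPS * stride)" test in
__flush_tlb_range(); it is not part of the patch.

/*
 * Userspace model of the MAX_TLBI_OPS heuristic; a sketch, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE	4096UL			/* 4k granule assumed */
#define PMD_SIZE	(512 * PAGE_SIZE)	/* 2MiB block mapping */
#define MAX_TLBI_OPS	1024UL			/* as introduced by this patch */

/* Hypothetical helper: true when __flush_tlb_range() would fall back to
 * invalidating the whole ASID rather than issuing per-entry TLBIs. */
static bool would_nuke_asid(unsigned long start, unsigned long end,
			    unsigned long stride)
{
	return (end - start) > (MAX_TLBI_OPS * stride);
}

int main(void)
{
	unsigned long start = 0x400000000UL;
	unsigned long end   = start + (1UL << 30);	/* 1GiB VA range */

	/* Pre-patch behaviour: every caller effectively used a PAGE_SIZE
	 * stride, so unmapping 1GiB of 2MiB hugepages (only 512 entries)
	 * still nuked the whole ASID. */
	printf("stride=PAGE_SIZE: %s\n",
	       would_nuke_asid(start, end, PAGE_SIZE)
	       ? "flush_tlb_mm()" : "per-entry TLBI");

	/* Post-patch: with stride=PMD_SIZE, 512 <= MAX_TLBI_OPS, so the
	 * same range stays on the per-entry invalidation path. */
	printf("stride=PMD_SIZE:  %s\n",
	       would_nuke_asid(start, end, PMD_SIZE)
	       ? "flush_tlb_mm()" : "per-entry TLBI");

	return 0;
}

Run, the first case reports a full flush_tlb_mm() (a 1GiB range is 262144
4k pages, far more than MAX_TLBI_OPS), while the second stays on the
per-entry TLBI path (512 block entries). The "stride >>= 12" in the patch
exists because __TLBI_VADDR() expresses the TLBI address operand in 4k
units (VA >> 12), so the loop increment must be scaled the same way.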