From patchwork Tue Feb 11 15:42:29 2014
X-Patchwork-Submitter: Catalin Marinas
X-Patchwork-Id: 24471
Date: Tue, 11 Feb 2014 15:42:29 +0000
From: Catalin Marinas
To: Steve Capper
Cc: "linux-arm-kernel@lists.infradead.org", Will Deacon,
	"linux@arm.linux.org.uk", "chanho61.park@samsung.com",
	"zishen.lim@linaro.org", "patches@linaro.org",
	"gary.robertson@linaro.org", "michael.hudson@linaro.org",
	"christoffer.dall@linaro.org", Peter Zijlstra
Subject: Re: [RFC PATCH V2 3/4] arm64: mm: Enable HAVE_RCU_TABLE_FREE logic
Message-ID: <20140211154229.GF3748@arm.com>
References: <1391703531-12845-1-git-send-email-steve.capper@linaro.org>
 <1391703531-12845-4-git-send-email-steve.capper@linaro.org>
In-Reply-To: <1391703531-12845-4-git-send-email-steve.capper@linaro.org>

Hi Steve,

On Thu, Feb 06, 2014 at 04:18:50PM +0000, Steve Capper wrote:
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 6d4dd22..129bd6a 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -28,6 +28,7 @@ config ARM64
>  	select HAVE_HW_BREAKPOINT if PERF_EVENTS
>  	select HAVE_MEMBLOCK
>  	select HAVE_PERF_EVENTS
> +	select HAVE_RCU_TABLE_FREE
>  	select IRQ_DOMAIN
>  	select MODULES_USE_ELF_RELA
>  	select NO_BOOTMEM
> diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
> index 717031a..8999823 100644
> --- a/arch/arm64/include/asm/tlb.h
> +++ b/arch/arm64/include/asm/tlb.h
> @@ -27,12 +27,33 @@
>  
>  #define MMU_GATHER_BUNDLE	8
>  
> +static inline void __tlb_remove_table(void *_table)
> +{
> +	free_page_and_swap_cache((struct page *)_table);
> +}

I think you can reduce your patch to just the above (and a linux/swap.h
include) after the arm64 conversion to generic mmu_gather below.

I cc'ed Peter Z for a sanity check, some of the code is close to
https://lkml.org/lkml/2011/3/7/302, only that it's under arch/arm64.
And, of course, it needs a lot more testing.
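[A minimal sketch of that reduced patch, for illustration only: the quoted
__tlb_remove_table() hunk plus the linux/swap.h include, applied on top of
the generic mmu_gather conversion below, together with the
"select HAVE_RCU_TABLE_FREE" line from the quoted arch/arm64/Kconfig hunk.
Untested.]

#include <linux/swap.h>		/* for free_page_and_swap_cache() */

/*
 * Called by the generic HAVE_RCU_TABLE_FREE code in mm/memory.c once it is
 * safe to free a page table page (after the RCU grace period, or after the
 * fallback IPI/flush when a batch cannot be allocated).  arm64 page tables
 * are ordinary pages, so simply release the backing page.
 */
static inline void __tlb_remove_table(void *_table)
{
	free_page_and_swap_cache((struct page *)_table);
}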
-------------8<---------------------------------------

>From 01a958dfc44eb7ec697625813b3b98a705bad324 Mon Sep 17 00:00:00 2001
From: Catalin Marinas
Date: Tue, 11 Feb 2014 15:22:01 +0000
Subject: [PATCH] arm64: Convert asm/tlb.h to generic mmu_gather

Over the past couple of years, the generic mmu_gather gained range
tracking - 597e1c3580b7 (mm/mmu_gather: enable tlb flush range in generic
mmu_gather), 2b047252d087 (Fix TLB gather virtual address range
invalidation corner cases) - and tlb_fast_mode() has been removed -
29eb77825cc7 (arch, mm: Remove tlb_fast_mode()).

The new mmu_gather structure is now suitable for arm64 and this patch
converts the arch asm/tlb.h to the generic code. One functional
difference is the shift_arg_pages() case where previously the code was
flushing the full mm (no tlb_start_vma call) but now it flushes the
range given to tlb_gather_mmu() (possibly slightly more efficient
previously).

Signed-off-by: Catalin Marinas
Cc: Peter Zijlstra
---
 arch/arm64/include/asm/tlb.h | 136 +++++++------------------------------------
 1 file changed, 20 insertions(+), 116 deletions(-)

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 717031a762c2..72cadf52ca80 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -19,115 +19,44 @@
 #ifndef __ASM_TLB_H
 #define __ASM_TLB_H
 
-#include <linux/pagemap.h>
-#include <linux/swap.h>
-#include <asm/pgalloc.h>
-#include <asm/tlbflush.h>
-
-#define MMU_GATHER_BUNDLE	8
-
-/*
- * TLB handling.  This allows us to remove pages from the page
- * tables, and efficiently handle the TLB issues.
- */
-struct mmu_gather {
-	struct mm_struct	*mm;
-	unsigned int		fullmm;
-	struct vm_area_struct	*vma;
-	unsigned long		start, end;
-	unsigned long		range_start;
-	unsigned long		range_end;
-	unsigned int		nr;
-	unsigned int		max;
-	struct page		**pages;
-	struct page		*local[MMU_GATHER_BUNDLE];
-};
+#include <asm-generic/tlb.h>
 
 /*
- * This is unnecessarily complex.  There's three ways the TLB shootdown
- * code is used:
+ * There's three ways the TLB shootdown code is used:
  *  1. Unmapping a range of vmas.  See zap_page_range(), unmap_region().
  *     tlb->fullmm = 0, and tlb_start_vma/tlb_end_vma will be called.
- *     tlb->vma will be non-NULL.
  *  2. Unmapping all vmas.  See exit_mmap().
  *     tlb->fullmm = 1, and tlb_start_vma/tlb_end_vma will be called.
- *     tlb->vma will be non-NULL.  Additionally, page tables will be freed.
+ *     Page tables will be freed.
  *  3. Unmapping argument pages.  See shift_arg_pages().
  *     tlb->fullmm = 0, but tlb_start_vma/tlb_end_vma will not be called.
- *     tlb->vma will be NULL.
  */
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
-	if (tlb->fullmm || !tlb->vma)
+	if (tlb->fullmm) {
 		flush_tlb_mm(tlb->mm);
-	else if (tlb->range_end > 0) {
-		flush_tlb_range(tlb->vma, tlb->range_start, tlb->range_end);
-		tlb->range_start = TASK_SIZE;
-		tlb->range_end = 0;
+	} else if (tlb->end > 0) {
+		struct vm_area_struct vma = { .vm_mm = tlb->mm, };
+		flush_tlb_range(&vma, tlb->start, tlb->end);
+		tlb->start = TASK_SIZE;
+		tlb->end = 0;
 	}
 }
 
 static inline void tlb_add_flush(struct mmu_gather *tlb, unsigned long addr)
 {
 	if (!tlb->fullmm) {
-		if (addr < tlb->range_start)
-			tlb->range_start = addr;
-		if (addr + PAGE_SIZE > tlb->range_end)
-			tlb->range_end = addr + PAGE_SIZE;
-	}
-}
-
-static inline void __tlb_alloc_page(struct mmu_gather *tlb)
-{
-	unsigned long addr = __get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
-
-	if (addr) {
-		tlb->pages = (void *)addr;
-		tlb->max = PAGE_SIZE / sizeof(struct page *);
+		tlb->start = min(tlb->start, addr);
+		tlb->end = max(tlb->end, addr + PAGE_SIZE);
 	}
 }
 
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
-{
-	tlb_flush(tlb);
-	free_pages_and_swap_cache(tlb->pages, tlb->nr);
-	tlb->nr = 0;
-	if (tlb->pages == tlb->local)
-		__tlb_alloc_page(tlb);
-}
-
-static inline void
-tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long start, unsigned long end)
-{
-	tlb->mm = mm;
-	tlb->fullmm = !(start | (end+1));
-	tlb->start = start;
-	tlb->end = end;
-	tlb->vma = NULL;
-	tlb->max = ARRAY_SIZE(tlb->local);
-	tlb->pages = tlb->local;
-	tlb->nr = 0;
-	__tlb_alloc_page(tlb);
-}
-
-static inline void
-tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
-{
-	tlb_flush_mmu(tlb);
-
-	/* keep the page table cache within bounds */
-	check_pgt_cache();
-
-	if (tlb->pages != tlb->local)
-		free_pages((unsigned long)tlb->pages, 0);
-}
-
 /*
  * Memorize the range for the TLB flush.
  */
-static inline void
-tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long addr)
+static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
+					  unsigned long addr)
 {
 	tlb_add_flush(tlb, addr);
 }
@@ -137,38 +66,24 @@ tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long addr)
  * case where we're doing a full MM flush.  When we're doing a munmap,
  * the vmas are adjusted to only cover the region to be torn down.
  */
-static inline void
-tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
+static inline void tlb_start_vma(struct mmu_gather *tlb,
+				 struct vm_area_struct *vma)
 {
 	if (!tlb->fullmm) {
-		tlb->vma = vma;
-		tlb->range_start = TASK_SIZE;
-		tlb->range_end = 0;
+		tlb->start = TASK_SIZE;
+		tlb->end = 0;
 	}
 }
 
-static inline void
-tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
+static inline void tlb_end_vma(struct mmu_gather *tlb,
+			       struct vm_area_struct *vma)
 {
 	if (!tlb->fullmm)
 		tlb_flush(tlb);
 }
 
-static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	tlb->pages[tlb->nr++] = page;
-	VM_BUG_ON(tlb->nr > tlb->max);
-	return tlb->max - tlb->nr;
-}
-
-static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	if (!__tlb_remove_page(tlb, page))
-		tlb_flush_mmu(tlb);
-}
-
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
-	unsigned long addr)
+				  unsigned long addr)
 {
 	pgtable_page_dtor(pte);
 	tlb_add_flush(tlb, addr);
@@ -184,16 +99,5 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 }
 #endif
 
-#define pte_free_tlb(tlb, ptep, addr)	__pte_free_tlb(tlb, ptep, addr)
-#define pmd_free_tlb(tlb, pmdp, addr)	__pmd_free_tlb(tlb, pmdp, addr)
-#define pud_free_tlb(tlb, pudp, addr)	pud_free((tlb)->mm, pudp)
-
-#define tlb_migrate_finish(mm)		do { } while (0)
-
-static inline void
-tlb_remove_pmd_tlb_entry(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr)
-{
-	tlb_add_flush(tlb, addr);
-}
 
 #endif