From patchwork Thu Feb 6 16:18:48 2014
X-Patchwork-Submitter: Steve Capper <steve.capper@linaro.org>
X-Patchwork-Id: 24262
From: Steve Capper <steve.capper@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: will.deacon@arm.com, catalin.marinas@arm.com, linux@arm.linux.org.uk,
    chanho61.park@samsung.com, zishen.lim@linaro.org, patches@linaro.org,
    gary.robertson@linaro.org, michael.hudson@linaro.org,
    christoffer.dall@linaro.org, Steve Capper <steve.capper@linaro.org>
Subject: [RFC PATCH V2 1/4] arm: mm: Enable HAVE_RCU_TABLE_FREE logic
Date: Thu, 6 Feb 2014 16:18:48 +0000
Message-Id: <1391703531-12845-2-git-send-email-steve.capper@linaro.org>
In-Reply-To: <1391703531-12845-1-git-send-email-steve.capper@linaro.org>
References: <1391703531-12845-1-git-send-email-steve.capper@linaro.org>

In order to implement fast_get_user_pages we need to ensure that the
page table walker is protected from page table pages being freed from
under it.

One way to achieve this is to have the walker disable interrupts, and
rely on IPIs from the TLB flushing code blocking before the page table
pages are freed.

On some ARM platforms we have hardware broadcast of TLB invalidation,
so the TLB flushing code won't necessarily send IPIs. Also, spuriously
broadcasting IPIs can hurt system performance if done too often.

This problem has already been solved on PowerPC and Sparc by batching
up page table pages belonging to mms with more than one user, then
scheduling an rcu_sched callback to free the pages. A CPU that has
interrupts disabled cannot pass through an rcu_sched quiescent state,
so a walker that disables interrupts still blocks the page table pages
from being freed under it. This logic has also been promoted to core
code and is activated when one enables HAVE_RCU_TABLE_FREE.

This patch enables HAVE_RCU_TABLE_FREE and incorporates it into the
existing ARM TLB logic.
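To make the ordering concrete, here is a minimal, hypothetical sketch of
the walker side of this scheme. The actual fast_get_user_pages
implementation arrives later in this series; walk_user_range_fast and
do_lockless_walk below are placeholder names for illustration, not
kernel API:

static int do_lockless_walk(struct mm_struct *mm, unsigned long start,
			    unsigned long end); /* placeholder walker */

/*
 * Hypothetical illustration, not part of this patch. With
 * HAVE_RCU_TABLE_FREE, table pages are only handed back to the
 * allocator after an rcu_sched grace period. A CPU running with
 * interrupts disabled cannot pass through a quiescent state, so every
 * table page this walker can observe stays allocated until the walk
 * ends, even on platforms that never send a TLB-flush IPI.
 */
static int walk_user_range_fast(struct mm_struct *mm,
				unsigned long start, unsigned long end)
{
	unsigned long flags;
	int ret;

	local_irq_save(flags);
	ret = do_lockless_walk(mm, start, end);
	local_irq_restore(flags);

	return ret;
}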
Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
 arch/arm/Kconfig           |  1 +
 arch/arm/include/asm/tlb.h | 38 ++++++++++++++++++++++++++++++++++++--
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c1f1a7e..e4a0e59 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -55,6 +55,7 @@ config ARM
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
+	select HAVE_RCU_TABLE_FREE if SMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_UID16
diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
index 0baf7f0..8cb5552 100644
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -35,12 +35,39 @@
 
 #define MMU_GATHER_BUNDLE	8
 
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+static inline void __tlb_remove_table(void *_table)
+{
+	free_page_and_swap_cache((struct page *)_table);
+}
+
+struct mmu_table_batch {
+	struct rcu_head		rcu;
+	unsigned int		nr;
+	void			*tables[0];
+};
+
+#define MAX_TABLE_BATCH \
+	((PAGE_SIZE - sizeof(struct mmu_table_batch)) / sizeof(void *))
+
+extern void tlb_table_flush(struct mmu_gather *tlb);
+extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
+
+#define tlb_remove_entry(tlb,entry)	tlb_remove_table(tlb,entry)
+#else
+#define tlb_remove_entry(tlb,entry)	tlb_remove_page(tlb,entry)
+#endif /* CONFIG_HAVE_RCU_TABLE_FREE */
+
 /*
  * TLB handling.  This allows us to remove pages from the page
  * tables, and efficiently handle the TLB issues.
  */
 struct mmu_gather {
 	struct mm_struct	*mm;
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+	struct mmu_table_batch	*batch;
+	unsigned int		need_flush;
+#endif
 	unsigned int		fullmm;
 	struct vm_area_struct	*vma;
 	unsigned long		start, end;
@@ -101,6 +128,9 @@ static inline void __tlb_alloc_page(struct mmu_gather *tlb)
 static inline void tlb_flush_mmu(struct mmu_gather *tlb)
 {
 	tlb_flush(tlb);
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+	tlb_table_flush(tlb);
+#endif
 	free_pages_and_swap_cache(tlb->pages, tlb->nr);
 	tlb->nr = 0;
 	if (tlb->pages == tlb->local)
@@ -119,6 +149,10 @@ tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long start
 	tlb->pages = tlb->local;
 	tlb->nr = 0;
 	__tlb_alloc_page(tlb);
+
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+	tlb->batch = NULL;
+#endif
 }
 
 static inline void
@@ -195,7 +229,7 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
 	tlb_add_flush(tlb, addr + SZ_1M);
 #endif
 
-	tlb_remove_page(tlb, pte);
+	tlb_remove_entry(tlb, pte);
 }
 
 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
@@ -203,7 +237,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 				  unsigned long addr)
 {
 #ifdef CONFIG_ARM_LPAE
 	tlb_add_flush(tlb, addr);
-	tlb_remove_page(tlb, virt_to_page(pmdp));
+	tlb_remove_entry(tlb, virt_to_page(pmdp));
 #endif
 }
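For context, not part of this patch: selecting HAVE_RCU_TABLE_FREE
compiles in the generic batching logic in mm/memory.c that the
declarations above (struct mmu_table_batch, tlb_table_flush,
tlb_remove_table) hook into. An abridged sketch of that core code,
paraphrased from memory of the v3.13-era sources rather than quoted
from this patch, looks roughly like this:

/* Abridged sketch of the generic CONFIG_HAVE_RCU_TABLE_FREE code in
 * mm/memory.c; see the kernel source for the authoritative version. */
static void tlb_remove_table_rcu(struct rcu_head *head)
{
	struct mmu_table_batch *batch =
		container_of(head, struct mmu_table_batch, rcu);
	int i;

	/* Grace period has elapsed: no walker can still see these pages. */
	for (i = 0; i < batch->nr; i++)
		__tlb_remove_table(batch->tables[i]);
	free_page((unsigned long)batch);
}

void tlb_table_flush(struct mmu_gather *tlb)
{
	struct mmu_table_batch **batch = &tlb->batch;

	if (*batch) {
		call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu);
		*batch = NULL;
	}
}

static void tlb_remove_table_smp_sync(void *arg)
{
	/* Empty on purpose: the IPI's arrival alone serialises us
	 * against walkers running with interrupts disabled. */
}

static void tlb_remove_table_one(void *table)
{
	/* No batch page available, so no grace period: fall back to an
	 * IPI broadcast, which cannot complete while any walker still
	 * has interrupts disabled. */
	smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
	__tlb_remove_table(table);
}

void tlb_remove_table(struct mmu_gather *tlb, void *table)
{
	struct mmu_table_batch **batch = &tlb->batch;

	tlb->need_flush = 1;

	/* A single-user mm cannot have a concurrent lockless walker,
	 * so the page can be freed immediately. */
	if (atomic_read(&tlb->mm->mm_users) < 2) {
		__tlb_remove_table(table);
		return;
	}

	if (*batch == NULL) {
		*batch = (struct mmu_table_batch *)
			__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
		if (*batch == NULL) {
			tlb_remove_table_one(table);
			return;
		}
		(*batch)->nr = 0;
	}
	(*batch)->tables[(*batch)->nr++] = table;
	if ((*batch)->nr == MAX_TABLE_BATCH)
		tlb_table_flush(tlb);
}

The mm_users < 2 shortcut is why the commit message talks about pages
"belonging to mms with more than one user": only those can be walked
concurrently, so only those pay the cost of the RCU deferral.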