From patchwork Thu Feb 6 16:18:50 2014
X-Patchwork-Submitter: Steve Capper <steve.capper@linaro.org>
X-Patchwork-Id: 24264
From: Steve Capper <steve.capper@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: will.deacon@arm.com, catalin.marinas@arm.com, linux@arm.linux.org.uk,
    chanho61.park@samsung.com, zishen.lim@linaro.org, patches@linaro.org,
    gary.robertson@linaro.org, michael.hudson@linaro.org,
    christoffer.dall@linaro.org, Steve Capper <steve.capper@linaro.org>
Subject: [RFC PATCH V2 3/4] arm64: mm: Enable HAVE_RCU_TABLE_FREE logic
Date: Thu, 6 Feb 2014 16:18:50 +0000
Message-Id: <1391703531-12845-4-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391703531-12845-1-git-send-email-steve.capper@linaro.org>
References: <1391703531-12845-1-git-send-email-steve.capper@linaro.org>

In order to implement fast_get_user_pages, we need to ensure that the
page table walker is protected from page table pages being freed from
under it.

This patch enables HAVE_RCU_TABLE_FREE and incorporates it into the
existing arm64 TLB logic. Any page table pages belonging to address
spaces with multiple users will be freed via call_rcu_sched, meaning
that disabling interrupts will block the free and thus protect the
fast gup page walker.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
 arch/arm64/Kconfig           |  1 +
 arch/arm64/include/asm/tlb.h | 27 +++++++++++++++++++++++++--
 2 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d4dd22..129bd6a 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -28,6 +28,7 @@ config ARM64
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS
 	select HAVE_MEMBLOCK
 	select HAVE_PERF_EVENTS
+	select HAVE_RCU_TABLE_FREE
 	select IRQ_DOMAIN
 	select MODULES_USE_ELF_RELA
 	select NO_BOOTMEM
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 717031a..8999823 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -27,12 +27,33 @@

 #define MMU_GATHER_BUNDLE	8

+static inline void __tlb_remove_table(void *_table)
+{
+	free_page_and_swap_cache((struct page *)_table);
+}
+
+struct mmu_table_batch {
+	struct rcu_head		rcu;
+	unsigned int		nr;
+	void			*tables[0];
+};
+
+#define MAX_TABLE_BATCH \
+	((PAGE_SIZE - sizeof(struct mmu_table_batch)) / sizeof(void *))
+
+extern void tlb_table_flush(struct mmu_gather *tlb);
+extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
+
+#define tlb_remove_entry(tlb, entry)	tlb_remove_table(tlb, entry)
+
 /*
  * TLB handling. This allows us to remove pages from the page
  * tables, and efficiently handle the TLB issues.
  */
 struct mmu_gather {
 	struct mm_struct	*mm;
+	struct mmu_table_batch	*batch;
+	unsigned int		need_flush;
 	unsigned int		fullmm;
 	struct vm_area_struct	*vma;
 	unsigned long		start, end;
@@ -91,6 +112,7 @@ static inline void __tlb_alloc_page(struct mmu_gather *tlb)
 static inline void tlb_flush_mmu(struct mmu_gather *tlb)
 {
 	tlb_flush(tlb);
+	tlb_table_flush(tlb);
 	free_pages_and_swap_cache(tlb->pages, tlb->nr);
 	tlb->nr = 0;
 	if (tlb->pages == tlb->local)
@@ -109,6 +131,7 @@ tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long start
 	tlb->pages = tlb->local;
 	tlb->nr = 0;
 	__tlb_alloc_page(tlb);
+	tlb->batch = NULL;
 }

 static inline void
@@ -172,7 +195,7 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
 {
 	pgtable_page_dtor(pte);
 	tlb_add_flush(tlb, addr);
-	tlb_remove_page(tlb, pte);
+	tlb_remove_entry(tlb, pte);
 }

 #ifndef CONFIG_ARM64_64K_PAGES
@@ -180,7 +203,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 				  unsigned long addr)
 {
 	tlb_add_flush(tlb, addr);
-	tlb_remove_page(tlb, virt_to_page(pmdp));
+	tlb_remove_entry(tlb, virt_to_page(pmdp));
 }

 #endif
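
A note for reviewers unfamiliar with HAVE_RCU_TABLE_FREE: the pieces
declared above (struct mmu_table_batch, tlb_table_flush(),
tlb_remove_table()) are implemented by the generic code in mm/memory.c.
An abridged sketch of that path follows, for context only; it is not
part of this patch, and the fallback cases (single-user mms, failed
batch allocation) are elided:

/*
 * Abridged sketch of the generic CONFIG_HAVE_RCU_TABLE_FREE machinery
 * in mm/memory.c -- shown for review context only, not part of this
 * patch. Freed page table pages accumulate in an mmu_table_batch and
 * are handed to call_rcu_sched(), which eventually invokes the
 * architecture's __tlb_remove_table() on each entry.
 */
static void tlb_remove_table_rcu(struct rcu_head *head)
{
	struct mmu_table_batch *batch =
		container_of(head, struct mmu_table_batch, rcu);
	int i;

	/* The arch callback added above: free_page_and_swap_cache(). */
	for (i = 0; i < batch->nr; i++)
		__tlb_remove_table(batch->tables[i]);

	free_page((unsigned long)batch);
}

void tlb_table_flush(struct mmu_gather *tlb)
{
	struct mmu_table_batch **batch = &tlb->batch;

	/* Defer the actual frees until an RCU-sched grace period ends. */
	if (*batch) {
		call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu);
		*batch = NULL;
	}
}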
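
As to why this is sufficient for fast gup: a call_rcu_sched() grace
period cannot complete while any CPU is running with interrupts
disabled, so a lockless page table walker only needs to run with IRQs
off to hold off the frees queued above. A minimal illustrative sketch;
fast_gup_sketch() and walk_page_tables() are placeholder names, not the
code from patch 4/4:

/* Placeholder: a lockless pgd/pud/pmd/pte walk that takes page refs. */
static int walk_page_tables(struct mm_struct *mm, unsigned long start,
			    int nr_pages, struct page **pages);

/*
 * Illustrative sketch only -- not the code from patch 4/4.
 *
 * call_rcu_sched() grace periods cannot end while this CPU has
 * interrupts disabled, so between local_irq_save() and
 * local_irq_restore() no page table page queued by tlb_remove_table()
 * on another CPU can be freed: the lockless walk is safe without
 * taking mmap_sem.
 */
static int fast_gup_sketch(struct mm_struct *mm, unsigned long start,
			   int nr_pages, struct page **pages)
{
	unsigned long flags;
	int nr;

	local_irq_save(flags);
	nr = walk_page_tables(mm, start, nr_pages, pages);
	local_irq_restore(flags);

	return nr;
}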