From patchwork Wed May 20 08:30:21 2020
X-Patchwork-Submitter: Santosh Sivaraj
X-Patchwork-Id: 225520
From: Santosh Sivaraj
To: , linuxppc-dev
Cc: Michael Ellerman, Greg KH, Sasha Levin, Will Deacon, Santosh Sivaraj
Subject: [PATCH v4 2/6] asm-generic/tlb: Track which levels of the page tables have been cleared
Date: Wed, 20 May 2020 14:00:21 +0530
Message-Id: <20200520083025.229011-3-santosh@fossix.org>
In-Reply-To: <20200520083025.229011-1-santosh@fossix.org>
References: <20200520083025.229011-1-santosh@fossix.org>
X-Mailing-List: stable@vger.kernel.org

From: Will Deacon

commit a6d60245d6d9b1caf66b0d94419988c4836980af upstream

It is common for architectures with hugepage support to require only a
single TLB invalidation operation per hugepage during unmap(), rather
than iterating through the mapping at a PAGE_SIZE increment. Currently,
however, the level in the page table where the unmap() operation occurs
is not stored in the mmu_gather structure, therefore forcing
architectures to issue additional TLB invalidation operations or to give
up and over-invalidate by e.g. invalidating the entire TLB.

Ideally, we could add an interval rbtree to the mmu_gather structure,
which would allow us to associate the correct mapping granule with the
various sub-mappings within the range being invalidated. However, this
is costly in terms of book-keeping and memory management, so instead we
approximate by keeping track of the page table levels that are cleared
and provide a means to query the smallest granule required for
invalidation.

Signed-off-by: Will Deacon
Cc: # 4.19
Signed-off-by: Santosh Sivaraj
[santosh: prerequisite for upcoming tlbflush backports]
---
 include/asm-generic/tlb.h | 58 +++++++++++++++++++++++++++++++++------
 mm/memory.c               |  4 ++-
 2 files changed, 53 insertions(+), 9 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 97306b32d8d2..f2b9dc9cbaf8 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -114,6 +114,14 @@ struct mmu_gather {
 	 */
 	unsigned int		freed_tables : 1;
 
+	/*
+	 * at which levels have we cleared entries?
+	 */
+	unsigned int		cleared_ptes : 1;
+	unsigned int		cleared_pmds : 1;
+	unsigned int		cleared_puds : 1;
+	unsigned int		cleared_p4ds : 1;
+
 	struct mmu_gather_batch *active;
 	struct mmu_gather_batch	local;
 	struct page		*__pages[MMU_GATHER_BUNDLE];
@@ -148,6 +156,10 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 		tlb->end = 0;
 	}
 	tlb->freed_tables = 0;
+	tlb->cleared_ptes = 0;
+	tlb->cleared_pmds = 0;
+	tlb->cleared_puds = 0;
+	tlb->cleared_p4ds = 0;
 }
 
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
@@ -197,6 +209,25 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 }
 #endif
 
+static inline unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
+{
+	if (tlb->cleared_ptes)
+		return PAGE_SHIFT;
+	if (tlb->cleared_pmds)
+		return PMD_SHIFT;
+	if (tlb->cleared_puds)
+		return PUD_SHIFT;
+	if (tlb->cleared_p4ds)
+		return P4D_SHIFT;
+
+	return PAGE_SHIFT;
+}
+
+static inline unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
+{
+	return 1UL << tlb_get_unmap_shift(tlb);
+}
+
 /*
  * In the case of tlb vma handling, we can optimise these away in the
  * case where we're doing a full MM flush.  When we're doing a munmap,
@@ -230,13 +261,19 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define tlb_remove_tlb_entry(tlb, ptep, address)		\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->cleared_ptes = 1;				\
 		__tlb_remove_tlb_entry(tlb, ptep, address);	\
 	} while (0)
 
-#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	     \
-	do {							     \
-		__tlb_adjust_range(tlb, address, huge_page_size(h)); \
-		__tlb_remove_tlb_entry(tlb, ptep, address);	     \
+#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
+	do {							\
+		unsigned long _sz = huge_page_size(h);		\
+		__tlb_adjust_range(tlb, address, _sz);		\
+		if (_sz == PMD_SIZE)				\
+			tlb->cleared_pmds = 1;			\
+		else if (_sz == PUD_SIZE)			\
+			tlb->cleared_puds = 1;			\
+		__tlb_remove_tlb_entry(tlb, ptep, address);	\
 	} while (0)
 
 /**
@@ -250,6 +287,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define tlb_remove_pmd_tlb_entry(tlb, pmdp, address)			\
 	do {								\
 		__tlb_adjust_range(tlb, address, HPAGE_PMD_SIZE);	\
+		tlb->cleared_pmds = 1;					\
 		__tlb_remove_pmd_tlb_entry(tlb, pmdp, address);		\
 	} while (0)
 
@@ -264,6 +302,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define tlb_remove_pud_tlb_entry(tlb, pudp, address)			\
 	do {								\
 		__tlb_adjust_range(tlb, address, HPAGE_PUD_SIZE);	\
+		tlb->cleared_puds = 1;					\
 		__tlb_remove_pud_tlb_entry(tlb, pudp, address);		\
 	} while (0)
 
@@ -289,7 +328,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pte_free_tlb(tlb, ptep, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;			\
+		tlb->freed_tables = 1;				\
+		tlb->cleared_pmds = 1;				\
 		__pte_free_tlb(tlb, ptep, address);		\
 	} while (0)
 #endif
@@ -298,7 +338,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pmd_free_tlb(tlb, pmdp, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;			\
+		tlb->freed_tables = 1;				\
+		tlb->cleared_puds = 1;				\
 		__pmd_free_tlb(tlb, pmdp, address);		\
 	} while (0)
 #endif
@@ -308,7 +349,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pud_free_tlb(tlb, pudp, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;			\
+		tlb->freed_tables = 1;				\
+		tlb->cleared_p4ds = 1;				\
 		__pud_free_tlb(tlb, pudp, address);		\
 	} while (0)
 #endif
@@ -319,7 +361,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define p4d_free_tlb(tlb, pudp, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;			\
+		tlb->freed_tables = 1;				\
 		__p4d_free_tlb(tlb, pudp, address);		\
 	} while (0)
 #endif
diff --git a/mm/memory.c b/mm/memory.c
index bbf0cc4066c8..1832c5ed6ac0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -267,8 +267,10 @@ void arch_tlb_finish_mmu(struct mmu_gather *tlb,
 {
 	struct mmu_gather_batch *batch, *next;
 
-	if (force)
+	if (force) {
+		__tlb_reset_range(tlb);
 		__tlb_adjust_range(tlb, start, end - start);
+	}
 
 	tlb_flush_mmu(tlb);
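
Not part of the patch above: a minimal sketch of how an architecture's
tlb_flush() might consume the new bookkeeping. tlb_get_unmap_shift() and
tlb_get_unmap_size() are the helpers added by the hunk above;
example_tlb_flush() and arch_invalidate_page() are hypothetical names,
used only to make the idea concrete.

/*
 * Illustrative sketch only -- not from this series. Walk the gathered
 * range in steps of the smallest granule that was actually unmapped, so
 * tearing down a hugepage mapping costs one invalidation instead of one
 * per base page. arch_invalidate_page() stands in for an architecture's
 * real per-address invalidate primitive.
 */
static inline void example_tlb_flush(struct mmu_gather *tlb)
{
	unsigned long stride = tlb_get_unmap_size(tlb);
	unsigned long addr;

	for (addr = tlb->start; addr < tlb->end; addr += stride)
		arch_invalidate_page(tlb->mm, addr);
}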

From patchwork Wed May 20 08:30:23 2020
X-Patchwork-Submitter: Santosh Sivaraj
X-Patchwork-Id: 225519
From: Santosh Sivaraj
To: , linuxppc-dev
Cc: Michael Ellerman, Greg KH, Sasha Levin, "Aneesh Kumar K.V", Santosh Sivaraj
Subject: [PATCH v4 4/6] powerpc/mmu_gather: enable RCU_TABLE_FREE even for !SMP case
Date: Wed, 20 May 2020 14:00:23 +0530
Message-Id: <20200520083025.229011-5-santosh@fossix.org>
In-Reply-To: <20200520083025.229011-1-santosh@fossix.org>
References: <20200520083025.229011-1-santosh@fossix.org>
X-Mailing-List: stable@vger.kernel.org

From: "Aneesh Kumar K.V"

commit 12e4d53f3f04e81f9e83d6fc10edc7314ab9f6b9 upstream

Patch series "Fixup page directory freeing", v4.

This is a repost of the patch series from Peter with the arch-specific
changes, except ppc64, dropped. The ppc64 changes are added here because
we are redoing the patch series on top of the ppc64 changes, which makes
it easy to backport them. Only the first 2 patches need to be backported
to stable.

The thing is, on anything SMP, freeing page directories should observe
the exact same order as normal page freeing:

 1) unhook page/directory
 2) TLB invalidate
 3) free page/directory

Without this, any concurrent page-table walk could end up with a
use-after-free. This is especially trivial for anything that has
software page-table walkers (HAVE_FAST_GUP / software TLB fill) or
hardware that caches partial page walks (i.e. caches page directories).

Even on UP this might cause issues, since mmu_gather is preemptible
these days. An interrupt or a preempted task accessing user pages might
stumble into the freed page if the hardware caches page directories.

This patch series fixes ppc64 and adds the generic MMU_GATHER changes
needed to support the conversion of other architectures. I haven't added
patches for the other architectures because they are yet to be acked.

This patch (of 9):

A followup patch is going to make sure we correctly invalidate the page
walk cache before we free page table pages. In order to keep things
simple, enable RCU_TABLE_FREE even for !SMP so that we don't have to fix
up the !SMP case differently in the followup patch.

The !SMP case is currently broken for radix translation w.r.t. the page
walk cache flush: we can get interrupted in between the page table free
and the flush, which would imply we have page walk cache entries
pointing to tables which have already been freed. Michael said "both our
platforms that run on Power9 force SMP on in Kconfig, so the !SMP case
is unlikely to be a problem for anyone in practice, unless they've
hacked their kernel to build it !SMP."
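
Not part of the patch: a minimal sketch of the ordering described above,
using the tlb_remove_table() interface that HAVE_RCU_TABLE_FREE provides.
example_free_pte_table() is a hypothetical caller, shown only to make the
unhook/invalidate/free sequence concrete.

/*
 * Illustrative sketch only. With HAVE_RCU_TABLE_FREE the page backing a
 * page table is not freed directly: it is queued on the mmu_gather, and
 * the core frees it only after the TLB invalidate and a grace period (or
 * IPI broadcast), so a concurrent software walker that already loaded
 * the old pmd cannot dereference freed memory.
 */
static void example_free_pte_table(struct mmu_gather *tlb, pmd_t *pmd,
				   pgtable_t pte_page)
{
	/* 1) unhook the page-table page from the directory */
	pmd_clear(pmd);

	/*
	 * 2) TLB invalidate and 3) free are deferred to the mmu_gather
	 * core, which performs them in that order.
	 */
	tlb_remove_table(tlb, pte_page);
}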
Link: http://lkml.kernel.org/r/20200116064531.483522-2-aneesh.kumar@linux.ibm.com
Signed-off-by: Aneesh Kumar K.V
Cc: # 4.19
Signed-off-by: Santosh Sivaraj
[santosh: backported for 4.19 stable]
---
 arch/powerpc/Kconfig                         | 2 +-
 arch/powerpc/include/asm/book3s/32/pgalloc.h | 8 --------
 arch/powerpc/include/asm/book3s/64/pgalloc.h | 2 --
 arch/powerpc/include/asm/nohash/32/pgalloc.h | 8 --------
 arch/powerpc/mm/pgtable-book3s64.c           | 7 -------
 5 files changed, 1 insertion(+), 26 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index e09cfb109b8c..1a00ce4b0040 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -215,7 +215,7 @@ config PPC
 	select HAVE_HARDLOCKUP_DETECTOR_PERF	if PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
-	select HAVE_RCU_TABLE_FREE		if SMP
+	select HAVE_RCU_TABLE_FREE
 	select HAVE_RCU_TABLE_NO_INVALIDATE	if HAVE_RCU_TABLE_FREE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE		if PPC64 && CPU_LITTLE_ENDIAN
diff --git a/arch/powerpc/include/asm/book3s/32/pgalloc.h b/arch/powerpc/include/asm/book3s/32/pgalloc.h
index 82e44b1a00ae..79ba3fbb512e 100644
--- a/arch/powerpc/include/asm/book3s/32/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/32/pgalloc.h
@@ -110,7 +110,6 @@ static inline void pgtable_free(void *table, unsigned index_size)
 #define check_pgt_cache()	do { } while (0)
 #define get_hugepd_cache_index(x)	(x)
 
-#ifdef CONFIG_SMP
 static inline void pgtable_free_tlb(struct mmu_gather *tlb,
 				    void *table, int shift)
 {
@@ -127,13 +126,6 @@ static inline void __tlb_remove_table(void *_table)
 	pgtable_free(table, shift);
 }
 
-#else
-static inline void pgtable_free_tlb(struct mmu_gather *tlb,
-				    void *table, int shift)
-{
-	pgtable_free(table, shift);
-}
-#endif
 
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 				  unsigned long address)
diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
index f9019b579903..1013c0214213 100644
--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -47,9 +47,7 @@ extern pmd_t *pmd_fragment_alloc(struct mm_struct *, unsigned long);
 extern void pte_fragment_free(unsigned long *, int);
 extern void pmd_fragment_free(unsigned long *);
 extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
-#ifdef CONFIG_SMP
 extern void __tlb_remove_table(void *_table);
-#endif
 
 static inline pgd_t *radix__pgd_alloc(struct mm_struct *mm)
 {
diff --git a/arch/powerpc/include/asm/nohash/32/pgalloc.h b/arch/powerpc/include/asm/nohash/32/pgalloc.h
index 8825953c225b..96eed46d5684 100644
--- a/arch/powerpc/include/asm/nohash/32/pgalloc.h
+++ b/arch/powerpc/include/asm/nohash/32/pgalloc.h
@@ -111,7 +111,6 @@ static inline void pgtable_free(void *table, unsigned index_size)
 #define check_pgt_cache()	do { } while (0)
 #define get_hugepd_cache_index(x)	(x)
 
-#ifdef CONFIG_SMP
 static inline void pgtable_free_tlb(struct mmu_gather *tlb,
 				    void *table, int shift)
 {
@@ -128,13 +127,6 @@ static inline void __tlb_remove_table(void *_table)
 	pgtable_free(table, shift);
 }
 
-#else
-static inline void pgtable_free_tlb(struct mmu_gather *tlb,
-				    void *table, int shift)
-{
-	pgtable_free(table, shift);
-}
-#endif
 
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 				  unsigned long address)
diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
index 297db665d953..5b4e9fd8990c 100644
--- a/arch/powerpc/mm/pgtable-book3s64.c
+++ b/arch/powerpc/mm/pgtable-book3s64.c
@@ -432,7 +432,6 @@ static inline void pgtable_free(void *table, int index)
 	}
 }
 
-#ifdef CONFIG_SMP
 void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int index)
 {
 	unsigned long pgf = (unsigned long)table;
@@ -449,12 +448,6 @@ void __tlb_remove_table(void *_table)
 	return pgtable_free(table, index);
 }
 
-#else
-void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int index)
-{
-	return pgtable_free(table, index);
-}
-#endif
 
 #ifdef CONFIG_PROC_FS
 atomic_long_t direct_pages_count[MMU_PAGE_COUNT];

From patchwork Wed May 20 08:30:25 2020
X-Patchwork-Submitter: Santosh Sivaraj
X-Patchwork-Id: 225518
From: Santosh Sivaraj
To: , linuxppc-dev
Cc: Michael Ellerman, Greg KH, Sasha Levin, Peter Zijlstra, "Aneesh Kumar K.V", Santosh Sivaraj
Subject: [PATCH v4 6/6] asm-generic/tlb: avoid potential double flush
Date: Wed, 20 May 2020 14:00:25 +0530
Message-Id: <20200520083025.229011-7-santosh@fossix.org>
In-Reply-To: <20200520083025.229011-1-santosh@fossix.org>
References: <20200520083025.229011-1-santosh@fossix.org>
X-Mailing-List: stable@vger.kernel.org

From: Peter Zijlstra

commit 0758cd8304942292e95a0f750c374533db378b32 upstream

Aneesh reported that:

	tlb_flush_mmu()
	  tlb_flush_mmu_tlbonly()
	    tlb_flush()			<-- #1
	  tlb_flush_mmu_free()
	    tlb_table_flush()
	      tlb_table_invalidate()
		tlb_flush_mmu_tlbonly()
		  tlb_flush()		<-- #2

does two TLBIs when tlb->fullmm, because __tlb_reset_range() will not
clear tlb->end in that case.

Observe that any caller to __tlb_adjust_range() also sets at least one
of the tlb->freed_tables || tlb->cleared_p* bits, and those are
unconditionally cleared by __tlb_reset_range().

Change the condition for actually issuing TLBI to having one of those
bits set, as opposed to having tlb->end != 0.

Link: http://lkml.kernel.org/r/20200116064531.483522-4-aneesh.kumar@linux.ibm.com
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Aneesh Kumar K.V
Reported-by: "Aneesh Kumar K.V"
Cc: # 4.19
Signed-off-by: Santosh Sivaraj
[santosh: backported to 4.19 stable]
---
 include/asm-generic/tlb.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 19934cdd143e..427a70c56ddd 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -179,7 +179,12 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
-	if (!tlb->end)
+	/*
+	 * Anything calling __tlb_adjust_range() also sets at least one of
+	 * these bits.
+	 */
+	if (!(tlb->freed_tables || tlb->cleared_ptes || tlb->cleared_pmds ||
+	      tlb->cleared_puds || tlb->cleared_p4ds))
 		return;
 
 	tlb_flush(tlb);
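
For reference, not part of the patch: a condensed sketch, based on the
4.19 code, of why flush #2 becomes a no-op in the tlb->fullmm case once
the guard is changed. sketch__tlb_reset_range() is just an illustrative
name for the existing __tlb_reset_range() behaviour.

/* Condensed, illustrative view of the 4.19 behaviour. */
static inline void sketch__tlb_reset_range(struct mmu_gather *tlb)
{
	if (tlb->fullmm) {
		tlb->start = tlb->end = ~0;	/* end stays non-zero ...       */
	} else {
		tlb->start = TASK_SIZE;
		tlb->end = 0;
	}
	tlb->freed_tables = 0;			/* ... but all of these are     */
	tlb->cleared_ptes = 0;			/* cleared unconditionally, so  */
	tlb->cleared_pmds = 0;			/* the new condition in         */
	tlb->cleared_puds = 0;			/* tlb_flush_mmu_tlbonly()      */
	tlb->cleared_p4ds = 0;			/* makes flush #2 return early. */
}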