From patchwork Thu Mar 12 13:27:35 2020
From: Santosh Sivaraj <santosh@fossix.org>
To: stable@vger.kernel.org, linuxppc-dev
Cc: Michael Ellerman, Greg KH, Sasha Levin, Peter Zijlstra, Will Deacon
Subject: [PATCH v3 1/6] asm-generic/tlb: Track freeing of page-table directories in struct mmu_gather
Date: Thu, 12 Mar 2020 18:57:35 +0530
Message-Id: <20200312132740.225241-2-santosh@fossix.org>
In-Reply-To: <20200312132740.225241-1-santosh@fossix.org>
References: <20200312132740.225241-1-santosh@fossix.org>

From: Peter Zijlstra

commit 22a61c3c4f1379ef8b0ce0d5cb78baf3178950e2 upstream

Some architectures require different TLB invalidation instructions
depending on whether it is only the last-level of page table being
changed, or whether there are also changes to the intermediate
(directory) entries higher up the tree.

Add a new bit to the flags bitfield in struct mmu_gather so that the
architecture code can operate accordingly if it's the intermediate
levels being invalidated.

Signed-off-by: Peter Zijlstra
Signed-off-by: Will Deacon
Cc: <stable@vger.kernel.org> # 4.19
Signed-off-by: Santosh Sivaraj <santosh@fossix.org>
[santosh: prerequisite for tlbflush backports]
---
 include/asm-generic/tlb.h | 31 +++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index b3353e21f3b3..97306b32d8d2 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -97,12 +97,22 @@ struct mmu_gather {
 #endif
 	unsigned long		start;
 	unsigned long		end;
-	/* we are in the middle of an operation to clear
-	 * a full mm and can make some optimizations */
-	unsigned int		fullmm : 1,
-	/* we have performed an operation which
-	 * requires a complete flush of the tlb */
-				need_flush_all : 1;
+	/*
+	 * we are in the middle of an operation to clear
+	 * a full mm and can make some optimizations
+	 */
+	unsigned int		fullmm : 1;
+
+	/*
+	 * we have performed an operation which
+	 * requires a complete flush of the tlb
+	 */
+	unsigned int		need_flush_all : 1;
+
+	/*
+	 * we have removed page directories
+	 */
+	unsigned int		freed_tables : 1;

 	struct mmu_gather_batch *active;
 	struct mmu_gather_batch	local;
@@ -137,6 +147,7 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 		tlb->start = TASK_SIZE;
 		tlb->end = 0;
 	}
+	tlb->freed_tables = 0;
 }

 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
@@ -278,6 +289,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pte_free_tlb(tlb, ptep, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->freed_tables = 1;				\
 		__pte_free_tlb(tlb, ptep, address);		\
 	} while (0)
 #endif
@@ -285,7 +297,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #ifndef pmd_free_tlb
 #define pmd_free_tlb(tlb, pmdp, address)			\
 	do {							\
-		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->freed_tables = 1;				\
 		__pmd_free_tlb(tlb, pmdp, address);		\
 	} while (0)
 #endif
@@ -295,6 +308,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pud_free_tlb(tlb, pudp, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->freed_tables = 1;				\
 		__pud_free_tlb(tlb, pudp, address);		\
 	} while (0)
 #endif
@@ -304,7 +318,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #ifndef p4d_free_tlb
 #define p4d_free_tlb(tlb, pudp, address)			\
 	do {							\
-		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->freed_tables = 1;				\
 		__p4d_free_tlb(tlb, pudp, address);		\
 	} while (0)
 #endif
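The new freed_tables bit lets an architecture's flush hook distinguish
leaf-only teardown from directory teardown. As a rough illustration, a
hypothetical architecture could consume it as sketched below;
arch_flush_tlb_range() and arch_flush_tlb_pwc_range() are made-up
placeholder helpers, not real kernel APIs, and this hook is not part of
the patch itself:

	/* Hypothetical arch-side tlb_flush(), for illustration only. */
	static inline void tlb_flush(struct mmu_gather *tlb)
	{
		if (tlb->freed_tables) {
			/*
			 * Directory pages were freed: intermediate-level
			 * translations (the page-walk cache) must be
			 * invalidated as well.
			 */
			arch_flush_tlb_pwc_range(tlb->mm, tlb->start, tlb->end);
		} else {
			/*
			 * Only last-level entries changed: a plain range
			 * flush is sufficient.
			 */
			arch_flush_tlb_range(tlb->mm, tlb->start, tlb->end);
		}
	}

powerpc's radix mode is the motivating case later in this series: freeing
directory pages there must also flush the page-walk cache (PWC), while a
leaf-only unmap need not.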
From patchwork Thu Mar 12 13:27:37 2020
From: Santosh Sivaraj <santosh@fossix.org>
To: stable@vger.kernel.org, linuxppc-dev
Cc: Michael Ellerman, Greg KH, Sasha Levin, Peter Zijlstra
Subject: [PATCH v3 3/6] asm-generic/tlb, arch: Invert CONFIG_HAVE_RCU_TABLE_INVALIDATE
Date: Thu, 12 Mar 2020 18:57:37 +0530
Message-Id: <20200312132740.225241-4-santosh@fossix.org>
In-Reply-To: <20200312132740.225241-1-santosh@fossix.org>
References: <20200312132740.225241-1-santosh@fossix.org>

From: Peter Zijlstra

commit 96bc9567cbe112e9320250f01b9c060c882e8619 upstream.

Make issuing a TLB invalidate for page-table pages the normal case.

The reason is twofold:

 - too many invalidates is safer than too few,
 - most architectures use the linux page-tables natively
   and would thus require this.

Make it an opt-out, instead of an opt-in.

No change in behavior intended.

Signed-off-by: Peter Zijlstra (Intel)
Cc: <stable@vger.kernel.org> # 4.19
Signed-off-by: Santosh Sivaraj <santosh@fossix.org>
[santosh: prerequisite for upcoming tlbflush backports]
---
 arch/Kconfig         | 2 +-
 arch/powerpc/Kconfig | 1 +
 arch/sparc/Kconfig   | 1 +
 arch/x86/Kconfig     | 1 -
 mm/memory.c          | 2 +-
 5 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index a336548487e6..061a12b8140e 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -363,7 +363,7 @@ config HAVE_ARCH_JUMP_LABEL
 config HAVE_RCU_TABLE_FREE
 	bool

-config HAVE_RCU_TABLE_INVALIDATE
+config HAVE_RCU_TABLE_NO_INVALIDATE
 	bool

 config ARCH_HAVE_NMI_SAFE_CMPXCHG
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 6f475dc5829b..e09cfb109b8c 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -216,6 +216,7 @@ config PPC
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE if SMP
+	select HAVE_RCU_TABLE_NO_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE if PPC64 && CPU_LITTLE_ENDIAN
 	select HAVE_SYSCALL_TRACEPOINTS
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index e6f2a38d2e61..d90d632868aa 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -64,6 +64,7 @@ config SPARC64
 	select HAVE_KRETPROBES
 	select HAVE_KPROBES
 	select HAVE_RCU_TABLE_FREE if SMP
+	select HAVE_RCU_TABLE_NO_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_MEMBLOCK_NODE_MAP
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select HAVE_DYNAMIC_FTRACE
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index af35f5caadbe..181d0d522977 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -181,7 +181,6 @@ config X86
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE if PARAVIRT
-	select HAVE_RCU_TABLE_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE if X86_64 && (UNWINDER_FRAME_POINTER || UNWINDER_ORC) && STACK_VALIDATION
 	select HAVE_STACKPROTECTOR if CC_HAS_SANE_STACKPROTECTOR
diff --git a/mm/memory.c b/mm/memory.c
index 1832c5ed6ac0..ba5689610c04 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -327,7 +327,7 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
  */
 static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 {
-#ifdef CONFIG_HAVE_RCU_TABLE_INVALIDATE
+#ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
 	/*
 	 * Invalidate page-table caches used by hardware walkers. Then we still
 	 * need to RCU-sched wait while freeing the pages because software
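The safety reasoning ("too many invalidates is safer than too few") is
easiest to see in the preprocessor logic. A condensed, illustrative
sketch of the two regimes, based on the mm/memory.c hunk above (not
literal kernel code):

	/*
	 * Before: opt-in. An architecture that forgets to select
	 * HAVE_RCU_TABLE_INVALIDATE silently skips the flush -- too few
	 * invalidates, the dangerous failure mode.
	 */
	#ifdef CONFIG_HAVE_RCU_TABLE_INVALIDATE
		tlb_flush_mmu_tlbonly(tlb);
	#endif

	/*
	 * After: opt-out. An architecture that selects nothing gets the
	 * extra flush -- too many invalidates, which is merely slower.
	 */
	#ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
		tlb_flush_mmu_tlbonly(tlb);
	#endif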
From patchwork Thu Mar 12 13:27:39 2020
From: Santosh Sivaraj <santosh@fossix.org>
To: stable@vger.kernel.org, linuxppc-dev
Cc: Michael Ellerman, Greg KH, Sasha Levin, Peter Zijlstra, "Aneesh Kumar K.V"
Subject: [PATCH v3 5/6] mm/mmu_gather: invalidate TLB correctly on batch allocation failure and flush
Date: Thu, 12 Mar 2020 18:57:39 +0530
Message-Id: <20200312132740.225241-6-santosh@fossix.org>
In-Reply-To: <20200312132740.225241-1-santosh@fossix.org>
References: <20200312132740.225241-1-santosh@fossix.org>

From: Peter Zijlstra

commit 0ed1325967ab5f7a4549a2641c6ebe115f76e228 upstream.

Architectures that have hardware walkers of the Linux page tables must
flush the TLB on mmu-gather batch allocation failure and on batch flush.
Some architectures, like POWER, support multiple translation modes (hash
and radix), and on POWER only the radix translation mode needs the above
TLBI: in hash mode the kernel wants to avoid the extra flush, since there
are no hardware walkers of the Linux page tables. With radix translation
the hardware does walk the Linux page tables, so the kernel must
invalidate the page-walk cache before page-table pages are freed.

More details in commit d86564a2f085 ("mm/tlb, x86/mm: Support
invalidating TLB caches for RCU_TABLE_FREE")

The changes to sparc are to make sure we keep the old behavior, since we
are now removing HAVE_RCU_TABLE_NO_INVALIDATE. The default for
tlb_needs_table_invalidate() is to always force an invalidate, and sparc
can avoid the table invalidate. Hence we define
tlb_needs_table_invalidate() to false for the sparc architecture.

Link: http://lkml.kernel.org/r/20200116064531.483522-3-aneesh.kumar@linux.ibm.com
Fixes: a46cc7a90fd8 ("powerpc/mm/radix: Improve TLB/PWC flushes")
Signed-off-by: Peter Zijlstra (Intel)
Cc: <stable@vger.kernel.org> # 4.19
Signed-off-by: Santosh Sivaraj <santosh@fossix.org>
[santosh: backported to 4.19 stable]
---
 arch/Kconfig                    |  3 ---
 arch/powerpc/Kconfig            |  1 -
 arch/powerpc/include/asm/tlb.h  | 11 +++++++++++
 arch/sparc/Kconfig              |  1 -
 arch/sparc/include/asm/tlb_64.h |  9 +++++++++
 include/asm-generic/tlb.h       | 15 +++++++++++++++
 mm/memory.c                     | 16 ++++++++--------
 7 files changed, 43 insertions(+), 13 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 061a12b8140e..3abbdb0cea44 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -363,9 +363,6 @@ config HAVE_ARCH_JUMP_LABEL
 config HAVE_RCU_TABLE_FREE
 	bool

-config HAVE_RCU_TABLE_NO_INVALIDATE
-	bool
-
 config ARCH_HAVE_NMI_SAFE_CMPXCHG
 	bool
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1a00ce4b0040..e5bc0cfea2b1 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -216,7 +216,6 @@ config PPC
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE
-	select HAVE_RCU_TABLE_NO_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE if PPC64 && CPU_LITTLE_ENDIAN
 	select HAVE_SYSCALL_TRACEPOINTS
diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
index f0e571b2dc7c..63418275f402 100644
--- a/arch/powerpc/include/asm/tlb.h
+++ b/arch/powerpc/include/asm/tlb.h
@@ -30,6 +30,17 @@
 #define tlb_remove_check_page_size_change tlb_remove_check_page_size_change

 extern void tlb_flush(struct mmu_gather *tlb);
+/*
+ * book3s:
+ * Hash does not use the linux page-tables, so we can avoid
+ * the TLB invalidate for page-table freeing, Radix otoh does use the
+ * page-tables and needs the TLBI.
+ *
+ * nohash:
+ * We still do TLB invalidate in the __pte_free_tlb routine before we
+ * add the page table pages to mmu gather table batch.
+ */
+#define tlb_needs_table_invalidate()	radix_enabled()

 /* Get the generic bits... */
 #include <asm-generic/tlb.h>
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index d90d632868aa..e6f2a38d2e61 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -64,7 +64,6 @@ config SPARC64
 	select HAVE_KRETPROBES
 	select HAVE_KPROBES
 	select HAVE_RCU_TABLE_FREE if SMP
-	select HAVE_RCU_TABLE_NO_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_MEMBLOCK_NODE_MAP
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select HAVE_DYNAMIC_FTRACE
diff --git a/arch/sparc/include/asm/tlb_64.h b/arch/sparc/include/asm/tlb_64.h
index a2f3fa61ee36..8cb8f3833239 100644
--- a/arch/sparc/include/asm/tlb_64.h
+++ b/arch/sparc/include/asm/tlb_64.h
@@ -28,6 +28,15 @@ void flush_tlb_pending(void);
 #define __tlb_remove_tlb_entry(tlb, ptep, address)	do { } while (0)
 #define tlb_flush(tlb)	flush_tlb_pending()

+/*
+ * SPARC64's hardware TLB fill does not use the Linux page-tables
+ * and therefore we don't need a TLBI when freeing page-table pages.
+ */
+
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+#define tlb_needs_table_invalidate()	(false)
+#endif
+
 #include <asm-generic/tlb.h>

 #endif /* _SPARC64_TLB_H */
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index f2b9dc9cbaf8..19934cdd143e 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -61,8 +61,23 @@ struct mmu_table_batch {
 extern void tlb_table_flush(struct mmu_gather *tlb);
 extern void tlb_remove_table(struct mmu_gather *tlb, void *table);

+/*
+ * This allows an architecture that does not use the linux page-tables for
+ * hardware to skip the TLBI when freeing page tables.
+ */
+#ifndef tlb_needs_table_invalidate
+#define tlb_needs_table_invalidate() (true)
 #endif

+#else
+
+#ifdef tlb_needs_table_invalidate
+#error tlb_needs_table_invalidate() requires HAVE_RCU_TABLE_FREE
+#endif
+
+#endif /* CONFIG_HAVE_RCU_TABLE_FREE */
+
+
 /*
  * If we can't allocate a page to make a big batch of page pointers
  * to work on, then just handle a few from the on-stack structure.
diff --git a/mm/memory.c b/mm/memory.c
index ba5689610c04..7daa7ae1b046 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -327,14 +327,14 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
  */
 static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 {
-#ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
-	/*
-	 * Invalidate page-table caches used by hardware walkers. Then we still
-	 * need to RCU-sched wait while freeing the pages because software
-	 * walkers can still be in-flight.
-	 */
-	tlb_flush_mmu_tlbonly(tlb);
-#endif
+	if (tlb_needs_table_invalidate()) {
+		/*
+		 * Invalidate page-table caches used by hardware walkers. Then
+		 * we still need to RCU-sched wait while freeing the pages
+		 * because software walkers can still be in-flight.
+		 */
+		tlb_flush_mmu_tlbonly(tlb);
+	}
 }

 static void tlb_remove_table_smp_sync(void *arg)
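Taken together, this patch replaces the compile-time Kconfig switch with
a per-architecture macro that may be a runtime predicate, something a
Kconfig bool cannot express. A condensed sketch of the resulting
mechanism, assembled from the hunks above (illustrative, not a literal
file):

	/*
	 * Generic header: a fail-safe default that architectures may
	 * override downward when they know the hardware never walks the
	 * Linux page tables.
	 */
	#ifdef CONFIG_HAVE_RCU_TABLE_FREE
	# ifndef tlb_needs_table_invalidate
	# define tlb_needs_table_invalidate()	(true)	/* safe default */
	# endif
	#else
	# ifdef tlb_needs_table_invalidate
	# error tlb_needs_table_invalidate() requires HAVE_RCU_TABLE_FREE
	# endif
	#endif

	/*
	 * Callers then ask the question at run time; powerpc answers
	 * radix_enabled(), sparc64 answers false, everyone else true.
	 */
	static inline void tlb_table_invalidate(struct mmu_gather *tlb)
	{
		if (tlb_needs_table_invalidate())
			tlb_flush_mmu_tlbonly(tlb);
	}

The powerpc override is the payoff: radix_enabled() is evaluated at run
time, so a single kernel image issues the TLBI under radix translation
and skips it under hash.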