From patchwork Mon Jan 4 15:57:19 2021
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 356405
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Will Deacon, Santosh Sivaraj
Subject: [PATCH 4.19 16/35] asm-generic/tlb: Track which levels of the page tables have been cleared
Date: Mon, 4 Jan 2021 16:57:19 +0100
Message-Id: <20210104155704.197670503@linuxfoundation.org>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20210104155703.375788488@linuxfoundation.org>
References: <20210104155703.375788488@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Will Deacon

commit a6d60245d6d9b1caf66b0d94419988c4836980af upstream

It is common for architectures with hugepage support to require only a
single TLB invalidation operation per hugepage during unmap(), rather
than iterating through the mapping at a PAGE_SIZE increment. Currently,
however, the level in the page table where the unmap() operation occurs
is not stored in the mmu_gather structure, therefore forcing
architectures to issue additional TLB invalidation operations or to give
up and over-invalidate by e.g. invalidating the entire TLB.

Ideally, we could add an interval rbtree to the mmu_gather structure,
which would allow us to associate the correct mapping granule with the
various sub-mappings within the range being invalidated. However, this
is costly in terms of book-keeping and memory management, so instead we
approximate by keeping track of the page table levels that are cleared
and provide a means to query the smallest granule required for
invalidation.

Signed-off-by: Will Deacon
Cc: <stable@vger.kernel.org> # 4.19
Signed-off-by: Santosh Sivaraj
[santosh: prerequisite for upcoming tlbflush backports]
Signed-off-by: Greg Kroah-Hartman
---
 include/asm-generic/tlb.h |   58 +++++++++++++++++++++++++++++++++++++++------
 mm/memory.c               |    4 ++-
 2 files changed, 53 insertions(+), 9 deletions(-)
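A quick illustration, not part of the patch itself: the model below is a
standalone userspace rendering of the new query logic. The cleared_*
bits and both helpers are copied from the hunks that follow; the *_SHIFT
constants are assumed typical 4KB-page values (4KB/2MB/1GB/512GB), not
anything defined by this patch, and exist only so the model compiles
and runs on its own.

#include <stdio.h>
#include <string.h>

#define PAGE_SHIFT	12	/* assumed: 4KB base pages */
#define PMD_SHIFT	21	/* assumed: 2MB */
#define PUD_SHIFT	30	/* assumed: 1GB */
#define P4D_SHIFT	39	/* assumed: 512GB */

/* Just the bits this patch adds to struct mmu_gather. */
struct mmu_gather {
	unsigned int cleared_ptes : 1;
	unsigned int cleared_pmds : 1;
	unsigned int cleared_puds : 1;
	unsigned int cleared_p4ds : 1;
};

/* Copied from the patch: the lowest cleared level wins. */
static unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
{
	if (tlb->cleared_ptes)
		return PAGE_SHIFT;
	if (tlb->cleared_pmds)
		return PMD_SHIFT;
	if (tlb->cleared_puds)
		return PUD_SHIFT;
	if (tlb->cleared_p4ds)
		return P4D_SHIFT;

	return PAGE_SHIFT;
}

static unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
{
	return 1UL << tlb_get_unmap_shift(tlb);
}

int main(void)
{
	struct mmu_gather tlb;

	/* Only PMD-level entries cleared, e.g. a pure hugepage unmap. */
	memset(&tlb, 0, sizeof(tlb));
	tlb.cleared_pmds = 1;
	printf("pmd-only granule: %lu bytes\n", tlb_get_unmap_size(&tlb));

	/* Mixed PTE and PMD clears: must drop back to page granule. */
	tlb.cleared_ptes = 1;
	printf("mixed granule:    %lu bytes\n", tlb_get_unmap_size(&tlb));
	return 0;
}

Compiled and run, this prints 2097152 and then 4096: once any PTE-level
entry has been cleared, the granule has to fall back to PAGE_SIZE, which
is why tlb_get_unmap_shift() tests the levels smallest-first.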
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -114,6 +114,14 @@ struct mmu_gather {
 	 */
 	unsigned int		freed_tables : 1;
 
+	/*
+	 * at which levels have we cleared entries?
+	 */
+	unsigned int		cleared_ptes : 1;
+	unsigned int		cleared_pmds : 1;
+	unsigned int		cleared_puds : 1;
+	unsigned int		cleared_p4ds : 1;
+
 	struct mmu_gather_batch *active;
 	struct mmu_gather_batch	local;
 	struct page		*__pages[MMU_GATHER_BUNDLE];
@@ -148,6 +156,10 @@ static inline void __tlb_reset_range(str
 		tlb->end = 0;
 	}
 	tlb->freed_tables = 0;
+	tlb->cleared_ptes = 0;
+	tlb->cleared_pmds = 0;
+	tlb->cleared_puds = 0;
+	tlb->cleared_p4ds = 0;
 }
 
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
@@ -197,6 +209,25 @@ static inline void tlb_remove_check_page
 }
 #endif
 
+static inline unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
+{
+	if (tlb->cleared_ptes)
+		return PAGE_SHIFT;
+	if (tlb->cleared_pmds)
+		return PMD_SHIFT;
+	if (tlb->cleared_puds)
+		return PUD_SHIFT;
+	if (tlb->cleared_p4ds)
+		return P4D_SHIFT;
+
+	return PAGE_SHIFT;
+}
+
+static inline unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
+{
+	return 1UL << tlb_get_unmap_shift(tlb);
+}
+
 /*
  * In the case of tlb vma handling, we can optimise these away in the
  * case where we're doing a full MM flush. When we're doing a munmap,
@@ -230,13 +261,19 @@ static inline void tlb_remove_check_page
 #define tlb_remove_tlb_entry(tlb, ptep, address)		\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->cleared_ptes = 1;				\
 		__tlb_remove_tlb_entry(tlb, ptep, address);	\
 	} while (0)
 
-#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	     \
-	do {							     \
-		__tlb_adjust_range(tlb, address, huge_page_size(h)); \
-		__tlb_remove_tlb_entry(tlb, ptep, address);	     \
+#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
+	do {							\
+		unsigned long _sz = huge_page_size(h);		\
+		__tlb_adjust_range(tlb, address, _sz);		\
+		if (_sz == PMD_SIZE)				\
+			tlb->cleared_pmds = 1;			\
+		else if (_sz == PUD_SIZE)			\
+			tlb->cleared_puds = 1;			\
+		__tlb_remove_tlb_entry(tlb, ptep, address);	\
 	} while (0)
 
 /**
@@ -250,6 +287,7 @@ static inline void tlb_remove_check_page
 #define tlb_remove_pmd_tlb_entry(tlb, pmdp, address)			\
 	do {								\
 		__tlb_adjust_range(tlb, address, HPAGE_PMD_SIZE);	\
+		tlb->cleared_pmds = 1;					\
 		__tlb_remove_pmd_tlb_entry(tlb, pmdp, address);		\
 	} while (0)
 
@@ -264,6 +302,7 @@ static inline void tlb_remove_check_page
 #define tlb_remove_pud_tlb_entry(tlb, pudp, address)			\
 	do {								\
 		__tlb_adjust_range(tlb, address, HPAGE_PUD_SIZE);	\
+		tlb->cleared_puds = 1;					\
 		__tlb_remove_pud_tlb_entry(tlb, pudp, address);		\
 	} while (0)
 
@@ -289,7 +328,8 @@ static inline void tlb_remove_check_page
 #define pte_free_tlb(tlb, ptep, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;				\
+		tlb->freed_tables = 1;				\
+		tlb->cleared_pmds = 1;				\
 		__pte_free_tlb(tlb, ptep, address);		\
 	} while (0)
 #endif
@@ -298,7 +338,8 @@ static inline void tlb_remove_check_page
 #define pmd_free_tlb(tlb, pmdp, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;				\
+		tlb->freed_tables = 1;				\
+		tlb->cleared_puds = 1;				\
 		__pmd_free_tlb(tlb, pmdp, address);		\
 	} while (0)
 #endif
@@ -308,7 +349,8 @@ static inline void tlb_remove_check_page
 #define pud_free_tlb(tlb, pudp, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;				\
+		tlb->freed_tables = 1;				\
+		tlb->cleared_p4ds = 1;				\
 		__pud_free_tlb(tlb, pudp, address);		\
 	} while (0)
 #endif
@@ -319,7 +361,7 @@ static inline void tlb_remove_check_page
 #define p4d_free_tlb(tlb, pudp, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;				\
+		tlb->freed_tables = 1;				\
 		__p4d_free_tlb(tlb, pudp, address);		\
 	} while (0)
#endif
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -279,8 +279,10 @@ void arch_tlb_finish_mmu(struct mmu_gath
 {
 	struct mmu_gather_batch *batch, *next;
 
-	if (force)
+	if (force) {
+		__tlb_reset_range(tlb);
 		__tlb_adjust_range(tlb, start, end - start);
+	}
 
 	tlb_flush_mmu(tlb);
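The consuming side is what motivates the bookkeeping. Below is a hedged
sketch, again not part of this patch, of how an architecture's flush
path might stride by the reported granule. flush_range() and both
constants are hypothetical stand-ins; real consumers (e.g. the arm64
follow-up in this series upstream) pass the stride to their
range-invalidation primitives instead.

#include <stdio.h>

#define PAGE_SHIFT	12	/* assumed: 4KB base pages */
#define PMD_SHIFT	21	/* assumed: 2MB PMD-level hugepages */

/* Minimal slice of struct mmu_gather needed for this sketch. */
struct mmu_gather {
	unsigned long start, end;
	unsigned int cleared_ptes : 1;
	unsigned int cleared_pmds : 1;
};

/* Same smallest-level-first policy as the helper added above. */
static unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
{
	if (tlb->cleared_ptes)
		return PAGE_SHIFT;
	if (tlb->cleared_pmds)
		return PMD_SHIFT;
	return PAGE_SHIFT;
}

/*
 * Hypothetical flush loop: count one invalidation per granule. A real
 * architecture would issue its TLBI/INVLPG-style instruction here.
 */
static unsigned long flush_range(struct mmu_gather *tlb)
{
	unsigned long stride = 1UL << tlb_get_unmap_shift(tlb);
	unsigned long addr, ops = 0;

	for (addr = tlb->start; addr < tlb->end; addr += stride)
		ops++;
	return ops;
}

int main(void)
{
	/* Range covering exactly one 2MB hugepage. */
	struct mmu_gather tlb = { .start = 0, .end = 1UL << PMD_SHIFT };

	tlb.cleared_ptes = 1;		/* page-granule worst case */
	printf("page granule: %lu ops\n", flush_range(&tlb));

	tlb.cleared_ptes = 0;
	tlb.cleared_pmds = 1;		/* tracked PMD-level clear */
	printf("pmd granule:  %lu ops\n", flush_range(&tlb));
	return 0;
}

For a single 2MB hugepage this counts 512 page-granule operations
versus one PMD-granule operation, which is exactly the per-hugepage
invalidation the commit message is after.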