From patchwork Mon Jul 12 06:12:28 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 473823
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Miaohe Lin, Yang Shi,
    Anshuman Khandual, David Hildenbrand, Zi Yan, William Kucharski,
    Matthew Wilcox, "Aneesh Kumar K . V", Ralph Campbell, Song Liu,
    "Kirill A. Shutemov", Rik van Riel, Johannes Weiner, Minchan Kim,
    Hugh Dickins, Alexey Dobriyan, Mike Kravetz, Andrew Morton,
    Linus Torvalds, Sasha Levin
Subject: [PATCH 5.12 665/700] mm/huge_memory.c: remove dedicated macro HPAGE_CACHE_INDEX_MASK
Date: Mon, 12 Jul 2021 08:12:28 +0200
Message-Id: <20210712061046.648021828@linuxfoundation.org>
In-Reply-To: <20210712060924.797321836@linuxfoundation.org>
References: <20210712060924.797321836@linuxfoundation.org>
X-Mailing-List: stable@vger.kernel.org

From: Miaohe Lin

[ Upstream commit b2bd53f18bb7f7cfc91b3bb527d7809376700a8e ]

Patch series "Cleanup and fixup for huge_memory", v3.

This series contains cleanups to remove a dedicated macro and to remove
an unnecessary tlb_remove_page_size() for the huge zero pmd.  It also
adds missing read-only THP checking for transparent_hugepage_enabled()
and avoids discarding a hugepage while other processes are mapping it.
More details can be found in the respective changelogs.

This patch (of 5):

Rewrite the pgoff checking logic to remove the macro
HPAGE_CACHE_INDEX_MASK, which is only used here, and to simplify the
code.
Link: https://lkml.kernel.org/r/20210511134857.1581273-1-linmiaohe@huawei.com
Link: https://lkml.kernel.org/r/20210511134857.1581273-2-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin
Reviewed-by: Yang Shi
Reviewed-by: Anshuman Khandual
Reviewed-by: David Hildenbrand
Cc: Zi Yan
Cc: William Kucharski
Cc: Matthew Wilcox
Cc: "Aneesh Kumar K . V"
Cc: Ralph Campbell
Cc: Song Liu
Cc: Kirill A. Shutemov
Cc: Rik van Riel
Cc: Johannes Weiner
Cc: Minchan Kim
Cc: Hugh Dickins
Cc: Alexey Dobriyan
Cc: Mike Kravetz
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 include/linux/huge_mm.h | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 6686a0baa91d..2a7c58ea5ca9 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -155,15 +155,13 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
 
 bool transparent_hugepage_enabled(struct vm_area_struct *vma);
 
-#define HPAGE_CACHE_INDEX_MASK (HPAGE_PMD_NR - 1)
-
 static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
 		unsigned long haddr)
 {
 	/* Don't have to check pgoff for anonymous vma */
 	if (!vma_is_anonymous(vma)) {
-		if (((vma->vm_start >> PAGE_SHIFT) & HPAGE_CACHE_INDEX_MASK) !=
-			(vma->vm_pgoff & HPAGE_CACHE_INDEX_MASK))
+		if (!IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
+				HPAGE_PMD_NR))
 			return false;
 	}
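
[Editor's note, not part of the patch: the conversion is behavior-preserving
because both forms only compare the low bits of the two page offsets, and
HPAGE_PMD_NR is a power of two (1 << HPAGE_PMD_ORDER).  The userspace sketch
below brute-forces that equivalence; the HPAGE_PMD_NR value and the local
IS_ALIGNED() copy are illustrative stand-ins for the kernel definitions.]

/*
 * Standalone equivalence check for the old mask comparison vs. the new
 * IS_ALIGNED() test.  Everything here is a userspace approximation of the
 * kernel macros, kept only for this demonstration.
 */
#include <assert.h>
#include <stdio.h>

#define HPAGE_PMD_NR		512UL			/* example: 2M huge page / 4K base page */
#define HPAGE_CACHE_INDEX_MASK	(HPAGE_PMD_NR - 1)	/* the macro the patch removes */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)	/* simplified copy of the kernel helper */

int main(void)
{
	unsigned long start_pgoff, vm_pgoff;

	for (start_pgoff = 0; start_pgoff < 2 * HPAGE_PMD_NR; start_pgoff++) {
		for (vm_pgoff = 0; vm_pgoff < 2 * HPAGE_PMD_NR; vm_pgoff++) {
			/* old check: compare the low bits of both offsets */
			int old = (start_pgoff & HPAGE_CACHE_INDEX_MASK) !=
				  (vm_pgoff & HPAGE_CACHE_INDEX_MASK);
			/* new check: their difference must be HPAGE_PMD_NR-aligned */
			int new = !IS_ALIGNED(start_pgoff - vm_pgoff, HPAGE_PMD_NR);

			assert(old == new);	/* both reject exactly the same VMAs */
		}
	}
	printf("old mask check and IS_ALIGNED() agree\n");
	return 0;
}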