From patchwork Sat Sep 26 01:58:11 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 263435
Date: Fri, 25 Sep 2020 18:58:11 -0700
From: akpm@linux-foundation.org
To: apopple@nvidia.com, bharata@linux.ibm.com, bskeggs@redhat.com, hch@lst.de,
    jgg@nvidia.com, jglisse@redhat.com, jhubbard@nvidia.com,
    mm-commits@vger.kernel.org, rcampbell@nvidia.com, shuah@kernel.org,
    shy828301@gmail.com, stable@vger.kernel.org, ziy@nvidia.com
Subject: [merged] mm-thp-fix-__split_huge_pmd_locked-for-migration-pmd.patch removed from -mm tree
Message-ID: <20200926015811.-UjkluRI2%akpm@linux-foundation.org>
X-Mailing-List: stable@vger.kernel.org

The patch titled
     Subject: mm/thp: fix __split_huge_pmd_locked() for migration PMD
has been removed from the -mm tree.  Its filename was
     mm-thp-fix-__split_huge_pmd_locked-for-migration-pmd.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Ralph Campbell
Subject: mm/thp: fix __split_huge_pmd_locked() for migration PMD

A migrating transparent huge page has to already be unmapped.  Otherwise,
the page could be modified while it is being copied to a new page and data
could be lost.
The function __split_huge_pmd() checks for a PMD migration entry before
calling __split_huge_pmd_locked(), leading one to think that
__split_huge_pmd_locked() can handle splitting a migrating PMD.

However, the code always increments the page->_mapcount and adjusts the
memory control group accounting assuming the page is mapped.

Also, if the PMD entry is a migration PMD entry, the call to
is_huge_zero_pmd(*pmd) is incorrect because it calls pmd_pfn(pmd) instead
of migration_entry_to_pfn(pmd_to_swp_entry(pmd)).  Fix these problems by
checking for a PMD migration entry.

Link: https://lkml.kernel.org/r/20200903183140.19055-1-rcampbell@nvidia.com
Fixes: 84c3fc4e9c56 ("mm: thp: check pmd migration entry in common path")
Signed-off-by: Ralph Campbell
Reviewed-by: Yang Shi
Reviewed-by: Zi Yan
Cc: Jerome Glisse
Cc: John Hubbard
Cc: Alistair Popple
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: Bharata B Rao
Cc: Ben Skeggs
Cc: Shuah Khan
Cc: [4.14+]
Signed-off-by: Andrew Morton
---

 mm/huge_memory.c |   42 +++++++++++++++++++++++-------------------
 1 file changed, 23 insertions(+), 19 deletions(-)

--- a/mm/huge_memory.c~mm-thp-fix-__split_huge_pmd_locked-for-migration-pmd
+++ a/mm/huge_memory.c
@@ -2022,7 +2022,7 @@ static void __split_huge_pmd_locked(stru
 		put_page(page);
 		add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
 		return;
-	} else if (is_huge_zero_pmd(*pmd)) {
+	} else if (pmd_trans_huge(*pmd) && is_huge_zero_pmd(*pmd)) {
 		/*
 		 * FIXME: Do we want to invalidate secondary mmu by calling
 		 * mmu_notifier_invalidate_range() see comments below inside
@@ -2116,30 +2116,34 @@ static void __split_huge_pmd_locked(stru
 		pte = pte_offset_map(&_pmd, addr);
 		BUG_ON(!pte_none(*pte));
 		set_pte_at(mm, addr, pte, entry);
-		atomic_inc(&page[i]._mapcount);
-		pte_unmap(pte);
-	}
-
-	/*
-	 * Set PG_double_map before dropping compound_mapcount to avoid
-	 * false-negative page_mapped().
-	 */
-	if (compound_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
-		for (i = 0; i < HPAGE_PMD_NR; i++)
+		if (!pmd_migration)
 			atomic_inc(&page[i]._mapcount);
+		pte_unmap(pte);
 	}
 
-	lock_page_memcg(page);
-	if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
-		/* Last compound_mapcount is gone. */
-		__dec_lruvec_page_state(page, NR_ANON_THPS);
-		if (TestClearPageDoubleMap(page)) {
-			/* No need in mapcount reference anymore */
+	if (!pmd_migration) {
+		/*
+		 * Set PG_double_map before dropping compound_mapcount to avoid
+		 * false-negative page_mapped().
+		 */
+		if (compound_mapcount(page) > 1 &&
+		    !TestSetPageDoubleMap(page)) {
 			for (i = 0; i < HPAGE_PMD_NR; i++)
-				atomic_dec(&page[i]._mapcount);
+				atomic_inc(&page[i]._mapcount);
+		}
+
+		lock_page_memcg(page);
+		if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
+			/* Last compound_mapcount is gone. */
+			__dec_lruvec_page_state(page, NR_ANON_THPS);
+			if (TestClearPageDoubleMap(page)) {
+				/* No need in mapcount reference anymore */
+				for (i = 0; i < HPAGE_PMD_NR; i++)
+					atomic_dec(&page[i]._mapcount);
+			}
 		}
+		unlock_page_memcg(page);
 	}
-	unlock_page_memcg(page);
 
 	smp_wmb(); /* make pte visible before pmd */
 	pmd_populate(mm, pmd, pgtable);
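
As a footnote for readers not familiar with the entry encodings involved:
the first hunk above works because a migration PMD is a swap-style,
non-present entry, so decoding it with pmd_pfn() (which is what
is_huge_zero_pmd() does) yields a meaningless pfn.  The stand-alone C
sketch below is *not* kernel code; struct pmd_model, the model_* helpers
and the pfn value 42 are made-up stand-ins for pmd_trans_huge(),
is_pmd_migration_entry() and pmd_pfn()/is_huge_zero_pmd(), used only to
show why the pfn-based check has to be gated on the entry actually being
a present huge PMD.

#include <stdbool.h>
#include <stdio.h>

struct pmd_model {
	bool present;           /* stand-in for pmd_trans_huge() */
	unsigned long payload;  /* pfn if present, swap offset if migrating */
};

static const unsigned long huge_zero_pfn = 42;  /* arbitrary for the model */

static bool model_pmd_trans_huge(struct pmd_model p)
{
	return p.present;
}

static bool model_is_migration_entry(struct pmd_model p)
{
	return !p.present;
}

/*
 * Stand-in for is_huge_zero_pmd(): decodes the entry as a pfn, so it is
 * only meaningful for a *present* huge PMD.
 */
static bool model_is_huge_zero_pmd(struct pmd_model p)
{
	return p.payload == huge_zero_pfn;
}

int main(void)
{
	/* A migration entry whose swap payload happens to equal the pfn. */
	struct pmd_model migrating = { .present = false, .payload = 42 };

	/* Unguarded order: the swap payload is misread as a pfn. */
	if (model_is_huge_zero_pmd(migrating))
		puts("unguarded: migration entry mistaken for the huge zero page");

	/* Guarded order, as in the patch: require a present huge PMD first. */
	if (model_pmd_trans_huge(migrating) && model_is_huge_zero_pmd(migrating))
		puts("guarded: never reached for a migration entry");
	else if (model_is_migration_entry(migrating))
		puts("guarded: migration entry handled on its own path");

	return 0;
}

Built with any C99 compiler, the model prints the "unguarded" line and the
"handled on its own path" line, mirroring the false positive that the
pmd_trans_huge() guard in the patch removes.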