From patchwork Mon Jul 12 06:06:45 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 473021
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Anshuman Khandual,
 "Aneesh Kumar K.V", Christophe Leroy, Andrew Morton, Linus Torvalds,
 Sasha Levin
Subject: [PATCH 5.13 382/800] mm/debug_vm_pgtable: ensure THP availability via has_transparent_hugepage()
Date: Mon, 12 Jul 2021 08:06:45 +0200
Message-Id: <20210712061007.804995816@linuxfoundation.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210712060912.995381202@linuxfoundation.org>
References: <20210712060912.995381202@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: stable@vger.kernel.org

From: Anshuman Khandual

[ Upstream commit 65ac1a60a57e2c55f2ac37f27095f6b012295e81 ]

On certain platforms, THP support cannot be validated via the build option
CONFIG_TRANSPARENT_HUGEPAGE alone. Instead, has_transparent_hugepage() also
needs to be called to verify THP runtime support. Otherwise the debug test
runs into unusable THP helpers, as happens with a 4K hash config on the
powerpc platform [1]. This moves all pfn_pmd() and pfn_pud() calls after the
THP runtime validation with has_transparent_hugepage(), which prevents the
problem (a minimal sketch of the resulting ordering follows the diff below).

[1] https://bugzilla.kernel.org/show_bug.cgi?id=213069

Link: https://lkml.kernel.org/r/1621397588-19211-1-git-send-email-anshuman.khandual@arm.com
Fixes: 787d563b8642 ("mm/debug_vm_pgtable: fix kernel crash by checking for THP support")
Signed-off-by: Anshuman Khandual
Cc: Aneesh Kumar K.V
Cc: Christophe Leroy
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 mm/debug_vm_pgtable.c | 63 ++++++++++++++++++++++++++++++++++---------
 1 file changed, 51 insertions(+), 12 deletions(-)

--
2.30.2

diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 297d1b349c19..92bfc37300df 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -146,13 +146,14 @@ static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
 static void __init pmd_basic_tests(unsigned long pfn, int idx)
 {
 	pgprot_t prot = protection_map[idx];
-	pmd_t pmd = pfn_pmd(pfn, prot);
 	unsigned long val = idx, *ptr = &val;
+	pmd_t pmd;
 
 	if (!has_transparent_hugepage())
 		return;
 
 	pr_debug("Validating PMD basic (%pGv)\n", ptr);
+	pmd = pfn_pmd(pfn, prot);
 
 	/*
 	 * This test needs to be executed after the given page table entry
@@ -185,7 +186,7 @@ static void __init pmd_advanced_tests(struct mm_struct *mm,
 				      unsigned long pfn, unsigned long vaddr,
 				      pgprot_t prot, pgtable_t pgtable)
 {
-	pmd_t pmd = pfn_pmd(pfn, prot);
+	pmd_t pmd;
 
 	if (!has_transparent_hugepage())
 		return;
@@ -232,9 +233,14 @@ static void __init pmd_advanced_tests(struct mm_struct *mm,
 
 static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot)
 {
-	pmd_t pmd = pfn_pmd(pfn, prot);
+	pmd_t pmd;
+
+	if (!has_transparent_hugepage())
+		return;
 
 	pr_debug("Validating PMD leaf\n");
+	pmd = pfn_pmd(pfn, prot);
+
 	/*
 	 * PMD based THP is a leaf entry.
 	 */
@@ -267,12 +273,16 @@ static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
 
 static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
 {
-	pmd_t pmd = pfn_pmd(pfn, prot);
+	pmd_t pmd;
 
 	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
 		return;
 
+	if (!has_transparent_hugepage())
+		return;
+
 	pr_debug("Validating PMD saved write\n");
+	pmd = pfn_pmd(pfn, prot);
 	WARN_ON(!pmd_savedwrite(pmd_mk_savedwrite(pmd_clear_savedwrite(pmd))));
 	WARN_ON(pmd_savedwrite(pmd_clear_savedwrite(pmd_mk_savedwrite(pmd))));
 }
@@ -281,13 +291,14 @@ static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
 static void __init pud_basic_tests(struct mm_struct *mm, unsigned long pfn, int idx)
 {
 	pgprot_t prot = protection_map[idx];
-	pud_t pud = pfn_pud(pfn, prot);
 	unsigned long val = idx, *ptr = &val;
+	pud_t pud;
 
 	if (!has_transparent_hugepage())
 		return;
 
 	pr_debug("Validating PUD basic (%pGv)\n", ptr);
+	pud = pfn_pud(pfn, prot);
 
 	/*
 	 * This test needs to be executed after the given page table entry
@@ -323,7 +334,7 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
 				      unsigned long pfn, unsigned long vaddr,
 				      pgprot_t prot)
 {
-	pud_t pud = pfn_pud(pfn, prot);
+	pud_t pud;
 
 	if (!has_transparent_hugepage())
 		return;
@@ -332,6 +343,7 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
 	/* Align the address wrt HPAGE_PUD_SIZE */
 	vaddr &= HPAGE_PUD_MASK;
 
+	pud = pfn_pud(pfn, prot);
 	set_pud_at(mm, vaddr, pudp, pud);
 	pudp_set_wrprotect(mm, vaddr, pudp);
 	pud = READ_ONCE(*pudp);
@@ -370,9 +382,13 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
 
 static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot)
 {
-	pud_t pud = pfn_pud(pfn, prot);
+	pud_t pud;
+
+	if (!has_transparent_hugepage())
+		return;
 
 	pr_debug("Validating PUD leaf\n");
+	pud = pfn_pud(pfn, prot);
 	/*
 	 * PUD based THP is a leaf entry.
 	 */
@@ -654,12 +670,16 @@ static void __init pte_protnone_tests(unsigned long pfn, pgprot_t prot)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void __init pmd_protnone_tests(unsigned long pfn, pgprot_t prot)
 {
-	pmd_t pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
+	pmd_t pmd;
 
 	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
 		return;
 
+	if (!has_transparent_hugepage())
+		return;
+
 	pr_debug("Validating PMD protnone\n");
+	pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
 	WARN_ON(!pmd_protnone(pmd));
 	WARN_ON(!pmd_present(pmd));
 }
@@ -679,18 +699,26 @@ static void __init pte_devmap_tests(unsigned long pfn, pgprot_t prot)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void __init pmd_devmap_tests(unsigned long pfn, pgprot_t prot)
 {
-	pmd_t pmd = pfn_pmd(pfn, prot);
+	pmd_t pmd;
+
+	if (!has_transparent_hugepage())
+		return;
 
 	pr_debug("Validating PMD devmap\n");
+	pmd = pfn_pmd(pfn, prot);
 	WARN_ON(!pmd_devmap(pmd_mkdevmap(pmd)));
 }
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 static void __init pud_devmap_tests(unsigned long pfn, pgprot_t prot)
 {
-	pud_t pud = pfn_pud(pfn, prot);
+	pud_t pud;
+
+	if (!has_transparent_hugepage())
+		return;
 
 	pr_debug("Validating PUD devmap\n");
+	pud = pfn_pud(pfn, prot);
 	WARN_ON(!pud_devmap(pud_mkdevmap(pud)));
 }
 #else /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
@@ -733,25 +761,33 @@ static void __init pte_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void __init pmd_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
 {
-	pmd_t pmd = pfn_pmd(pfn, prot);
+	pmd_t pmd;
 
 	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY))
 		return;
 
+	if (!has_transparent_hugepage())
+		return;
+
 	pr_debug("Validating PMD soft dirty\n");
+	pmd = pfn_pmd(pfn, prot);
 	WARN_ON(!pmd_soft_dirty(pmd_mksoft_dirty(pmd)));
 	WARN_ON(pmd_soft_dirty(pmd_clear_soft_dirty(pmd)));
 }
 
 static void __init pmd_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
 {
-	pmd_t pmd = pfn_pmd(pfn, prot);
+	pmd_t pmd;
 
 	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY) ||
 	    !IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION))
 		return;
 
+	if (!has_transparent_hugepage())
+		return;
+
 	pr_debug("Validating PMD swap soft dirty\n");
+	pmd = pfn_pmd(pfn, prot);
 	WARN_ON(!pmd_swp_soft_dirty(pmd_swp_mksoft_dirty(pmd)));
 	WARN_ON(pmd_swp_soft_dirty(pmd_swp_clear_soft_dirty(pmd)));
 }
@@ -780,6 +816,9 @@ static void __init pmd_swap_tests(unsigned long pfn, pgprot_t prot)
 	swp_entry_t swp;
 	pmd_t pmd;
 
+	if (!has_transparent_hugepage())
+		return;
+
 	pr_debug("Validating PMD swap\n");
 	pmd = pfn_pmd(pfn, prot);
 	swp = __pmd_to_swp_entry(pmd);
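
For reference, every hunk above applies the same ordering rule, shown here in
isolation. This is a minimal, illustrative sketch rather than code from the
patch: the helper name thp_guard_example_tests() is made up for illustration,
while has_transparent_hugepage(), pfn_pmd(), pmd_mkhuge(), pmd_present(),
pr_debug() and WARN_ON() are the kernel interfaces the real tests in
mm/debug_vm_pgtable.c use.

#include <linux/init.h>
#include <linux/mm.h>
#include <linux/huge_mm.h>
#include <linux/printk.h>

/* Hypothetical example helper, modelled on the tests touched by this patch. */
static void __init thp_guard_example_tests(unsigned long pfn, pgprot_t prot)
{
	pmd_t pmd;	/* only declared here; not built from the pfn yet */

	/* The runtime THP check must come before any pfn_pmd()/pfn_pud() call. */
	if (!has_transparent_hugepage())
		return;

	pr_debug("Validating PMD guard example\n");
	pmd = pfn_pmd(pfn, prot);	/* safe: THP runtime support confirmed */
	WARN_ON(!pmd_present(pmd_mkhuge(pmd)));
}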