From patchwork Mon Jul 12 06:07:08 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 472957
Delivered-To: patch@linaro.org
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Anshuman Khandual,
    "Aneesh Kumar K.V", Christophe Leroy, Andrew Morton,
    Linus Torvalds, Sasha Levin
Subject: [PATCH 5.12 345/700] mm/debug_vm_pgtable: ensure THP availability via has_transparent_hugepage()
Date: Mon, 12 Jul 2021 08:07:08 +0200
Message-Id: <20210712061012.973374545@linuxfoundation.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210712060924.797321836@linuxfoundation.org>
References: <20210712060924.797321836@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: stable@vger.kernel.org

From: Anshuman Khandual

[ Upstream commit 65ac1a60a57e2c55f2ac37f27095f6b012295e81 ]

On certain platforms, THP support cannot be validated via the build
option CONFIG_TRANSPARENT_HUGEPAGE alone; has_transparent_hugepage()
must also be called to verify THP runtime support.  Otherwise the debug
tests run into unusable THP helpers, as with a 4K hash config on the
powerpc platform [1].

Move all pfn_pmd() and pfn_pud() calls after the THP runtime validation
with has_transparent_hugepage(), which prevents the problem described
above.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=213069

Link: https://lkml.kernel.org/r/1621397588-19211-1-git-send-email-anshuman.khandual@arm.com
Fixes: 787d563b8642 ("mm/debug_vm_pgtable: fix kernel crash by checking for THP support")
Signed-off-by: Anshuman Khandual
Cc: Aneesh Kumar K.V
Cc: Christophe Leroy
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 mm/debug_vm_pgtable.c | 63 ++++++++++++++++++++++++++++++++++---------
 1 file changed, 51 insertions(+), 12 deletions(-)

--
2.30.2

diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 726fd2030f64..12ebc97e8b43 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -146,13 +146,14 @@ static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
 static void __init pmd_basic_tests(unsigned long pfn, int idx)
 {
 	pgprot_t prot = protection_map[idx];
-	pmd_t pmd = pfn_pmd(pfn, prot);
 	unsigned long val = idx, *ptr = &val;
+	pmd_t pmd;
 
 	if (!has_transparent_hugepage())
 		return;
 
 	pr_debug("Validating PMD basic (%pGv)\n", ptr);
+	pmd = pfn_pmd(pfn, prot);
 
 	/*
 	 * This test needs to be executed after the given page table entry
@@ -185,7 +186,7 @@ static void __init pmd_advanced_tests(struct mm_struct *mm,
 				      unsigned long pfn, unsigned long vaddr,
 				      pgprot_t prot, pgtable_t pgtable)
 {
-	pmd_t pmd = pfn_pmd(pfn, prot);
+	pmd_t pmd;
 
 	if (!has_transparent_hugepage())
 		return;
@@ -232,9 +233,14 @@ static void __init pmd_advanced_tests(struct mm_struct *mm,
 
 static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot)
 {
-	pmd_t pmd = pfn_pmd(pfn, prot);
+	pmd_t pmd;
+
+	if (!has_transparent_hugepage())
+		return;
 
 	pr_debug("Validating PMD leaf\n");
+	pmd = pfn_pmd(pfn, prot);
+
 	/*
 	 * PMD based THP is a leaf entry.
 	 */
@@ -267,12 +273,16 @@ static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
 
 static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
 {
-	pmd_t pmd = pfn_pmd(pfn, prot);
+	pmd_t pmd;
 
 	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
 		return;
 
+	if (!has_transparent_hugepage())
+		return;
+
 	pr_debug("Validating PMD saved write\n");
+	pmd = pfn_pmd(pfn, prot);
 	WARN_ON(!pmd_savedwrite(pmd_mk_savedwrite(pmd_clear_savedwrite(pmd))));
 	WARN_ON(pmd_savedwrite(pmd_clear_savedwrite(pmd_mk_savedwrite(pmd))));
 }
@@ -281,13 +291,14 @@ static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
 static void __init pud_basic_tests(struct mm_struct *mm, unsigned long pfn, int idx)
 {
 	pgprot_t prot = protection_map[idx];
-	pud_t pud = pfn_pud(pfn, prot);
 	unsigned long val = idx, *ptr = &val;
+	pud_t pud;
 
 	if (!has_transparent_hugepage())
 		return;
 
 	pr_debug("Validating PUD basic (%pGv)\n", ptr);
+	pud = pfn_pud(pfn, prot);
 
 	/*
 	 * This test needs to be executed after the given page table entry
@@ -323,7 +334,7 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
 				      unsigned long pfn, unsigned long vaddr,
 				      pgprot_t prot)
 {
-	pud_t pud = pfn_pud(pfn, prot);
+	pud_t pud;
 
 	if (!has_transparent_hugepage())
 		return;
@@ -332,6 +343,7 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
 	/* Align the address wrt HPAGE_PUD_SIZE */
 	vaddr &= HPAGE_PUD_MASK;
 
+	pud = pfn_pud(pfn, prot);
 	set_pud_at(mm, vaddr, pudp, pud);
 	pudp_set_wrprotect(mm, vaddr, pudp);
 	pud = READ_ONCE(*pudp);
@@ -370,9 +382,13 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
 
 static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot)
 {
-	pud_t pud = pfn_pud(pfn, prot);
+	pud_t pud;
+
+	if (!has_transparent_hugepage())
+		return;
 
 	pr_debug("Validating PUD leaf\n");
+	pud = pfn_pud(pfn, prot);
 	/*
 	 * PUD based THP is a leaf entry.
 	 */
@@ -654,12 +670,16 @@ static void __init pte_protnone_tests(unsigned long pfn, pgprot_t prot)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void __init pmd_protnone_tests(unsigned long pfn, pgprot_t prot)
 {
-	pmd_t pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
+	pmd_t pmd;
 
 	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
 		return;
 
+	if (!has_transparent_hugepage())
+		return;
+
 	pr_debug("Validating PMD protnone\n");
+	pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
 	WARN_ON(!pmd_protnone(pmd));
 	WARN_ON(!pmd_present(pmd));
 }
@@ -679,18 +699,26 @@ static void __init pte_devmap_tests(unsigned long pfn, pgprot_t prot)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void __init pmd_devmap_tests(unsigned long pfn, pgprot_t prot)
 {
-	pmd_t pmd = pfn_pmd(pfn, prot);
+	pmd_t pmd;
+
+	if (!has_transparent_hugepage())
+		return;
 
 	pr_debug("Validating PMD devmap\n");
+	pmd = pfn_pmd(pfn, prot);
 	WARN_ON(!pmd_devmap(pmd_mkdevmap(pmd)));
 }
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 static void __init pud_devmap_tests(unsigned long pfn, pgprot_t prot)
 {
-	pud_t pud = pfn_pud(pfn, prot);
+	pud_t pud;
+
+	if (!has_transparent_hugepage())
+		return;
 
 	pr_debug("Validating PUD devmap\n");
+	pud = pfn_pud(pfn, prot);
 	WARN_ON(!pud_devmap(pud_mkdevmap(pud)));
 }
 #else /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
@@ -733,25 +761,33 @@ static void __init pte_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void __init pmd_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
 {
-	pmd_t pmd = pfn_pmd(pfn, prot);
+	pmd_t pmd;
 
 	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY))
 		return;
 
+	if (!has_transparent_hugepage())
+		return;
+
 	pr_debug("Validating PMD soft dirty\n");
+	pmd = pfn_pmd(pfn, prot);
 	WARN_ON(!pmd_soft_dirty(pmd_mksoft_dirty(pmd)));
 	WARN_ON(pmd_soft_dirty(pmd_clear_soft_dirty(pmd)));
 }
 
 static void __init pmd_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
 {
-	pmd_t pmd = pfn_pmd(pfn, prot);
+	pmd_t pmd;
 
 	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY) ||
 	    !IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION))
 		return;
 
+	if (!has_transparent_hugepage())
+		return;
+
 	pr_debug("Validating PMD swap soft dirty\n");
+	pmd = pfn_pmd(pfn, prot);
 	WARN_ON(!pmd_swp_soft_dirty(pmd_swp_mksoft_dirty(pmd)));
 	WARN_ON(pmd_swp_soft_dirty(pmd_swp_clear_soft_dirty(pmd)));
 }
@@ -780,6 +816,9 @@ static void __init pmd_swap_tests(unsigned long pfn, pgprot_t prot)
 	swp_entry_t swp;
 	pmd_t pmd;
 
+	if (!has_transparent_hugepage())
+		return;
+
 	pr_debug("Validating PMD swap\n");
 	pmd = pfn_pmd(pfn, prot);
 	swp = __pmd_to_swp_entry(pmd);
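
For readers skimming the diff, every hunk applies the same mechanical pattern:
declare the pmd_t/pud_t, bail out early when has_transparent_hugepage() reports
no runtime THP support, and only then construct the entry with pfn_pmd() or
pfn_pud(). Below is a minimal, self-contained userspace sketch of that ordering
rule, not kernel code: the types and helpers are illustrative stand-ins for the
real kernel interfaces of the same names.

/*
 * Illustrative userspace sketch of the ordering rule this patch enforces:
 * do not build a huge page table entry with pfn_pmd()/pfn_pud() until
 * has_transparent_hugepage() has confirmed runtime THP support.
 * All types and helpers below are stand-in stubs, not the kernel's.
 */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned long pmd_t;    /* stub for the kernel's pmd_t */
typedef unsigned long pgprot_t; /* stub for the kernel's pgprot_t */

/* Stub: pretend we run on a config without runtime THP, e.g. 4K hash powerpc. */
static bool has_transparent_hugepage(void)
{
	return false;
}

/* Stub: the real helper may be unusable when runtime THP is absent. */
static pmd_t pfn_pmd(unsigned long pfn, pgprot_t prot)
{
	return pfn | prot;
}

static void pmd_basic_tests(unsigned long pfn, pgprot_t prot)
{
	pmd_t pmd;	/* declare only; do not construct the entry yet */

	if (!has_transparent_hugepage())
		return;	/* bail out before touching any THP helper */

	pmd = pfn_pmd(pfn, prot);	/* safe: runtime support confirmed */
	printf("validated pmd 0x%lx\n", pmd);
}

int main(void)
{
	pmd_basic_tests(0x1000, 0);
	return 0;
}

The point of splitting declaration from initialization is exactly what the
changelog describes: on configs without runtime THP support, pfn_pmd()/pfn_pud()
are unusable (the 4K hash powerpc case in [1]), so the guard must run before any
such entry is constructed.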