From patchwork Mon Mar 1 16:14:01 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 389989
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mike Kravetz, Zi Yan,
    Davidlohr Bueso, "Kirill A. Shutemov", Andrea Arcangeli,
    Matthew Wilcox, Oscar Salvador, Joao Martins, Andrew Morton,
    Linus Torvalds
Subject: [PATCH 4.19 221/247] hugetlb: fix copy_huge_page_from_user contig page struct assumption
Date: Mon, 1 Mar 2021 17:14:01 +0100
Message-Id: <20210301161042.500881839@linuxfoundation.org>
In-Reply-To: <20210301161031.684018251@linuxfoundation.org>
References: <20210301161031.684018251@linuxfoundation.org>

From: Mike Kravetz

commit 3272cfc2525b3a2810a59312d7a1e6f04a0ca3ef upstream.

page structs are not guaranteed to be contiguous for gigantic pages.  The
routine copy_huge_page_from_user can encounter gigantic pages, yet it
assumes page structs are contiguous when copying pages from user space.
Since page structs for the target gigantic page are not contiguous, the
data copied from user space could overwrite other pages not associated
with the gigantic page and cause data corruption.

Non-contiguous page structs are generally not an issue.  However, they
can exist with a specific kernel configuration and hotplug operations.
For example: Configure the kernel with CONFIG_SPARSEMEM and
!CONFIG_SPARSEMEM_VMEMMAP.  Then, hotplug add memory for the area where
the gigantic page will be allocated.

Link: https://lkml.kernel.org/r/20210217184926.33567-2-mike.kravetz@oracle.com
Fixes: 8fb5debc5fcd ("userfaultfd: hugetlbfs: add hugetlb_mcopy_atomic_pte for userfaultfd support")
Signed-off-by: Mike Kravetz
Cc: Zi Yan
Cc: Davidlohr Bueso
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Matthew Wilcox
Cc: Oscar Salvador
Cc: Joao Martins
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
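Note, not part of the change itself: the fix below relies on the
mem_map_next() helper to step from one subpage's page struct to the
next.  As a reference sketch, approximately as the helper appears in
mm/internal.h of 4.19-era kernels, it falls back to a pfn_to_page()
lookup whenever the iterator crosses a MAX_ORDER_NR_PAGES boundary,
which is where the mem_map can be discontiguous under CONFIG_SPARSEMEM
without CONFIG_SPARSEMEM_VMEMMAP:

	/*
	 * Iterator over all subpages within the maximally aligned gigantic
	 * page 'base'.  Handle any discontiguity in the mem_map.
	 */
	static inline struct page *mem_map_next(struct page *iter,
						struct page *base, int offset)
	{
		/*
		 * On a max-order boundary the next page struct may live in
		 * another section's mem_map; recompute it from the pfn.
		 */
		if (unlikely((offset & (MAX_ORDER_NR_PAGES - 1)) == 0)) {
			unsigned long pfn = page_to_pfn(base) + offset;

			if (!pfn_valid(pfn))
				return NULL;
			return pfn_to_page(pfn);
		}
		/* Within a max-order block, page structs are contiguous. */
		return iter + 1;
	}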
Shutemov" Cc: Andrea Arcangeli Cc: Matthew Wilcox Cc: Oscar Salvador Cc: Joao Martins Cc: Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- mm/memory.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) --- a/mm/memory.c +++ b/mm/memory.c @@ -4885,17 +4885,19 @@ long copy_huge_page_from_user(struct pag void *page_kaddr; unsigned long i, rc = 0; unsigned long ret_val = pages_per_huge_page * PAGE_SIZE; + struct page *subpage = dst_page; - for (i = 0; i < pages_per_huge_page; i++) { + for (i = 0; i < pages_per_huge_page; + i++, subpage = mem_map_next(subpage, dst_page, i)) { if (allow_pagefault) - page_kaddr = kmap(dst_page + i); + page_kaddr = kmap(subpage); else - page_kaddr = kmap_atomic(dst_page + i); + page_kaddr = kmap_atomic(subpage); rc = copy_from_user(page_kaddr, (const void __user *)(src + i * PAGE_SIZE), PAGE_SIZE); if (allow_pagefault) - kunmap(dst_page + i); + kunmap(subpage); else kunmap_atomic(page_kaddr);