From patchwork Tue Nov 17 13:04:26 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 325469
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Shijie Luo, Miaohe Lin,
 Andrew Morton, Oscar Salvador, Michal Hocko, Feilong Lin,
 Linus Torvalds, Sasha Levin
Subject: [PATCH 4.4 03/64] mm: mempolicy: fix potential pte_unmap_unlock pte error
Date: Tue, 17 Nov 2020 14:04:26 +0100
Message-Id: <20201117122106.299492637@linuxfoundation.org>
In-Reply-To: <20201117122106.144800239@linuxfoundation.org>
References: <20201117122106.144800239@linuxfoundation.org>

From: Shijie Luo

[ Upstream commit 3f08842098e842c51e3b97d0dcdebf810b32558e ]

When the flags passed to queue_pages_pte_range have neither the
MPOL_MF_MOVE nor the MPOL_MF_MOVE_ALL bit set, the code breaks out of
the pte loop early, and passing the resulting pte - 1 to
pte_unmap_unlock is not a good idea.

queue_pages_pte_range can run in MPOL_MF_STRICT mode (without
MPOL_MF_MOVE or MPOL_MF_MOVE_ALL), which doesn't migrate misplaced
pages but returns with -EIO when encountering such a page. Since
commit a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when
MPOL_MF_STRICT is specified"), an early break on the first pte in the
range results in pte_unmap_unlock being called on an underflowed pte.
This can lead to lockups later on when somebody tries to lock the pte
or the page_table_lock again.
Fixes: a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when MPOL_MF_STRICT is specified")
Signed-off-by: Shijie Luo
Signed-off-by: Miaohe Lin
Signed-off-by: Andrew Morton
Reviewed-by: Oscar Salvador
Acked-by: Michal Hocko
Cc: Miaohe Lin
Cc: Feilong Lin
Cc: Shijie Luo
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20201019074853.50856-1-luoshijie1@huawei.com
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 mm/mempolicy.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index e101cac3d4a63..9ab7969ee7e30 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -490,14 +490,14 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = qp->flags;
 	int nid;
-	pte_t *pte;
+	pte_t *pte, *mapped_pte;
 	spinlock_t *ptl;
 
 	split_huge_page_pmd(vma, addr, pmd);
 	if (pmd_trans_unstable(pmd))
 		return 0;
 
-	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
 			continue;
@@ -521,7 +521,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		} else
 			break;
 	}
-	pte_unmap_unlock(pte - 1, ptl);
+	pte_unmap_unlock(mapped_pte, ptl);
 	cond_resched();
 	return addr != end ? -EIO : 0;
 }
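
To see why pte - 1 is the wrong unlock target, here is a minimal
standalone sketch. It is plain userspace C, not kernel code:
walk_range() and the int table are invented stand-ins for
queue_pages_pte_range's loop and the mapped pte page, with
pte_offset_map_lock()/pte_unmap_unlock() reduced to pointer
bookkeeping.

#include <stdio.h>

#define NR_ENTRIES 4

/*
 * Invented stand-in for queue_pages_pte_range's pte loop: "table"
 * plays the mapped pte page, "strict" forces the early break that
 * MPOL_MF_STRICT takes on the first misplaced page.
 */
static void walk_range(int *table, int strict)
{
	int *mapped_pte, *pte;
	int i;

	mapped_pte = pte = table;	/* the fix: remember the start */

	for (i = 0; i < NR_ENTRIES; pte++, i++) {
		if (strict)
			break;		/* early break on the FIRST entry */
	}

	/* Index the old code's pte - 1 would unlock through. */
	printf("pte - 1    -> table[%td]\n", (pte - table) - 1);
	/* Index the fixed code's mapped_pte unlocks through. */
	printf("mapped_pte -> table[%td]\n", mapped_pte - table);
}

int main(void)
{
	int table[NR_ENTRIES] = { 0 };

	walk_range(table, 0);	/* full walk:   pte - 1 -> table[3], OK */
	walk_range(table, 1);	/* early break: pte - 1 -> table[-1] !! */
	return 0;
}

A full walk leaves pte one past the end, so pte - 1 happens to name
the last entry and the bug stays hidden; a first-iteration break
leaves pte at the start, so pte - 1 points one entry before the
mapping, and in the kernel that underflowed pointer is exactly what
got handed to pte_unmap_unlock() before this patch.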