From patchwork Thu Apr 14 13:09:01 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 561911
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Hugh Dickins, Oleg Nesterov,
 "Liam R. Howlett", Vlastimil Babka, Andrew Morton, Linus Torvalds
Subject: [PATCH 4.19 038/338] mempolicy: mbind_range() set_policy() after vma_merge()
Date: Thu, 14 Apr 2022 15:09:01 +0200
Message-Id: <20220414110839.979504325@linuxfoundation.org>
X-Mailer: git-send-email 2.35.2
In-Reply-To: <20220414110838.883074566@linuxfoundation.org>
References: <20220414110838.883074566@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Hugh Dickins

commit 4e0906008cdb56381638aa17d9c32734eae6d37a upstream.

v2.6.34 commit 9d8cebd4bcd7 ("mm: fix mbind vma merge problem") introduced
vma_merge() to mbind_range(); but unlike madvise, mlock and mprotect, it
put a "continue" to next vma where its precedents go to "goto success" to
update flags on current vma before advancing: that left vma with the wrong
setting in the infamous vma_merge() case 8.

v3.10 commit 1444f92c8498 ("mm: merging memory blocks resets mempolicy")
tried to fix that in vma_adjust(), without fully understanding the issue.

v3.11 commit 3964acd0dbec ("mm: mempolicy: fix mbind_range() &&
vma_adjust() interaction") reverted that, and went about the fix in the
right way, but chose to optimize out an unnecessary mpol_dup() with a
prior mpol_equal() test.  But on tmpfs, that also pessimized out the vital
call to its ->set_policy(), leaving the new mbind unenforced.

The user visible effect was that the pages got allocated on the local
node (happened to be 0), after the mbind() caller had specifically asked
for them to be allocated on node 1.
There was not any page migration involved in the case reported: the pages
simply got allocated on the wrong node.

Just delete that optimization now (though it could be made conditional on
vma not having a set_policy).  Also remove the "next" variable: it turned
out to be blameless, but also pointless.

Link: https://lkml.kernel.org/r/319e4db9-64ae-4bca-92f0-ade85d342ff@google.com
Fixes: 3964acd0dbec ("mm: mempolicy: fix mbind_range() && vma_adjust() interaction")
Signed-off-by: Hugh Dickins
Acked-by: Oleg Nesterov
Reviewed-by: Liam R. Howlett
Cc: Vlastimil Babka
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/mempolicy.c |    8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -732,7 +732,6 @@ static int vma_replace_policy(struct vm_
 static int mbind_range(struct mm_struct *mm, unsigned long start,
                        unsigned long end, struct mempolicy *new_pol)
 {
-        struct vm_area_struct *next;
         struct vm_area_struct *prev;
         struct vm_area_struct *vma;
         int err = 0;
@@ -748,8 +747,7 @@ static int mbind_range(struct mm_struct
         if (start > vma->vm_start)
                 prev = vma;
 
-        for (; vma && vma->vm_start < end; prev = vma, vma = next) {
-                next = vma->vm_next;
+        for (; vma && vma->vm_start < end; prev = vma, vma = vma->vm_next) {
                 vmstart = max(start, vma->vm_start);
                 vmend   = min(end, vma->vm_end);
 
@@ -763,10 +761,6 @@ static int mbind_range(struct mm_struct
                          new_pol, vma->vm_userfaultfd_ctx);
         if (prev) {
                 vma = prev;
-                        next = vma->vm_next;
-                        if (mpol_equal(vma_policy(vma), new_pol))
-                                continue;
-                        /* vma_merge() joined vma && vma->next, case 8 */
                         goto replace;
                 }
                 if (vma->vm_start != vmstart) {
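
Not part of the patch, and not the reporter's test case: the sketch below
only illustrates the userspace calls behind the effect described above
(mbind() asking for node 1 on a shmem-backed mapping, then checking where
the pages actually landed).  It does not recreate the vma_merge() case 8
layout that triggers the bug; the node numbers, mapping size and use of a
MAP_SHARED|MAP_ANONYMOUS mapping are assumptions for illustration only,
and it relies on libnuma's <numa.h>/<numaif.h> wrappers (link with -lnuma).

/*
 * Illustrative sketch only (see the note above): bind a shmem-backed
 * mapping to node 1, fault it in, then ask which node the first page
 * really landed on.
 *
 * Build: gcc -O2 -o mbind-check mbind-check.c -lnuma
 */
#include <numa.h>       /* numa_available(), numa_max_node() */
#include <numaif.h>     /* mbind(), get_mempolicy(), MPOL_* */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
        size_t len = 16UL << 20;                /* 16MB mapping, arbitrary */
        unsigned long nodemask = 1UL << 1;      /* nodemask with only node 1 set */
        int node = -1;
        char *p;

        if (numa_available() < 0 || numa_max_node() < 1) {
                fprintf(stderr, "need at least two NUMA nodes\n");
                return 1;
        }

        /*
         * MAP_SHARED|MAP_ANONYMOUS is backed by shmem, so mbind() must
         * reach shmem's ->set_policy() for the policy to stick - the
         * call that the mpol_equal() shortcut could skip.
         */
        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* Bind the whole range to node 1 before any page is allocated. */
        if (mbind(p, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0)) {
                perror("mbind");
                return 1;
        }

        memset(p, 1, len);      /* fault the pages in under the new policy */

        /*
         * MPOL_F_NODE|MPOL_F_ADDR returns the node of the page backing p.
         * In the reported case the answer was 0 despite asking for 1
         * (this sketch does not reproduce the merge that caused that).
         */
        if (get_mempolicy(&node, NULL, 0, p, MPOL_F_NODE | MPOL_F_ADDR))
                perror("get_mempolicy");
        else
                printf("first page allocated on node %d\n", node);

        munmap(p, len);
        return 0;
}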