From patchwork Wed Aug 26 00:38:38 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 264899
Date: Tue, 25 Aug 2020 17:38:38 -0700
From: akpm@linux-foundation.org
To: alistair@popple.id.au, jglisse@redhat.com, jhubbard@nvidia.com,
 mm-commits@vger.kernel.org, peterx@redhat.com, rcampbell@nvidia.com,
 stable@vger.kernel.org
Subject: + mm-rmap-fixup-copying-of-soft-dirty-and-uffd-ptes.patch added to -mm tree
Message-ID: <20200826003838.aqAgh1Wow%akpm@linux-foundation.org>
X-Mailing-List: stable@vger.kernel.org

The patch titled
     Subject: mm/rmap: fixup copying of soft dirty and uffd ptes
has been added to the -mm tree.
Its filename is
     mm-rmap-fixup-copying-of-soft-dirty-and-uffd-ptes.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-rmap-fixup-copying-of-soft-dirty-and-uffd-ptes.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-rmap-fixup-copying-of-soft-dirty-and-uffd-ptes.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Alistair Popple
Subject: mm/rmap: fixup copying of soft dirty and uffd ptes

During memory migration a pte is temporarily replaced with a migration
swap pte.  Some pte bits from the existing mapping, such as the soft-dirty
and uffd write-protect bits, are preserved by copying them into the
temporary migration swap pte.  However, these bits are not stored at the
same location for swap and non-swap ptes, so testing them requires the
helper function appropriate to the given pte type.

Unfortunately, several code locations were found where the wrong helper
function was being used to test the soft_dirty and uffd_wp bits, which
leads to them being incorrectly set or cleared during page migration.

Fix these by using the correct tests based on pte type.

Link: https://lkml.kernel.org/r/20200825064232.10023-2-alistair@popple.id.au
Fixes: a5430dda8a3a ("mm/migrate: support un-addressable ZONE_DEVICE page in migration")
Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
Fixes: f45ec5ff16a7 ("userfaultfd: wp: support swap and page migration")
Signed-off-by: Alistair Popple
Reviewed-by: Peter Xu
Cc: Jérôme Glisse
Cc: John Hubbard
Cc: Ralph Campbell
Cc: Alistair Popple
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
---

 mm/migrate.c |   15 +++++++++++----
 mm/rmap.c    |    9 +++++++--
 2 files changed, 18 insertions(+), 6 deletions(-)

--- a/mm/migrate.c~mm-rmap-fixup-copying-of-soft-dirty-and-uffd-ptes
+++ a/mm/migrate.c
@@ -2427,10 +2427,17 @@ again:
 			entry = make_migration_entry(page, mpfn &
 						     MIGRATE_PFN_WRITE);
 			swp_pte = swp_entry_to_pte(entry);
-			if (pte_soft_dirty(pte))
-				swp_pte = pte_swp_mksoft_dirty(swp_pte);
-			if (pte_uffd_wp(pte))
-				swp_pte = pte_swp_mkuffd_wp(swp_pte);
+			if (pte_present(pte)) {
+				if (pte_soft_dirty(pte))
+					swp_pte = pte_swp_mksoft_dirty(swp_pte);
+				if (pte_uffd_wp(pte))
+					swp_pte = pte_swp_mkuffd_wp(swp_pte);
+			} else {
+				if (pte_swp_soft_dirty(pte))
+					swp_pte = pte_swp_mksoft_dirty(swp_pte);
+				if (pte_swp_uffd_wp(pte))
+					swp_pte = pte_swp_mkuffd_wp(swp_pte);
+			}
 			set_pte_at(mm, addr, ptep, swp_pte);

 			/*
--- a/mm/rmap.c~mm-rmap-fixup-copying-of-soft-dirty-and-uffd-ptes
+++ a/mm/rmap.c
@@ -1511,9 +1511,14 @@ static bool try_to_unmap_one(struct page
 			 */
 			entry = make_migration_entry(page, 0);
 			swp_pte = swp_entry_to_pte(entry);
-			if (pte_soft_dirty(pteval))
+
+			/*
+			 * pteval maps a zone device page and is therefore
+			 * a swap pte.
+			 */
+			if (pte_swp_soft_dirty(pteval))
 				swp_pte = pte_swp_mksoft_dirty(swp_pte);
-			if (pte_uffd_wp(pteval))
+			if (pte_swp_uffd_wp(pteval))
 				swp_pte = pte_swp_mkuffd_wp(swp_pte);
 			set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte);
 			/*
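
[Editor's note: for readers following along, below is a minimal standalone
userspace C sketch of the helper-selection pattern the patch applies.  It
is not kernel code: the pte_t type, the bit positions, and the helper
bodies are invented for illustration and only modeled on the kernel's pte
API names.  The point it demonstrates is that the soft-dirty bit sits at a
different position in present ptes than in swap ptes, so the test must
match the pte type.]

/*
 * Standalone model of the fix.  Bit positions are hypothetical; in the
 * kernel they are architecture-specific and differ between present
 * ptes and swap ptes, which is why distinct accessors exist.
 */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned long pte_t;

#define PTE_PRESENT        (1UL << 0)
#define PTE_SOFT_DIRTY     (1UL << 1)   /* modeled position for present ptes */
#define PTE_SWP_SOFT_DIRTY (1UL << 2)   /* different position for swap ptes */

static bool pte_present(pte_t pte)        { return pte & PTE_PRESENT; }
static bool pte_soft_dirty(pte_t pte)     { return pte & PTE_SOFT_DIRTY; }
static bool pte_swp_soft_dirty(pte_t pte) { return pte & PTE_SWP_SOFT_DIRTY; }

static pte_t pte_swp_mksoft_dirty(pte_t pte)
{
	return pte | PTE_SWP_SOFT_DIRTY;
}

/* The fixed pattern: test the bit with the helper matching the pte type. */
static pte_t copy_soft_dirty(pte_t old_pte, pte_t swp_pte)
{
	if (pte_present(old_pte)) {
		if (pte_soft_dirty(old_pte))
			swp_pte = pte_swp_mksoft_dirty(swp_pte);
	} else {
		if (pte_swp_soft_dirty(old_pte))
			swp_pte = pte_swp_mksoft_dirty(swp_pte);
	}
	return swp_pte;
}

int main(void)
{
	/* A non-present (swap) pte whose soft-dirty bit is set. */
	pte_t old_pte = PTE_SWP_SOFT_DIRTY;
	pte_t swp_pte = copy_soft_dirty(old_pte, 0);

	/*
	 * The buggy code called pte_soft_dirty() unconditionally, which
	 * reads the wrong bit for a swap pte and would drop the flag here.
	 */
	printf("soft-dirty preserved: %s\n",
	       pte_swp_soft_dirty(swp_pte) ? "yes" : "no");
	return 0;
}

[Compiled with any C compiler, this prints "soft-dirty preserved: yes";
calling the present-pte helper on the swap pte instead would print "no",
which is the class of bug the patch fixes.]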