From patchwork Mon Nov 16 18:41:20 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 325481
Date: Mon, 16 Nov 2020 10:41:20 -0800
From: akpm@linux-foundation.org
To: aneesh.kumar@linux.ibm.com, dan.j.williams@intel.com, ira.weiny@intel.com,
 jgg@nvidia.com, jhubbard@nvidia.com, mm-commits@vger.kernel.org,
 stable@vger.kernel.org
Subject: [merged] mm-gup-use-unpin_user_pages-in-__gup_longterm_locked.patch
 removed from -mm tree
Message-ID: <20201116184120.Yl7fF-luC%akpm@linux-foundation.org>
User-Agent: s-nail v14.8.16
X-Mailing-List: stable@vger.kernel.org

The patch titled
     Subject: mm/gup: use unpin_user_pages() in __gup_longterm_locked()
has been removed from the -mm tree.  Its filename was
     mm-gup-use-unpin_user_pages-in-__gup_longterm_locked.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Jason Gunthorpe
Subject: mm/gup: use unpin_user_pages() in __gup_longterm_locked()

When FOLL_PIN is passed to __get_user_pages(), the page list must be put
back using unpin_user_pages(); otherwise the page pin reference persists
in a corrupted state.

There are two places in the unwind of __gup_longterm_locked() that put
the pages back without checking.  Normally on error this function would
return the partial page list, making this the caller's responsibility,
but in these two cases the caller is not allowed to see these pages at
all.
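For readers outside mm/gup: the pairing described above can be sketched as
follows.  This is an illustrative caller, not part of the patch; the
function name and error handling are hypothetical, while
pin_user_pages_fast()/unpin_user_pages() are the FOLL_PIN acquire/release
pair and put_page() is how plain get_user_pages() references are dropped.

#include <linux/mm.h>
#include <linux/slab.h>

/*
 * Hypothetical example only: pages acquired with FOLL_PIN must be
 * released with unpin_user_pages(), never a put_page() loop.
 */
static int pin_and_release_example(unsigned long start, int nr)
{
	struct page **pages;
	int pinned;

	pages = kcalloc(nr, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	/* Takes a FOLL_PIN reference on each page it returns. */
	pinned = pin_user_pages_fast(start, nr, FOLL_WRITE, pages);
	if (pinned > 0) {
		/* ... use the pages ... */

		/*
		 * Balance the pins.  Calling put_page() here instead would
		 * leave the pin accounting unbalanced, which is exactly the
		 * bug the patch below fixes in __gup_longterm_locked().
		 */
		unpin_user_pages(pages, pinned);
	}

	kfree(pages);
	return pinned < 0 ? pinned : 0;
}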
Link: https://lkml.kernel.org/r/0-v2-3ae7d9d162e2+2a7-gup_cma_fix_jgg@nvidia.com
Fixes: 3faa52c03f44 ("mm/gup: track FOLL_PIN pages")
Signed-off-by: Jason Gunthorpe
Reported-by: Ira Weiny
Reviewed-by: Ira Weiny
Reviewed-by: John Hubbard
Cc: Aneesh Kumar K.V
Cc: Dan Williams
Cc:
Signed-off-by: Andrew Morton
---

 mm/gup.c |   14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

--- a/mm/gup.c~mm-gup-use-unpin_user_pages-in-__gup_longterm_locked
+++ a/mm/gup.c
@@ -1647,8 +1647,11 @@ check_again:
 		/*
 		 * drop the above get_user_pages reference.
 		 */
-		for (i = 0; i < nr_pages; i++)
-			put_page(pages[i]);
+		if (gup_flags & FOLL_PIN)
+			unpin_user_pages(pages, nr_pages);
+		else
+			for (i = 0; i < nr_pages; i++)
+				put_page(pages[i]);
 
 		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
 			    (unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
@@ -1728,8 +1731,11 @@ static long __gup_longterm_locked(struct
 			goto out;
 
 		if (check_dax_vmas(vmas_tmp, rc)) {
-			for (i = 0; i < rc; i++)
-				put_page(pages[i]);
+			if (gup_flags & FOLL_PIN)
+				unpin_user_pages(pages, rc);
+			else
+				for (i = 0; i < rc; i++)
+					put_page(pages[i]);
 			rc = -EOPNOTSUPP;
 			goto out;
 		}
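Both hunks apply the same flag-aware release pattern.  A hypothetical
helper (not something this patch adds) makes the shared logic explicit:

/*
 * Sketch only, not part of the patch: consolidates the release logic
 * that the two hunks above duplicate.
 */
static void put_or_unpin_pages(struct page **pages, unsigned long nr_pages,
			       unsigned int gup_flags)
{
	unsigned long i;

	if (gup_flags & FOLL_PIN) {
		/* FOLL_PIN references are balanced by unpin_user_pages(). */
		unpin_user_pages(pages, nr_pages);
	} else {
		/* Plain get_user_pages() references are dropped one by one. */
		for (i = 0; i < nr_pages; i++)
			put_page(pages[i]);
	}
}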