From patchwork Sat Oct 31 11:35:41 2020
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 317431
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Boris Ostrovsky,
    Souptick Joarder, John Hubbard, Juergen Gross, David Vrabel
Subject: [PATCH 5.4 45/49] xen/gntdev.c: Mark pages as dirty
Date: Sat, 31 Oct 2020 12:35:41 +0100
Message-Id: <20201031113457.616778397@linuxfoundation.org>
In-Reply-To: <20201031113455.439684970@linuxfoundation.org>
References: <20201031113455.439684970@linuxfoundation.org>
User-Agent: quilt/0.66

From: Souptick Joarder

commit 779055842da5b2e508f3ccf9a8153cb1f704f566 upstream.

There seems to be a bug in the original code: when gntdev_get_page() is
called with writeable=true, the page needs to be marked dirty before
being put, but the code never does this.

To address this, add a bool writeable to struct gntdev_copy_batch, set
it in gntdev_grant_copy_seg() (and drop the `writeable` argument to
gntdev_get_page()), and then, based on batch->writeable, call
set_page_dirty_lock() when the pages are released.
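For background, the rule the fix enforces is the usual one for pages
pinned with get_user_pages_fast(): if a page is pinned with FOLL_WRITE
and then written to, it must be marked dirty before put_page() drops
the reference, otherwise the write can be lost when the page is
reclaimed. Below is a minimal sketch of that pattern; it is
illustrative only, the helper name is made up, and it is not part of
the patch (len is assumed to fit within one page):

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/string.h>

static int example_write_user_page(unsigned long addr, const void *src,
                                   size_t len)
{
        struct page *page;
        void *kaddr;
        int ret;

        /* Pin the target page with write intent (FOLL_WRITE). */
        ret = get_user_pages_fast(addr, 1, FOLL_WRITE, &page);
        if (ret != 1)
                return ret < 0 ? ret : -EFAULT;

        /* Write through a kernel mapping of the pinned page. */
        kaddr = kmap(page);
        memcpy(kaddr + offset_in_page(addr), src, len);
        kunmap(page);

        /*
         * The page contents changed, so mark the page dirty before
         * dropping the reference. set_page_dirty_lock() is used
         * because the page lock is not held here -- the same
         * reasoning as in gntdev_put_pages() after this fix.
         */
        if (!PageDirty(page))
                set_page_dirty_lock(page);
        put_page(page);

        return 0;
}

gntdev batches many pinned pages per ioctl, which is why the fix
records the write intent once in struct gntdev_copy_batch instead of
threading a writeable flag through every gntdev_get_page() call.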
Fixes: a4cdb556cae0 ("xen/gntdev: add ioctl for grant copy")
Suggested-by: Boris Ostrovsky
Signed-off-by: Souptick Joarder
Cc: John Hubbard
Cc: Boris Ostrovsky
Cc: Juergen Gross
Cc: David Vrabel
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/1599375114-32360-1-git-send-email-jrdr.linux@gmail.com
Reviewed-by: Boris Ostrovsky
Signed-off-by: Boris Ostrovsky
Signed-off-by: Greg Kroah-Hartman
---
 drivers/xen/gntdev.c |   17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -831,17 +831,18 @@ struct gntdev_copy_batch {
 	s16 __user *status[GNTDEV_COPY_BATCH];
 	unsigned int nr_ops;
 	unsigned int nr_pages;
+	bool writeable;
 };
 
 static int gntdev_get_page(struct gntdev_copy_batch *batch, void __user *virt,
-			   bool writeable, unsigned long *gfn)
+			   unsigned long *gfn)
 {
 	unsigned long addr = (unsigned long)virt;
 	struct page *page;
 	unsigned long xen_pfn;
 	int ret;
 
-	ret = get_user_pages_fast(addr, 1, writeable ? FOLL_WRITE : 0, &page);
+	ret = get_user_pages_fast(addr, 1, batch->writeable ? FOLL_WRITE : 0, &page);
 	if (ret < 0)
 		return ret;
 
@@ -857,9 +858,13 @@ static void gntdev_put_pages(struct gntd
 {
 	unsigned int i;
 
-	for (i = 0; i < batch->nr_pages; i++)
+	for (i = 0; i < batch->nr_pages; i++) {
+		if (batch->writeable && !PageDirty(batch->pages[i]))
+			set_page_dirty_lock(batch->pages[i]);
 		put_page(batch->pages[i]);
+	}
 	batch->nr_pages = 0;
+	batch->writeable = false;
 }
 
 static int gntdev_copy(struct gntdev_copy_batch *batch)
@@ -948,8 +953,9 @@ static int gntdev_grant_copy_seg(struct
 		virt = seg->source.virt + copied;
 		off = (unsigned long)virt & ~XEN_PAGE_MASK;
 		len = min(len, (size_t)XEN_PAGE_SIZE - off);
+		batch->writeable = false;
 
-		ret = gntdev_get_page(batch, virt, false, &gfn);
+		ret = gntdev_get_page(batch, virt, &gfn);
 		if (ret < 0)
 			return ret;
 
@@ -967,8 +973,9 @@ static int gntdev_grant_copy_seg(struct
 		virt = seg->dest.virt + copied;
 		off = (unsigned long)virt & ~XEN_PAGE_MASK;
 		len = min(len, (size_t)XEN_PAGE_SIZE - off);
+		batch->writeable = true;
 
-		ret = gntdev_get_page(batch, virt, true, &gfn);
+		ret = gntdev_get_page(batch, virt, &gfn);
 		if (ret < 0)
 			return ret;