From patchwork Mon Aug 17 15:17:39 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 266324
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Chuck Lever, Sasha Levin
Subject: [PATCH 4.19 128/168] svcrdma: Fix page leak in svc_rdma_recv_read_chunk()
Date: Mon, 17 Aug 2020 17:17:39 +0200
Message-Id: <20200817143740.079593346@linuxfoundation.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200817143733.692105228@linuxfoundation.org>
References: <20200817143733.692105228@linuxfoundation.org>
User-Agent: quilt/0.66
Sender: stable-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Chuck Lever

[ Upstream commit e814eecbe3bbeaa8b004d25a4b8974d232b765a9 ]

Commit 07d0ff3b0cd2 ("svcrdma: Clean up Read chunk path") moved the
page saver logic so that it gets executed even when an error occurs.
In that case, the I/O is never posted, and those pages are then
leaked. Errors in this path, however, are quite rare.
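As an illustration of the ordering problem described above, here is a
minimal, self-contained user-space sketch. It is not the kernel code
itself: fake_rqst, post_read_chunk() and save_io_pages() are invented
names standing in for svc_rqst, svc_rdma_post_chunk_ctxt() and the new
svc_rdma_save_io_pages() helper. It shows why clearing rq_pages[] must
happen only after the I/O has actually been posted: if the saver also
runs on the error path, the release code no longer sees the pages and
they leak.

/* Hypothetical, simplified model of the page-saver ordering; all names
 * here are invented for this example.
 */
#include <stdio.h>
#include <stdlib.h>

#define NUM_PAGES 4

struct fake_rqst {
	void *rq_pages[NUM_PAGES];   /* pages owned by the request */
};

/* Stand-in for posting the RDMA Read; nonzero argument simulates failure. */
static int post_read_chunk(int simulate_error)
{
	return simulate_error ? -1 : 0;
}

/* Hand page ownership over to the in-flight I/O by clearing rq_pages[]. */
static void save_io_pages(struct fake_rqst *rqstp, unsigned int num_pages)
{
	for (unsigned int i = 0; i < num_pages; i++)
		rqstp->rq_pages[i] = NULL;
}

/* Stand-in for svc_xprt_release(): frees whatever rq_pages[] still holds. */
static void release_pages(struct fake_rqst *rqstp)
{
	for (unsigned int i = 0; i < NUM_PAGES; i++)
		free(rqstp->rq_pages[i]);   /* free(NULL) is a no-op */
}

int main(void)
{
	struct fake_rqst rqst;

	for (unsigned int i = 0; i < NUM_PAGES; i++)
		rqst.rq_pages[i] = malloc(4096);

	/* Fixed ordering: detach the pages only after the post succeeds.
	 * Running save_io_pages() before this check (the pre-fix behaviour)
	 * would leave release_pages() with nothing to free on the error
	 * path, leaking all four allocations.
	 */
	if (post_read_chunk(1 /* simulate the rare error path */) < 0) {
		release_pages(&rqst);         /* pages still owned: no leak */
		fprintf(stderr, "post failed, pages released\n");
		return 1;
	}
	save_io_pages(&rqst, NUM_PAGES);
	release_pages(&rqst);                 /* I/O now owns the pages */
	return 0;
}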
Fixes: 07d0ff3b0cd2 ("svcrdma: Clean up Read chunk path")
Signed-off-by: Chuck Lever
Signed-off-by: Sasha Levin
---
 net/sunrpc/xprtrdma/svc_rdma_rw.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_rw.c b/net/sunrpc/xprtrdma/svc_rdma_rw.c
index 4fc0ce1270894..22f1352638151 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_rw.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_rw.c
@@ -679,7 +679,6 @@ static int svc_rdma_build_read_chunk(struct svc_rqst *rqstp,
 				     struct svc_rdma_read_info *info,
 				     __be32 *p)
 {
-	unsigned int i;
 	int ret;
 
 	ret = -EINVAL;
@@ -702,12 +701,6 @@ static int svc_rdma_build_read_chunk(struct svc_rqst *rqstp,
 		info->ri_chunklen += rs_length;
 	}
 
-	/* Pages under I/O have been copied to head->rc_pages.
-	 * Prevent their premature release by svc_xprt_release() .
-	 */
-	for (i = 0; i < info->ri_readctxt->rc_page_count; i++)
-		rqstp->rq_pages[i] = NULL;
-
 	return ret;
 }
 
@@ -802,6 +795,26 @@ static int svc_rdma_build_pz_read_chunk(struct svc_rqst *rqstp,
 	return ret;
 }
 
+/* Pages under I/O have been copied to head->rc_pages. Ensure they
+ * are not released by svc_xprt_release() until the I/O is complete.
+ *
+ * This has to be done after all Read WRs are constructed to properly
+ * handle a page that is part of I/O on behalf of two different RDMA
+ * segments.
+ *
+ * Do this only if I/O has been posted. Otherwise, we do indeed want
+ * svc_xprt_release() to clean things up properly.
+ */
+static void svc_rdma_save_io_pages(struct svc_rqst *rqstp,
+				   const unsigned int start,
+				   const unsigned int num_pages)
+{
+	unsigned int i;
+
+	for (i = start; i < num_pages + start; i++)
+		rqstp->rq_pages[i] = NULL;
+}
+
 /**
  * svc_rdma_recv_read_chunk - Pull a Read chunk from the client
  * @rdma: controlling RDMA transport
@@ -855,6 +868,7 @@ int svc_rdma_recv_read_chunk(struct svcxprt_rdma *rdma, struct svc_rqst *rqstp,
 	ret = svc_rdma_post_chunk_ctxt(&info->ri_cc);
 	if (ret < 0)
 		goto out_err;
+	svc_rdma_save_io_pages(rqstp, 0, head->rc_page_count);
 	return 0;
 
 out_err: