From patchwork Wed May 12 14:50:53 2021
X-Patchwork-Submitter: "gregkh@linuxfoundation.org"
X-Patchwork-Id: 436240
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org,
 Stefano Garzarella, "David S. Miller", Sasha Levin,
 syzbot+24452624fc4c571eedd9@syzkaller.appspotmail.com
Subject: [PATCH 5.12 607/677] vsock/virtio: free queued packets when closing socket
Date: Wed, 12 May 2021 16:50:53 +0200
Message-Id: <20210512144857.535068746@linuxfoundation.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210512144837.204217980@linuxfoundation.org>
References: <20210512144837.204217980@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Stefano Garzarella

[ Upstream commit 8432b8114957235f42e070a16118a7f750de9d39 ]

As reported by syzbot [1], there is a memory leak while closing the
socket. We partially solved this issue with commit ac03046ece2b
("vsock/virtio: free packets during the socket release"), but we forgot
to drain the RX queue when the socket is definitely closed by the
scheduled work.

To avoid future issues, let's use the new virtio_transport_remove_sock()
to drain the RX queue before removing the socket from the af_vsock lists
by calling vsock_remove_sock().

[1] https://syzkaller.appspot.com/bug?extid=24452624fc4c571eedd9

Fixes: ac03046ece2b ("vsock/virtio: free packets during the socket release")
Reported-and-tested-by: syzbot+24452624fc4c571eedd9@syzkaller.appspotmail.com
Signed-off-by: Stefano Garzarella
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin
---
 net/vmw_vsock/virtio_transport_common.c | 28 +++++++++++++++++--------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index e4370b1b7494..902cb6dd710b 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -733,6 +733,23 @@ static int virtio_transport_reset_no_sock(const struct virtio_transport *t,
 	return t->send_pkt(reply);
 }
 
+/* This function should be called with sk_lock held and SOCK_DONE set */
+static void virtio_transport_remove_sock(struct vsock_sock *vsk)
+{
+	struct virtio_vsock_sock *vvs = vsk->trans;
+	struct virtio_vsock_pkt *pkt, *tmp;
+
+	/* We don't need to take rx_lock, as the socket is closing and we are
+	 * removing it.
+	 */
+	list_for_each_entry_safe(pkt, tmp, &vvs->rx_queue, list) {
+		list_del(&pkt->list);
+		virtio_transport_free_pkt(pkt);
+	}
+
+	vsock_remove_sock(vsk);
+}
+
 static void virtio_transport_wait_close(struct sock *sk, long timeout)
 {
 	if (timeout) {
@@ -765,7 +782,7 @@ static void virtio_transport_do_close(struct vsock_sock *vsk,
 	    (!cancel_timeout || cancel_delayed_work(&vsk->close_work))) {
 		vsk->close_work_scheduled = false;
 
-		vsock_remove_sock(vsk);
+		virtio_transport_remove_sock(vsk);
 
 		/* Release refcnt obtained when we scheduled the timeout */
 		sock_put(sk);
@@ -828,22 +845,15 @@ static bool virtio_transport_close(struct vsock_sock *vsk)
 
 void virtio_transport_release(struct vsock_sock *vsk)
 {
-	struct virtio_vsock_sock *vvs = vsk->trans;
-	struct virtio_vsock_pkt *pkt, *tmp;
 	struct sock *sk = &vsk->sk;
 	bool remove_sock = true;
 
 	if (sk->sk_type == SOCK_STREAM)
 		remove_sock = virtio_transport_close(vsk);
 
-	list_for_each_entry_safe(pkt, tmp, &vvs->rx_queue, list) {
-		list_del(&pkt->list);
-		virtio_transport_free_pkt(pkt);
-	}
-
 	if (remove_sock) {
 		sock_set_flag(sk, SOCK_DONE);
-		vsock_remove_sock(vsk);
+		virtio_transport_remove_sock(vsk);
 	}
 }
 EXPORT_SYMBOL_GPL(virtio_transport_release);
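
---
Editor's note: for readers unfamiliar with the leak, packets the peer
sends land on vvs->rx_queue; before this patch, a socket closed by the
scheduled close work was removed without freeing whatever was still
queued there. Below is a minimal guest-side sketch of the trigger. It
is an illustration, not the syzbot reproducer (that is linked above);
the CID, the port, and the sleep() are assumed values, and it presumes
a host-side listener that writes data after accepting.

/*
 * Illustrative sketch only: connect, let the peer queue data we
 * never read, then close. On kernels without this fix, packets
 * still sitting on vvs->rx_queue at close time were leaked.
 */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/vm_sockets.h>

int main(void)
{
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = VMADDR_CID_HOST,	/* assumed peer: the host */
		.svm_port = 1234,		/* hypothetical port */
	};
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("vsock connect");
		return 1;
	}

	/*
	 * Give the peer time to send data we deliberately never read,
	 * so packets accumulate on the socket's RX queue, then close
	 * to exercise the release/close-work path.
	 */
	sleep(1);
	close(fd);
	return 0;
}

Looping such connect-then-close cycles against a peer that keeps
writing should make the leaked per-packet allocations visible under
kmemleak on unpatched kernels.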