
[5.17,145/146] io_uring: fix leaks on IOPOLL and CQE_SKIP

Message ID 20220426081754.135044907@linuxfoundation.org
State New

Commit Message

Greg KH April 26, 2022, 8:22 a.m. UTC
From: Pavel Begunkov <asml.silence@gmail.com>

commit c0713540f6d55c53dca65baaead55a5a8b20552d upstream.

If all completed requests in io_do_iopoll() were marked with
REQ_F_CQE_SKIP, we'll not only skip CQE posting but also skip
io_free_batch_list(), leaking memory and resources.

Move the @nr_events increment before the REQ_F_CQE_SKIP check. We'll
potentially return a value greater than the real one, but iopolling will
deal with it and userspace will re-iopoll if needed. In any case, I don't
think there are many use cases for REQ_F_CQE_SKIP + IOPOLL.

Fixes: 83a13a4181b0e ("io_uring: tweak iopoll CQE_SKIP event counting")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/5072fc8693fbfd595f89e5d4305bfcfd5d2f0a64.1650186611.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 fs/io_uring.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

Patch

--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2612,11 +2612,10 @@  static int io_do_iopoll(struct io_ring_c
 		/* order with io_complete_rw_iopoll(), e.g. ->result updates */
 		if (!smp_load_acquire(&req->iopoll_completed))
 			break;
+		nr_events++;
 		if (unlikely(req->flags & REQ_F_CQE_SKIP))
 			continue;
-
 		__io_fill_cqe(ctx, req->user_data, req->result, io_put_kbuf(req));
-		nr_events++;
 	}
 
 	if (unlikely(!nr_events))