Message ID | 018bb0de66981fa798da015a983e5bb6c41bae5b.1607293068.git.asml.silence@gmail.com |
---|---|
State | Accepted |
Commit | 59850d226e4907a6f37c1d2fe5ba97546a8691a4 |
Series | [5.10,1/5] io_uring: always let io_iopoll_complete() complete polled io. |
```diff
diff --git a/fs/io_uring.c b/fs/io_uring.c
index b1ba9a738315..f707caed9f79 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2246,7 +2246,7 @@ static unsigned io_cqring_events(struct io_ring_ctx *ctx, bool noflush)
 	 * we wake up the task, and the next invocation will flush the
 	 * entries. We cannot safely to it from here.
 	 */
-	if (noflush && !list_empty(&ctx->cq_overflow_list))
+	if (noflush)
 		return -1U;
 
 	io_cqring_overflow_flush(ctx, false, NULL, NULL);
```
Checking !list_empty(&ctx->cq_overflow_list) around noflush in io_cqring_events() is racy, because if it fails but a request overflows just after that, io_cqring_overflow_flush() will still be called. Remove the second check; it shouldn't be a problem for performance, because there is a cq_check_overflow bit check just above.

Cc: <stable@vger.kernel.org> # 5.5+
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)