
[v2,2/2] io_uring: ensure task_work gets run as part of cancelations

Message ID 89990fca-63d3-cbac-85cc-bce2818dd30e@kernel.dk
State Accepted
Commit 78a780602075d8b00c98070fa26e389b3b3efa72
Series [1/2] io_uring: check tctx->in_idle when decrementing inflight_tracked

Commit Message

Jens Axboe Dec. 9, 2021, 4:16 p.m. UTC
If we successfully cancel a work item but that work item needs to be
processed through task_work, then we can be sleeping uninterruptibly
in io_uring_cancel_generic() and never process it. Hence we don't
make forward progress and we end up with an uninterruptible sleep
warning.

Run any pending task_work once we have been added to the waitqueue, so
that deferred work queued by cancelations gets processed, and switch to
interruptible sleep so that subsequent task_work additions wake us up.

While in there, correct a comment that should be IFF, not IIF.

Reported-by: syzbot+21e6887c0be14181206d@syzkaller.appspotmail.com
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>

---

v2 - don't move prepare_to_wait(), it'll run into issues with locking
     etc, and we don't need to as the inflight tracking guards against
     missing a wakeup for a completion.

Patch

diff --git a/fs/io_uring.c b/fs/io_uring.c
index b4d5b8d168bf..111db33b940e 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -9826,7 +9826,7 @@ static __cold void io_uring_drop_tctx_refs(struct task_struct *task)
 
 /*
  * Find any io_uring ctx that this task has registered or done IO on, and cancel
- * requests. @sqd should be not-null IIF it's an SQPOLL thread cancellation.
+ * requests. @sqd should be not-null IFF it's an SQPOLL thread cancellation.
  */
 static __cold void io_uring_cancel_generic(bool cancel_all,
 					   struct io_sq_data *sqd)
@@ -9868,8 +9868,10 @@ static __cold void io_uring_cancel_generic(bool cancel_all,
 							     cancel_all);
 		}
 
-		prepare_to_wait(&tctx->wait, &wait, TASK_UNINTERRUPTIBLE);
+		prepare_to_wait(&tctx->wait, &wait, TASK_INTERRUPTIBLE);
+		io_run_task_work();
 		io_uring_drop_tctx_refs(current);
+
 		/*
 		 * If we've seen completions, retry without waiting. This
 		 * avoids a race where a completion comes in before we did