From patchwork Mon Aug 17 15:16:54 2020
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 266044
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org,
    syzbot+9b260fc33297966f5a8e@syzkaller.appspotmail.com, Jens Axboe
Subject: [PATCH 5.8 461/464] io_uring: hold ctx reference around task_work queue + execute
Date: Mon, 17 Aug 2020 17:16:54 +0200
Message-Id: <20200817143855.860533637@linuxfoundation.org>
In-Reply-To: <20200817143833.737102804@linuxfoundation.org>
References: <20200817143833.737102804@linuxfoundation.org>
X-Mailer: git-send-email 2.28.0
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Jens Axboe

commit 6d816e088c359866f9867057e04f244c608c42fe upstream.

We're holding the request reference, but we need to go one higher
to ensure that the ctx remains valid after the request has finished.
If the ring is closed with pending task_work inflight, and the
given io_kiocb finishes sync during issue, then we need a reference
to the ring itself around the task_work execution cycle.
Cc: stable@vger.kernel.org # v5.7+
Reported-by: syzbot+9b260fc33297966f5a8e@syzkaller.appspotmail.com
Signed-off-by: Jens Axboe
Signed-off-by: Greg Kroah-Hartman
---
 fs/io_uring.c |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4140,6 +4140,8 @@ static int __io_async_wake(struct io_kio
 	tsk = req->task;
 	req->result = mask;
 	init_task_work(&req->task_work, func);
+	percpu_ref_get(&req->ctx->refs);
+
 	/*
 	 * If this fails, then the task is exiting. When a task exits, the
 	 * work gets canceled, so just cancel this request as well instead
@@ -4239,6 +4241,7 @@ static void io_poll_task_handler(struct
 static void io_poll_task_func(struct callback_head *cb)
 {
 	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+	struct io_ring_ctx *ctx = req->ctx;
 	struct io_kiocb *nxt = NULL;
 
 	io_poll_task_handler(req, &nxt);
@@ -4249,6 +4252,7 @@ static void io_poll_task_func(struct cal
 		__io_queue_sqe(nxt, NULL);
 		mutex_unlock(&ctx->uring_lock);
 	}
+	percpu_ref_put(&ctx->refs);
 }
 
 static int io_poll_double_wake(struct wait_queue_entry *wait, unsigned mode,
@@ -4365,6 +4369,7 @@ static void io_async_task_func(struct ca
 
 	if (io_poll_rewait(req, &apoll->poll)) {
 		spin_unlock_irq(&ctx->completion_lock);
+		percpu_ref_put(&ctx->refs);
 		return;
 	}
 
@@ -4402,6 +4407,7 @@ end_req:
 		req_set_fail_links(req);
 		io_double_put_req(req);
 	}
+	percpu_ref_put(&ctx->refs);
 }
 
 static int io_async_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
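
The pattern the patch applies is: the side that queues the task_work pins the
ring ctx with percpu_ref_get() before handing the work off, and every handler
that may run as that task_work drops the reference with percpu_ref_put() once
it is done, so the ctx cannot be torn down while a callback is still pending
or executing. Below is a rough, self-contained userspace sketch of the same
keep-alive idea, not kernel code and not part of the patch; it uses a plain
atomic counter and a pthread in place of percpu_ref and task_work, and all
names in it are illustrative.

/*
 * Illustrative sketch only: keep a shared context alive across a deferred
 * callback by taking a reference before queueing and dropping it when the
 * callback finishes.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct ctx {
	atomic_int refs;		/* stands in for ctx->refs (a percpu_ref) */
};

static void ctx_get(struct ctx *c)
{
	atomic_fetch_add(&c->refs, 1);
}

static void ctx_put(struct ctx *c)
{
	/* Whoever drops the last reference frees the context. */
	if (atomic_fetch_sub(&c->refs, 1) == 1) {
		printf("ctx freed\n");
		free(c);
	}
}

/* Stand-in for a task_work callback such as io_poll_task_func(). */
static void *task_work(void *arg)
{
	struct ctx *c = arg;

	printf("task_work runs, ctx still valid\n");
	ctx_put(c);			/* drop the reference taken before queueing */
	return NULL;
}

int main(void)
{
	struct ctx *c = calloc(1, sizeof(*c));
	pthread_t t;

	atomic_init(&c->refs, 1);	/* the "ring" owns the initial reference */

	ctx_get(c);			/* pin ctx before queueing, as __io_async_wake() now does */
	pthread_create(&t, NULL, task_work, c);

	ctx_put(c);			/* ring closes; ctx survives until task_work drops its ref */
	pthread_join(t, NULL);
	return 0;
}

Whichever side drops the last reference frees the context, which is why each
task_work handler in the patch puts the ctx reference only after it has
finished touching ctx.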