From patchwork Tue Aug 11 01:14:10 2020
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 266595
To: stable@vger.kernel.org
From: Jens Axboe
Subject: Stable inclusion request
Date: Mon, 10 Aug 2020 19:14:10 -0600

Hi,

Can we queue up a backport of:

commit 4c6e277c4cc4a6b3b2b9c66a7b014787ae757cc1
Author: Jens Axboe
Date:   Wed Jul 1 11:29:10 2020 -0600

    io_uring: abstract out task work running

for 5.7 and 5.8 stable? It fixes a reported issue from Dave Chinner,
since the abstraction also ensures that we always set the current task
state appropriately before running task work.

I've attached both a 5.8 and 5.7 port of the patch.

Thanks!

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 4e09af1d5d22..92bbbcff7777 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1692,6 +1692,17 @@ static int io_put_kbuf(struct io_kiocb *req)
 	return cflags;
 }
 
+static inline bool io_run_task_work(void)
+{
+	if (current->task_works) {
+		__set_current_state(TASK_RUNNING);
+		task_work_run();
+		return true;
+	}
+
+	return false;
+}
+
 static void io_iopoll_queue(struct list_head *again)
 {
 	struct io_kiocb *req;
@@ -1881,6 +1892,7 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
 		 */
 		if (!(++iters & 7)) {
 			mutex_unlock(&ctx->uring_lock);
+			io_run_task_work();
 			mutex_lock(&ctx->uring_lock);
 		}
 
@@ -4421,7 +4433,6 @@ static void io_async_task_func(struct callback_head *cb)
 		return;
 	}
 
-	__set_current_state(TASK_RUNNING);
 	if (io_sq_thread_acquire_mm(ctx, req)) {
 		io_cqring_add_event(req, -EFAULT);
 		goto end_req;
@@ -6153,8 +6164,7 @@ static int io_sq_thread(void *data)
 		if (!list_empty(&ctx->poll_list) || need_resched() ||
 		    (!time_after(jiffies, timeout) && ret != -EBUSY &&
 		    !percpu_ref_is_dying(&ctx->refs))) {
-			if (current->task_works)
-				task_work_run();
+			io_run_task_work();
 			cond_resched();
 			continue;
 		}
@@ -6186,8 +6196,7 @@ static int io_sq_thread(void *data)
 				finish_wait(&ctx->sqo_wait, &wait);
 				break;
 			}
-			if (current->task_works) {
-				task_work_run();
+			if (io_run_task_work()) {
 				finish_wait(&ctx->sqo_wait, &wait);
 				continue;
 			}
@@ -6211,8 +6220,7 @@ static int io_sq_thread(void *data)
 			timeout = jiffies + ctx->sq_thread_idle;
 		}
 
-		if (current->task_works)
-			task_work_run();
+		io_run_task_work();
 
 	set_fs(old_fs);
 	io_sq_thread_drop_mm(ctx);
@@ -6278,9 +6286,8 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 	do {
 		if (io_cqring_events(ctx, false) >= min_events)
 			return 0;
-		if (!current->task_works)
+		if (!io_run_task_work())
 			break;
-		task_work_run();
 	} while (1);
 
 	if (sig) {
@@ -6302,8 +6309,8 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 		prepare_to_wait_exclusive(&ctx->wait, &iowq.wq,
 						TASK_INTERRUPTIBLE);
 		/* make sure we run task_work before checking for signals */
-		if (current->task_works)
-			task_work_run();
+		if (io_run_task_work())
+			continue;
 		if (signal_pending(current)) {
 			if (current->jobctl & JOBCTL_TASK_WORK) {
 				spin_lock_irq(&current->sighand->siglock);
@@ -7691,8 +7698,7 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 	int submitted = 0;
 	struct fd f;
 
-	if (current->task_works)
-		task_work_run();
+	io_run_task_work();
 
 	if (flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP))
 		return -EINVAL;
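
For context on why the helper matters beyond deduplication: task work
callbacks may block, and blocking while the task state is not TASK_RUNNING
(e.g. TASK_INTERRUPTIBLE after prepare_to_wait*()) can trip the kernel's
!TASK_RUNNING might_sleep() check and risk missed wakeups. The sketch below
illustrates that wait-loop pattern. It is not part of the patch;
my_run_task_work() and wait_for_events() are made-up names standing in for
io_run_task_work() and the io_cqring_wait() loop in the diff above.

#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/sched/signal.h>
#include <linux/task_work.h>
#include <linux/wait.h>

/*
 * Illustrative sketch only, not code from the patch: same contract as the
 * patch's io_run_task_work(), i.e. restore TASK_RUNNING before running
 * task work, because the callbacks are allowed to block and schedule.
 */
static inline bool my_run_task_work(void)
{
	if (current->task_works) {
		__set_current_state(TASK_RUNNING);
		task_work_run();
		return true;
	}
	return false;
}

/*
 * Illustrative wait loop: prepare_to_wait() leaves us in TASK_INTERRUPTIBLE,
 * so pending task work must be flushed through the helper (which resets the
 * task state) before deciding whether to sleep.
 */
static int wait_for_events(wait_queue_head_t *wq, bool (*done)(void))
{
	DEFINE_WAIT(wait);
	int ret = 0;

	do {
		prepare_to_wait(wq, &wait, TASK_INTERRUPTIBLE);
		if (my_run_task_work())
			continue;
		if (signal_pending(current)) {
			ret = -ERESTARTSYS;
			break;
		}
		if (done())
			break;
		schedule();
	} while (1);
	finish_wait(wq, &wait);
	return ret;
}

In the patch itself, io_run_task_work() plays exactly this role at each call
site the diff touches: io_cqring_wait(), io_sq_thread(), io_uring_enter() and
the iopoll loop.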