From patchwork Fri Aug 20 09:36:35 2021
From: Pavel Begunkov
To: Jens Axboe, io-uring@vger.kernel.org
Cc: stable@vger.kernel.org
Subject: [PATCH 1/3] io_uring: limit fixed table size by RLIMIT_NOFILE
Date: Fri, 20 Aug 2021 10:36:35 +0100

Limit the number of files in io_uring fixed tables by RLIMIT_NOFILE;
that is the first and simplest restriction we should impose.

Cc: stable@vger.kernel.org
Suggested-by: Jens Axboe
Signed-off-by: Pavel Begunkov
---
 fs/io_uring.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 30edc329d803..e6301d5d03a8 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7730,6 +7730,8 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
 		return -EINVAL;
 	if (nr_args > IORING_MAX_FIXED_FILES)
 		return -EMFILE;
+	if (nr_args > rlimit(RLIMIT_NOFILE))
+		return -EMFILE;
 	ret = io_rsrc_node_switch_start(ctx);
 	if (ret)
 		return ret;
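A minimal userspace sketch of what the new check means in practice, for illustration only (it is not part of the patch; it assumes liburing and a finite soft limit): registering a fixed file table with more entries than RLIMIT_NOFILE should now fail with -EMFILE.

/*
 * Illustrative only: try to register one more fixed-file slot than the
 * RLIMIT_NOFILE soft limit allows. Sparse entries (-1) are used so no real
 * descriptors are needed. Assumes liburing.
 */
#include <liburing.h>
#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	struct io_uring ring;
	struct rlimit rl;
	unsigned nr, i;
	int ret, *fds;

	if (getrlimit(RLIMIT_NOFILE, &rl) || rl.rlim_cur == RLIM_INFINITY)
		return 1;
	nr = rl.rlim_cur + 1;			/* one past the soft limit */

	fds = calloc(nr, sizeof(int));
	if (!fds)
		return 1;
	for (i = 0; i < nr; i++)
		fds[i] = -1;			/* sparse slot, no file attached */

	if (io_uring_queue_init(8, &ring, 0))
		return 1;
	ret = io_uring_register_files(&ring, fds, nr);
	printf("registering %u fixed files: %d (expect -EMFILE with this patch)\n",
	       nr, ret);

	io_uring_queue_exit(&ring);
	free(fds);
	return 0;
}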
From patchwork Fri Aug 20 09:36:36 2021
From: Pavel Begunkov
To: Jens Axboe, io-uring@vger.kernel.org
Cc: stable@vger.kernel.org
Subject: [PATCH 2/3] io_uring: place fixed tables under memcg limits
Date: Fri, 20 Aug 2021 10:36:36 +0100

Fixed tables may be quite large; place all of them, together with the
allocated tags, under memcg limits.

Cc: stable@vger.kernel.org
Signed-off-by: Pavel Begunkov
---
 fs/io_uring.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index e6301d5d03a8..976fc0509e4b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7135,14 +7135,14 @@ static void **io_alloc_page_table(size_t size)
 	size_t init_size = size;
 	void **table;
 
-	table = kcalloc(nr_tables, sizeof(*table), GFP_KERNEL);
+	table = kcalloc(nr_tables, sizeof(*table), GFP_KERNEL_ACCOUNT);
 	if (!table)
 		return NULL;
 
 	for (i = 0; i < nr_tables; i++) {
 		unsigned int this_size = min_t(size_t, size, PAGE_SIZE);
 
-		table[i] = kzalloc(this_size, GFP_KERNEL);
+		table[i] = kzalloc(this_size, GFP_KERNEL_ACCOUNT);
 		if (!table[i]) {
 			io_free_page_table(table, init_size);
 			return NULL;
@@ -7333,7 +7333,8 @@ static int io_rsrc_data_alloc(struct io_ring_ctx *ctx, rsrc_put_fn *do_put,
 
 static bool io_alloc_file_tables(struct io_file_table *table, unsigned nr_files)
 {
-	table->files = kvcalloc(nr_files, sizeof(table->files[0]), GFP_KERNEL);
+	table->files = kvcalloc(nr_files, sizeof(table->files[0]),
+				GFP_KERNEL_ACCOUNT);
 	return !!table->files;
 }
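GFP_KERNEL_ACCOUNT is GFP_KERNEL | __GFP_ACCOUNT, so memory allocated with it is charged to the allocating task's memory cgroup and counted against its limit. A minimal kernel-style sketch of the allocation pattern the patch switches to; io_demo_table_alloc() is a made-up helper used only for illustration.

/*
 * Illustrative sketch, not from the patch: the only point is the gfp flag.
 * With GFP_KERNEL_ACCOUNT the allocation is charged to the caller's memcg,
 * unlike plain GFP_KERNEL.
 */
#include <linux/mm.h>
#include <linux/slab.h>

static void *io_demo_table_alloc(size_t nr_entries, size_t entry_size)
{
	/* counted against the caller's memcg limit */
	return kvcalloc(nr_entries, entry_size, GFP_KERNEL_ACCOUNT);
}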
From patchwork Fri Aug 20 09:36:37 2021
From: Pavel Begunkov
To: Jens Axboe, io-uring@vger.kernel.org
Cc: stable@vger.kernel.org
Subject: [PATCH 3/3] io_uring: add ->splice_fd_in checks
Date: Fri, 20 Aug 2021 10:36:37 +0100

->splice_fd_in is used only by splice/tee, but no other request type
checks it for validity. Add the check to most request types, excluding
reads/writes/sends/recvs: we don't want the extra overhead for those,
and they can be left as they are until the field is actually used.
Cc: stable@vger.kernel.org
Signed-off-by: Pavel Begunkov
---
 fs/io_uring.c | 52 +++++++++++++++++++++++++++++----------------------
 1 file changed, 30 insertions(+), 22 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 976fc0509e4b..ff1a8c4e2881 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3502,7 +3502,7 @@ static int io_renameat_prep(struct io_kiocb *req,
 
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
-	if (sqe->ioprio || sqe->buf_index)
+	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
 		return -EINVAL;
 	if (unlikely(req->flags & REQ_F_FIXED_FILE))
 		return -EBADF;
@@ -3553,7 +3553,8 @@ static int io_unlinkat_prep(struct io_kiocb *req,
 
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
-	if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index)
+	if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index ||
+	    sqe->splice_fd_in)
 		return -EINVAL;
 	if (unlikely(req->flags & REQ_F_FIXED_FILE))
 		return -EBADF;
@@ -3599,8 +3600,8 @@ static int io_shutdown_prep(struct io_kiocb *req,
 #if defined(CONFIG_NET)
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
-	if (sqe->ioprio || sqe->off || sqe->addr || sqe->rw_flags ||
-	    sqe->buf_index)
+	if (unlikely(sqe->ioprio || sqe->off || sqe->addr || sqe->rw_flags ||
+		     sqe->buf_index || sqe->splice_fd_in))
 		return -EINVAL;
 
 	req->shutdown.how = READ_ONCE(sqe->len);
@@ -3748,7 +3749,8 @@ static int io_fsync_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 
 	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
-	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index))
+	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index ||
+		     sqe->splice_fd_in))
 		return -EINVAL;
 
 	req->sync.flags = READ_ONCE(sqe->fsync_flags);
@@ -3781,7 +3783,8 @@ static int io_fsync(struct io_kiocb *req, unsigned int issue_flags)
 static int io_fallocate_prep(struct io_kiocb *req,
 			     const struct io_uring_sqe *sqe)
 {
-	if (sqe->ioprio || sqe->buf_index || sqe->rw_flags)
+	if (sqe->ioprio || sqe->buf_index || sqe->rw_flags ||
+	    sqe->splice_fd_in)
 		return -EINVAL;
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
@@ -3814,7 +3817,7 @@ static int __io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe
 
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
-	if (unlikely(sqe->ioprio || sqe->buf_index))
+	if (unlikely(sqe->ioprio || sqe->buf_index || sqe->splice_fd_in))
 		return -EINVAL;
 	if (unlikely(req->flags & REQ_F_FIXED_FILE))
 		return -EBADF;
@@ -3933,7 +3936,8 @@ static int io_remove_buffers_prep(struct io_kiocb *req,
 	struct io_provide_buf *p = &req->pbuf;
 	u64 tmp;
 
-	if (sqe->ioprio || sqe->rw_flags || sqe->addr || sqe->len || sqe->off)
+	if (sqe->ioprio || sqe->rw_flags || sqe->addr || sqe->len || sqe->off ||
+	    sqe->splice_fd_in)
 		return -EINVAL;
 
 	tmp = READ_ONCE(sqe->fd);
@@ -4004,7 +4008,7 @@ static int io_provide_buffers_prep(struct io_kiocb *req,
 	struct io_provide_buf *p = &req->pbuf;
 	u64 tmp;
 
-	if (sqe->ioprio || sqe->rw_flags)
+	if (sqe->ioprio || sqe->rw_flags || sqe->splice_fd_in)
 		return -EINVAL;
 
 	tmp = READ_ONCE(sqe->fd);
@@ -4091,7 +4095,7 @@ static int io_epoll_ctl_prep(struct io_kiocb *req,
 			     const struct io_uring_sqe *sqe)
 {
 #if defined(CONFIG_EPOLL)
-	if (sqe->ioprio || sqe->buf_index)
+	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
 		return -EINVAL;
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
@@ -4137,7 +4141,7 @@ static int io_epoll_ctl(struct io_kiocb *req, unsigned int issue_flags)
 static int io_madvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
 #if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
-	if (sqe->ioprio || sqe->buf_index || sqe->off)
+	if (sqe->ioprio || sqe->buf_index || sqe->off || sqe->splice_fd_in)
 		return -EINVAL;
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
@@ -4172,7 +4176,7 @@ static int io_madvise(struct io_kiocb *req, unsigned int issue_flags)
 
 static int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
-	if (sqe->ioprio || sqe->buf_index || sqe->addr)
+	if (sqe->ioprio || sqe->buf_index || sqe->addr || sqe->splice_fd_in)
 		return -EINVAL;
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
@@ -4210,7 +4214,7 @@ static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
-	if (sqe->ioprio || sqe->buf_index)
+	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
 		return -EINVAL;
 	if (req->flags & REQ_F_FIXED_FILE)
 		return -EBADF;
@@ -4246,7 +4250,7 @@ static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
 	if (sqe->ioprio || sqe->off || sqe->addr || sqe->len ||
-	    sqe->rw_flags || sqe->buf_index)
+	    sqe->rw_flags || sqe->buf_index || sqe->splice_fd_in)
 		return -EINVAL;
 	if (req->flags & REQ_F_FIXED_FILE)
 		return -EBADF;
@@ -4307,7 +4311,8 @@ static int io_sfr_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 
 	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
-	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index))
+	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index ||
+		     sqe->splice_fd_in))
 		return -EINVAL;
 
 	req->sync.off = READ_ONCE(sqe->off);
@@ -4734,7 +4739,7 @@ static int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
-	if (sqe->ioprio || sqe->len || sqe->buf_index)
+	if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->splice_fd_in)
 		return -EINVAL;
 
 	accept->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
@@ -4782,7 +4787,8 @@ static int io_connect_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
-	if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->rw_flags)
+	if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->rw_flags ||
+	    sqe->splice_fd_in)
 		return -EINVAL;
 
 	conn->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
@@ -5377,7 +5383,7 @@ static int io_poll_update_prep(struct io_kiocb *req,
 
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
-	if (sqe->ioprio || sqe->buf_index)
+	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
 		return -EINVAL;
 	flags = READ_ONCE(sqe->len);
 	if (flags & ~(IORING_POLL_UPDATE_EVENTS | IORING_POLL_UPDATE_USER_DATA |
@@ -5616,7 +5622,7 @@ static int io_timeout_remove_prep(struct io_kiocb *req,
 		return -EINVAL;
 	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
 		return -EINVAL;
-	if (sqe->ioprio || sqe->buf_index || sqe->len)
+	if (sqe->ioprio || sqe->buf_index || sqe->len || sqe->splice_fd_in)
 		return -EINVAL;
 
 	tr->addr = READ_ONCE(sqe->addr);
@@ -5677,7 +5683,8 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
-	if (sqe->ioprio || sqe->buf_index || sqe->len != 1)
+	if (sqe->ioprio || sqe->buf_index || sqe->len != 1 ||
+	    sqe->splice_fd_in)
 		return -EINVAL;
 	if (off && is_timeout_link)
 		return -EINVAL;
@@ -5833,7 +5840,8 @@ static int io_async_cancel_prep(struct io_kiocb *req,
 		return -EINVAL;
 	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
 		return -EINVAL;
-	if (sqe->ioprio || sqe->off || sqe->len || sqe->cancel_flags)
+	if (sqe->ioprio || sqe->off || sqe->len || sqe->cancel_flags ||
+	    sqe->splice_fd_in)
 		return -EINVAL;
 
 	req->cancel.addr = READ_ONCE(sqe->addr);
@@ -5874,7 +5882,7 @@ static int io_rsrc_update_prep(struct io_kiocb *req,
 {
 	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
 		return -EINVAL;
-	if (sqe->ioprio || sqe->rw_flags)
+	if (sqe->ioprio || sqe->rw_flags || sqe->splice_fd_in)
 		return -EINVAL;
 
 	req->rsrc_update.offset = READ_ONCE(sqe->off);
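A minimal userspace sketch of the user-visible effect, for illustration only (not part of the series; assumes liburing): an otherwise valid request whose unused ->splice_fd_in field is non-zero should now complete with -EINVAL. IORING_OP_FSYNC is used here, since io_fsync_prep() is one of the handlers updated above.

/*
 * Illustrative only: fsync a regular file but leave garbage in splice_fd_in,
 * a field fsync does not use. With the patch the CQE result is expected to
 * be -EINVAL; before it, the field was silently ignored. Assumes liburing.
 */
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int fd;

	fd = open("splice_fd_in_demo.tmp", O_CREAT | O_WRONLY, 0600);
	if (fd < 0 || io_uring_queue_init(4, &ring, 0))
		return 1;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_fsync(sqe, fd, 0);
	sqe->splice_fd_in = 42;		/* garbage in an unused field */

	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);
	printf("fsync cqe res: %d (expect -EINVAL with this patch)\n", cqe->res);

	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	close(fd);
	return 0;
}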