From patchwork Mon Sep 13 13:15:34 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 510556
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jens Stutte,
 Christoph Hellwig, Guoqing Jiang, Song Liu
Subject: [PATCH 5.13 273/300] raid1: ensure write behind bio has less than
 BIO_MAX_VECS sectors
Date: Mon, 13 Sep 2021 15:15:34 +0200
Message-Id: <20210913131118.555848843@linuxfoundation.org>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210913131109.253835823@linuxfoundation.org>
References: <20210913131109.253835823@linuxfoundation.org>
User-Agent: quilt/0.66
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Guoqing Jiang

commit 6607cd319b6b91bff94e90f798a61c031650b514 upstream.

We can't split a write-behind bio with more than BIO_MAX_VECS sectors,
otherwise the call trace below is triggered, because we could allocate an
oversized write-behind bio later.

[    8.097936]  bvec_alloc+0x90/0xc0
[    8.098934]  bio_alloc_bioset+0x1b3/0x260
[    8.099959]  raid1_make_request+0x9ce/0xc50 [raid1]
[    8.100988]  ? __bio_clone_fast+0xa8/0xe0
[    8.102008]  md_handle_request+0x158/0x1d0 [md_mod]
[    8.103050]  md_submit_bio+0xcd/0x110 [md_mod]
[    8.104084]  submit_bio_noacct+0x139/0x530
[    8.105127]  submit_bio+0x78/0x1d0
[    8.106163]  ext4_io_submit+0x48/0x60 [ext4]
[    8.107242]  ext4_writepages+0x652/0x1170 [ext4]
[    8.108300]  ? do_writepages+0x41/0x100
[    8.109338]  ? __ext4_mark_inode_dirty+0x240/0x240 [ext4]
[    8.110406]  do_writepages+0x41/0x100
[    8.111450]  __filemap_fdatawrite_range+0xc5/0x100
[    8.112513]  file_write_and_wait_range+0x61/0xb0
[    8.113564]  ext4_sync_file+0x73/0x370 [ext4]
[    8.114607]  __x64_sys_fsync+0x33/0x60
[    8.115635]  do_syscall_64+0x33/0x40
[    8.116670]  entry_SYSCALL_64_after_hwframe+0x44/0xae

Thanks for the comment from Christoph.

[1]. https://bugs.archlinux.org/task/70992
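For a rough sense of the limit involved (the concrete values below are
assumptions, not stated in this mail): with BIO_MAX_VECS = 256, as in the
v5.13 tree, and 4 KiB pages, a write-behind bio is bounded at 256 pages,
i.e. 2048 512-byte sectors or 1 MiB. A minimal userspace sketch of that
arithmetic, not kernel code:

/*
 * Standalone sketch of the cap the patch relies on.  Assumed values:
 * BIO_MAX_VECS = 256 and 4 KiB pages.  alloc_behind_master_bio() copies the
 * payload one page per bio_vec, so a write-behind bio can describe at most
 * BIO_MAX_VECS pages.
 */
#include <stdio.h>

#define BIO_MAX_VECS 256UL   /* assumed value */
#define PAGE_SIZE    4096UL  /* assumed 4 KiB pages */

int main(void)
{
	unsigned long sectors_per_page = PAGE_SIZE >> 9;  /* 512-byte sectors */
	unsigned long cap = BIO_MAX_VECS * sectors_per_page;

	/* 256 pages * 8 sectors/page = 2048 sectors = 1 MiB */
	printf("write-behind cap: %lu sectors (%lu KiB)\n", cap, cap / 2);
	return 0;
}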
Cc: stable@vger.kernel.org # v5.12+
Reported-by: Jens Stutte
Tested-by: Jens Stutte
Reviewed-by: Christoph Hellwig
Signed-off-by: Guoqing Jiang
Signed-off-by: Song Liu
Signed-off-by: Greg Kroah-Hartman
---
 drivers/md/raid1.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1324,6 +1324,7 @@ static void raid1_write_request(struct m
 	struct raid1_plug_cb *plug = NULL;
 	int first_clone;
 	int max_sectors;
+	bool write_behind = false;
 
 	if (mddev_is_clustered(mddev) &&
 	     md_cluster_ops->area_resyncing(mddev, WRITE,
@@ -1376,6 +1377,15 @@ static void raid1_write_request(struct m
 	max_sectors = r1_bio->sectors;
 	for (i = 0;  i < disks; i++) {
 		struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
+
+		/*
+		 * The write-behind io is only attempted on drives marked as
+		 * write-mostly, which means we could allocate write behind
+		 * bio later.
+		 */
+		if (rdev && test_bit(WriteMostly, &rdev->flags))
+			write_behind = true;
+
 		if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
 			atomic_inc(&rdev->nr_pending);
 			blocked_rdev = rdev;
@@ -1449,6 +1459,15 @@ static void raid1_write_request(struct m
 		goto retry_write;
 	}
 
+	/*
+	 * When using a bitmap, we may call alloc_behind_master_bio below.
+	 * alloc_behind_master_bio allocates a copy of the data payload a page
+	 * at a time and thus needs a new bio that can fit the whole payload
+	 * this bio in page sized chunks.
+	 */
+	if (write_behind && bitmap)
+		max_sectors = min_t(int, max_sectors,
+				    BIO_MAX_VECS * (PAGE_SIZE >> 9));
 	if (max_sectors < bio_sectors(bio)) {
 		struct bio *split = bio_split(bio, max_sectors,
 					      GFP_NOIO, &conf->bio_split);
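Taken together, the hunks track whether any target device is write-mostly
and, when a bitmap is in use, clamp max_sectors before the existing
bio_split() path runs, so the later alloc_behind_master_bio() never needs
more than BIO_MAX_VECS bio_vecs.  A rough userspace sketch of that decision
follows; clamp_write_behind() and the example request size are made up for
illustration, only the min_t() arithmetic mirrors the patch, and
BIO_MAX_VECS/PAGE_SIZE are the same assumed values as above.

#include <stdio.h>

#define BIO_MAX_VECS 256UL   /* assumed value */
#define PAGE_SIZE    4096UL  /* assumed 4 KiB pages */

/* Mirrors: max_sectors = min_t(int, max_sectors, BIO_MAX_VECS * (PAGE_SIZE >> 9)) */
static unsigned long clamp_write_behind(unsigned long max_sectors,
					int write_behind, int bitmap)
{
	unsigned long cap = BIO_MAX_VECS * (PAGE_SIZE >> 9);

	if (write_behind && bitmap && max_sectors > cap)
		max_sectors = cap;
	return max_sectors;
}

int main(void)
{
	/* e.g. a 4 MiB write (8192 sectors) to a write-mostly mirror with a bitmap */
	unsigned long bio_sectors = 8192, max_sectors = bio_sectors;

	max_sectors = clamp_write_behind(max_sectors, 1, 1);
	if (max_sectors < bio_sectors)
		printf("split: submit %lu sectors now, resubmit the remaining %lu\n",
		       max_sectors, bio_sectors - max_sectors);
	return 0;
}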