From patchwork Thu Jun 29 06:25:58 2023
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 697865
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe, linux-nvme@lists.infradead.org,
    Christoph Hellwig, Keith Busch, linux-scsi@vger.kernel.org,
    "Martin K. Petersen"
Subject: [PATCH 1/5] scsi: sd_zbc: Set zone limits before revalidating zones
Date: Thu, 29 Jun 2023 15:25:58 +0900
Message-ID: <20230629062602.234913-2-dlemoal@kernel.org>
In-Reply-To: <20230629062602.234913-1-dlemoal@kernel.org>
References: <20230629062602.234913-1-dlemoal@kernel.org>

Call blk_queue_chunk_sectors() and blk_queue_max_zone_append_sectors() to set
a ZBC device's zone size and maximum zone append sector limit, respectively,
before executing blk_revalidate_disk_zones(), so that this function can check
the zone limits.

Since blk_queue_max_zone_append_sectors() already caps the device maximum zone
append limit to the zone size and to the maximum command size, the max_append
value passed to blk_queue_max_zone_append_sectors() is simplified to the
maximum number of segments times the number of sectors per page.
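The simplification is safe because the capping that sd_zbc_revalidate_zones()
used to do by hand is already done inside blk_queue_max_zone_append_sectors().
Roughly, as a simplified sketch of that helper's behavior as described above
(not its literal code):

        /*
         * Sketch: blk_queue_max_zone_append_sectors(q, max) clamps the passed
         * value against the maximum command size and the zone size, so the
         * caller only needs to pass the segment-based limit.
         */
        max = min(max, q->limits.max_hw_sectors);       /* max command size */
        max = min(max, q->limits.chunk_sectors);        /* zone size */
        q->limits.max_zone_append_sectors = max;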
Signed-off-by: Damien Le Moal
---
 drivers/scsi/sd_zbc.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index 22801c24ea19..a25215507668 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -831,7 +831,6 @@ int sd_zbc_revalidate_zones(struct scsi_disk *sdkp)
         struct request_queue *q = disk->queue;
         u32 zone_blocks = sdkp->early_zone_info.zone_blocks;
         unsigned int nr_zones = sdkp->early_zone_info.nr_zones;
-        u32 max_append;
         int ret = 0;
         unsigned int flags;
 
@@ -876,6 +875,11 @@ int sd_zbc_revalidate_zones(struct scsi_disk *sdkp)
                 goto unlock;
         }
 
+        blk_queue_chunk_sectors(q,
+                        logical_to_sectors(sdkp->device, zone_blocks));
+        blk_queue_max_zone_append_sectors(q,
+                        q->limits.max_segments << PAGE_SECTORS_SHIFT);
+
         ret = blk_revalidate_disk_zones(disk, sd_zbc_revalidate_zones_cb);
         memalloc_noio_restore(flags);
 
@@ -888,12 +892,6 @@ int sd_zbc_revalidate_zones(struct scsi_disk *sdkp)
                 goto unlock;
         }
 
-        max_append = min_t(u32, logical_to_sectors(sdkp->device, zone_blocks),
-                           q->limits.max_segments << (PAGE_SHIFT - 9));
-        max_append = min_t(u32, max_append, queue_max_hw_sectors(q));
-
-        blk_queue_max_zone_append_sectors(q, max_append);
-
         sd_zbc_print_zones(sdkp);
 
 unlock:

From patchwork Thu Jun 29 06:25:59 2023
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 698604
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe, linux-nvme@lists.infradead.org,
    Christoph Hellwig, Keith Busch, linux-scsi@vger.kernel.org,
    "Martin K. Petersen"
Petersen" Subject: [PATCH 2/5] nvme: zns: Set zone limits before revalidating zones Date: Thu, 29 Jun 2023 15:25:59 +0900 Message-ID: <20230629062602.234913-3-dlemoal@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230629062602.234913-1-dlemoal@kernel.org> References: <20230629062602.234913-1-dlemoal@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org In nvme_revalidate_zones(), call blk_queue_chunk_sectors() and blk_queue_max_zone_append_sectors() to respectively set a ZNS device zone size and maximum zone append sector limit before executing blk_revalidate_disk_zones() to allow this function to check zone limits. Signed-off-by: Damien Le Moal --- drivers/nvme/host/zns.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/drivers/nvme/host/zns.c b/drivers/nvme/host/zns.c index 12316ab51bda..ec8557810c21 100644 --- a/drivers/nvme/host/zns.c +++ b/drivers/nvme/host/zns.c @@ -10,12 +10,11 @@ int nvme_revalidate_zones(struct nvme_ns *ns) { struct request_queue *q = ns->queue; - int ret; - ret = blk_revalidate_disk_zones(ns->disk, NULL); - if (!ret) - blk_queue_max_zone_append_sectors(q, ns->ctrl->max_zone_append); - return ret; + blk_queue_chunk_sectors(q, ns->zsze); + blk_queue_max_zone_append_sectors(q, ns->ctrl->max_zone_append); + + return blk_revalidate_disk_zones(ns->disk, NULL); } static int nvme_set_max_append(struct nvme_ctrl *ctrl) From patchwork Thu Jun 29 06:26:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Damien Le Moal X-Patchwork-Id: 697864 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3D24FEB64D9 for ; Thu, 29 Jun 2023 06:26:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231901AbjF2G0S (ORCPT ); Thu, 29 Jun 2023 02:26:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39822 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231609AbjF2G0K (ORCPT ); Thu, 29 Jun 2023 02:26:10 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CFF702D62; Wed, 28 Jun 2023 23:26:08 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 6ED85614CF; Thu, 29 Jun 2023 06:26:08 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 12AAFC433C0; Thu, 29 Jun 2023 06:26:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1688019967; bh=iwBl29tNxXNIsNWLUVnJCMDPgmC2XkOQIOPgLE1r9Yg=; h=From:To:Subject:Date:In-Reply-To:References:From; b=OVwrYydYI/I2k+/del815xw2PgWv9avP8KzVHqLZqmoC52ikaWOvcejWOCbfbzsaY Ek4irfm8XwkLzBMKRQxBlUUvWTprGkRhjLIm2e83PzhwkOwTq1z+WxFh4N+AldBCK8 E36xBdoTlPuFm7ZKAic/7X6YU1oqAx5tCOaT3UN+tyo/+zk2k6/N+JytTB0v7dKobl G70LhFYbyPKns4gE4+xbUagWb4Tyv6Gz9x0jsLFpObtn+6WDw/GFzv5eSBU7I82z+S 4e1QvwEPrjV+/glTnIwOlgAmPOM//p2brOQCmhQOKOd+uaDUq8iyV3LsC0fj0qW6WC HzriTMtG5NQOA== From: Damien Le Moal To: linux-block@vger.kernel.org, Jens Axboe , linux-nvme@lists.infradead.org, Christoph Hellwig , Keith Busch 
Subject: [PATCH 3/5] block: nullblk: Set zone limits before revalidating zones
Date: Thu, 29 Jun 2023 15:26:00 +0900
Message-ID: <20230629062602.234913-4-dlemoal@kernel.org>
In-Reply-To: <20230629062602.234913-1-dlemoal@kernel.org>
References: <20230629062602.234913-1-dlemoal@kernel.org>

In null_register_zoned_dev(), call blk_queue_chunk_sectors() and
blk_queue_max_zone_append_sectors() to set the device zone size and maximum
zone append sector limit, respectively, before executing
blk_revalidate_disk_zones(), so that this function can check the zone limits.

Signed-off-by: Damien Le Moal
---
 drivers/block/null_blk/zoned.c | 21 ++++++++-------------
 1 file changed, 8 insertions(+), 13 deletions(-)

diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index 635ce0648133..84fe0d92087f 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -160,22 +160,17 @@ int null_register_zoned_dev(struct nullb *nullb)
         struct request_queue *q = nullb->q;
 
         disk_set_zoned(nullb->disk, BLK_ZONED_HM);
+        disk_set_max_open_zones(nullb->disk, dev->zone_max_open);
+        disk_set_max_active_zones(nullb->disk, dev->zone_max_active);
+
         blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
         blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE);
-
-        if (queue_is_mq(q)) {
-                int ret = blk_revalidate_disk_zones(nullb->disk, NULL);
-
-                if (ret)
-                        return ret;
-        } else {
-                blk_queue_chunk_sectors(q, dev->zone_size_sects);
-                nullb->disk->nr_zones = bdev_nr_zones(nullb->disk->part0);
-        }
-
+        blk_queue_chunk_sectors(q, dev->zone_size_sects);
         blk_queue_max_zone_append_sectors(q, dev->zone_size_sects);
-        disk_set_max_open_zones(nullb->disk, dev->zone_max_open);
-        disk_set_max_active_zones(nullb->disk, dev->zone_max_active);
+        nullb->disk->nr_zones = bdev_nr_zones(nullb->disk->part0);
+
+        if (queue_is_mq(q))
+                return blk_revalidate_disk_zones(nullb->disk, NULL);
 
         return 0;
 }

From patchwork Thu Jun 29 06:26:01 2023
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 698603
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe, linux-nvme@lists.infradead.org,
    Christoph Hellwig, Keith Busch, linux-scsi@vger.kernel.org,
    "Martin K. Petersen"
Subject: [PATCH 4/5] block: virtio_blk: Set zone limits before revalidating zones
Date: Thu, 29 Jun 2023 15:26:01 +0900
Message-ID: <20230629062602.234913-5-dlemoal@kernel.org>
In-Reply-To: <20230629062602.234913-1-dlemoal@kernel.org>
References: <20230629062602.234913-1-dlemoal@kernel.org>

In virtblk_probe_zoned_device(), call blk_queue_chunk_sectors() and
blk_queue_max_zone_append_sectors() to set the device zone size and maximum
zone append sector limit, respectively, before executing
blk_revalidate_disk_zones(), so that this function can check the zone limits.

Signed-off-by: Damien Le Moal
---
 drivers/block/virtio_blk.c | 35 ++++++++++++++++-------------------
 1 file changed, 16 insertions(+), 19 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index b47358da92a2..7d9c9f9d2ae9 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -751,7 +751,6 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev,
 {
         u32 v, wg;
         u8 model;
-        int ret;
 
         virtio_cread(vdev, struct virtio_blk_config, zoned.model, &model);
 
@@ -806,6 +805,7 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev,
                         vblk->zone_sectors);
                 return -ENODEV;
         }
+        blk_queue_chunk_sectors(q, vblk->zone_sectors);
         dev_dbg(&vdev->dev, "zone sectors = %u\n", vblk->zone_sectors);
 
         if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) {
@@ -814,26 +814,23 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev,
                 blk_queue_max_discard_sectors(q, 0);
         }
 
-        ret = blk_revalidate_disk_zones(vblk->disk, NULL);
-        if (!ret) {
-                virtio_cread(vdev, struct virtio_blk_config,
-                             zoned.max_append_sectors, &v);
-                if (!v) {
-                        dev_warn(&vdev->dev, "zero max_append_sectors reported\n");
-                        return -ENODEV;
-                }
-                if ((v << SECTOR_SHIFT) < wg) {
-                        dev_err(&vdev->dev,
-                                "write granularity %u exceeds max_append_sectors %u limit\n",
-                                wg, v);
-                        return -ENODEV;
-                }
-
-                blk_queue_max_zone_append_sectors(q, v);
-                dev_dbg(&vdev->dev, "max append sectors = %u\n", v);
+        virtio_cread(vdev, struct virtio_blk_config,
+                     zoned.max_append_sectors, &v);
+        if (!v) {
+                dev_warn(&vdev->dev, "zero max_append_sectors reported\n");
+                return -ENODEV;
+        }
+        if ((v << SECTOR_SHIFT) < wg) {
+                dev_err(&vdev->dev,
+                        "write granularity %u exceeds max_append_sectors %u limit\n",
+                        wg, v);
+                return -ENODEV;
         }
 
-        return ret;
+        blk_queue_max_zone_append_sectors(q, v);
+        dev_dbg(&vdev->dev, "max append sectors = %u\n", v);
+
+        return blk_revalidate_disk_zones(vblk->disk, NULL);
 }
 
 #else

From patchwork Thu Jun 29 06:26:02 2023
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 697863
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe, linux-nvme@lists.infradead.org,
    Christoph Hellwig, Keith Busch, linux-scsi@vger.kernel.org,
    "Martin K. Petersen"
Subject: [PATCH 5/5] block: improve checks in blk_revalidate_disk_zones()
Date: Thu, 29 Jun 2023 15:26:02 +0900
Message-ID: <20230629062602.234913-6-dlemoal@kernel.org>
In-Reply-To: <20230629062602.234913-1-dlemoal@kernel.org>
References: <20230629062602.234913-1-dlemoal@kernel.org>

Modify blk_revalidate_disk_zones() to improve the checks of a zoned block
device's zones and limits. In particular, make sure that the device driver
reported support for zone append operations by setting a non-zero
max_zone_append_sectors queue limit.

These changes rely on the constraint that, when blk_revalidate_disk_zones()
is called, the device driver has already set the device zone size
(chunk_sectors queue limit) and the max_zone_append_sectors queue limit. With
this assumption, the zone checks implemented in blk_revalidate_zone_cb() can
be improved, as the zone size and the total number of zones of the device are
already known and can be verified against the device zone report.
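In other words, the calling convention that this series establishes for blk-mq
zoned drivers can be sketched as follows (illustrative only: the driver
function name and its parameters are made up here, while the block layer
helpers are the ones used by the previous patches):

        /* Sketch of the driver-side ordering this patch relies on. */
        static int example_revalidate_zones(struct gendisk *disk,
                                            sector_t zone_sectors,
                                            unsigned int max_append)
        {
                struct request_queue *q = disk->queue;

                /* 1) Set the zone size (chunk_sectors limit)... */
                blk_queue_chunk_sectors(q, zone_sectors);
                /* 2) ...and a non-zero zone append limit... */
                blk_queue_max_zone_append_sectors(q, max_append);
                /* 3) ...then let the block layer validate the zone report. */
                return blk_revalidate_disk_zones(disk, NULL);
        }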
Signed-off-by: Damien Le Moal
---
 block/blk-zoned.c | 99 ++++++++++++++++++++++++++++++-----------------
 1 file changed, 63 insertions(+), 36 deletions(-)

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 0f9f97cdddd9..2807b4ada18b 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -442,7 +442,7 @@ struct blk_revalidate_zone_args {
         unsigned long *conv_zones_bitmap;
         unsigned long *seq_zones_wlock;
         unsigned int nr_zones;
-        sector_t zone_sectors;
+        unsigned int reported_zones;
         sector_t sector;
 };
 
@@ -456,35 +456,40 @@ static int blk_revalidate_zone_cb(struct blk_zone *zone, unsigned int idx,
         struct gendisk *disk = args->disk;
         struct request_queue *q = disk->queue;
         sector_t capacity = get_capacity(disk);
+        sector_t zone_sectors = q->limits.chunk_sectors;
+        unsigned int nr_zones = args->nr_zones;
+
+        /* Check that the device is not reporting too many zones */
+        args->reported_zones++;
+        if (args->reported_zones > nr_zones) {
+                pr_warn("%s: Too many zones reported\n", disk->disk_name);
+                return -ENODEV;
+        }
+
+        /* Check that the zone is valid and within the disk capacity */
+        if (!zone->len || zone->start + zone->len > capacity) {
+                pr_warn("%s: Invalid zone start %llu, len %llu\n",
+                        disk->disk_name, zone->start, zone->len);
+                return -ENODEV;
+        }
 
         /*
          * All zones must have the same size, with the exception on an eventual
          * smaller last zone.
          */
-        if (zone->start == 0) {
-                if (zone->len == 0 || !is_power_of_2(zone->len)) {
-                        pr_warn("%s: Invalid zoned device with non power of two zone size (%llu)\n",
-                                disk->disk_name, zone->len);
-                        return -ENODEV;
-                }
-
-                args->zone_sectors = zone->len;
-                args->nr_zones = (capacity + zone->len - 1) >> ilog2(zone->len);
-        } else if (zone->start + args->zone_sectors < capacity) {
-                if (zone->len != args->zone_sectors) {
+        if (zone->start + zone_sectors < capacity) {
+                if (zone->len != zone_sectors) {
                         pr_warn("%s: Invalid zoned device with non constant zone size\n",
                                 disk->disk_name);
                         return -ENODEV;
                 }
-        } else {
-                if (zone->len > args->zone_sectors) {
-                        pr_warn("%s: Invalid zoned device with larger last zone size\n",
-                                disk->disk_name);
-                        return -ENODEV;
-                }
+        } else if (zone->len > zone_sectors) {
+                pr_warn("%s: Invalid zoned device with larger last zone size\n",
+                        disk->disk_name);
+                return -ENODEV;
         }
 
-        /* Check for holes in the zone report */
+        /* Check for invalid zone start and holes in the zone report */
         if (zone->start != args->sector) {
                 pr_warn("%s: Zone gap at sectors %llu..%llu\n",
                         disk->disk_name, args->sector, zone->start);
@@ -496,7 +501,7 @@ static int blk_revalidate_zone_cb(struct blk_zone *zone, unsigned int idx,
         case BLK_ZONE_TYPE_CONVENTIONAL:
                 if (!args->conv_zones_bitmap) {
                         args->conv_zones_bitmap =
-                                blk_alloc_zone_bitmap(q->node, args->nr_zones);
+                                blk_alloc_zone_bitmap(q->node, nr_zones);
                         if (!args->conv_zones_bitmap)
                                 return -ENOMEM;
                 }
@@ -506,7 +511,7 @@ static int blk_revalidate_zone_cb(struct blk_zone *zone, unsigned int idx,
         case BLK_ZONE_TYPE_SEQWRITE_PREF:
                 if (!args->seq_zones_wlock) {
                         args->seq_zones_wlock =
-                                blk_alloc_zone_bitmap(q->node, args->nr_zones);
+                                blk_alloc_zone_bitmap(q->node, nr_zones);
                         if (!args->seq_zones_wlock)
                                 return -ENOMEM;
                 }
@@ -518,6 +523,7 @@ static int blk_revalidate_zone_cb(struct blk_zone *zone, unsigned int idx,
         }
 
         args->sector += zone->len;
+
         return 0;
 }
 
@@ -526,11 +532,13 @@ static int blk_revalidate_zone_cb(struct blk_zone *zone, unsigned int idx,
  * @disk: Target disk
  * @update_driver_data: Callback to update driver data on the frozen disk
  *
- * Helper function for low-level device drivers to (re) allocate and initialize
- * a disk request queue zone bitmaps. This functions should normally be called
- * within the disk ->revalidate method for blk-mq based drivers. For BIO based
- * drivers only q->nr_zones needs to be updated so that the sysfs exposed value
- * is correct.
+ * Helper function for low-level device drivers to check and (re) allocate and
+ * initialize a disk request queue zone bitmaps. This functions should normally
+ * be called within the disk ->revalidate method for blk-mq based drivers.
+ * Before calling this function, the device driver must already have set the
+ * device zone size (chunk_sector limit) and the max zone append limit.
+ * For BIO based drivers, this function cannot be used. BIO based device drivers
+ * only need to set disk->nr_zones so that the sysfs exposed value is correct.
  * If the @update_driver_data callback function is not NULL, the callback is
  * executed with the device request queue frozen after all zones have been
  * checked.
@@ -539,9 +547,9 @@ int blk_revalidate_disk_zones(struct gendisk *disk,
                               void (*update_driver_data)(struct gendisk *disk))
 {
         struct request_queue *q = disk->queue;
-        struct blk_revalidate_zone_args args = {
-                .disk = disk,
-        };
+        sector_t zone_sectors = q->limits.chunk_sectors;
+        sector_t capacity = get_capacity(disk);
+        struct blk_revalidate_zone_args args = { };
         unsigned int noio_flag;
         int ret;
 
@@ -550,13 +558,31 @@ int blk_revalidate_disk_zones(struct gendisk *disk,
         if (WARN_ON_ONCE(!queue_is_mq(q)))
                 return -EIO;
 
-        if (!get_capacity(disk))
-                return -EIO;
+        if (!capacity)
+                return -ENODEV;
+
+        /*
+         * Checks that the device driver indicated a valid zone size and that
+         * the max zone append limit is set.
+         */
+        if (!zone_sectors || !is_power_of_2(zone_sectors)) {
+                pr_warn("%s: Invalid non power of two zone size (%llu)\n",
+                        disk->disk_name, zone_sectors);
+                return -ENODEV;
+        }
+
+        if (!q->limits.max_zone_append_sectors) {
+                pr_warn("%s: Invalid 0 maximum zone append limit\n",
+                        disk->disk_name);
+                return -ENODEV;
+        }
 
         /*
          * Ensure that all memory allocations in this context are done as if
          * GFP_NOIO was specified.
          */
+        args.disk = disk;
+        args.nr_zones = (capacity + zone_sectors - 1) >> ilog2(zone_sectors);
         noio_flag = memalloc_noio_save();
         ret = disk->fops->report_zones(disk, 0, UINT_MAX,
                                        blk_revalidate_zone_cb, &args);
@@ -568,11 +594,13 @@ int blk_revalidate_disk_zones(struct gendisk *disk,
 
         /*
          * If zones where reported, make sure that the entire disk capacity
-         * has been checked.
+         * has been checked and that the total number of reported zones matches
+         * the number of zones of the device.
          */
-        if (ret > 0 && args.sector != get_capacity(disk)) {
-                pr_warn("%s: Missing zones from sector %llu\n",
-                        disk->disk_name, args.sector);
+        if (ret > 0 &&
+            (args.sector != capacity || args.reported_zones != args.nr_zones)) {
+                pr_warn("%s: Invalid zone report\n",
+                        disk->disk_name);
                 ret = -ENODEV;
         }
 
@@ -583,7 +611,6 @@ int blk_revalidate_disk_zones(struct gendisk *disk,
          */
         blk_mq_freeze_queue(q);
         if (ret > 0) {
-                blk_queue_chunk_sectors(q, args.zone_sectors);
                 disk->nr_zones = args.nr_zones;
                 swap(disk->seq_zones_wlock, args.seq_zones_wlock);
                 swap(disk->conv_zones_bitmap, args.conv_zones_bitmap);