From patchwork Thu Jan 28 04:47:26 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 373014
X-Mailing-List: linux-scsi@vger.kernel.org
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Cc: linux-scsi@vger.kernel.org, "Martin K. Petersen",
    linux-nvme@lists.infradead.org, Christoph Hellwig, Keith Busch,
    Chaitanya Kulkarni
Subject: [PATCH v4 1/8] block: document zone_append_max_bytes attribute
Date: Thu, 28 Jan 2021 13:47:26 +0900
Message-Id: <20210128044733.503606-2-damien.lemoal@wdc.com>
In-Reply-To: <20210128044733.503606-1-damien.lemoal@wdc.com>
References: <20210128044733.503606-1-damien.lemoal@wdc.com>

The description of the zone_append_max_bytes sysfs queue attribute is
missing from Documentation/block/queue-sysfs.rst. Add it.

Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Martin K. Petersen
Reviewed-by: Johannes Thumshirn
---
 Documentation/block/queue-sysfs.rst | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/Documentation/block/queue-sysfs.rst b/Documentation/block/queue-sysfs.rst
index 2638d3446b79..edc6e6960b96 100644
--- a/Documentation/block/queue-sysfs.rst
+++ b/Documentation/block/queue-sysfs.rst
@@ -261,6 +261,12 @@ For block drivers that support REQ_OP_WRITE_ZEROES, the maximum number of
 bytes that can be zeroed at once. The value 0 means that REQ_OP_WRITE_ZEROES
 is not supported.
 
+zone_append_max_bytes (RO)
+--------------------------
+This is the maximum number of bytes that can be written to a sequential
+zone of a zoned block device using a zone append write operation
+(REQ_OP_ZONE_APPEND). This value is always 0 for regular block devices.
+
 zoned (RO)
 ----------
 This indicates if the device is a zoned block device and the zone model of the
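
For context, the new attribute is consumed from user space like any other
read-only queue attribute. The program below is a minimal user-space sketch,
not part of the patch; the device name "nvme0n1" is an assumption made only
for the example.

/*
 * Minimal user-space sketch (not part of the patch): read the
 * zone_append_max_bytes queue attribute. The device name "nvme0n1"
 * is an assumption for the example.
 */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/block/nvme0n1/queue/zone_append_max_bytes";
	unsigned long long max_bytes;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%llu", &max_bytes) != 1) {
		fclose(f);
		return 1;
	}
	fclose(f);

	/* 0 means the device does not support REQ_OP_ZONE_APPEND. */
	printf("zone_append_max_bytes = %llu\n", max_bytes);
	return 0;
}
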
From patchwork Thu Jan 28 04:47:28 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 373013
X-Mailing-List: linux-scsi@vger.kernel.org
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Cc: linux-scsi@vger.kernel.org, "Martin K. Petersen",
    linux-nvme@lists.infradead.org, Christoph Hellwig, Keith Busch,
    Chaitanya Kulkarni
Subject: [PATCH v4 3/8] nullb: use blk_queue_set_zoned() to setup zoned devices
Date: Thu, 28 Jan 2021 13:47:28 +0900
Message-Id: <20210128044733.503606-4-damien.lemoal@wdc.com>
In-Reply-To: <20210128044733.503606-1-damien.lemoal@wdc.com>
References: <20210128044733.503606-1-damien.lemoal@wdc.com>

Use blk_queue_set_zoned() to set a nullb device zone model instead of
directly assigning the device queue zoned limit. The initialization of
the device zoned model, together with the setup of the queue flag
QUEUE_FLAG_ZONE_RESETALL and of the device queue elevator feature, is
moved from null_init_zoned_dev() to null_register_zoned_dev() so that
the queue limits are initialized when the gendisk of the nullb device
is available.
Signed-off-by: Damien Le Moal
Reviewed-by: Chaitanya Kulkarni
---
 drivers/block/null_blk/zoned.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index 148b871f263b..78cae8703dcf 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -146,10 +146,6 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 		sector += dev->zone_size_sects;
 	}
 
-	q->limits.zoned = BLK_ZONED_HM;
-	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
-	blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE);
-
 	return 0;
 }
 
@@ -158,6 +154,10 @@ int null_register_zoned_dev(struct nullb *nullb)
 	struct nullb_device *dev = nullb->dev;
 	struct request_queue *q = nullb->q;
 
+	blk_queue_set_zoned(nullb->disk, BLK_ZONED_HM);
+	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
+	blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE);
+
 	if (queue_is_mq(q)) {
 		int ret = blk_revalidate_disk_zones(nullb->disk, NULL);
 
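
The pattern this patch moves null_blk to is the usual one for zoned drivers:
set the zoned model and the related queue properties only once the gendisk
exists, that is, in the registration path rather than at device-structure
initialization. The fragment below sketches the shape of such a registration
path for a hypothetical driver; the mydrv_* names and fields are illustrative
and are not part of null_blk or of this patch.

/* Hypothetical zoned driver registration path (illustrative sketch only). */
static int mydrv_register_zoned(struct mydrv_dev *dev)
{
	struct request_queue *q = dev->disk->queue;

	/* The gendisk exists here, so the zoned model can be set through it. */
	blk_queue_set_zoned(dev->disk, BLK_ZONED_HM);
	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
	blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE);

	blk_queue_chunk_sectors(q, dev->zone_size_sects);
	blk_queue_max_zone_append_sectors(q, dev->zone_size_sects);

	/* Let the block layer build its zone bitmaps from the new settings. */
	return blk_revalidate_disk_zones(dev->disk, NULL);
}
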
From patchwork Thu Jan 28 04:47:30 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 373012
X-Mailing-List: linux-scsi@vger.kernel.org
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Cc: linux-scsi@vger.kernel.org, "Martin K. Petersen",
    linux-nvme@lists.infradead.org, Christoph Hellwig, Keith Busch,
    Chaitanya Kulkarni
Subject: [PATCH v4 5/8] block: introduce zone_write_granularity limit
Date: Thu, 28 Jan 2021 13:47:30 +0900
Message-Id: <20210128044733.503606-6-damien.lemoal@wdc.com>
In-Reply-To: <20210128044733.503606-1-damien.lemoal@wdc.com>
References: <20210128044733.503606-1-damien.lemoal@wdc.com>

Per the ZBC and ZAC specifications, all writes into the sequential write
required zones of a host-managed SMR hard-disk must be aligned to the
device physical block size. However, NVMe ZNS does not have this
constraint and allows write operations into sequential zones to be
aligned to the device logical block size. This inconsistency does not
help with software portability across device types.

To solve this, introduce the zone_write_granularity queue limit to
indicate the alignment constraint, in bytes, of write operations into
zones of a zoned block device. This new limit is exported as a read-only
sysfs queue attribute, and the helper blk_queue_zone_write_granularity()
is introduced for drivers to set this limit.

The function blk_queue_set_zoned() is modified to set this new limit to
the device logical block size by default. NVMe ZNS devices as well as
zoned nullb devices use this default value as-is. The scsi disk driver
is modified to call the blk_queue_zone_write_granularity() helper to set
the zone write granularity of host-managed SMR disks to the disk
physical block size.

The accessor functions queue_zone_write_granularity() and
bdev_zone_write_granularity() are also introduced.

Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
Reviewed-by: Martin K. Petersen
---
 Documentation/block/queue-sysfs.rst |  7 ++++++
 block/blk-settings.c                | 37 ++++++++++++++++++++++++++++-
 block/blk-sysfs.c                   |  8 +++++++
 drivers/scsi/sd_zbc.c               |  8 +++++++
 include/linux/blkdev.h              | 15 ++++++++++++
 5 files changed, 74 insertions(+), 1 deletion(-)

diff --git a/Documentation/block/queue-sysfs.rst b/Documentation/block/queue-sysfs.rst
index edc6e6960b96..4dc7f0d499a8 100644
--- a/Documentation/block/queue-sysfs.rst
+++ b/Documentation/block/queue-sysfs.rst
@@ -279,4 +279,11 @@ devices are described in the ZBC (Zoned Block Commands) and ZAC
 do not support zone commands, they will be treated as regular block devices
 and zoned will report "none".
 
+zone_write_granularity (RO)
+---------------------------
+This indicates the alignment constraint, in bytes, for write operations in
+sequential zones of zoned block devices (devices with a zoned attribute
+that reports "host-managed" or "host-aware"). This value is always 0 for
+regular block devices.
+
 Jens Axboe , February 2009
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 4c974340f1a9..a1e66165adcf 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -60,6 +60,7 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->io_opt = 0;
 	lim->misaligned = 0;
 	lim->zoned = BLK_ZONED_NONE;
+	lim->zone_write_granularity = 0;
 }
 EXPORT_SYMBOL(blk_set_default_limits);
 
@@ -366,6 +367,28 @@ void blk_queue_physical_block_size(struct request_queue *q, unsigned int size)
 }
 EXPORT_SYMBOL(blk_queue_physical_block_size);
 
+/**
+ * blk_queue_zone_write_granularity - set zone write granularity for the queue
+ * @q:  the request queue for the zoned device
+ * @size:  the zone write granularity size, in bytes
+ *
+ * Description:
+ *   This should be set to the lowest possible size allowing to write in
+ *   sequential zones of a zoned block device.
+ */
+void blk_queue_zone_write_granularity(struct request_queue *q,
+				      unsigned int size)
+{
+	if (WARN_ON_ONCE(!blk_queue_is_zoned(q)))
+		return;
+
+	q->limits.zone_write_granularity = size;
+
+	if (q->limits.zone_write_granularity < q->limits.logical_block_size)
+		q->limits.zone_write_granularity = q->limits.logical_block_size;
+}
+EXPORT_SYMBOL_GPL(blk_queue_zone_write_granularity);
+
 /**
  * blk_queue_alignment_offset - set physical block alignment offset
  * @q:	the request queue for the device
@@ -631,6 +654,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 			t->discard_granularity;
 	}
 
+	t->zone_write_granularity = max(t->zone_write_granularity,
+					b->zone_write_granularity);
 	t->zoned = max(t->zoned, b->zoned);
 	return ret;
 }
@@ -847,6 +872,8 @@ EXPORT_SYMBOL_GPL(blk_queue_can_use_dma_map_merging);
  */
 void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
 {
+	struct request_queue *q = disk->queue;
+
 	switch (model) {
 	case BLK_ZONED_HM:
 		/*
@@ -875,7 +902,15 @@ void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
 		break;
 	}
 
-	disk->queue->limits.zoned = model;
+	q->limits.zoned = model;
+	if (model != BLK_ZONED_NONE) {
+		/*
+		 * Set the zone write granularity to the device logical block
+		 * size by default. The driver can change this value if needed.
+		 */
+		blk_queue_zone_write_granularity(q,
+						queue_logical_block_size(q));
+	}
 }
 EXPORT_SYMBOL_GPL(blk_queue_set_zoned);
 
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index b513f1683af0..ae39c7f3d83d 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -219,6 +219,12 @@ static ssize_t queue_write_zeroes_max_show(struct request_queue *q, char *page)
 		(unsigned long long)q->limits.max_write_zeroes_sectors << 9);
 }
 
+static ssize_t queue_zone_write_granularity_show(struct request_queue *q,
+						 char *page)
+{
+	return queue_var_show(queue_zone_write_granularity(q), page);
+}
+
 static ssize_t queue_zone_append_max_show(struct request_queue *q, char *page)
 {
 	unsigned long long max_sectors = q->limits.max_zone_append_sectors;
@@ -585,6 +591,7 @@ QUEUE_RO_ENTRY(queue_discard_zeroes_data, "discard_zeroes_data");
 QUEUE_RO_ENTRY(queue_write_same_max, "write_same_max_bytes");
 QUEUE_RO_ENTRY(queue_write_zeroes_max, "write_zeroes_max_bytes");
 QUEUE_RO_ENTRY(queue_zone_append_max, "zone_append_max_bytes");
+QUEUE_RO_ENTRY(queue_zone_write_granularity, "zone_write_granularity");
 
 QUEUE_RO_ENTRY(queue_zoned, "zoned");
 QUEUE_RO_ENTRY(queue_nr_zones, "nr_zones");
@@ -639,6 +646,7 @@ static struct attribute *queue_attrs[] = {
 	&queue_write_same_max_entry.attr,
 	&queue_write_zeroes_max_entry.attr,
 	&queue_zone_append_max_entry.attr,
+	&queue_zone_write_granularity_entry.attr,
 	&queue_nonrot_entry.attr,
 	&queue_zoned_entry.attr,
 	&queue_nr_zones_entry.attr,
diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index cf07b7f93579..8293b29584b3 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -789,6 +789,14 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buf)
 	blk_queue_max_active_zones(q, 0);
 	nr_zones = round_up(sdkp->capacity, zone_blocks) >> ilog2(zone_blocks);
 
+	/*
+	 * Per ZBC and ZAC specifications, writes in sequential write required
+	 * zones of host-managed devices must be aligned to the device physical
+	 * block size.
+	 */
+	if (blk_queue_zoned_model(q) == BLK_ZONED_HM)
+		blk_queue_zone_write_granularity(q, sdkp->physical_block_size);
+
 	/* READ16/WRITE16 is mandatory for ZBC disks */
 	sdkp->device->use_16_for_rw = 1;
 	sdkp->device->use_10_for_rw = 0;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 0dea268bd61b..9149f4a5adb3 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -337,6 +337,7 @@ struct queue_limits {
 	unsigned int		max_zone_append_sectors;
 	unsigned int		discard_granularity;
 	unsigned int		discard_alignment;
+	unsigned int		zone_write_granularity;
 
 	unsigned short		max_segments;
 	unsigned short		max_integrity_segments;
@@ -1160,6 +1161,8 @@ extern void blk_queue_logical_block_size(struct request_queue *, unsigned int);
 extern void blk_queue_max_zone_append_sectors(struct request_queue *q,
 		unsigned int max_zone_append_sectors);
 extern void blk_queue_physical_block_size(struct request_queue *, unsigned int);
+void blk_queue_zone_write_granularity(struct request_queue *q,
+				      unsigned int size);
 extern void blk_queue_alignment_offset(struct request_queue *q,
 				       unsigned int alignment);
 void blk_queue_update_readahead(struct request_queue *q);
@@ -1473,6 +1476,18 @@ static inline int bdev_io_opt(struct block_device *bdev)
 	return queue_io_opt(bdev_get_queue(bdev));
 }
 
+static inline unsigned int
+queue_zone_write_granularity(const struct request_queue *q)
+{
+	return q->limits.zone_write_granularity;
+}
+
+static inline unsigned int
+bdev_zone_write_granularity(struct block_device *bdev)
+{
+	return queue_zone_write_granularity(bdev_get_queue(bdev));
+}
+
 static inline int queue_alignment_offset(const struct request_queue *q)
 {
 	if (q->limits.misaligned)
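
As a usage illustration, the new attribute lets applications pick a portable
write size for sequential zones without hard-coding the physical block size
of SMR disks. The program below is a minimal user-space sketch under
assumptions that are not part of the patch: the device name "sdb" and the
100 KiB request size are made up for the example, and a fallback to
logical_block_size is used for non-zoned devices.

/*
 * Minimal user-space sketch (not part of the patch): round an I/O size
 * up to the device zone_write_granularity before writing to a
 * sequential zone. The device name "sdb" and the 100 KiB request size
 * are assumptions for the example.
 */
#include <stdio.h>

static unsigned long read_queue_attr(const char *attr)
{
	char path[256];
	unsigned long val = 0;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/sdb/queue/%s", attr);
	f = fopen(path, "r");
	if (!f)
		return 0;
	if (fscanf(f, "%lu", &val) != 1)
		val = 0;
	fclose(f);
	return val;
}

int main(void)
{
	unsigned long gran = read_queue_attr("zone_write_granularity");
	unsigned long want = 100 * 1024;	/* requested I/O size */
	unsigned long aligned;

	/* 0 means a regular (non-zoned) device: fall back to the LBA size. */
	if (!gran)
		gran = read_queue_attr("logical_block_size");
	if (!gran)
		return 1;

	/* Round the request up to a multiple of the write granularity. */
	aligned = ((want + gran - 1) / gran) * gran;
	printf("granularity %lu bytes, writing %lu bytes per request\n",
	       gran, aligned);
	return 0;
}
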
From patchwork Thu Jan 28 04:47:32 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 373011
X-Mailing-List: linux-scsi@vger.kernel.org
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Cc: linux-scsi@vger.kernel.org, "Martin K. Petersen",
    linux-nvme@lists.infradead.org, Christoph Hellwig, Keith Busch,
    Chaitanya Kulkarni
Subject: [PATCH v4 7/8] block: introduce blk_queue_clear_zone_settings()
Date: Thu, 28 Jan 2021 13:47:32 +0900
Message-Id: <20210128044733.503606-8-damien.lemoal@wdc.com>
In-Reply-To: <20210128044733.503606-1-damien.lemoal@wdc.com>
References: <20210128044733.503606-1-damien.lemoal@wdc.com>

Introduce the internal function blk_queue_clear_zone_settings() to clean
up all limits and resources related to zoned block devices. This new
function is called from blk_queue_set_zoned() when a disk zoned model is
set to BLK_ZONED_NONE. This particular case can happen when a partition
is created on a host-aware SCSI disk.
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
---
 block/blk-settings.c |  2 ++
 block/blk-zoned.c    | 17 +++++++++++++++++
 block/blk.h          |  2 ++
 3 files changed, 21 insertions(+)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index a1e66165adcf..7dd8be314ac6 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -910,6 +910,8 @@ void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
 		 */
 		blk_queue_zone_write_granularity(q,
 						queue_logical_block_size(q));
+	} else {
+		blk_queue_clear_zone_settings(q);
 	}
 }
 EXPORT_SYMBOL_GPL(blk_queue_set_zoned);
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 7a68b6e4300c..833978c02e60 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -549,3 +549,20 @@ int blk_revalidate_disk_zones(struct gendisk *disk,
 	return ret;
 }
 EXPORT_SYMBOL_GPL(blk_revalidate_disk_zones);
+
+void blk_queue_clear_zone_settings(struct request_queue *q)
+{
+	blk_mq_freeze_queue(q);
+
+	blk_queue_free_zone_bitmaps(q);
+	blk_queue_flag_clear(QUEUE_FLAG_ZONE_RESETALL, q);
+	q->required_elevator_features &= ~ELEVATOR_F_ZBD_SEQ_WRITE;
+	q->nr_zones = 0;
+	q->max_open_zones = 0;
+	q->max_active_zones = 0;
+	q->limits.chunk_sectors = 0;
+	q->limits.zone_write_granularity = 0;
+	q->limits.max_zone_append_sectors = 0;
+
+	blk_mq_unfreeze_queue(q);
+}
diff --git a/block/blk.h b/block/blk.h
index 0198335c5838..977d79a0d99a 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -333,8 +333,10 @@ struct bio *blk_next_bio(struct bio *bio, unsigned int nr_pages, gfp_t gfp);
 
 #ifdef CONFIG_BLK_DEV_ZONED
 void blk_queue_free_zone_bitmaps(struct request_queue *q);
+void blk_queue_clear_zone_settings(struct request_queue *q);
 #else
 static inline void blk_queue_free_zone_bitmaps(struct request_queue *q) {}
+static inline void blk_queue_clear_zone_settings(struct request_queue *q) {}
 #endif
 
 int blk_alloc_devt(struct block_device *part, dev_t *devt);
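
A note on usage: drivers are not expected to call
blk_queue_clear_zone_settings() directly; it stays internal to the block
layer, and changing the model through blk_queue_set_zoned() is the intended
entry point. The fragment below is a hypothetical sketch of that pattern;
the mydrv_* naming is illustrative and not from any in-tree driver.

/*
 * Hypothetical sketch (illustrative only): when a device stops being
 * treated as zoned - e.g. a partition of a host-aware disk - setting
 * the model to BLK_ZONED_NONE via blk_queue_set_zoned() is enough; the
 * block layer then drops the zone bitmaps, limits, queue flag and
 * elevator feature through blk_queue_clear_zone_settings().
 */
static void mydrv_set_zoned_model(struct gendisk *disk, bool zoned)
{
	blk_queue_set_zoned(disk, zoned ? BLK_ZONED_HA : BLK_ZONED_NONE);
}
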