From patchwork Mon Jan 30 21:26:50 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 648879
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim,
 Avri Altman, Adrian Hunter, Christoph Hellwig, Ming Lei, Bart Van Assche,
 Keith Busch
Subject: [PATCH v4 1/7] block: Introduce blk_mq_debugfs_init()
Date: Mon, 30 Jan 2023 13:26:50 -0800
Message-Id: <20230130212656.876311-2-bvanassche@acm.org>
In-Reply-To: <20230130212656.876311-1-bvanassche@acm.org>
References: <20230130212656.876311-1-bvanassche@acm.org>

Move the code for creating the block layer debugfs root directory into
blk-mq-debugfs.c. This patch prepares for adding more debugfs
initialization code by introducing the function blk_mq_debugfs_init().
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Keith Busch
Signed-off-by: Bart Van Assche
Reviewed-by: Luis Chamberlain
---
 block/blk-core.c       | 3 ++-
 block/blk-mq-debugfs.c | 5 +++++
 block/blk-mq-debugfs.h | 6 ++++++
 3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index ccf9a7683a3c..0dacc2df9588 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -45,6 +45,7 @@
 #include

 #include "blk.h"
+#include "blk-mq-debugfs.h"
 #include "blk-mq-sched.h"
 #include "blk-pm.h"
 #include "blk-cgroup.h"
@@ -1202,7 +1203,7 @@ int __init blk_dev_init(void)
 	blk_requestq_cachep = kmem_cache_create("request_queue",
 			sizeof(struct request_queue), 0, SLAB_PANIC, NULL);

-	blk_debugfs_root = debugfs_create_dir("block", NULL);
+	blk_mq_debugfs_init();

 	return 0;
 }
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index bd942341b638..60d1de0ce624 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -874,3 +874,8 @@ void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx)
 	debugfs_remove_recursive(hctx->sched_debugfs_dir);
 	hctx->sched_debugfs_dir = NULL;
 }
+
+void blk_mq_debugfs_init(void)
+{
+	blk_debugfs_root = debugfs_create_dir("block", NULL);
+}
diff --git a/block/blk-mq-debugfs.h b/block/blk-mq-debugfs.h
index 9c7d4b6117d4..7942119051f5 100644
--- a/block/blk-mq-debugfs.h
+++ b/block/blk-mq-debugfs.h
@@ -17,6 +17,8 @@ struct blk_mq_debugfs_attr {
 	const struct seq_operations *seq_ops;
 };

+void blk_mq_debugfs_init(void);
+
 int __blk_mq_debugfs_rq_show(struct seq_file *m, struct request *rq);
 int blk_mq_debugfs_rq_show(struct seq_file *m, void *v);

@@ -36,6 +38,10 @@ void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx);
 void blk_mq_debugfs_register_rqos(struct rq_qos *rqos);
 void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos);
 #else
+static inline void blk_mq_debugfs_init(void)
+{
+}
+
 static inline void blk_mq_debugfs_register(struct request_queue *q)
 {
 }
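For readers new to this pattern: the point of the patch is that one init
function owns creation of the debugfs root so that later patches can hang
additional files off it from blk-mq-debugfs.c. Below is a minimal,
self-contained sketch of the same pattern as a standalone module; the "demo"
names are hypothetical and the module is illustrative, not part of the series.

    /* Sketch of the init-owns-the-root debugfs pattern (hypothetical names). */
    #include <linux/debugfs.h>
    #include <linux/module.h>

    static struct dentry *demo_debugfs_root;	/* analogous to blk_debugfs_root */

    static int __init demo_init(void)
    {
    	/* Mirrors blk_mq_debugfs_init(): create the root once, early. */
    	demo_debugfs_root = debugfs_create_dir("demo", NULL);
    	return 0;
    }

    static void __exit demo_exit(void)
    {
    	debugfs_remove_recursive(demo_debugfs_root);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");

Later code then only needs the root dentry to add files, which is exactly what
the next patch in this series does.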
From patchwork Mon Jan 30 21:26:51 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 649572
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim,
 Avri Altman, Adrian Hunter, Christoph Hellwig, Ming Lei, Bart Van Assche,
 Keith Busch
Subject: [PATCH v4 2/7] block: Support configuring limits below the page size
Date: Mon, 30 Jan 2023 13:26:51 -0800
Message-Id: <20230130212656.876311-3-bvanassche@acm.org>
In-Reply-To: <20230130212656.876311-1-bvanassche@acm.org>
References: <20230130212656.876311-1-bvanassche@acm.org>

Allow block drivers to configure the following:
* A max_hw_sectors value smaller than PAGE_SIZE >> SECTOR_SHIFT. With
  PAGE_SIZE = 4096 this means that values below 8 are supported.
* A maximum segment size below the page size. This is most useful for
  page sizes above 4096 bytes.

The blk_sub_page_limits static branch will be used in later patches to
avoid affecting the performance of block drivers that support segment
sizes >= PAGE_SIZE and max_hw_sectors >= PAGE_SIZE >> SECTOR_SHIFT.
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Keith Busch
Signed-off-by: Bart Van Assche
---
 block/blk-core.c       |  1 +
 block/blk-mq-debugfs.c |  5 +++
 block/blk-settings.c   | 82 +++++++++++++++++++++++++++++++++++++-----
 block/blk.h            | 10 ++++++
 include/linux/blkdev.h |  2 ++
 5 files changed, 92 insertions(+), 8 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 0dacc2df9588..b193040c7c73 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -270,6 +270,7 @@ static void blk_free_queue(struct request_queue *q)
 	blk_free_queue_stats(q->stats);
 	kfree(q->poll_stat);

+	blk_disable_sub_page_limits(&q->limits);
 	if (queue_is_mq(q))
 		blk_mq_release(q);

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 60d1de0ce624..4f06e02961f3 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -875,7 +875,12 @@ void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx)
 	hctx->sched_debugfs_dir = NULL;
 }

+DEFINE_DEBUGFS_ATTRIBUTE(blk_sub_page_limit_queues_fops,
+			 blk_sub_page_limit_queues_get, NULL, "%llu\n");
+
 void blk_mq_debugfs_init(void)
 {
 	blk_debugfs_root = debugfs_create_dir("block", NULL);
+	debugfs_create_file("sub_page_limit_queues", 0400, blk_debugfs_root,
+			    NULL, &blk_sub_page_limit_queues_fops);
 }
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 9c9713c9269c..46d43cef8377 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -18,6 +18,11 @@
 #include "blk.h"
 #include "blk-wbt.h"

+/* Protects blk_nr_sub_page_limit_queues and blk_sub_page_limits changes. */
+static DEFINE_MUTEX(blk_sub_page_limit_lock);
+static uint32_t blk_nr_sub_page_limit_queues;
+DEFINE_STATIC_KEY_FALSE(blk_sub_page_limits);
+
 void blk_queue_rq_timeout(struct request_queue *q, unsigned int timeout)
 {
 	q->rq_timeout = timeout;
@@ -58,6 +63,7 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->zoned = BLK_ZONED_NONE;
 	lim->zone_write_granularity = 0;
 	lim->dma_alignment = 511;
+	lim->sub_page_limits = false;
 }

 /**
@@ -100,6 +106,55 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
 }
 EXPORT_SYMBOL(blk_queue_bounce_limit);

+/* For debugfs. */
+int blk_sub_page_limit_queues_get(void *data, u64 *val)
+{
+	*val = READ_ONCE(blk_nr_sub_page_limit_queues);
+
+	return 0;
+}
+
+/**
+ * blk_enable_sub_page_limits - enable support for max_segment_size values smaller than PAGE_SIZE and for max_hw_sectors values below PAGE_SIZE >> SECTOR_SHIFT
+ * @lim: request queue limits for which to enable support of these features.
+ *
+ * Support for these features is not enabled all the time because of the
+ * runtime overhead of these features.
+ */
+static void blk_enable_sub_page_limits(struct queue_limits *lim)
+{
+	if (lim->sub_page_limits)
+		return;
+
+	lim->sub_page_limits = true;
+
+	mutex_lock(&blk_sub_page_limit_lock);
+	if (++blk_nr_sub_page_limit_queues == 1)
+		static_branch_enable(&blk_sub_page_limits);
+	mutex_unlock(&blk_sub_page_limit_lock);
+}
+
+/**
+ * blk_disable_sub_page_limits - disable support for max_segment_size values smaller than PAGE_SIZE and for max_hw_sectors values below PAGE_SIZE >> SECTOR_SHIFT
+ * @lim: request queue limits for which to disable support of these features.
+ *
+ * Support for these features is not enabled all the time because of the
+ * runtime overhead of these features.
+ */
+void blk_disable_sub_page_limits(struct queue_limits *lim)
+{
+	if (!lim->sub_page_limits)
+		return;
+
+	lim->sub_page_limits = false;
+
+	mutex_lock(&blk_sub_page_limit_lock);
+	WARN_ON_ONCE(blk_nr_sub_page_limit_queues <= 0);
+	if (--blk_nr_sub_page_limit_queues == 0)
+		static_branch_disable(&blk_sub_page_limits);
+	mutex_unlock(&blk_sub_page_limit_lock);
+}
+
 /**
  * blk_queue_max_hw_sectors - set max sectors for a request for this queue
  * @q:  the request queue for the device
@@ -122,12 +177,17 @@ EXPORT_SYMBOL(blk_queue_bounce_limit);
 void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_sectors)
 {
 	struct queue_limits *limits = &q->limits;
+	unsigned int min_max_hw_sectors = PAGE_SIZE >> SECTOR_SHIFT;
 	unsigned int max_sectors;

-	if ((max_hw_sectors << 9) < PAGE_SIZE) {
-		max_hw_sectors = 1 << (PAGE_SHIFT - 9);
-		printk(KERN_INFO "%s: set to minimum %d\n",
-		       __func__, max_hw_sectors);
+	if (max_hw_sectors < min_max_hw_sectors) {
+		blk_enable_sub_page_limits(limits);
+		min_max_hw_sectors = 1;
+	}
+
+	if (max_hw_sectors < min_max_hw_sectors) {
+		max_hw_sectors = min_max_hw_sectors;
+		pr_info("%s: set to minimum %u\n", __func__, max_hw_sectors);
 	}

 	max_hw_sectors = round_down(max_hw_sectors,
@@ -282,10 +342,16 @@ EXPORT_SYMBOL_GPL(blk_queue_max_discard_segments);
  **/
 void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
 {
-	if (max_size < PAGE_SIZE) {
-		max_size = PAGE_SIZE;
-		printk(KERN_INFO "%s: set to minimum %d\n",
-		       __func__, max_size);
+	unsigned int min_max_segment_size = PAGE_SIZE;
+
+	if (max_size < min_max_segment_size) {
+		blk_enable_sub_page_limits(&q->limits);
+		min_max_segment_size = SECTOR_SIZE;
+	}
+
+	if (max_size < min_max_segment_size) {
+		max_size = min_max_segment_size;
+		pr_info("%s: set to minimum %u\n", __func__, max_size);
 	}

 	/* see blk_queue_virt_boundary() for the explanation */
diff --git a/block/blk.h b/block/blk.h
index 4c3b3325219a..9a56d7002efc 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -13,6 +13,7 @@ struct elevator_type;
 #define BLK_MAX_TIMEOUT		(5 * HZ)

 extern struct dentry *blk_debugfs_root;
+DECLARE_STATIC_KEY_FALSE(blk_sub_page_limits);

 struct blk_flush_queue {
 	unsigned int		flush_pending_idx:1;
@@ -32,6 +33,15 @@ struct blk_flush_queue *blk_alloc_flush_queue(int node, int cmd_size,
 					      gfp_t flags);
 void blk_free_flush_queue(struct blk_flush_queue *q);

+static inline bool blk_queue_sub_page_limits(const struct queue_limits *lim)
+{
+	return static_branch_unlikely(&blk_sub_page_limits) &&
+		lim->sub_page_limits;
+}
+
+int blk_sub_page_limit_queues_get(void *data, u64 *val);
+void blk_disable_sub_page_limits(struct queue_limits *q);
+
 void blk_freeze_queue(struct request_queue *q);
 void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
 void blk_queue_start_drain(struct request_queue *q);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b9637d63e6f0..af04bf241714 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -319,6 +319,8 @@ struct queue_limits {
 	 * due to possible offsets.
 	 */
 	unsigned int		dma_alignment;
+
+	bool			sub_page_limits;
 };

 typedef int (*report_zones_cb)(struct blk_zone *zone, unsigned int idx,
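The enable/disable pair above implements a reference-counted global switch:
the static branch only stays enabled while at least one queue uses sub-page
limits, so the common case pays no cost. A standalone user-space sketch of
that counting logic (a pthread mutex standing in for the kernel mutex and a
plain bool standing in for the static key; compile with -pthread):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned int nr_sub_page_queues;
    static bool sub_page_branch_enabled;	/* stand-in for the static key */

    static void enable_sub_page_limits(void)
    {
    	pthread_mutex_lock(&lock);
    	if (++nr_sub_page_queues == 1)	/* first user turns the branch on */
    		sub_page_branch_enabled = true;
    	pthread_mutex_unlock(&lock);
    }

    static void disable_sub_page_limits(void)
    {
    	pthread_mutex_lock(&lock);
    	if (--nr_sub_page_queues == 0)	/* last user turns it back off */
    		sub_page_branch_enabled = false;
    	pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
    	enable_sub_page_limits();	/* first queue: branch turns on */
    	enable_sub_page_limits();	/* second queue: count only */
    	disable_sub_page_limits();
    	printf("enabled=%d count=%u\n", sub_page_branch_enabled,
    	       nr_sub_page_queues);	/* enabled=1 count=1 */
    	disable_sub_page_limits();
    	printf("enabled=%d count=%u\n", sub_page_branch_enabled,
    	       nr_sub_page_queues);	/* enabled=0 count=0 */
    	return 0;
    }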
From patchwork Mon Jan 30 21:26:52 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 648878
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim,
 Avri Altman, Adrian Hunter, Christoph Hellwig, Ming Lei, Bart Van Assche,
 Keith Busch
Subject: [PATCH v4 3/7] block: Support submitting passthrough requests with
 small segments
Date: Mon, 30 Jan 2023 13:26:52 -0800
Message-Id: <20230130212656.876311-4-bvanassche@acm.org>
In-Reply-To: <20230130212656.876311-1-bvanassche@acm.org>
References: <20230130212656.876311-1-bvanassche@acm.org>

If the segment size is smaller than the page size there may be multiple
segments per bvec even if a bvec only contains a single page. Hence this
patch, which makes blk_rq_append_bio() count segments instead of bvecs.

Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Keith Busch
Signed-off-by: Bart Van Assche
---
 block/blk-map.c |  2 +-
 block/blk.h     | 18 ++++++++++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 9ee4be4ba2f1..eb059d3a1be2 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -534,7 +534,7 @@ int blk_rq_append_bio(struct request *rq, struct bio *bio)
 	unsigned int nr_segs = 0;

 	bio_for_each_bvec(bv, bio, iter)
-		nr_segs++;
+		nr_segs += blk_segments(&rq->q->limits, bv.bv_len);

 	if (!rq->bio) {
 		blk_rq_bio_prep(rq, bio, nr_segs);
diff --git a/block/blk.h b/block/blk.h
index 9a56d7002efc..b39938255d13 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -86,6 +86,24 @@ struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
 		gfp_t gfp_mask);
 void bvec_free(mempool_t *pool, struct bio_vec *bv, unsigned short nr_vecs);

+/* Number of DMA segments required to transfer @bytes data. */
+static inline unsigned int blk_segments(const struct queue_limits *limits,
+					unsigned int bytes)
+{
+	if (!blk_queue_sub_page_limits(limits))
+		return 1;
+
+	{
+		const unsigned int mss = limits->max_segment_size;
+
+		if (bytes <= mss)
+			return 1;
+		if (is_power_of_2(mss))
+			return round_up(bytes, mss) >> ilog2(mss);
+		return (bytes + mss - 1) / mss;
+	}
+}
+
 static inline bool biovec_phys_mergeable(struct request_queue *q,
 		struct bio_vec *vec1, struct bio_vec *vec2)
 {
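For a concrete feel for blk_segments(): with max_segment_size = 512, a single
4 KiB bvec counts as eight segments. A standalone re-implementation of the
same ceiling-division arithmetic, useful for checking the numbers outside the
kernel (illustrative only, not kernel code):

    #include <stdio.h>

    /* Segments needed for one bvec of @bytes with segment limit @mss. */
    static unsigned int segments(unsigned int mss, unsigned int bytes)
    {
    	if (bytes <= mss)
    		return 1;
    	/* same result as the power-of-two fast path in the patch */
    	return (bytes + mss - 1) / mss;
    }

    int main(void)
    {
    	printf("%u\n", segments(512, 4096));	/* 8: one page, 512-byte limit */
    	printf("%u\n", segments(4096, 4096));	/* 1: segment fits */
    	printf("%u\n", segments(512, 1000));	/* 2: partial last segment */
    	return 0;
    }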
From patchwork Mon Jan 30 21:26:53 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 649571
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim,
 Avri Altman, Adrian Hunter, Christoph Hellwig, Ming Lei, Bart Van Assche,
 Keith Busch
Subject: [PATCH v4 4/7] block: Add support for filesystem requests and small
 segments
Date: Mon, 30 Jan 2023 13:26:53 -0800
Message-Id: <20230130212656.876311-5-bvanassche@acm.org>
In-Reply-To: <20230130212656.876311-1-bvanassche@acm.org>
References: <20230130212656.876311-1-bvanassche@acm.org>

Add support for bios with segments smaller than the page size to the bio
splitting code and to the bio submission code.

Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Keith Busch
Signed-off-by: Bart Van Assche
---
 block/blk-merge.c |  7 +++++--
 block/blk-mq.c    |  2 ++
 block/blk.h       | 11 +++++------
 3 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index b7c193d67185..bf21475e8a13 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -294,7 +294,8 @@ static struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
 		if (nsegs < lim->max_segments &&
 		    bytes + bv.bv_len <= max_bytes &&
 		    bv.bv_offset + bv.bv_len <= PAGE_SIZE) {
-			nsegs++;
+			/* single-page bvec optimization */
+			nsegs += blk_segments(lim, bv.bv_len);
 			bytes += bv.bv_len;
 		} else {
 			if (bvec_split_segs(lim, &bv, &nsegs, &bytes,
@@ -543,7 +544,9 @@ static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
 			    __blk_segment_map_sg_merge(q, &bvec, &bvprv, sg))
 				goto next_bvec;

-			if (bvec.bv_offset + bvec.bv_len <= PAGE_SIZE)
+			if (bvec.bv_offset + bvec.bv_len <= PAGE_SIZE &&
+			    (!blk_queue_sub_page_limits(&q->limits) ||
+			     bvec.bv_len <= q->limits.max_segment_size))
 				nsegs += __blk_bvec_map_sg(bvec, sglist, sg);
 			else
 				nsegs += blk_bvec_map_sg(q, &bvec, sglist, sg);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9c8dc70020bc..a62b79e97a30 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2959,6 +2959,8 @@ void blk_mq_submit_bio(struct bio *bio)
 		bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
 		if (!bio)
 			return;
+	} else if (bio->bi_vcnt == 1) {
+		nr_segs = blk_segments(&q->limits, bio->bi_io_vec[0].bv_len);
 	}

 	if (!bio_integrity_prep(bio))
diff --git a/block/blk.h b/block/blk.h
index b39938255d13..5c248b80a218 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -333,13 +333,12 @@ static inline bool bio_may_exceed_limits(struct bio *bio,
 	}

 	/*
-	 * All drivers must accept single-segments bios that are <= PAGE_SIZE.
-	 * This is a quick and dirty check that relies on the fact that
-	 * bi_io_vec[0] is always valid if a bio has data. The check might
-	 * lead to occasional false negatives when bios are cloned, but compared
-	 * to the performance impact of cloned bios themselves the loop below
-	 * doesn't matter anyway.
+	 * Check whether bio splitting should be performed. This check may
+	 * trigger the bio splitting code even if splitting is not necessary.
 	 */
+	if (blk_queue_sub_page_limits(lim) &&
+	    bio->bi_io_vec && bio->bi_io_vec->bv_len > lim->max_segment_size)
+		return true;
 	return lim->chunk_sectors || bio->bi_vcnt != 1 ||
 		bio->bi_io_vec->bv_len + bio->bi_io_vec->bv_offset > PAGE_SIZE;
 }
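The new check in bio_may_exceed_limits() is the heart of this patch: a bio
whose single bvec fits in one page can still need splitting once
max_segment_size drops below the page size. A standalone sketch of just that
decision; the struct and function names here are simplified stand-ins for the
kernel's, not the real API:

    #include <stdbool.h>
    #include <stdio.h>

    struct limits {
    	unsigned int max_segment_size;
    	bool sub_page_limits;
    };

    static bool may_exceed_limits(const struct limits *lim, unsigned int bv_len)
    {
    	/* the check this patch adds */
    	if (lim->sub_page_limits && bv_len > lim->max_segment_size)
    		return true;
    	/* remaining single-bvec fast-path checks omitted for brevity */
    	return false;
    }

    int main(void)
    {
    	struct limits lim = { .max_segment_size = 512,
    			      .sub_page_limits = true };

    	printf("%d\n", may_exceed_limits(&lim, 4096));	/* 1: must split */
    	printf("%d\n", may_exceed_limits(&lim, 256));	/* 0: fast path */
    	return 0;
    }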
From patchwork Mon Jan 30 21:26:54 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 648877
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim,
 Avri Altman, Adrian Hunter, Christoph Hellwig, Ming Lei, Bart Van Assche,
 Keith Busch
Subject: [PATCH v4 5/7] block: Add support for small segments in
 blk_rq_map_user_iov()
Date: Mon, 30 Jan 2023 13:26:54 -0800
Message-Id: <20230130212656.876311-6-bvanassche@acm.org>
In-Reply-To: <20230130212656.876311-1-bvanassche@acm.org>
References: <20230130212656.876311-1-bvanassche@acm.org>

Before changing the return value of bio_add_hw_page() into a value in the
range [0, len], make blk_rq_map_user_iov() fall back to copying data if
mapping the data is not possible due to the segment limit.

Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Keith Busch
Signed-off-by: Bart Van Assche
---
 block/blk-map.c | 27 ++++++++++++++++++++++-----
 1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index eb059d3a1be2..b1dad4690472 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -307,17 +307,26 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		else {
 			for (j = 0; j < npages; j++) {
 				struct page *page = pages[j];
-				unsigned int n = PAGE_SIZE - offs;
+				unsigned int n = PAGE_SIZE - offs, added;
 				bool same_page = false;

 				if (n > bytes)
 					n = bytes;

-				if (!bio_add_hw_page(rq->q, bio, page, n, offs,
-						     max_sectors, &same_page)) {
+				added = bio_add_hw_page(rq->q, bio, page, n,
+						offs, max_sectors, &same_page);
+				if (added == 0) {
 					if (same_page)
 						put_page(page);
 					break;
+				} else if (added != n) {
+					/*
+					 * The segment size is smaller than the
+					 * page size and an iov exceeds the
+					 * segment size. Give up.
+					 */
+					ret = -EREMOTEIO;
+					goto out_unmap;
 				}

 				bytes -= n;
@@ -657,10 +666,18 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,

 	i = *iter;
 	do {
-		if (copy)
+		if (copy) {
 			ret = bio_copy_user_iov(rq, map_data, &i, gfp_mask);
-		else
+		} else {
 			ret = bio_map_user_iov(rq, &i, gfp_mask);
+			/*
+			 * Fall back to copying the data if bio_map_user_iov()
+			 * returns -EREMOTEIO.
+			 */
+			if (ret == -EREMOTEIO)
+				ret = bio_copy_user_iov(rq, map_data, &i,
+							gfp_mask);
+		}
 		if (ret)
 			goto unmap_rq;
 		if (!bio)
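The control flow added to blk_rq_map_user_iov() is a try-map-then-copy
fallback keyed on -EREMOTEIO: zero-copy mapping is attempted first, and only
when the segment limit makes mapping impossible is the data bounced through a
copy. A standalone sketch of that strategy; the map/copy functions below are
hypothetical stand-ins, not the kernel's:

    #include <errno.h>
    #include <stdio.h>

    /* Pretend mapping fails the way bio_map_user_iov() now can. */
    static int map_user_pages(int small_segments)
    {
    	return small_segments ? -EREMOTEIO : 0;
    }

    static int copy_user_pages(void)
    {
    	return 0;	/* copying always works, at the cost of a memcpy */
    }

    static int map_or_copy(int small_segments)
    {
    	int ret = map_user_pages(small_segments);

    	if (ret == -EREMOTEIO)	/* same fallback the patch adds */
    		ret = copy_user_pages();
    	return ret;
    }

    int main(void)
    {
    	printf("%d\n", map_or_copy(0));	/* 0: mapped zero-copy */
    	printf("%d\n", map_or_copy(1));	/* 0: copied after fallback */
    	return 0;
    }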
From patchwork Mon Jan 30 21:26:55 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 649570
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim,
 Avri Altman, Adrian Hunter, Christoph Hellwig, Ming Lei, Bart Van Assche,
 Doug Gilbert, Martin K. Petersen
Subject: [PATCH v4 6/7] scsi_debug: Support configuring the maximum segment
 size
Date: Mon, 30 Jan 2023 13:26:55 -0800
Message-Id: <20230130212656.876311-7-bvanassche@acm.org>
In-Reply-To: <20230130212656.876311-1-bvanassche@acm.org>
References: <20230130212656.876311-1-bvanassche@acm.org>

Add a kernel module parameter for configuring the maximum segment size.
This patch enables testing SCSI support for segments smaller than the
page size.

Cc: Doug Gilbert
Cc: Martin K. Petersen
Signed-off-by: Bart Van Assche
---
 drivers/scsi/scsi_debug.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 8553277effb3..603c9faac56f 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -752,6 +752,7 @@ static int sdebug_host_max_queue;	/* per host */
 static int sdebug_lowest_aligned = DEF_LOWEST_ALIGNED;
 static int sdebug_max_luns = DEF_MAX_LUNS;
 static int sdebug_max_queue = SDEBUG_CANQUEUE;	/* per submit queue */
+static unsigned int sdebug_max_segment_size = BLK_MAX_SEGMENT_SIZE;
 static unsigned int sdebug_medium_error_start = OPT_MEDIUM_ERR_ADDR;
 static int sdebug_medium_error_count = OPT_MEDIUM_ERR_NUM;
 static atomic_t retired_max_queue;	/* if > 0 then was prior max_queue */
@@ -5841,6 +5842,7 @@ module_param_named(lowest_aligned, sdebug_lowest_aligned, int, S_IRUGO);
 module_param_named(lun_format, sdebug_lun_am_i, int, S_IRUGO | S_IWUSR);
 module_param_named(max_luns, sdebug_max_luns, int, S_IRUGO | S_IWUSR);
 module_param_named(max_queue, sdebug_max_queue, int, S_IRUGO | S_IWUSR);
+module_param_named(max_segment_size, sdebug_max_segment_size, uint, S_IRUGO);
 module_param_named(medium_error_count, sdebug_medium_error_count, int,
 		   S_IRUGO | S_IWUSR);
 module_param_named(medium_error_start, sdebug_medium_error_start, int,
@@ -5917,6 +5919,7 @@ MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)");
 MODULE_PARM_DESC(lun_format, "LUN format: 0->peripheral (def); 1 --> flat address method");
 MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)");
 MODULE_PARM_DESC(max_queue, "max number of queued commands (1 to max(def))");
+MODULE_PARM_DESC(max_segment_size, "max bytes in a single segment");
 MODULE_PARM_DESC(medium_error_count, "count of sectors to return follow on MEDIUM error");
 MODULE_PARM_DESC(medium_error_start, "starting sector number to return MEDIUM error");
 MODULE_PARM_DESC(ndelay, "response delay in nanoseconds (def=0 -> ignore)");
@@ -7816,6 +7819,7 @@ static int sdebug_driver_probe(struct device *dev)

 	sdebug_driver_template.can_queue = sdebug_max_queue;
 	sdebug_driver_template.cmd_per_lun = sdebug_max_queue;
+	sdebug_driver_template.max_segment_size = sdebug_max_segment_size;
 	if (!sdebug_clustering)
 		sdebug_driver_template.dma_boundary = PAGE_SIZE - 1;
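A hypothetical (untested) way to exercise this parameter: load the driver
with "modprobe scsi_debug max_segment_size=512" and check that
/sys/block/<scsi_debug disk>/queue/max_segment_size reports 512; on a kernel
with PAGE_SIZE = 4096 the sub_page_limit_queues debugfs counter added by
patch 2 of this series should then be non-zero. The disk name depends on the
local setup.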
From patchwork Mon Jan 30 21:26:56 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 648876
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim,
 Avri Altman, Adrian Hunter, Christoph Hellwig, Ming Lei, Bart Van Assche,
 Damien Le Moal, Chaitanya Kulkarni
Subject: [PATCH v4 7/7] null_blk: Support configuring the maximum segment
 size
Date: Mon, 30 Jan 2023 13:26:56 -0800
Message-Id: <20230130212656.876311-8-bvanassche@acm.org>
In-Reply-To: <20230130212656.876311-1-bvanassche@acm.org>
References: <20230130212656.876311-1-bvanassche@acm.org>

Add support for configuring the maximum segment size, including segment
sizes smaller than the page size. This patch enables testing segments
smaller than the page size with a driver that does not call
blk_rq_map_sg().
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Damien Le Moal
Cc: Chaitanya Kulkarni
Signed-off-by: Bart Van Assche
---
 drivers/block/null_blk/main.c     | 19 ++++++++++++++++---
 drivers/block/null_blk/null_blk.h |  1 +
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 4c601ca9552a..06eaa7ff4a86 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -157,6 +157,10 @@ static int g_max_sectors;
 module_param_named(max_sectors, g_max_sectors, int, 0444);
 MODULE_PARM_DESC(max_sectors, "Maximum size of a command (in 512B sectors)");

+static unsigned int g_max_segment_size = BLK_MAX_SEGMENT_SIZE;
+module_param_named(max_segment_size, g_max_segment_size, int, 0444);
+MODULE_PARM_DESC(max_segment_size, "Maximum size of a segment in bytes");
+
 static unsigned int nr_devices = 1;
 module_param(nr_devices, uint, 0444);
 MODULE_PARM_DESC(nr_devices, "Number of devices to register");
@@ -409,6 +413,7 @@ NULLB_DEVICE_ATTR(home_node, uint, NULL);
 NULLB_DEVICE_ATTR(queue_mode, uint, NULL);
 NULLB_DEVICE_ATTR(blocksize, uint, NULL);
 NULLB_DEVICE_ATTR(max_sectors, uint, NULL);
+NULLB_DEVICE_ATTR(max_segment_size, uint, NULL);
 NULLB_DEVICE_ATTR(irqmode, uint, NULL);
 NULLB_DEVICE_ATTR(hw_queue_depth, uint, NULL);
 NULLB_DEVICE_ATTR(index, uint, NULL);
@@ -550,6 +555,7 @@ static struct configfs_attribute *nullb_device_attrs[] = {
 	&nullb_device_attr_queue_mode,
 	&nullb_device_attr_blocksize,
 	&nullb_device_attr_max_sectors,
+	&nullb_device_attr_max_segment_size,
 	&nullb_device_attr_irqmode,
 	&nullb_device_attr_hw_queue_depth,
 	&nullb_device_attr_index,
@@ -630,7 +636,8 @@ static ssize_t memb_group_features_show(struct config_item *item, char *page)
 	return snprintf(page, PAGE_SIZE,
 			"badblocks,blocking,blocksize,cache_size,"
 			"completion_nsec,discard,home_node,hw_queue_depth,"
-			"irqmode,max_sectors,mbps,memory_backed,no_sched,"
+			"irqmode,max_sectors,max_segment_size,mbps,"
+			"memory_backed,no_sched,"
 			"poll_queues,power,queue_mode,shared_tag_bitmap,size,"
 			"submit_queues,use_per_node_hctx,virt_boundary,zoned,"
 			"zone_capacity,zone_max_active,zone_max_open,"
@@ -693,6 +700,7 @@ static struct nullb_device *null_alloc_dev(void)
 	dev->queue_mode = g_queue_mode;
 	dev->blocksize = g_bs;
 	dev->max_sectors = g_max_sectors;
+	dev->max_segment_size = g_max_segment_size;
 	dev->irqmode = g_irqmode;
 	dev->hw_queue_depth = g_hw_queue_depth;
 	dev->blocking = g_blocking;
@@ -1234,6 +1242,8 @@ static int null_transfer(struct nullb *nullb, struct page *page,
 	unsigned int valid_len = len;
 	int err = 0;

+	WARN_ONCE(len > dev->max_segment_size, "%u > %u\n", len,
+		  dev->max_segment_size);
 	if (!is_write) {
 		if (dev->zoned)
 			valid_len = null_zone_valid_read_len(nullb,
@@ -1269,7 +1279,8 @@ static int null_handle_rq(struct nullb_cmd *cmd)

 	spin_lock_irq(&nullb->lock);
 	rq_for_each_segment(bvec, rq, iter) {
-		len = bvec.bv_len;
+		len = min(bvec.bv_len, nullb->dev->max_segment_size);
+		bvec.bv_len = len;
 		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
 				    op_is_write(req_op(rq)), sector,
 				    rq->cmd_flags & REQ_FUA);
@@ -1296,7 +1307,8 @@ static int null_handle_bio(struct nullb_cmd *cmd)

 	spin_lock_irq(&nullb->lock);
 	bio_for_each_segment(bvec, bio, iter) {
-		len = bvec.bv_len;
+		len = min(bvec.bv_len, nullb->dev->max_segment_size);
+		bvec.bv_len = len;
 		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
 				    op_is_write(bio_op(bio)), sector,
 				    bio->bi_opf & REQ_FUA);
@@ -2125,6 +2137,7 @@ static int null_add_dev(struct nullb_device *dev)
 		dev->max_sectors = queue_max_hw_sectors(nullb->q);
 	dev->max_sectors = min(dev->max_sectors, BLK_DEF_MAX_SECTORS);
 	blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
+	blk_queue_max_segment_size(nullb->q, dev->max_segment_size);

 	if (dev->virt_boundary)
 		blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index eb5972c50be8..8cb73fe05fa3 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -102,6 +102,7 @@ struct nullb_device {
 	unsigned int queue_mode; /* block interface */
 	unsigned int blocksize; /* block size */
 	unsigned int max_sectors; /* Max sectors per command */
+	unsigned int max_segment_size; /* Max size of a single DMA segment. */
 	unsigned int irqmode; /* IRQ completion handler */
 	unsigned int hw_queue_depth; /* queue depth */
 	unsigned int index; /* index of the disk, only valid with a disk */
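Why the min() clamp in null_handle_rq()/null_handle_bio() matters: once
max_segment_size is below the page size, one bvec can describe more bytes
than a single segment may carry, so per-segment processing has to bound each
step. A standalone sketch of that bounded stepping, illustrative only:

    #include <stdio.h>

    int main(void)
    {
    	unsigned int max_segment_size = 512;	/* sub-page segment limit */
    	unsigned int bv_len = 4096;		/* one 4 KiB bvec */
    	unsigned int done = 0, steps = 0;

    	while (done < bv_len) {
    		unsigned int len = bv_len - done;

    		if (len > max_segment_size)
    			len = max_segment_size;	/* the min() from the patch */
    		done += len;
    		steps++;
    	}
    	printf("processed %u bytes in %u steps of <= %u\n",
    	       done, steps, max_segment_size);	/* 4096 bytes, 8 steps */
    	return 0;
    }

A hypothetical (untested) way to exercise the driver itself would be
"modprobe null_blk max_segment_size=512" followed by direct I/O against the
resulting nullb device; the WARN_ONCE() added in null_transfer() should then
fire if any segment larger than the configured limit ever reaches the driver.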