From patchwork Mon Aug 9 14:29:28 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 493869
From: John Garry
Subject: [PATCH v2 01/11] blk-mq: Change rqs check in blk_mq_free_rqs()
Date: Mon, 9 Aug 2021 22:29:28 +0800
Message-ID: <1628519378-211232-2-git-send-email-john.garry@huawei.com>
In-Reply-To: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
References: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
List-ID: linux-scsi@vger.kernel.org

The original code in commit 24d2f90309b23 ("blk-mq: split out tag
initialization, support shared tags") checked that tags->rqs was
non-NULL and then dereferenced tags->rqs[]. Then, in commit 2af8cbe30531
("blk-mq: split tag ->rqs[] into two"), we started to dereference
tags->static_rqs[] but continued to check tags->rqs for non-NULL.

Check that tags->static_rqs is non-NULL instead, which is more logical.

Signed-off-by: John Garry
Reviewed-by: Ming Lei
---
 block/blk-mq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--
2.26.2

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2c4ac51e54eb..ae28f470893c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2348,7 +2348,7 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 {
 	struct page *page;

-	if (tags->rqs && set->ops->exit_request) {
+	if (tags->static_rqs && set->ops->exit_request) {
 		int i;

 		for (i = 0; i < tags->nr_tags; i++) {

From patchwork Mon Aug 9 14:29:29 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 493872
From: John Garry
Subject: [PATCH v2 02/11] block: Rename BLKDEV_MAX_RQ -> BLKDEV_DEFAULT_RQ
Date: Mon, 9 Aug 2021 22:29:29 +0800
Message-ID: <1628519378-211232-3-git-send-email-john.garry@huawei.com>
In-Reply-To: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
References: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
List-ID: linux-scsi@vger.kernel.org

It is a bit confusing that there is both BLKDEV_MAX_RQ and MAX_SCHED_RQ,
as the name BLKDEV_MAX_RQ implies that it is always the maximum number of
requests, which it is not.

Rename BLKDEV_MAX_RQ to BLKDEV_DEFAULT_RQ, matching its usage - that
being the default number of requests assigned when allocating a request
queue.

Signed-off-by: John Garry
Reviewed-by: Ming Lei
---
 block/blk-core.c       | 2 +-
 block/blk-mq-sched.c   | 2 +-
 block/blk-mq-sched.h   | 2 +-
 drivers/block/rbd.c    | 2 +-
 include/linux/blkdev.h | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)
--
2.26.2

diff --git a/block/blk-core.c b/block/blk-core.c
index 04477697ee4b..5d71382b6131 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -579,7 +579,7 @@ struct request_queue *blk_alloc_queue(int node_id)
 	blk_queue_dma_alignment(q, 511);
 	blk_set_default_limits(&q->limits);

-	q->nr_requests = BLKDEV_MAX_RQ;
+	q->nr_requests = BLKDEV_DEFAULT_RQ;

 	return q;

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 0f006cabfd91..2231fb0d4c35 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -606,7 +606,7 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
 	 * Additionally, this is a per-hw queue depth.
	 */
 	q->nr_requests = 2 * min_t(unsigned int, q->tag_set->queue_depth,
-				   BLKDEV_MAX_RQ);
+				   BLKDEV_DEFAULT_RQ);

 	queue_for_each_hw_ctx(q, hctx, i) {
 		ret = blk_mq_sched_alloc_tags(q, hctx, i);

diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 5246ae040704..1e46be6c5178 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -5,7 +5,7 @@
 #include "blk-mq.h"
 #include "blk-mq-tag.h"

-#define MAX_SCHED_RQ (16 * BLKDEV_MAX_RQ)
+#define MAX_SCHED_RQ (16 * BLKDEV_DEFAULT_RQ)

 void blk_mq_sched_assign_ioc(struct request *rq);

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 6d596c2c2cd6..8bae60f6fa75 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -836,7 +836,7 @@ struct rbd_options {
 	u32 alloc_hint_flags;  /* CEPH_OSD_OP_ALLOC_HINT_FLAG_* */
 };

-#define RBD_QUEUE_DEPTH_DEFAULT BLKDEV_MAX_RQ
+#define RBD_QUEUE_DEPTH_DEFAULT BLKDEV_DEFAULT_RQ
 #define RBD_ALLOC_SIZE_DEFAULT (64 * 1024)
 #define RBD_LOCK_TIMEOUT_DEFAULT 0  /* no timeout */
 #define RBD_READ_ONLY_DEFAULT false

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b5c033cf5f26..56870a43ae4c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -45,7 +45,7 @@ struct blk_stat_callback;
 struct blk_keyslot_manager;

 #define BLKDEV_MIN_RQ	4
-#define BLKDEV_MAX_RQ	128	/* Default maximum */
+#define BLKDEV_DEFAULT_RQ	128

 /* Must be consistent with blk_mq_poll_stats_bkt() */
 #define BLK_MQ_POLL_STATS_BKTS		16

From patchwork Mon Aug 9 14:29:30 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 493870
From: John Garry
Subject: [PATCH v2 03/11] blk-mq: Relocate shared sbitmap resize in blk_mq_update_nr_requests()
Date: Mon, 9 Aug 2021 22:29:30 +0800
Message-ID: <1628519378-211232-4-git-send-email-john.garry@huawei.com>
In-Reply-To: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
References: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
List-ID: linux-scsi@vger.kernel.org

For shared sbitmap, if the call to blk_mq_tag_update_depth() succeeds for
any hctx when hctx->sched_tags is not set, then it succeeds for all of
them (due to the way in which blk_mq_tag_update_depth() fails). As such,
there is no need to call blk_mq_tag_resize_shared_sbitmap() for each
hctx.

So relocate the call to after the hctx iteration, under the !q->elevator
check, which is equivalent (to !hctx->sched_tags).

Signed-off-by: John Garry
---
 block/blk-mq.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)
--
2.26.2

Reviewed-by: Ming Lei

diff --git a/block/blk-mq.c b/block/blk-mq.c
index ae28f470893c..0e4825ca9869 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3624,8 +3624,6 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
 		if (!hctx->sched_tags) {
 			ret = blk_mq_tag_update_depth(hctx, &hctx->tags, nr,
 							false);
-			if (!ret && blk_mq_is_sbitmap_shared(set->flags))
-				blk_mq_tag_resize_shared_sbitmap(set, nr);
 		} else {
 			ret = blk_mq_tag_update_depth(hctx, &hctx->sched_tags,
 							nr, true);
@@ -3643,9 +3641,13 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
 	}
 	if (!ret) {
 		q->nr_requests = nr;
-		if (q->elevator && blk_mq_is_sbitmap_shared(set->flags))
-			sbitmap_queue_resize(&q->sched_bitmap_tags,
-					     nr - set->reserved_tags);
+		if (blk_mq_is_sbitmap_shared(set->flags)) {
+			if (q->elevator)
+				sbitmap_queue_resize(&q->sched_bitmap_tags,
+						     nr - set->reserved_tags);
+			else
+				blk_mq_tag_resize_shared_sbitmap(set, nr);
+		}
 	}

 	blk_mq_unquiesce_queue(q);

From patchwork Mon Aug 9 14:29:31 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 493871
From: John Garry
Subject: [PATCH v2 04/11] blk-mq: Invert check in blk_mq_update_nr_requests()
Date: Mon, 9 Aug 2021 22:29:31 +0800
Message-ID: <1628519378-211232-5-git-send-email-john.garry@huawei.com>
In-Reply-To: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
References: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
List-ID: linux-scsi@vger.kernel.org

It's easier to read:

	if (x)
		X;
	else
		Y;

over:

	if (!x)
		Y;
	else
		X;

No functional change intended.

Signed-off-by: John Garry
---
 block/blk-mq.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
--
2.26.2

Reviewed-by: Ming Lei

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 0e4825ca9869..42c4b8d1a570 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3621,18 +3621,18 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
 		 * If we're using an MQ scheduler, just update the scheduler
 		 * queue depth. This is similar to what the old code would do.
 		 */
-		if (!hctx->sched_tags) {
-			ret = blk_mq_tag_update_depth(hctx, &hctx->tags, nr,
-							false);
-		} else {
+		if (hctx->sched_tags) {
 			ret = blk_mq_tag_update_depth(hctx, &hctx->sched_tags,
-							nr, true);
+						      nr, true);
 			if (blk_mq_is_sbitmap_shared(set->flags)) {
 				hctx->sched_tags->bitmap_tags =
 					&q->sched_bitmap_tags;
 				hctx->sched_tags->breserved_tags =
 					&q->sched_breserved_tags;
 			}
+		} else {
+			ret = blk_mq_tag_update_depth(hctx, &hctx->tags, nr,
+						      false);
 		}
 		if (ret)
 			break;

From patchwork Mon Aug 9 14:29:32 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 493874
From: John Garry
Subject: [PATCH v2 05/11] blk-mq-sched: Rename blk_mq_sched_alloc_{tags -> map_and_request}()
Date: Mon, 9 Aug 2021 22:29:32 +0800
Message-ID: <1628519378-211232-6-git-send-email-john.garry@huawei.com>
In-Reply-To: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
References: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
List-ID: linux-scsi@vger.kernel.org

Function blk_mq_sched_alloc_tags() does the same as
__blk_mq_alloc_map_and_request(), so give it a similar name to be
consistent.

Similarly rename label err_free_tags -> err_free_map_and_request.

Signed-off-by: John Garry
---
 block/blk-mq-sched.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)
--
2.26.2

Reviewed-by: Ming Lei

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 2231fb0d4c35..b4d7ad9a7a60 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -515,9 +515,9 @@ void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
 	percpu_ref_put(&q->q_usage_counter);
 }

-static int blk_mq_sched_alloc_tags(struct request_queue *q,
-				   struct blk_mq_hw_ctx *hctx,
-				   unsigned int hctx_idx)
+static int blk_mq_sched_alloc_map_and_request(struct request_queue *q,
+					      struct blk_mq_hw_ctx *hctx,
+					      unsigned int hctx_idx)
 {
 	struct blk_mq_tag_set *set = q->tag_set;
 	int ret;
@@ -609,15 +609,15 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
 				   BLKDEV_DEFAULT_RQ);

 	queue_for_each_hw_ctx(q, hctx, i) {
-		ret = blk_mq_sched_alloc_tags(q, hctx, i);
+		ret = blk_mq_sched_alloc_map_and_request(q, hctx, i);
 		if (ret)
-			goto err_free_tags;
+			goto err_free_map_and_request;
 	}

 	if (blk_mq_is_sbitmap_shared(q->tag_set->flags)) {
 		ret = blk_mq_init_sched_shared_sbitmap(q);
 		if (ret)
-			goto err_free_tags;
+			goto err_free_map_and_request;
 	}

 	ret = e->ops.init_sched(q, e);
@@ -645,7 +645,7 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
 err_free_sbitmap:
 	if (blk_mq_is_sbitmap_shared(q->tag_set->flags))
 		blk_mq_exit_sched_shared_sbitmap(q);
-err_free_tags:
+err_free_map_and_request:
 	blk_mq_sched_free_requests(q);
 	blk_mq_sched_tags_teardown(q);
 	q->elevator = NULL;

From patchwork Mon
Aug 9 14:29:33 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 493876
From: John Garry
Subject: [PATCH v2 06/11] blk-mq: Pass driver tags to blk_mq_clear_rq_mapping()
Date: Mon, 9 Aug 2021 22:29:33 +0800
Message-ID: <1628519378-211232-7-git-send-email-john.garry@huawei.com>
In-Reply-To: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
References: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Function blk_mq_clear_rq_mapping() will be used for shared sbitmap tags
in future, so pass a driver tags pointer instead of the tagset container
and HW queue index.

Signed-off-by: John Garry
---
 block/blk-mq.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

-- 
2.26.2

Reported-by: kernel test robot

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 42c4b8d1a570..0bb596f4d061 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2310,10 +2310,9 @@ static size_t order_to_size(unsigned int order)
 }
 
 /* called before freeing request pool in @tags */
-static void blk_mq_clear_rq_mapping(struct blk_mq_tag_set *set,
-		struct blk_mq_tags *tags, unsigned int hctx_idx)
+void blk_mq_clear_rq_mapping(struct blk_mq_tags *drv_tags,
+		struct blk_mq_tags *tags)
 {
-	struct blk_mq_tags *drv_tags = set->tags[hctx_idx];
 	struct page *page;
 	unsigned long flags;
 
@@ -2322,7 +2321,7 @@ static void blk_mq_clear_rq_mapping(struct blk_mq_tag_set *set,
 		unsigned long end = start + order_to_size(page->private);
 		int i;
 
-		for (i = 0; i < set->queue_depth; i++) {
+		for (i = 0; i < drv_tags->nr_tags; i++) {
 			struct request *rq = drv_tags->rqs[i];
 			unsigned long rq_addr = (unsigned long)rq;
 
@@ -2346,8 +2345,11 @@ static void blk_mq_clear_rq_mapping(struct blk_mq_tag_set *set,
 void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		     unsigned int hctx_idx)
 {
+	struct blk_mq_tags *drv_tags;
 	struct page *page;
 
+	drv_tags = set->tags[hctx_idx];
+
 	if (tags->static_rqs && set->ops->exit_request) {
 		int i;
 
@@ -2361,7 +2363,7 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		}
 	}
 
-	blk_mq_clear_rq_mapping(set, tags, hctx_idx);
+	blk_mq_clear_rq_mapping(drv_tags, tags);
 
 	while (!list_empty(&tags->page_list)) {
 		page = list_first_entry(&tags->page_list, struct page, lru);
From: John Garry
Subject: [PATCH v2 07/11] blk-mq: Add blk_mq_tag_update_sched_shared_sbitmap()
Date: Mon, 9 Aug 2021 22:29:34 +0800
Message-ID: <1628519378-211232-8-git-send-email-john.garry@huawei.com>
In-Reply-To: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
Put the functionality to update the sched shared sbitmap size in a
common function. Since the same formula is always used to resize, and it
can be derived from the request queue argument, just pass the request
queue pointer.

Signed-off-by: John Garry
---
 block/blk-mq-sched.c | 3 +--
 block/blk-mq-tag.c   | 6 ++++++
 block/blk-mq-tag.h   | 1 +
 block/blk-mq.c       | 3 +--
 4 files changed, 9 insertions(+), 4 deletions(-)

-- 
2.26.2

Reviewed-by: Ming Lei

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index b4d7ad9a7a60..ac0408ebcd47 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -575,8 +575,7 @@ static int blk_mq_init_sched_shared_sbitmap(struct request_queue *queue)
 			&queue->sched_breserved_tags;
 	}
 
-	sbitmap_queue_resize(&queue->sched_bitmap_tags,
-			     queue->nr_requests - set->reserved_tags);
+	blk_mq_tag_update_sched_shared_sbitmap(queue);
 
 	return 0;
 }
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 86f87346232a..5f06ad6efc8f 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -634,6 +634,12 @@ void blk_mq_tag_resize_shared_sbitmap(struct blk_mq_tag_set *set, unsigned int s
 	sbitmap_queue_resize(&set->__bitmap_tags, size - set->reserved_tags);
 }
 
+void blk_mq_tag_update_sched_shared_sbitmap(struct request_queue *q)
+{
+	sbitmap_queue_resize(&q->sched_bitmap_tags,
+			     q->nr_requests - q->tag_set->reserved_tags);
+}
+
 /**
  * blk_mq_unique_tag() - return a tag that is unique queue-wide
  * @rq: request for which to compute a unique tag
diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
index 8ed55af08427..88f3c6485543 100644
--- a/block/blk-mq-tag.h
+++ b/block/blk-mq-tag.h
@@ -48,6 +48,7 @@ extern int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
 					unsigned int depth, bool can_grow);
 extern void blk_mq_tag_resize_shared_sbitmap(struct blk_mq_tag_set *set,
 					     unsigned int size);
+extern void blk_mq_tag_update_sched_shared_sbitmap(struct request_queue *q);
 extern void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);
 void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 0bb596f4d061..f14cc2705f9b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3645,8 +3645,7 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
 	q->nr_requests = nr;
 	if (blk_mq_is_sbitmap_shared(set->flags)) {
 		if (q->elevator)
-			sbitmap_queue_resize(&q->sched_bitmap_tags,
-					     nr - set->reserved_tags);
+			blk_mq_tag_update_sched_shared_sbitmap(q);
 		else
 			blk_mq_tag_resize_shared_sbitmap(set, nr);
 	}
From: John Garry
Subject: [PATCH v2 08/11] blk-mq: Add blk_mq_ops.init_request_no_hctx()
Date: Mon, 9 Aug 2021 22:29:35 +0800
Message-ID: <1628519378-211232-9-git-send-email-john.garry@huawei.com>
In-Reply-To: <1628519378-211232-1-git-send-email-john.garry@huawei.com>

Add a variant of the init_request callback which does not pass a
hctx_idx argument. This is important for shared sbitmap support: before
shared static rqs can be introduced, it must be ensured that the LLDD
cannot assume that requests are associated with a specific HW queue.
Signed-off-by: John Garry
---
 block/blk-mq.c         | 15 ++++++++++-----
 include/linux/blk-mq.h |  7 +++++++
 2 files changed, 17 insertions(+), 5 deletions(-)

-- 
2.26.2

diff --git a/block/blk-mq.c b/block/blk-mq.c
index f14cc2705f9b..4d6723cfa582 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2427,13 +2427,15 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
 static int blk_mq_init_request(struct blk_mq_tag_set *set, struct request *rq,
 			       unsigned int hctx_idx, int node)
 {
-	int ret;
+	int ret = 0;
 
-	if (set->ops->init_request) {
+	if (set->ops->init_request)
 		ret = set->ops->init_request(set, rq, hctx_idx, node);
-		if (ret)
-			return ret;
-	}
+	else if (set->ops->init_request_no_hctx)
+		ret = set->ops->init_request_no_hctx(set, rq, node);
+
+	if (ret)
+		return ret;
 
 	WRITE_ONCE(rq->state, MQ_RQ_IDLE);
 	return 0;
@@ -3487,6 +3489,9 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set)
 	if (!set->ops->get_budget ^ !set->ops->put_budget)
 		return -EINVAL;
 
+	if (set->ops->init_request && set->ops->init_request_no_hctx)
+		return -EINVAL;
+
 	if (set->queue_depth > BLK_MQ_MAX_DEPTH) {
 		pr_info("blk-mq: reduced tag depth to %u\n",
 			BLK_MQ_MAX_DEPTH);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 22215db36122..c838b24944c2 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -357,6 +357,13 @@ struct blk_mq_ops {
 	 */
 	int (*init_request)(struct blk_mq_tag_set *set, struct request *,
 			    unsigned int, unsigned int);
+
+	/**
+	 * @init_request_no_hctx: Same as @init_request, except no hw queue index is passed
+	 */
+	int (*init_request_no_hctx)(struct blk_mq_tag_set *set, struct request *,
+				    unsigned int);
+
 	/**
 	 * @exit_request: Ditto for exit/teardown.
 	 */
From: John Garry
Subject: [PATCH v2 09/11] scsi: Set blk_mq_ops.init_request_no_hctx
Date: Mon, 9 Aug 2021 22:29:36 +0800
Message-ID: <1628519378-211232-10-git-send-email-john.garry@huawei.com>
In-Reply-To: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
The hctx_idx argument is not used in scsi_mq_init_request(), so set it
as the blk_mq_ops.init_request_no_hctx callback instead.

Signed-off-by: John Garry
---
 drivers/scsi/scsi_lib.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

-- 
2.26.2

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 7456a26aef51..6ea4d0847970 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1738,7 +1738,7 @@ static enum blk_eh_timer_return scsi_timeout(struct request *req,
 }
 
 static int scsi_mq_init_request(struct blk_mq_tag_set *set, struct request *rq,
-				unsigned int hctx_idx, unsigned int numa_node)
+				unsigned int numa_node)
 {
 	struct Scsi_Host *shost = set->driver_data;
 	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);
@@ -1856,7 +1856,7 @@ static const struct blk_mq_ops scsi_mq_ops_no_commit = {
 #ifdef CONFIG_BLK_DEBUG_FS
 	.show_rq	= scsi_show_rq,
 #endif
-	.init_request	= scsi_mq_init_request,
+	.init_request_no_hctx = scsi_mq_init_request,
 	.exit_request	= scsi_mq_exit_request,
 	.initialize_rq_fn = scsi_initialize_rq,
 	.cleanup_rq	= scsi_cleanup_rq,
@@ -1886,7 +1886,7 @@ static const struct blk_mq_ops scsi_mq_ops = {
 #ifdef CONFIG_BLK_DEBUG_FS
 	.show_rq	= scsi_show_rq,
 #endif
-	.init_request	= scsi_mq_init_request,
+	.init_request_no_hctx = scsi_mq_init_request,
 	.exit_request	= scsi_mq_exit_request,
 	.initialize_rq_fn = scsi_initialize_rq,
 	.cleanup_rq	= scsi_cleanup_rq,
From: John Garry
Subject: [PATCH v2 10/11] blk-mq: Use shared tags for shared sbitmap support
Date: Mon, 9 Aug 2021 22:29:37 +0800
Message-ID: <1628519378-211232-11-git-send-email-john.garry@huawei.com>
In-Reply-To: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
Currently we use separate sbitmap pairs and active_queues atomic_t for
shared sbitmap support. However, a full set of static requests is used
per HW queue, which is quite wasteful, considering that the total number
of requests usable at any given time across all HW queues is limited by
the shared sbitmap depth.

As such, it is considerably more memory efficient in the case of shared
sbitmap to allocate a set of static rqs per tag set or request queue,
and not per HW queue.

So replace the sbitmap pairs and active_queues atomic_t with shared tags
per tagset and request queue. Continue to use the term "shared sbitmap"
for now, as its meaning is known.

Signed-off-by: John Garry
---
 block/blk-mq-sched.c   | 77 ++++++++++++++++++++-----------------
 block/blk-mq-tag.c     | 65 ++++++++++++-------------------
 block/blk-mq-tag.h     |  4 +-
 block/blk-mq.c         | 86 +++++++++++++++++++++++++-----------------
 block/blk-mq.h         |  8 ++--
 include/linux/blk-mq.h | 13 +++----
 include/linux/blkdev.h |  3 +-
 7 files changed, 131 insertions(+), 125 deletions(-)

-- 
2.26.2

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index ac0408ebcd47..1101a2e4da9a 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -522,14 +522,19 @@ static int blk_mq_sched_alloc_map_and_request(struct request_queue *q,
 	struct blk_mq_tag_set *set = q->tag_set;
 	int ret;
 
+	if (blk_mq_is_sbitmap_shared(q->tag_set->flags)) {
+		hctx->sched_tags = q->shared_sbitmap_tags;
+		return 0;
+	}
+
 	hctx->sched_tags = blk_mq_alloc_rq_map(set, hctx_idx, q->nr_requests,
-					       set->reserved_tags, set->flags);
+					       set->reserved_tags);
 	if (!hctx->sched_tags)
 		return -ENOMEM;
 
 	ret = blk_mq_alloc_rqs(set, hctx->sched_tags, hctx_idx, q->nr_requests);
 	if (ret) {
-		blk_mq_free_rq_map(hctx->sched_tags, set->flags);
+		blk_mq_free_rq_map(hctx->sched_tags);
 		hctx->sched_tags = NULL;
 	}
@@ -544,35 +549,39 @@ static void blk_mq_sched_tags_teardown(struct request_queue *q)
 	queue_for_each_hw_ctx(q, hctx, i) {
 		if (hctx->sched_tags) {
-			blk_mq_free_rq_map(hctx->sched_tags, hctx->flags);
+			if (!blk_mq_is_sbitmap_shared(q->tag_set->flags))
+				blk_mq_free_rq_map(hctx->sched_tags);
 			hctx->sched_tags = NULL;
 		}
 	}
 }
 
+static void blk_mq_exit_sched_shared_sbitmap(struct request_queue *queue)
+{
+	blk_mq_free_rq_map(queue->shared_sbitmap_tags);
+	queue->shared_sbitmap_tags = NULL;
+}
+
 static int blk_mq_init_sched_shared_sbitmap(struct request_queue *queue)
 {
 	struct blk_mq_tag_set *set = queue->tag_set;
-	int alloc_policy = BLK_MQ_FLAG_TO_ALLOC_POLICY(set->flags);
-	struct blk_mq_hw_ctx *hctx;
-	int ret, i;
+	struct blk_mq_tags *tags;
+	int ret;
 
 	/*
 	 * Set initial depth at max so that we don't need to reallocate for
 	 * updating nr_requests.
 	 */
-	ret = blk_mq_init_bitmaps(&queue->sched_bitmap_tags,
-				  &queue->sched_breserved_tags,
-				  MAX_SCHED_RQ, set->reserved_tags,
-				  set->numa_node, alloc_policy);
-	if (ret)
-		return ret;
+	tags = queue->shared_sbitmap_tags = blk_mq_alloc_rq_map(set, 0,
+						set->queue_depth,
+						set->reserved_tags);
+	if (!queue->shared_sbitmap_tags)
+		return -ENOMEM;
 
-	queue_for_each_hw_ctx(queue, hctx, i) {
-		hctx->sched_tags->bitmap_tags =
-			&queue->sched_bitmap_tags;
-		hctx->sched_tags->breserved_tags =
-			&queue->sched_breserved_tags;
+	ret = blk_mq_alloc_rqs(set, tags, 0, set->queue_depth);
+	if (ret) {
+		blk_mq_exit_sched_shared_sbitmap(queue);
+		return ret;
 	}
 
 	blk_mq_tag_update_sched_shared_sbitmap(queue);
@@ -580,12 +589,6 @@ static int blk_mq_init_sched_shared_sbitmap(struct request_queue *queue)
 	return 0;
 }
 
-static void blk_mq_exit_sched_shared_sbitmap(struct request_queue *queue)
-{
-	sbitmap_queue_free(&queue->sched_bitmap_tags);
-	sbitmap_queue_free(&queue->sched_breserved_tags);
-}
-
 int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
 {
 	struct blk_mq_hw_ctx *hctx;
@@ -607,21 +610,21 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
 	q->nr_requests = 2 * min_t(unsigned int, q->tag_set->queue_depth,
 				   BLKDEV_DEFAULT_RQ);
 
-	queue_for_each_hw_ctx(q, hctx, i) {
-		ret = blk_mq_sched_alloc_map_and_request(q, hctx, i);
+	if (blk_mq_is_sbitmap_shared(q->tag_set->flags)) {
+		ret = blk_mq_init_sched_shared_sbitmap(q);
 		if (ret)
-			goto err_free_map_and_request;
+			return ret;
 	}
 
-	if (blk_mq_is_sbitmap_shared(q->tag_set->flags)) {
-		ret = blk_mq_init_sched_shared_sbitmap(q);
+	queue_for_each_hw_ctx(q, hctx, i) {
+		ret = blk_mq_sched_alloc_map_and_request(q, hctx, i);
 		if (ret)
 			goto err_free_map_and_request;
 	}
 
 	ret = e->ops.init_sched(q, e);
 	if (ret)
-		goto err_free_sbitmap;
+		goto err_free_map_and_request;
 
 	blk_mq_debugfs_register_sched(q);
@@ -641,12 +644,12 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
 	return 0;
 
-err_free_sbitmap:
-	if (blk_mq_is_sbitmap_shared(q->tag_set->flags))
-		blk_mq_exit_sched_shared_sbitmap(q);
 err_free_map_and_request:
 	blk_mq_sched_free_requests(q);
 	blk_mq_sched_tags_teardown(q);
+	if (blk_mq_is_sbitmap_shared(q->tag_set->flags))
+		blk_mq_exit_sched_shared_sbitmap(q);
+
 	q->elevator = NULL;
 	return ret;
 }
@@ -660,9 +663,13 @@ void blk_mq_sched_free_requests(struct request_queue *q)
 	struct blk_mq_hw_ctx *hctx;
 	int i;
 
-	queue_for_each_hw_ctx(q, hctx, i) {
-		if (hctx->sched_tags)
-			blk_mq_free_rqs(q->tag_set, hctx->sched_tags, i);
+	if (blk_mq_is_sbitmap_shared(q->tag_set->flags)) {
+		blk_mq_free_rqs(q->tag_set, q->shared_sbitmap_tags, 0);
+	} else {
+		queue_for_each_hw_ctx(q, hctx, i) {
+			if (hctx->sched_tags)
+				blk_mq_free_rqs(q->tag_set, hctx->sched_tags, i);
+		}
 	}
 }
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 5f06ad6efc8f..e97bbf477b96 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -27,10 +27,11 @@ bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
 	if (blk_mq_is_sbitmap_shared(hctx->flags)) {
 		struct request_queue *q = hctx->queue;
 		struct blk_mq_tag_set *set = q->tag_set;
+		struct blk_mq_tags *tags = set->shared_sbitmap_tags;
 
 		if (!test_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags) &&
 		    !test_and_set_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags))
-			atomic_inc(&set->active_queues_shared_sbitmap);
+			atomic_inc(&tags->active_queues);
 	} else {
 		if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state) &&
 		    !test_and_set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
@@ -61,10 +62,12 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
 	struct blk_mq_tag_set *set = q->tag_set;
 
 	if (blk_mq_is_sbitmap_shared(hctx->flags)) {
+		struct blk_mq_tags *tags = set->shared_sbitmap_tags;
+
 		if (!test_and_clear_bit(QUEUE_FLAG_HCTX_ACTIVE,
					&q->queue_flags))
 			return;
-		atomic_dec(&set->active_queues_shared_sbitmap);
+		atomic_dec(&tags->active_queues);
 	} else {
 		if (!test_and_clear_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
 			return;
@@ -510,38 +513,16 @@ static int blk_mq_init_bitmap_tags(struct blk_mq_tags *tags,
 	return 0;
 }
 
-int blk_mq_init_shared_sbitmap(struct blk_mq_tag_set *set)
-{
-	int alloc_policy = BLK_MQ_FLAG_TO_ALLOC_POLICY(set->flags);
-	int i, ret;
-
-	ret = blk_mq_init_bitmaps(&set->__bitmap_tags, &set->__breserved_tags,
-				  set->queue_depth, set->reserved_tags,
-				  set->numa_node, alloc_policy);
-	if (ret)
-		return ret;
-
-	for (i = 0; i < set->nr_hw_queues; i++) {
-		struct blk_mq_tags *tags = set->tags[i];
-
-		tags->bitmap_tags = &set->__bitmap_tags;
-		tags->breserved_tags = &set->__breserved_tags;
-	}
-
-	return 0;
-}
-
 void blk_mq_exit_shared_sbitmap(struct blk_mq_tag_set *set)
 {
-	sbitmap_queue_free(&set->__bitmap_tags);
-	sbitmap_queue_free(&set->__breserved_tags);
+	blk_mq_free_rq_map(set->shared_sbitmap_tags);
+	set->shared_sbitmap_tags = NULL;
 }
 
 struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags,
 				     unsigned int reserved_tags,
-				     int node, unsigned int flags)
+				     int node, int alloc_policy)
 {
-	int alloc_policy = BLK_MQ_FLAG_TO_ALLOC_POLICY(flags);
 	struct blk_mq_tags *tags;
 
 	if (total_tags > BLK_MQ_TAG_MAX) {
@@ -557,9 +538,6 @@ struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags,
 	tags->nr_reserved_tags = reserved_tags;
 	spin_lock_init(&tags->lock);
 
-	if (blk_mq_is_sbitmap_shared(flags))
-		return tags;
-
 	if (blk_mq_init_bitmap_tags(tags, node, alloc_policy) < 0) {
 		kfree(tags);
 		return NULL;
@@ -567,12 +545,10 @@ struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags,
 	return tags;
 }
 
-void blk_mq_free_tags(struct blk_mq_tags *tags, unsigned int flags)
+void blk_mq_free_tags(struct blk_mq_tags *tags)
 {
-	if (!blk_mq_is_sbitmap_shared(flags)) {
-		sbitmap_queue_free(tags->bitmap_tags);
-		sbitmap_queue_free(tags->breserved_tags);
-	}
+	sbitmap_queue_free(tags->bitmap_tags);
+	sbitmap_queue_free(tags->breserved_tags);
 	kfree(tags);
 }
@@ -604,18 +580,25 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
 	if (tdepth > MAX_SCHED_RQ)
 		return -EINVAL;
 
+	if (blk_mq_is_sbitmap_shared(set->flags)) {
+		/* No point in allowing this to happen */
+		if (tdepth > set->queue_depth)
+			return -EINVAL;
+		return 0;
+	}
+
 	new = blk_mq_alloc_rq_map(set, hctx->queue_num, tdepth,
-				  tags->nr_reserved_tags, set->flags);
+				  tags->nr_reserved_tags);
 	if (!new)
 		return -ENOMEM;
 
 	ret = blk_mq_alloc_rqs(set, new, hctx->queue_num, tdepth);
 	if (ret) {
-		blk_mq_free_rq_map(new, set->flags);
+		blk_mq_free_rq_map(new);
 		return -ENOMEM;
 	}
 
 	blk_mq_free_rqs(set, *tagsptr, hctx->queue_num);
-	blk_mq_free_rq_map(*tagsptr, set->flags);
+	blk_mq_free_rq_map(*tagsptr);
 	*tagsptr = new;
 } else {
	/*
@@ -631,12 +614,14 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
 void blk_mq_tag_resize_shared_sbitmap(struct blk_mq_tag_set *set, unsigned int size)
 {
-	sbitmap_queue_resize(&set->__bitmap_tags, size - set->reserved_tags);
+	struct blk_mq_tags *tags = set->shared_sbitmap_tags;
+
+	sbitmap_queue_resize(&tags->__bitmap_tags, size - set->reserved_tags);
 }
 
 void blk_mq_tag_update_sched_shared_sbitmap(struct request_queue *q)
 {
-	sbitmap_queue_resize(&q->sched_bitmap_tags,
+	sbitmap_queue_resize(q->shared_sbitmap_tags->bitmap_tags,
 			     q->nr_requests - q->tag_set->reserved_tags);
 }
diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
index 88f3c6485543..c9fc52ee07c4 100644
--- a/block/blk-mq-tag.h
+++ b/block/blk-mq-tag.h
@@ -30,8 +30,8 @@ struct blk_mq_tags {
 
 extern struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags,
 					unsigned int reserved_tags,
-					int node, unsigned int flags);
-extern void blk_mq_free_tags(struct blk_mq_tags *tags, unsigned int flags);
+					int node, int alloc_policy);
+extern void blk_mq_free_tags(struct blk_mq_tags *tags);
 extern int blk_mq_init_bitmaps(struct sbitmap_queue *bitmap_tags,
 			       struct sbitmap_queue *breserved_tags,
 			       unsigned int queue_depth,
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4d6723cfa582..d3dd5fab3426 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2348,6 +2348,9 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 	struct blk_mq_tags *drv_tags;
 	struct page *page;
 
+	if (blk_mq_is_sbitmap_shared(set->flags))
+		drv_tags = set->shared_sbitmap_tags;
+	else
 	drv_tags = set->tags[hctx_idx];
 
 	if (tags->static_rqs && set->ops->exit_request) {
@@ -2377,21 +2380,20 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 	}
 }
 
-void blk_mq_free_rq_map(struct blk_mq_tags *tags, unsigned int flags)
+void blk_mq_free_rq_map(struct blk_mq_tags *tags)
 {
 	kfree(tags->rqs);
 	tags->rqs = NULL;
 	kfree(tags->static_rqs);
 	tags->static_rqs = NULL;
 
-	blk_mq_free_tags(tags, flags);
+	blk_mq_free_tags(tags);
 }
 
 struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
 					unsigned int hctx_idx,
 					unsigned int nr_tags,
-					unsigned int reserved_tags,
-					unsigned int flags)
+					unsigned int reserved_tags)
 {
 	struct blk_mq_tags *tags;
 	int node;
@@ -2400,7 +2402,8 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
 	if (node == NUMA_NO_NODE)
 		node = set->numa_node;
 
-	tags = blk_mq_init_tags(nr_tags, reserved_tags, node, flags);
+	tags = blk_mq_init_tags(nr_tags,
reserved_tags, node, + BLK_MQ_FLAG_TO_ALLOC_POLICY(set->flags)); if (!tags) return NULL; @@ -2408,7 +2411,7 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set, GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY, node); if (!tags->rqs) { - blk_mq_free_tags(tags, flags); + blk_mq_free_tags(tags); return NULL; } @@ -2417,7 +2420,7 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set, node); if (!tags->static_rqs) { kfree(tags->rqs); - blk_mq_free_tags(tags, flags); + blk_mq_free_tags(tags); return NULL; } @@ -2859,8 +2862,14 @@ static bool __blk_mq_alloc_map_and_request(struct blk_mq_tag_set *set, unsigned int flags = set->flags; int ret = 0; + if (blk_mq_is_sbitmap_shared(flags)) { + set->tags[hctx_idx] = set->shared_sbitmap_tags; + + return true; + } + set->tags[hctx_idx] = blk_mq_alloc_rq_map(set, hctx_idx, - set->queue_depth, set->reserved_tags, flags); + set->queue_depth, set->reserved_tags); if (!set->tags[hctx_idx]) return false; @@ -2869,7 +2878,7 @@ static bool __blk_mq_alloc_map_and_request(struct blk_mq_tag_set *set, if (!ret) return true; - blk_mq_free_rq_map(set->tags[hctx_idx], flags); + blk_mq_free_rq_map(set->tags[hctx_idx]); set->tags[hctx_idx] = NULL; return false; } @@ -2877,11 +2886,11 @@ static bool __blk_mq_alloc_map_and_request(struct blk_mq_tag_set *set, static void blk_mq_free_map_and_requests(struct blk_mq_tag_set *set, unsigned int hctx_idx) { - unsigned int flags = set->flags; - if (set->tags && set->tags[hctx_idx]) { - blk_mq_free_rqs(set, set->tags[hctx_idx], hctx_idx); - blk_mq_free_rq_map(set->tags[hctx_idx], flags); + if (!blk_mq_is_sbitmap_shared(set->flags)) { + blk_mq_free_rqs(set, set->tags[hctx_idx], hctx_idx); + blk_mq_free_rq_map(set->tags[hctx_idx]); + } set->tags[hctx_idx] = NULL; } } @@ -3348,6 +3357,21 @@ static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set) { int i; + if (blk_mq_is_sbitmap_shared(set->flags)) { + int ret; + + set->shared_sbitmap_tags = blk_mq_alloc_rq_map(set, 0, + 
set->queue_depth, + set->reserved_tags); + if (!set->shared_sbitmap_tags) + return -ENOMEM; + + ret = blk_mq_alloc_rqs(set, set->shared_sbitmap_tags, 0, + set->queue_depth); + if (ret) + goto out_free_sbitmap_tags; + } + for (i = 0; i < set->nr_hw_queues; i++) { if (!__blk_mq_alloc_map_and_request(set, i)) goto out_unwind; @@ -3359,6 +3383,11 @@ static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set) out_unwind: while (--i >= 0) blk_mq_free_map_and_requests(set, i); + if (blk_mq_is_sbitmap_shared(set->flags)) + blk_mq_free_rqs(set, set->shared_sbitmap_tags, 0); +out_free_sbitmap_tags: + if (blk_mq_is_sbitmap_shared(set->flags)) + blk_mq_exit_shared_sbitmap(set); return -ENOMEM; } @@ -3492,6 +3521,9 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set) if (set->ops->init_request && set->ops->init_request_no_hctx) return -EINVAL; + if (set->ops->init_request && blk_mq_is_sbitmap_shared(set->flags)) + return -EINVAL; + if (set->queue_depth > BLK_MQ_MAX_DEPTH) { pr_info("blk-mq: reduced tag depth to %u\n", BLK_MQ_MAX_DEPTH); @@ -3541,23 +3573,11 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set) if (ret) goto out_free_mq_map; - if (blk_mq_is_sbitmap_shared(set->flags)) { - atomic_set(&set->active_queues_shared_sbitmap, 0); - - if (blk_mq_init_shared_sbitmap(set)) { - ret = -ENOMEM; - goto out_free_mq_rq_maps; - } - } - mutex_init(&set->tag_list_lock); INIT_LIST_HEAD(&set->tag_list); return 0; -out_free_mq_rq_maps: - for (i = 0; i < set->nr_hw_queues; i++) - blk_mq_free_map_and_requests(set, i); out_free_mq_map: for (i = 0; i < set->nr_maps; i++) { kfree(set->map[i].mq_map); @@ -3589,11 +3609,15 @@ void blk_mq_free_tag_set(struct blk_mq_tag_set *set) { int i, j; - for (i = 0; i < set->nr_hw_queues; i++) - blk_mq_free_map_and_requests(set, i); + if (blk_mq_is_sbitmap_shared(set->flags)) { + struct blk_mq_tags *tags = set->shared_sbitmap_tags; - if (blk_mq_is_sbitmap_shared(set->flags)) + blk_mq_free_rqs(set, tags, 0); blk_mq_exit_shared_sbitmap(set); + } + + 
for (i = 0; i < set->nr_hw_queues; i++) + blk_mq_free_map_and_requests(set, i); for (j = 0; j < set->nr_maps; j++) { kfree(set->map[j].mq_map); @@ -3631,12 +3655,6 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr) if (hctx->sched_tags) { ret = blk_mq_tag_update_depth(hctx, &hctx->sched_tags, nr, true); - if (blk_mq_is_sbitmap_shared(set->flags)) { - hctx->sched_tags->bitmap_tags = - &q->sched_bitmap_tags; - hctx->sched_tags->breserved_tags = - &q->sched_breserved_tags; - } } else { ret = blk_mq_tag_update_depth(hctx, &hctx->tags, nr, false); diff --git a/block/blk-mq.h b/block/blk-mq.h index d08779f77a26..f595521a4b3d 100644 --- a/block/blk-mq.h +++ b/block/blk-mq.h @@ -54,12 +54,11 @@ void blk_mq_put_rq_ref(struct request *rq); */ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags, unsigned int hctx_idx); -void blk_mq_free_rq_map(struct blk_mq_tags *tags, unsigned int flags); +void blk_mq_free_rq_map(struct blk_mq_tags *tags); struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set, unsigned int hctx_idx, unsigned int nr_tags, - unsigned int reserved_tags, - unsigned int flags); + unsigned int reserved_tags); int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags, unsigned int hctx_idx, unsigned int depth); @@ -334,10 +333,11 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx, if (blk_mq_is_sbitmap_shared(hctx->flags)) { struct request_queue *q = hctx->queue; struct blk_mq_tag_set *set = q->tag_set; + struct blk_mq_tags *tags = set->shared_sbitmap_tags; if (!test_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags)) return true; - users = atomic_read(&set->active_queues_shared_sbitmap); + users = atomic_read(&tags->active_queues); } else { if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state)) return true; diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h index c838b24944c2..e67068bf648c 100644 --- a/include/linux/blk-mq.h +++ b/include/linux/blk-mq.h @@ -232,13 +232,11 @@ 
enum hctx_type { * @flags: Zero or more BLK_MQ_F_* flags. * @driver_data: Pointer to data owned by the block driver that created this * tag set. - * @active_queues_shared_sbitmap: - * number of active request queues per tag set. - * @__bitmap_tags: A shared tags sbitmap, used over all hctx's - * @__breserved_tags: - * A shared reserved tags sbitmap, used over all hctx's * @tags: Tag sets. One tag set per hardware queue. Has @nr_hw_queues * elements. + * @shared_sbitmap_tags: + * Shared sbitmap set of tags. Has @nr_hw_queues elements. If + * set, shared by all @tags. * @tag_list_lock: Serializes tag_list accesses. * @tag_list: List of the request queues that use this tag set. See also * request_queue.tag_set_list. @@ -255,12 +253,11 @@ struct blk_mq_tag_set { unsigned int timeout; unsigned int flags; void *driver_data; - atomic_t active_queues_shared_sbitmap; - struct sbitmap_queue __bitmap_tags; - struct sbitmap_queue __breserved_tags; struct blk_mq_tags **tags; + struct blk_mq_tags *shared_sbitmap_tags; + struct mutex tag_list_lock; struct list_head tag_list; }; diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 56870a43ae4c..f5a039c251e3 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -467,8 +467,7 @@ struct request_queue { atomic_t nr_active_requests_shared_sbitmap; - struct sbitmap_queue sched_bitmap_tags; - struct sbitmap_queue sched_breserved_tags; + struct blk_mq_tags *shared_sbitmap_tags; struct list_head icq_list; #ifdef CONFIG_BLK_CGROUP From patchwork Mon Aug 9 14:29:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: John Garry X-Patchwork-Id: 493875 Delivered-To: patch@linaro.org Received: by 2002:a05:6638:396:0:0:0:0 with SMTP id y22csp2977212jap; Mon, 9 Aug 2021 07:38:06 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzQFVgw7WvNDVICu5pqbyzDTvz4nsFHRU4gsNwKSbtpdNFTjeMTdo/Nx/Y0X+ZxWqQUeHIr X-Received: by 2002:a05:6e02:1d12:: with SMTP id 
From: John Garry Subject: [PATCH v2 11/11] blk-mq: Stop using pointers for blk_mq_tags bitmap tags Date: Mon, 9 Aug 2021 22:29:38 +0800 Message-ID: <1628519378-211232-12-git-send-email-john.garry@huawei.com> In-Reply-To: <1628519378-211232-1-git-send-email-john.garry@huawei.com> References: <1628519378-211232-1-git-send-email-john.garry@huawei.com>
Now that we use shared tags for shared sbitmap support, we don't require the tags sbitmap pointers, so drop them. This essentially reverts commit 222a5ae03cdd ("blk-mq: Use pointers for blk_mq_tags bitmap tags"). Signed-off-by: John Garry --- block/bfq-iosched.c | 4 ++-- block/blk-mq-debugfs.c | 8 +++---- block/blk-mq-tag.c | 50 ++++++++++++++++------------------ block/blk-mq-tag.h | 7 ++---- block/blk-mq.c | 8 +++---- block/kyber-iosched.c | 4 ++-- block/mq-deadline-main.c | 2 +- 7 files changed, 35 insertions(+), 48 deletions(-) -- 2.26.2 Reviewed-by: Ming Lei diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 727955918563..91ba4eacafaa 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -6881,8 +6881,8 @@ static void bfq_depth_updated(struct blk_mq_hw_ctx *hctx) struct blk_mq_tags *tags = hctx->sched_tags; unsigned int min_shallow; - min_shallow = bfq_update_depths(bfqd, tags->bitmap_tags); - sbitmap_queue_min_shallow_depth(tags->bitmap_tags, min_shallow); + min_shallow = bfq_update_depths(bfqd, &tags->bitmap_tags); + sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, min_shallow); } static int bfq_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int index) diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c index 4b66d2776eda..4000376330c9 100644 --- a/block/blk-mq-debugfs.c +++ b/block/blk-mq-debugfs.c @@ -452,11 +452,11 @@ static void blk_mq_debugfs_tags_show(struct seq_file *m, atomic_read(&tags->active_queues)); seq_puts(m, "\nbitmap_tags:\n"); - sbitmap_queue_show(tags->bitmap_tags, m); + sbitmap_queue_show(&tags->bitmap_tags, m); if (tags->nr_reserved_tags) { seq_puts(m, "\nbreserved_tags:\n"); - sbitmap_queue_show(tags->breserved_tags, m); + sbitmap_queue_show(&tags->breserved_tags, m); } } @@ -487,7
+487,7 @@ static int hctx_tags_bitmap_show(void *data, struct seq_file *m) if (res) goto out; if (hctx->tags) - sbitmap_bitmap_show(&hctx->tags->bitmap_tags->sb, m); + sbitmap_bitmap_show(&hctx->tags->bitmap_tags.sb, m); mutex_unlock(&q->sysfs_lock); out: @@ -521,7 +521,7 @@ static int hctx_sched_tags_bitmap_show(void *data, struct seq_file *m) if (res) goto out; if (hctx->sched_tags) - sbitmap_bitmap_show(&hctx->sched_tags->bitmap_tags->sb, m); + sbitmap_bitmap_show(&hctx->sched_tags->bitmap_tags.sb, m); mutex_unlock(&q->sysfs_lock); out: diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c index e97bbf477b96..b84955ee0967 100644 --- a/block/blk-mq-tag.c +++ b/block/blk-mq-tag.c @@ -46,9 +46,9 @@ bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx) */ void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool include_reserve) { - sbitmap_queue_wake_all(tags->bitmap_tags); + sbitmap_queue_wake_all(&tags->bitmap_tags); if (include_reserve) - sbitmap_queue_wake_all(tags->breserved_tags); + sbitmap_queue_wake_all(&tags->breserved_tags); } /* @@ -104,10 +104,10 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data) WARN_ON_ONCE(1); return BLK_MQ_NO_TAG; } - bt = tags->breserved_tags; + bt = &tags->breserved_tags; tag_offset = 0; } else { - bt = tags->bitmap_tags; + bt = &tags->bitmap_tags; tag_offset = tags->nr_reserved_tags; } @@ -153,9 +153,9 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data) data->ctx); tags = blk_mq_tags_from_data(data); if (data->flags & BLK_MQ_REQ_RESERVED) - bt = tags->breserved_tags; + bt = &tags->breserved_tags; else - bt = tags->bitmap_tags; + bt = &tags->bitmap_tags; /* * If destination hw queue is changed, fake wake up on @@ -189,10 +189,10 @@ void blk_mq_put_tag(struct blk_mq_tags *tags, struct blk_mq_ctx *ctx, const int real_tag = tag - tags->nr_reserved_tags; BUG_ON(real_tag >= tags->nr_tags); - sbitmap_queue_clear(tags->bitmap_tags, real_tag, ctx->cpu); + sbitmap_queue_clear(&tags->bitmap_tags, real_tag, ctx->cpu); } 
else { BUG_ON(tag >= tags->nr_reserved_tags); - sbitmap_queue_clear(tags->breserved_tags, tag, ctx->cpu); + sbitmap_queue_clear(&tags->breserved_tags, tag, ctx->cpu); } } @@ -343,9 +343,9 @@ static void __blk_mq_all_tag_iter(struct blk_mq_tags *tags, WARN_ON_ONCE(flags & BT_TAG_ITER_RESERVED); if (tags->nr_reserved_tags) - bt_tags_for_each(tags, tags->breserved_tags, fn, priv, + bt_tags_for_each(tags, &tags->breserved_tags, fn, priv, flags | BT_TAG_ITER_RESERVED); - bt_tags_for_each(tags, tags->bitmap_tags, fn, priv, flags); + bt_tags_for_each(tags, &tags->bitmap_tags, fn, priv, flags); } /** @@ -462,8 +462,8 @@ void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn, continue; if (tags->nr_reserved_tags) - bt_for_each(hctx, tags->breserved_tags, fn, priv, true); - bt_for_each(hctx, tags->bitmap_tags, fn, priv, false); + bt_for_each(hctx, &tags->breserved_tags, fn, priv, true); + bt_for_each(hctx, &tags->bitmap_tags, fn, priv, false); } blk_queue_exit(q); } @@ -498,19 +498,9 @@ int blk_mq_init_bitmaps(struct sbitmap_queue *bitmap_tags, static int blk_mq_init_bitmap_tags(struct blk_mq_tags *tags, int node, int alloc_policy) { - int ret; - - ret = blk_mq_init_bitmaps(&tags->__bitmap_tags, - &tags->__breserved_tags, - tags->nr_tags, tags->nr_reserved_tags, - node, alloc_policy); - if (ret) - return ret; - - tags->bitmap_tags = &tags->__bitmap_tags; - tags->breserved_tags = &tags->__breserved_tags; - - return 0; + return blk_mq_init_bitmaps(&tags->bitmap_tags, &tags->breserved_tags, + tags->nr_tags, tags->nr_reserved_tags, + node, alloc_policy); } void blk_mq_exit_shared_sbitmap(struct blk_mq_tag_set *set) @@ -547,8 +537,8 @@ struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags, void blk_mq_free_tags(struct blk_mq_tags *tags) { - sbitmap_queue_free(tags->bitmap_tags); - sbitmap_queue_free(tags->breserved_tags); + sbitmap_queue_free(&tags->bitmap_tags); + sbitmap_queue_free(&tags->breserved_tags); kfree(tags); } @@ -605,7 +595,7 @@ int 
blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx, * Don't need (or can't) update reserved tags here, they * remain static and should never need resizing. */ - sbitmap_queue_resize(tags->bitmap_tags, + sbitmap_queue_resize(&tags->bitmap_tags, tdepth - tags->nr_reserved_tags); } @@ -616,12 +606,12 @@ void blk_mq_tag_resize_shared_sbitmap(struct blk_mq_tag_set *set, unsigned int s { struct blk_mq_tags *tags = set->shared_sbitmap_tags; - sbitmap_queue_resize(&tags->__bitmap_tags, size - set->reserved_tags); + sbitmap_queue_resize(&tags->bitmap_tags, size - set->reserved_tags); } void blk_mq_tag_update_sched_shared_sbitmap(struct request_queue *q) { - sbitmap_queue_resize(q->shared_sbitmap_tags->bitmap_tags, + sbitmap_queue_resize(&q->shared_sbitmap_tags->bitmap_tags, q->nr_requests - q->tag_set->reserved_tags); } diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h index c9fc52ee07c4..ba6502a9a5d8 100644 --- a/block/blk-mq-tag.h +++ b/block/blk-mq-tag.h @@ -11,11 +11,8 @@ struct blk_mq_tags { atomic_t active_queues; - struct sbitmap_queue *bitmap_tags; - struct sbitmap_queue *breserved_tags; - - struct sbitmap_queue __bitmap_tags; - struct sbitmap_queue __breserved_tags; + struct sbitmap_queue bitmap_tags; + struct sbitmap_queue breserved_tags; struct request **rqs; struct request **static_rqs; diff --git a/block/blk-mq.c b/block/blk-mq.c index d3dd5fab3426..a98ba16f7a76 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -1082,14 +1082,14 @@ static inline unsigned int queued_to_index(unsigned int queued) static bool __blk_mq_get_driver_tag(struct request *rq) { - struct sbitmap_queue *bt = rq->mq_hctx->tags->bitmap_tags; + struct sbitmap_queue *bt = &rq->mq_hctx->tags->bitmap_tags; unsigned int tag_offset = rq->mq_hctx->tags->nr_reserved_tags; int tag; blk_mq_tag_busy(rq->mq_hctx); if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag)) { - bt = rq->mq_hctx->tags->breserved_tags; + bt = &rq->mq_hctx->tags->breserved_tags; tag_offset = 0; } else { if 
(!hctx_may_queue(rq->mq_hctx, bt)) @@ -1132,7 +1132,7 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode, struct sbitmap_queue *sbq; list_del_init(&wait->entry); - sbq = hctx->tags->bitmap_tags; + sbq = &hctx->tags->bitmap_tags; atomic_dec(&sbq->ws_active); } spin_unlock(&hctx->dispatch_wait_lock); @@ -1150,7 +1150,7 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode, static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx, struct request *rq) { - struct sbitmap_queue *sbq = hctx->tags->bitmap_tags; + struct sbitmap_queue *sbq = &hctx->tags->bitmap_tags; struct wait_queue_head *wq; wait_queue_entry_t *wait; bool ret; diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c index 81e3279ecd57..d6f9f17e40b4 100644 --- a/block/kyber-iosched.c +++ b/block/kyber-iosched.c @@ -451,11 +451,11 @@ static void kyber_depth_updated(struct blk_mq_hw_ctx *hctx) { struct kyber_queue_data *kqd = hctx->queue->elevator->elevator_data; struct blk_mq_tags *tags = hctx->sched_tags; - unsigned int shift = tags->bitmap_tags->sb.shift; + unsigned int shift = tags->bitmap_tags.sb.shift; kqd->async_depth = (1U << shift) * KYBER_ASYNC_PERCENT / 100U; - sbitmap_queue_min_shallow_depth(tags->bitmap_tags, kqd->async_depth); + sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, kqd->async_depth); } static int kyber_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx) diff --git a/block/mq-deadline-main.c b/block/mq-deadline-main.c index 6f612e6dc82b..9cdc80da28fe 100644 --- a/block/mq-deadline-main.c +++ b/block/mq-deadline-main.c @@ -554,7 +554,7 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx) dd->async_depth = max(1UL, 3 * q->nr_requests / 4); - sbitmap_queue_min_shallow_depth(tags->bitmap_tags, dd->async_depth); + sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth); } /* Called by blk_mq_init_hctx() and blk_mq_init_sched(). */