From patchwork Tue Mar 12 08:59:30 2019
X-Patchwork-Submitter: Paolo Valente
X-Patchwork-Id: 160078
From: Paolo Valente
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    ulf.hansson@linaro.org, linus.walleij@linaro.org, broonie@kernel.org,
    bfq-iosched@googlegroups.com, oleksandr@natalenko.name,
    fra.fra.800@gmail.com, alessio.masola@gmail.com,
    holger@applied-asynchrony.com, Paolo Valente
Subject: [PATCH BUGFIX IMPROVEMENT V3 4/9] block, bfq: do not merge queues on flash storage with queueing
Date: Tue, 12 Mar 2019 09:59:30 +0100
Message-Id: <20190312085935.11340-5-paolo.valente@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190312085935.11340-1-paolo.valente@linaro.org>
References: <20190312085935.11340-1-paolo.valente@linaro.org>
To boost throughput with a set of processes doing interleaved I/O
(i.e., a set of processes whose individual I/O is random, but whose
merged, cumulative I/O is sequential), BFQ merges the queues
associated with these processes, i.e., it redirects the I/O of these
processes into a common, shared queue. In the shared queue, I/O
requests are ordered by their position on the medium, so sequential
I/O gets dispatched to the device when the shared queue is served.

Queue merging costs execution time, because, to detect which queues
to merge, BFQ must maintain a list of the head I/O requests of active
queues, ordered by request positions. Measurements showed that this
costs about 10% of BFQ's total per-request processing time.

Request processing time becomes more and more critical as the speed
of the underlying storage device grows. Yet, fortunately, queue
merging is basically useless on the very devices that are so fast as
to make request processing time critical. To reach a high throughput,
these devices must have many requests queued at the same time. But,
in this configuration, the internal scheduling algorithms of these
devices also do the job of queue merging: they reorder requests so as
to obtain an I/O pattern that is as sequential as possible. As a
consequence, with processes doing interleaved I/O, the throughput
reached by one such device is likely to be the same, with and without
queue merging.

In view of this fact, this commit disables queue merging, and all
related housekeeping, for non-rotational devices with internal
queueing. The total, single-lock-protected, per-request processing
time of BFQ drops to, e.g., 1.9 us on an Intel Core i7-2760QM@2.40GHz
(time measured with simple code instrumentation, and using the
throughput-sync.sh script of the S suite [1], in performance-profiling
mode). To put this result into context, the total,
single-lock-protected, per-request execution time of the lightest I/O
scheduler available in blk-mq, mq-deadline, is 0.7 us (mq-deadline is
~800 LOC, against ~10500 LOC for BFQ).

Disabling merging provides a further, remarkable benefit in terms of
throughput. Merging tends to make many workloads artificially more
uneven, mainly because shared queues remain non-empty for incomparably
more time than normal queues. So, if, e.g., one of the queues in a
set of merged queues has a higher weight than a normal queue, then
the shared queue may inherit that high weight and, by staying almost
always active, may force BFQ to perform I/O plugging most of the
time. This evidently makes it harder for BFQ to let the device reach
a high throughput.

As a practical example of this problem, and of the benefits of this
commit, we measured again the throughput in the nasty scenario
considered in previous commit messages: the dbench test (in the
Phoronix suite), with 6 clients, on a filesystem with journaling, and
with the journaling daemon enjoying a higher weight than normal
processes. With this commit, the throughput grows from ~150 MB/s to
~200 MB/s on a PLEXTOR PX-256M5 SSD. This is the same peak throughput
reached by all the other I/O schedulers. As such, it is also likely
to be the maximum throughput reachable with this workload on this
device: I/O is mostly random, and the other schedulers basically just
pass I/O requests to the drive as fast as possible.
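To make "interleaved I/O" concrete, here is a minimal, illustrative
userspace sketch (not part of the patch; NWORKERS and CHUNK are
arbitrary choices): each of NWORKERS processes reads every
NWORKERS-th chunk of the same file, so each process's own access
pattern is strided and non-sequential, while the union of all their
requests covers the file in order, which is exactly the pattern that
queue merging (or a fast drive's internal scheduler) turns back into
sequential I/O.

/* Illustrative only: NWORKERS processes read every NWORKERS-th chunk
 * of the same file. Each worker's own pattern is strided, but the
 * union of all workers' requests covers the file sequentially: the
 * "interleaved I/O" that queue merging targets.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define NWORKERS 4
#define CHUNK    (64 * 1024)

static void worker(const char *path, int id)
{
	char *buf = malloc(CHUNK);
	int fd = open(path, O_RDONLY);

	if (fd < 0 || !buf)
		exit(1);
	/* worker `id' reads chunks id, id + NWORKERS, id + 2*NWORKERS, ... */
	for (off_t c = id; ; c += NWORKERS)
		if (pread(fd, buf, CHUNK, c * CHUNK) <= 0)
			break;
	close(fd);
	free(buf);
	exit(0);
}

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	for (int i = 0; i < NWORKERS; i++)
		if (fork() == 0)
			worker(argv[1], i);
	while (wait(NULL) > 0)
		;	/* reap all workers */
	return 0;
}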
[1] https://github.com/Algodev-github/S

Tested-by: Holger Hoffstätte
Tested-by: Oleksandr Natalenko
Tested-by: Francesco Pollicino
Signed-off-by: Alessio Masola
Signed-off-by: Paolo Valente
---
 block/bfq-cgroup.c  |  3 +-
 block/bfq-iosched.c | 73 +++++++++++++++++++++++++++++++++++++++++----
 block/bfq-iosched.h |  3 ++
 3 files changed, 73 insertions(+), 6 deletions(-)

-- 
2.20.1

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index c6113af31960..2a74a3f2a8f7 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -578,7 +578,8 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	bfqg_and_blkg_get(bfqg);
 
 	if (bfq_bfqq_busy(bfqq)) {
-		bfq_pos_tree_add_move(bfqd, bfqq);
+		if (unlikely(!bfqd->nonrot_with_queueing))
+			bfq_pos_tree_add_move(bfqd, bfqq);
 		bfq_activate_bfqq(bfqd, bfqq);
 	}
 
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 41364c0cca8c..b96be3764b8a 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -595,7 +595,16 @@ static bool bfq_too_late_for_merging(struct bfq_queue *bfqq)
 		bfq_merge_time_limit);
 }
 
-void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+/*
+ * The following function is not marked as __cold because it is
+ * actually cold, but for the same performance goal described in the
+ * comments on the likely() at the beginning of
+ * bfq_setup_cooperator(). Unexpectedly, to reach an even lower
+ * execution time for the case where this function is not invoked, we
+ * had to add an unlikely() in each involved if().
+ */
+void __cold
+bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 {
 	struct rb_node **p, *parent;
 	struct bfq_queue *__bfqq;
@@ -1849,8 +1858,9 @@ static void bfq_add_request(struct request *rq)
 
 	/*
 	 * Adjust priority tree position, if next_rq changes.
+	 * See comments on bfq_pos_tree_add_move() for the unlikely().
 	 */
-	if (prev != bfqq->next_rq)
+	if (unlikely(!bfqd->nonrot_with_queueing && prev != bfqq->next_rq))
 		bfq_pos_tree_add_move(bfqd, bfqq);
 
 	if (!bfq_bfqq_busy(bfqq)) /* switching to busy ... */
@@ -1990,7 +2000,9 @@ static void bfq_remove_request(struct request_queue *q,
 			bfqq->pos_root = NULL;
 		}
 	} else {
-		bfq_pos_tree_add_move(bfqd, bfqq);
+		/* see comments on bfq_pos_tree_add_move() for the unlikely() */
+		if (unlikely(!bfqd->nonrot_with_queueing))
+			bfq_pos_tree_add_move(bfqd, bfqq);
 	}
 
 	if (rq->cmd_flags & REQ_META)
@@ -2075,7 +2087,12 @@ static void bfq_request_merged(struct request_queue *q, struct request *req,
 		 */
 		if (prev != bfqq->next_rq) {
 			bfq_updated_next_req(bfqd, bfqq);
-			bfq_pos_tree_add_move(bfqd, bfqq);
+			/*
+			 * See comments on bfq_pos_tree_add_move() for
+			 * the unlikely().
+			 */
+			if (unlikely(!bfqd->nonrot_with_queueing))
+				bfq_pos_tree_add_move(bfqd, bfqq);
 		}
 	}
 }
@@ -2357,6 +2374,46 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 {
 	struct bfq_queue *in_service_bfqq, *new_bfqq;
 
+	/*
+	 * Do not perform queue merging if the device is non
+	 * rotational and performs internal queueing. In fact, such a
+	 * device reaches a high speed through internal parallelism
+	 * and pipelining. This means that, to reach a high
+	 * throughput, it must have many requests enqueued at the same
+	 * time. But, in this configuration, the internal scheduling
+	 * algorithm of the device does exactly the job of queue
+	 * merging: it reorders requests so as to obtain as much as
+	 * possible a sequential I/O pattern. As a consequence, with
+	 * the workload generated by processes doing interleaved I/O,
+	 * the throughput reached by the device is likely to be the
+	 * same, with and without queue merging.
+	 *
+	 * Disabling merging also provides a remarkable benefit in
+	 * terms of throughput. Merging tends to make many workloads
+	 * artificially more uneven, because of shared queues
+	 * remaining non empty for incomparably more time than
+	 * non-merged queues. This may accentuate workload
+	 * asymmetries. For example, if one of the queues in a set of
+	 * merged queues has a higher weight than a normal queue, then
+	 * the shared queue may inherit such a high weight and, by
+	 * staying almost always active, may force BFQ to perform I/O
+	 * plugging most of the time. This evidently makes it harder
+	 * for BFQ to let the device reach a high throughput.
+	 *
+	 * Finally, the likely() macro below is not used because one
+	 * of the two branches is more likely than the other, but to
+	 * have the code path after the following if() executed as
+	 * fast as possible for the case of a non rotational device
+	 * with queueing. We want it because this is the fastest kind
+	 * of device. On the opposite end, the likely() may lengthen
+	 * the execution time of BFQ for the case of slower devices
+	 * (rotational or at least without queueing). But in this case
+	 * the execution time of BFQ matters very little, if not at
+	 * all.
+	 */
+	if (likely(bfqd->nonrot_with_queueing))
+		return NULL;
+
 	/*
 	 * Prevent bfqq from being merged if it has been created too
 	 * long ago. The idea is that true cooperating processes, and
@@ -2986,8 +3043,10 @@ static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 		bfq_requeue_bfqq(bfqd, bfqq, true);
 		/*
 		 * Resort priority tree of potential close cooperators.
+		 * See comments on bfq_pos_tree_add_move() for the unlikely().
 		 */
-		bfq_pos_tree_add_move(bfqd, bfqq);
+		if (unlikely(!bfqd->nonrot_with_queueing))
+			bfq_pos_tree_add_move(bfqd, bfqq);
 	}
 
 	/*
@@ -5051,6 +5110,9 @@ static void bfq_update_hw_tag(struct bfq_data *bfqd)
 	bfqd->hw_tag = bfqd->max_rq_in_driver > BFQ_HW_QUEUE_THRESHOLD;
 	bfqd->max_rq_in_driver = 0;
 	bfqd->hw_tag_samples = 0;
+
+	bfqd->nonrot_with_queueing =
+		blk_queue_nonrot(bfqd->queue) && bfqd->hw_tag;
 }
 
 static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
@@ -5882,6 +5944,7 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
 	INIT_HLIST_HEAD(&bfqd->burst_list);
 
 	bfqd->hw_tag = -1;
+	bfqd->nonrot_with_queueing = blk_queue_nonrot(bfqd->queue);
 
 	bfqd->bfq_max_budget = bfq_default_max_budget;
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 26869cfbbfa9..829730b96fb2 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -497,6 +497,9 @@ struct bfq_data {
	/* number of requests dispatched and waiting for completion */
	int rq_in_driver;
 
+	/* true if the device is non rotational and performs queueing */
+	bool nonrot_with_queueing;
+
	/*
	 * Maximum number of requests in driver in the last
	 * @hw_tag_samples completed requests.
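
For reference, a rough userspace approximation of the device class
this patch targets (an illustrative sketch, not part of the patch):
the block layer exports the rotational flag in sysfs, so
non-rotational devices can be identified by reading
/sys/block/<disk>/queue/rotational. The second half of the in-kernel
test, bfqd->hw_tag, is a heuristic sampled from the number of
requests in flight, and has no direct sysfs counterpart, so queueing
capability cannot be checked this way.

/* Illustrative helper, not part of the patch: report whether a block
 * device advertises itself as non-rotational via sysfs. The in-kernel
 * nonrot_with_queueing flag additionally requires hw_tag, which
 * userspace cannot observe directly.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[128];
	FILE *f;
	int rot;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <disk>, e.g. %s sda\n",
			argv[0], argv[0]);
		return 1;
	}
	snprintf(path, sizeof(path),
		 "/sys/block/%s/queue/rotational", argv[1]);
	f = fopen(path, "r");
	if (!f || fscanf(f, "%d", &rot) != 1) {
		perror(path);
		return 1;
	}
	fclose(f);
	printf("%s: %s\n", argv[1],
	       rot ? "rotational" : "non-rotational");
	return 0;
}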