From patchwork Fri Jan 18 11:52:19 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Paolo Valente
X-Patchwork-Id: 155922
Delivered-To: patch@linaro.org
From: Paolo Valente
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	ulf.hansson@linaro.org, linus.walleij@linaro.org, broonie@kernel.org,
	bfq-iosched@googlegroups.com, oleksandr@natalenko.name,
	hurikhan77+bko@gmail.com, Paolo Valente
Subject: [PATCH BUGFIX RFC 2/2] Revert "bfq: calculate shallow depths at init time"
Date: Fri, 18 Jan 2019 12:52:19 +0100
Message-Id: <20190118115219.63576-3-paolo.valente@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190118115219.63576-1-paolo.valente@linaro.org>
References: <20190118115219.63576-1-paolo.valente@linaro.org>
MIME-Version: 1.0
X-Mailing-List: linux-kernel@vger.kernel.org

This reverts commit f0635b8a416e3b99dc6fd9ac3ce534764869d0c8.
---
 block/bfq-iosched.c | 117 +++++++++++++++++++++-----------------------
 1 file changed, 57 insertions(+), 60 deletions(-)

-- 
2.20.1

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 8cc3032b66de..92214d58510c 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -520,6 +520,54 @@ static struct request *bfq_choose_req(struct bfq_data *bfqd,
 	}
 }
 
+/*
+ * See the comments on bfq_limit_depth for the purpose of
+ * the depths set in the function. Return minimum shallow depth we'll use.
+ */
+static unsigned int bfq_update_depths(struct bfq_data *bfqd,
+				      struct sbitmap_queue *bt)
+{
+	unsigned int i, j, min_shallow = UINT_MAX;
+
+	bfqd->sb_shift = bt->sb.shift;
+
+	/*
+	 * In-word depths if no bfq_queue is being weight-raised:
+	 * leaving 25% of tags only for sync reads.
+	 *
+	 * In next formulas, right-shift the value
+	 * (1U<<bfqd->sb_shift), instead of computing directly
+	 * (1U<<(bfqd->sb_shift - something)), to be robust against
+	 * any possible value of bfqd->sb_shift, without having to
+	 * limit 'something'.
+	 */
+	/* no more than 50% of tags for async I/O */
+	bfqd->word_depths[0][0] = max((1U<<bfqd->sb_shift)>>1, 1U);
+	/*
+	 * no more than 75% of tags for sync writes (25% extra tags
+	 * w.r.t. async I/O, to prevent async I/O from starving sync
+	 * writes)
+	 */
+	bfqd->word_depths[0][1] = max(((1U<<bfqd->sb_shift) * 3)>>2, 1U);
+
+	/*
+	 * In-word depths in case some bfq_queue is being weight-
+	 * raised: leaving ~63% of tags for sync reads. This is the
+	 * highest percentage for which, in our tests, application
+	 * start-up times didn't suffer from any regression due to tag
+	 * shortage.
+	 */
+	/* no more than ~18% of tags for async I/O */
+	bfqd->word_depths[1][0] = max(((1U<<bfqd->sb_shift) * 3)>>4, 1U);
+	/* no more than ~37% of tags for sync writes (~20% extra tags) */
+	bfqd->word_depths[1][1] = max(((1U<<bfqd->sb_shift) * 6)>>4, 1U);
+
+	for (i = 0; i < 2; i++)
+		for (j = 0; j < 2; j++)
+			min_shallow = min(min_shallow, bfqd->word_depths[i][j]);
+
+	return min_shallow;
+}
+
 /*
  * Async I/O can easily starve sync I/O (both sync reads and sync
  * writes), by consuming all tags. Similarly, storms of sync writes,
@@ -529,11 +577,20 @@ static struct request *bfq_choose_req(struct bfq_data *bfqd,
  */
 static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
 {
+	struct blk_mq_tags *tags = blk_mq_tags_from_data(data);
 	struct bfq_data *bfqd = data->q->elevator->elevator_data;
+	struct sbitmap_queue *bt;
 
 	if (op_is_sync(op) && !op_is_write(op))
 		return;
 
+	bt = &tags->bitmap_tags;
+
+	if (unlikely(bfqd->sb_shift != bt->sb.shift)) {
+		unsigned int min_shallow = bfq_update_depths(bfqd, bt);
+
+		sbitmap_queue_min_shallow_depth(&tags->bitmap_tags,
+						min_shallow);
+	}
+
 	data->shallow_depth =
 		bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
 
@@ -5295,65 +5352,6 @@ void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
 	__bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq);
 }
 
-/*
- * See the comments on bfq_limit_depth for the purpose of
- * the depths set in the function. Return minimum shallow depth we'll use.
- */
-static unsigned int bfq_update_depths(struct bfq_data *bfqd,
-				      struct sbitmap_queue *bt)
-{
-	unsigned int i, j, min_shallow = UINT_MAX;
-
-	bfqd->sb_shift = bt->sb.shift;
-
-	/*
-	 * In-word depths if no bfq_queue is being weight-raised:
-	 * leaving 25% of tags only for sync reads.
-	 *
-	 * In next formulas, right-shift the value
-	 * (1U<<bfqd->sb_shift), instead of computing directly
-	 * (1U<<(bfqd->sb_shift - something)), to be robust against
-	 * any possible value of bfqd->sb_shift, without having to
-	 * limit 'something'.
-	 */
-	/* no more than 50% of tags for async I/O */
-	bfqd->word_depths[0][0] = max((1U<<bfqd->sb_shift)>>1, 1U);
-	/*
-	 * no more than 75% of tags for sync writes (25% extra tags
-	 * w.r.t. async I/O, to prevent async I/O from starving sync
-	 * writes)
-	 */
-	bfqd->word_depths[0][1] = max(((1U<<bfqd->sb_shift) * 3)>>2, 1U);
-
-	/*
-	 * In-word depths in case some bfq_queue is being weight-
-	 * raised: leaving ~63% of tags for sync reads. This is the
-	 * highest percentage for which, in our tests, application
-	 * start-up times didn't suffer from any regression due to tag
-	 * shortage.
-	 */
-	/* no more than ~18% of tags for async I/O */
-	bfqd->word_depths[1][0] = max(((1U<<bfqd->sb_shift) * 3)>>4, 1U);
-	/* no more than ~37% of tags for sync writes (~20% extra tags) */
-	bfqd->word_depths[1][1] = max(((1U<<bfqd->sb_shift) * 6)>>4, 1U);
-
-	for (i = 0; i < 2; i++)
-		for (j = 0; j < 2; j++)
-			min_shallow = min(min_shallow, bfqd->word_depths[i][j]);
-
-	return min_shallow;
-}
-
-static int bfq_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int index)
-{
-	struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
-	struct blk_mq_tags *tags = hctx->sched_tags;
-	unsigned int min_shallow;
-
-	min_shallow = bfq_update_depths(bfqd, &tags->bitmap_tags);
-	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, min_shallow);
-	return 0;
-}
-
 static void bfq_exit_queue(struct elevator_queue *e)
 {
 	struct bfq_data *bfqd = e->elevator_data;
@@ -5773,7 +5771,6 @@ static struct elevator_type iosched_bfq_mq = {
 	.requests_merged = bfq_requests_merged,
 	.request_merged = bfq_request_merged,
 	.has_work = bfq_has_work,
-	.init_hctx = bfq_init_hctx,
 	.init_sched = bfq_init_queue,
 	.exit_sched = bfq_exit_queue,
 },