From patchwork Tue Mar 9 03:27:45 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jingbo Xu <jefflexu@linux.alibaba.com>
X-Patchwork-Id: 397453
From: Jeffle Xu <jefflexu@linux.alibaba.com>
To: snitzer@redhat.com, gregkh@linuxfoundation.org, sashal@kernel.org
Cc: stable@vger.kernel.org, jefflexu@linux.alibaba.com
Subject: [PATCH v2 4.14 3/3] dm table: fix zoned iterate_devices based device capability checks
Date: Tue, 9 Mar 2021 11:27:45 +0800
Message-Id: <20210309032745.106175-4-jefflexu@linux.alibaba.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210309032745.106175-1-jefflexu@linux.alibaba.com>
References: <20210309032745.106175-1-jefflexu@linux.alibaba.com>
Precedence: bulk
List-ID: <stable.vger.kernel.org>
X-Mailing-List: stable@vger.kernel.org

commit 24f6b6036c9eec21191646930ad42808e6180510 upstream.

Fix dm_table_supports_zoned_model() and invert logic of both
iterate_devices_callout_fn so that all devices' zoned capabilities are
properly checked.

Add one more parameter to dm_table_any_dev_attr(), which is actually
used as the @data parameter of iterate_devices_callout_fn, so that
dm_table_matches_zone_sectors() can be replaced by
dm_table_any_dev_attr().

Fixes: dd88d313bef02 ("dm table: add zoned block devices validation")
Cc: stable@vger.kernel.org
Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
[jeffle: also convert no_sg_merge check]
---
 drivers/md/dm-table.c | 50 +++++++++++++++----------------------
 1 file changed, 17 insertions(+), 33 deletions(-)

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 96ac87f9cb85..3b2a880eed68 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1372,10 +1372,10 @@ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector)
  * should use the iteration structure like dm_table_supports_nowait() or
  * dm_table_supports_discards(). Or introduce dm_table_all_devs_attr() that
  * uses an @anti_func that handle semantics of counter examples, e.g. not
- * capable of something. So: return !dm_table_any_dev_attr(t, anti_func);
+ * capable of something. So: return !dm_table_any_dev_attr(t, anti_func, data);
  */
 static bool dm_table_any_dev_attr(struct dm_table *t,
-				  iterate_devices_callout_fn func)
+				  iterate_devices_callout_fn func, void *data)
 {
 	struct dm_target *ti;
 	unsigned int i;
@@ -1384,7 +1384,7 @@ static bool dm_table_any_dev_attr(struct dm_table *t,
 		ti = dm_table_get_target(t, i);
 
 		if (ti->type->iterate_devices &&
-		    ti->type->iterate_devices(ti, func, NULL))
+		    ti->type->iterate_devices(ti, func, data))
 			return true;
 	}
 
@@ -1427,13 +1427,13 @@ bool dm_table_has_no_data_devices(struct dm_table *table)
 	return true;
 }
 
-static int device_is_zoned_model(struct dm_target *ti, struct dm_dev *dev,
-				 sector_t start, sector_t len, void *data)
+static int device_not_zoned_model(struct dm_target *ti, struct dm_dev *dev,
+				  sector_t start, sector_t len, void *data)
 {
 	struct request_queue *q = bdev_get_queue(dev->bdev);
 	enum blk_zoned_model *zoned_model = data;
 
-	return q && blk_queue_zoned_model(q) == *zoned_model;
+	return !q || blk_queue_zoned_model(q) != *zoned_model;
 }
 
 static bool dm_table_supports_zoned_model(struct dm_table *t,
@@ -1450,37 +1450,20 @@ static bool dm_table_supports_zoned_model(struct dm_table *t,
 			return false;
 
 		if (!ti->type->iterate_devices ||
-		    !ti->type->iterate_devices(ti, device_is_zoned_model, &zoned_model))
+		    ti->type->iterate_devices(ti, device_not_zoned_model, &zoned_model))
 			return false;
 	}
 
 	return true;
 }
 
-static int device_matches_zone_sectors(struct dm_target *ti, struct dm_dev *dev,
-				       sector_t start, sector_t len, void *data)
+static int device_not_matches_zone_sectors(struct dm_target *ti, struct dm_dev *dev,
+					   sector_t start, sector_t len, void *data)
 {
 	struct request_queue *q = bdev_get_queue(dev->bdev);
 	unsigned int *zone_sectors = data;
 
-	return q && blk_queue_zone_sectors(q) == *zone_sectors;
-}
-
-static bool dm_table_matches_zone_sectors(struct dm_table *t,
-					  unsigned int zone_sectors)
-{
-	struct dm_target *ti;
-	unsigned i;
-
-	for (i = 0; i < dm_table_get_num_targets(t); i++) {
-		ti = dm_table_get_target(t, i);
-
-		if (!ti->type->iterate_devices ||
-		    !ti->type->iterate_devices(ti, device_matches_zone_sectors, &zone_sectors))
-			return false;
-	}
-
-	return true;
+	return !q || blk_queue_zone_sectors(q) != *zone_sectors;
 }
 
 static int validate_hardware_zoned_model(struct dm_table *table,
@@ -1500,7 +1483,7 @@ static int validate_hardware_zoned_model(struct dm_table *table,
 	if (!zone_sectors || !is_power_of_2(zone_sectors))
 		return -EINVAL;
 
-	if (!dm_table_matches_zone_sectors(table, zone_sectors)) {
+	if (dm_table_any_dev_attr(table, device_not_matches_zone_sectors, &zone_sectors)) {
 		DMERR("%s: zone sectors is not consistent across all devices",
 		      dm_device_name(table->md));
 		return -EINVAL;
@@ -1837,11 +1820,11 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	else
 		queue_flag_clear_unlocked(QUEUE_FLAG_DAX, q);
 
-	if (dm_table_any_dev_attr(t, device_dax_write_cache_enabled))
+	if (dm_table_any_dev_attr(t, device_dax_write_cache_enabled, NULL))
 		dax_write_cache(t->md->dax_dev, true);
 
 	/* Ensure that all underlying devices are non-rotational. */
-	if (dm_table_any_dev_attr(t, device_is_rotational))
+	if (dm_table_any_dev_attr(t, device_is_rotational, NULL))
 		queue_flag_clear_unlocked(QUEUE_FLAG_NONROT, q);
 	else
 		queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
@@ -1851,7 +1834,7 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	if (!dm_table_supports_write_zeroes(t))
 		q->limits.max_write_zeroes_sectors = 0;
 
-	if (dm_table_any_dev_attr(t, queue_no_sg_merge))
+	if (dm_table_any_dev_attr(t, queue_no_sg_merge, NULL))
 		queue_flag_set_unlocked(QUEUE_FLAG_NO_SG_MERGE, q);
 	else
 		queue_flag_clear_unlocked(QUEUE_FLAG_NO_SG_MERGE, q);
@@ -1865,7 +1848,7 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	 * them as well. Only targets that support iterate_devices are considered:
 	 * don't want error, zero, etc to require stable pages.
 	 */
-	if (dm_table_any_dev_attr(t, device_requires_stable_pages))
+	if (dm_table_any_dev_attr(t, device_requires_stable_pages, NULL))
 		q->backing_dev_info->capabilities |= BDI_CAP_STABLE_WRITES;
 	else
 		q->backing_dev_info->capabilities &= ~BDI_CAP_STABLE_WRITES;
@@ -1876,7 +1859,8 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	 * Clear QUEUE_FLAG_ADD_RANDOM if any underlying device does not
 	 * have it set.
 	 */
-	if (blk_queue_add_random(q) && dm_table_any_dev_attr(t, device_is_not_random))
+	if (blk_queue_add_random(q) &&
+	    dm_table_any_dev_attr(t, device_is_not_random, NULL))
 		queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q);
 
 	/*
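
For readers following the logic change: the patch relies on the identity
"all devices satisfy P" == "no device fails P", which is why the positive
predicates device_is_zoned_model()/device_matches_zone_sectors() become the
negative device_not_zoned_model()/device_not_matches_zone_sectors() and the
callers negate dm_table_any_dev_attr(). Below is a minimal stand-alone C
sketch of that pattern; the names (toy_dev, any_dev_attr, dev_not_zoned_model,
table_supports_zoned_model) are invented for illustration and are not the
kernel API.

/*
 * Toy illustration of the inversion used by this patch:
 * "all devices have property P" is implemented as
 * "!(any device lacks property P)".
 * All names here are made up for the example; they are not kernel symbols.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_dev {
	int zoned_model;	/* stand-in for blk_queue_zoned_model() */
};

typedef bool (*dev_pred_fn)(const struct toy_dev *dev, void *data);

/* True if *any* device satisfies @pred, i.e. the dm_table_any_dev_attr() role. */
static bool any_dev_attr(const struct toy_dev *devs, int ndevs,
			 dev_pred_fn pred, void *data)
{
	for (int i = 0; i < ndevs; i++)
		if (pred(&devs[i], data))
			return true;
	return false;
}

/* Negative predicate: device does NOT match the requested zoned model. */
static bool dev_not_zoned_model(const struct toy_dev *dev, void *data)
{
	const int *model = data;

	return dev->zoned_model != *model;
}

/* "All devices match" expressed as "no device mismatches". */
static bool table_supports_zoned_model(const struct toy_dev *devs, int ndevs,
				       int model)
{
	return !any_dev_attr(devs, ndevs, dev_not_zoned_model, &model);
}

int main(void)
{
	struct toy_dev all_same[]    = { { .zoned_model = 1 }, { .zoned_model = 1 } };
	struct toy_dev one_differs[] = { { .zoned_model = 1 }, { .zoned_model = 0 } };

	printf("%d\n", table_supports_zoned_model(all_same, 2, 1));    /* prints 1 */
	printf("%d\n", table_supports_zoned_model(one_differs, 2, 1)); /* prints 0 */
	return 0;
}

With the old positive predicate, iterating until the callout returned
non-zero could report success as soon as one device matched; negating a
"not" predicate is what makes the check cover every underlying device,
and the extra @data parameter lets the same helper carry the zoned model
or zone_sectors value being compared.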