From patchwork Mon Oct 5 15:27:03 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 268128
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Johannes Thumshirn,
 Christoph Hellwig, Damien Le Moal, Keith Busch, Jens Axboe,
 Revanth Rajashekar
Subject: [PATCH 5.4 51/57] nvme: Introduce nvme_lba_to_sect()
Date: Mon, 5 Oct 2020 17:27:03 +0200
Message-Id: <20201005142112.254052588@linuxfoundation.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201005142109.796046410@linuxfoundation.org>
References: <20201005142109.796046410@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Damien Le Moal

commit e08f2ae850929d40e66268ee47e443e7ea56eeb7 upstream.

Introduce the new helper function nvme_lba_to_sect() to convert a
device logical block number to a 512B sector number. Use this new
helper in obvious places, cleaning up the code.
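
As a side note for readers of this backport, the conversion is a plain
left shift: SECTOR_SHIFT is 9 in the kernel (512-byte sectors), and
ns->lba_shift is the log2 of the namespace logical block size, so a
namespace formatted with 4096-byte blocks shifts by 12 - 9 = 3, i.e.
multiplies the LBA by the 8 sectors that fit in one block. Below is a
minimal user-space sketch of the same arithmetic; it is illustrative
only and not part of the patch, and lba_to_sect() plus the example
values (lba_shift = 12, LBA 100) are made up for the demonstration:

  #include <stdint.h>
  #include <stdio.h>

  #define SECTOR_SHIFT 9  /* 512-byte sectors, as in the kernel */

  /* Same arithmetic as the new helper, with lba_shift passed in. */
  static uint64_t lba_to_sect(unsigned int lba_shift, uint64_t lba)
  {
          return lba << (lba_shift - SECTOR_SHIFT);
  }

  int main(void)
  {
          /* 4096-byte logical blocks: lba_shift = 12, LBA 100 -> sector 800 */
          printf("%llu\n", (unsigned long long)lba_to_sect(12, 100));
          return 0;
  }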
Reviewed-by: Johannes Thumshirn
Reviewed-by: Christoph Hellwig
Signed-off-by: Damien Le Moal
Signed-off-by: Keith Busch
Signed-off-by: Jens Axboe
Signed-off-by: Revanth Rajashekar
Signed-off-by: Greg Kroah-Hartman
---
 drivers/nvme/host/core.c |   14 +++++++-------
 drivers/nvme/host/nvme.h |    8 ++++++++
 2 files changed, 15 insertions(+), 7 deletions(-)

--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1682,7 +1682,7 @@ static void nvme_init_integrity(struct g
 
 static void nvme_set_chunk_size(struct nvme_ns *ns)
 {
-	u32 chunk_size = (((u32)ns->noiob) << (ns->lba_shift - 9));
+	u32 chunk_size = nvme_lba_to_sect(ns, ns->noiob);
 
 	blk_queue_chunk_sectors(ns->queue, rounddown_pow_of_two(chunk_size));
 }
@@ -1719,8 +1719,7 @@ static void nvme_config_discard(struct g
 
 static void nvme_config_write_zeroes(struct gendisk *disk, struct nvme_ns *ns)
 {
-	u32 max_sectors;
-	unsigned short bs = 1 << ns->lba_shift;
+	u64 max_blocks;
 
 	if (!(ns->ctrl->oncs & NVME_CTRL_ONCS_WRITE_ZEROES) ||
 	    (ns->ctrl->quirks & NVME_QUIRK_DISABLE_WRITE_ZEROES))
@@ -1736,11 +1735,12 @@ static void nvme_config_write_zeroes(str
 	 * nvme_init_identify() if available.
 	 */
 	if (ns->ctrl->max_hw_sectors == UINT_MAX)
-		max_sectors = ((u32)(USHRT_MAX + 1) * bs) >> 9;
+		max_blocks = (u64)USHRT_MAX + 1;
 	else
-		max_sectors = ((u32)(ns->ctrl->max_hw_sectors + 1) * bs) >> 9;
+		max_blocks = ns->ctrl->max_hw_sectors + 1;
 
-	blk_queue_max_write_zeroes_sectors(disk->queue, max_sectors);
+	blk_queue_max_write_zeroes_sectors(disk->queue,
+					   nvme_lba_to_sect(ns, max_blocks));
 }
 
 static int nvme_report_ns_ids(struct nvme_ctrl *ctrl, unsigned int nsid,
@@ -1774,7 +1774,7 @@ static bool nvme_ns_ids_equal(struct nvm
 static void nvme_update_disk_info(struct gendisk *disk,
 		struct nvme_ns *ns, struct nvme_id_ns *id)
 {
-	sector_t capacity = le64_to_cpu(id->nsze) << (ns->lba_shift - 9);
+	sector_t capacity = nvme_lba_to_sect(ns, le64_to_cpu(id->nsze));
 	unsigned short bs = 1 << ns->lba_shift;
 	u32 atomic_bs, phys_bs, io_opt;
 
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -437,6 +437,14 @@ static inline u64 nvme_sect_to_lba(struc
 	return sector >> (ns->lba_shift - SECTOR_SHIFT);
 }
 
+/*
+ * Convert a device logical block number to a 512B sector number.
+ */
+static inline sector_t nvme_lba_to_sect(struct nvme_ns *ns, u64 lba)
+{
+	return lba << (ns->lba_shift - SECTOR_SHIFT);
+}
+
 static inline void nvme_end_request(struct request *req, __le16 status,
 		union nvme_result result)
 {
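
A brief follow-up note on the shape of the change: the new
nvme_lba_to_sect() in nvme.h is the exact inverse of the
nvme_sect_to_lba() helper visible in the hunk context just above it;
one shifts left by (ns->lba_shift - SECTOR_SHIFT) to go from logical
blocks to 512B sectors, the other shifts right by the same amount to
go back. The core.c hunks then simply replace the open-coded
"<< (ns->lba_shift - 9)" expressions with calls to the new helper.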