From patchwork Tue Aug 6 12:06:43 2024
X-Patchwork-Submitter: Daniel Wagner
X-Patchwork-Id: 817987
From: Daniel Wagner
Date: Tue, 06 Aug 2024 14:06:43 +0200
Subject: [PATCH v3 11/15] nvme-pci: use block layer helpers to calculate num of queues
Message-Id: <20240806-isolcpus-io-queues-v3-11-da0eecfeaf8b@suse.de>
References: <20240806-isolcpus-io-queues-v3-0-da0eecfeaf8b@suse.de>
In-Reply-To: <20240806-isolcpus-io-queues-v3-0-da0eecfeaf8b@suse.de>
To: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
    Christoph Hellwig, "Martin K. Petersen", John Garry, "Michael S.
Tsirkin" , Jason Wang , Kashyap Desai , Sumit Saxena , Shivasharan S , Chandrakanth patil , Sathya Prakash Veerichetty , Suganath Prabu Subramani , Nilesh Javali , GR-QLogic-Storage-Upstream@marvell.com, Jonathan Corbet Cc: Frederic Weisbecker , Mel Gorman , Hannes Reinecke , Sridhar Balaraman , "brookxu.cn" , Ming Lei , linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org, virtualization@lists.linux.dev, megaraidlinux.pdl@broadcom.com, mpi3mr-linuxdrv.pdl@broadcom.com, MPT-FusionLinux.pdl@broadcom.com, storagedev@microchip.com, linux-doc@vger.kernel.org, Daniel Wagner X-Mailer: b4 0.14.0 X-Last-TLS-Session-Version: TLSv1.3 Multiqueue devices should only allocate queues for the housekeeping CPUs when isolcpus=io_queue is set. This avoids that the isolated CPUs get disturbed with OS workload. Use helpers which calculates the correct number of queues which should be used when isolcpus is used. Signed-off-by: Daniel Wagner Reviewed-by: Christoph Hellwig --- drivers/nvme/host/pci.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 5d509405a3e4..50ff2cef3d4f 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -81,7 +81,7 @@ static int io_queue_count_set(const char *val, const struct kernel_param *kp) int ret; ret = kstrtouint(val, 10, &n); - if (ret != 0 || n > num_possible_cpus()) + if (ret != 0 || n > blk_mq_num_possible_queues(0)) return -EINVAL; return param_set_uint(val, kp); } @@ -2300,7 +2300,8 @@ static unsigned int nvme_max_io_queues(struct nvme_dev *dev) */ if (dev->ctrl.quirks & NVME_QUIRK_SHARED_TAGS) return 1; - return num_possible_cpus() + dev->nr_write_queues + dev->nr_poll_queues; + return blk_mq_num_possible_queues(0) + dev->nr_write_queues + + dev->nr_poll_queues; } static int nvme_setup_io_queues(struct nvme_dev *dev)