From: Karan Tilak Kumar <kartilak@cisco.com>
To: sebaddel@cisco.com
Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com,
    satishkh@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com,
    linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 01/13] scsi: fnic: Modify definitions to sync with VIC firmware
Date: Fri, 20 Oct 2023 12:06:17 -0700
Message-Id: <20231020190629.338623-2-kartilak@cisco.com>
In-Reply-To: <20231020190629.338623-1-kartilak@cisco.com>
References: <20231020190629.338623-1-kartilak@cisco.com>
List-ID: linux-scsi@vger.kernel.org

VIC firmware has updated definitions. Modify the structure and definitions
to sync with the latest VIC firmware.
Reviewed-by: Sesidhar Baddela <sebaddel@cisco.com>
Reviewed-by: Arulprabhu Ponnusamy <arulponn@cisco.com>
Signed-off-by: Karan Tilak Kumar <kartilak@cisco.com>
---
 drivers/scsi/fnic/vnic_scsi.h | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/fnic/vnic_scsi.h b/drivers/scsi/fnic/vnic_scsi.h
index 4e12f7b32d9d..f715f7942bfe 100644
--- a/drivers/scsi/fnic/vnic_scsi.h
+++ b/drivers/scsi/fnic/vnic_scsi.h
@@ -26,7 +26,7 @@
 #define VNIC_FNIC_RATOV_MAX 255000

 #define VNIC_FNIC_MAXDATAFIELDSIZE_MIN 256
-#define VNIC_FNIC_MAXDATAFIELDSIZE_MAX 2112
+#define VNIC_FNIC_MAXDATAFIELDSIZE_MAX 2048

 #define VNIC_FNIC_FLOGI_RETRIES_MIN 0
 #define VNIC_FNIC_FLOGI_RETRIES_MAX 0xffffffff
@@ -55,7 +55,7 @@
 #define VNIC_FNIC_PORT_DOWN_IO_RETRIES_MAX 255

 #define VNIC_FNIC_LUNS_PER_TARGET_MIN 1
-#define VNIC_FNIC_LUNS_PER_TARGET_MAX 1024
+#define VNIC_FNIC_LUNS_PER_TARGET_MAX 4096

 /* Device-specific region: scsi configuration */
 struct vnic_fc_config {
@@ -79,10 +79,19 @@ struct vnic_fc_config {
 	u16 ra_tov;
 	u16 intr_timer;
 	u8 intr_timer_type;
+	u8 intr_mode;
+	u8 lun_queue_depth;
+	u8 io_timeout_retry;
+	u16 wq_copy_count;
 };

 #define VFCF_FCP_SEQ_LVL_ERR	0x1	/* Enable FCP-2 Error Recovery */
 #define VFCF_PERBI	0x2	/* persistent binding info available */
 #define VFCF_FIP_CAPABLE	0x4	/* firmware can handle FIP */
+#define VFCF_FC_INITIATOR	0x20	/* FC Initiator Mode */
+#define VFCF_FC_TARGET	0x40	/* FC Target Mode */
+#define VFCF_FC_NVME_INITIATOR	0x80	/* FC-NVMe Initiator Mode */
+#define VFCF_FC_NVME_TARGET	0x100	/* FC-NVMe Target Mode */
+
 #endif /* _VNIC_SCSI_H_ */
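For illustration only (not part of the patch), a minimal userspace sketch of how the
new VFCF_* mode bits defined in the hunk above can be decoded from a flags word. The
flag values are taken from the diff; the helper names and the flags argument are
hypothetical.

#include <stdio.h>
#include <stdint.h>

/* Bit definitions copied from the vnic_scsi.h hunk above. */
#define VFCF_FC_INITIATOR       0x20   /* FC Initiator Mode */
#define VFCF_FC_TARGET          0x40   /* FC Target Mode */
#define VFCF_FC_NVME_INITIATOR  0x80   /* FC-NVMe Initiator Mode */
#define VFCF_FC_NVME_TARGET     0x100  /* FC-NVMe Target Mode */

/* Illustrative only: report which modes a hypothetical flags word enables. */
static void print_fc_modes(uint32_t flags)
{
	if (flags & VFCF_FC_INITIATOR)
		printf("FC initiator mode\n");
	if (flags & VFCF_FC_TARGET)
		printf("FC target mode\n");
	if (flags & VFCF_FC_NVME_INITIATOR)
		printf("FC-NVMe initiator mode\n");
	if (flags & VFCF_FC_NVME_TARGET)
		printf("FC-NVMe target mode\n");
}

int main(void)
{
	print_fc_modes(VFCF_FC_INITIATOR | VFCF_FC_NVME_INITIATOR);
	return 0;
}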
From: Karan Tilak Kumar <kartilak@cisco.com>
Subject: [PATCH 02/13] scsi: fnic: Add and use fnic number
Date: Fri, 20 Oct 2023 12:06:18 -0700
Message-Id: <20231020190629.338623-3-kartilak@cisco.com>
In-Reply-To: <20231020190629.338623-1-kartilak@cisco.com>
References: <20231020190629.338623-1-kartilak@cisco.com>

Add fnic_num to fnic.h to identify a fnic in a multi-fnic environment.
Increment and set the fnic number during driver load in fnic_probe().
Replace the host number with the fnic number in debugfs.

Reviewed-by: Sesidhar Baddela <sebaddel@cisco.com>
Reviewed-by: Arulprabhu Ponnusamy <arulponn@cisco.com>
Signed-off-by: Karan Tilak Kumar <kartilak@cisco.com>
---
 drivers/scsi/fnic/fnic.h         | 1 +
 drivers/scsi/fnic/fnic_debugfs.c | 2 +-
 drivers/scsi/fnic/fnic_main.c    | 6 +++++-
 3 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h
index 93c68931a593..c6c549c633b1 100644
--- a/drivers/scsi/fnic/fnic.h
+++ b/drivers/scsi/fnic/fnic.h
@@ -216,6 +216,7 @@ struct fnic_event {

 /* Per-instance private data structure */
 struct fnic {
+	int fnic_num;
 	struct fc_lport *lport;
 	struct fcoe_ctlr ctlr;		/* FIP FCoE controller structure */
 	struct vnic_dev_bar bar0;
diff --git a/drivers/scsi/fnic/fnic_debugfs.c b/drivers/scsi/fnic/fnic_debugfs.c
index c4d9ed0d7d75..fac617672868 100644
--- a/drivers/scsi/fnic/fnic_debugfs.c
+++ b/drivers/scsi/fnic/fnic_debugfs.c
@@ -676,7 +676,7 @@ void fnic_stats_debugfs_init(struct fnic *fnic)
 {
 	char name[16];

-	snprintf(name, sizeof(name), "host%d", fnic->lport->host->host_no);
+	snprintf(name, sizeof(name), "fnic%d", fnic->fnic_num);

 	fnic->fnic_stats_debugfs_host = debugfs_create_dir(name,
 						fnic_stats_debugfs_root);
diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c
index 984bc5fc55e2..ea7b1ba27ac7 100644
--- a/drivers/scsi/fnic/fnic_main.c
+++ b/drivers/scsi/fnic/fnic_main.c
@@ -84,6 +84,9 @@ static struct libfc_function_template fnic_transport_template = {
 	.exch_mgr_reset = fnic_exch_mgr_reset
 };

+
+atomic_t fnic_num;
+
 static int fnic_slave_alloc(struct scsi_device *sdev)
 {
 	struct fc_rport *rport = starget_to_rport(scsi_target(sdev));
@@ -587,6 +590,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	int i;
 	unsigned long flags;

+	atomic_inc(&fnic_num);
 	/*
 	 * Allocate SCSI Host and set up association between host,
 	 * local port, and fnic
@@ -608,7 +612,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		 host->host_no);

 	host->transportt = fnic_fc_transport;
-
+	fnic->fnic_num = atomic_read(&fnic_num);
 	fnic_stats_debugfs_init(fnic);

 	/* Setup PCI resources */
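For reference, a minimal userspace sketch (not driver code) of the numbering pattern
this patch introduces: a driver-wide atomic counter is bumped at probe time and the
result becomes the per-instance number. The structure and function names below are
hypothetical; the kernel code uses atomic_inc()/atomic_read() on the global fnic_num.

#include <stdatomic.h>
#include <stdio.h>

/* Global instance counter, analogous to the driver-wide fnic_num above. */
static atomic_int instance_counter;

struct instance {
	int id;    /* plays the role of fnic->fnic_num */
};

/* Assign a unique number to each probed instance. */
static void probe_one(struct instance *inst)
{
	inst->id = atomic_fetch_add(&instance_counter, 1) + 1;
	printf("fnic%d registered\n", inst->id);
}

int main(void)
{
	struct instance a, b;

	probe_one(&a);
	probe_one(&b);
	return 0;
}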
From: Karan Tilak Kumar <kartilak@cisco.com>
Subject: [PATCH 03/13] scsi: fnic: Add and improve log messages
Date: Fri, 20 Oct 2023 12:06:19 -0700
Message-Id: <20231020190629.338623-4-kartilak@cisco.com>
In-Reply-To: <20231020190629.338623-1-kartilak@cisco.com>
References: <20231020190629.338623-1-kartilak@cisco.com>

Add link-related log messages and improve existing log messages in
fnic_fcs.c. Add a log message in vnic_dev.c.
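For illustration only (not part of the patch), a small standalone sketch of the log
prefix format these hunks converge on: "fnic<N>: <function>: <line>:". The macro name
EXAMPLE_DBG is hypothetical; the driver uses its FNIC_MAIN_DBG/FNIC_FCS_DBG macros.

#include <stdio.h>

/*
 * Illustrative stand-in for the driver's debug macros: prefix every message
 * with the fnic number, the function name, and the line number, as the
 * hunks below do.
 */
#define EXAMPLE_DBG(fnic_num, fmt, ...) \
	printf("fnic<%d>: %s: %d: " fmt, (fnic_num), __func__, __LINE__, ##__VA_ARGS__)

int main(void)
{
	int fnic_num = 0;
	unsigned long long new_port_speed = 32000;

	EXAMPLE_DBG(fnic_num, "Current vnic speed set to: %llu\n", new_port_speed);
	EXAMPLE_DBG(fnic_num, "up->down: Link down\n");
	return 0;
}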
Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Signed-off-by: Karan Tilak Kumar --- drivers/scsi/fnic/fnic_fcs.c | 36 ++++++++++++++++++++++++------------ drivers/scsi/fnic/vnic_dev.c | 4 ++++ 2 files changed, 28 insertions(+), 12 deletions(-) diff --git a/drivers/scsi/fnic/fnic_fcs.c b/drivers/scsi/fnic/fnic_fcs.c index 55632c67a8f2..203ffec625a4 100644 --- a/drivers/scsi/fnic/fnic_fcs.c +++ b/drivers/scsi/fnic/fnic_fcs.c @@ -64,8 +64,8 @@ void fnic_handle_link(struct work_struct *work) new_port_speed); if (old_port_speed != new_port_speed) FNIC_MAIN_DBG(KERN_INFO, fnic->lport->host, - "Current vnic speed set to : %llu\n", - new_port_speed); + "fnic<%d>: %s: %d: Current vnic speed set to: %llu\n", + fnic->fnic_num, __func__, __LINE__, new_port_speed); switch (vnic_dev_port_speed(fnic->vdev)) { case DCEM_PORTSPEED_10G: @@ -102,6 +102,9 @@ void fnic_handle_link(struct work_struct *work) fnic_fc_trace_set_data(fnic->lport->host->host_no, FNIC_FC_LE, "Link Status: DOWN->DOWN", strlen("Link Status: DOWN->DOWN")); + FNIC_MAIN_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: down->down\n", + fnic->fnic_num, __func__, __LINE__); } else { if (old_link_down_cnt != fnic->link_down_cnt) { /* UP -> DOWN -> UP */ @@ -128,8 +131,9 @@ void fnic_handle_link(struct work_struct *work) fnic_fcoe_send_vlan_req(fnic); return; } - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, - "link up\n"); + FNIC_MAIN_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: up->down->up: Link up\n", + fnic->fnic_num, __func__, __LINE__); fcoe_ctlr_link_up(&fnic->ctlr); } else { /* UP -> UP */ @@ -138,6 +142,9 @@ void fnic_handle_link(struct work_struct *work) fnic->lport->host->host_no, FNIC_FC_LE, "Link Status: UP_UP", strlen("Link Status: UP_UP")); + FNIC_MAIN_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: up->up\n", + fnic->fnic_num, __func__, __LINE__); } } } else if (fnic->link_status) { @@ -153,7 +160,9 @@ void fnic_handle_link(struct work_struct *work) return; } - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, "link up\n"); + FNIC_MAIN_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: down->up: Link up\n", + fnic->fnic_num, __func__, __LINE__); fnic_fc_trace_set_data(fnic->lport->host->host_no, FNIC_FC_LE, "Link Status: DOWN_UP", strlen("Link Status: DOWN_UP")); fcoe_ctlr_link_up(&fnic->ctlr); @@ -161,7 +170,9 @@ void fnic_handle_link(struct work_struct *work) /* UP -> DOWN */ fnic->lport->host_stats.link_failure_count++; spin_unlock_irqrestore(&fnic->fnic_lock, flags); - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, "link down\n"); + FNIC_MAIN_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: up->down: Link down\n", + fnic->fnic_num, __func__, __LINE__); fnic_fc_trace_set_data( fnic->lport->host->host_no, FNIC_FC_LE, "Link Status: UP_DOWN", @@ -763,8 +774,9 @@ void fnic_set_port_id(struct fc_lport *lport, u32 port_id, struct fc_frame *fp) u8 *mac; int ret; - FNIC_FCS_DBG(KERN_DEBUG, lport->host, "set port_id %x fp %p\n", - port_id, fp); + FNIC_FCS_DBG(KERN_DEBUG, lport->host, + "fnic<%d>: %s: %d: set port_id 0x%x fp 0x%p\n", + fnic->fnic_num, __func__, __LINE__, port_id, fp); /* * If we're clearing the FC_ID, change to use the ctl_src_addr. 
@@ -790,10 +802,10 @@ void fnic_set_port_id(struct fc_lport *lport, u32 port_id, struct fc_frame *fp) if (fnic->state == FNIC_IN_ETH_MODE || fnic->state == FNIC_IN_FC_MODE) fnic->state = FNIC_IN_ETH_TRANS_FC_MODE; else { - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, - "Unexpected fnic state %s while" - " processing flogi resp\n", - fnic_state_to_str(fnic->state)); + FNIC_FCS_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: Unexpected fnic state: %s processing FLOGI response", + fnic->fnic_num, __func__, __LINE__, + fnic_state_to_str(fnic->state)); spin_unlock_irq(&fnic->fnic_lock); return; } diff --git a/drivers/scsi/fnic/vnic_dev.c b/drivers/scsi/fnic/vnic_dev.c index 3e5b437c0492..e0b173cc9d5f 100644 --- a/drivers/scsi/fnic/vnic_dev.c +++ b/drivers/scsi/fnic/vnic_dev.c @@ -143,6 +143,10 @@ static int vnic_dev_discover_res(struct vnic_dev *vdev, vdev->res[type].vaddr = (char __iomem *)bar->vaddr + bar_offset; } + pr_info("res_type_wq: %d res_type_rq: %d res_type_cq: %d res_type_intr_ctrl: %d\n", + vdev->res[RES_TYPE_WQ].count, vdev->res[RES_TYPE_RQ].count, + vdev->res[RES_TYPE_CQ].count, vdev->res[RES_TYPE_INTR_CTRL].count); + return 0; } From patchwork Fri Oct 20 19:06:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Karan Tilak Kumar \(kartilak\)" X-Patchwork-Id: 736531 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C5149C004C0 for ; Fri, 20 Oct 2023 19:07:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231190AbjJTTHU (ORCPT ); Fri, 20 Oct 2023 15:07:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54888 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230100AbjJTTHM (ORCPT ); Fri, 20 Oct 2023 15:07:12 -0400 Received: from rcdn-iport-7.cisco.com (rcdn-iport-7.cisco.com [173.37.86.78]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3C4DC10DF; Fri, 20 Oct 2023 12:07:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=6775; q=dns/txt; s=iport; t=1697828828; x=1699038428; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=KALeQZwu/E6nMCSKd3+RrezZye5Cwyf7qnDuQppITX0=; b=Z0USufTxlHRsIU9Vk/LOuelXV3gtRmmfvQjtXPw2l/tl9u9Tq0oT6Wbu MGLfslhstlicOPraEtWFSaxQdJcIfjY938dM+vs5ISrLs/nkIthxOAZEi M57dKj6U3wwQPtV1DOGRjBrv8pE0GOv3RW7gnMkR+uaEsdiU8GaB+8bBa s=; X-CSE-ConnectionGUID: 35lRIjLmRfO5mtY2jSF47g== X-CSE-MsgGUID: 9MfUrtdETnWqe/e8ZyOadA== X-IronPort-AV: E=Sophos;i="6.03,239,1694736000"; d="scan'208";a="126421333" Received: from rcdn-core-11.cisco.com ([173.37.93.147]) by rcdn-iport-7.cisco.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Oct 2023 19:07:08 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by rcdn-core-11.cisco.com (8.15.2/8.15.2) with ESMTPSA id 39KJ6XPG026372 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Fri, 20 Oct 2023 19:07:07 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar Subject: [PATCH 04/13] scsi: fnic: Rename wq_copy to hw_copy_wq 
Date: Fri, 20 Oct 2023 12:06:20 -0700 Message-Id: <20231020190629.338623-5-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20231020190629.338623-1-kartilak@cisco.com> References: <20231020190629.338623-1-kartilak@cisco.com> MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: rcdn-core-11.cisco.com Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Rename wq_copy to hw_copy_wq to accurately describe the copy workqueue. This will also help distinguish this data structure from software data structures that can be introduced. Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Signed-off-by: Karan Tilak Kumar --- drivers/scsi/fnic/fnic.h | 2 +- drivers/scsi/fnic/fnic_isr.c | 2 +- drivers/scsi/fnic/fnic_main.c | 8 ++++---- drivers/scsi/fnic/fnic_res.c | 6 +++--- drivers/scsi/fnic/fnic_scsi.c | 12 ++++++------ 5 files changed, 15 insertions(+), 15 deletions(-) diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h index c6c549c633b1..9fd01a867788 100644 --- a/drivers/scsi/fnic/fnic.h +++ b/drivers/scsi/fnic/fnic.h @@ -301,7 +301,7 @@ struct fnic { /*** FIP related data members -- end ***/ /* copy work queue cache line section */ - ____cacheline_aligned struct vnic_wq_copy wq_copy[FNIC_WQ_COPY_MAX]; + ____cacheline_aligned struct vnic_wq_copy hw_copy_wq[FNIC_WQ_COPY_MAX]; /* completion queue cache line section */ ____cacheline_aligned struct vnic_cq cq[FNIC_CQ_MAX]; diff --git a/drivers/scsi/fnic/fnic_isr.c b/drivers/scsi/fnic/fnic_isr.c index 8896758fed8c..dff9689023e4 100644 --- a/drivers/scsi/fnic/fnic_isr.c +++ b/drivers/scsi/fnic/fnic_isr.c @@ -224,7 +224,7 @@ int fnic_set_intr_mode(struct fnic *fnic) { unsigned int n = ARRAY_SIZE(fnic->rq); unsigned int m = ARRAY_SIZE(fnic->wq); - unsigned int o = ARRAY_SIZE(fnic->wq_copy); + unsigned int o = ARRAY_SIZE(fnic->hw_copy_wq); /* * Set interrupt mode (INTx, MSI, MSI-X) depending diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c index ea7b1ba27ac7..887e3e168579 100644 --- a/drivers/scsi/fnic/fnic_main.c +++ b/drivers/scsi/fnic/fnic_main.c @@ -357,7 +357,7 @@ void fnic_log_q_error(struct fnic *fnic) } for (i = 0; i < fnic->wq_copy_count; i++) { - error_status = ioread32(&fnic->wq_copy[i].ctrl->error_status); + error_status = ioread32(&fnic->hw_copy_wq[i].ctrl->error_status); if (error_status) shost_printk(KERN_ERR, fnic->lport->host, "CWQ[%d] error_status" @@ -494,7 +494,7 @@ static int fnic_cleanup(struct fnic *fnic) return err; } for (i = 0; i < fnic->wq_copy_count; i++) { - err = vnic_wq_copy_disable(&fnic->wq_copy[i]); + err = vnic_wq_copy_disable(&fnic->hw_copy_wq[i]); if (err) return err; } @@ -510,7 +510,7 @@ static int fnic_cleanup(struct fnic *fnic) for (i = 0; i < fnic->rq_count; i++) vnic_rq_clean(&fnic->rq[i], fnic_free_rq_buf); for (i = 0; i < fnic->wq_copy_count; i++) - vnic_wq_copy_clean(&fnic->wq_copy[i], + vnic_wq_copy_clean(&fnic->hw_copy_wq[i], fnic_wq_copy_cleanup_handler); for (i = 0; i < fnic->cq_count; i++) @@ -901,7 +901,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) for (i = 0; i < fnic->raw_wq_count; i++) vnic_wq_enable(&fnic->wq[i]); for (i = 0; i < fnic->wq_copy_count; i++) - vnic_wq_copy_enable(&fnic->wq_copy[i]); + vnic_wq_copy_enable(&fnic->hw_copy_wq[i]); fc_fabric_login(lp); diff --git a/drivers/scsi/fnic/fnic_res.c b/drivers/scsi/fnic/fnic_res.c index a1c9cfcace7f..109316cc4ad9 100644 --- a/drivers/scsi/fnic/fnic_res.c 
+++ b/drivers/scsi/fnic/fnic_res.c @@ -203,7 +203,7 @@ void fnic_free_vnic_resources(struct fnic *fnic) vnic_wq_free(&fnic->wq[i]); for (i = 0; i < fnic->wq_copy_count; i++) - vnic_wq_copy_free(&fnic->wq_copy[i]); + vnic_wq_copy_free(&fnic->hw_copy_wq[i]); for (i = 0; i < fnic->rq_count; i++) vnic_rq_free(&fnic->rq[i]); @@ -250,7 +250,7 @@ int fnic_alloc_vnic_resources(struct fnic *fnic) /* Allocate Copy WQs used for SCSI IOs */ for (i = 0; i < fnic->wq_copy_count; i++) { - err = vnic_wq_copy_alloc(fnic->vdev, &fnic->wq_copy[i], + err = vnic_wq_copy_alloc(fnic->vdev, &fnic->hw_copy_wq[i], (fnic->raw_wq_count + i), fnic->config.wq_copy_desc_count, sizeof(struct fcpio_host_req)); @@ -357,7 +357,7 @@ int fnic_alloc_vnic_resources(struct fnic *fnic) } for (i = 0; i < fnic->wq_copy_count; i++) { - vnic_wq_copy_init(&fnic->wq_copy[i], + vnic_wq_copy_init(&fnic->hw_copy_wq[i], 0 /* cq_index 0 - always */, error_interrupt_enable, error_interrupt_offset); diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c index 9761b2c9db48..3498a8d670b1 100644 --- a/drivers/scsi/fnic/fnic_scsi.c +++ b/drivers/scsi/fnic/fnic_scsi.c @@ -193,7 +193,7 @@ __fnic_set_state_flags(struct fnic *fnic, unsigned long st_flags, */ int fnic_fw_reset_handler(struct fnic *fnic) { - struct vnic_wq_copy *wq = &fnic->wq_copy[0]; + struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0]; int ret = 0; unsigned long flags; @@ -246,7 +246,7 @@ int fnic_fw_reset_handler(struct fnic *fnic) */ int fnic_flogi_reg_handler(struct fnic *fnic, u32 fc_id) { - struct vnic_wq_copy *wq = &fnic->wq_copy[0]; + struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0]; enum fcpio_flogi_reg_format_type format; struct fc_lport *lp = fnic->lport; u8 gw_mac[ETH_ALEN]; @@ -551,7 +551,7 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) fnic_priv(sc)->flags |= FNIC_IO_INITIALIZED; /* create copy wq desc and enqueue it */ - wq = &fnic->wq_copy[0]; + wq = &fnic->hw_copy_wq[0]; ret = fnic_queue_wq_copy_desc(fnic, wq, io_req, sc, sg_count); if (ret) { /* @@ -782,7 +782,7 @@ static inline void fnic_fcpio_ack_handler(struct fnic *fnic, u64 *ox_id_tag = (u64 *)(void *)desc; /* mark the ack state */ - wq = &fnic->wq_copy[cq_index - fnic->raw_wq_count - fnic->rq_count]; + wq = &fnic->hw_copy_wq[cq_index - fnic->raw_wq_count - fnic->rq_count]; spin_lock_irqsave(&fnic->wq_copy_lock[0], flags); fnic->fnic_stats.misc_stats.last_ack_time = jiffies; @@ -1486,7 +1486,7 @@ static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag, u32 task_req, u8 *fc_lun, struct fnic_io_req *io_req) { - struct vnic_wq_copy *wq = &fnic->wq_copy[0]; + struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0]; struct Scsi_Host *host = fnic->lport->host; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; unsigned long flags; @@ -1932,7 +1932,7 @@ static inline int fnic_queue_dr_io_req(struct fnic *fnic, struct scsi_cmnd *sc, struct fnic_io_req *io_req) { - struct vnic_wq_copy *wq = &fnic->wq_copy[0]; + struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0]; struct Scsi_Host *host = fnic->lport->host; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; struct scsi_lun fc_lun; From patchwork Fri Oct 20 19:06:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Karan Tilak Kumar \(kartilak\)" X-Patchwork-Id: 736850 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
From: Karan Tilak Kumar <kartilak@cisco.com>
Subject: [PATCH 05/13] scsi: fnic: Get copy workqueue count and interrupt mode from config
Date: Fri, 20 Oct 2023 12:06:21 -0700
Message-Id: <20231020190629.338623-6-kartilak@cisco.com>
In-Reply-To: <20231020190629.338623-1-kartilak@cisco.com>
References: <20231020190629.338623-1-kartilak@cisco.com>

Get the copy workqueue count and interrupt mode from the configuration.
The config can be changed via UCSM. Add logs to print the interrupt mode
and copy workqueue count. Add logs to print the vNIC resources.
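For illustration only (not part of the patch), a minimal sketch of the default-then-clamp
handling applied to the firmware-reported copy workqueue count in the diff that follows:
older firmware reports 0, which is treated as one copy WQ, and the value is capped at the
driver maximum. The names EXAMPLE_WQ_COPY_MAX and sanitize_wq_copy_count() are hypothetical.

#include <stdio.h>
#include <stdint.h>

#define EXAMPLE_WQ_COPY_MAX 64   /* stands in for the driver's FNIC_WQ_COPY_MAX */

static uint16_t min_u16(uint16_t a, uint16_t b)
{
	return a < b ? a : b;
}

/*
 * Older firmware does not report wq_copy_count, so treat 0 as "one copy WQ",
 * then clamp to the driver maximum -- the same default-then-clamp pattern
 * used in fnic_get_vnic_config() below.
 */
static uint16_t sanitize_wq_copy_count(uint16_t fw_value)
{
	if (fw_value == 0)
		fw_value = 1;
	return min_u16(EXAMPLE_WQ_COPY_MAX, fw_value);
}

int main(void)
{
	printf("fw 0  -> %u\n", sanitize_wq_copy_count(0));
	printf("fw 8  -> %u\n", sanitize_wq_copy_count(8));
	printf("fw 96 -> %u\n", sanitize_wq_copy_count(96));
	return 0;
}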
Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Signed-off-by: Karan Tilak Kumar --- drivers/scsi/fnic/fnic_res.c | 42 ++++++++++++++++++++++++++++++------ 1 file changed, 36 insertions(+), 6 deletions(-) diff --git a/drivers/scsi/fnic/fnic_res.c b/drivers/scsi/fnic/fnic_res.c index 109316cc4ad9..33dd27f6f24e 100644 --- a/drivers/scsi/fnic/fnic_res.c +++ b/drivers/scsi/fnic/fnic_res.c @@ -57,6 +57,8 @@ int fnic_get_vnic_config(struct fnic *fnic) GET_CONFIG(port_down_timeout); GET_CONFIG(port_down_io_retries); GET_CONFIG(luns_per_tgt); + GET_CONFIG(intr_mode); + GET_CONFIG(wq_copy_count); c->wq_enet_desc_count = min_t(u32, VNIC_FNIC_WQ_DESCS_MAX, @@ -131,6 +133,12 @@ int fnic_get_vnic_config(struct fnic *fnic) c->intr_timer = min_t(u16, VNIC_INTR_TIMER_MAX, c->intr_timer); c->intr_timer_type = c->intr_timer_type; + /* for older firmware, GET_CONFIG will not return anything */ + if (c->wq_copy_count == 0) + c->wq_copy_count = 1; + + c->wq_copy_count = min_t(u16, FNIC_WQ_COPY_MAX, c->wq_copy_count); + shost_printk(KERN_INFO, fnic->lport->host, "vNIC MAC addr %pM " "wq/wq_copy/rq %d/%d/%d\n", @@ -161,6 +169,10 @@ int fnic_get_vnic_config(struct fnic *fnic) shost_printk(KERN_INFO, fnic->lport->host, "vNIC port dn io retries %d port dn timeout %d\n", c->port_down_io_retries, c->port_down_timeout); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC wq_copy_count: %d\n", c->wq_copy_count); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC intr mode: %d\n", c->intr_mode); return 0; } @@ -187,12 +199,25 @@ int fnic_set_nic_config(struct fnic *fnic, u8 rss_default_cpu, void fnic_get_res_counts(struct fnic *fnic) { fnic->wq_count = vnic_dev_get_res_count(fnic->vdev, RES_TYPE_WQ); - fnic->raw_wq_count = fnic->wq_count - 1; - fnic->wq_copy_count = fnic->wq_count - fnic->raw_wq_count; + fnic->raw_wq_count = 1; + fnic->wq_copy_count = fnic->config.wq_copy_count; fnic->rq_count = vnic_dev_get_res_count(fnic->vdev, RES_TYPE_RQ); fnic->cq_count = vnic_dev_get_res_count(fnic->vdev, RES_TYPE_CQ); fnic->intr_count = vnic_dev_get_res_count(fnic->vdev, RES_TYPE_INTR_CTRL); + + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC fw resources wq_count: %d\n", fnic->wq_count); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC fw resources raw_wq_count: %d\n", fnic->raw_wq_count); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC fw resources wq_copy_count: %d\n", fnic->wq_copy_count); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC fw resources rq_count: %d\n", fnic->rq_count); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC fw resources cq_count: %d\n", fnic->cq_count); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC fw resources intr_count: %d\n", fnic->intr_count); } void fnic_free_vnic_resources(struct fnic *fnic) @@ -234,10 +259,15 @@ int fnic_alloc_vnic_resources(struct fnic *fnic) intr_mode == VNIC_DEV_INTR_MODE_MSIX ? 
"MSI-X" : "unknown"); - shost_printk(KERN_INFO, fnic->lport->host, "vNIC resources avail: " - "wq %d cp_wq %d raw_wq %d rq %d cq %d intr %d\n", - fnic->wq_count, fnic->wq_copy_count, fnic->raw_wq_count, - fnic->rq_count, fnic->cq_count, fnic->intr_count); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC resources avail: wq %d cp_wq %d raw_wq %d rq %d", + fnic->wq_count, fnic->wq_copy_count, + fnic->raw_wq_count, fnic->rq_count); + + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC resources avail: cq %d intr %d cpy-wq desc count %d\n", + fnic->cq_count, fnic->intr_count, + fnic->config.wq_copy_desc_count); /* Allocate Raw WQ used for FCS frames */ for (i = 0; i < fnic->raw_wq_count; i++) { From patchwork Fri Oct 20 19:06:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Karan Tilak Kumar \(kartilak\)" X-Patchwork-Id: 736530 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6311EC00A8F for ; Fri, 20 Oct 2023 19:07:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230286AbjJTTH0 (ORCPT ); Fri, 20 Oct 2023 15:07:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53358 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230297AbjJTTHT (ORCPT ); Fri, 20 Oct 2023 15:07:19 -0400 Received: from rcdn-iport-2.cisco.com (rcdn-iport-2.cisco.com [173.37.86.73]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B743CD7A; Fri, 20 Oct 2023 12:07:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=3120; q=dns/txt; s=iport; t=1697828833; x=1699038433; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=sjJkSqdln9qea/n4Fpj7gGInKVCSDN0znydRXK9ldtE=; b=CZXxTqKhayY1c+zfYKPwQa8YGh5R9HMOkbUBR/L8aBSnKg8nuIX3RpTT rVTyob7q1G5P4V4AjaaMBgL23c8SNz/Cb4KoMPVBT21UKU517SUueoUop 2hHFKb4Gl/OK+zchDtGP9rJr3+/GHFOS+7EpHRSLFG0qRmu/dmZO3VcKx s=; X-CSE-ConnectionGUID: cVOFaWG+TJGASHi4IPtMJg== X-CSE-MsgGUID: OweevoQdSeObhtMcX43nDQ== X-IronPort-AV: E=Sophos;i="6.03,239,1694736000"; d="scan'208";a="127269668" Received: from rcdn-core-11.cisco.com ([173.37.93.147]) by rcdn-iport-2.cisco.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Oct 2023 19:07:11 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by rcdn-core-11.cisco.com (8.15.2/8.15.2) with ESMTPSA id 39KJ6XPI026372 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Fri, 20 Oct 2023 19:07:11 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar Subject: [PATCH 06/13] scsi: fnic: Refactor and redefine fnic.h for multiqueue Date: Fri, 20 Oct 2023 12:06:22 -0700 Message-Id: <20231020190629.338623-7-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20231020190629.338623-1-kartilak@cisco.com> References: <20231020190629.338623-1-kartilak@cisco.com> MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: rcdn-core-11.cisco.com Precedence: bulk List-ID: X-Mailing-List: 
linux-scsi@vger.kernel.org Refactor and re-define values in fnic.h to implement multiqueue(MQ) functionality. VIC firmware allows fnic to create up to 64 copy workqueues. Update the copy workqueue max to 64. Modify the interrupt index to be in sync with the firmware to support MQ. Add irq number to the MSIX entry. Define a software workqueue table to track the status of io_reqs. Define a base for the copy workqueue. Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Signed-off-by: Karan Tilak Kumar --- drivers/scsi/fnic/fnic.h | 30 ++++++++++++++++++++++-------- 1 file changed, 22 insertions(+), 8 deletions(-) diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h index 9fd01a867788..f4390fc96323 100644 --- a/drivers/scsi/fnic/fnic.h +++ b/drivers/scsi/fnic/fnic.h @@ -163,12 +163,21 @@ do { \ #define FNIC_MAIN_NOTE(kern_level, host, fmt, args...) \ shost_printk(kern_level, host, fmt, ##args) +#define FNIC_WQ_COPY_MAX 64 +#define FNIC_WQ_MAX 1 +#define FNIC_RQ_MAX 1 +#define FNIC_CQ_MAX (FNIC_WQ_COPY_MAX + FNIC_WQ_MAX + FNIC_RQ_MAX) +#define FNIC_DFLT_IO_COMPLETIONS 256 + +#define FNIC_MQ_CQ_INDEX 2 + extern const char *fnic_state_str[]; enum fnic_intx_intr_index { FNIC_INTX_WQ_RQ_COPYWQ, - FNIC_INTX_ERR, + FNIC_INTX_DUMMY, FNIC_INTX_NOTIFY, + FNIC_INTX_ERR, FNIC_INTX_INTR_MAX, }; @@ -176,7 +185,7 @@ enum fnic_msix_intr_index { FNIC_MSIX_RQ, FNIC_MSIX_WQ, FNIC_MSIX_WQ_COPY, - FNIC_MSIX_ERR_NOTIFY, + FNIC_MSIX_ERR_NOTIFY = FNIC_MSIX_WQ_COPY + FNIC_WQ_COPY_MAX, FNIC_MSIX_INTR_MAX, }; @@ -185,6 +194,7 @@ struct fnic_msix_entry { char devname[IFNAMSIZ + 11]; irqreturn_t (*isr)(int, void *); void *devid; + int irq_num; }; enum fnic_state { @@ -194,12 +204,6 @@ enum fnic_state { FNIC_IN_ETH_TRANS_FC_MODE, }; -#define FNIC_WQ_COPY_MAX 1 -#define FNIC_WQ_MAX 1 -#define FNIC_RQ_MAX 1 -#define FNIC_CQ_MAX (FNIC_WQ_COPY_MAX + FNIC_WQ_MAX + FNIC_RQ_MAX) -#define FNIC_DFLT_IO_COMPLETIONS 256 - struct mempool; enum fnic_evt { @@ -214,6 +218,13 @@ struct fnic_event { enum fnic_evt event; }; +struct fnic_cpy_wq { + unsigned long hw_lock_flags; + u16 active_ioreq_count; + u16 ioreq_table_size; + ____cacheline_aligned struct fnic_io_req **io_req_table; +}; + /* Per-instance private data structure */ struct fnic { int fnic_num; @@ -283,6 +294,7 @@ struct fnic { mempool_t *io_sgl_pool[FNIC_SGL_NUM_CACHES]; spinlock_t io_req_lock[FNIC_IO_LOCKS]; /* locks for scsi cmnds */ + unsigned int cpy_wq_base; struct work_struct link_work; struct work_struct frame_work; struct sk_buff_head frame_queue; @@ -302,6 +314,8 @@ struct fnic { /* copy work queue cache line section */ ____cacheline_aligned struct vnic_wq_copy hw_copy_wq[FNIC_WQ_COPY_MAX]; + ____cacheline_aligned struct fnic_cpy_wq sw_copy_wq[FNIC_WQ_COPY_MAX]; + /* completion queue cache line section */ ____cacheline_aligned struct vnic_cq cq[FNIC_CQ_MAX]; From patchwork Fri Oct 20 19:06:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Karan Tilak Kumar \(kartilak\)" X-Patchwork-Id: 736848 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8A143C0032E for ; Fri, 20 Oct 2023 19:07:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232123AbjJTTHk (ORCPT ); Fri, 20 Oct 2023 15:07:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53386 "EHLO 
From: Karan Tilak Kumar <kartilak@cisco.com>
Subject: [PATCH 07/13] scsi: fnic: Modify ISRs to support multiqueue (MQ)
Date: Fri, 20 Oct 2023 12:06:23 -0700
Message-Id: <20231020190629.338623-8-kartilak@cisco.com>
In-Reply-To: <20231020190629.338623-1-kartilak@cisco.com>
References: <20231020190629.338623-1-kartilak@cisco.com>

Modify the interrupt service routines for INTx, MSI, and MSI-X to support
multiqueue. Modify the parameter list of fnic_wq_copy_cmpl_handler() to take
a cq_index. Modify fnic_cleanup() to use the new fnic_wq_copy_cmpl_handler()
call. Refactor the code that sets the interrupt mode to MSI-X into a new
function. Add a new stat for intx_dummy.
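For illustration only (not part of the patch), a minimal userspace sketch of the
IRQ-to-queue-index lookup that the MSI-X copy-WQ ISR in the diff below performs:
compute the index directly assuming contiguously allocated vectors, and fall back
to a linear search over the copy-WQ vectors otherwise. The names struct vector and
irq_to_index() are hypothetical.

#include <stdio.h>

#define NUM_VECTORS 8

struct vector {
	int irq_num;   /* like fnic->msix[i].irq_num */
};

/*
 * Fast path: assume vectors were allocated contiguously and compute the
 * index directly; otherwise search the copy-WQ vector range, mirroring
 * fnic_isr_msix_wq_copy() below.
 */
static int irq_to_index(const struct vector *vec, int base, int count, int irq)
{
	int i = irq - vec[0].irq_num;

	if (i < base || i >= base + count || vec[i].irq_num != irq) {
		for (i = base; i < base + count; i++)
			if (vec[i].irq_num == irq)
				return i;
		return -1;
	}
	return i;
}

int main(void)
{
	struct vector vec[NUM_VECTORS];
	int base = 2, count = 5;      /* e.g. 1 RQ + 1 raw WQ ahead of the copy WQs */

	for (int i = 0; i < NUM_VECTORS; i++)
		vec[i].irq_num = 100 + i; /* pretend IRQ numbers */

	printf("irq 104 -> index %d\n", irq_to_index(vec, base, count, 104));
	return 0;
}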
Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Signed-off-by: Karan Tilak Kumar --- drivers/scsi/fnic/fnic.h | 2 +- drivers/scsi/fnic/fnic_isr.c | 164 ++++++++++++++++++++++++--------- drivers/scsi/fnic/fnic_main.c | 4 +- drivers/scsi/fnic/fnic_scsi.c | 38 +++----- drivers/scsi/fnic/fnic_stats.h | 1 + 5 files changed, 139 insertions(+), 70 deletions(-) diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h index f4390fc96323..692175cfa557 100644 --- a/drivers/scsi/fnic/fnic.h +++ b/drivers/scsi/fnic/fnic.h @@ -369,7 +369,7 @@ void fnic_scsi_cleanup(struct fc_lport *); void fnic_scsi_abort_io(struct fc_lport *); void fnic_empty_scsi_cleanup(struct fc_lport *); void fnic_exch_mgr_reset(struct fc_lport *, u32, u32); -int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int); +int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do, unsigned int cq_index); int fnic_wq_cmpl_handler(struct fnic *fnic, int); int fnic_flogi_reg_handler(struct fnic *fnic, u32); void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq, diff --git a/drivers/scsi/fnic/fnic_isr.c b/drivers/scsi/fnic/fnic_isr.c index dff9689023e4..82e4e3e8ec4b 100644 --- a/drivers/scsi/fnic/fnic_isr.c +++ b/drivers/scsi/fnic/fnic_isr.c @@ -38,8 +38,13 @@ static irqreturn_t fnic_isr_legacy(int irq, void *data) fnic_log_q_error(fnic); } + if (pba & (1 << FNIC_INTX_DUMMY)) { + atomic64_inc(&fnic->fnic_stats.misc_stats.intx_dummy); + vnic_intr_return_all_credits(&fnic->intr[FNIC_INTX_DUMMY]); + } + if (pba & (1 << FNIC_INTX_WQ_RQ_COPYWQ)) { - work_done += fnic_wq_copy_cmpl_handler(fnic, io_completions); + work_done += fnic_wq_copy_cmpl_handler(fnic, io_completions, FNIC_MQ_CQ_INDEX); work_done += fnic_wq_cmpl_handler(fnic, -1); work_done += fnic_rq_cmpl_handler(fnic, -1); @@ -60,7 +65,7 @@ static irqreturn_t fnic_isr_msi(int irq, void *data) fnic->fnic_stats.misc_stats.last_isr_time = jiffies; atomic64_inc(&fnic->fnic_stats.misc_stats.isr_count); - work_done += fnic_wq_copy_cmpl_handler(fnic, io_completions); + work_done += fnic_wq_copy_cmpl_handler(fnic, io_completions, FNIC_MQ_CQ_INDEX); work_done += fnic_wq_cmpl_handler(fnic, -1); work_done += fnic_rq_cmpl_handler(fnic, -1); @@ -109,12 +114,22 @@ static irqreturn_t fnic_isr_msix_wq_copy(int irq, void *data) { struct fnic *fnic = data; unsigned long wq_copy_work_done = 0; + int i; fnic->fnic_stats.misc_stats.last_isr_time = jiffies; atomic64_inc(&fnic->fnic_stats.misc_stats.isr_count); - wq_copy_work_done = fnic_wq_copy_cmpl_handler(fnic, io_completions); - vnic_intr_return_credits(&fnic->intr[FNIC_MSIX_WQ_COPY], + i = irq - fnic->msix[0].irq_num; + if (i >= fnic->wq_copy_count + fnic->cpy_wq_base || + i < 0 || fnic->msix[i].irq_num != irq) { + for (i = fnic->cpy_wq_base; i < fnic->wq_copy_count + fnic->cpy_wq_base ; i++) { + if (fnic->msix[i].irq_num == irq) + break; + } + } + + wq_copy_work_done = fnic_wq_copy_cmpl_handler(fnic, io_completions, i); + vnic_intr_return_credits(&fnic->intr[i], wq_copy_work_done, 1 /* unmask intr */, 1 /* reset intr timer */); @@ -128,7 +143,7 @@ static irqreturn_t fnic_isr_msix_err_notify(int irq, void *data) fnic->fnic_stats.misc_stats.last_isr_time = jiffies; atomic64_inc(&fnic->fnic_stats.misc_stats.isr_count); - vnic_intr_return_all_credits(&fnic->intr[FNIC_MSIX_ERR_NOTIFY]); + vnic_intr_return_all_credits(&fnic->intr[fnic->err_intr_offset]); fnic_log_q_error(fnic); fnic_handle_link_event(fnic); @@ -186,26 +201,30 @@ int fnic_request_intr(struct fnic *fnic) fnic->msix[FNIC_MSIX_WQ].isr = fnic_isr_msix_wq; 
fnic->msix[FNIC_MSIX_WQ].devid = fnic; - sprintf(fnic->msix[FNIC_MSIX_WQ_COPY].devname, - "%.11s-scsi-wq", fnic->name); - fnic->msix[FNIC_MSIX_WQ_COPY].isr = fnic_isr_msix_wq_copy; - fnic->msix[FNIC_MSIX_WQ_COPY].devid = fnic; + for (i = fnic->cpy_wq_base; i < fnic->wq_copy_count + fnic->cpy_wq_base; i++) { + sprintf(fnic->msix[i].devname, + "%.11s-scsi-wq-%d", fnic->name, i-FNIC_MSIX_WQ_COPY); + fnic->msix[i].isr = fnic_isr_msix_wq_copy; + fnic->msix[i].devid = fnic; + } - sprintf(fnic->msix[FNIC_MSIX_ERR_NOTIFY].devname, + sprintf(fnic->msix[fnic->err_intr_offset].devname, "%.11s-err-notify", fnic->name); - fnic->msix[FNIC_MSIX_ERR_NOTIFY].isr = + fnic->msix[fnic->err_intr_offset].isr = fnic_isr_msix_err_notify; - fnic->msix[FNIC_MSIX_ERR_NOTIFY].devid = fnic; + fnic->msix[fnic->err_intr_offset].devid = fnic; + + for (i = 0; i < fnic->intr_count; i++) { + fnic->msix[i].irq_num = pci_irq_vector(fnic->pdev, i); - for (i = 0; i < ARRAY_SIZE(fnic->msix); i++) { - err = request_irq(pci_irq_vector(fnic->pdev, i), - fnic->msix[i].isr, 0, - fnic->msix[i].devname, - fnic->msix[i].devid); + err = request_irq(fnic->msix[i].irq_num, + fnic->msix[i].isr, 0, + fnic->msix[i].devname, + fnic->msix[i].devid); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "MSIX: request_irq" - " failed %d\n", err); + FNIC_ISR_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d request_irq failed with error: %d\n", + fnic->fnic_num, __func__, __LINE__, err); fnic_free_intr(fnic); break; } @@ -220,44 +239,101 @@ int fnic_request_intr(struct fnic *fnic) return err; } -int fnic_set_intr_mode(struct fnic *fnic) +int fnic_set_intr_mode_msix(struct fnic *fnic) { unsigned int n = ARRAY_SIZE(fnic->rq); unsigned int m = ARRAY_SIZE(fnic->wq); unsigned int o = ARRAY_SIZE(fnic->hw_copy_wq); + unsigned int min_irqs = n + m + 1 + 1; /*rq, raw wq, wq, err*/ /* - * Set interrupt mode (INTx, MSI, MSI-X) depending - * system capabilities. 
- * - * Try MSI-X first - * * We need n RQs, m WQs, o Copy WQs, n+m+o CQs, and n+m+o+1 INTRs * (last INTR is used for WQ/RQ errors and notification area) */ - if (fnic->rq_count >= n && - fnic->raw_wq_count >= m && - fnic->wq_copy_count >= o && - fnic->cq_count >= n + m + o) { - int vecs = n + m + o + 1; - - if (pci_alloc_irq_vectors(fnic->pdev, vecs, vecs, - PCI_IRQ_MSIX) == vecs) { + FNIC_ISR_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: rq-array size: %d wq-array size: %d copy-wq array size: %d\n", + fnic->fnic_num, __func__, __LINE__, n, m, o); + FNIC_ISR_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: rq_count: %d raw_wq_count: %d wq_copy_count: %d cq_count: %d\n", + fnic->fnic_num, __func__, __LINE__, fnic->rq_count, fnic->raw_wq_count, + fnic->wq_copy_count, fnic->cq_count); + + if (fnic->rq_count <= n && fnic->raw_wq_count <= m && + fnic->wq_copy_count <= o) { + int vec_count = 0; + int vecs = fnic->rq_count + fnic->raw_wq_count + fnic->wq_copy_count + 1; + + vec_count = pci_alloc_irq_vectors(fnic->pdev, min_irqs, vecs, + PCI_IRQ_MSIX | PCI_IRQ_AFFINITY); + FNIC_ISR_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: allocated %d MSI-X vectors\n", + fnic->fnic_num, __func__, __LINE__, vec_count); + + if (vec_count > 0) { + if (vec_count < vecs) { + FNIC_ISR_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: interrupts number mismatch: vec_count: %d vecs: %d\n", + fnic->fnic_num, __func__, __LINE__, vec_count, vecs); + if (vec_count < min_irqs) { + FNIC_ISR_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: no interrupts for copy wq\n", + fnic->fnic_num, __func__, __LINE__); + return 1; + } + } + fnic->rq_count = n; fnic->raw_wq_count = m; - fnic->wq_copy_count = o; - fnic->wq_count = m + o; - fnic->cq_count = n + m + o; - fnic->intr_count = vecs; - fnic->err_intr_offset = FNIC_MSIX_ERR_NOTIFY; - - FNIC_ISR_DBG(KERN_DEBUG, fnic->lport->host, - "Using MSI-X Interrupts\n"); - vnic_dev_set_intr_mode(fnic->vdev, - VNIC_DEV_INTR_MODE_MSIX); + fnic->cpy_wq_base = fnic->rq_count + fnic->raw_wq_count; + fnic->wq_copy_count = vec_count - n - m - 1; + fnic->wq_count = fnic->raw_wq_count + fnic->wq_copy_count; + if (fnic->cq_count != vec_count - 1) { + FNIC_ISR_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: CQ count: %d does not match MSI-X vector count: %d\n", + fnic->fnic_num, __func__, __LINE__, fnic->cq_count, vec_count); + fnic->cq_count = vec_count - 1; + } + fnic->intr_count = vec_count; + fnic->err_intr_offset = fnic->rq_count + fnic->wq_count; + + FNIC_ISR_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: rq_count: %d raw_wq_count: %d cpy_wq_base: %d\n", + fnic->fnic_num, __func__, __LINE__, fnic->rq_count, + fnic->raw_wq_count, fnic->cpy_wq_base); + + FNIC_ISR_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: wq_copy_count: %d wq_count: %d cq_count: %d\n", + fnic->fnic_num, __func__, __LINE__, fnic->wq_copy_count, + fnic->wq_count, fnic->cq_count); + + FNIC_ISR_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: intr_count: %d err_intr_offset: %u", + fnic->fnic_num, __func__, __LINE__, fnic->intr_count, + fnic->err_intr_offset); + + vnic_dev_set_intr_mode(fnic->vdev, VNIC_DEV_INTR_MODE_MSIX); + FNIC_ISR_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: fnic using MSI-X\n", fnic->fnic_num, + __func__, __LINE__); return 0; } } + return 1; +} //fnic_set_intr_mode_msix + +int fnic_set_intr_mode(struct fnic *fnic) +{ + int ret_status = 0; + + /* + * Set interrupt mode (INTx, MSI, MSI-X) depending + * system capabilities. 
+ * + * Try MSI-X first + */ + ret_status = fnic_set_intr_mode_msix(fnic); + if (ret_status == 0) + return ret_status; /* * Next try MSI diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c index 887e3e168579..66e85754c1d6 100644 --- a/drivers/scsi/fnic/fnic_main.c +++ b/drivers/scsi/fnic/fnic_main.c @@ -478,6 +478,7 @@ static int fnic_cleanup(struct fnic *fnic) { unsigned int i; int err; + int raw_wq_rq_counts; vnic_dev_disable(fnic->vdev); for (i = 0; i < fnic->intr_count; i++) @@ -497,10 +498,11 @@ static int fnic_cleanup(struct fnic *fnic) err = vnic_wq_copy_disable(&fnic->hw_copy_wq[i]); if (err) return err; + raw_wq_rq_counts = fnic->raw_wq_count + fnic->rq_count; + fnic_wq_copy_cmpl_handler(fnic, -1, i + raw_wq_rq_counts); } /* Clean up completed IOs and FCS frames */ - fnic_wq_copy_cmpl_handler(fnic, io_completions); fnic_wq_cmpl_handler(fnic, -1); fnic_rq_cmpl_handler(fnic, -1); diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c index 3498a8d670b1..f32781f8fdd0 100644 --- a/drivers/scsi/fnic/fnic_scsi.c +++ b/drivers/scsi/fnic/fnic_scsi.c @@ -1303,10 +1303,8 @@ static int fnic_fcpio_cmpl_handler(struct vnic_dev *vdev, * fnic_wq_copy_cmpl_handler * Routine to process wq copy */ -int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do) +int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do, unsigned int cq_index) { - unsigned int wq_work_done = 0; - unsigned int i, cq_index; unsigned int cur_work_done; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; u64 start_jiffies = 0; @@ -1314,28 +1312,20 @@ int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do) u64 delta_jiffies = 0; u64 delta_ms = 0; - for (i = 0; i < fnic->wq_copy_count; i++) { - cq_index = i + fnic->raw_wq_count + fnic->rq_count; - - start_jiffies = jiffies; - cur_work_done = vnic_cq_copy_service(&fnic->cq[cq_index], - fnic_fcpio_cmpl_handler, - copy_work_to_do); - end_jiffies = jiffies; - - wq_work_done += cur_work_done; - delta_jiffies = end_jiffies - start_jiffies; - if (delta_jiffies > - (u64) atomic64_read(&misc_stats->max_isr_jiffies)) { - atomic64_set(&misc_stats->max_isr_jiffies, - delta_jiffies); - delta_ms = jiffies_to_msecs(delta_jiffies); - atomic64_set(&misc_stats->max_isr_time_ms, delta_ms); - atomic64_set(&misc_stats->corr_work_done, - cur_work_done); - } + start_jiffies = jiffies; + cur_work_done = vnic_cq_copy_service(&fnic->cq[cq_index], + fnic_fcpio_cmpl_handler, + copy_work_to_do); + end_jiffies = jiffies; + delta_jiffies = end_jiffies - start_jiffies; + if (delta_jiffies > (u64) atomic64_read(&misc_stats->max_isr_jiffies)) { + atomic64_set(&misc_stats->max_isr_jiffies, delta_jiffies); + delta_ms = jiffies_to_msecs(delta_jiffies); + atomic64_set(&misc_stats->max_isr_time_ms, delta_ms); + atomic64_set(&misc_stats->corr_work_done, cur_work_done); } - return wq_work_done; + + return cur_work_done; } static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) diff --git a/drivers/scsi/fnic/fnic_stats.h b/drivers/scsi/fnic/fnic_stats.h index bdf639eef8cf..07d1556e3c32 100644 --- a/drivers/scsi/fnic/fnic_stats.h +++ b/drivers/scsi/fnic/fnic_stats.h @@ -103,6 +103,7 @@ struct misc_stats { atomic64_t rport_not_ready; atomic64_t frame_errors; atomic64_t current_port_speed; + atomic64_t intx_dummy; }; struct fnic_stats { From patchwork Fri Oct 20 19:06:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Karan Tilak Kumar \(kartilak\)" 
From: Karan Tilak Kumar <kartilak@cisco.com>
Subject: [PATCH 08/13] scsi: fnic: Define stats to track multiqueue (MQ) IOs
Date: Fri, 20 Oct 2023 12:06:24 -0700
Message-Id: <20231020190629.338623-9-kartilak@cisco.com>
In-Reply-To: <20231020190629.338623-1-kartilak@cisco.com>
References: <20231020190629.338623-1-kartilak@cisco.com>

Define an array to track IOs for the different queues. Print the per-queue
IO stats in fnic_get_stats_data().

Reviewed-by: Sesidhar Baddela <sebaddel@cisco.com>
Reviewed-by: Arulprabhu Ponnusamy <arulponn@cisco.com>
Signed-off-by: Karan Tilak Kumar <kartilak@cisco.com>
---
 drivers/scsi/fnic/fnic_stats.h |  2 ++
 drivers/scsi/fnic/fnic_trace.c | 11 +++++++++++
 2 files changed, 13 insertions(+)

diff --git a/drivers/scsi/fnic/fnic_stats.h b/drivers/scsi/fnic/fnic_stats.h
index 07d1556e3c32..9d7f98c452dd 100644
--- a/drivers/scsi/fnic/fnic_stats.h
+++ b/drivers/scsi/fnic/fnic_stats.h
@@ -2,6 +2,7 @@
 /* Copyright 2013 Cisco Systems, Inc.  All rights reserved.
  */
 #ifndef _FNIC_STATS_H_
 #define _FNIC_STATS_H_
+#define FNIC_MQ_MAX_QUEUES 64

 struct stats_timestamps {
 	struct timespec64 last_reset_time;
@@ -26,6 +27,7 @@ struct io_path_stats {
 	atomic64_t io_btw_10000_to_30000_msec;
 	atomic64_t io_greater_than_30000_msec;
 	atomic64_t current_max_io_time;
+	atomic64_t ios[FNIC_MQ_MAX_QUEUES];
 };

 struct abort_stats {
diff --git a/drivers/scsi/fnic/fnic_trace.c b/drivers/scsi/fnic/fnic_trace.c
index be0d7c57b242..aaa4ea02fb7c 100644
--- a/drivers/scsi/fnic/fnic_trace.c
+++ b/drivers/scsi/fnic/fnic_trace.c
@@ -204,6 +204,7 @@ int fnic_get_stats_data(struct stats_debug_info *debug,
 	int len = 0;
 	int buf_size = debug->buf_size;
 	struct timespec64 val1, val2;
+	int i = 0;

 	ktime_get_real_ts64(&val1);
 	len = scnprintf(debug->debug_buffer + len, buf_size - len,
@@ -266,6 +267,16 @@ int fnic_get_stats_data(struct stats_debug_info *debug,
 	(u64)atomic64_read(&stats->io_stats.io_btw_10000_to_30000_msec),
 	(u64)atomic64_read(&stats->io_stats.io_greater_than_30000_msec));

+	len += scnprintf(debug->debug_buffer + len, buf_size - len,
+		"------------------------------------------\n"
+		"\t\tIO Queues and cumulative IOs\n"
+		"------------------------------------------\n");
+
+	for (i = 0; i < FNIC_MQ_MAX_QUEUES; i++) {
+		len += scnprintf(debug->debug_buffer + len, buf_size - len,
+			"Q:%d -> %lld\n", i, (u64)atomic64_read(&stats->io_stats.ios[i]));
+	}
+
 	len += scnprintf(debug->debug_buffer + len, buf_size - len,
 		  "\nCurrent Max IO time : %lld\n",
 		  (u64)atomic64_read(&stats->io_stats.current_max_io_time));
verify=NO); Fri, 20 Oct 2023 19:07:16 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar Subject: [PATCH 09/13] scsi: fnic: Remove usage of host_lock Date: Fri, 20 Oct 2023 12:06:25 -0700 Message-Id: <20231020190629.338623-10-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20231020190629.338623-1-kartilak@cisco.com> References: <20231020190629.338623-1-kartilak@cisco.com> MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: rcdn-core-11.cisco.com Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Remove usage of host_lock. Replace with fnic_lock, where necessary. Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Signed-off-by: Karan Tilak Kumar --- drivers/scsi/fnic/fnic_scsi.c | 27 ++++++--------------------- 1 file changed, 6 insertions(+), 21 deletions(-) diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c index f32781f8fdd0..9a1beb3e7269 100644 --- a/drivers/scsi/fnic/fnic_scsi.c +++ b/drivers/scsi/fnic/fnic_scsi.c @@ -170,17 +170,14 @@ __fnic_set_state_flags(struct fnic *fnic, unsigned long st_flags, unsigned long clearbits) { unsigned long flags = 0; - unsigned long host_lock_flags = 0; spin_lock_irqsave(&fnic->fnic_lock, flags); - spin_lock_irqsave(fnic->lport->host->host_lock, host_lock_flags); if (clearbits) fnic->state_flags &= ~st_flags; else fnic->state_flags |= st_flags; - spin_unlock_irqrestore(fnic->lport->host->host_lock, host_lock_flags); spin_unlock_irqrestore(&fnic->fnic_lock, flags); return; @@ -479,12 +476,6 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) atomic_inc(&fnic->in_flight); - /* - * Release host lock, use driver resource specific locks from here. - * Don't re-enable interrupts in case they were disabled prior to the - * caller disabling them. 
- */ - spin_unlock(lp->host->host_lock); fnic_priv(sc)->state = FNIC_IOREQ_NOT_INITED; fnic_priv(sc)->flags = FNIC_NO_FLAGS; @@ -569,8 +560,6 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) mempool_free(io_req, fnic->io_req_pool); } atomic_dec(&fnic->in_flight); - /* acquire host lock before returning to SCSI */ - spin_lock(lp->host->host_lock); return ret; } else { atomic64_inc(&fnic_stats->io_stats.active_ios); @@ -598,8 +587,6 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) spin_unlock_irqrestore(io_lock, flags); atomic_dec(&fnic->in_flight); - /* acquire host lock before returning to SCSI */ - spin_lock(lp->host->host_lock); return ret; } @@ -1477,18 +1464,17 @@ static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag, struct fnic_io_req *io_req) { struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0]; - struct Scsi_Host *host = fnic->lport->host; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; unsigned long flags; - spin_lock_irqsave(host->host_lock, flags); + spin_lock_irqsave(&fnic->fnic_lock, flags); if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) { - spin_unlock_irqrestore(host->host_lock, flags); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); return 1; } else atomic_inc(&fnic->in_flight); - spin_unlock_irqrestore(host->host_lock, flags); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); spin_lock_irqsave(&fnic->wq_copy_lock[0], flags); @@ -1923,20 +1909,19 @@ static inline int fnic_queue_dr_io_req(struct fnic *fnic, struct fnic_io_req *io_req) { struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0]; - struct Scsi_Host *host = fnic->lport->host; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; struct scsi_lun fc_lun; int ret = 0; unsigned long intr_flags; - spin_lock_irqsave(host->host_lock, intr_flags); + spin_lock_irqsave(&fnic->fnic_lock, intr_flags); if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) { - spin_unlock_irqrestore(host->host_lock, intr_flags); + spin_unlock_irqrestore(&fnic->fnic_lock, intr_flags); return FAILED; } else atomic_inc(&fnic->in_flight); - spin_unlock_irqrestore(host->host_lock, intr_flags); + spin_unlock_irqrestore(&fnic->fnic_lock, intr_flags); spin_lock_irqsave(&fnic->wq_copy_lock[0], intr_flags); From patchwork Fri Oct 20 19:06:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Karan Tilak Kumar \(kartilak\)" X-Patchwork-Id: 736528 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 31F9DC0032E for ; Fri, 20 Oct 2023 19:08:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230162AbjJTTIC (ORCPT ); Fri, 20 Oct 2023 15:08:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53412 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230190AbjJTTHV (ORCPT ); Fri, 20 Oct 2023 15:07:21 -0400 Received: from rcdn-iport-2.cisco.com (rcdn-iport-2.cisco.com [173.37.86.73]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1C3B6D6B; Fri, 20 Oct 2023 12:07:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=10379; q=dns/txt; s=iport; t=1697828839; x=1699038439; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; 
bh=aT4ditRrYjCWvqrXVFbK+ZNBht4l64SOiW4neDulBpU=; b=Vn45NzACE0UID05XEAMnQiEMppL71HwAVeumkYG23QHmThEarh8UaacT ZKmhEMXmjCPM7UiPNSaVKOrtZVUh3N9N6v3cy30JXCgcX6cx/wMvspMkg /S9Ee8eYSPWv3++AKVf6YnNrFyediHNTw+lfj728Tix1Ti0NV2f8W1oSo 8=; X-CSE-ConnectionGUID: 6jusVPlCQBuh87WwFqxFGQ== X-CSE-MsgGUID: SNeyOtoCSkSNNFivPn+WsA== X-IronPort-AV: E=Sophos;i="6.03,239,1694736000"; d="scan'208";a="127269694" Received: from rcdn-core-11.cisco.com ([173.37.93.147]) by rcdn-iport-2.cisco.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Oct 2023 19:07:18 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by rcdn-core-11.cisco.com (8.15.2/8.15.2) with ESMTPSA id 39KJ6XPM026372 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Fri, 20 Oct 2023 19:07:17 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar Subject: [PATCH 10/13] scsi: fnic: Add support for multiqueue (MQ) in fnic_main.c Date: Fri, 20 Oct 2023 12:06:26 -0700 Message-Id: <20231020190629.338623-11-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20231020190629.338623-1-kartilak@cisco.com> References: <20231020190629.338623-1-kartilak@cisco.com> MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: rcdn-core-11.cisco.com Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Set map_queues in the fnic_host_template to fnic_mq_map_queues_cpus. Define fnic_mq_map_queues_cpus to set cpu assignment to fnic queues. Refactor code in fnic_probe to enable vnic queues before scsi_add_host. Modify notify set to the correct index. 
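The mapping pattern is worth spelling out: for each hardware queue, look up its MSI-X vector, read that vector's effective CPU affinity mask, and record the queue as the submission queue for every CPU in the mask. Below is a condensed sketch of that loop; the function name example_mq_map_queues is illustrative and logging is elided, while the vector offset of 2 (copy WQ vectors follow the RQ and raw WQ vectors) matches the implementation in the diff that follows.

#include <linux/blk-mq.h>
#include <linux/interrupt.h>
#include <linux/pci.h>
#include <scsi/scsi_host.h>

/* Condensed sketch of the CPU to queue mapping; see fnic_mq_map_queues_cpus() below. */
static void example_mq_map_queues(struct Scsi_Host *host, struct pci_dev *pdev)
{
	struct blk_mq_queue_map *qmap = &host->tag_set.map[HCTX_TYPE_DEFAULT];
	unsigned int queue, cpu;

	for (queue = 0; queue < host->tag_set.nr_hw_queues; queue++) {
		/* copy WQ interrupt vectors start after the RQ and raw WQ vectors */
		int irq = pci_irq_vector(pdev, queue + 2);
		const struct cpumask *mask;

		if (irq < 0)
			continue;

		mask = irq_get_effective_affinity_mask(irq);
		if (!mask)
			continue;

		/* every CPU serviced by this vector submits IO on this queue */
		for_each_cpu(cpu, mask)
			qmap->mq_map[cpu] = queue;
	}
}

With MSI or INTx there is only a single completion path, so the callback returns without touching the default map, as the real implementation does.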
Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Signed-off-by: Karan Tilak Kumar --- drivers/scsi/fnic/fnic.h | 2 +- drivers/scsi/fnic/fnic_main.c | 137 ++++++++++++++++++++++++++-------- 2 files changed, 105 insertions(+), 34 deletions(-) diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h index 692175cfa557..182194b6d51d 100644 --- a/drivers/scsi/fnic/fnic.h +++ b/drivers/scsi/fnic/fnic.h @@ -377,7 +377,7 @@ void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq, int fnic_fw_reset_handler(struct fnic *fnic); void fnic_terminate_rport_io(struct fc_rport *); const char *fnic_state_to_str(unsigned int state); - +void fnic_mq_map_queues_cpus(struct Scsi_Host *host); void fnic_log_q_error(struct fnic *fnic); void fnic_handle_link_event(struct fnic *fnic); diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c index 66e85754c1d6..5b60396e7349 100644 --- a/drivers/scsi/fnic/fnic_main.c +++ b/drivers/scsi/fnic/fnic_main.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include #include @@ -116,6 +117,7 @@ static const struct scsi_host_template fnic_host_template = { .shost_groups = fnic_host_groups, .track_queue_depth = 1, .cmd_size = sizeof(struct fnic_cmd_priv), + .map_queues = fnic_mq_map_queues_cpus, }; static void @@ -392,7 +394,7 @@ static int fnic_notify_set(struct fnic *fnic) err = vnic_dev_notify_set(fnic->vdev, -1); break; case VNIC_DEV_INTR_MODE_MSIX: - err = vnic_dev_notify_set(fnic->vdev, FNIC_MSIX_ERR_NOTIFY); + err = vnic_dev_notify_set(fnic->vdev, fnic->wq_copy_count + fnic->cpy_wq_base); break; default: shost_printk(KERN_ERR, fnic->lport->host, @@ -565,11 +567,6 @@ static int fnic_scsi_drv_init(struct fnic *fnic) host->max_cmd_len = FCOE_MAX_CMD_LEN; host->nr_hw_queues = fnic->wq_copy_count; - if (host->nr_hw_queues > 1) - shost_printk(KERN_ERR, host, - "fnic: blk-mq is not supported"); - - host->nr_hw_queues = fnic->wq_copy_count = 1; shost_printk(KERN_INFO, host, "fnic: can_queue: %d max_lun: %llu", @@ -582,15 +579,71 @@ static int fnic_scsi_drv_init(struct fnic *fnic) return 0; } +void fnic_mq_map_queues_cpus(struct Scsi_Host *host) +{ + const struct cpumask *mask; + unsigned int queue, cpu; + int irq_num; + struct fc_lport *lp = shost_priv(host); + struct fnic *fnic = lport_priv(lp); + struct pci_dev *l_pdev = fnic->pdev; + struct blk_mq_tag_set *set = &host->tag_set; + int intr_mode = fnic->config.intr_mode; + + if (intr_mode == VNIC_DEV_INTR_MODE_MSI || intr_mode == VNIC_DEV_INTR_MODE_INTX) { + shost_printk(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: intr_mode is not msix\n", + fnic->fnic_num, __func__, __LINE__); + return; + } + + shost_printk(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: set->nr_hw_queues: %d\n", + fnic->fnic_num, __func__, __LINE__, set->nr_hw_queues); + + for (queue = 0; queue < set->nr_hw_queues; queue++) { + if (l_pdev == NULL) { + shost_printk(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: l_pdev is null\n", + fnic->fnic_num, __func__, __LINE__); + return; + } + + irq_num = pci_irq_vector(l_pdev, queue+2); + if (irq_num < 0) { + shost_printk(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: irq_num less than zero: %d\n", + fnic->fnic_num, __func__, __LINE__, irq_num); + continue; + } + + mask = irq_get_effective_affinity_mask(irq_num); + if (!mask) { + shost_printk(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: failed to get irq_affinity map for queue: %d\n", + fnic->fnic_num, __func__, __LINE__, irq_num); + continue; + } + + for_each_cpu(cpu, mask) { + 
set->map[HCTX_TYPE_DEFAULT].mq_map[cpu] = queue; + shost_printk(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: cpu: %d <=> queue: %d\n", + fnic->fnic_num, __func__, __LINE__, cpu, irq_num); + } + } +} + static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) { struct Scsi_Host *host; struct fc_lport *lp; struct fnic *fnic; mempool_t *pool; - int err; + int err = 0; int i; unsigned long flags; + int hwq; atomic_inc(&fnic_num); /* @@ -607,8 +660,8 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) fnic = lport_priv(lp); fnic->lport = lp; fnic->ctlr.lp = lp; - fnic->link_events = 0; + fnic->pdev = pdev; snprintf(fnic->name, sizeof(fnic->name) - 1, "%s%d", DRV_NAME, host->host_no); @@ -617,11 +670,6 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) fnic->fnic_num = atomic_read(&fnic_num); fnic_stats_debugfs_init(fnic); - /* Setup PCI resources */ - pci_set_drvdata(pdev, fnic); - - fnic->pdev = pdev; - err = pci_enable_device(pdev); if (err) { shost_printk(KERN_ERR, fnic->lport->host, @@ -723,7 +771,8 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) goto err_out_dev_close; } - fnic_scsi_drv_init(fnic); + /* Setup PCI resources */ + pci_set_drvdata(pdev, fnic); fnic_get_res_counts(fnic); @@ -743,6 +792,16 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) goto err_out_clear_intr; } + fnic_scsi_drv_init(fnic); + + for (hwq = 0; hwq < fnic->wq_copy_count; hwq++) { + fnic->sw_copy_wq[hwq].ioreq_table_size = fnic->fnic_max_tag_id; + fnic->sw_copy_wq[hwq].io_req_table = + kzalloc((fnic->sw_copy_wq[hwq].ioreq_table_size + 1) * + sizeof(struct fnic_io_req *), GFP_KERNEL); + } + shost_printk(KERN_INFO, fnic->lport->host, "fnic copy wqs: %d, Q0 ioreq table size: %d\n", + fnic->wq_copy_count, fnic->sw_copy_wq[0].ioreq_table_size); /* initialize all fnic locks */ spin_lock_init(&fnic->fnic_lock); @@ -827,16 +886,32 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) /* allocate RQ buffers and post them to RQ*/ for (i = 0; i < fnic->rq_count; i++) { - vnic_rq_enable(&fnic->rq[i]); err = vnic_rq_fill(&fnic->rq[i], fnic_alloc_rq_frame); if (err) { shost_printk(KERN_ERR, fnic->lport->host, "fnic_alloc_rq_frame can't alloc " "frame\n"); - goto err_out_free_rq_buf; + goto err_out_rq_buf; } } + /* Enable all queues */ + for (i = 0; i < fnic->raw_wq_count; i++) + vnic_wq_enable(&fnic->wq[i]); + for (i = 0; i < fnic->rq_count; i++) { + if (!ioread32(&fnic->rq[i].ctrl->enable)) + vnic_rq_enable(&fnic->rq[i]); + } + for (i = 0; i < fnic->wq_copy_count; i++) + vnic_wq_copy_enable(&fnic->hw_copy_wq[i]); + + err = fnic_request_intr(fnic); + if (err) { + shost_printk(KERN_ERR, fnic->lport->host, + "Unable to request irq.\n"); + goto err_out_request_intr; + } + /* * Initialization done with PCI system, hardware, firmware. 
* Add host to SCSI @@ -845,9 +920,10 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) if (err) { shost_printk(KERN_ERR, fnic->lport->host, "fnic: scsi_add_host failed...exiting\n"); - goto err_out_free_rq_buf; + goto err_out_scsi_add_host; } + /* Start local port initiatialization */ lp->link_up = 0; @@ -871,7 +947,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) if (!fc_exch_mgr_alloc(lp, FC_CLASS_3, FCPIO_HOST_EXCH_RANGE_START, FCPIO_HOST_EXCH_RANGE_END, NULL)) { err = -ENOMEM; - goto err_out_remove_scsi_host; + goto err_out_fc_exch_mgr_alloc; } fc_lport_init_stats(lp); @@ -899,21 +975,8 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) skb_queue_head_init(&fnic->frame_queue); skb_queue_head_init(&fnic->tx_queue); - /* Enable all queues */ - for (i = 0; i < fnic->raw_wq_count; i++) - vnic_wq_enable(&fnic->wq[i]); - for (i = 0; i < fnic->wq_copy_count; i++) - vnic_wq_copy_enable(&fnic->hw_copy_wq[i]); - fc_fabric_login(lp); - err = fnic_request_intr(fnic); - if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "Unable to request irq.\n"); - goto err_out_free_exch_mgr; - } - vnic_dev_enable(fnic->vdev); for (i = 0; i < fnic->intr_count; i++) @@ -925,12 +988,15 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) err_out_free_exch_mgr: fc_exch_mgr_free(lp); -err_out_remove_scsi_host: +err_out_fc_exch_mgr_alloc: fc_remove_host(lp->host); scsi_remove_host(lp->host); -err_out_free_rq_buf: +err_out_scsi_add_host: + fnic_free_intr(fnic); +err_out_request_intr: for (i = 0; i < fnic->rq_count; i++) vnic_rq_clean(&fnic->rq[i], fnic_free_rq_buf); +err_out_rq_buf: vnic_dev_notify_unset(fnic->vdev); err_out_free_max_pool: mempool_destroy(fnic->io_sgl_pool[FNIC_SGL_CACHE_MAX]); @@ -939,6 +1005,8 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) err_out_free_ioreq_pool: mempool_destroy(fnic->io_req_pool); err_out_free_resources: + for (hwq = 0; hwq < fnic->wq_copy_count; hwq++) + kfree(fnic->sw_copy_wq[hwq].io_req_table); fnic_free_vnic_resources(fnic); err_out_clear_intr: fnic_clear_intr_mode(fnic); @@ -965,6 +1033,7 @@ static void fnic_remove(struct pci_dev *pdev) struct fnic *fnic = pci_get_drvdata(pdev); struct fc_lport *lp = fnic->lport; unsigned long flags; + int hwq; /* * Mark state so that the workqueue thread stops forwarding @@ -1025,6 +1094,8 @@ static void fnic_remove(struct pci_dev *pdev) fc_remove_host(fnic->lport->host); scsi_remove_host(fnic->lport->host); + for (hwq = 0; hwq < fnic->wq_copy_count; hwq++) + kfree(fnic->sw_copy_wq[hwq].io_req_table); fc_exch_mgr_free(fnic->lport); vnic_dev_notify_unset(fnic->vdev); fnic_free_intr(fnic); From patchwork Fri Oct 20 19:06:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Karan Tilak Kumar \(kartilak\)" X-Patchwork-Id: 736847 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7B706C004C0 for ; Fri, 20 Oct 2023 19:08:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232946AbjJTTIE (ORCPT ); Fri, 20 Oct 2023 15:08:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54828 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230026AbjJTTHX (ORCPT ); Fri, 
20 Oct 2023 15:07:23 -0400 Received: from rcdn-iport-4.cisco.com (rcdn-iport-4.cisco.com [173.37.86.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0090BD5B; Fri, 20 Oct 2023 12:07:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=3648; q=dns/txt; s=iport; t=1697828841; x=1699038441; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ZMSQe9mdkW5xEKe9HP1NH7xCq2D6OUVO+7WD8rJZ7u0=; b=lhohRmS60e/Vg440/KZF5ZY/jxy89lv6ByO8Pm3hzfOvhEkDuxFHbH+y V2H44oKEuw6F490w69/vbPULLA05Ce66hObGjaEOM07yemvbPgNG+8Ta0 ZutBpkenT4B7lBG34s2neO27QX0Tz48oyEYZ1k26GUwksH2UbSUSKqFU1 0=; X-CSE-ConnectionGUID: GwJxUmVuR2ONj7mAhNcViA== X-CSE-MsgGUID: 3aqJGy4lTFq9UWbB46EPYg== X-IronPort-AV: E=Sophos;i="6.03,239,1694736000"; d="scan'208";a="126820825" Received: from rcdn-core-11.cisco.com ([173.37.93.147]) by rcdn-iport-4.cisco.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Oct 2023 19:07:20 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by rcdn-core-11.cisco.com (8.15.2/8.15.2) with ESMTPSA id 39KJ6XPN026372 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Fri, 20 Oct 2023 19:07:19 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar Subject: [PATCH 11/13] scsi: fnic: Use fnic_lock to protect fnic structures in queuecommand Date: Fri, 20 Oct 2023 12:06:27 -0700 Message-Id: <20231020190629.338623-12-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20231020190629.338623-1-kartilak@cisco.com> References: <20231020190629.338623-1-kartilak@cisco.com> MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: rcdn-core-11.cisco.com Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org fnic does not use host_lock. fnic uses fnic_lock. Use fnic lock to protect fnic members in fnic_queuecommand. Add log messages in error cases. Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Signed-off-by: Karan Tilak Kumar --- drivers/scsi/fnic/fnic_scsi.c | 28 +++++++++++++++++++++++++--- 1 file changed, 25 insertions(+), 3 deletions(-) diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c index 9a1beb3e7269..ffdbdbfcdf57 100644 --- a/drivers/scsi/fnic/fnic_scsi.c +++ b/drivers/scsi/fnic/fnic_scsi.c @@ -424,14 +424,27 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) int io_lock_acquired = 0; struct fc_rport_libfc_priv *rp; - if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) + spin_lock_irqsave(&fnic->fnic_lock, flags); + + if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: fnic IO blocked flags: 0x%lx. Returning SCSI_MLQUEUE_HOST_BUSY\n", + fnic->fnic_num, __func__, __LINE__, fnic->state_flags); return SCSI_MLQUEUE_HOST_BUSY; + } - if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_FWRESET))) + if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_FWRESET))) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: fnic flags: 0x%lx. 
Returning SCSI_MLQUEUE_HOST_BUSY\n", + fnic->fnic_num, __func__, __LINE__, fnic->state_flags); return SCSI_MLQUEUE_HOST_BUSY; + } rport = starget_to_rport(scsi_target(sc->device)); if (!rport) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "returning DID_NO_CONNECT for IO as rport is NULL\n"); sc->result = DID_NO_CONNECT << 16; @@ -441,6 +454,7 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) ret = fc_remote_port_chkready(rport); if (ret) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "rport is not ready\n"); atomic64_inc(&fnic_stats->misc_stats.rport_not_ready); @@ -451,6 +465,7 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) rp = rport->dd_data; if (!rp || rp->rp_state == RPORT_ST_DELETE) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "rport 0x%x removed, returning DID_NO_CONNECT\n", rport->port_id); @@ -462,6 +477,7 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) } if (rp->rp_state != RPORT_ST_READY) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "rport 0x%x in state 0x%x, returning DID_IMM_RETRY\n", rport->port_id, rp->rp_state); @@ -471,11 +487,17 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) return 0; } - if (lp->state != LPORT_ST_READY || !(lp->link_up)) + if (lp->state != LPORT_ST_READY || !(lp->link_up)) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: state not ready: %d/link not up: %d Returning HOST_BUSY\n", + fnic->fnic_num, __func__, __LINE__, lp->state, lp->link_up); return SCSI_MLQUEUE_HOST_BUSY; + } atomic_inc(&fnic->in_flight); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); fnic_priv(sc)->state = FNIC_IOREQ_NOT_INITED; fnic_priv(sc)->flags = FNIC_NO_FLAGS; From patchwork Fri Oct 20 19:06:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Karan Tilak Kumar \(kartilak\)" X-Patchwork-Id: 736846 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AA784C001DF for ; Fri, 20 Oct 2023 19:08:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233155AbjJTTI3 (ORCPT ); Fri, 20 Oct 2023 15:08:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43580 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230321AbjJTTH2 (ORCPT ); Fri, 20 Oct 2023 15:07:28 -0400 Received: from rcdn-iport-6.cisco.com (rcdn-iport-6.cisco.com [173.37.86.77]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 68E1FD46; Fri, 20 Oct 2023 12:07:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=57048; q=dns/txt; s=iport; t=1697828842; x=1699038442; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=zPYZHw9Xur2XqhrJ4FjO7dshiZ/7USPAkS9gOs0T6h0=; b=KmV0WaH5xL4XENpys5p8/bdTTO5hw3618icodAJSD0ubnIEfyde9as9H JUdLPoimsQjp6VLvVGy6yBC5lfT7G6voMhowi4KcfC45gzYgzqI8qymgl LUsN+xGIA1A0zdtcDg1BweRzl8qS74Ya7CmiWJteg+7ZVJQh/q7+zlpBp A=; X-CSE-ConnectionGUID: DnXP316OQBa15gpQh3n46Q== X-CSE-MsgGUID: ow7Ns1tqRL2aQe8xZ7wUrQ== X-IronPort-AV: E=Sophos;i="6.03,239,1694736000"; 
d="scan'208";a="126928635" Received: from rcdn-core-11.cisco.com ([173.37.93.147]) by rcdn-iport-6.cisco.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Oct 2023 19:07:22 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by rcdn-core-11.cisco.com (8.15.2/8.15.2) with ESMTPSA id 39KJ6XPO026372 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Fri, 20 Oct 2023 19:07:21 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar Subject: [PATCH 12/13] scsi: fnic: Add support for multiqueue (MQ) in fnic driver Date: Fri, 20 Oct 2023 12:06:28 -0700 Message-Id: <20231020190629.338623-13-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20231020190629.338623-1-kartilak@cisco.com> References: <20231020190629.338623-1-kartilak@cisco.com> MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: rcdn-core-11.cisco.com Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Implement support for MQ in fnic driver: The block multiqueue layer issues IO to the fnic driver with an MQ tag. Use the mqtag and derive a tag from it. Derive the hardware queue from the mqtag and use it in all paths. Modify queuecommand to handle mqtag. Replace wq and cq indices to support MQ. Replace the zeroth queue with a hardware queue. Implement spin locks on a per hardware queue basis. Replace io_lock with per hardware queue spinlock. Implement out of range tag checks. Allocate an io_req_table to track status of the io_req. Test the driver by building it, loading it, and configuring 64 queues in UCSM. Issue IOs using Medusa on multiple fnics. Enable/disable links to exercise the abort and clean up path. 
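The tag plumbing underpins all of the above: the block layer packs the hardware queue number and the per-queue tag into one 32-bit unique tag, and the driver unpacks it in queuecommand and in every completion handler to select the right hw_copy_wq, per-queue lock, and io_req_table slot. A minimal sketch of that decode step follows; the helper name example_decode_mqtag and its error return are illustrative, and the real checks live in fnic_queuecommand() in the diff below.

#include <linux/blk-mq.h>
#include <scsi/scsi_cmnd.h>

/*
 * Sketch only: split the block layer unique tag into the hardware queue
 * number (upper bits) and the per-queue tag (lower bits), and reject
 * tags that fall outside the per-queue io_req_table.
 */
static int example_decode_mqtag(struct scsi_cmnd *sc, unsigned int max_tag_id,
				uint16_t *hwq, uint32_t *tag)
{
	uint32_t mqtag = blk_mq_unique_tag(scsi_cmd_to_rq(sc));

	*hwq = blk_mq_unique_tag_to_hwq(mqtag);
	*tag = blk_mq_unique_tag_to_tag(mqtag);

	if (*tag >= max_tag_id)
		return -EINVAL;	/* out of range: fail the IO rather than index past the table */

	return 0;
}

The per-queue tag is what indexes sw_copy_wq[hwq].io_req_table[], which is why the table is sized from fnic_max_tag_id in fnic_probe and why the completion handlers below complain when a completion arrives with an hwq that does not match its cq_index.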
Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Tested-by: Karan Tilak Kumar Signed-off-by: Karan Tilak Kumar --- drivers/scsi/fnic/fnic.h | 2 - drivers/scsi/fnic/fnic_io.h | 2 + drivers/scsi/fnic/fnic_main.c | 3 - drivers/scsi/fnic/fnic_scsi.c | 579 +++++++++++++++++++++------------- 4 files changed, 359 insertions(+), 227 deletions(-) diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h index 182194b6d51d..bfc333366d1f 100644 --- a/drivers/scsi/fnic/fnic.h +++ b/drivers/scsi/fnic/fnic.h @@ -36,7 +36,6 @@ #define FNIC_MIN_IO_REQ 256 /* Min IO throttle count */ #define FNIC_MAX_IO_REQ 1024 /* scsi_cmnd tag map entries */ #define FNIC_DFLT_IO_REQ 256 /* Default scsi_cmnd tag map entries */ -#define FNIC_IO_LOCKS 64 /* IO locks: power of 2 */ #define FNIC_DFLT_QUEUE_DEPTH 256 #define FNIC_STATS_RATE_LIMIT 4 /* limit rate at which stats are pulled up */ @@ -292,7 +291,6 @@ struct fnic { struct fnic_host_tag *tags; mempool_t *io_req_pool; mempool_t *io_sgl_pool[FNIC_SGL_NUM_CACHES]; - spinlock_t io_req_lock[FNIC_IO_LOCKS]; /* locks for scsi cmnds */ unsigned int cpy_wq_base; struct work_struct link_work; diff --git a/drivers/scsi/fnic/fnic_io.h b/drivers/scsi/fnic/fnic_io.h index f4c8769df312..5895ead20e14 100644 --- a/drivers/scsi/fnic/fnic_io.h +++ b/drivers/scsi/fnic/fnic_io.h @@ -52,6 +52,8 @@ struct fnic_io_req { unsigned long start_time; /* in jiffies */ struct completion *abts_done; /* completion for abts */ struct completion *dr_done; /* completion for device reset */ + unsigned int tag; + struct scsi_cmnd *sc; /* midlayer's cmd pointer */ }; enum fnic_port_speeds { diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c index 5b60396e7349..9ab115a5fc74 100644 --- a/drivers/scsi/fnic/fnic_main.c +++ b/drivers/scsi/fnic/fnic_main.c @@ -816,9 +816,6 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) fnic->fw_ack_index[i] = -1; } - for (i = 0; i < FNIC_IO_LOCKS; i++) - spin_lock_init(&fnic->io_req_lock[i]); - err = -ENOMEM; fnic->io_req_pool = mempool_create_slab_pool(2, fnic_io_req_cache); if (!fnic->io_req_pool) diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c index ffdbdbfcdf57..7ca34a76f84e 100644 --- a/drivers/scsi/fnic/fnic_scsi.c +++ b/drivers/scsi/fnic/fnic_scsi.c @@ -92,20 +92,6 @@ static const char *fnic_fcpio_status_to_str(unsigned int status) static void fnic_cleanup_io(struct fnic *fnic); -static inline spinlock_t *fnic_io_lock_hash(struct fnic *fnic, - struct scsi_cmnd *sc) -{ - u32 hash = scsi_cmd_to_rq(sc)->tag & (FNIC_IO_LOCKS - 1); - - return &fnic->io_req_lock[hash]; -} - -static inline spinlock_t *fnic_io_lock_tag(struct fnic *fnic, - int tag) -{ - return &fnic->io_req_lock[tag & (FNIC_IO_LOCKS - 1)]; -} - /* * Unmap the data buffer and sense buffer for an io_req, * also unmap and free the device-private scatter/gather list. @@ -129,23 +115,23 @@ static void fnic_release_ioreq_buf(struct fnic *fnic, } /* Free up Copy Wq descriptors. 
Called with copy_wq lock held */ -static int free_wq_copy_descs(struct fnic *fnic, struct vnic_wq_copy *wq) +static int free_wq_copy_descs(struct fnic *fnic, struct vnic_wq_copy *wq, unsigned int hwq) { /* if no Ack received from firmware, then nothing to clean */ - if (!fnic->fw_ack_recd[0]) + if (!fnic->fw_ack_recd[hwq]) return 1; /* * Update desc_available count based on number of freed descriptors * Account for wraparound */ - if (wq->to_clean_index <= fnic->fw_ack_index[0]) - wq->ring.desc_avail += (fnic->fw_ack_index[0] + if (wq->to_clean_index <= fnic->fw_ack_index[hwq]) + wq->ring.desc_avail += (fnic->fw_ack_index[hwq] - wq->to_clean_index + 1); else wq->ring.desc_avail += (wq->ring.desc_count - wq->to_clean_index - + fnic->fw_ack_index[0] + 1); + + fnic->fw_ack_index[hwq] + 1); /* * just bump clean index to ack_index+1 accounting for wraparound @@ -153,10 +139,10 @@ static int free_wq_copy_descs(struct fnic *fnic, struct vnic_wq_copy *wq) * to_clean_index and fw_ack_index, both inclusive */ wq->to_clean_index = - (fnic->fw_ack_index[0] + 1) % wq->ring.desc_count; + (fnic->fw_ack_index[hwq] + 1) % wq->ring.desc_count; /* we have processed the acks received so far */ - fnic->fw_ack_recd[0] = 0; + fnic->fw_ack_recd[hwq] = 0; return 0; } @@ -207,7 +193,7 @@ int fnic_fw_reset_handler(struct fnic *fnic) spin_lock_irqsave(&fnic->wq_copy_lock[0], flags); if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[0]) - free_wq_copy_descs(fnic, wq); + free_wq_copy_descs(fnic, wq, 0); if (!vnic_wq_copy_desc_avail(wq)) ret = -EAGAIN; @@ -253,7 +239,7 @@ int fnic_flogi_reg_handler(struct fnic *fnic, u32 fc_id) spin_lock_irqsave(&fnic->wq_copy_lock[0], flags); if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[0]) - free_wq_copy_descs(fnic, wq); + free_wq_copy_descs(fnic, wq, 0); if (!vnic_wq_copy_desc_avail(wq)) { ret = -EAGAIN; @@ -303,7 +289,9 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic, struct vnic_wq_copy *wq, struct fnic_io_req *io_req, struct scsi_cmnd *sc, - int sg_count) + int sg_count, + uint32_t mqtag, + uint16_t hwq) { struct scatterlist *sg; struct fc_rport *rport = starget_to_rport(scsi_target(sc->device)); @@ -311,7 +299,6 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic, struct host_sg_desc *desc; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; unsigned int i; - unsigned long intr_flags; int flags; u8 exch_flags; struct scsi_lun fc_lun; @@ -351,13 +338,10 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic, int_to_scsilun(sc->device->lun, &fc_lun); /* Enqueue the descriptor in the Copy WQ */ - spin_lock_irqsave(&fnic->wq_copy_lock[0], intr_flags); - - if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[0]) - free_wq_copy_descs(fnic, wq); + if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[hwq]) + free_wq_copy_descs(fnic, wq, hwq); if (unlikely(!vnic_wq_copy_desc_avail(wq))) { - spin_unlock_irqrestore(&fnic->wq_copy_lock[0], intr_flags); FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, "fnic_queue_wq_copy_desc failure - no descriptors\n"); atomic64_inc(&misc_stats->io_cpwq_alloc_failures); @@ -375,7 +359,7 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic, (rp->flags & FC_RP_FLAGS_RETRY)) exch_flags |= FCPIO_ICMND_SRFLAG_RETRY; - fnic_queue_wq_copy_desc_icmnd_16(wq, scsi_cmd_to_rq(sc)->tag, + fnic_queue_wq_copy_desc_icmnd_16(wq, mqtag, 0, exch_flags, io_req->sgl_cnt, SCSI_SENSE_BUFFERSIZE, io_req->sgl_list_pa, @@ -396,7 +380,6 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic, 
atomic64_set(&fnic->fnic_stats.fw_stats.max_fw_reqs, atomic64_read(&fnic->fnic_stats.fw_stats.active_fw_reqs)); - spin_unlock_irqrestore(&fnic->wq_copy_lock[0], intr_flags); return 0; } @@ -405,22 +388,20 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic, * Routine to send a scsi cdb * Called with host_lock held and interrupts disabled. */ -static int fnic_queuecommand_lck(struct scsi_cmnd *sc) +static int fnic_queuecommand_lck(struct scsi_cmnd *sc, uint32_t mqtag, uint16_t hwq) { void (*done)(struct scsi_cmnd *) = scsi_done; - const int tag = scsi_cmd_to_rq(sc)->tag; struct fc_lport *lp = shost_priv(sc->device->host); struct fc_rport *rport; struct fnic_io_req *io_req = NULL; struct fnic *fnic = lport_priv(lp); struct fnic_stats *fnic_stats = &fnic->fnic_stats; struct vnic_wq_copy *wq; - int ret; + int ret = 1; u64 cmd_trace; int sg_count = 0; unsigned long flags = 0; unsigned long ptr; - spinlock_t *io_lock = NULL; int io_lock_acquired = 0; struct fc_rport_libfc_priv *rp; @@ -514,7 +495,7 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) sg_count = scsi_dma_map(sc); if (sg_count < 0) { FNIC_TRACE(fnic_queuecommand, sc->device->host->host_no, - tag, sc, 0, sc->cmnd[0], sg_count, fnic_priv(sc)->state); + mqtag, sc, 0, sc->cmnd[0], sg_count, fnic_priv(sc)->state); mempool_free(io_req, fnic->io_req_pool); goto out; } @@ -549,11 +530,9 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) } /* - * Will acquire lock defore setting to IO initialized. + * Will acquire lock before setting to IO initialized. */ - - io_lock = fnic_io_lock_hash(fnic, sc); - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); /* initialize rest of io_req */ io_lock_acquired = 1; @@ -562,21 +541,34 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) fnic_priv(sc)->state = FNIC_IOREQ_CMD_PENDING; fnic_priv(sc)->io_req = io_req; fnic_priv(sc)->flags |= FNIC_IO_INITIALIZED; + io_req->sc = sc; + + if (fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] != NULL) { + WARN(1, "fnic<%d>: %s: hwq: %d tag 0x%x already exists\n", + fnic->fnic_num, __func__, hwq, blk_mq_unique_tag_to_tag(mqtag)); + return SCSI_MLQUEUE_HOST_BUSY; + } + + fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] = io_req; + io_req->tag = mqtag; /* create copy wq desc and enqueue it */ - wq = &fnic->hw_copy_wq[0]; - ret = fnic_queue_wq_copy_desc(fnic, wq, io_req, sc, sg_count); + wq = &fnic->hw_copy_wq[hwq]; + atomic64_inc(&fnic_stats->io_stats.ios[hwq]); + ret = fnic_queue_wq_copy_desc(fnic, wq, io_req, sc, sg_count, mqtag, hwq); if (ret) { /* * In case another thread cancelled the request, * refetch the pointer under the lock. 
*/ FNIC_TRACE(fnic_queuecommand, sc->device->host->host_no, - tag, sc, 0, 0, 0, fnic_flags_and_state(sc)); + mqtag, sc, 0, 0, 0, fnic_flags_and_state(sc)); io_req = fnic_priv(sc)->io_req; fnic_priv(sc)->io_req = NULL; + if (io_req) + fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] = NULL; fnic_priv(sc)->state = FNIC_IOREQ_CMD_COMPLETE; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); if (io_req) { fnic_release_ioreq_buf(fnic, io_req, sc); mempool_free(io_req, fnic->io_req_pool); @@ -601,18 +593,43 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) sc->cmnd[5]); FNIC_TRACE(fnic_queuecommand, sc->device->host->host_no, - tag, sc, io_req, sg_count, cmd_trace, + mqtag, sc, io_req, sg_count, cmd_trace, fnic_flags_and_state(sc)); /* if only we issued IO, will we have the io lock */ if (io_lock_acquired) - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); atomic_dec(&fnic->in_flight); return ret; } -DEF_SCSI_QCMD(fnic_queuecommand) +int fnic_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc) +{ + struct request *const rq = scsi_cmd_to_rq(sc); + int rc; + struct fc_lport *lp = shost_priv(sc->device->host); + struct fnic *fnic = lport_priv(lp); + uint32_t mqtag = 0; + int tag = 0; + uint16_t hwq = 0; + + mqtag = blk_mq_unique_tag(rq); + hwq = blk_mq_unique_tag_to_hwq(mqtag); + tag = blk_mq_unique_tag_to_tag(mqtag); + + if (tag >= fnic->fnic_max_tag_id) { + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: Out of range tag: 0x%x\n", + fnic->fnic_num, __func__, tag); + sc->result = DID_ERROR << 16; + scsi_done(sc); + return 0; + } + + rc = fnic_queuecommand_lck(sc, mqtag, hwq); + return rc; +} /* * fnic_fcpio_fw_reset_cmpl_handler @@ -789,20 +806,21 @@ static inline void fnic_fcpio_ack_handler(struct fnic *fnic, u16 request_out = desc->u.ack.request_out; unsigned long flags; u64 *ox_id_tag = (u64 *)(void *)desc; + unsigned int wq_index = cq_index; /* mark the ack state */ - wq = &fnic->hw_copy_wq[cq_index - fnic->raw_wq_count - fnic->rq_count]; - spin_lock_irqsave(&fnic->wq_copy_lock[0], flags); + wq = &fnic->hw_copy_wq[cq_index]; + spin_lock_irqsave(&fnic->wq_copy_lock[wq_index], flags); fnic->fnic_stats.misc_stats.last_ack_time = jiffies; if (is_ack_index_in_range(wq, request_out)) { - fnic->fw_ack_index[0] = request_out; - fnic->fw_ack_recd[0] = 1; + fnic->fw_ack_index[wq_index] = request_out; + fnic->fw_ack_recd[wq_index] = 1; } else atomic64_inc( &fnic->fnic_stats.misc_stats.ack_index_out_of_range); - spin_unlock_irqrestore(&fnic->wq_copy_lock[0], flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[wq_index], flags); FNIC_TRACE(fnic_fcpio_ack_handler, fnic->lport->host->host_no, 0, 0, ox_id_tag[2], ox_id_tag[3], ox_id_tag[4], ox_id_tag[5]); @@ -812,12 +830,12 @@ static inline void fnic_fcpio_ack_handler(struct fnic *fnic, * fnic_fcpio_icmnd_cmpl_handler * Routine to handle icmnd completions */ -static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, +static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, unsigned int cq_index, struct fcpio_fw_req *desc) { u8 type; u8 hdr_status; - struct fcpio_tag tag; + struct fcpio_tag ftag; u32 id; u64 xfer_len = 0; struct fcpio_icmnd_cmpl *icmnd_cmpl; @@ -825,27 +843,49 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, struct scsi_cmnd *sc; struct fnic_stats *fnic_stats = &fnic->fnic_stats; unsigned long flags; - spinlock_t *io_lock; u64 cmd_trace; unsigned long start_time; unsigned long 
io_duration_time; + unsigned int hwq = 0; + unsigned int mqtag = 0; + unsigned int tag = 0; /* Decode the cmpl description to get the io_req id */ - fcpio_header_dec(&desc->hdr, &type, &hdr_status, &tag); - fcpio_tag_id_dec(&tag, &id); + fcpio_header_dec(&desc->hdr, &type, &hdr_status, &ftag); + fcpio_tag_id_dec(&ftag, &id); icmnd_cmpl = &desc->u.icmnd_cmpl; - if (id >= fnic->fnic_max_tag_id) { + mqtag = id; + tag = blk_mq_unique_tag_to_tag(mqtag); + hwq = blk_mq_unique_tag_to_hwq(mqtag); + + if (hwq != cq_index) { + shost_printk(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x cq index: %d ", + fnic->fnic_num, __func__, __LINE__, hwq, mqtag, tag, cq_index); + shost_printk(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: hdr status: %s icmnd completion on the wrong queue\n", + fnic->fnic_num, __func__, __LINE__, + fnic_fcpio_status_to_str(hdr_status)); + } + + if (tag >= fnic->fnic_max_tag_id) { shost_printk(KERN_ERR, fnic->lport->host, - "Tag out of range tag %x hdr status = %s\n", - id, fnic_fcpio_status_to_str(hdr_status)); + "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x cq index: %d ", + fnic->fnic_num, __func__, __LINE__, hwq, mqtag, tag, cq_index); + shost_printk(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: hdr status: %s Out of range tag\n", + fnic->fnic_num, __func__, __LINE__, + fnic_fcpio_status_to_str(hdr_status)); return; } + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); sc = scsi_host_find_tag(fnic->lport->host, id); WARN_ON_ONCE(!sc); if (!sc) { atomic64_inc(&fnic_stats->io_stats.sc_null); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); shost_printk(KERN_ERR, fnic->lport->host, "icmnd_cmpl sc is null - " "hdr status = %s tag = 0x%x desc = 0x%p\n", @@ -861,14 +901,19 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, return; } - io_lock = fnic_io_lock_hash(fnic, sc); - spin_lock_irqsave(io_lock, flags); io_req = fnic_priv(sc)->io_req; + if (fnic->sw_copy_wq[hwq].io_req_table[tag] != io_req) { + WARN(1, "%s: %d: hwq: %d mqtag: 0x%x tag: 0x%x io_req tag mismatch\n", + __func__, __LINE__, hwq, mqtag, tag); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); + return; + } + WARN_ON_ONCE(!io_req); if (!io_req) { atomic64_inc(&fnic_stats->io_stats.ioreq_null); fnic_priv(sc)->flags |= FNIC_IO_REQ_NULL; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); shost_printk(KERN_ERR, fnic->lport->host, "icmnd_cmpl io_req is null - " "hdr status = %s tag = 0x%x sc 0x%p\n", @@ -892,7 +937,7 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, */ fnic_priv(sc)->flags |= FNIC_IO_DONE; fnic_priv(sc)->flags |= FNIC_IO_ABTS_PENDING; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); if(FCPIO_ABORTED == hdr_status) fnic_priv(sc)->flags |= FNIC_IO_ABORTED; @@ -980,7 +1025,11 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, /* Break link with the SCSI command */ fnic_priv(sc)->io_req = NULL; + io_req->sc = NULL; fnic_priv(sc)->flags |= FNIC_IO_DONE; + fnic->sw_copy_wq[hwq].io_req_table[tag] = NULL; + + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); if (hdr_status != FCPIO_SUCCESS) { atomic64_inc(&fnic_stats->io_stats.io_failures); @@ -1014,7 +1063,6 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, /* Call SCSI completion function to complete the IO */ scsi_done(sc); - spin_unlock_irqrestore(io_lock, flags); mempool_free(io_req, fnic->io_req_pool); @@ -1051,12 +1099,12 @@ static 
void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, /* fnic_fcpio_itmf_cmpl_handler * Routine to handle itmf completions */ -static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, +static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, unsigned int cq_index, struct fcpio_fw_req *desc) { u8 type; u8 hdr_status; - struct fcpio_tag tag; + struct fcpio_tag ftag; u32 id; struct scsi_cmnd *sc; struct fnic_io_req *io_req; @@ -1065,35 +1113,76 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, struct terminate_stats *term_stats = &fnic->fnic_stats.term_stats; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; unsigned long flags; - spinlock_t *io_lock; unsigned long start_time; + unsigned int hwq = cq_index; + unsigned int mqtag; + unsigned int tag; - fcpio_header_dec(&desc->hdr, &type, &hdr_status, &tag); - fcpio_tag_id_dec(&tag, &id); + fcpio_header_dec(&desc->hdr, &type, &hdr_status, &ftag); + fcpio_tag_id_dec(&ftag, &id); + + mqtag = id & FNIC_TAG_MASK; + tag = blk_mq_unique_tag_to_tag(id & FNIC_TAG_MASK); + hwq = blk_mq_unique_tag_to_hwq(id & FNIC_TAG_MASK); + + if (hwq != cq_index) { + shost_printk(KERN_ERR, fnic->lport->host, + "%s: %d: hwq: %d mqtag: 0x%x tag: 0x%x cq index: %d ", + __func__, __LINE__, hwq, mqtag, tag, cq_index); + shost_printk(KERN_ERR, fnic->lport->host, + "%s: %d: hdr status: %s ITMF completion on the wrong queue\n", + __func__, __LINE__, + fnic_fcpio_status_to_str(hdr_status)); + } - if ((id & FNIC_TAG_MASK) >= fnic->fnic_max_tag_id) { + if (tag > fnic->fnic_max_tag_id) { shost_printk(KERN_ERR, fnic->lport->host, - "Tag out of range tag %x hdr status = %s\n", - id, fnic_fcpio_status_to_str(hdr_status)); + "%s: %d: hwq: %d mqtag: 0x%x tag: 0x%x cq index: %d ", + __func__, __LINE__, hwq, mqtag, tag, cq_index); + shost_printk(KERN_ERR, fnic->lport->host, + "%s: %d: hdr status: %s Tag out of range\n", + __func__, __LINE__, + fnic_fcpio_status_to_str(hdr_status)); + return; + } else if ((tag == fnic->fnic_max_tag_id) && !(id & FNIC_TAG_DEV_RST)) { + shost_printk(KERN_ERR, fnic->lport->host, + "%s: %d: hwq: %d mqtag: 0x%x tag: 0x%x cq index: %d ", + __func__, __LINE__, hwq, mqtag, tag, cq_index); + shost_printk(KERN_ERR, fnic->lport->host, + "%s: %d: hdr status: %s Tag out of range\n", + __func__, __LINE__, + fnic_fcpio_status_to_str(hdr_status)); return; } - sc = scsi_host_find_tag(fnic->lport->host, id & FNIC_TAG_MASK); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); + + /* If it is sg3utils allocated SC then tag_id + * is max_tag_id and SC is retrieved from io_req + */ + if ((mqtag == fnic->fnic_max_tag_id) && (id & FNIC_TAG_DEV_RST)) { + io_req = fnic->sw_copy_wq[hwq].io_req_table[tag]; + if (io_req) + sc = io_req->sc; + } else { + sc = scsi_host_find_tag(fnic->lport->host, id & FNIC_TAG_MASK); + } + WARN_ON_ONCE(!sc); if (!sc) { atomic64_inc(&fnic_stats->io_stats.sc_null); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); shost_printk(KERN_ERR, fnic->lport->host, "itmf_cmpl sc is null - hdr status = %s tag = 0x%x\n", fnic_fcpio_status_to_str(hdr_status), id); return; } - io_lock = fnic_io_lock_hash(fnic, sc); - spin_lock_irqsave(io_lock, flags); + io_req = fnic_priv(sc)->io_req; WARN_ON_ONCE(!io_req); if (!io_req) { atomic64_inc(&fnic_stats->io_stats.ioreq_null); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); fnic_priv(sc)->flags |= FNIC_IO_ABT_TERM_REQ_NULL; shost_printk(KERN_ERR, fnic->lport->host, "itmf_cmpl io_req is null - " @@ -1114,7 +1203,7 @@ static void 
fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, fnic_priv(sc)->flags |= FNIC_DEV_RST_DONE; if (io_req->abts_done) complete(io_req->abts_done); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } else if (id & FNIC_TAG_ABORT) { /* Completion of abort cmd */ switch (hdr_status) { @@ -1149,7 +1238,7 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, } if (fnic_priv(sc)->state != FNIC_IOREQ_ABTS_PENDING) { /* This is a late completion. Ignore it */ - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return; } @@ -1175,14 +1264,14 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, */ if (io_req->abts_done) { complete(io_req->abts_done); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } else { FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "abts cmpl, completing IO\n"); fnic_priv(sc)->io_req = NULL; sc->result = (DID_ERROR << 16); - - spin_unlock_irqrestore(io_lock, flags); + fnic->sw_copy_wq[hwq].io_req_table[tag] = NULL; + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); fnic_release_ioreq_buf(fnic, io_req, sc); mempool_free(io_req, fnic->io_req_pool); @@ -1208,7 +1297,7 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, /* Completion of device reset */ fnic_priv(sc)->lr_status = hdr_status; if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); fnic_priv(sc)->flags |= FNIC_DEV_RST_ABTS_PENDING; FNIC_TRACE(fnic_fcpio_itmf_cmpl_handler, sc->device->host->host_no, id, sc, @@ -1223,7 +1312,7 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, } if (fnic_priv(sc)->flags & FNIC_DEV_RST_TIMED_OUT) { /* Need to wait for terminate completion */ - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); FNIC_TRACE(fnic_fcpio_itmf_cmpl_handler, sc->device->host->host_no, id, sc, jiffies_to_msecs(jiffies - start_time), @@ -1243,13 +1332,13 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, fnic_fcpio_status_to_str(hdr_status)); if (io_req->dr_done) complete(io_req->dr_done); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } else { shost_printk(KERN_ERR, fnic->lport->host, "Unexpected itmf io state %s tag %x\n", fnic_ioreq_state_to_str(fnic_priv(sc)->state), id); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } } @@ -1276,17 +1365,19 @@ static int fnic_fcpio_cmpl_handler(struct vnic_dev *vdev, break; } + cq_index -= fnic->cpy_wq_base; + switch (desc->hdr.type) { case FCPIO_ACK: /* fw copied copy wq desc to its queue */ fnic_fcpio_ack_handler(fnic, cq_index, desc); break; case FCPIO_ICMND_CMPL: /* fw completed a command */ - fnic_fcpio_icmnd_cmpl_handler(fnic, desc); + fnic_fcpio_icmnd_cmpl_handler(fnic, cq_index, desc); break; case FCPIO_ITMF_CMPL: /* fw completed itmf (abort cmd, lun reset)*/ - fnic_fcpio_itmf_cmpl_handler(fnic, desc); + fnic_fcpio_itmf_cmpl_handler(fnic, cq_index, desc); break; case FCPIO_FLOGI_REG_CMPL: /* fw completed flogi_reg */ @@ -1339,18 +1430,33 @@ int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do, unsigned i static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) { - const int tag = scsi_cmd_to_rq(sc)->tag; + struct request *const rq = scsi_cmd_to_rq(sc); struct fnic *fnic = data; struct 
fnic_io_req *io_req; unsigned long flags = 0; - spinlock_t *io_lock; unsigned long start_time = 0; struct fnic_stats *fnic_stats = &fnic->fnic_stats; + uint16_t hwq = 0; + int tag; + int mqtag; + + mqtag = blk_mq_unique_tag(rq); + hwq = blk_mq_unique_tag_to_hwq(mqtag); + tag = blk_mq_unique_tag_to_tag(mqtag); - io_lock = fnic_io_lock_tag(fnic, tag); - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); + + fnic->sw_copy_wq[hwq].io_req_table[tag] = NULL; io_req = fnic_priv(sc)->io_req; + if (!io_req) { + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x flags: 0x%x No ioreq. Returning\n", + fnic->fnic_num, __func__, __LINE__, hwq, mqtag, tag, fnic_priv(sc)->flags); + return true; + } + if ((fnic_priv(sc)->flags & FNIC_DEVICE_RESET) && !(fnic_priv(sc)->flags & FNIC_DEV_RST_DONE)) { /* @@ -1362,20 +1468,16 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) complete(io_req->dr_done); else if (io_req && io_req->abts_done) complete(io_req->abts_done); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } else if (fnic_priv(sc)->flags & FNIC_DEVICE_RESET) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } - if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); - goto cleanup_scsi_cmd; - } fnic_priv(sc)->io_req = NULL; - - spin_unlock_irqrestore(io_lock, flags); + io_req->sc = NULL; + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); /* * If there is a scsi_cmnd associated with this io_req, then @@ -1385,7 +1487,6 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) fnic_release_ioreq_buf(fnic, io_req, sc); mempool_free(io_req, fnic->io_req_pool); -cleanup_scsi_cmd: sc->result = DID_TRANSPORT_DISRUPTED << 16; FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "fnic_cleanup_io: tag:0x%x : sc:0x%p duration = %lu DID_TRANSPORT_DISRUPTED\n", @@ -1396,12 +1497,6 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) else atomic64_inc(&fnic_stats->io_stats.io_completions); - /* Complete the command to SCSI */ - if (!(fnic_priv(sc)->flags & FNIC_IO_ISSUED)) - shost_printk(KERN_ERR, fnic->lport->host, - "Calling done for IO not issued to fw: tag:0x%x sc:0x%p\n", - tag, sc); - FNIC_TRACE(fnic_cleanup_io, sc->device->host->host_no, tag, sc, jiffies_to_msecs(jiffies - start_time), @@ -1430,8 +1525,8 @@ void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq, struct fnic_io_req *io_req; struct scsi_cmnd *sc; unsigned long flags; - spinlock_t *io_lock; unsigned long start_time = 0; + uint16_t hwq; /* get the tag reference */ fcpio_tag_id_dec(&desc->hdr.tag, &id); @@ -1444,8 +1539,8 @@ void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq, if (!sc) return; - io_lock = fnic_io_lock_hash(fnic, sc); - spin_lock_irqsave(io_lock, flags); + hwq = blk_mq_unique_tag_to_hwq(id); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); /* Get the IO context which this desc refers to */ io_req = fnic_priv(sc)->io_req; @@ -1453,13 +1548,15 @@ void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq, /* fnic interrupts are turned off by now */ if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); goto wq_copy_cleanup_scsi_cmd; } fnic_priv(sc)->io_req = NULL; + io_req->sc = NULL; + 
fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(id)] = NULL; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); start_time = io_req->start_time; fnic_release_ioreq_buf(fnic, io_req, sc); @@ -1483,9 +1580,10 @@ void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq, static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag, u32 task_req, u8 *fc_lun, - struct fnic_io_req *io_req) + struct fnic_io_req *io_req, + unsigned int hwq) { - struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0]; + struct vnic_wq_copy *wq = &fnic->hw_copy_wq[hwq]; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; unsigned long flags; @@ -1498,13 +1596,13 @@ static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag, atomic_inc(&fnic->in_flight); spin_unlock_irqrestore(&fnic->fnic_lock, flags); - spin_lock_irqsave(&fnic->wq_copy_lock[0], flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); - if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[0]) - free_wq_copy_descs(fnic, wq); + if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[hwq]) + free_wq_copy_descs(fnic, wq, hwq); if (!vnic_wq_copy_desc_avail(wq)) { - spin_unlock_irqrestore(&fnic->wq_copy_lock[0], flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); atomic_dec(&fnic->in_flight); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "fnic_queue_abort_io_req: failure: no descriptors\n"); @@ -1521,7 +1619,7 @@ static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag, atomic64_set(&fnic->fnic_stats.fw_stats.max_fw_reqs, atomic64_read(&fnic->fnic_stats.fw_stats.active_fw_reqs)); - spin_unlock_irqrestore(&fnic->wq_copy_lock[0], flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); atomic_dec(&fnic->in_flight); return 0; @@ -1535,24 +1633,27 @@ struct fnic_rport_abort_io_iter_data { static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data) { + struct request *const rq = scsi_cmd_to_rq(sc); struct fnic_rport_abort_io_iter_data *iter_data = data; struct fnic *fnic = iter_data->fnic; - int abt_tag = scsi_cmd_to_rq(sc)->tag; + int abt_tag = 0; struct fnic_io_req *io_req; - spinlock_t *io_lock; unsigned long flags; struct reset_stats *reset_stats = &fnic->fnic_stats.reset_stats; struct terminate_stats *term_stats = &fnic->fnic_stats.term_stats; struct scsi_lun fc_lun; enum fnic_ioreq_state old_ioreq_state; + uint16_t hwq = 0; + + abt_tag = blk_mq_unique_tag(rq); + hwq = blk_mq_unique_tag_to_hwq(abt_tag); - io_lock = fnic_io_lock_tag(fnic, abt_tag); - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req || io_req->port_id != iter_data->port_id) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } @@ -1561,7 +1662,7 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data) FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "fnic_rport_exch_reset dev rst not pending sc 0x%p\n", sc); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } @@ -1570,7 +1671,7 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data) * belongs to rport that went away */ if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } if (io_req->abts_done) { @@ -1601,31 +1702,31 @@ static bool 
fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data) FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "fnic_rport_reset_exch: Issuing abts\n"); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); /* Now queue the abort command to firmware */ int_to_scsilun(sc->device->lun, &fc_lun); if (fnic_queue_abort_io_req(fnic, abt_tag, FCPIO_ITMF_ABT_TASK_TERM, - fc_lun.scsi_lun, io_req)) { + fc_lun.scsi_lun, io_req, hwq)) { /* * Revert the cmd state back to old state, if * it hasn't changed in between. This cmd will get * aborted later by scsi_eh, or cleaned up during * lun reset */ - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) fnic_priv(sc)->state = old_ioreq_state; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } else { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); if (fnic_priv(sc)->flags & FNIC_DEVICE_RESET) fnic_priv(sc)->flags |= FNIC_DEV_RST_TERM_ISSUED; else fnic_priv(sc)->flags |= FNIC_IO_INTERNAL_TERM_ISSUED; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); atomic64_inc(&term_stats->terminates); iter_data->term_cnt++; } @@ -1703,7 +1804,6 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) struct fnic *fnic; struct fnic_io_req *io_req = NULL; struct fc_rport *rport; - spinlock_t *io_lock; unsigned long flags; unsigned long start_time = 0; int ret = SUCCESS; @@ -1713,8 +1813,10 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) struct abort_stats *abts_stats; struct terminate_stats *term_stats; enum fnic_ioreq_state old_ioreq_state; - const int tag = rq->tag; + int mqtag; unsigned long abt_issued_time; + uint16_t hwq = 0; + DECLARE_COMPLETION_ONSTACK(tm_done); /* Wait for rport to unblock */ @@ -1724,23 +1826,25 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) lp = shost_priv(sc->device->host); fnic = lport_priv(lp); + + spin_lock_irqsave(&fnic->fnic_lock, flags); fnic_stats = &fnic->fnic_stats; abts_stats = &fnic->fnic_stats.abts_stats; term_stats = &fnic->fnic_stats.term_stats; rport = starget_to_rport(scsi_target(sc->device)); - FNIC_SCSI_DBG(KERN_DEBUG, - fnic->lport->host, - "Abort Cmd called FCID 0x%x, LUN 0x%llx TAG %x flags %x\n", - rport->port_id, sc->device->lun, tag, fnic_priv(sc)->flags); + mqtag = blk_mq_unique_tag(rq); + hwq = blk_mq_unique_tag_to_hwq(mqtag); fnic_priv(sc)->flags = FNIC_NO_FLAGS; if (lp->state != LPORT_ST_READY || !(lp->link_up)) { ret = FAILED; + spin_unlock_irqrestore(&fnic->fnic_lock, flags); goto fnic_abort_cmd_end; } + spin_unlock_irqrestore(&fnic->fnic_lock, flags); /* * Avoid a race between SCSI issuing the abort and the device * completing the command. @@ -1753,18 +1857,17 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) * * .io_req will not be cleared except while holding io_req_lock. 
*/ - io_lock = fnic_io_lock_hash(fnic, sc); - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); goto fnic_abort_cmd_end; } io_req->abts_done = &tm_done; if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); goto wait_pending; } @@ -1796,7 +1899,7 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) fnic_priv(sc)->state = FNIC_IOREQ_ABTS_PENDING; fnic_priv(sc)->abts_status = FCPIO_INVALID_CODE; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); /* * Check readiness of the remote port. If the path to remote @@ -1813,15 +1916,15 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) /* Now queue the abort command to firmware */ int_to_scsilun(sc->device->lun, &fc_lun); - if (fnic_queue_abort_io_req(fnic, tag, task_req, fc_lun.scsi_lun, - io_req)) { - spin_lock_irqsave(io_lock, flags); + if (fnic_queue_abort_io_req(fnic, mqtag, task_req, fc_lun.scsi_lun, + io_req, hwq)) { + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) fnic_priv(sc)->state = old_ioreq_state; io_req = fnic_priv(sc)->io_req; if (io_req) io_req->abts_done = NULL; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); ret = FAILED; goto fnic_abort_cmd_end; } @@ -1845,12 +1948,12 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) fnic->config.ed_tov)); /* Check the abort status */ - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req) { atomic64_inc(&fnic_stats->io_stats.ioreq_null); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); fnic_priv(sc)->flags |= FNIC_IO_ABT_TERM_REQ_NULL; ret = FAILED; goto fnic_abort_cmd_end; @@ -1859,7 +1962,7 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) /* fw did not complete abort, timed out */ if (fnic_priv(sc)->abts_status == FCPIO_INVALID_CODE) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); if (task_req == FCPIO_ITMF_ABT_TASK) { atomic64_inc(&abts_stats->abort_drv_timeouts); } else { @@ -1873,7 +1976,7 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) /* IO out of order */ if (!(fnic_priv(sc)->flags & (FNIC_IO_ABORTED | FNIC_IO_DONE))) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "Issuing Host reset due to out of order IO\n"); @@ -1889,15 +1992,18 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) * free the io_req if successful. If abort fails, * Device reset will clean the I/O. 
*/ - if (fnic_priv(sc)->abts_status == FCPIO_SUCCESS) { + if (fnic_priv(sc)->abts_status == FCPIO_SUCCESS || + (fnic_priv(sc)->abts_status == FCPIO_ABORTED)) { fnic_priv(sc)->io_req = NULL; + io_req->sc = NULL; } else { ret = FAILED; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); goto fnic_abort_cmd_end; } - spin_unlock_irqrestore(io_lock, flags); + fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] = NULL; + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); fnic_release_ioreq_buf(fnic, io_req, sc); mempool_free(io_req, fnic->io_req_pool); @@ -1912,7 +2018,7 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) atomic64_inc(&fnic_stats->io_stats.io_completions); fnic_abort_cmd_end: - FNIC_TRACE(fnic_abort_cmd, sc->device->host->host_no, tag, sc, + FNIC_TRACE(fnic_abort_cmd, sc->device->host->host_no, mqtag, sc, jiffies_to_msecs(jiffies - start_time), 0, ((u64)sc->cmnd[0] << 32 | (u64)sc->cmnd[2] << 24 | (u64)sc->cmnd[3] << 16 | @@ -1930,25 +2036,31 @@ static inline int fnic_queue_dr_io_req(struct fnic *fnic, struct scsi_cmnd *sc, struct fnic_io_req *io_req) { - struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0]; + struct vnic_wq_copy *wq; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; struct scsi_lun fc_lun; int ret = 0; - unsigned long intr_flags; + unsigned long flags; + uint16_t hwq = 0; + uint32_t tag = 0; - spin_lock_irqsave(&fnic->fnic_lock, intr_flags); + tag = io_req->tag; + hwq = blk_mq_unique_tag_to_hwq(tag); + wq = &fnic->hw_copy_wq[hwq]; + + spin_lock_irqsave(&fnic->fnic_lock, flags); if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) { - spin_unlock_irqrestore(&fnic->fnic_lock, intr_flags); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); return FAILED; } else atomic_inc(&fnic->in_flight); - spin_unlock_irqrestore(&fnic->fnic_lock, intr_flags); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); - spin_lock_irqsave(&fnic->wq_copy_lock[0], intr_flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); - if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[0]) - free_wq_copy_descs(fnic, wq); + if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[hwq]) + free_wq_copy_descs(fnic, wq, hwq); if (!vnic_wq_copy_desc_avail(wq)) { FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, @@ -1973,7 +2085,7 @@ static inline int fnic_queue_dr_io_req(struct fnic *fnic, atomic64_read(&fnic->fnic_stats.fw_stats.active_fw_reqs)); lr_io_req_end: - spin_unlock_irqrestore(&fnic->wq_copy_lock[0], intr_flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); atomic_dec(&fnic->in_flight); return ret; @@ -1988,12 +2100,13 @@ struct fnic_pending_aborts_iter_data { static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) { + struct request *const rq = scsi_cmd_to_rq(sc); struct fnic_pending_aborts_iter_data *iter_data = data; struct fnic *fnic = iter_data->fnic; struct scsi_device *lun_dev = iter_data->lun_dev; - int abt_tag = scsi_cmd_to_rq(sc)->tag; + unsigned long abt_tag = 0; + uint16_t hwq = 0; struct fnic_io_req *io_req; - spinlock_t *io_lock; unsigned long flags; struct scsi_lun fc_lun; DECLARE_COMPLETION_ONSTACK(tm_done); @@ -2002,11 +2115,13 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) if (sc == iter_data->lr_sc || sc->device != lun_dev) return true; - io_lock = fnic_io_lock_tag(fnic, abt_tag); - spin_lock_irqsave(io_lock, flags); + abt_tag = blk_mq_unique_tag(rq); + hwq = blk_mq_unique_tag_to_hwq(abt_tag); + + 
spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } @@ -2019,7 +2134,7 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) fnic_ioreq_state_to_str(fnic_priv(sc)->state)); if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } if ((fnic_priv(sc)->flags & FNIC_DEVICE_RESET) && @@ -2027,7 +2142,7 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, "%s dev rst not pending sc 0x%p\n", __func__, sc); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } @@ -2048,35 +2163,34 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) BUG_ON(io_req->abts_done); if (fnic_priv(sc)->flags & FNIC_DEVICE_RESET) { - abt_tag |= FNIC_TAG_DEV_RST; FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, "%s: dev rst sc 0x%p\n", __func__, sc); } fnic_priv(sc)->abts_status = FCPIO_INVALID_CODE; io_req->abts_done = &tm_done; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); /* Now queue the abort command to firmware */ int_to_scsilun(sc->device->lun, &fc_lun); if (fnic_queue_abort_io_req(fnic, abt_tag, FCPIO_ITMF_ABT_TASK_TERM, - fc_lun.scsi_lun, io_req)) { - spin_lock_irqsave(io_lock, flags); + fc_lun.scsi_lun, io_req, hwq)) { + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (io_req) io_req->abts_done = NULL; if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) fnic_priv(sc)->state = old_ioreq_state; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); iter_data->ret = FAILED; return false; } else { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); if (fnic_priv(sc)->flags & FNIC_DEVICE_RESET) fnic_priv(sc)->flags |= FNIC_DEV_RST_TERM_ISSUED; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } fnic_priv(sc)->flags |= FNIC_IO_INTERNAL_TERM_ISSUED; @@ -2084,10 +2198,10 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) (fnic->config.ed_tov)); /* Recheck cmd state to check if it is now aborted */ - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); fnic_priv(sc)->flags |= FNIC_IO_ABT_TERM_REQ_NULL; return true; } @@ -2096,7 +2210,7 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) /* if abort is still pending with fw, fail */ if (fnic_priv(sc)->abts_status == FCPIO_INVALID_CODE) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); fnic_priv(sc)->flags |= FNIC_IO_ABT_TERM_DONE; iter_data->ret = FAILED; return false; @@ -2104,9 +2218,11 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) fnic_priv(sc)->state = FNIC_IOREQ_ABTS_COMPLETE; /* original sc used for lr is handled by dev reset code */ - if (sc != iter_data->lr_sc) + if (sc != iter_data->lr_sc) { fnic_priv(sc)->io_req = NULL; - spin_unlock_irqrestore(io_lock, flags); + 
fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(abt_tag)] = NULL; + } + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); /* original sc used for lr is handled by dev reset code */ if (sc != iter_data->lr_sc) { @@ -2178,15 +2294,15 @@ int fnic_device_reset(struct scsi_cmnd *sc) struct fc_rport *rport; int status; int ret = FAILED; - spinlock_t *io_lock; unsigned long flags; unsigned long start_time = 0; struct scsi_lun fc_lun; struct fnic_stats *fnic_stats; struct reset_stats *reset_stats; - int tag = rq->tag; + int mqtag = rq->tag; DECLARE_COMPLETION_ONSTACK(tm_done); bool new_sc = 0; + uint16_t hwq = 0; /* Wait for rport to unblock */ fc_block_scsi_eh(sc); @@ -2216,7 +2332,7 @@ int fnic_device_reset(struct scsi_cmnd *sc) fnic_priv(sc)->flags = FNIC_DEVICE_RESET; - if (unlikely(tag < 0)) { + if (unlikely(mqtag < 0)) { /* * For device reset issued through sg3utils, we let * only one LUN_RESET to go through and use a special @@ -2225,11 +2341,14 @@ int fnic_device_reset(struct scsi_cmnd *sc) * allocated by mid layer. */ mutex_lock(&fnic->sgreset_mutex); - tag = fnic->fnic_max_tag_id; + mqtag = fnic->fnic_max_tag_id; new_sc = 1; + } else { + mqtag = blk_mq_unique_tag(rq); + hwq = blk_mq_unique_tag_to_hwq(mqtag); } - io_lock = fnic_io_lock_hash(fnic, sc); - spin_lock_irqsave(io_lock, flags); + + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; /* @@ -2239,34 +2358,43 @@ int fnic_device_reset(struct scsi_cmnd *sc) if (!io_req) { io_req = mempool_alloc(fnic->io_req_pool, GFP_ATOMIC); if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); goto fnic_device_reset_end; } memset(io_req, 0, sizeof(*io_req)); io_req->port_id = rport->port_id; + io_req->tag = mqtag; fnic_priv(sc)->io_req = io_req; + io_req->sc = sc; + + if (fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] != NULL) + WARN(1, "fnic<%d>: %s: tag 0x%x already exists\n", + fnic->fnic_num, __func__, blk_mq_unique_tag_to_tag(mqtag)); + + fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] = + io_req; } io_req->dr_done = &tm_done; fnic_priv(sc)->state = FNIC_IOREQ_CMD_PENDING; fnic_priv(sc)->lr_status = FCPIO_INVALID_CODE; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "TAG %x\n", tag); + FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "TAG %x\n", mqtag); /* * issue the device reset, if enqueue failed, clean up the ioreq * and break assoc with scsi cmd */ if (fnic_queue_dr_io_req(fnic, sc, io_req)) { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (io_req) io_req->dr_done = NULL; goto fnic_device_reset_clean; } - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); fnic_priv(sc)->flags |= FNIC_DEV_RST_ISSUED; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); /* * Wait on the local completion for LUN reset. 
The io_req may be @@ -2275,12 +2403,12 @@ int fnic_device_reset(struct scsi_cmnd *sc) wait_for_completion_timeout(&tm_done, msecs_to_jiffies(FNIC_LUN_RESET_TIMEOUT)); - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "io_req is null tag 0x%x sc 0x%p\n", tag, sc); + "io_req is null mqtag 0x%x sc 0x%p\n", mqtag, sc); goto fnic_device_reset_end; } io_req->dr_done = NULL; @@ -2296,41 +2424,41 @@ int fnic_device_reset(struct scsi_cmnd *sc) FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "Device reset timed out\n"); fnic_priv(sc)->flags |= FNIC_DEV_RST_TIMED_OUT; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); int_to_scsilun(sc->device->lun, &fc_lun); /* * Issue abort and terminate on device reset request. * If q'ing of terminate fails, retry it after a delay. */ while (1) { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); if (fnic_priv(sc)->flags & FNIC_DEV_RST_TERM_ISSUED) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); break; } - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); if (fnic_queue_abort_io_req(fnic, - tag | FNIC_TAG_DEV_RST, + mqtag | FNIC_TAG_DEV_RST, FCPIO_ITMF_ABT_TASK_TERM, - fc_lun.scsi_lun, io_req)) { + fc_lun.scsi_lun, io_req, hwq)) { wait_for_completion_timeout(&tm_done, msecs_to_jiffies(FNIC_ABT_TERM_DELAY_TIMEOUT)); } else { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); fnic_priv(sc)->flags |= FNIC_DEV_RST_TERM_ISSUED; fnic_priv(sc)->state = FNIC_IOREQ_ABTS_PENDING; io_req->abts_done = &tm_done; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "Abort and terminate issued on Device reset " - "tag 0x%x sc 0x%p\n", tag, sc); + "Abort and terminate issued on Device reset mqtag 0x%x sc 0x%p\n", + mqtag, sc); break; } } while (1) { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); if (!(fnic_priv(sc)->flags & FNIC_DEV_RST_DONE)) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); wait_for_completion_timeout(&tm_done, msecs_to_jiffies(FNIC_LUN_RESET_TIMEOUT)); break; @@ -2341,12 +2469,12 @@ int fnic_device_reset(struct scsi_cmnd *sc) } } } else { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } /* Completed, but not successful, clean up the io_req, return fail */ if (status != FCPIO_SUCCESS) { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "Device reset completed - failed\n"); @@ -2362,7 +2490,7 @@ int fnic_device_reset(struct scsi_cmnd *sc) * succeeds */ if (fnic_clean_pending_aborts(fnic, sc, new_sc)) { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "Device reset failed" @@ -2371,17 +2499,20 @@ int fnic_device_reset(struct scsi_cmnd *sc) } /* Clean lun reset command */ - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); 
io_req = fnic_priv(sc)->io_req; if (io_req) /* Completed, and successful */ ret = SUCCESS; fnic_device_reset_clean: - if (io_req) + if (io_req) { fnic_priv(sc)->io_req = NULL; + io_req->sc = NULL; + fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(io_req->tag)] = NULL; + } - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); if (io_req) { start_time = io_req->start_time; @@ -2619,12 +2750,17 @@ void fnic_exch_mgr_reset(struct fc_lport *lp, u32 sid, u32 did) static bool fnic_abts_pending_iter(struct scsi_cmnd *sc, void *data) { + struct request *const rq = scsi_cmd_to_rq(sc); struct fnic_pending_aborts_iter_data *iter_data = data; struct fnic *fnic = iter_data->fnic; int cmd_state; struct fnic_io_req *io_req; - spinlock_t *io_lock; unsigned long flags; + uint16_t hwq = 0; + int tag; + + tag = blk_mq_unique_tag(rq); + hwq = blk_mq_unique_tag_to_hwq(tag); /* * ignore this lun reset cmd or cmds that do not belong to @@ -2635,12 +2771,11 @@ static bool fnic_abts_pending_iter(struct scsi_cmnd *sc, void *data) if (iter_data->lun_dev && sc->device != iter_data->lun_dev) return true; - io_lock = fnic_io_lock_hash(fnic, sc); - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } @@ -2652,7 +2787,7 @@ static bool fnic_abts_pending_iter(struct scsi_cmnd *sc, void *data) "Found IO in %s on lun\n", fnic_ioreq_state_to_str(fnic_priv(sc)->state)); cmd_state = fnic_priv(sc)->state; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); if (cmd_state == FNIC_IOREQ_ABTS_PENDING) iter_data->ret = 1; From patchwork Fri Oct 20 19:06:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Karan Tilak Kumar \(kartilak\)" X-Patchwork-Id: 736527 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 82AC1C001DF for ; Fri, 20 Oct 2023 19:08:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233102AbjJTTIX (ORCPT ); Fri, 20 Oct 2023 15:08:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51534 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230308AbjJTTH1 (ORCPT ); Fri, 20 Oct 2023 15:07:27 -0400 Received: from rcdn-iport-7.cisco.com (rcdn-iport-7.cisco.com [173.37.86.78]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4EE43D7D; Fri, 20 Oct 2023 12:07:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=14031; q=dns/txt; s=iport; t=1697828844; x=1699038444; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=La0OzOjS9Qjtj75o0AAOU1kJMPpGDvVA+U0OgbgI454=; b=WUVRxO0XRIXoUopHvgulzjqqrR7tPv1UQUofU1FjBq2wiTQq4Mwmom0y DZ6AkhHDS+5qaRspqY6Cg/NW35+NkQVxh3frMqGcAVwjsownZlRFHletG 4Xxi7a2GaMAtCnxbCp13Wkl+eXKsfQF4lliymVO0l/l0LlxgaKc/HmzxH w=; X-CSE-ConnectionGUID: LT/J5LskRuy8DBmHCxZWbQ== X-CSE-MsgGUID: y4j48l5yTCa9xRejKqw0OQ== X-IronPort-AV: E=Sophos;i="6.03,239,1694736000"; d="scan'208";a="126421382" Received: from rcdn-core-11.cisco.com ([173.37.93.147]) by rcdn-iport-7.cisco.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Oct 2023 19:07:23 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by rcdn-core-11.cisco.com (8.15.2/8.15.2) with ESMTPSA id 39KJ6XPP026372 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Fri, 20 Oct 2023 19:07:23 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar Subject: [PATCH 13/13] scsi: fnic: Improve logs and add support for multiqueue (MQ) Date: Fri, 20 Oct 2023 12:06:29 -0700 Message-Id: <20231020190629.338623-14-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20231020190629.338623-1-kartilak@cisco.com> References: <20231020190629.338623-1-kartilak@cisco.com> MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: rcdn-core-11.cisco.com Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Improve existing logs by adding fnic number, hardware queue, tag, and mqtag in the prints. Add logs with the above elements for effective debugging. Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Tested-by: Karan Tilak Kumar Signed-off-by: Karan Tilak Kumar --- drivers/scsi/fnic/fnic.h | 2 +- drivers/scsi/fnic/fnic_scsi.c | 127 ++++++++++++++++++++-------------- 2 files changed, 77 insertions(+), 52 deletions(-) diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h index bfc333366d1f..35c570d595e9 100644 --- a/drivers/scsi/fnic/fnic.h +++ b/drivers/scsi/fnic/fnic.h @@ -27,7 +27,7 @@ #define DRV_NAME "fnic" #define DRV_DESCRIPTION "Cisco FCoE HBA Driver" -#define DRV_VERSION "1.6.0.56" +#define DRV_VERSION "1.6.0.58" #define PFX DRV_NAME ": " #define DFX DRV_NAME "%d: " diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c index 7ca34a76f84e..f5b7a7de83d7 100644 --- a/drivers/scsi/fnic/fnic_scsi.c +++ b/drivers/scsi/fnic/fnic_scsi.c @@ -211,12 +211,14 @@ int fnic_fw_reset_handler(struct fnic *fnic) if (!ret) { atomic64_inc(&fnic->fnic_stats.reset_stats.fw_resets); - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "Issued fw reset\n"); + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: Issued fw reset\n", + fnic->fnic_num, __func__, __LINE__); } else { fnic_clear_state_flags(fnic, FNIC_FLAGS_FWRESET); - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "Failed to issue fw reset\n"); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: Failed to issue fw reset\n", + fnic->fnic_num, __func__, __LINE__); } return ret; @@ -265,9 +267,9 @@ int fnic_flogi_reg_handler(struct fnic *fnic, u32 fc_id) } else { fnic_queue_wq_copy_desc_flogi_reg(wq, SCSI_NO_TAG, format, fc_id, gw_mac); - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "FLOGI reg issued fcid %x map %d dest %pM\n", - fc_id, fnic->ctlr.map_dest, gw_mac); + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: FLOGI reg issued fcid 0x%x map %d dest 0x%p\n", + fnic->fnic_num, __func__, __LINE__, fc_id, fnic->ctlr.map_dest, gw_mac); } atomic64_inc(&fnic->fnic_stats.fw_stats.active_fw_reqs); @@ -662,15 +664,16 @@ static int fnic_fcpio_fw_reset_cmpl_handler(struct fnic *fnic, if (fnic->state == FNIC_IN_FC_TRANS_ETH_MODE) { /* Check status of reset completion */ if (!hdr_status) { - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "reset cmpl 
success\n"); + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: reset cmpl success\n", + fnic->fnic_num, __func__, __LINE__); /* Ready to send flogi out */ fnic->state = FNIC_IN_ETH_MODE; } else { - FNIC_SCSI_DBG(KERN_DEBUG, - fnic->lport->host, - "fnic fw_reset : failed %s\n", - fnic_fcpio_status_to_str(hdr_status)); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: reset failed with header status: %s\n", + fnic->fnic_num, __func__, __LINE__, + fnic_fcpio_status_to_str(hdr_status)); /* * Unable to change to eth mode, cannot send out flogi @@ -683,10 +686,9 @@ static int fnic_fcpio_fw_reset_cmpl_handler(struct fnic *fnic, ret = -1; } } else { - FNIC_SCSI_DBG(KERN_DEBUG, - fnic->lport->host, - "Unexpected state %s while processing" - " reset cmpl\n", fnic_state_to_str(fnic->state)); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: Unexpected state while processing reset completion: %s\n", + fnic->fnic_num, __func__, __LINE__, fnic_state_to_str(fnic->state)); atomic64_inc(&reset_stats->fw_reset_failures); ret = -1; } @@ -1195,9 +1197,10 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, unsigned int cq_inde if ((id & FNIC_TAG_ABORT) && (id & FNIC_TAG_DEV_RST)) { /* Abort and terminate completion of device reset req */ /* REVISIT : Add asserts about various flags */ - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "dev reset abts cmpl recd. id %x status %s\n", - id, fnic_fcpio_status_to_str(hdr_status)); + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x hst: %s Abt/term completion received\n", + fnic->fnic_num, __func__, __LINE__, hwq, mqtag, tag, + fnic_fcpio_status_to_str(hdr_status)); fnic_priv(sc)->state = FNIC_IOREQ_ABTS_COMPLETE; fnic_priv(sc)->abts_status = hdr_status; fnic_priv(sc)->flags |= FNIC_DEV_RST_DONE; @@ -1206,6 +1209,10 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, unsigned int cq_inde spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } else if (id & FNIC_TAG_ABORT) { /* Completion of abort cmd */ + shost_printk(KERN_DEBUG, fnic->lport->host, + "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x Abort header status: %s\n", + fnic->fnic_num, __func__, __LINE__, hwq, mqtag, tag, + fnic_fcpio_status_to_str(hdr_status)); switch (hdr_status) { case FCPIO_SUCCESS: break; @@ -1265,9 +1272,14 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, unsigned int cq_inde if (io_req->abts_done) { complete(io_req->abts_done); spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); + shost_printk(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x Waking up abort thread\n", + fnic->fnic_num, __func__, __LINE__, hwq, mqtag, tag); } else { FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "abts cmpl, completing IO\n"); + "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x hst: %s Completing IO\n", + fnic->fnic_num, __func__, __LINE__, hwq, mqtag, + tag, fnic_fcpio_status_to_str(hdr_status)); fnic_priv(sc)->io_req = NULL; sc->result = (DID_ERROR << 16); fnic->sw_copy_wq[hwq].io_req_table[tag] = NULL; @@ -1295,6 +1307,10 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, unsigned int cq_inde } } else if (id & FNIC_TAG_DEV_RST) { /* Completion of device reset */ + shost_printk(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x DR hst: %s\n", + fnic->fnic_num, __func__, __LINE__, hwq, mqtag, + tag, fnic_fcpio_status_to_str(hdr_status)); fnic_priv(sc)->lr_status = hdr_status; if 
(fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) { spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); @@ -1304,10 +1320,9 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, unsigned int cq_inde jiffies_to_msecs(jiffies - start_time), desc, 0, fnic_flags_and_state(sc)); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "Terminate pending " - "dev reset cmpl recd. id %d status %s\n", - (int)(id & FNIC_TAG_MASK), - fnic_fcpio_status_to_str(hdr_status)); + "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x hst: %s Terminate pending\n", + fnic->fnic_num, __func__, __LINE__, hwq, mqtag, + tag, fnic_fcpio_status_to_str(hdr_status)); return; } if (fnic_priv(sc)->flags & FNIC_DEV_RST_TIMED_OUT) { @@ -1326,18 +1341,18 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, unsigned int cq_inde } fnic_priv(sc)->state = FNIC_IOREQ_CMD_COMPLETE; fnic_priv(sc)->flags |= FNIC_DEV_RST_DONE; - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "dev reset cmpl recd. id %d status %s\n", - (int)(id & FNIC_TAG_MASK), - fnic_fcpio_status_to_str(hdr_status)); + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x hst: %s DR completion received\n", + fnic->fnic_num, __func__, __LINE__, hwq, mqtag, + tag, fnic_fcpio_status_to_str(hdr_status)); if (io_req->dr_done) complete(io_req->dr_done); spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } else { shost_printk(KERN_ERR, fnic->lport->host, - "Unexpected itmf io state %s tag %x\n", - fnic_ioreq_state_to_str(fnic_priv(sc)->state), id); + "%s: Unexpected itmf io state: hwq: %d tag 0x%x %s\n", + __func__, hwq, id, fnic_ioreq_state_to_str(fnic_priv(sc)->state)); spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } @@ -1488,9 +1503,9 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) mempool_free(io_req, fnic->io_req_pool); sc->result = DID_TRANSPORT_DISRUPTED << 16; - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "fnic_cleanup_io: tag:0x%x : sc:0x%p duration = %lu DID_TRANSPORT_DISRUPTED\n", - tag, sc, jiffies - start_time); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: mqtag:0x%x tag: 0x%x sc:0x%p duration = %lu DID_TRANSPORT_DISRUPTED\n", + fnic->fnic_num, __func__, __LINE__, mqtag, tag, sc, (jiffies - start_time)); if (atomic64_read(&fnic->io_cmpl_skip)) atomic64_dec(&fnic->io_cmpl_skip); @@ -1659,9 +1674,9 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data) if ((fnic_priv(sc)->flags & FNIC_DEVICE_RESET) && !(fnic_priv(sc)->flags & FNIC_DEV_RST_ISSUED)) { - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "fnic_rport_exch_reset dev rst not pending sc 0x%p\n", - sc); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: hwq: %d abt_tag: 0x%x flags: 0x%x Device reset is not pending\n", + fnic->fnic_num, __func__, __LINE__, hwq, abt_tag, fnic_priv(sc)->flags); spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } @@ -1717,6 +1732,9 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data) * lun reset */ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: hwq: %d abt_tag: 0x%x flags: 0x%x Queuing abort failed\n", + fnic->fnic_num, __func__, __LINE__, hwq, abt_tag, fnic_priv(sc)->flags); if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) fnic_priv(sc)->state = old_ioreq_state; spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); @@ -1887,8 +1905,9 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) else 
atomic64_inc(&abts_stats->abort_issued_greater_than_60_sec); - FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, - "CBD Opcode: %02x Abort issued time: %lu msec\n", sc->cmnd[0], abt_issued_time); + FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, + "fnic<%d>: %s: CDB Opcode: 0x%02x Abort issued time: %lu msec\n", + fnic->fnic_num, __func__, sc->cmnd[0], abt_issued_time); /* * Command is still pending, need to abort it * If the firmware completes the command after this point, @@ -1977,8 +1996,9 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) if (!(fnic_priv(sc)->flags & (FNIC_IO_ABORTED | FNIC_IO_DONE))) { spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "Issuing Host reset due to out of order IO\n"); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: Issuing host reset due to out of order IO\n", + fnic->fnic_num, __func__); ret = FAILED; goto fnic_abort_cmd_end; @@ -2185,6 +2205,9 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) fnic_priv(sc)->state = old_ioreq_state; spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); iter_data->ret = FAILED; + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: hwq: %d abt_tag: 0x%lx Abort could not be queued\n", + fnic->fnic_num, __func__, __LINE__, hwq, abt_tag); return false; } else { spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); @@ -2318,8 +2341,9 @@ int fnic_device_reset(struct scsi_cmnd *sc) rport = starget_to_rport(scsi_target(sc->device)); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "Device reset called FCID 0x%x, LUN 0x%llx sc 0x%p\n", - rport->port_id, sc->device->lun, sc); + "fnic<%d>: %s: %d: fcid: 0x%x lun: 0x%llx hwq: %d mqtag: 0x%x flags: 0x%x Device reset\n", + fnic->fnic_num, __func__, __LINE__, rport->port_id, sc->device->lun, hwq, mqtag, + fnic_priv(sc)->flags); if (lp->state != LPORT_ST_READY || !(lp->link_up)) goto fnic_device_reset_end; @@ -2554,8 +2578,9 @@ int fnic_reset(struct Scsi_Host *shost) fnic = lport_priv(lp); reset_stats = &fnic->fnic_stats.reset_stats; - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "fnic_reset called\n"); + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: Issuing fnic reset\n", + fnic->fnic_num, __func__, __LINE__); atomic64_inc(&reset_stats->fnic_resets); @@ -2565,10 +2590,9 @@ int fnic_reset(struct Scsi_Host *shost) */ ret = fc_lport_reset(lp); - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "Returning from fnic reset %s\n", - (ret == 0) ? - "SUCCESS" : "FAILED"); + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, + "fnic<%d>: %s: %d: Returning from fnic reset with: %s\n", + fnic->fnic_num, __func__, __LINE__, (ret == 0) ? "SUCCESS" : "FAILED"); if (ret == 0) atomic64_inc(&reset_stats->fnic_reset_completions); @@ -2784,8 +2808,9 @@ static bool fnic_abts_pending_iter(struct scsi_cmnd *sc, void *data) * belongs to the LUN that we are resetting */ FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, - "Found IO in %s on lun\n", - fnic_ioreq_state_to_str(fnic_priv(sc)->state)); + "fnic<%d>: %s: %d: hwq: %d tag: 0x%x Found IO in state: %s on lun\n", + fnic->fnic_num, __func__, __LINE__, hwq, tag, + fnic_ioreq_state_to_str(fnic_priv(sc)->state)); cmd_state = fnic_priv(sc)->state; spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); if (cmd_state == FNIC_IOREQ_ABTS_PENDING)
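
Taken together, the hunks in this series replace fnic's hashed per-I/O spinlock (fnic_io_lock_hash()/fnic_io_lock_tag()) with per-hardware-queue state: each path first derives the hardware queue and per-queue tag from the blk-mq unique tag of the command's request, then takes &fnic->wq_copy_lock[hwq] and indexes sw_copy_wq[hwq].io_req_table[tag]. The fragment below is an illustrative sketch of that lookup pattern only, not fnic code: struct example_hba, its fixed array sizes, and example_lookup_io_req() are hypothetical stand-ins, while scsi_cmd_to_rq() and the blk_mq_unique_tag*() helpers are the real block-layer APIs used in the patches.

        #include <linux/blk-mq.h>
        #include <linux/spinlock.h>
        #include <scsi/scsi_cmnd.h>

        /*
         * Illustrative sketch only -- not fnic code.  struct example_hba and
         * its fixed array sizes are hypothetical stand-ins for the per-queue
         * state the series adds to struct fnic (wq_copy_lock[] and
         * sw_copy_wq[].io_req_table[]).
         */
        struct example_hba {
                spinlock_t      queue_lock[16];          /* one lock per hardware queue */
                void            *io_req_table[16][256];  /* per-queue tag -> io_req map */
        };

        static void *example_lookup_io_req(struct example_hba *hba,
                                           struct scsi_cmnd *sc)
        {
                struct request *rq = scsi_cmd_to_rq(sc);
                u32 mqtag = blk_mq_unique_tag(rq);          /* hwq encoded in the upper bits */
                u16 hwq = blk_mq_unique_tag_to_hwq(mqtag);  /* owning hardware queue */
                u16 tag = blk_mq_unique_tag_to_tag(mqtag);  /* tag index within that queue */
                unsigned long flags;
                void *io_req;

                /* Serialize against the completion/abort paths of this queue only. */
                spin_lock_irqsave(&hba->queue_lock[hwq], flags);
                io_req = hba->io_req_table[hwq][tag];
                spin_unlock_irqrestore(&hba->queue_lock[hwq], flags);

                return io_req;
        }

This is the reason the diffs drop the io_lock local variable throughout, pass hwq down so fnic_queue_abort_io_req() and fnic_queue_dr_io_req() use hw_copy_wq[hwq] instead of queue 0, and clear io_req_table[tag] under the same per-queue lock before releasing each io_req.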