From patchwork Wed Oct 2 18:23:57 2024
X-Patchwork-Submitter: Karan Tilak Kumar
X-Patchwork-Id: 832361
From: Karan Tilak Kumar
To: sebaddel@cisco.com
Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, aeasi@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar
Subject: [PATCH v4 01/14] scsi: fnic: Replace shost_printk with dev_info/dev_err
Date: Wed, 2 Oct 2024 11:23:57 -0700
Message-Id: <20241002182410.68093-2-kartilak@cisco.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20241002182410.68093-1-kartilak@cisco.com>
References: <20241002182410.68093-1-kartilak@cisco.com>
X-Mailing-List: linux-scsi@vger.kernel.org
List-Unsubscribe: MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: rcdn-core-8.cisco.com Sending host information to shost_printk prior to host initialization in fnic is unnecessary. Replace shost_printk and a printk prior to this initialization with dev_info and dev_err accordingly. Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Reviewed-by: Gian Carlo Boffa Signed-off-by: Karan Tilak Kumar --- Changes between v1 and v2: Incorporate review comments from Hannes: Replace pr_info with dev_info. Replace pr_err with dev_err. Modify patch heading and description appropriately. --- drivers/scsi/fnic/fnic_main.c | 84 ++++++++++++----------------------- drivers/scsi/fnic/fnic_res.c | 69 ++++++++++------------------ 2 files changed, 51 insertions(+), 102 deletions(-) diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c index 0044717d4486..1633f7e6af1e 100644 --- a/drivers/scsi/fnic/fnic_main.c +++ b/drivers/scsi/fnic/fnic_main.c @@ -344,25 +344,19 @@ void fnic_log_q_error(struct fnic *fnic) for (i = 0; i < fnic->raw_wq_count; i++) { error_status = ioread32(&fnic->wq[i].ctrl->error_status); if (error_status) - shost_printk(KERN_ERR, fnic->lport->host, - "WQ[%d] error_status" - " %d\n", i, error_status); + dev_err(&fnic->pdev->dev, "WQ[%d] error_status %d\n", i, error_status); } for (i = 0; i < fnic->rq_count; i++) { error_status = ioread32(&fnic->rq[i].ctrl->error_status); if (error_status) - shost_printk(KERN_ERR, fnic->lport->host, - "RQ[%d] error_status" - " %d\n", i, error_status); + dev_err(&fnic->pdev->dev, "RQ[%d] error_status %d\n", i, error_status); } for (i = 0; i < fnic->wq_copy_count; i++) { error_status = ioread32(&fnic->hw_copy_wq[i].ctrl->error_status); if (error_status) - shost_printk(KERN_ERR, fnic->lport->host, - "CWQ[%d] error_status" - " %d\n", i, error_status); + dev_err(&fnic->pdev->dev, "CWQ[%d] error_status %d\n", i, error_status); } } @@ -396,8 +390,7 @@ static int fnic_notify_set(struct fnic *fnic) err = vnic_dev_notify_set(fnic->vdev, fnic->wq_copy_count + fnic->copy_wq_base); break; default: - shost_printk(KERN_ERR, fnic->lport->host, - "Interrupt mode should be set up" + dev_err(&fnic->pdev->dev, "Interrupt mode should be set up" " before devcmd notify set %d\n", vnic_dev_get_intr_mode(fnic->vdev)); err = -1; @@ -567,12 +560,10 @@ static int fnic_scsi_drv_init(struct fnic *fnic) host->nr_hw_queues = fnic->wq_copy_count; - shost_printk(KERN_INFO, host, - "fnic: can_queue: %d max_lun: %llu", + dev_info(&fnic->pdev->dev, "fnic: can_queue: %d max_lun: %llu", host->can_queue, host->max_lun); - shost_printk(KERN_INFO, host, - "fnic: max_id: %d max_cmd_len: %d nr_hw_queues: %d", + dev_info(&fnic->pdev->dev, "fnic: max_id: %d max_cmd_len: %d nr_hw_queues: %d", host->max_id, host->max_cmd_len, host->nr_hw_queues); return 0; @@ -622,7 +613,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) */ lp = libfc_host_alloc(&fnic_host_template, sizeof(struct fnic)); if (!lp) { - printk(KERN_ERR PFX "Unable to alloc libfc local port\n"); + dev_err(&pdev->dev, "Unable to alloc libfc local port\n"); err = -ENOMEM; goto err_out; } @@ -632,7 +623,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) fnic_id = ida_alloc(&fnic_ida, GFP_KERNEL); if (fnic_id < 0) { - pr_err("Unable to alloc fnic ID\n"); + dev_err(&pdev->dev, "Unable to alloc fnic ID\n"); err = fnic_id; goto err_out_ida_alloc; } @@ -650,15 +641,13 @@ 
static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) err = pci_enable_device(pdev); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "Cannot enable PCI device, aborting.\n"); + dev_err(&fnic->pdev->dev, "Cannot enable PCI device, aborting.\n"); goto err_out_free_hba; } err = pci_request_regions(pdev, DRV_NAME); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "Cannot enable PCI resources, aborting\n"); + dev_err(&fnic->pdev->dev, "Cannot enable PCI resources, aborting\n"); goto err_out_disable_device; } @@ -672,8 +661,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) if (err) { err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "No usable DMA configuration " + dev_err(&fnic->pdev->dev, "No usable DMA configuration " "aborting\n"); goto err_out_release_regions; } @@ -681,8 +669,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) /* Map vNIC resources from BAR0 */ if (!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM)) { - shost_printk(KERN_ERR, fnic->lport->host, - "BAR0 not memory-map'able, aborting.\n"); + dev_err(&fnic->pdev->dev, "BAR0 not memory-map'able, aborting.\n"); err = -ENODEV; goto err_out_release_regions; } @@ -692,8 +679,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) fnic->bar0.len = pci_resource_len(pdev, 0); if (!fnic->bar0.vaddr) { - shost_printk(KERN_ERR, fnic->lport->host, - "Cannot memory-map BAR0 res hdr, " + dev_err(&fnic->pdev->dev, "Cannot memory-map BAR0 res hdr, " "aborting.\n"); err = -ENODEV; goto err_out_release_regions; @@ -701,8 +687,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) fnic->vdev = vnic_dev_register(NULL, fnic, pdev, &fnic->bar0); if (!fnic->vdev) { - shost_printk(KERN_ERR, fnic->lport->host, - "vNIC registration failed, " + dev_err(&fnic->pdev->dev, "vNIC registration failed, " "aborting.\n"); err = -ENODEV; goto err_out_iounmap; @@ -710,8 +695,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) err = vnic_dev_cmd_init(fnic->vdev); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "vnic_dev_cmd_init() returns %d, aborting\n", + dev_err(&fnic->pdev->dev, "vnic_dev_cmd_init() returns %d, aborting\n", err); goto err_out_vnic_unregister; } @@ -719,22 +703,19 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) err = fnic_dev_wait(fnic->vdev, vnic_dev_open, vnic_dev_open_done, CMD_OPENF_RQ_ENABLE_THEN_POST); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "vNIC dev open failed, aborting.\n"); + dev_err(&fnic->pdev->dev, "vNIC dev open failed, aborting.\n"); goto err_out_dev_cmd_deinit; } err = vnic_dev_init(fnic->vdev, 0); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "vNIC dev init failed, aborting.\n"); + dev_err(&fnic->pdev->dev, "vNIC dev init failed, aborting.\n"); goto err_out_dev_close; } err = vnic_dev_mac_addr(fnic->vdev, fnic->ctlr.ctl_src_addr); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "vNIC get MAC addr failed \n"); + dev_err(&fnic->pdev->dev, "vNIC get MAC addr failed\n"); goto err_out_dev_close; } /* set data_src for point-to-point mode and to keep it non-zero */ @@ -743,8 +724,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) /* Get vNIC configuration */ err = fnic_get_vnic_config(fnic); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "Get vNIC configuration failed, 
" + dev_err(&fnic->pdev->dev, "Get vNIC configuration failed, " "aborting.\n"); goto err_out_dev_close; } @@ -756,16 +736,14 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) err = fnic_set_intr_mode(fnic); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "Failed to set intr mode, " + dev_err(&fnic->pdev->dev, "Failed to set intr mode, " "aborting.\n"); goto err_out_dev_close; } err = fnic_alloc_vnic_resources(fnic); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "Failed to alloc vNIC resources, " + dev_err(&fnic->pdev->dev, "Failed to alloc vNIC resources, " "aborting.\n"); goto err_out_clear_intr; } @@ -778,7 +756,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) kzalloc((fnic->sw_copy_wq[hwq].ioreq_table_size + 1) * sizeof(struct fnic_io_req *), GFP_KERNEL); } - shost_printk(KERN_INFO, fnic->lport->host, "fnic copy wqs: %d, Q0 ioreq table size: %d\n", + dev_info(&fnic->pdev->dev, "fnic copy wqs: %d, Q0 ioreq table size: %d\n", fnic->wq_copy_count, fnic->sw_copy_wq[0].ioreq_table_size); /* initialize all fnic locks */ @@ -818,8 +796,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) fnic->ctlr.update_mac = fnic_update_mac; fnic->ctlr.get_src_addr = fnic_get_mac; if (fnic->config.flags & VFCF_FIP_CAPABLE) { - shost_printk(KERN_INFO, fnic->lport->host, - "firmware supports FIP\n"); + dev_info(&fnic->pdev->dev, "firmware supports FIP\n"); /* enable directed and multicast */ vnic_dev_packet_filter(fnic->vdev, 1, 1, 0, 0, 0); vnic_dev_add_addr(fnic->vdev, FIP_ALL_ENODE_MACS); @@ -835,8 +812,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) INIT_LIST_HEAD(&fnic->evlist); INIT_LIST_HEAD(&fnic->vlans); } else { - shost_printk(KERN_INFO, fnic->lport->host, - "firmware uses non-FIP mode\n"); + dev_info(&fnic->pdev->dev, "firmware uses non-FIP mode\n"); fcoe_ctlr_init(&fnic->ctlr, FIP_MODE_NON_FIP); fnic->ctlr.state = FIP_ST_NON_FIP; } @@ -851,8 +827,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) /* Setup notification buffer area */ err = fnic_notify_set(fnic); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "Failed to alloc notify buffer, aborting.\n"); + dev_err(&fnic->pdev->dev, "Failed to alloc notify buffer, aborting.\n"); goto err_out_free_max_pool; } @@ -864,8 +839,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) for (i = 0; i < fnic->rq_count; i++) { err = vnic_rq_fill(&fnic->rq[i], fnic_alloc_rq_frame); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "fnic_alloc_rq_frame can't alloc " + dev_err(&fnic->pdev->dev, "fnic_alloc_rq_frame can't alloc " "frame\n"); goto err_out_rq_buf; } @@ -883,8 +857,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) err = fnic_request_intr(fnic); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "Unable to request irq.\n"); + dev_err(&fnic->pdev->dev, "Unable to request irq.\n"); goto err_out_request_intr; } @@ -894,8 +867,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) */ err = scsi_add_host(lp->host, &pdev->dev); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "fnic: scsi_add_host failed...exiting\n"); + dev_err(&fnic->pdev->dev, "fnic: scsi_add_host failed...exiting\n"); goto err_out_scsi_add_host; } diff --git a/drivers/scsi/fnic/fnic_res.c b/drivers/scsi/fnic/fnic_res.c index 33dd27f6f24e..dd24e25574db 100644 --- a/drivers/scsi/fnic/fnic_res.c +++ 
b/drivers/scsi/fnic/fnic_res.c @@ -30,9 +30,7 @@ int fnic_get_vnic_config(struct fnic *fnic) offsetof(struct vnic_fc_config, m), \ sizeof(c->m), &c->m); \ if (err) { \ - shost_printk(KERN_ERR, fnic->lport->host, \ - "Error getting %s, %d\n", #m, \ - err); \ + dev_err(&fnic->pdev->dev, "Error getting %s, %d\n", #m, err); \ return err; \ } \ } while (0); @@ -139,40 +137,29 @@ int fnic_get_vnic_config(struct fnic *fnic) c->wq_copy_count = min_t(u16, FNIC_WQ_COPY_MAX, c->wq_copy_count); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC MAC addr %pM " + dev_info(&fnic->pdev->dev, "vNIC MAC addr %pM " "wq/wq_copy/rq %d/%d/%d\n", fnic->ctlr.ctl_src_addr, c->wq_enet_desc_count, c->wq_copy_desc_count, c->rq_desc_count); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC node wwn %llx port wwn %llx\n", + dev_info(&fnic->pdev->dev, "vNIC node wwn %llx port wwn %llx\n", c->node_wwn, c->port_wwn); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC ed_tov %d ra_tov %d\n", + dev_info(&fnic->pdev->dev, "vNIC ed_tov %d ra_tov %d\n", c->ed_tov, c->ra_tov); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC mtu %d intr timer %d\n", + dev_info(&fnic->pdev->dev, "vNIC mtu %d intr timer %d\n", c->maxdatafieldsize, c->intr_timer); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC flags 0x%x luns per tgt %d\n", + dev_info(&fnic->pdev->dev, "vNIC flags 0x%x luns per tgt %d\n", c->flags, c->luns_per_tgt); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC flogi_retries %d flogi timeout %d\n", + dev_info(&fnic->pdev->dev, "vNIC flogi_retries %d flogi timeout %d\n", c->flogi_retries, c->flogi_timeout); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC plogi retries %d plogi timeout %d\n", + dev_info(&fnic->pdev->dev, "vNIC plogi retries %d plogi timeout %d\n", c->plogi_retries, c->plogi_timeout); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC io throttle count %d link dn timeout %d\n", + dev_info(&fnic->pdev->dev, "vNIC io throttle count %d link dn timeout %d\n", c->io_throttle_count, c->link_down_timeout); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC port dn io retries %d port dn timeout %d\n", + dev_info(&fnic->pdev->dev, "vNIC port dn io retries %d port dn timeout %d\n", c->port_down_io_retries, c->port_down_timeout); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC wq_copy_count: %d\n", c->wq_copy_count); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC intr mode: %d\n", c->intr_mode); + dev_info(&fnic->pdev->dev, "vNIC wq_copy_count: %d\n", c->wq_copy_count); + dev_info(&fnic->pdev->dev, "vNIC intr mode: %d\n", c->intr_mode); return 0; } @@ -206,18 +193,12 @@ void fnic_get_res_counts(struct fnic *fnic) fnic->intr_count = vnic_dev_get_res_count(fnic->vdev, RES_TYPE_INTR_CTRL); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC fw resources wq_count: %d\n", fnic->wq_count); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC fw resources raw_wq_count: %d\n", fnic->raw_wq_count); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC fw resources wq_copy_count: %d\n", fnic->wq_copy_count); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC fw resources rq_count: %d\n", fnic->rq_count); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC fw resources cq_count: %d\n", fnic->cq_count); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC fw resources intr_count: %d\n", fnic->intr_count); + dev_info(&fnic->pdev->dev, "vNIC fw resources wq_count: %d\n", fnic->wq_count); + dev_info(&fnic->pdev->dev, "vNIC fw resources raw_wq_count: %d\n", fnic->raw_wq_count); + 
dev_info(&fnic->pdev->dev, "vNIC fw resources wq_copy_count: %d\n", fnic->wq_copy_count); + dev_info(&fnic->pdev->dev, "vNIC fw resources rq_count: %d\n", fnic->rq_count); + dev_info(&fnic->pdev->dev, "vNIC fw resources cq_count: %d\n", fnic->cq_count); + dev_info(&fnic->pdev->dev, "vNIC fw resources intr_count: %d\n", fnic->intr_count); } void fnic_free_vnic_resources(struct fnic *fnic) @@ -253,19 +234,17 @@ int fnic_alloc_vnic_resources(struct fnic *fnic) intr_mode = vnic_dev_get_intr_mode(fnic->vdev); - shost_printk(KERN_INFO, fnic->lport->host, "vNIC interrupt mode: %s\n", + dev_info(&fnic->pdev->dev, "vNIC interrupt mode: %s\n", intr_mode == VNIC_DEV_INTR_MODE_INTX ? "legacy PCI INTx" : intr_mode == VNIC_DEV_INTR_MODE_MSI ? "MSI" : intr_mode == VNIC_DEV_INTR_MODE_MSIX ? "MSI-X" : "unknown"); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC resources avail: wq %d cp_wq %d raw_wq %d rq %d", + dev_info(&fnic->pdev->dev, "res avail: wq %d cp_wq %d raw_wq %d rq %d", fnic->wq_count, fnic->wq_copy_count, fnic->raw_wq_count, fnic->rq_count); - shost_printk(KERN_INFO, fnic->lport->host, - "vNIC resources avail: cq %d intr %d cpy-wq desc count %d\n", + dev_info(&fnic->pdev->dev, "res avail: cq %d intr %d cpy-wq desc count %d\n", fnic->cq_count, fnic->intr_count, fnic->config.wq_copy_desc_count); @@ -340,8 +319,7 @@ int fnic_alloc_vnic_resources(struct fnic *fnic) RES_TYPE_INTR_PBA_LEGACY, 0); if (!fnic->legacy_pba && intr_mode == VNIC_DEV_INTR_MODE_INTX) { - shost_printk(KERN_ERR, fnic->lport->host, - "Failed to hook legacy pba resource\n"); + dev_err(&fnic->pdev->dev, "Failed to hook legacy pba resource\n"); err = -ENODEV; goto err_out_cleanup; } @@ -444,8 +422,7 @@ int fnic_alloc_vnic_resources(struct fnic *fnic) /* init the stats memory by making the first call here */ err = vnic_dev_stats_dump(fnic->vdev, &fnic->stats); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "vnic_dev_stats_dump failed - x%x\n", err); + dev_err(&fnic->pdev->dev, "vnic_dev_stats_dump failed - x%x\n", err); goto err_out_cleanup; }

From patchwork Wed Oct 2 18:23:59 2024
X-Patchwork-Submitter: Karan Tilak Kumar
X-Patchwork-Id: 832360
From: Karan Tilak Kumar
To: sebaddel@cisco.com
Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, aeasi@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar, kernel test robot
Subject: [PATCH v4 03/14] scsi: fnic: Add support for fabric based solicited requests and responses
Date: Wed, 2 Oct 2024 11:23:59 -0700
Message-Id: <20241002182410.68093-4-kartilak@cisco.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20241002182410.68093-1-kartilak@cisco.com>
References: <20241002182410.68093-1-kartilak@cisco.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Add fdls_disc.c to support fabric based solicited requests and responses. Clean up obsolete code but keep the function template so as to not break compilation. Remove duplicate definitions from header files. Modify definitions of data members. Reported-by: kernel test robot Closes: https://lore.kernel.org/oe-kbuild-all/202406112309.8GiDUvIM-lkp@intel.com/ Closes: https://lore.kernel.org/oe-kbuild-all/202406120201.VakI9Dly-lkp@intel.com/ Reviewed-by: Sesidhar Baddela Signed-off-by: Gian Carlo Boffa Signed-off-by: Arulprabhu Ponnusamy Signed-off-by: Arun Easi Signed-off-by: Karan Tilak Kumar --- Changes between v2 and v3: Fix issue found by kernel test robot. Incorporate review comments from Hannes: Replace redundant structure definitions with standard definitions. Replace static OXIDs with pool-based OXIDs for fabric. Changes between v1 and v2: Incorporate review comments from Hannes: Remove double newline. Fix indentation in fnic_send_frame. Use the correct kernel-doc format. Replace htonll() with get_unaligned_be64(). Add fnic_del_fabric_timer_sync as an inline function. Add fnic_del_tport_timer_sync as an inline function. Modify fc_abts_s variable name to fc_fabric_abts_s. Modify fc_fabric_abts_s to be a global frame.
Rename pfc_abts to fabric_abts. Replace definitions with standard definitions from fc_els.h. Incorporate review comments from John: Replace kmalloc with kzalloc. Modify fnic_send_fcoe_frame to not send a return value. Replace simultaneous use of pointer and structure with just pointer. Fix issue found by kernel test robot. --- drivers/scsi/fnic/Makefile | 1 + drivers/scsi/fnic/fdls_disc.c | 1904 +++++++++++++++++++++++++++++++++ drivers/scsi/fnic/fnic.h | 26 +- drivers/scsi/fnic/fnic_fcs.c | 400 ++++--- drivers/scsi/fnic/fnic_fdls.h | 5 +- drivers/scsi/fnic/fnic_io.h | 11 - drivers/scsi/fnic/fnic_main.c | 10 +- drivers/scsi/fnic/fnic_scsi.c | 6 +- 8 files changed, 2169 insertions(+), 194 deletions(-) create mode 100644 drivers/scsi/fnic/fdls_disc.c diff --git a/drivers/scsi/fnic/Makefile b/drivers/scsi/fnic/Makefile index 6214a6b2e96d..3bd6b1c8b643 100644 --- a/drivers/scsi/fnic/Makefile +++ b/drivers/scsi/fnic/Makefile @@ -7,6 +7,7 @@ fnic-y := \ fnic_main.o \ fnic_res.o \ fnic_fcs.o \ + fdls_disc.o \ fnic_scsi.o \ fnic_trace.o \ fnic_debugfs.o \ diff --git a/drivers/scsi/fnic/fdls_disc.c b/drivers/scsi/fnic/fdls_disc.c new file mode 100644 index 000000000000..d203398b2743 --- /dev/null +++ b/drivers/scsi/fnic/fdls_disc.c @@ -0,0 +1,1904 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright 2008 Cisco Systems, Inc. All rights reserved. + * Copyright 2007 Nuova Systems, Inc. All rights reserved. + */ + +#include +#include "fnic.h" +#include "fdls_fc.h" +#include "fnic_fdls.h" +#include +#include + +/* Frame initialization */ +/* + * Variables: + * sid + */ +struct fc_std_flogi fnic_std_flogi_req = { + .fchdr = {.fh_r_ctl = FC_RCTL_ELS_REQ, .fh_d_id = {0xFF, 0xFF, 0xFE}, + .fh_type = FC_TYPE_ELS, .fh_f_ctl = {FNIC_ELS_REQ_FCTL, 0, 0}, + .fh_rx_id = 0xFFFF}, + .els.fl_cmd = ELS_FLOGI, + .els.fl_csp = {.sp_hi_ver = FNIC_FC_PH_VER_HI, + .sp_lo_ver = FNIC_FC_PH_VER_LO, + .sp_bb_cred = cpu_to_be16(FNIC_FC_B2B_CREDIT), + .sp_bb_data = cpu_to_be16(FNIC_FC_B2B_RDF_SZ)}, + .els.fl_cssp[2].cp_class = cpu_to_be16(FC_CPC_VALID | FC_CPC_SEQ) +}; + +/* + * Variables: + * sid, did(nport logins), ox_id(nport logins), nport_name, node_name + */ +struct fc_std_flogi fnic_std_plogi_req = { + .fchdr = {.fh_r_ctl = FC_RCTL_ELS_REQ, .fh_d_id = {0xFF, 0xFF, 0xFC}, + .fh_type = FC_TYPE_ELS, .fh_f_ctl = {FNIC_ELS_REQ_FCTL, 0, 0}, + .fh_rx_id = 0xFFFF}, + .els = { + .fl_cmd = ELS_PLOGI, + .fl_csp = {.sp_hi_ver = FNIC_FC_PH_VER_HI, + .sp_lo_ver = FNIC_FC_PH_VER_LO, + .sp_bb_cred = cpu_to_be16(FNIC_FC_B2B_CREDIT), + .sp_features = cpu_to_be16(FC_SP_FT_CIRO), + .sp_bb_data = cpu_to_be16(FNIC_FC_B2B_RDF_SZ), + .sp_tot_seq = cpu_to_be16(FNIC_FC_CONCUR_SEQS), + .sp_rel_off = cpu_to_be16(FNIC_FC_RO_INFO), + .sp_e_d_tov = cpu_to_be32(FNIC_E_D_TOV)}, + .fl_cssp[2].cp_class = cpu_to_be16(FC_CPC_VALID | FC_CPC_SEQ), + .fl_cssp[2].cp_rdfs = cpu_to_be16(0x800), + .fl_cssp[2].cp_con_seq = cpu_to_be16(0xFF), + .fl_cssp[2].cp_open_seq = 1} +}; + +/* + * Variables: + * sid, port_id, port_name + */ +struct fc_std_rpn_id fnic_std_rpn_id_req = { + .fchdr = {.fh_r_ctl = FC_RCTL_DD_UNSOL_CTL, + .fh_d_id = {0xFF, 0xFF, 0xFC}, .fh_type = FC_TYPE_CT, + .fh_f_ctl = {FNIC_ELS_REQ_FCTL, 0, 0}, + .fh_rx_id = 0xFFFF}, + .fc_std_ct_hdr = {.ct_rev = FC_CT_REV, .ct_fs_type = FC_FST_DIR, + .ct_fs_subtype = FC_NS_SUBTYPE, + .ct_cmd = cpu_to_be16(FC_NS_RPN_ID)} +}; + +/* + * Variables: + * fh_s_id, port_id, port_name + */ +struct fc_std_rft_id fnic_std_rft_id_req = { + .fchdr = {.fh_r_ctl = FC_RCTL_DD_UNSOL_CTL, + .fh_d_id = {0xFF, 0xFF, 
0xFC}, .fh_type = FC_TYPE_CT, + .fh_f_ctl = {FNIC_ELS_REQ_FCTL, 0, 0}, + .fh_rx_id = 0xFFFF}, + .fc_std_ct_hdr = {.ct_rev = FC_CT_REV, .ct_fs_type = FC_FST_DIR, + .ct_fs_subtype = FC_NS_SUBTYPE, + .ct_cmd = cpu_to_be16(FC_NS_RFT_ID)} +}; + +/* + * Variables: + * fh_s_id, port_id, port_name + */ +struct fc_std_rff_id fnic_std_rff_id_req = { + .fchdr = {.fh_r_ctl = FC_RCTL_DD_UNSOL_CTL, + .fh_d_id = {0xFF, 0xFF, 0xFC}, .fh_type = FC_TYPE_CT, + .fh_f_ctl = {FNIC_ELS_REQ_FCTL, 0, 0}, + .fh_rx_id = 0xFFFF}, + .fc_std_ct_hdr = {.ct_rev = FC_CT_REV, .ct_fs_type = FC_FST_DIR, + .ct_fs_subtype = FC_NS_SUBTYPE, + .ct_cmd = cpu_to_be16(FC_NS_RFF_ID)}, + .rff_id.fr_feat = 0x2, + .rff_id.fr_type = FC_TYPE_FCP +}; + +/* + * Variables: + * sid + */ +struct fc_std_gpn_ft fnic_std_gpn_ft_req = { + .fchdr = {.fh_r_ctl = FC_RCTL_DD_UNSOL_CTL, + .fh_d_id = {0xFF, 0xFF, 0xFC}, .fh_type = FC_TYPE_CT, + .fh_f_ctl = {FNIC_ELS_REQ_FCTL, 0, 0}, + .fh_rx_id = 0xFFFF}, + .fc_std_ct_hdr = {.ct_rev = FC_CT_REV, .ct_fs_type = FC_FST_DIR, + .ct_fs_subtype = FC_NS_SUBTYPE, + .ct_cmd = cpu_to_be16(FC_NS_GPN_FT)}, + .gpn_ft.fn_fc4_type = 0x08 +}; + +/* + * Variables: + * sid + */ +struct fc_std_scr fnic_std_scr_req = { + .fchdr = {.fh_r_ctl = FC_RCTL_ELS_REQ, + .fh_d_id = {0xFF, 0xFF, 0xFD}, .fh_type = FC_TYPE_ELS, + .fh_f_ctl = {FNIC_ELS_REQ_FCTL, 0, 0}, + .fh_rx_id = 0xFFFF}, + .scr = {.scr_cmd = ELS_SCR, + .scr_reg_func = ELS_SCRF_FULL} +}; + +/* + * Variables: + * did, ox_id, rx_id, fcid, wwpn + */ +struct fc_std_logo fnic_std_logo_req = { + .fchdr = {.fh_r_ctl = FC_RCTL_ELS_REQ, .fh_type = FC_TYPE_ELS, + .fh_f_ctl = {FNIC_ELS_REQ_FCTL, 0, 0}}, + .els.fl_cmd = ELS_LOGO, +}; + +struct fc_frame_header fc_std_fabric_abts = { + .fh_r_ctl = FC_RCTL_BA_ABTS, /* ABTS */ + .fh_d_id = {0xFF, 0xFF, 0xFF}, .fh_s_id = {0x00, 0x00, 0x00}, + .fh_cs_ctl = 0x00, .fh_type = FC_TYPE_BLS, + .fh_f_ctl = {FNIC_REQ_ABTS_FCTL, 0, 0}, .fh_seq_id = 0x00, + .fh_df_ctl = 0x00, .fh_seq_cnt = 0x0000, .fh_rx_id = 0xFFFF, + .fh_parm_offset = 0x00000000, /* bit:0 = 0 Abort a exchange */ +}; + +#define RETRIES_EXHAUSTED(iport) \ + (iport->fabric.retry_counter == FABRIC_LOGO_MAX_RETRY) + +/* + * For fabric requests and fdmi, once OXIDs are allocated from the pool + * (and a range) they are encoded with expected rsp type as + * they come in convenience with identifying the received frames + * and debugging with switches (OXID pool base will be chosen + * such a way bits 6-11 are always 0) oxid(16 bits) - + * bits 0-5: idx into the pool(bitmap) + * bits 6-11: expected response types + */ +#define FDLS_OXID_ENCODE(oxid, rsp_type) ((oxid) | (rsp_type << 6)) +#define FDLS_OXID_RSP_TYPE_UNMASKED(oxid) ((oxid) & ~0x0FC0) +#define FDLS_OXID_TO_RSP_TYPE(oxid) (((oxid) >> 6) & 0x3F) +/* meta->size has to be power-of-2 */ +#define FDLS_OXID_TO_IDX(meta, oxid) ((oxid) & (meta->sz - 1)) + +/* Private Functions */ +static void fdls_process_flogi_rsp(struct fnic_iport_s *iport, + struct fc_frame_header *fchdr, + void *rx_frame); +static void fnic_fdls_start_plogi(struct fnic_iport_s *iport); +static void fdls_start_fabric_timer(struct fnic_iport_s *iport, + int timeout); +static void +fdls_init_fabric_oxid_pool(struct fnic_fabric_oxid_pool_s *oxid_pool, + int oxid_base, int sz); + +void fdls_init_oxid_pool(struct fnic_iport_s *iport) +{ + fdls_init_fabric_oxid_pool(&iport->fabric_oxid_pool, + FDLS_FABRIC_OXID_POOL_BASE, + FDLS_FABRIC_OXID_POOL_SZ); + + fdls_init_fabric_oxid_pool(&iport->fdmi_oxid_pool, + FDLS_FDMI_OXID_POOL_BASE, + FDLS_FDMI_OXID_POOL_SZ); +} + 
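/*
 * (Editor's note: illustrative sketch only, not part of the patch.)
 * The OXID encoding macros above pack the expected response type into
 * bits 6-11 of an allocated exchange ID; this works because the pool
 * base is chosen with bits 6-11 clear and the pool size keeps the
 * bitmap index within bits 0-5. A worked example, assuming a
 * hypothetical pool base of 0x1000 and a pool size of 64 (the real
 * FDLS_FABRIC_OXID_POOL_BASE/_SZ values are defined outside this
 * excerpt):
 *
 *   idx = 5                             -> oxid = 0x1000 + 5 = 0x1005
 *   FDLS_OXID_ENCODE(0x1005, 3)         -> 0x1005 | (3 << 6)    = 0x10C5
 *   FDLS_OXID_TO_RSP_TYPE(0x10C5)       -> (0x10C5 >> 6) & 0x3F = 0x3
 *   FDLS_OXID_TO_IDX(meta, 0x10C5)      -> 0x10C5 & (64 - 1)    = 0x5
 *   FDLS_OXID_RSP_TYPE_UNMASKED(0x10C5) -> 0x10C5 & ~0x0FC0     = 0x1005
 */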
+uint16_t fdls_alloc_oxid(struct fnic_iport_s *iport, + struct fnic_oxid_pool_meta_s *meta, + unsigned long *bitmap) +{ + struct fnic *fnic = iport->fnic; + struct reclaim_entry_s *reclaim_entry, *next; + int idx; + uint16_t oxid; + unsigned long expiry_ts; + + /* first walk through the oxid slots and reclaim any expired */ + list_for_each_entry_safe(reclaim_entry, next, + &(meta->reclaim_list), links) { + expiry_ts = reclaim_entry->timestamp + + msecs_to_jiffies(2 * iport->r_a_tov); + if (time_after(jiffies, expiry_ts)) { + list_del(&reclaim_entry->links); + fdls_free_oxid(iport, meta, bitmap, reclaim_entry->oxid); + kfree(reclaim_entry); + } + } + + /* Allocate next available oxid from bitmap */ + idx = find_next_zero_bit(bitmap, meta->sz, meta->next_idx); + + if (idx == meta->sz) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Alloc oxid: all oxid slots are busy iport state:%d\n", + iport->state); + return 0xFFFF; + } + + /* adding the oxid index to pool base will give the oxid */ + oxid = idx + meta->oxid_base; + set_bit(idx, bitmap); + meta->next_idx = (idx + 1) % meta->sz; /* cycle through the bitmap */ + + return oxid; +} + +void fdls_free_oxid(struct fnic_iport_s *iport, + struct fnic_oxid_pool_meta_s *meta, + unsigned long *bitmap, uint16_t oxid) +{ + struct fnic *fnic = iport->fnic; + int idx; + + idx = FDLS_OXID_TO_IDX(meta, oxid); + + if (!test_bit(idx, bitmap)) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Free oxid: already freed, iport state:%d\n", + iport->state); + } + clear_bit(idx, bitmap); +} + +static void +fdls_init_fabric_oxid_pool(struct fnic_fabric_oxid_pool_s *oxid_pool, + int oxid_base, int sz) +{ + memset(oxid_pool, 0, sizeof(struct fnic_fabric_oxid_pool_s)); + oxid_pool->meta.oxid_base = oxid_base; + oxid_pool->meta.sz = sz; + INIT_LIST_HEAD(&oxid_pool->meta.reclaim_list); +} + +/* + * fdls_alloc_fabric_oxid - Allocate fabric oxid from a pool + * @iport: Handle to fnic iport + * @oxid_pool: Pool to allocate from + * @exp_rsp_type: Response type that we expect + * + * Wrapper for fabric and fdmi oxid allocations. + * Called with fnic_lock held. + * Encode the allocated OXID with expected rsp type for easier + * identification of the rx response and convenient debugging with switches + */ +uint16_t fdls_alloc_fabric_oxid(struct fnic_iport_s *iport, + struct fnic_fabric_oxid_pool_s *oxid_pool, + int exp_rsp_type) +{ + uint16_t oxid; + + oxid = fdls_alloc_oxid(iport, &oxid_pool->meta, oxid_pool->bitmap); + + oxid = FDLS_OXID_ENCODE(oxid, exp_rsp_type); + + /* Save the corresponding active oxid for abort, if needed. + * Since fabric requests are serialized, they need only one. + * fdmi_reg_hba and fdmi_hpa can go in parallel. Save them + * separately. 
+ */ + switch (exp_rsp_type) { + default: + oxid_pool->active_oxid_fabric_req = oxid; + break; + } + + return oxid; +} + +inline void fdls_free_fabric_oxid(struct fnic_iport_s *iport, + struct fnic_fabric_oxid_pool_s + *oxid_pool, uint16_t oxid) +{ + fdls_free_oxid(iport, &oxid_pool->meta, oxid_pool->bitmap, oxid); +} + +static void fdls_schedule_oxid_free(struct fnic_oxid_pool_meta_s *meta, + uint16_t oxid) +{ + struct reclaim_entry_s *reclaim_entry; + + reclaim_entry = (struct reclaim_entry_s *) + kzalloc(sizeof(struct reclaim_entry_s), GFP_KERNEL); + reclaim_entry->oxid = oxid; + reclaim_entry->timestamp = jiffies; + + list_add_tail(&reclaim_entry->links, &meta->reclaim_list); +} + +static inline void fdls_schedule_fabric_oxid_free(struct fnic_iport_s + *iport) +{ + fdls_schedule_oxid_free(&iport->fabric_oxid_pool.meta, + iport->fabric_oxid_pool.active_oxid_fabric_req); +} + +int fnic_fdls_expected_rsp(struct fnic_iport_s *iport, uint16_t oxid) +{ + struct fnic *fnic = iport->fnic; + + /* Received oxid should match with one of the following */ + if ((oxid != iport->fabric_oxid_pool.active_oxid_fabric_req) && + (oxid != iport->fdmi_oxid_pool.active_oxid_fdmi_plogi) && + (oxid != iport->fdmi_oxid_pool.active_oxid_fdmi_rhba) && + (oxid != iport->fdmi_oxid_pool.active_oxid_fdmi_rpa)) { + FNIC_FCS_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "Received oxid(0x%x) not matching in oxid pool (0x%x) state:%d", + oxid, iport->fabric_oxid_pool.active_oxid_fabric_req, + iport->fabric.state); + return 0xFFFF; + } + + return FDLS_OXID_TO_RSP_TYPE(oxid); +} + +static int fdls_is_oxid_in_fabric_range(uint16_t oxid) +{ + uint16_t oxid_unmasked = FDLS_OXID_RSP_TYPE_UNMASKED(oxid); + + return ((oxid_unmasked >= FDLS_FABRIC_OXID_POOL_BASE) && + (oxid_unmasked <= FDLS_FABRIC_OXID_POOL_END)); +} + +inline void fnic_del_fabric_timer_sync(struct fnic *fnic) +{ + fnic->iport.fabric.del_timer_inprogress = 1; + spin_unlock_irqrestore(&fnic->fnic_lock, fnic->lock_flags); + del_timer_sync(&fnic->iport.fabric.retry_timer); + spin_lock_irqsave(&fnic->fnic_lock, fnic->lock_flags); + fnic->iport.fabric.del_timer_inprogress = 0; +} + +inline void fnic_del_tport_timer_sync(struct fnic *fnic, + struct fnic_tport_s *tport) +{ + tport->del_timer_inprogress = 1; + spin_unlock_irqrestore(&fnic->fnic_lock, fnic->lock_flags); + del_timer_sync(&tport->retry_timer); + spin_lock_irqsave(&fnic->fnic_lock, fnic->lock_flags); + tport->del_timer_inprogress = 0; +} + +static void +fdls_start_fabric_timer(struct fnic_iport_s *iport, int timeout) +{ + u64 fabric_tov; + struct fnic *fnic = iport->fnic; + + if (iport->fabric.timer_pending) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "iport fcid: 0x%x: Canceling fabric disc timer\n", + iport->fcid); + fnic_del_fabric_timer_sync(fnic); + iport->fabric.timer_pending = 0; + } + + if (!(iport->fabric.flags & FNIC_FDLS_FABRIC_ABORT_ISSUED)) + iport->fabric.retry_counter++; + + fabric_tov = jiffies + msecs_to_jiffies(timeout); + mod_timer(&iport->fabric.retry_timer, round_jiffies(fabric_tov)); + iport->fabric.timer_pending = 1; + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fabric timer is %d ", timeout); +} + +static void fdls_send_fabric_abts(struct fnic_iport_s *iport) +{ + uint8_t fcid[3]; + struct fnic *fnic = iport->fnic; + struct fc_frame_header fabric_abort = fc_std_fabric_abts; + struct fc_frame_header *fabric_abts = &fabric_abort; + uint16_t oxid; + + switch (iport->fabric.state) { + case FDLS_STATE_FABRIC_LOGO: + fabric_abts->fh_d_id[2] = 
0xFE; + break; + + case FDLS_STATE_FABRIC_FLOGI: + fabric_abts->fh_d_id[2] = 0xFE; + break; + + case FDLS_STATE_FABRIC_PLOGI: + hton24(fcid, iport->fcid); + FNIC_STD_SET_S_ID(fabric_abts, fcid); + fabric_abts->fh_d_id[2] = 0xFC; + break; + + case FDLS_STATE_RPN_ID: + hton24(fcid, iport->fcid); + FNIC_STD_SET_S_ID(fabric_abts, fcid); + fabric_abts->fh_d_id[2] = 0xFC; + break; + + case FDLS_STATE_SCR: + hton24(fcid, iport->fcid); + FNIC_STD_SET_S_ID(fabric_abts, fcid); + fabric_abts->fh_d_id[2] = 0xFD; + break; + + case FDLS_STATE_REGISTER_FC4_TYPES: + hton24(fcid, iport->fcid); + FNIC_STD_SET_S_ID(fabric_abts, fcid); + fabric_abts->fh_d_id[2] = 0xFC; + break; + + case FDLS_STATE_REGISTER_FC4_FEATURES: + hton24(fcid, iport->fcid); + FNIC_STD_SET_S_ID(fabric_abts, fcid); + fabric_abts->fh_d_id[2] = 0xFC; + break; + + case FDLS_STATE_GPN_FT: + hton24(fcid, iport->fcid); + FNIC_STD_SET_S_ID(fabric_abts, fcid); + fabric_abts->fh_d_id[2] = 0xFC; + break; + default: + return; + } + + oxid = iport->fabric_oxid_pool.active_oxid_fabric_req; + FNIC_STD_SET_OX_ID(fabric_abts, htons(oxid)); + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FDLS sending fabric abts. iport->fabric.state: %d, oxid:%x", + iport->fabric.state, oxid); + + iport->fabric.flags |= FNIC_FDLS_FABRIC_ABORT_ISSUED; + + fnic_send_fcoe_frame(iport, fabric_abts, + sizeof(struct fc_frame_header)); + /* Even if fnic_send_fcoe_frame() fails we want to retry after timeout */ + fdls_start_fabric_timer(iport, 2 * iport->e_d_tov); + iport->fabric.timer_pending = 1; +} + +static void fdls_send_fabric_flogi(struct fnic_iport_s *iport) +{ + struct fc_std_flogi flogi; + struct fnic *fnic = iport->fnic; + uint16_t oxid; + + memcpy(&flogi, &fnic_std_flogi_req, sizeof(struct fc_std_flogi)); + FNIC_LOGI_SET_NPORT_NAME(&flogi.els, iport->wwpn); + FNIC_LOGI_SET_NODE_NAME(&flogi.els, iport->wwnn); + FNIC_LOGI_SET_RDF_SIZE(&flogi.els, iport->max_payload_size); + FNIC_LOGI_SET_R_A_TOV(&flogi.els, iport->r_a_tov); + FNIC_LOGI_SET_E_D_TOV(&flogi.els, iport->e_d_tov); + + oxid = fdls_alloc_fabric_oxid(iport, &iport->fabric_oxid_pool, + FNIC_FABRIC_FLOGI_RSP); + if (oxid == 0xFFFF) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Failed to allocate OXID to send flogi %p", iport); + return; + } + FNIC_STD_SET_OX_ID(&flogi.fchdr, htons(oxid)); + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: FDLS send fabric flogi with oxid:%x", iport->fcid, + oxid); + + fnic_send_fcoe_frame(iport, &flogi, sizeof(struct fc_std_flogi)); + /* Even if fnic_send_fcoe_frame() fails we want to retry after timeout */ + fdls_start_fabric_timer(iport, 2 * iport->e_d_tov); +} + +static void fdls_send_fabric_plogi(struct fnic_iport_s *iport) +{ + struct fc_std_flogi plogi; + struct fc_frame_header *fchdr = &plogi.fchdr; + uint8_t fcid[3]; + struct fnic *fnic = iport->fnic; + uint16_t oxid; + + memcpy(&plogi, &fnic_std_plogi_req, sizeof(struct fc_std_flogi)); + + hton24(fcid, iport->fcid); + + FNIC_STD_SET_S_ID(fchdr, fcid); + FNIC_LOGI_SET_NPORT_NAME(&plogi.els, iport->wwpn); + FNIC_LOGI_SET_NODE_NAME(&plogi.els, iport->wwnn); + FNIC_LOGI_SET_RDF_SIZE(&plogi.els, iport->max_payload_size); + + oxid = fdls_alloc_fabric_oxid(iport, &iport->fabric_oxid_pool, + FNIC_FABRIC_PLOGI_RSP); + if (oxid == 0xFFFF) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Failed to allocate OXID to send fabric plogi %p", + iport); + return; + } + FNIC_STD_SET_OX_ID(fchdr, htons(oxid)); + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, 
fnic->fnic_num, + "0x%x: FDLS send fabric PLOGI with oxid:%x", iport->fcid, + oxid); + + fnic_send_fcoe_frame(iport, &plogi, sizeof(struct fc_std_flogi)); + /* Even if fnic_send_fcoe_frame() fails we want to retry after timeout */ + fdls_start_fabric_timer(iport, 2 * iport->e_d_tov); +} + +static void fdls_send_rpn_id(struct fnic_iport_s *iport) +{ + struct fc_std_rpn_id rpn_id; + uint8_t fcid[3]; + struct fnic *fnic = iport->fnic; + uint16_t oxid; + + memcpy(&rpn_id, &fnic_std_rpn_id_req, sizeof(struct fc_std_rpn_id)); + + hton24(fcid, iport->fcid); + + FNIC_STD_SET_S_ID((&rpn_id.fchdr), fcid); + FNIC_STD_SET_PORT_ID((&rpn_id.rpn_id), fcid); + FNIC_STD_SET_PORT_NAME((&rpn_id.rpn_id), iport->wwpn); + + oxid = fdls_alloc_fabric_oxid(iport, &iport->fabric_oxid_pool, + FNIC_FABRIC_RPN_RSP); + if (oxid == 0xFFFF) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Failed to allocate OXID to send rpn id %p", iport); + return; + } + FNIC_STD_SET_OX_ID(&rpn_id.fchdr, htons(oxid)); + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: FDLS send RPN ID with oxid:%x", iport->fcid, oxid); + + fnic_send_fcoe_frame(iport, &rpn_id, sizeof(struct fc_std_rpn_id)); + /* Even if fnic_send_fcoe_frame() fails we want to retry after timeout */ + fdls_start_fabric_timer(iport, 2 * iport->e_d_tov); +} + +static void fdls_send_scr(struct fnic_iport_s *iport) +{ + struct fc_std_scr scr; + uint8_t fcid[3]; + struct fnic *fnic = iport->fnic; + uint16_t oxid; + + memcpy(&scr, &fnic_std_scr_req, sizeof(struct fc_std_scr)); + + hton24(fcid, iport->fcid); + FNIC_STD_SET_S_ID((&scr.fchdr), fcid); + + oxid = fdls_alloc_fabric_oxid(iport, &iport->fabric_oxid_pool, + FNIC_FABRIC_SCR_RSP); + if (oxid == 0xFFFF) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Failed to allocate OXID to send scr %p", iport); + return; + } + FNIC_STD_SET_OX_ID(&scr.fchdr, htons(oxid)); + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: FDLS send SCR with oxid:%x", iport->fcid, oxid); + + + fnic_send_fcoe_frame(iport, &scr, sizeof(struct fc_std_scr)); + /* Even if fnic_send_fcoe_frame() fails we want to retry after timeout */ + fdls_start_fabric_timer(iport, 2 * iport->e_d_tov); +} + +static void fdls_send_gpn_ft(struct fnic_iport_s *iport, int fdls_state) +{ + struct fc_std_gpn_ft gpn_ft; + uint8_t fcid[3]; + struct fnic *fnic = iport->fnic; + uint16_t oxid; + + memcpy(&gpn_ft, &fnic_std_gpn_ft_req, sizeof(struct fc_std_gpn_ft)); + + hton24(fcid, iport->fcid); + FNIC_STD_SET_S_ID((&gpn_ft.fchdr), fcid); + + oxid = fdls_alloc_fabric_oxid(iport, &iport->fabric_oxid_pool, + FNIC_FABRIC_GPN_FT_RSP); + if (oxid == 0xFFFF) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Failed to allocate OXID to send GPN FT %p", iport); + return; + } + FNIC_STD_SET_OX_ID(&gpn_ft.fchdr, htons(oxid)); + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: FDLS send GPN FT with oxid:%x", iport->fcid, oxid); + + fnic_send_fcoe_frame(iport, &gpn_ft, sizeof(struct fc_std_gpn_ft)); + /* Even if fnic_send_fcoe_frame() fails we want to retry after timeout */ + fdls_start_fabric_timer(iport, 2 * iport->e_d_tov); + fdls_set_state((&iport->fabric), fdls_state); +} + +static void fdls_send_register_fc4_types(struct fnic_iport_s *iport) +{ + struct fc_std_rft_id rft_id; + uint8_t fcid[3]; + struct fnic *fnic = iport->fnic; + uint16_t oxid; + + memset(&rft_id, 0, sizeof(struct fc_std_rft_id)); + memcpy(&rft_id, &fnic_std_rft_id_req, sizeof(struct fc_std_rft_id)); + 
hton24(fcid, iport->fcid); + + FNIC_STD_SET_S_ID((&rft_id.fchdr), fcid); + FNIC_STD_SET_PORT_ID((&rft_id.rft_id), fcid); + + oxid = fdls_alloc_fabric_oxid(iport, &iport->fabric_oxid_pool, + FNIC_FABRIC_RFT_RSP); + if (oxid == 0xFFFF) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Failed to allocate OXID to send RFT %p", iport); + return; + } + FNIC_STD_SET_OX_ID(&rft_id.fchdr, htons(oxid)); + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: FDLS sending FC4 Typeswith oxid:%x", iport->fcid, + oxid); + + if (IS_FNIC_FCP_INITIATOR(fnic)) + rft_id.rft_id.fr_fts.ff_type_map[0] = + cpu_to_be32(1 << FC_TYPE_FCP); + + rft_id.rft_id.fr_fts.ff_type_map[1] = + cpu_to_be32(1 << (FC_TYPE_CT % FC_NS_BPW)); + + fnic_send_fcoe_frame(iport, &rft_id, sizeof(struct fc_std_rft_id)); + fdls_start_fabric_timer(iport, 2 * iport->e_d_tov); +} + +static void fdls_send_register_fc4_features(struct fnic_iport_s *iport) +{ + struct fc_std_rff_id rff_id; + uint8_t fcid[3]; + struct fnic *fnic = iport->fnic; + uint16_t oxid; + + memcpy(&rff_id, &fnic_std_rff_id_req, sizeof(struct fc_std_rff_id)); + + hton24(fcid, iport->fcid); + + FNIC_STD_SET_S_ID((&rff_id.fchdr), fcid); + FNIC_STD_SET_PORT_ID((&rff_id.rff_id), fcid); + + oxid = fdls_alloc_fabric_oxid(iport, &iport->fabric_oxid_pool, + FNIC_FABRIC_RFF_RSP); + if (oxid == 0xFFFF) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Failed to allocate OXID to send RFF %p", iport); + return; + } + FNIC_STD_SET_OX_ID(&rff_id.fchdr, htons(oxid)); + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: FDLS sending FC4 features with %x", iport->fcid, + oxid); + + if (IS_FNIC_FCP_INITIATOR(fnic)) { + rff_id.rff_id.fr_type = FC_TYPE_FCP; + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: Unknown type", iport->fcid); + } + + fnic_send_fcoe_frame(iport, &rff_id, sizeof(struct fc_std_rff_id)); + fdls_start_fabric_timer(iport, 2 * iport->e_d_tov); +} + +/** + * fdls_send_fabric_logo - Send flogo to the fcf + * @iport: Handle to fnic iport + * + * This function does not change or check the fabric state. + * It the caller's responsibility to set the appropriate iport fabric + * state when this is called. Normally it is FDLS_STATE_FABRIC_LOGO. + * Currently this assumes to be called with fnic lock held. 
+ */ +void fdls_send_fabric_logo(struct fnic_iport_s *iport) +{ + struct fc_std_logo logo; + uint8_t s_id[3]; + uint8_t d_id[3] = { 0xFF, 0xFF, 0xFE }; + struct fnic *fnic = iport->fnic; + uint16_t oxid; + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Sending logo to fabric from iport->fcid: 0x%x", + iport->fcid); + memcpy(&logo, &fnic_std_logo_req, sizeof(struct fc_std_logo)); + + hton24(s_id, iport->fcid); + + FNIC_STD_SET_S_ID((&logo.fchdr), s_id); + FNIC_STD_SET_D_ID((&logo.fchdr), d_id); + + oxid = fdls_alloc_fabric_oxid(iport, &iport->fabric_oxid_pool, + FNIC_FABRIC_LOGO_RSP); + if (oxid == 0xFFFF) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Failed to allocate OXID to send fabric logo %p", + iport); + return; + } + FNIC_STD_SET_OX_ID((&logo.fchdr), htons(oxid)); + + memcpy(&logo.els.fl_n_port_id, s_id, 3); + FNIC_STD_SET_NPORT_NAME(&logo.els.fl_n_port_wwn, + le64_to_cpu(iport->wwpn)); + + fdls_start_fabric_timer(iport, 2 * iport->e_d_tov); + + iport->fabric.flags &= ~FNIC_FDLS_FABRIC_ABORT_ISSUED; + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Sending logo to fabric from fcid %x with oxid %x", + iport->fcid, oxid); + + fnic_send_fcoe_frame(iport, &logo, sizeof(struct fc_std_logo)); +} + +void fdls_fabric_timer_callback(struct timer_list *t) +{ + struct fnic_fdls_fabric_s *fabric = from_timer(fabric, t, retry_timer); + struct fnic_iport_s *iport = + container_of(fabric, struct fnic_iport_s, fabric); + struct fnic *fnic = iport->fnic; + unsigned long flags; + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "tp: %d fab state: %d fab retry counter: %d max_flogi_retries: %d", + iport->fabric.timer_pending, iport->fabric.state, + iport->fabric.retry_counter, iport->max_flogi_retries); + + spin_lock_irqsave(&fnic->fnic_lock, flags); + + if (!iport->fabric.timer_pending) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + return; + } + + if (iport->fabric.del_timer_inprogress) { + iport->fabric.del_timer_inprogress = 0; + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fabric_del_timer inprogress(%d). 
Skip timer cb", + iport->fabric.del_timer_inprogress); + return; + } + + iport->fabric.timer_pending = 0; + + /* The fabric state indicates which frames have time out, and we retry */ + switch (iport->fabric.state) { + case FDLS_STATE_FABRIC_FLOGI: + /* Flogi received a LS_RJT with busy we retry from here */ + if ((iport->fabric.flags & FNIC_FDLS_RETRY_FRAME) + && (iport->fabric.retry_counter < iport->max_flogi_retries)) { + iport->fabric.flags &= ~FNIC_FDLS_RETRY_FRAME; + fdls_send_fabric_flogi(iport); + } else if (!(iport->fabric.flags & FNIC_FDLS_FABRIC_ABORT_ISSUED)) { + /* Flogi has time out 2*ed_tov send abts */ + fdls_send_fabric_abts(iport); + } else { + /* ABTS has timed out + * Mark the OXID to be freed after 2 * r_a_tov and retry the req + */ + fdls_schedule_fabric_oxid_free(iport); + if (iport->fabric.retry_counter < iport->max_flogi_retries) { + iport->fabric.flags &= ~FNIC_FDLS_FABRIC_ABORT_ISSUED; + fdls_send_fabric_flogi(iport); + } else + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Exceeded max FLOGI retries"); + } + break; + case FDLS_STATE_FABRIC_PLOGI: + /* Plogi received a LS_RJT with busy we retry from here */ + if ((iport->fabric.flags & FNIC_FDLS_RETRY_FRAME) + && (iport->fabric.retry_counter < iport->max_plogi_retries)) { + iport->fabric.flags &= ~FNIC_FDLS_RETRY_FRAME; + fdls_send_fabric_plogi(iport); + } else if (!(iport->fabric.flags & FNIC_FDLS_FABRIC_ABORT_ISSUED)) { + /* Plogi has timed out 2*ed_tov send abts */ + fdls_send_fabric_abts(iport); + } else { + /* ABTS has timed out + * Mark the OXID to be freed after 2 * r_a_tov and retry the req + */ + fdls_schedule_fabric_oxid_free(iport); + if (iport->fabric.retry_counter < iport->max_plogi_retries) { + iport->fabric.flags &= ~FNIC_FDLS_FABRIC_ABORT_ISSUED; + fdls_send_fabric_plogi(iport); + } else + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Exceeded max PLOGI retries"); + } + break; + case FDLS_STATE_RPN_ID: + /* Rpn_id received a LS_RJT with busy we retry from here */ + if ((iport->fabric.flags & FNIC_FDLS_RETRY_FRAME) + && (iport->fabric.retry_counter < FDLS_RETRY_COUNT)) { + iport->fabric.flags &= ~FNIC_FDLS_RETRY_FRAME; + fdls_send_rpn_id(iport); + } else if (!(iport->fabric.flags & FNIC_FDLS_FABRIC_ABORT_ISSUED)) + /* RPN has timed out. Send abts */ + fdls_send_fabric_abts(iport); + else { + /* ABTS has timed out */ + fdls_schedule_fabric_oxid_free(iport); + fnic_fdls_start_plogi(iport); /* go back to fabric Plogi */ + } + break; + case FDLS_STATE_SCR: + /* scr received a LS_RJT with busy we retry from here */ + if ((iport->fabric.flags & FNIC_FDLS_RETRY_FRAME) + && (iport->fabric.retry_counter < FDLS_RETRY_COUNT)) { + iport->fabric.flags &= ~FNIC_FDLS_RETRY_FRAME; + fdls_send_scr(iport); + } else if (!(iport->fabric.flags & FNIC_FDLS_FABRIC_ABORT_ISSUED)) + /* scr has timed out. Send abts */ + fdls_send_fabric_abts(iport); + else { + /* ABTS has timed out */ + fdls_schedule_fabric_oxid_free(iport); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "ABTS timed out. 
Starting PLOGI: %p", iport); + fnic_fdls_start_plogi(iport); + } + break; + case FDLS_STATE_REGISTER_FC4_TYPES: + /* scr received a LS_RJT with busy we retry from here */ + if ((iport->fabric.flags & FNIC_FDLS_RETRY_FRAME) + && (iport->fabric.retry_counter < FDLS_RETRY_COUNT)) { + iport->fabric.flags &= ~FNIC_FDLS_RETRY_FRAME; + fdls_send_register_fc4_types(iport); + } else if (!(iport->fabric.flags & FNIC_FDLS_FABRIC_ABORT_ISSUED)) { + /* RFT_ID timed out send abts */ + fdls_send_fabric_abts(iport); + } else { + /* ABTS has timed out */ + fdls_schedule_fabric_oxid_free(iport); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "ABTS timed out. Starting PLOGI: %p", iport); + fnic_fdls_start_plogi(iport); /* go back to fabric Plogi */ + } + break; + case FDLS_STATE_REGISTER_FC4_FEATURES: + /* scr received a LS_RJT with busy we retry from here */ + if ((iport->fabric.flags & FNIC_FDLS_RETRY_FRAME) + && (iport->fabric.retry_counter < FDLS_RETRY_COUNT)) { + iport->fabric.flags &= ~FNIC_FDLS_RETRY_FRAME; + fdls_send_register_fc4_features(iport); + } else if (!(iport->fabric.flags & FNIC_FDLS_FABRIC_ABORT_ISSUED)) + /* SCR has timed out. Send abts */ + fdls_send_fabric_abts(iport); + else { + /* ABTS has timed out */ + fdls_schedule_fabric_oxid_free(iport); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "ABTS timed out. Starting PLOGI %p", iport); + fnic_fdls_start_plogi(iport); /* go back to fabric Plogi */ + } + break; + case FDLS_STATE_RSCN_GPN_FT: + case FDLS_STATE_SEND_GPNFT: + case FDLS_STATE_GPN_FT: + /* GPN_FT received a LS_RJT with busy we retry from here */ + if ((iport->fabric.flags & FNIC_FDLS_RETRY_FRAME) + && (iport->fabric.retry_counter < FDLS_RETRY_COUNT)) { + iport->fabric.flags &= ~FNIC_FDLS_RETRY_FRAME; + fdls_send_gpn_ft(iport, iport->fabric.state); + } else if (!(iport->fabric.flags & FNIC_FDLS_FABRIC_ABORT_ISSUED)) { + /* gpn_ft has timed out. Send abts */ + fdls_send_fabric_abts(iport); + } else { + /* ABTS has timed out */ + fdls_schedule_fabric_oxid_free(iport); + if (iport->fabric.retry_counter < FDLS_RETRY_COUNT) { + fdls_send_gpn_ft(iport, iport->fabric.state); + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "ABTS timeout for fabric GPN_FT. Check name server: %p", + iport); + } + } + break; + default: + break; + } + spin_unlock_irqrestore(&fnic->fnic_lock, flags); +} + +static void fnic_fdls_start_flogi(struct fnic_iport_s *iport) +{ + iport->fabric.retry_counter = 0; + fdls_send_fabric_flogi(iport); + fdls_set_state((&iport->fabric), FDLS_STATE_FABRIC_FLOGI); + iport->fabric.flags = 0; +} + +static void fnic_fdls_start_plogi(struct fnic_iport_s *iport) +{ + iport->fabric.retry_counter = 0; + fdls_send_fabric_plogi(iport); + fdls_set_state((&iport->fabric), FDLS_STATE_FABRIC_PLOGI); + iport->fabric.flags &= ~FNIC_FDLS_FABRIC_ABORT_ISSUED; +} + +static void +fdls_process_rff_id_rsp(struct fnic_iport_s *iport, + struct fc_frame_header *fchdr) +{ + struct fnic *fnic = iport->fnic; + struct fnic_fdls_fabric_s *fdls = &iport->fabric; + struct fc_std_rff_id *rff_rsp = (struct fc_std_rff_id *) fchdr; + uint16_t rsp; + uint8_t reason_code; + + if (fdls_get_state(fdls) != FDLS_STATE_REGISTER_FC4_FEATURES) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "RFF_ID resp recvd in state(%d). 
Dropping.", + fdls_get_state(fdls)); + return; + } + + rsp = FNIC_STD_GET_FC_CT_CMD((&rff_rsp->fc_std_ct_hdr)); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: FDLS process RFF ID response: 0x%04x", iport->fcid, + (uint32_t) rsp); + + fdls_free_fabric_oxid(iport, &iport->fdmi_oxid_pool, + ntohs(FNIC_STD_GET_OX_ID(fchdr))); + + switch (rsp) { + case FC_FS_ACC: + if (iport->fabric.timer_pending) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Canceling fabric disc timer %p\n", iport); + fnic_del_fabric_timer_sync(fnic); + } + iport->fabric.timer_pending = 0; + fdls->retry_counter = 0; + fdls_set_state((&iport->fabric), FDLS_STATE_SCR); + fdls_send_scr(iport); + break; + case FC_FS_RJT: + reason_code = rff_rsp->fc_std_ct_hdr.ct_reason; + if (((reason_code == FC_FS_RJT_BSY) + || (reason_code == FC_FS_RJT_UNABL)) + && (fdls->retry_counter < FDLS_RETRY_COUNT)) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "RFF_ID ret ELS_LS_RJT BUSY. Retry from timer routine %p", + iport); + + /* Retry again from the timer routine */ + fdls->flags |= FNIC_FDLS_RETRY_FRAME; + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "RFF_ID returned ELS_LS_RJT. Halting discovery %p", + iport); + if (iport->fabric.timer_pending) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Canceling fabric disc timer %p\n", iport); + fnic_del_fabric_timer_sync(fnic); + } + fdls->timer_pending = 0; + fdls->retry_counter = 0; + } + break; + default: + break; + } +} + +static void +fdls_process_rft_id_rsp(struct fnic_iport_s *iport, + struct fc_frame_header *fchdr) +{ + struct fnic_fdls_fabric_s *fdls = &iport->fabric; + struct fc_std_rft_id *rft_rsp = (struct fc_std_rft_id *) fchdr; + uint16_t rsp; + uint8_t reason_code; + struct fnic *fnic = iport->fnic; + + if (fdls_get_state(fdls) != FDLS_STATE_REGISTER_FC4_TYPES) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "RFT_ID resp recvd in state(%d). Dropping.", + fdls_get_state(fdls)); + return; + } + + rsp = FNIC_STD_GET_FC_CT_CMD((&rft_rsp->fc_std_ct_hdr)); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: FDLS process RFT ID response: 0x%04x", iport->fcid, + (uint32_t) rsp); + + fdls_free_fabric_oxid(iport, &iport->fabric_oxid_pool, + ntohs(FNIC_STD_GET_OX_ID(fchdr))); + + switch (rsp) { + case FC_FS_ACC: + if (iport->fabric.timer_pending) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Canceling fabric disc timer %p\n", iport); + fnic_del_fabric_timer_sync(fnic); + } + iport->fabric.timer_pending = 0; + fdls->retry_counter = 0; + fdls_send_register_fc4_features(iport); + fdls_set_state((&iport->fabric), FDLS_STATE_REGISTER_FC4_FEATURES); + break; + case FC_FS_RJT: + reason_code = rft_rsp->fc_std_ct_hdr.ct_reason; + if (((reason_code == FC_FS_RJT_BSY) + || (reason_code == FC_FS_RJT_UNABL)) + && (fdls->retry_counter < FDLS_RETRY_COUNT)) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: RFT_ID ret ELS_LS_RJT BUSY. Retry from timer routine", + iport->fcid); + + /* Retry again from the timer routine */ + fdls->flags |= FNIC_FDLS_RETRY_FRAME; + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: RFT_ID REJ. 
Halting discovery reason %d expl %d", + iport->fcid, reason_code, + rft_rsp->fc_std_ct_hdr.ct_explan); + if (iport->fabric.timer_pending) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Canceling fabric disc timer %p\n", iport); + fnic_del_fabric_timer_sync(fnic); + } + fdls->timer_pending = 0; + fdls->retry_counter = 0; + } + break; + default: + break; + } +} + +static void +fdls_process_rpn_id_rsp(struct fnic_iport_s *iport, + struct fc_frame_header *fchdr) +{ + struct fnic_fdls_fabric_s *fdls = &iport->fabric; + struct fc_std_rpn_id *rpn_rsp = (struct fc_std_rpn_id *) fchdr; + uint16_t rsp; + uint8_t reason_code; + struct fnic *fnic = iport->fnic; + + if (fdls_get_state(fdls) != FDLS_STATE_RPN_ID) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "RPN_ID resp recvd in state(%d). Dropping.", + fdls_get_state(fdls)); + return; + } + + rsp = FNIC_STD_GET_FC_CT_CMD((&rpn_rsp->fc_std_ct_hdr)); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: FDLS process RPN ID response: 0x%04x", iport->fcid, + (uint32_t) rsp); + + fdls_free_fabric_oxid(iport, &iport->fabric_oxid_pool, + ntohs(FNIC_STD_GET_OX_ID(fchdr))); + + switch (rsp) { + case FC_FS_ACC: + if (iport->fabric.timer_pending) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Canceling fabric disc timer %p\n", iport); + fnic_del_fabric_timer_sync(fnic); + } + iport->fabric.timer_pending = 0; + fdls->retry_counter = 0; + fdls_send_register_fc4_types(iport); + fdls_set_state((&iport->fabric), FDLS_STATE_REGISTER_FC4_TYPES); + break; + case FC_FS_RJT: + reason_code = rpn_rsp->fc_std_ct_hdr.ct_reason; + if (((reason_code == FC_FS_RJT_BSY) + || (reason_code == FC_FS_RJT_UNABL)) + && (fdls->retry_counter < FDLS_RETRY_COUNT)) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "RPN_ID returned REJ BUSY. Retry from timer routine %p", + iport); + + /* Retry again from the timer routine */ + fdls->flags |= FNIC_FDLS_RETRY_FRAME; + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "RPN_ID ELS_LS_RJT. Halting discovery %p", iport); + if (iport->fabric.timer_pending) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Canceling fabric disc timer %p\n", iport); + fnic_del_fabric_timer_sync(fnic); + } + fdls->timer_pending = 0; + fdls->retry_counter = 0; + } + break; + default: + break; + } +} + +static void +fdls_process_scr_rsp(struct fnic_iport_s *iport, + struct fc_frame_header *fchdr) +{ + struct fnic_fdls_fabric_s *fdls = &iport->fabric; + struct fc_std_scr *scr_rsp = (struct fc_std_scr *) fchdr; + struct fc_std_els_rsp *els_rjt = (struct fc_std_els_rsp *) fchdr; + struct fnic *fnic = iport->fnic; + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FDLS process SCR response: 0x%04x", + (uint32_t) scr_rsp->scr.scr_cmd); + + if (fdls_get_state(fdls) != FDLS_STATE_SCR) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "SCR resp recvd in state(%d). 
Dropping.", + fdls_get_state(fdls)); + return; + } + + fdls_free_fabric_oxid(iport, &iport->fabric_oxid_pool, + ntohs(FNIC_STD_GET_OX_ID(fchdr))); + + switch (scr_rsp->scr.scr_cmd) { + case ELS_LS_ACC: + if (iport->fabric.timer_pending) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Canceling fabric disc timer %p\n", iport); + fnic_del_fabric_timer_sync(fnic); + } + iport->fabric.timer_pending = 0; + iport->fabric.retry_counter = 0; + fdls_send_gpn_ft(iport, FDLS_STATE_GPN_FT); + break; + + case ELS_LS_RJT: + if (((els_rjt->u.rej.er_reason == ELS_RJT_BUSY) + || (els_rjt->u.rej.er_reason == ELS_RJT_UNAB)) + && (fdls->retry_counter < FDLS_RETRY_COUNT)) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "SCR ELS_LS_RJT BUSY. Retry from timer routine %p", + iport); + /* Retry again from the timer routine */ + fdls->flags |= FNIC_FDLS_RETRY_FRAME; + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "SCR returned ELS_LS_RJT. Halting discovery %p", + iport); + if (iport->fabric.timer_pending) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Canceling fabric disc timer %p\n", iport); + fnic_del_fabric_timer_sync(fnic); + } + fdls->timer_pending = 0; + fdls->retry_counter = 0; + } + break; + + default: + break; + } +} + +static void +fdls_process_gpn_ft_rsp(struct fnic_iport_s *iport, + struct fc_frame_header *fchdr, int len) +{ + struct fnic_fdls_fabric_s *fdls = &iport->fabric; + struct fc_std_gpn_ft *gpn_ft_rsp = (struct fc_std_gpn_ft *) fchdr; + uint16_t rsp; + uint8_t reason_code; + struct fnic *fnic = iport->fnic; + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FDLS process GPN_FT response: iport state: %d len: %d", + iport->state, len); + + /* + * GPNFT response :- + * FDLS_STATE_GPN_FT : GPNFT send after SCR state + * during fabric discovery(FNIC_IPORT_STATE_FABRIC_DISC) + * FDLS_STATE_RSCN_GPN_FT : GPNFT send in response to RSCN + * FDLS_STATE_SEND_GPNFT : GPNFT send after deleting a Target, + * e.g. after receiving Target LOGO + * FDLS_STATE_TGT_DISCOVERY :Target discovery is currently in progress + * from previous GPNFT response,a new GPNFT response has come. + */ + if (!(((iport->state == FNIC_IPORT_STATE_FABRIC_DISC) + && (fdls_get_state(fdls) == FDLS_STATE_GPN_FT)) + || ((iport->state == FNIC_IPORT_STATE_READY) + && ((fdls_get_state(fdls) == FDLS_STATE_RSCN_GPN_FT) + || (fdls_get_state(fdls) == FDLS_STATE_SEND_GPNFT) + || (fdls_get_state(fdls) == FDLS_STATE_TGT_DISCOVERY))))) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "GPNFT resp recvd in fab state(%d) iport_state(%d). 
Dropping.", + fdls_get_state(fdls), iport->state); + return; + } + + fdls_free_fabric_oxid(iport, &iport->fabric_oxid_pool, + ntohs(FNIC_STD_GET_OX_ID(fchdr))); + + iport->state = FNIC_IPORT_STATE_READY; + rsp = FNIC_STD_GET_FC_CT_CMD((&gpn_ft_rsp->fc_std_ct_hdr)); + + switch (rsp) { + + case FC_FS_ACC: + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: GPNFT_RSP accept", iport->fcid); + break; + + case FC_FS_RJT: + reason_code = gpn_ft_rsp->fc_std_ct_hdr.ct_reason; + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: GPNFT_RSP Reject reason: %d", iport->fcid, reason_code); + break; + + default: + break; + } +} + +/** + * fdls_process_fabric_logo_rsp - Handle an flogo response from the fcf + * @iport: Handle to fnic iport + * @fchdr: Incoming frame + */ +static void +fdls_process_fabric_logo_rsp(struct fnic_iport_s *iport, + struct fc_frame_header *fchdr) +{ + struct fc_std_flogi *flogo_rsp = (struct fc_std_flogi *) fchdr; + struct fnic *fnic = iport->fnic; + + fdls_free_fabric_oxid(iport, &iport->fabric_oxid_pool, + ntohs(FNIC_STD_GET_OX_ID(fchdr))); + + switch (flogo_rsp->els.fl_cmd) { + case ELS_LS_ACC: + if (iport->fabric.state != FDLS_STATE_FABRIC_LOGO) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Flogo response. Fabric not in LOGO state. Dropping! %p", + iport); + return; + } + + iport->fabric.state = FDLS_STATE_FLOGO_DONE; + iport->state = FNIC_IPORT_STATE_LINK_WAIT; + + if (iport->fabric.timer_pending) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "iport 0x%p Canceling fabric disc timer\n", + iport); + fnic_del_fabric_timer_sync(fnic); + } + iport->fabric.timer_pending = 0; + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Flogo response from Fabric for did: 0x%x", + ntoh24(fchdr->fh_d_id)); + return; + + case ELS_LS_RJT: + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Flogo response from Fabric for did: 0x%x returned ELS_LS_RJT", + ntoh24(fchdr->fh_d_id)); + return; + + default: + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FLOGO response not accepted or rejected: 0x%x", + flogo_rsp->els.fl_cmd); + } +} + +static void +fdls_process_flogi_rsp(struct fnic_iport_s *iport, + struct fc_frame_header *fchdr, void *rx_frame) +{ + struct fnic_fdls_fabric_s *fabric = &iport->fabric; + struct fc_std_flogi *flogi_rsp = (struct fc_std_flogi *) fchdr; + uint8_t *fcid; + int rdf_size; + uint8_t fcmac[6] = { 0x0E, 0XFC, 0x00, 0x00, 0x00, 0x00 }; + struct fnic *fnic = iport->fnic; + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: FDLS processing FLOGI response", iport->fcid); + + if (fdls_get_state(fabric) != FDLS_STATE_FABRIC_FLOGI) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FLOGI response received in state (%d). 
Dropping frame", + fdls_get_state(fabric)); + return; + } + + fdls_free_fabric_oxid(iport, &iport->fabric_oxid_pool, + ntohs(FNIC_STD_GET_OX_ID(fchdr))); + + switch (flogi_rsp->els.fl_cmd) { + case ELS_LS_ACC: + if (iport->fabric.timer_pending) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "iport fcid: 0x%x Canceling fabric disc timer\n", + iport->fcid); + fnic_del_fabric_timer_sync(fnic); + } + + iport->fabric.timer_pending = 0; + iport->fabric.retry_counter = 0; + fcid = FNIC_STD_GET_D_ID(fchdr); + iport->fcid = ntoh24(fcid); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: FLOGI response accepted", iport->fcid); + + /* Learn the Service Params */ + rdf_size = ntohs(FNIC_LOGI_RDF_SIZE(&flogi_rsp->els)); + if ((rdf_size >= FNIC_MIN_DATA_FIELD_SIZE) + && (rdf_size < FNIC_FC_MAX_PAYLOAD_LEN)) + iport->max_payload_size = MIN(rdf_size, + iport->max_payload_size); + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "max_payload_size from fabric: %d set: %d", rdf_size, + iport->max_payload_size); + + iport->r_a_tov = ntohl(FNIC_LOGI_R_A_TOV(&flogi_rsp->els)); + iport->e_d_tov = ntohl(FNIC_LOGI_E_D_TOV(&flogi_rsp->els)); + + if (FNIC_LOGI_FEATURES(&flogi_rsp->els) & FNIC_FC_EDTOV_NSEC) + iport->e_d_tov = iport->e_d_tov / FNIC_NSEC_TO_MSEC; + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "From fabric: R_A_TOV: %d E_D_TOV: %d", + iport->r_a_tov, iport->e_d_tov); + + if (IS_FNIC_FCP_INITIATOR(fnic)) { + fc_host_fabric_name(iport->fnic->lport->host) = + get_unaligned_be64(&FNIC_LOGI_NODE_NAME(&flogi_rsp->els)); + fc_host_port_id(iport->fnic->lport->host) = iport->fcid; + } + + fnic_fdls_learn_fcoe_macs(iport, rx_frame, fcid); + + memcpy(&fcmac[3], fcid, 3); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Adding vNIC device MAC addr: %02x:%02x:%02x:%02x:%02x:%02x", + fcmac[0], fcmac[1], fcmac[2], fcmac[3], fcmac[4], + fcmac[5]); + vnic_dev_add_addr(iport->fnic->vdev, fcmac); + + if (fdls_get_state(fabric) == FDLS_STATE_FABRIC_FLOGI) { + fnic_fdls_start_plogi(iport); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FLOGI response received. Starting PLOGI"); + } else { + /* From FDLS_STATE_FABRIC_FLOGI state fabric can only go to + * FDLS_STATE_LINKDOWN + * state, hence we don't have to worry about undoing: + * the fnic_fdls_register_portid and vnic_dev_add_addr + */ + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FLOGI response received in state (%d). Dropping frame", + fdls_get_state(fabric)); + } + break; + + case ELS_LS_RJT: + if (fabric->retry_counter < iport->max_flogi_retries) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FLOGI returned ELS_LS_RJT BUSY. Retry from timer routine %p", + iport); + + /* Retry Flogi again from the timer routine. */ + fabric->flags |= FNIC_FDLS_RETRY_FRAME; + + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FLOGI returned ELS_LS_RJT. 
Halting discovery %p", + iport); + if (iport->fabric.timer_pending) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "iport 0x%p Canceling fabric disc timer\n", + iport); + fnic_del_fabric_timer_sync(fnic); + } + fabric->timer_pending = 0; + fabric->retry_counter = 0; + } + break; + + default: + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FLOGI response not accepted: 0x%x", + flogi_rsp->els.fl_cmd); + break; + } +} + +static void +fdls_process_fabric_plogi_rsp(struct fnic_iport_s *iport, + struct fc_frame_header *fchdr) +{ + struct fc_std_flogi *plogi_rsp = (struct fc_std_flogi *) fchdr; + struct fc_std_els_rsp *els_rjt = (struct fc_std_els_rsp *) fchdr; + struct fnic *fnic = iport->fnic; + + if (fdls_get_state((&iport->fabric)) != FDLS_STATE_FABRIC_PLOGI) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Fabric PLOGI response received in state (%d). Dropping frame", + fdls_get_state(&iport->fabric)); + return; + } + + fdls_free_fabric_oxid(iport, &iport->fabric_oxid_pool, + ntohs(FNIC_STD_GET_OX_ID(fchdr))); + + switch (plogi_rsp->els.fl_cmd) { + case ELS_LS_ACC: + if (iport->fabric.timer_pending) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "iport fcid: 0x%x fabric PLOGI response: Accepted\n", + iport->fcid); + fnic_del_fabric_timer_sync(fnic); + } + iport->fabric.timer_pending = 0; + iport->fabric.retry_counter = 0; + fdls_set_state(&iport->fabric, FDLS_STATE_RPN_ID); + fdls_send_rpn_id(iport); + break; + case ELS_LS_RJT: + if (((els_rjt->u.rej.er_reason == ELS_RJT_BUSY) + || (els_rjt->u.rej.er_reason == ELS_RJT_UNAB)) + && (iport->fabric.retry_counter < iport->max_plogi_retries)) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: Fabric PLOGI ELS_LS_RJT BUSY. Retry from timer routine", + iport->fcid); + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "0x%x: Fabric PLOGI ELS_LS_RJT. Halting discovery", + iport->fcid); + if (iport->fabric.timer_pending) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "iport fcid: 0x%x Canceling fabric disc timer\n", + iport->fcid); + fnic_del_fabric_timer_sync(fnic); + } + iport->fabric.timer_pending = 0; + iport->fabric.retry_counter = 0; + return; + } + break; + default: + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "PLOGI response not accepted: 0x%x", + plogi_rsp->els.fl_cmd); + break; + } +} + +static void +fdls_process_fabric_abts_rsp(struct fnic_iport_s *iport, + struct fc_frame_header *fchdr) +{ + uint32_t s_id; + struct fc_std_abts_ba_acc *ba_acc = + (struct fc_std_abts_ba_acc *) fchdr; + struct fc_std_abts_ba_rjt *ba_rjt; + uint32_t fabric_state = iport->fabric.state; + struct fnic *fnic = iport->fnic; + int expected_rsp; + uint16_t oxid; + + s_id = ntoh24(fchdr->fh_s_id); + ba_rjt = (struct fc_std_abts_ba_rjt *) fchdr; + + if (!((s_id == FC_DIR_SERVER) || (s_id == FC_DOMAIN_CONTR) + || (s_id == FC_FABRIC_CONTROLLER))) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received abts rsp with invalid SID: 0x%x. 
Dropping frame", + s_id); + return; + } + + if (iport->fabric.timer_pending) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Canceling fabric disc timer %p\n", iport); + fnic_del_fabric_timer_sync(fnic); + } + iport->fabric.timer_pending = 0; + iport->fabric.flags &= ~FNIC_FDLS_FABRIC_ABORT_ISSUED; + + if (fchdr->fh_r_ctl == FNIC_BA_ACC_RCTL) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received abts rsp BA_ACC for fabric_state: %d OX_ID: 0x%x", + fabric_state, be16_to_cpu(ba_acc->acc.ba_ox_id)); + } else if (fchdr->fh_r_ctl == FNIC_BA_RJT_RCTL) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "BA_RJT fs: %d OX_ID: 0x%x rc: 0x%x rce: 0x%x", + fabric_state, be16_to_cpu(ba_rjt->fchdr.fh_ox_id), + ba_rjt->rjt.br_reason, ba_rjt->rjt.br_explan); + } + + oxid = ntohs(FNIC_STD_GET_OX_ID(fchdr)); + expected_rsp = FDLS_OXID_TO_RSP_TYPE(oxid); + fdls_free_fabric_oxid(iport, &iport->fabric_oxid_pool, oxid); + + /* currently error handling/retry logic is same for ABTS BA_ACC & BA_RJT */ + switch (fabric_state) { + case FDLS_STATE_FABRIC_FLOGI: + if (expected_rsp == FNIC_FABRIC_FLOGI_RSP) { + if (iport->fabric.retry_counter < iport->max_flogi_retries) + fdls_send_fabric_flogi(iport); + else + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Exceeded max FLOGI retries"); + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Unknown abts rsp OX_ID: 0x%x FABRIC_FLOGI state. Drop frame", + fchdr->fh_ox_id); + } + break; + case FDLS_STATE_FABRIC_LOGO: + if (expected_rsp == FNIC_FABRIC_LOGO_RSP) { + if (!RETRIES_EXHAUSTED(iport)) + fdls_send_fabric_logo(iport); + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Unknown abts rsp OX_ID: 0x%x FABRIC_FLOGI state. Drop frame", + fchdr->fh_ox_id); + } + break; + case FDLS_STATE_FABRIC_PLOGI: + if (expected_rsp == FNIC_FABRIC_PLOGI_RSP) { + if (iport->fabric.retry_counter < iport->max_plogi_retries) + fdls_send_fabric_plogi(iport); + else + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Exceeded max PLOGI retries"); + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Unknown abts rsp OX_ID: 0x%x FABRIC_PLOGI state. Drop frame", + fchdr->fh_ox_id); + } + break; + + case FDLS_STATE_RPN_ID: + if (expected_rsp == FNIC_FABRIC_RPN_RSP) { + if (iport->fabric.retry_counter < FDLS_RETRY_COUNT) { + fdls_send_rpn_id(iport); + } else { + /* go back to fabric Plogi */ + fnic_fdls_start_plogi(iport); + } + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Unknown abts rsp OX_ID: 0x%x RPN_ID state. Drop frame", + fchdr->fh_ox_id); + } + break; + + case FDLS_STATE_SCR: + if (expected_rsp == FNIC_FABRIC_SCR_RSP) { + if (iport->fabric.retry_counter <= FDLS_RETRY_COUNT) + fdls_send_scr(iport); + else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "abts rsp fab SCR after two tries. Start fabric PLOGI %p", + iport); + fnic_fdls_start_plogi(iport); /* go back to fabric Plogi */ + } + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Unknown abts rsp OX_ID: 0x%x SCR state. Drop frame", + fchdr->fh_ox_id); + } + break; + case FDLS_STATE_REGISTER_FC4_TYPES: + if (expected_rsp == FNIC_FABRIC_RFT_RSP) { + if (iport->fabric.retry_counter <= FDLS_RETRY_COUNT) { + fdls_send_register_fc4_types(iport); + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "abts rsp fab RFT_ID two tries. 
Start fabric PLOGI %p", + iport); + fnic_fdls_start_plogi(iport); /* go back to fabric Plogi */ + } + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Unknown abts rsp OX_ID: 0x%x RFT state. Drop frame", + fchdr->fh_ox_id); + } + break; + case FDLS_STATE_REGISTER_FC4_FEATURES: + if (expected_rsp == FNIC_FABRIC_RFF_RSP) { + if (iport->fabric.retry_counter <= FDLS_RETRY_COUNT) + fdls_send_register_fc4_features(iport); + else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "abts rsp fab SCR after two tries. Start fabric PLOGI %p", + iport); + fnic_fdls_start_plogi(iport); /* go back to fabric Plogi */ + } + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Unknown abts rsp OX_ID: 0x%x RFF state. Drop frame", + fchdr->fh_ox_id); + } + break; + + case FDLS_STATE_GPN_FT: + if (expected_rsp == FNIC_FABRIC_GPN_FT_RSP) { + if (iport->fabric.retry_counter <= FDLS_RETRY_COUNT) { + fdls_send_gpn_ft(iport, fabric_state); + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "abts rsp fab GPN_FT after two tries %p", + iport); + } + } else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Unknown abts rsp OX_ID: 0x%x GPN_FT state. Drop frame", + fchdr->fh_ox_id); + } + break; + + default: + return; + } +} + +/* + * Performs a validation for all FCOE frames and return the frame type + */ +int +fnic_fdls_validate_and_get_frame_type(struct fnic_iport_s *iport, + void *rx_frame, int len, + int fchdr_offset) +{ + struct fc_frame_header *fchdr; + uint8_t type; + uint8_t *fc_payload; + uint16_t oxid; + uint32_t s_id; + uint32_t d_id; + struct fnic *fnic = iport->fnic; + struct fnic_fdls_fabric_s *fabric = &iport->fabric; + int rsp_type; + + fchdr = + (struct fc_frame_header *) ((uint8_t *) rx_frame + fchdr_offset); + oxid = FNIC_STD_GET_OX_ID(fchdr); + fc_payload = (uint8_t *) fchdr + sizeof(struct fc_frame_header); + type = *fc_payload; + s_id = ntoh24(fchdr->fh_s_id); + d_id = ntoh24(fchdr->fh_d_id); + + /* some common validation */ + if (iport->fcid) + if (fdls_get_state(fabric) > FDLS_STATE_FABRIC_FLOGI) { + if ((iport->fcid != d_id) || (!FNIC_FC_FRAME_CS_CTL(fchdr))) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "invalid frame received. Dropping frame"); + return -1; + } + } + + /* ABTS response */ + if ((fchdr->fh_r_ctl == FNIC_BA_ACC_RCTL) + || (fchdr->fh_r_ctl == FNIC_BA_RJT_RCTL)) { + if (!(FNIC_FC_FRAME_TYPE_BLS(fchdr))) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received ABTS invalid frame. Dropping frame"); + return -1; + + } + return FNIC_BLS_ABTS_RSP; + } + if ((fchdr->fh_r_ctl == FC_ABTS_RCTL) + && (FNIC_FC_FRAME_TYPE_BLS(fchdr))) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Receiving Abort Request from s_id: 0x%x", s_id); + return FNIC_BLS_ABTS_REQ; + } + + /* unsolicited requests frames */ + if (FNIC_FC_FRAME_UNSOLICITED(fchdr)) { + switch (type) { + case ELS_LOGO: + if ((!FNIC_FC_FRAME_FCTL_FIRST_LAST_SEQINIT(fchdr)) + || (!FNIC_FC_FRAME_UNSOLICITED(fchdr)) + || (!FNIC_FC_FRAME_TYPE_ELS(fchdr))) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received LOGO invalid frame. Dropping frame"); + return -1; + } + return FNIC_ELS_LOGO_REQ; + case ELS_RSCN: + if ((!FNIC_FC_FRAME_FCTL_FIRST_LAST_SEQINIT(fchdr)) + || (!FNIC_FC_FRAME_TYPE_ELS(fchdr)) + || (!FNIC_FC_FRAME_UNSOLICITED(fchdr))) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received RSCN invalid FCTL. 
Dropping frame"); + return -1; + } + if (s_id != FC_FABRIC_CONTROLLER) + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received RSCN from target FCTL: 0x%x type: 0x%x s_id: 0x%x.", + fchdr->fh_f_ctl[0], fchdr->fh_type, s_id); + return FNIC_ELS_RSCN_REQ; + case ELS_PLOGI: + return FNIC_ELS_PLOGI_REQ; + case ELS_ECHO: + return FNIC_ELS_ECHO_REQ; + case ELS_ADISC: + return FNIC_ELS_ADISC; + case ELS_RLS: + return FNIC_ELS_RLS; + case ELS_RRQ: + return FNIC_ELS_RRQ; + default: + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Unsupported frame (type:0x%02x) from fcid: 0x%x", + type, s_id); + return FNIC_ELS_UNSUPPORTED_REQ; + } + } + + /*response from fabric */ + rsp_type = fnic_fdls_expected_rsp(iport, ntohs(oxid)); + + switch (rsp_type) { + + case FNIC_FABRIC_FLOGI_RSP: + if (type == ELS_LS_ACC) { + if ((s_id != FC_DOMAIN_CONTR) + || (!FNIC_FC_FRAME_TYPE_ELS(fchdr))) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received unknown frame. Dropping frame"); + return -1; + } + } + break; + + case FNIC_FABRIC_PLOGI_RSP: + if (type == ELS_LS_ACC) { + if ((s_id != FC_DIR_SERVER) + || (!FNIC_FC_FRAME_TYPE_ELS(fchdr))) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received unknown frame. Dropping frame"); + return -1; + } + } + break; + + case FNIC_FABRIC_SCR_RSP: + if (type == ELS_LS_ACC) { + if ((s_id != FC_FABRIC_CONTROLLER) + || (!FNIC_FC_FRAME_TYPE_ELS(fchdr))) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received unknown frame. Dropping frame"); + return -1; + } + } + break; + + case FNIC_FABRIC_RPN_RSP: + if ((s_id != FC_DIR_SERVER) || (!FNIC_FC_FRAME_TYPE_FC_GS(fchdr))) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received unknown frame. Dropping frame"); + return -1; + } + break; + + case FNIC_FABRIC_RFT_RSP: + if ((s_id != FC_DIR_SERVER) || (!FNIC_FC_FRAME_TYPE_FC_GS(fchdr))) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received unknown frame. Dropping frame"); + return -1; + } + break; + + case FNIC_FABRIC_RFF_RSP: + if ((s_id != FC_DIR_SERVER) || (!FNIC_FC_FRAME_TYPE_FC_GS(fchdr))) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received unknown frame. Dropping frame"); + return -1; + } + break; + + case FNIC_FABRIC_GPN_FT_RSP: + if ((s_id != FC_DIR_SERVER) || (!FNIC_FC_FRAME_TYPE_FC_GS(fchdr))) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received unknown frame. 
Dropping frame"); + return -1; + } + break; + + case FNIC_FABRIC_LOGO_RSP: + break; + default: + /* Drop the Rx frame and log/stats it */ + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Solicited response: unknown OXID: 0x%x", oxid); + return -1; + } + + return rsp_type; +} + +void fnic_fdls_recv_frame(struct fnic_iport_s *iport, void *rx_frame, + int len, int fchdr_offset) +{ + uint16_t oxid; + struct fc_frame_header *fchdr; + uint32_t s_id = 0; + uint32_t d_id = 0; + struct fnic *fnic = iport->fnic; + int frame_type; + + fchdr = + (struct fc_frame_header *) ((uint8_t *) rx_frame + fchdr_offset); + s_id = ntoh24(fchdr->fh_s_id); + d_id = ntoh24(fchdr->fh_d_id); + + frame_type = + fnic_fdls_validate_and_get_frame_type(iport, rx_frame, len, + fchdr_offset); + + /*if we are in flogo drop everything else */ + if (iport->fabric.state == FDLS_STATE_FABRIC_LOGO && + frame_type != FNIC_FABRIC_LOGO_RSP) + return; + + switch (frame_type) { + case FNIC_FABRIC_FLOGI_RSP: + fdls_process_flogi_rsp(iport, fchdr, rx_frame); + break; + case FNIC_FABRIC_PLOGI_RSP: + fdls_process_fabric_plogi_rsp(iport, fchdr); + break; + case FNIC_FABRIC_RPN_RSP: + fdls_process_rpn_id_rsp(iport, fchdr); + break; + case FNIC_FABRIC_RFT_RSP: + fdls_process_rft_id_rsp(iport, fchdr); + break; + case FNIC_FABRIC_RFF_RSP: + fdls_process_rff_id_rsp(iport, fchdr); + break; + case FNIC_FABRIC_SCR_RSP: + fdls_process_scr_rsp(iport, fchdr); + break; + case FNIC_FABRIC_GPN_FT_RSP: + fdls_process_gpn_ft_rsp(iport, fchdr, len); + break; + case FNIC_FABRIC_LOGO_RSP: + fdls_process_fabric_logo_rsp(iport, fchdr); + break; + + case FNIC_BLS_ABTS_RSP: + oxid = ntohs(FNIC_STD_GET_OX_ID(fchdr)); + if (fdls_is_oxid_in_fabric_range(oxid) && + (iport->fabric.flags & FNIC_FDLS_FABRIC_ABORT_ISSUED)) { + fdls_process_fabric_abts_rsp(iport, fchdr); + } + break; + default: + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "s_id: 0x%x d_did: 0x%x", s_id, d_id); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received unknown FCoE frame of len: %d. Dropping frame", len); + break; + } +} diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h index ce73f08ee889..2d5f438f2cc4 100644 --- a/drivers/scsi/fnic/fnic.h +++ b/drivers/scsi/fnic/fnic.h @@ -24,6 +24,7 @@ #include "vnic_intr.h" #include "vnic_stats.h" #include "vnic_scsi.h" +#include "fnic_fdls.h" #define DRV_NAME "fnic" #define DRV_DESCRIPTION "Cisco FCoE HBA Driver" @@ -31,6 +32,7 @@ #define PFX DRV_NAME ": " #define DFX DRV_NAME "%d: " +#define FABRIC_LOGO_MAX_RETRY 3 #define DESC_CLEAN_LOW_WATERMARK 8 #define FNIC_UCSM_DFLT_THROTTLE_CNT_BLD 16 /* UCSM default throttle count */ #define FNIC_MIN_IO_REQ 256 /* Min IO throttle count */ @@ -75,6 +77,8 @@ #define FNIC_DEV_RST_TERM_DONE BIT(20) #define FNIC_DEV_RST_ABTS_PENDING BIT(21) +#define IS_FNIC_FCP_INITIATOR(fnic) (fnic->role == FNIC_ROLE_FCP_INITIATOR) + /* * fnic private data per SCSI command. * These fields are locked by the hashed io_req_lock. 
@@ -213,12 +217,26 @@ enum fnic_state { struct mempool; +enum fnic_role_e { + FNIC_ROLE_FCP_INITIATOR = 0, +}; + enum fnic_evt { FNIC_EVT_START_VLAN_DISC = 1, FNIC_EVT_START_FCF_DISC = 2, FNIC_EVT_MAX, }; +struct fnic_frame_list { + /* + * Link to frame lists + */ + struct list_head links; + void *fp; + int frame_len; + int rx_ethhdr_stripped; +}; + struct fnic_event { struct list_head list; struct fnic *fnic; @@ -235,6 +253,8 @@ struct fnic_cpy_wq { /* Per-instance private data structure */ struct fnic { int fnic_num; + enum fnic_role_e role; + struct fnic_iport_s iport; struct fc_lport *lport; struct fcoe_ctlr ctlr; /* FIP FCoE controller structure */ struct vnic_dev_bar bar0; @@ -278,6 +298,7 @@ struct fnic { unsigned long state_flags; /* protected by host lock */ enum fnic_state state; spinlock_t fnic_lock; + unsigned long lock_flags; u16 vlan_id; /* VLAN tag including priority */ u8 data_src_addr[ETH_ALEN]; @@ -307,7 +328,7 @@ struct fnic { struct work_struct frame_work; struct work_struct flush_work; struct sk_buff_head frame_queue; - struct sk_buff_head tx_queue; + struct list_head tx_queue; /*** FIP related data members -- start ***/ void (*set_vlan)(struct fnic *, u16 vlan); @@ -397,7 +418,6 @@ void fnic_handle_fip_frame(struct work_struct *work); void fnic_handle_fip_event(struct fnic *fnic); void fnic_fcoe_reset_vlans(struct fnic *fnic); void fnic_fcoe_evlist_free(struct fnic *fnic); -extern void fnic_handle_fip_timer(struct fnic *fnic); static inline int fnic_chk_state_flags_locked(struct fnic *fnic, unsigned long st_flags) @@ -406,4 +426,6 @@ fnic_chk_state_flags_locked(struct fnic *fnic, unsigned long st_flags) } void __fnic_set_state_flags(struct fnic *, unsigned long, unsigned long); void fnic_dump_fchost_stats(struct Scsi_Host *, struct fc_host_statistics *); +void fnic_free_txq(struct list_head *head); + #endif /* _FNIC_H_ */ diff --git a/drivers/scsi/fnic/fnic_fcs.c b/drivers/scsi/fnic/fnic_fcs.c index a08293b2ad9f..e5d5a63e0281 100644 --- a/drivers/scsi/fnic/fnic_fcs.c +++ b/drivers/scsi/fnic/fnic_fcs.c @@ -20,6 +20,8 @@ #include "fnic_io.h" #include "fnic.h" #include "fnic_fip.h" +#include "fnic_fdls.h" +#include "fdls_fc.h" #include "cq_enet_desc.h" #include "cq_exch_desc.h" @@ -28,12 +30,90 @@ struct workqueue_struct *fnic_fip_queue; struct workqueue_struct *fnic_event_queue; static void fnic_set_eth_mode(struct fnic *); -static void fnic_fcoe_send_vlan_req(struct fnic *fnic); static void fnic_fcoe_start_fcf_disc(struct fnic *fnic); static void fnic_fcoe_process_vlan_resp(struct fnic *fnic, struct sk_buff *); static int fnic_fcoe_vlan_check(struct fnic *fnic, u16 flag); static int fnic_fcoe_handle_fip_frame(struct fnic *fnic, struct sk_buff *skb); +/* Frame initialization */ +/* + * Variables: + * dst_mac, src_mac + */ +struct fnic_eth_hdr_s fnic_eth_hdr_fcoe = { + .ether_type = 0x0689 +}; + +/* + * Variables: + * None + */ +struct fnic_fcoe_hdr_s fnic_fcoe_hdr = { + .sof = 0x2E +}; + +uint8_t FCOE_ALL_FCF_MAC[6] = { 0x0e, 0xfc, 0x00, 0xff, 0xff, 0xfe }; + +/* + * Internal Functions + * This function will initialize the src_mac address to be + * used in outgoing frames + */ +static inline void fnic_fdls_set_fcoe_srcmac(struct fnic *fnic, + uint8_t *src_mac) +{ + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Setting src mac: %02x:%02x:%02x:%02x:%02x:%02x", + src_mac[0], src_mac[1], src_mac[2], src_mac[3], + src_mac[4], src_mac[5]); + + memcpy(fnic->iport.fpma, src_mac, 6); +} + +/* + * This function will initialize the dst_mac address to be + * used in 
outgoing frames + */ +static inline void fnic_fdls_set_fcoe_dstmac(struct fnic *fnic, + uint8_t *dst_mac) +{ + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Setting dst mac: %02x:%02x:%02x:%02x:%02x:%02x", + dst_mac[0], dst_mac[1], dst_mac[2], dst_mac[3], + dst_mac[4], dst_mac[5]); + + memcpy(fnic->iport.fcfmac, dst_mac, 6); +} + +/* + * FPMA can be either taken from ethhdr(dst_mac) or flogi resp + * or derive from FC_MAP and FCID combination. While it should be + * same, revisit this if there is any possibility of not-correct. + */ +void fnic_fdls_learn_fcoe_macs(struct fnic_iport_s *iport, void *rx_frame, + uint8_t *fcid) +{ + struct fnic *fnic = iport->fnic; + struct fnic_eth_hdr_s *ethhdr = (struct fnic_eth_hdr_s *) rx_frame; + uint8_t fcmac[6] = { 0x0E, 0xFC, 0x00, 0x00, 0x00, 0x00 }; + + memcpy(&fcmac[3], fcid, 3); + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "learn fcoe: dst_mac: %02x:%02x:%02x:%02x:%02x:%02x", + ethhdr->dst_mac[0], ethhdr->dst_mac[1], + ethhdr->dst_mac[2], ethhdr->dst_mac[3], + ethhdr->dst_mac[4], ethhdr->dst_mac[5]); + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "learn fcoe: fc_mac: %02x:%02x:%02x:%02x:%02x:%02x", + fcmac[0], fcmac[1], fcmac[2], fcmac[3], fcmac[4], + fcmac[5]); + + fnic_fdls_set_fcoe_srcmac(fnic, fcmac); + fnic_fdls_set_fcoe_dstmac(fnic, ethhdr->src_mac); +} + void fnic_handle_link(struct work_struct *work) { struct fnic *fnic = container_of(work, struct fnic, link_work); @@ -363,7 +443,7 @@ static inline int is_fnic_fip_flogi_reject(struct fcoe_ctlr *fip, return 0; } -static void fnic_fcoe_send_vlan_req(struct fnic *fnic) +void fnic_fcoe_send_vlan_req(struct fnic *fnic) { struct fcoe_ctlr *fip = &fnic->ctlr; struct fnic_stats *fnic_stats = &fnic->fnic_stats; @@ -1068,116 +1148,119 @@ void fnic_eth_send(struct fcoe_ctlr *fip, struct sk_buff *skb) /* * Send FC frame. 
*/ -static int fnic_send_frame(struct fnic *fnic, struct fc_frame *fp) +static int fnic_send_frame(struct fnic *fnic, void *frame, int frame_len) { struct vnic_wq *wq = &fnic->wq[0]; - struct sk_buff *skb; dma_addr_t pa; - struct ethhdr *eth_hdr; - struct vlan_ethhdr *vlan_hdr; - struct fcoe_hdr *fcoe_hdr; - struct fc_frame_header *fh; - u32 tot_len, eth_hdr_len; int ret = 0; unsigned long flags; - fh = fc_frame_header_get(fp); - skb = fp_skb(fp); - - if (unlikely(fh->fh_r_ctl == FC_RCTL_ELS_REQ) && - fcoe_ctlr_els_send(&fnic->ctlr, fnic->lport, skb)) - return 0; - - if (!fnic->vlan_hw_insert) { - eth_hdr_len = sizeof(*vlan_hdr) + sizeof(*fcoe_hdr); - vlan_hdr = skb_push(skb, eth_hdr_len); - eth_hdr = (struct ethhdr *)vlan_hdr; - vlan_hdr->h_vlan_proto = htons(ETH_P_8021Q); - vlan_hdr->h_vlan_encapsulated_proto = htons(ETH_P_FCOE); - vlan_hdr->h_vlan_TCI = htons(fnic->vlan_id); - fcoe_hdr = (struct fcoe_hdr *)(vlan_hdr + 1); - } else { - eth_hdr_len = sizeof(*eth_hdr) + sizeof(*fcoe_hdr); - eth_hdr = skb_push(skb, eth_hdr_len); - eth_hdr->h_proto = htons(ETH_P_FCOE); - fcoe_hdr = (struct fcoe_hdr *)(eth_hdr + 1); - } - - if (fnic->ctlr.map_dest) - fc_fcoe_set_mac(eth_hdr->h_dest, fh->fh_d_id); - else - memcpy(eth_hdr->h_dest, fnic->ctlr.dest_addr, ETH_ALEN); - memcpy(eth_hdr->h_source, fnic->data_src_addr, ETH_ALEN); - - tot_len = skb->len; - BUG_ON(tot_len % 4); + pa = dma_map_single(&fnic->pdev->dev, frame, frame_len, DMA_TO_DEVICE); - memset(fcoe_hdr, 0, sizeof(*fcoe_hdr)); - fcoe_hdr->fcoe_sof = fr_sof(fp); - if (FC_FCOE_VER) - FC_FCOE_ENCAPS_VER(fcoe_hdr, FC_FCOE_VER); - - pa = dma_map_single(&fnic->pdev->dev, eth_hdr, tot_len, DMA_TO_DEVICE); - if (dma_mapping_error(&fnic->pdev->dev, pa)) { - ret = -ENOMEM; - printk(KERN_ERR "DMA map failed with error %d\n", ret); - goto free_skb_on_err; - } - - if ((fnic_fc_trace_set_data(fnic->lport->host->host_no, FNIC_FC_SEND, - (char *)eth_hdr, tot_len)) != 0) { - printk(KERN_ERR "fnic ctlr frame trace error!!!"); + if ((fnic_fc_trace_set_data(fnic->fnic_num, + FNIC_FC_SEND | 0x80, (char *) frame, + frame_len)) != 0) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic ctlr frame trace error"); } spin_lock_irqsave(&fnic->wq_lock[0], flags); if (!vnic_wq_desc_avail(wq)) { - dma_unmap_single(&fnic->pdev->dev, pa, tot_len, DMA_TO_DEVICE); + dma_unmap_single(&fnic->pdev->dev, pa, frame_len, DMA_TO_DEVICE); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "vnic work queue descriptor is not available"); ret = -1; - goto irq_restore; + goto fnic_send_frame_end; } - fnic_queue_wq_desc(wq, skb, pa, tot_len, fr_eof(fp), - 0 /* hw inserts cos value */, - fnic->vlan_id, 1, 1, 1); + /* hw inserts cos value */ + fnic_queue_wq_desc(wq, frame, pa, frame_len, FNIC_FCOE_EOF, + 0, fnic->vlan_id, 1, 1, 1); -irq_restore: +fnic_send_frame_end: spin_unlock_irqrestore(&fnic->wq_lock[0], flags); - -free_skb_on_err: - if (ret) - dev_kfree_skb_any(fp_skb(fp)); - return ret; } -/* - * fnic_send - * Routine to send a raw frame - */ -int fnic_send(struct fc_lport *lp, struct fc_frame *fp) +static int +fdls_send_fcoe_frame(struct fnic *fnic, void *payload, int payload_sz, + uint8_t *srcmac, uint8_t *dstmac) { - struct fnic *fnic = lport_priv(lp); - unsigned long flags; + uint8_t *frame; + struct fnic_eth_hdr_s *ethhdr; + struct fnic_frame_list *frame_elem; + int max_framesz = FNIC_FCOE_FRAME_MAXSZ; + int len = 0; + int ret; - if (fnic->in_remove) { - dev_kfree_skb(fp_skb(fp)); - return -1; + frame = kzalloc(max_framesz, GFP_ATOMIC); + if 
(!frame) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Failed to allocate frame for flogi\n"); + return -ENOMEM; } + ethhdr = (struct fnic_eth_hdr_s *) frame; + + memcpy(frame, (uint8_t *) &fnic_eth_hdr_fcoe, + sizeof(struct fnic_eth_hdr_s)); + len = sizeof(struct fnic_eth_hdr_s); + + memcpy(ethhdr->src_mac, srcmac, ETH_ALEN); + memcpy(ethhdr->dst_mac, dstmac, ETH_ALEN); + + memcpy(frame + len, (uint8_t *) &fnic_fcoe_hdr, + sizeof(struct fnic_fcoe_hdr_s)); + len += sizeof(struct fnic_fcoe_hdr_s); + + memcpy(frame + len, (uint8_t *) payload, payload_sz); + len += payload_sz; /* * Queue frame if in a transitional state. * This occurs while registering the Port_ID / MAC address after FLOGI. */ - spin_lock_irqsave(&fnic->fnic_lock, flags); - if (fnic->state != FNIC_IN_FC_MODE && fnic->state != FNIC_IN_ETH_MODE) { - skb_queue_tail(&fnic->tx_queue, fp_skb(fp)); - spin_unlock_irqrestore(&fnic->fnic_lock, flags); + if ((fnic->state != FNIC_IN_FC_MODE) + && (fnic->state != FNIC_IN_ETH_MODE)) { + frame_elem = kzalloc(sizeof(struct fnic_frame_list), GFP_ATOMIC); + if (!frame_elem) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Failed to allocate memory for fnic_frame_list: %lu\n", + sizeof(struct fnic_frame_list)); + return -ENOMEM; + } + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Queuing frame: 0x%p\n", frame); + + frame_elem->fp = frame; + frame_elem->frame_len = len; + list_add_tail(&frame_elem->links, &fnic->tx_queue); return 0; } - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - return fnic_send_frame(fnic, fp); + ret = fnic_send_frame(fnic, frame, len); + return ret; +} + +void fnic_send_fcoe_frame(struct fnic_iport_s *iport, void *payload, + int payload_sz) +{ + struct fnic *fnic = iport->fnic; + uint8_t *dstmac, *srcmac; + + /* If module unload is in-progress, don't send */ + if (fnic->in_remove) + return; + + if (iport->fabric.flags & FNIC_FDLS_FPMA_LEARNT) { + srcmac = iport->fpma; + dstmac = iport->fcfmac; + } else { + srcmac = iport->hwmac; + dstmac = FCOE_ALL_FCF_MAC; + } + + fdls_send_fcoe_frame(fnic, payload, payload_sz, srcmac, dstmac); } /** @@ -1193,15 +1276,66 @@ int fnic_send(struct fc_lport *lp, struct fc_frame *fp) void fnic_flush_tx(struct work_struct *work) { struct fnic *fnic = container_of(work, struct fnic, flush_work); - struct sk_buff *skb; struct fc_frame *fp; + struct fnic_frame_list *cur_frame, *next; - while ((skb = skb_dequeue(&fnic->tx_queue))) { - fp = (struct fc_frame *)skb; - fnic_send_frame(fnic, fp); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Flush queued frames"); + + list_for_each_entry_safe(cur_frame, next, &fnic->tx_queue, links) { + fp = cur_frame->fp; + list_del(&cur_frame->links); + fnic_send_frame(fnic, fp, cur_frame->frame_len); } } +int +fnic_fdls_register_portid(struct fnic_iport_s *iport, u32 port_id, + void *fp) +{ + struct fnic *fnic = iport->fnic; + struct fnic_eth_hdr_s *ethhdr; + int ret; + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Setting port id: 0x%x fp: 0x%p fnic state: %d", port_id, + fp, fnic->state); + + if (fp) { + ethhdr = (struct fnic_eth_hdr_s *) fp; + vnic_dev_add_addr(fnic->vdev, ethhdr->dst_mac); + } + + /* Change state to reflect transition to FC mode */ + if (fnic->state == FNIC_IN_ETH_MODE || fnic->state == FNIC_IN_FC_MODE) + fnic->state = FNIC_IN_ETH_TRANS_FC_MODE; + else { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Unexpected fnic state while processing FLOGI response\n"); + return -1; + } + + /* + * 
Send FLOGI registration to firmware to set up FC mode. + * The new address will be set up when registration completes. + */ + ret = fnic_flogi_reg_handler(fnic, port_id); + if (ret < 0) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FLOGI registration error ret: %d fnic state: %d\n", + ret, fnic->state); + if (fnic->state == FNIC_IN_ETH_TRANS_FC_MODE) + fnic->state = FNIC_IN_ETH_MODE; + + return -1; + } + iport->fabric.flags |= FNIC_FDLS_FPMA_LEARNT; + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FLOGI registration success\n"); + return 0; +} + /** * fnic_set_eth_mode() - put fnic into ethernet mode. * @fnic: fnic device @@ -1240,6 +1374,17 @@ static void fnic_set_eth_mode(struct fnic *fnic) spin_unlock_irqrestore(&fnic->fnic_lock, flags); } +void fnic_free_txq(struct list_head *head) +{ + struct fnic_frame_list *cur_frame, *next; + + list_for_each_entry_safe(cur_frame, next, head, links) { + list_del(&cur_frame->links); + kfree(cur_frame->fp); + kfree(cur_frame); + } +} + static void fnic_wq_complete_frame_send(struct vnic_wq *wq, struct cq_desc *cq_desc, struct vnic_wq_buf *buf, void *opaque) @@ -1319,88 +1464,3 @@ void fnic_fcoe_reset_vlans(struct fnic *fnic) spin_unlock_irqrestore(&fnic->vlans_lock, flags); } -void fnic_handle_fip_timer(struct fnic *fnic) -{ - unsigned long flags; - struct fcoe_vlan *vlan; - struct fnic_stats *fnic_stats = &fnic->fnic_stats; - u64 sol_time; - - spin_lock_irqsave(&fnic->fnic_lock, flags); - if (fnic->stop_rx_link_events) { - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - return; - } - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - - if (fnic->ctlr.mode == FIP_MODE_NON_FIP) - return; - - spin_lock_irqsave(&fnic->vlans_lock, flags); - if (list_empty(&fnic->vlans)) { - spin_unlock_irqrestore(&fnic->vlans_lock, flags); - /* no vlans available, try again */ - if (unlikely(fnic_log_level & FNIC_FCS_LOGGING)) - if (printk_ratelimit()) - shost_printk(KERN_DEBUG, fnic->lport->host, - "Start VLAN Discovery\n"); - fnic_event_enq(fnic, FNIC_EVT_START_VLAN_DISC); - return; - } - - vlan = list_first_entry(&fnic->vlans, struct fcoe_vlan, list); - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "fip_timer: vlan %d state %d sol_count %d\n", - vlan->vid, vlan->state, vlan->sol_count); - switch (vlan->state) { - case FIP_VLAN_USED: - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "FIP VLAN is selected for FC transaction\n"); - spin_unlock_irqrestore(&fnic->vlans_lock, flags); - break; - case FIP_VLAN_FAILED: - spin_unlock_irqrestore(&fnic->vlans_lock, flags); - /* if all vlans are in failed state, restart vlan disc */ - if (unlikely(fnic_log_level & FNIC_FCS_LOGGING)) - if (printk_ratelimit()) - shost_printk(KERN_DEBUG, fnic->lport->host, - "Start VLAN Discovery\n"); - fnic_event_enq(fnic, FNIC_EVT_START_VLAN_DISC); - break; - case FIP_VLAN_SENT: - if (vlan->sol_count >= FCOE_CTLR_MAX_SOL) { - /* - * no response on this vlan, remove from the list. 
- * Try the next vlan - */ - FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, - "Dequeue this VLAN ID %d from list\n", - vlan->vid); - list_del(&vlan->list); - kfree(vlan); - vlan = NULL; - if (list_empty(&fnic->vlans)) { - /* we exhausted all vlans, restart vlan disc */ - spin_unlock_irqrestore(&fnic->vlans_lock, - flags); - FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, - "fip_timer: vlan list empty, " - "trigger vlan disc\n"); - fnic_event_enq(fnic, FNIC_EVT_START_VLAN_DISC); - return; - } - /* check the next vlan */ - vlan = list_first_entry(&fnic->vlans, struct fcoe_vlan, - list); - fnic->set_vlan(fnic, vlan->vid); - vlan->state = FIP_VLAN_SENT; /* sent now */ - } - spin_unlock_irqrestore(&fnic->vlans_lock, flags); - atomic64_inc(&fnic_stats->vlan_stats.sol_expiry_count); - vlan->sol_count++; - sol_time = jiffies + msecs_to_jiffies - (FCOE_CTLR_START_DELAY); - mod_timer(&fnic->fip_timer, round_jiffies(sol_time)); - break; - } -} diff --git a/drivers/scsi/fnic/fnic_fdls.h b/drivers/scsi/fnic/fnic_fdls.h index 0097ba836ff1..f53c1d8dbe7e 100644 --- a/drivers/scsi/fnic/fnic_fdls.h +++ b/drivers/scsi/fnic/fnic_fdls.h @@ -382,8 +382,8 @@ inline void fnic_del_tport_timer_sync(struct fnic *fnic, struct fnic_tport_s *tport); void fdls_send_fabric_logo(struct fnic_iport_s *iport); int fnic_fdls_validate_and_get_frame_type(struct fnic_iport_s *iport, - void *rx_frame, int len, - int fchdr_offset); + void *rx_frame, int len, + int fchdr_offset); void fdls_send_tport_abts(struct fnic_iport_s *iport, struct fnic_tport_s *tport); bool fdls_delete_tport(struct fnic_iport_s *iport, @@ -399,7 +399,6 @@ int fnic_send_fip_frame(struct fnic_iport_s *iport, void *payload, int payload_sz); void fnic_fdls_learn_fcoe_macs(struct fnic_iport_s *iport, void *rx_frame, uint8_t *fcid); - void fnic_fdls_add_tport(struct fnic_iport_s *iport, struct fnic_tport_s *tport, unsigned long flags); void fnic_fdls_remove_tport(struct fnic_iport_s *iport, diff --git a/drivers/scsi/fnic/fnic_io.h b/drivers/scsi/fnic/fnic_io.h index 5895ead20e14..6fe642cb387b 100644 --- a/drivers/scsi/fnic/fnic_io.h +++ b/drivers/scsi/fnic/fnic_io.h @@ -55,15 +55,4 @@ struct fnic_io_req { unsigned int tag; struct scsi_cmnd *sc; /* midlayer's cmd pointer */ }; - -enum fnic_port_speeds { - DCEM_PORTSPEED_NONE = 0, - DCEM_PORTSPEED_1G = 1000, - DCEM_PORTSPEED_10G = 10000, - DCEM_PORTSPEED_20G = 20000, - DCEM_PORTSPEED_25G = 25000, - DCEM_PORTSPEED_40G = 40000, - DCEM_PORTSPEED_4x10G = 41000, - DCEM_PORTSPEED_100G = 100000, -}; #endif /* _FNIC_IO_H_ */ diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c index 1633f7e6af1e..16c9f87c932b 100644 --- a/drivers/scsi/fnic/fnic_main.c +++ b/drivers/scsi/fnic/fnic_main.c @@ -31,6 +31,8 @@ #include "fnic_io.h" #include "fnic_fip.h" #include "fnic.h" +#include "fnic_fdls.h" +#include "fdls_fc.h" #define PCI_DEVICE_ID_CISCO_FNIC 0x0045 @@ -80,7 +82,6 @@ module_param(fnic_max_qdepth, uint, S_IRUGO|S_IWUSR); MODULE_PARM_DESC(fnic_max_qdepth, "Queue depth to report for each LUN"); static struct libfc_function_template fnic_transport_template = { - .frame_send = fnic_send, .lport_set_port_id = fnic_set_port_id, .fcp_abort_io = fnic_empty_scsi_cleanup, .fcp_cleanup = fnic_empty_scsi_cleanup, @@ -413,7 +414,7 @@ static void fnic_fip_notify_timer(struct timer_list *t) { struct fnic *fnic = from_timer(fnic, t, fip_timer); - fnic_handle_fip_timer(fnic); + /* Placeholder function */ } static void fnic_notify_timer_start(struct fnic *fnic) @@ -921,7 +922,7 @@ static int 
fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) INIT_WORK(&fnic->link_work, fnic_handle_link); INIT_WORK(&fnic->frame_work, fnic_handle_frame); skb_queue_head_init(&fnic->frame_queue); - skb_queue_head_init(&fnic->tx_queue); + INIT_LIST_HEAD(&fnic->tx_queue); fc_fabric_login(lp); @@ -1004,7 +1005,7 @@ static void fnic_remove(struct pci_dev *pdev) */ flush_workqueue(fnic_event_queue); skb_queue_purge(&fnic->frame_queue); - skb_queue_purge(&fnic->tx_queue); + fnic_free_txq(&fnic->tx_queue); if (fnic->config.flags & VFCF_FIP_CAPABLE) { del_timer_sync(&fnic->fip_timer); @@ -1036,7 +1037,6 @@ static void fnic_remove(struct pci_dev *pdev) fnic_cleanup(fnic); BUG_ON(!skb_queue_empty(&fnic->frame_queue)); - BUG_ON(!skb_queue_empty(&fnic->tx_queue)); spin_lock_irqsave(&fnic_list_lock, flags); list_del(&fnic->list); diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c index 2ba61dba4569..295dcda4ec16 100644 --- a/drivers/scsi/fnic/fnic_scsi.c +++ b/drivers/scsi/fnic/fnic_scsi.c @@ -184,7 +184,7 @@ int fnic_fw_reset_handler(struct fnic *fnic) fnic_set_state_flags(fnic, FNIC_FLAGS_FWRESET); skb_queue_purge(&fnic->frame_queue); - skb_queue_purge(&fnic->tx_queue); + fnic_free_txq(&fnic->tx_queue); /* wait for io cmpl */ while (atomic_read(&fnic->in_flight)) @@ -674,7 +674,7 @@ static int fnic_fcpio_fw_reset_cmpl_handler(struct fnic *fnic, */ if (fnic->remove_wait || ret) { spin_unlock_irqrestore(&fnic->fnic_lock, flags); - skb_queue_purge(&fnic->tx_queue); + fnic_free_txq(&fnic->tx_queue); goto reset_cmpl_handler_end; } @@ -1717,7 +1717,7 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data) return true; } -static void fnic_rport_exch_reset(struct fnic *fnic, u32 port_id) +void fnic_rport_exch_reset(struct fnic *fnic, u32 port_id) { struct terminate_stats *term_stats = &fnic->fnic_stats.term_stats; struct fnic_rport_abort_io_iter_data iter_data = { From patchwork Wed Oct 2 18:24:01 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Karan Tilak Kumar X-Patchwork-Id: 832359 Received: from rcdn-iport-5.cisco.com (rcdn-iport-5.cisco.com [173.37.86.76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 64DAE1D0F58; Wed, 2 Oct 2024 18:26:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=173.37.86.76 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727893598; cv=none; b=BZGO4donXQFYvy4QPc1fZBZeptXySPKURTKM8ldKbt/qjypP7wDTRBuyh+Y7eoF9aYKAuQYkLFqlEfEv+xxKD0bL6qJwo44huI0KZef4bS/Tk8b5nwHSgl8Vz48fDqfHmNfskzNVULELBMgnPbbJXE4tjg1HHJFj1WZaNP4uPtk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727893598; c=relaxed/simple; bh=ygxxKjTp+biUKdXvjixBeOsFy/LRPIwHYpj6hTWtztI=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=eTI0Ee0XYGRI09TrV8C+JLZAOWBxchTEvXm1s48FdaBURL1y+jqS4qRv6uZCWWP05oa+mQdhcVvDNFww5+AwKQ3g0J0HKvkI+LofbAB/K9c4eL7ZtfEbI4cosHcsq5UUO/bNfUqZ626Z2XzG7KGtYuhrXwyxl7mVN5Su541EX3I= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=cisco.com; spf=pass smtp.mailfrom=cisco.com; dkim=pass (1024-bit key) header.d=cisco.com header.i=@cisco.com header.b=ch196Wod; arc=none smtp.client-ip=173.37.86.76 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=cisco.com 
Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=cisco.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=cisco.com header.i=@cisco.com header.b="ch196Wod" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cisco.com; i=@cisco.com; l=21581; q=dns/txt; s=iport; t=1727893596; x=1729103196; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=f7vSjAO/KL5A+R+uPMSIkcFY/AL5ZX3DYXfZ3rVenAk=; b=ch196WodD4UW5uJu+gIp1kXZzM+UKp936XQUVbHQ6C4wzRDi+cZDg7Rl 9D1UHLE21q/5rnLxl65bOy8xX9jYK3xUqYkqWt6lM9oknSDcugcwjOLz2 Vu63XYbjRjpAEso1tIFcBIPDpwHiay4XhZ8wzf1Inpwnmxyq4zJfB3ZMD k=; X-CSE-ConnectionGUID: EqSINs7cQQaWRltiKr3Q9Q== X-CSE-MsgGUID: dISbuyouT6qPIm51NgQFqw== X-IronPort-AV: E=Sophos;i="6.11,172,1725321600"; d="scan'208";a="269111525" Received: from rcdn-core-8.cisco.com ([173.37.93.144]) by rcdn-iport-5.cisco.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Oct 2024 18:26:35 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by rcdn-core-8.cisco.com (8.15.2/8.15.2) with ESMTPSA id 492IOHQo009807 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Wed, 2 Oct 2024 18:26:34 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, aeasi@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar , kernel test robot Subject: [PATCH v4 05/14] scsi: fnic: Add support for unsolicited requests and responses Date: Wed, 2 Oct 2024 11:24:01 -0700 Message-Id: <20241002182410.68093-6-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20241002182410.68093-1-kartilak@cisco.com> References: <20241002182410.68093-1-kartilak@cisco.com> Precedence: bulk X-Mailing-List: linux-scsi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: rcdn-core-8.cisco.com Add support for unsolicited requests and responses. Add support to accept and reject frames. Reported-by: kernel test robot Closes: https://lore.kernel.org/oe-kbuild-all/202409291705.MugERX98-lkp@ intel.com/ Reviewed-by: Sesidhar Baddela Signed-off-by: Gian Carlo Boffa Signed-off-by: Arulprabhu Ponnusamy Signed-off-by: Arun Easi Signed-off-by: Karan Tilak Kumar --- Changes between v3 and v4: Fix kernel test robot warnings. Changes between v2 and v3: Add fnic_std_ba_acc removed from previous patch. Incorporate review comments from Hannes: Replace redundant definitions with standard definitions. Changes between v1 and v2: Incorporate review comments from Hannes: Replace fnic_del_tport_timer_sync macro calls with function calls. Fix indentation. Replace definitions with standard definitions from fc_els.h. 
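Note for reviewers: the accept/reject handlers added below (fdls_send_rscn_resp, fdls_send_logo_resp, fdls_process_unsupported_els_req, fdls_process_plogi_req) all follow the same pattern: copy one of the fnic_std_els_acc / fnic_std_els_rjt templates, swap S_ID and D_ID from the incoming header, echo the OX_ID, leave RX_ID unassigned, and hand the frame to fnic_send_fcoe_frame(). The stand-alone sketch below illustrates only that header mirroring; the struct layout, field names and constants in it are simplified assumptions for illustration and are not the driver's fc_std_* definitions or APIs.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative header layout only; the driver uses struct fc_frame_header. */
struct sketch_fc_hdr {
	uint8_t  r_ctl;
	uint8_t  d_id[3];
	uint8_t  cs_ctl;
	uint8_t  s_id[3];
	uint8_t  type;
	uint8_t  f_ctl[3];
	uint8_t  seq_id;
	uint8_t  df_ctl;
	uint16_t seq_cnt;
	uint16_t ox_id;
	uint16_t rx_id;
	uint32_t parm;
};

enum { SKETCH_ELS_LS_RJT = 0x01, SKETCH_ELS_LS_ACC = 0x02 };

/*
 * Mirror the request header into a reply: swap S_ID/D_ID, echo OX_ID,
 * leave RX_ID unassigned (0xffff), and set the ELS command byte.
 */
static void sketch_build_els_reply(const struct sketch_fc_hdr *req,
				   struct sketch_fc_hdr *rsp,
				   uint8_t els_cmd, uint8_t *payload)
{
	memset(rsp, 0, sizeof(*rsp));
	rsp->r_ctl = 0x23;		/* ELS reply */
	rsp->type = 0x01;		/* FC-4 type: extended link services */
	memcpy(rsp->d_id, req->s_id, 3);	/* reply targets the requester */
	memcpy(rsp->s_id, req->d_id, 3);
	rsp->ox_id = req->ox_id;	/* echo originator exchange id */
	rsp->rx_id = 0xffff;		/* responder exchange left unassigned */
	payload[0] = els_cmd;		/* LS_ACC to accept, LS_RJT to reject */
}

int main(void)
{
	struct sketch_fc_hdr req = { .ox_id = 0x1234 };
	struct sketch_fc_hdr rsp;
	uint8_t payload[4] = { 0 };

	memcpy(req.s_id, (uint8_t[]){ 0x01, 0x02, 0x03 }, 3);
	memcpy(req.d_id, (uint8_t[]){ 0x0a, 0x0b, 0x0c }, 3);

	/* Unsupported ELS request: answer with LS_RJT. */
	sketch_build_els_reply(&req, &rsp, SKETCH_ELS_LS_RJT, payload);
	printf("reply cmd 0x%02x ox_id 0x%04x to fcid %02x%02x%02x\n",
	       payload[0], rsp.ox_id, rsp.d_id[0], rsp.d_id[1], rsp.d_id[2]);
	return 0;
}

In the patch itself the replies are built in stack-allocated template copies, so the handlers avoid allocation except for the variable-length ECHO case in fdls_process_els_req(), which copies the request payload back into a freshly allocated frame before sending it.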
--- drivers/scsi/fnic/fdls_disc.c | 578 +++++++++++++++++++++++++++++++++- 1 file changed, 576 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/fnic/fdls_disc.c b/drivers/scsi/fnic/fdls_disc.c index 0757b9ab1a28..e0b6976bb7f6 100644 --- a/drivers/scsi/fnic/fdls_disc.c +++ b/drivers/scsi/fnic/fdls_disc.c @@ -142,6 +142,22 @@ struct fc_std_scr fnic_std_scr_req = { .scr_reg_func = ELS_SCRF_FULL} }; +/* + * Variables: + * did, ox_id, rx_id + */ +struct fc_std_els_rsp fnic_std_els_acc = { + .fchdr = {.fh_r_ctl = FC_RCTL_ELS_REP, .fh_d_id = {0xFF, 0xFF, 0xFD}, + .fh_type = FC_TYPE_ELS, .fh_f_ctl = {FNIC_ELS_REP_FCTL, 0, 0}}, + .u.acc.la_cmd = ELS_LS_ACC, +}; + +struct fc_std_els_rsp fnic_std_els_rjt = { + .fchdr = {.fh_r_ctl = FC_RCTL_ELS_REP, .fh_type = FC_TYPE_ELS, + .fh_f_ctl = {FNIC_ELS_REP_FCTL, 0, 0}}, + .u.rej.er_cmd = ELS_LS_RJT, +}; + /* * Variables: * did, ox_id, rx_id, fcid, wwpn @@ -169,6 +185,12 @@ struct fc_frame_header fc_std_tport_abts = { .fh_parm_offset = 0x00000000, /* bit:0 = 0 Abort a exchange */ }; +static struct fc_std_abts_ba_acc fnic_std_ba_acc = { + .fchdr = {.fh_r_ctl = FC_RCTL_BA_ACC, + .fh_f_ctl = {FNIC_FCP_RSP_FCTL, 0, 0}}, + .acc = {.ba_low_seq_cnt = 0, .ba_high_seq_cnt = 0xFFFF} +}; + #define RETRIES_EXHAUSTED(iport) \ (iport->fabric.retry_counter == FABRIC_LOGO_MAX_RETRY) @@ -201,7 +223,6 @@ static void fdls_target_restart_nexus(struct fnic_tport_s *tport); static void fdls_start_tport_timer(struct fnic_iport_s *iport, struct fnic_tport_s *tport, int timeout); static void fdls_tport_timer_callback(struct timer_list *t); - static void fdls_start_fabric_timer(struct fnic_iport_s *iport, int timeout); static void @@ -506,6 +527,50 @@ fdls_start_tport_timer(struct fnic_iport_s *iport, tport->timer_pending = 1; } +static void +fdls_send_rscn_resp(struct fnic_iport_s *iport, + struct fc_frame_header *rscn_fchdr) +{ + struct fc_std_els_rsp els_acc; + uint16_t oxid; + uint8_t fcid[3]; + + memcpy(&els_acc, &fnic_std_els_acc, FC_ELS_RSP_ACC_SIZE); + + hton24(fcid, iport->fcid); + FNIC_STD_SET_S_ID((&els_acc.fchdr), fcid); + FNIC_STD_SET_D_ID((&els_acc.fchdr), rscn_fchdr->fh_s_id); + + oxid = FNIC_STD_GET_OX_ID(rscn_fchdr); + FNIC_STD_SET_OX_ID((&els_acc.fchdr), oxid); + + FNIC_STD_SET_RX_ID((&els_acc.fchdr), FNIC_UNASSIGNED_RXID); + + fnic_send_fcoe_frame(iport, &els_acc, FC_ELS_RSP_ACC_SIZE); +} + +static void +fdls_send_logo_resp(struct fnic_iport_s *iport, + struct fc_frame_header *req_fchdr) +{ + struct fc_std_els_rsp logo_resp; + uint16_t oxid; + uint8_t fcid[3]; + + memcpy(&logo_resp, &fnic_std_els_acc, sizeof(struct fc_std_els_rsp)); + + hton24(fcid, iport->fcid); + FNIC_STD_SET_S_ID((&logo_resp.fchdr), fcid); + FNIC_STD_SET_D_ID((&logo_resp.fchdr), req_fchdr->fh_s_id); + + oxid = FNIC_STD_GET_OX_ID(req_fchdr); + FNIC_STD_SET_OX_ID((&logo_resp.fchdr), oxid); + + FNIC_STD_SET_RX_ID((&logo_resp.fchdr), FNIC_UNASSIGNED_RXID); + + fnic_send_fcoe_frame(iport, &logo_resp, FC_ELS_RSP_ACC_SIZE); +} + void fdls_send_tport_abts(struct fnic_iport_s *iport, struct fnic_tport_s *tport) @@ -2888,6 +2953,192 @@ fdls_process_fabric_abts_rsp(struct fnic_iport_s *iport, } } +static void +fdls_process_abts_req(struct fnic_iport_s *iport, struct fc_frame_header *fchdr) +{ + struct fc_std_abts_ba_acc ba_acc; + uint32_t nport_id; + uint16_t oxid; + struct fnic_tport_s *tport; + struct fnic *fnic = iport->fnic; + struct fnic_tgt_oxid_pool_s *oxid_pool; + + nport_id = ntoh24(fchdr->fh_s_id); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received abort from SID 
%8x", nport_id); + + tport = fnic_find_tport_by_fcid(iport, nport_id); + if (tport) { + oxid = FNIC_STD_GET_OX_ID(fchdr); + if (tport->oxid_used == oxid) { + tport->flags |= FNIC_FDLS_TGT_ABORT_ISSUED; + oxid_pool = fdls_get_tgt_oxid_pool(tport); + fdls_free_tgt_oxid(iport, oxid_pool, ntohs(oxid)); + } + } + + memcpy(&ba_acc, &fnic_std_ba_acc, sizeof(struct fc_std_abts_ba_acc)); + FNIC_STD_SET_S_ID((&ba_acc.fchdr), fchdr->fh_d_id); + FNIC_STD_SET_D_ID((&ba_acc.fchdr), fchdr->fh_s_id); + + ba_acc.fchdr.fh_rx_id = fchdr->fh_rx_id; + ba_acc.acc.ba_rx_id = ba_acc.fchdr.fh_rx_id; + ba_acc.fchdr.fh_ox_id = fchdr->fh_ox_id; + ba_acc.acc.ba_ox_id = ba_acc.fchdr.fh_ox_id; + + fnic_send_fcoe_frame(iport, &ba_acc, sizeof(struct fc_std_abts_ba_acc)); +} + +static void +fdls_process_unsupported_els_req(struct fnic_iport_s *iport, + struct fc_frame_header *fchdr) +{ + struct fc_std_els_rsp ls_rsp; + uint16_t oxid; + uint32_t d_id = ntoh24(fchdr->fh_d_id); + struct fnic *fnic = iport->fnic; + + memcpy(&ls_rsp, &fnic_std_els_rjt, FC_ELS_RSP_REJ_SIZE); + + if (iport->fcid != d_id) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Dropping unsupported ELS with illegal frame bits 0x%x\n", + d_id); + return; + } + + if ((iport->state != FNIC_IPORT_STATE_READY) + && (iport->state != FNIC_IPORT_STATE_FABRIC_DISC)) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Dropping unsupported ELS request in iport state: %d", + iport->state); + return; + } + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Process unsupported ELS request from SID: 0x%x", + ntoh24(fchdr->fh_s_id)); + /* We don't support this ELS request, send a reject */ + ls_rsp.u.rej.er_reason = 0x0B; + ls_rsp.u.rej.er_explan = 0x0; + ls_rsp.u.rej.er_vendor = 0x0; + + FNIC_STD_SET_S_ID((&ls_rsp.fchdr), fchdr->fh_d_id); + FNIC_STD_SET_D_ID((&ls_rsp.fchdr), fchdr->fh_s_id); + oxid = FNIC_STD_GET_OX_ID(fchdr); + FNIC_STD_SET_OX_ID((&ls_rsp.fchdr), oxid); + + FNIC_STD_SET_RX_ID((&ls_rsp.fchdr), FNIC_UNSUPPORTED_RESP_OXID); + + fnic_send_fcoe_frame(iport, &ls_rsp, FC_ELS_RSP_REJ_SIZE); +} + +static void +fdls_process_rls_req(struct fnic_iport_s *iport, struct fc_frame_header *fchdr) +{ + struct fc_std_rls_acc rls_acc_rsp; + uint16_t oxid; + struct fnic *fnic = iport->fnic; + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Process RLS request %d", iport->fnic->fnic_num); + + if ((iport->state != FNIC_IPORT_STATE_READY) + && (iport->state != FNIC_IPORT_STATE_FABRIC_DISC)) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received RLS req in iport state: %d. 
Dropping the frame.", + iport->state); + return; + } + + memset(&rls_acc_rsp, 0, sizeof(struct fc_std_rls_acc)); + + FNIC_STD_SET_S_ID((&rls_acc_rsp.fchdr), fchdr->fh_d_id); + FNIC_STD_SET_D_ID((&rls_acc_rsp.fchdr), fchdr->fh_s_id); + oxid = FNIC_STD_GET_OX_ID(fchdr); + FNIC_STD_SET_OX_ID((&rls_acc_rsp.fchdr), oxid); + FNIC_STD_SET_RX_ID((&rls_acc_rsp.fchdr), 0xffff); + FNIC_STD_SET_F_CTL(&rls_acc_rsp.fchdr, FNIC_ELS_REP_FCTL << 16); + FNIC_STD_SET_R_CTL(&rls_acc_rsp.fchdr, FC_RCTL_ELS_REP); + FNIC_STD_SET_TYPE(&rls_acc_rsp.fchdr, FC_TYPE_ELS); + rls_acc_rsp.els.rls_cmd = ELS_LS_ACC; + rls_acc_rsp.els.rls_lesb.lesb_link_fail = + htonl(iport->fnic->link_down_cnt); + + fnic_send_fcoe_frame(iport, &rls_acc_rsp, + sizeof(struct fc_std_rls_acc)); +} + +static void +fdls_process_els_req(struct fnic_iport_s *iport, struct fc_frame_header *fchdr, + uint32_t len) +{ + struct fc_std_els_rsp *els_acc; + uint16_t oxid; + uint8_t fcid[3]; + uint8_t *fc_payload; + uint8_t *dst_frame; + uint8_t type; + struct fnic *fnic = iport->fnic; + + fc_payload = (uint8_t *) fchdr + sizeof(struct fc_frame_header); + type = *fc_payload; + + if ((iport->state != FNIC_IPORT_STATE_READY) + && (iport->state != FNIC_IPORT_STATE_FABRIC_DISC)) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Dropping ELS frame type :%x in iport state: %d", + type, iport->state); + return; + } + switch (type) { + case ELS_ECHO: + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "sending LS_ACC for ECHO request %d\n", + iport->fnic->fnic_num); + break; + + case ELS_RRQ: + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "sending LS_ACC for RRQ request %d\n", + iport->fnic->fnic_num); + break; + + default: + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "sending LS_ACC for %x ELS frame\n", type); + break; + } + dst_frame = kzalloc(len, GFP_ATOMIC); + if (!dst_frame) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Failed to allocate ELS response for %x", type); + return; + } + if (type == ELS_ECHO) { + /* Brocade sends a longer payload, copy all frame back */ + memcpy(dst_frame, fchdr, len); + } + + els_acc = (struct fc_std_els_rsp *)dst_frame; + memcpy(els_acc, &fnic_std_els_acc, FC_ELS_RSP_ACC_SIZE); + + hton24(fcid, iport->fcid); + FNIC_STD_SET_S_ID((&els_acc->fchdr), fcid); + FNIC_STD_SET_D_ID((&els_acc->fchdr), fchdr->fh_s_id); + + oxid = FNIC_STD_GET_OX_ID(fchdr); + FNIC_STD_SET_OX_ID((&els_acc->fchdr), oxid); + FNIC_STD_SET_RX_ID((&els_acc->fchdr), 0xffff); + + if (type == ELS_ECHO) + fnic_send_fcoe_frame(iport, els_acc, len); + else + fnic_send_fcoe_frame(iport, els_acc, FC_ELS_RSP_ACC_SIZE); + + kfree(dst_frame); +} + static void fdls_process_tgt_abts_rsp(struct fnic_iport_s *iport, struct fc_frame_header *fchdr) @@ -3030,6 +3281,303 @@ fdls_process_tgt_abts_rsp(struct fnic_iport_s *iport, "Received ABTS response for unknown frame %p", iport); } +static void +fdls_process_plogi_req(struct fnic_iport_s *iport, + struct fc_frame_header *fchdr) +{ + struct fc_std_els_rsp plogi_rsp; + uint16_t oxid; + uint32_t d_id = ntoh24(fchdr->fh_d_id); + struct fnic *fnic = iport->fnic; + + memcpy(&plogi_rsp, &fnic_std_els_rjt, sizeof(struct fc_std_els_rsp)); + + if (iport->fcid != d_id) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received PLOGI with illegal frame bits. 
Dropping frame %p", + iport); + return; + } + + if (iport->state != FNIC_IPORT_STATE_READY) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Received PLOGI request in iport state: %d Dropping frame", + iport->state); + return; + } + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Process PLOGI request from SID: 0x%x", + ntoh24(fchdr->fh_s_id)); + + /* We don't support PLOGI request, send a reject */ + plogi_rsp.u.rej.er_reason = 0x0B; + plogi_rsp.u.rej.er_explan = 0x0; + plogi_rsp.u.rej.er_vendor = 0x0; + + FNIC_STD_SET_S_ID((&plogi_rsp.fchdr), fchdr->fh_d_id); + FNIC_STD_SET_D_ID((&plogi_rsp.fchdr), fchdr->fh_s_id); + + oxid = FNIC_STD_GET_OX_ID(fchdr); + FNIC_STD_SET_OX_ID((&plogi_rsp.fchdr), oxid); + + FNIC_STD_SET_RX_ID((&plogi_rsp.fchdr), FNIC_UNASSIGNED_RXID); + + fnic_send_fcoe_frame(iport, &plogi_rsp, FC_ELS_RSP_REJ_SIZE); +} + +static void +fdls_process_logo_req(struct fnic_iport_s *iport, struct fc_frame_header *fchdr) +{ + struct fc_std_logo *logo = (struct fc_std_logo *)fchdr; + uint32_t nport_id; + uint64_t nport_name; + struct fnic_tport_s *tport; + struct fnic *fnic = iport->fnic; + uint16_t oxid; + + nport_id = ntoh24(logo->els.fl_n_port_id); + nport_name = logo->els.fl_n_port_wwn; + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Process LOGO request from fcid: 0x%x", nport_id); + + if (iport->state != FNIC_IPORT_STATE_READY) { + FNIC_FCS_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "Dropping LOGO req from 0x%x in iport state: %d", + nport_id, iport->state); + return; + } + + tport = fnic_find_tport_by_fcid(iport, nport_id); + + if (!tport) { + /* We are not logged in with the nport, log and drop... */ + FNIC_FCS_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "Received LOGO from an nport not logged in: 0x%x(0x%llx)", + nport_id, nport_name); + return; + } + if (tport->fcid != nport_id) { + FNIC_FCS_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "Received LOGO with invalid target port fcid: 0x%x(0x%llx)", + nport_id, nport_name); + return; + } + if (tport->timer_pending) { + FNIC_FCS_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "tport fcid 0x%x: Canceling disc timer\n", + tport->fcid); + fnic_del_tport_timer_sync(fnic, tport); + tport->timer_pending = 0; + } + + /* got a logo in response to adisc to a target which has logged out */ + if (tport->state == FDLS_TGT_STATE_ADISC) { + tport->retry_counter = 0; + oxid = ntohs(tport->oxid_used); + fdls_free_tgt_oxid(iport, &iport->adisc_oxid_pool, oxid); + fdls_delete_tport(iport, tport); + fdls_send_logo_resp(iport, &logo->fchdr); + if ((iport->state == FNIC_IPORT_STATE_READY) + && (fdls_get_state(&iport->fabric) != FDLS_STATE_SEND_GPNFT) + && (fdls_get_state(&iport->fabric) != FDLS_STATE_RSCN_GPN_FT)) { + FNIC_FCS_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "Sending GPNFT in response to LOGO from Target:0x%x", + nport_id); + fdls_send_gpn_ft(iport, FDLS_STATE_SEND_GPNFT); + return; + } + } else { + fdls_delete_tport(iport, tport); + } + if (iport->state == FNIC_IPORT_STATE_READY) { + fdls_send_logo_resp(iport, &logo->fchdr); + if ((fdls_get_state(&iport->fabric) != FDLS_STATE_SEND_GPNFT) && + (fdls_get_state(&iport->fabric) != FDLS_STATE_RSCN_GPN_FT)) { + FNIC_FCS_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "Sending GPNFT in response to LOGO from Target:0x%x", + nport_id); + fdls_send_gpn_ft(iport, FDLS_STATE_SEND_GPNFT); + } + } +} + +static void +fdls_process_rscn(struct fnic_iport_s *iport, struct fc_frame_header *fchdr) +{ + struct fc_std_rscn 
*rscn; + struct fc_els_rscn_page *rscn_port = NULL; + int num_ports; + struct fnic_tport_s *tport, *next; + uint32_t nport_id; + uint8_t fcid[3]; + int newports = 0; + struct fnic_fdls_fabric_s *fdls = &iport->fabric; + struct fnic *fnic = iport->fnic; + uint16_t rscn_payload_len; + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FDLS process RSCN %p", iport); + + if (iport->state != FNIC_IPORT_STATE_READY) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FDLS RSCN received in state(%d). Dropping", + fdls_get_state(fdls)); + return; + } + + rscn = (struct fc_std_rscn *)fchdr; + rscn_payload_len = be16_to_cpu(rscn->els.rscn_plen); + + /* frame validation */ + if ((rscn_payload_len % 4 != 0) || (rscn_payload_len < 8) + || (rscn_payload_len > 1024) + || (rscn->els.rscn_page_len != 4)) { + num_ports = 0; + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "RSCN payload_len: 0x%x page_len: 0x%x", + rscn_payload_len, rscn->els.rscn_page_len); + /* if this happens then we need to send ADISC to all the tports. */ + list_for_each_entry_safe(tport, next, &iport->tport_list, links) { + if (tport->state == FDLS_TGT_STATE_READY) + tport->flags |= FNIC_FDLS_TPORT_SEND_ADISC; + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "RSCN for port id: 0x%x", tport->fcid); + } + } else { + num_ports = (rscn_payload_len - 4) / rscn->els.rscn_page_len; + rscn_port = (struct fc_els_rscn_page *)(rscn + 1); + } + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "RSCN received for num_ports: %d payload_len: %d page_len: %d ", + num_ports, rscn_payload_len, rscn->els.rscn_page_len); + + /* + * RSCN have at least one Port_ID page , but may not have any port_id + * in it. If no port_id is specified in the Port_ID page , we send + * ADISC to all the tports + */ + + while (num_ports) { + + memcpy(fcid, rscn_port->rscn_fid, 3); + + nport_id = ntoh24(fcid); + rscn_port++; + num_ports--; + /* if this happens then we need to send ADISC to all the tports. 
*/ + if (nport_id == 0) { + list_for_each_entry_safe(tport, next, &iport->tport_list, + links) { + if (tport->state == FDLS_TGT_STATE_READY) + tport->flags |= FNIC_FDLS_TPORT_SEND_ADISC; + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "RSCN for port id: 0x%x", tport->fcid); + } + break; + } + tport = fnic_find_tport_by_fcid(iport, nport_id); + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "RSCN port id list: 0x%x", nport_id); + + if (!tport) { + newports++; + continue; + } + if (tport->state == FDLS_TGT_STATE_READY) + tport->flags |= FNIC_FDLS_TPORT_SEND_ADISC; + } + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FDLS process RSCN sending GPN_FT: newports: %d", newports); + fdls_send_gpn_ft(iport, FDLS_STATE_RSCN_GPN_FT); + fdls_send_rscn_resp(iport, fchdr); +} + +static void +fdls_process_adisc_req(struct fnic_iport_s *iport, + struct fc_frame_header *fchdr) +{ + struct fc_std_els_adisc adisc_acc; + struct fc_std_els_adisc *adisc_req = (struct fc_std_els_adisc *)fchdr; + uint64_t frame_wwnn; + uint64_t frame_wwpn; + uint32_t tgt_fcid; + struct fnic_tport_s *tport; + uint8_t *fcid; + struct fc_std_els_rsp rjts_rsp; + uint16_t oxid; + struct fnic *fnic = iport->fnic; + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Process ADISC request %d", iport->fnic->fnic_num); + + fcid = FNIC_STD_GET_S_ID(fchdr); + tgt_fcid = ntoh24(fcid); + tport = fnic_find_tport_by_fcid(iport, tgt_fcid); + if (!tport) { + FNIC_FCS_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "tport for fcid: 0x%x not found. Dropping ADISC req.", + tgt_fcid); + return; + } + if (iport->state != FNIC_IPORT_STATE_READY) { + FNIC_FCS_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "Dropping ADISC req from fcid: 0x%x in iport state: %d", + tgt_fcid, iport->state); + return; + } + + frame_wwnn = ntohll(adisc_req->els.adisc_wwnn); + frame_wwpn = ntohll(adisc_req->els.adisc_wwpn); + + if ((frame_wwnn != tport->wwnn) || (frame_wwpn != tport->wwpn)) { + /* send reject */ + FNIC_FCS_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "ADISC req from fcid: 0x%x mismatch wwpn: 0x%llx wwnn: 0x%llx", + tgt_fcid, frame_wwpn, frame_wwnn); + FNIC_FCS_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "local tport wwpn: 0x%llx wwnn: 0x%llx. 
Sending RJT", + tport->wwpn, tport->wwnn); + + memcpy(&rjts_rsp, &fnic_std_els_rjt, + sizeof(struct fc_std_els_rsp)); + + rjts_rsp.u.rej.er_reason = 0x03; /* logical error */ + rjts_rsp.u.rej.er_explan = 0x1E; /* N_port login required */ + rjts_rsp.u.rej.er_vendor = 0x0; + FNIC_STD_SET_S_ID((&rjts_rsp.fchdr), fchdr->fh_d_id); + FNIC_STD_SET_D_ID((&rjts_rsp.fchdr), fchdr->fh_s_id); + oxid = FNIC_STD_GET_OX_ID(fchdr); + FNIC_STD_SET_OX_ID((&rjts_rsp.fchdr), oxid); + FNIC_STD_SET_RX_ID((&rjts_rsp.fchdr), FNIC_UNASSIGNED_RXID); + + fnic_send_fcoe_frame(iport, &rjts_rsp, FC_ELS_RSP_REJ_SIZE); + return; + } + memset(&adisc_acc.fchdr, 0, sizeof(struct fc_frame_header)); + FNIC_STD_SET_S_ID(&adisc_acc.fchdr, fchdr->fh_d_id); + FNIC_STD_SET_D_ID(&adisc_acc.fchdr, fchdr->fh_s_id); + FNIC_STD_SET_F_CTL(&adisc_acc.fchdr, FNIC_ELS_REP_FCTL << 16); + FNIC_STD_SET_R_CTL(&adisc_acc.fchdr, FC_RCTL_ELS_REP); + FNIC_STD_SET_TYPE(&adisc_acc.fchdr, FC_TYPE_ELS); + oxid = FNIC_STD_GET_OX_ID(fchdr); + FNIC_STD_SET_OX_ID(&adisc_acc.fchdr, oxid); + FNIC_STD_SET_RX_ID(&adisc_acc.fchdr, FNIC_UNASSIGNED_RXID); + adisc_acc.els.adisc_cmd = ELS_LS_ACC; + + FNIC_STD_SET_NPORT_NAME(&adisc_acc.els.adisc_wwpn, + le64_to_cpu(iport->wwpn)); + FNIC_STD_SET_NODE_NAME(&adisc_acc.els.adisc_wwnn, + le64_to_cpu(iport->wwnn)); + memcpy(adisc_acc.els.adisc_port_id, fchdr->fh_d_id, 3); + + fnic_send_fcoe_frame(iport, &adisc_acc, + sizeof(struct fc_std_els_adisc)); +} + /* * Performs a validation for all FCOE frames and return the frame type */ @@ -3312,8 +3860,34 @@ void fnic_fdls_recv_frame(struct fnic_iport_s *iport, void *rx_frame, if (fdls_is_oxid_in_fabric_range(oxid) && (iport->fabric.flags & FNIC_FDLS_FABRIC_ABORT_ISSUED)) { fdls_process_fabric_abts_rsp(iport, fchdr); - } else + } else { fdls_process_tgt_abts_rsp(iport, fchdr); + } + break; + case FNIC_BLS_ABTS_REQ: + fdls_process_abts_req(iport, fchdr); + break; + case FNIC_ELS_UNSUPPORTED_REQ: + fdls_process_unsupported_els_req(iport, fchdr); + break; + case FNIC_ELS_PLOGI_REQ: + fdls_process_plogi_req(iport, fchdr); + break; + case FNIC_ELS_RSCN_REQ: + fdls_process_rscn(iport, fchdr); + break; + case FNIC_ELS_LOGO_REQ: + fdls_process_logo_req(iport, fchdr); + break; + case FNIC_ELS_RRQ: + case FNIC_ELS_ECHO_REQ: + fdls_process_els_req(iport, fchdr, len); + break; + case FNIC_ELS_ADISC: + fdls_process_adisc_req(iport, fchdr); + break; + case FNIC_ELS_RLS: + fdls_process_rls_req(iport, fchdr); break; default: FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, From patchwork Wed Oct 2 18:24:03 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Karan Tilak Kumar X-Patchwork-Id: 832358 Received: from rcdn-iport-9.cisco.com (rcdn-iport-9.cisco.com [173.37.86.80]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6E5451D0BA2; Wed, 2 Oct 2024 18:28:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=173.37.86.80 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727893688; cv=none; b=cXso8wWjpASmYP0kt9LrMqteHAwOqmI9mDBDkq+3Knv7U+D+oUsrAC+wzhw9SydhbEoaL4yowNK+6C7UFvfppenU/AaJSxFlwcKa8dP0NEeI9bqPvDbcN5hGvMv72wxxD1omFIT0J5x5meDT+xGYYqxxhOV+6iG/Q7les8aLbhM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727893688; c=relaxed/simple; bh=w06lTPjtojHYkFih0oIMiOXL65GdLgBac/4VlWsxFXs=; 
h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=BKAMLy0xFpg+oEHYAGkw0o78IjnGtWvFUDAH8AwE+t0HjWrzERrg4jtJ4uVsO5PO2mil+2gqZkZX3rOyNe75HJeZFVTJHBI2GdNTInOKSbj/2hnrj/3tTntzvCmYYd1DoAbj1K+oO1usisnLzW1fyGdKenukH9VbAQd9cDhccQk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=cisco.com; spf=pass smtp.mailfrom=cisco.com; dkim=pass (1024-bit key) header.d=cisco.com header.i=@cisco.com header.b=SOgUvCXJ; arc=none smtp.client-ip=173.37.86.80 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=cisco.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=cisco.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=cisco.com header.i=@cisco.com header.b="SOgUvCXJ" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cisco.com; i=@cisco.com; l=72571; q=dns/txt; s=iport; t=1727893684; x=1729103284; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=YTE6GCSiXf+PrHtzz1K+UME7sHo9xlJnVGQeCZ0BhyQ=; b=SOgUvCXJCCEov/8Ze/zqfuHlZzhUxUlwfQocehCLSPBZ+9i1JTcVoZNM w/mqy3ZA/nf37lyVxpZB+xM7OdqSWwsELFTSoTGud8VhQ6lWapqO5nx9j OE1A8wtiSi/NhVBgXNJP0eMfSG7PwyAgB2bRVl4L+DP+D11Ccqc3/4aK0 E=; X-CSE-ConnectionGUID: d/09zUfOQ0uNL1Oqb4yyCw== X-CSE-MsgGUID: GjWXV/8RTOmwM16Q82iX0g== X-IronPort-AV: E=Sophos;i="6.11,172,1725321600"; d="scan'208";a="268408872" Received: from rcdn-core-8.cisco.com ([173.37.93.144]) by rcdn-iport-9.cisco.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Oct 2024 18:27:57 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by rcdn-core-8.cisco.com (8.15.2/8.15.2) with ESMTPSA id 492IOHQq009807 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Wed, 2 Oct 2024 18:27:56 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, aeasi@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar , kernel test robot Subject: [PATCH v4 07/14] scsi: fnic: Add and integrate support for FIP Date: Wed, 2 Oct 2024 11:24:03 -0700 Message-Id: <20241002182410.68093-8-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20241002182410.68093-1-kartilak@cisco.com> References: <20241002182410.68093-1-kartilak@cisco.com> Precedence: bulk X-Mailing-List: linux-scsi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: rcdn-core-8.cisco.com Add and integrate support for the FCoE Initialization Protocol (FIP). This protocol will be exercised on Cisco UCS rack servers. Add support to specifically print FIP-related debug messages. Replace existing definitions to handle the new data structures. Clean up old and obsolete definitions. Reported-by: kernel test robot Closes: https://lore.kernel.org/oe-kbuild-all/202409291955.FcMZfNSt-lkp@intel.com/ Reviewed-by: Sesidhar Baddela Signed-off-by: Gian Carlo Boffa Signed-off-by: Arulprabhu Ponnusamy Signed-off-by: Arun Easi Signed-off-by: Karan Tilak Kumar --- Changes between v3 and v4: Fix kernel test robot warnings. Changes between v2 and v3: Incorporate review comments from Hannes: Replace redundant definitions with standard definitions. 
Incorporate review comments from John: Replace GFP_ATOMIC with GFP_KERNEL where applicable. Changes between v1 and v2: Incorporate review comments from Hannes from other patches: Use the correct kernel-doc format. Replace htonll() with get_unaligned_be64(). Replace definitions with standard definitions from fc_els.h. Incorporate review comments from John: Remove unreferenced variable. Use standard definitions of true and false. Replace if else clauses with switch statement. Use kzalloc instead of kmalloc. --- drivers/scsi/fnic/Makefile | 1 + drivers/scsi/fnic/fip.c | 854 ++++++++++++++++++++++++++++++++ drivers/scsi/fnic/fip.h | 345 +++++++++++++ drivers/scsi/fnic/fnic.h | 23 +- drivers/scsi/fnic/fnic_fcs.c | 883 ++++++---------------------------- drivers/scsi/fnic/fnic_main.c | 47 +- 6 files changed, 1379 insertions(+), 774 deletions(-) create mode 100644 drivers/scsi/fnic/fip.c create mode 100644 drivers/scsi/fnic/fip.h diff --git a/drivers/scsi/fnic/Makefile b/drivers/scsi/fnic/Makefile index af156c69da0c..c025e875009e 100644 --- a/drivers/scsi/fnic/Makefile +++ b/drivers/scsi/fnic/Makefile @@ -2,6 +2,7 @@ obj-$(CONFIG_FCOE_FNIC) += fnic.o fnic-y := \ + fip.o\ fnic_attrs.o \ fnic_isr.o \ fnic_main.o \ diff --git a/drivers/scsi/fnic/fip.c b/drivers/scsi/fnic/fip.c new file mode 100644 index 000000000000..870b8e64a75c --- /dev/null +++ b/drivers/scsi/fnic/fip.c @@ -0,0 +1,854 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright 2008 Cisco Systems, Inc. All rights reserved. + * Copyright 2007 Nuova Systems, Inc. All rights reserved. + */ +#include "fnic.h" +#include "fip.h" +#include + +extern struct workqueue_struct *fnic_fip_queue; + +#define FIP_FNIC_RESET_WAIT_COUNT 15 + +/** + * fnic_fcoe_reset_vlans - Free up the list of discovered vlans + * @fnic: Handle to fnic driver instance + */ +void fnic_fcoe_reset_vlans(struct fnic *fnic) +{ + unsigned long flags; + struct fcoe_vlan *vlan, *next; + + spin_lock_irqsave(&fnic->vlans_lock, flags); + if (!list_empty(&fnic->vlan_list)) { + list_for_each_entry_safe(vlan, next, &fnic->vlan_list, list) { + list_del(&vlan->list); + kfree(vlan); + } + } + + spin_unlock_irqrestore(&fnic->vlans_lock, flags); + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Reset vlan complete\n"); +} + +/** + * fnic_fcoe_send_vlan_req - Send FIP vlan request to all FCFs MAC + * @fnic: Handle to fnic driver instance + */ +void fnic_fcoe_send_vlan_req(struct fnic *fnic) +{ + struct fnic_iport_s *iport = &fnic->iport; + struct fnic_stats *fnic_stats = &fnic->fnic_stats; + u64 vlan_tov; + + int fr_len; + struct fip_vlan_req_s vlan_req; + + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Enter send vlan req\n"); + fnic_fcoe_reset_vlans(fnic); + + fnic->set_vlan(fnic, 0); + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "set vlan done\n"); + + fr_len = sizeof(struct fip_vlan_req_s); + + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "got MAC 0x%x:%x:%x:%x:%x:%x\n", iport->hwmac[0], + iport->hwmac[1], iport->hwmac[2], iport->hwmac[3], + iport->hwmac[4], iport->hwmac[5]); + + memcpy(&vlan_req, &fip_vlan_req_tmpl, fr_len); + memcpy(vlan_req.eth.smac, iport->hwmac, ETH_ALEN); + memcpy(vlan_req.mac_desc.mac, iport->hwmac, ETH_ALEN); + + atomic64_inc(&fnic_stats->vlan_stats.vlan_disc_reqs); + + iport->fip.state = FDLS_FIP_VLAN_DISCOVERY_STARTED; + + fnic_send_fip_frame(iport, &vlan_req, fr_len); + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "vlan req sent\n"); + + vlan_tov = jiffies + 
msecs_to_jiffies(FCOE_CTLR_FIPVLAN_TOV); + mod_timer(&fnic->retry_fip_timer, round_jiffies(vlan_tov)); + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fip timer set\n"); +} + +/** + * fnic_fcoe_process_vlan_resp - Processes the vlan response from one FCF and + * populates VLAN list. + * @fnic: Handle to fnic driver instance + * @fiph: Received FIP frame + * + * Will wait for responses from multiple FCFs until timeout. + */ +void fnic_fcoe_process_vlan_resp(struct fnic *fnic, struct fip_header_s *fiph) +{ + struct fip_vlan_notif_s *vlan_notif = (struct fip_vlan_notif_s *)fiph; + + struct fnic_stats *fnic_stats = &fnic->fnic_stats; + u16 vid; + int num_vlan = 0; + int cur_desc, desc_len; + struct fcoe_vlan *vlan; + struct fip_vlan_desc_s *vlan_desc; + unsigned long flags; + + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic 0x%p got vlan resp\n", fnic); + + desc_len = ntohs(vlan_notif->fip.desc_len); + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "desc_len %d\n", desc_len); + + spin_lock_irqsave(&fnic->vlans_lock, flags); + + cur_desc = 0; + while (desc_len > 0) { + vlan_desc = + (struct fip_vlan_desc_s *)(((char *)vlan_notif->vlans_desc) + + cur_desc * 4); + + if (vlan_desc->type == FIP_TYPE_VLAN) { + if (vlan_desc->len != 1) { + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, + fnic->fnic_num, + "Invalid descriptor length(%x) in VLan response\n", + vlan_desc->len); + + } + num_vlan++; + vid = ntohs(vlan_desc->vlan); + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, + fnic->fnic_num, + "process_vlan_resp: FIP VLAN %d\n", vid); + vlan = kzalloc(sizeof(*vlan), GFP_KERNEL); + + if (!vlan) { + /* retry from timer */ + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, + fnic->fnic_num, + "Mem Alloc failure\n"); + spin_unlock_irqrestore(&fnic->vlans_lock, + flags); + goto out; + } + vlan->vid = vid & 0x0fff; + vlan->state = FIP_VLAN_AVAIL; + list_add_tail(&vlan->list, &fnic->vlan_list); + break; + } else { + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, + fnic->fnic_num, + "Invalid descriptor type(%x) in VLan response\n", + vlan_desc->type); + /* + * Note : received a type=2 descriptor here i.e. FIP + * MAC Address Descriptor + */ + } + cur_desc += vlan_desc->len; + desc_len -= vlan_desc->len; + } + + /* any VLAN descriptors present ? 
*/ + if (num_vlan == 0) { + atomic64_inc(&fnic_stats->vlan_stats.resp_withno_vlanID); + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic 0x%p No VLAN descriptors in FIP VLAN response\n", + fnic); + } + + spin_unlock_irqrestore(&fnic->vlans_lock, flags); + + out: + return; +} + +/** + * fnic_fcoe_start_fcf_discovery - Start FIP FCF discovery in a selected vlan + * @fnic: Handle to fnic driver instance + */ +void fnic_fcoe_start_fcf_discovery(struct fnic *fnic) +{ + struct fnic_iport_s *iport = &fnic->iport; + u64 fcs_tov; + + int fr_len; + struct fip_discovery_s disc_sol; + + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic 0x%p start fcf discovery\n", fnic); + fr_len = sizeof(struct fip_discovery_s); + memset(iport->selected_fcf.fcf_mac, 0, ETH_ALEN); + + memcpy(&disc_sol, &fip_discovery_tmpl, fr_len); + memcpy(disc_sol.eth.smac, iport->hwmac, ETH_ALEN); + memcpy(disc_sol.mac_desc.mac, iport->hwmac, ETH_ALEN); + iport->selected_fcf.fcf_priority = 0xFF; + + disc_sol.name_desc.name = get_unaligned_be64(&iport->wwnn); + fnic_send_fip_frame(iport, &disc_sol, fr_len); + + iport->fip.state = FDLS_FIP_FCF_DISCOVERY_STARTED; + + fcs_tov = jiffies + msecs_to_jiffies(FCOE_CTLR_FCS_TOV); + mod_timer(&fnic->retry_fip_timer, round_jiffies(fcs_tov)); + + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic 0x%p Started FCF discovery", fnic); + +} + +/** + * fnic_fcoe_fip_discovery_resp - Processes FCF advertisements. + * @fnic: Handle to fnic driver instance + * @fiph: Received frame + * + * FCF advertisements can be: + * solicited - Sent in response of a discover FCF FIP request + * Store the information of the FCF with highest priority. + * Wait until timeout in case of multiple FCFs. + * + * unsolicited - Sent periodically by the FCF for keep alive. + * If FLOGI is in progress or completed and the advertisement is + * received by our selected FCF, refresh the keep alive timer. 
+ */ +void fnic_fcoe_fip_discovery_resp(struct fnic *fnic, struct fip_header_s *fiph) +{ + struct fnic_iport_s *iport = &fnic->iport; + struct fip_disc_adv_s *disc_adv = (struct fip_disc_adv_s *)fiph; + u64 fcs_ka_tov; + u64 tov; + int fka_has_changed; + + switch (iport->fip.state) { + case FDLS_FIP_FCF_DISCOVERY_STARTED: + if (ntohs(disc_adv->fip.flags) & FIP_FLAG_S) { + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, + fnic->fnic_num, + "fnic 0x%p Solicited adv\n", fnic); + + if ((disc_adv->prio_desc.priority < + iport->selected_fcf.fcf_priority) + && (ntohs(disc_adv->fip.flags) & FIP_FLAG_A)) { + + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, + fnic->fnic_num, + "fnic 0x%p FCF Available\n", fnic); + memcpy(iport->selected_fcf.fcf_mac, + disc_adv->mac_desc.mac, ETH_ALEN); + iport->selected_fcf.fcf_priority = + disc_adv->prio_desc.priority; + iport->selected_fcf.fka_adv_period = + ntohl(disc_adv->fka_adv_desc.fka_adv); + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, + fnic->fnic_num, "adv time %d", + iport->selected_fcf.fka_adv_period); + iport->selected_fcf.ka_disabled = + (disc_adv->fka_adv_desc.rsvd_D & 1); + } + } + break; + case FDLS_FIP_FLOGI_STARTED: + case FDLS_FIP_FLOGI_COMPLETE: + if (!(ntohs(disc_adv->fip.flags) & FIP_FLAG_S)) { + /* same fcf */ + if (memcmp + (iport->selected_fcf.fcf_mac, + disc_adv->mac_desc.mac, ETH_ALEN) == 0) { + if (iport->selected_fcf.fka_adv_period != + ntohl(disc_adv->fka_adv_desc.fka_adv)) { + iport->selected_fcf.fka_adv_period = + ntohl(disc_adv->fka_adv_desc.fka_adv); + FNIC_FIP_DBG(KERN_INFO, + fnic->lport->host, + fnic->fnic_num, + "change fka to %d", + iport->selected_fcf.fka_adv_period); + } + + fka_has_changed = + (iport->selected_fcf.ka_disabled == 1) + && ((disc_adv->fka_adv_desc.rsvd_D & 1) == + 0); + + iport->selected_fcf.ka_disabled = + (disc_adv->fka_adv_desc.rsvd_D & 1); + if (!((iport->selected_fcf.ka_disabled) + || (iport->selected_fcf.fka_adv_period == + 0))) { + + fcs_ka_tov = jiffies + + 3 + * + msecs_to_jiffies(iport->selected_fcf.fka_adv_period); + mod_timer(&fnic->fcs_ka_timer, + round_jiffies(fcs_ka_tov)); + } else { + if (timer_pending(&fnic->fcs_ka_timer)) + del_timer_sync(&fnic->fcs_ka_timer); + } + + if (fka_has_changed) { + if (iport->selected_fcf.fka_adv_period != 0) { + tov = + jiffies + + msecs_to_jiffies( + iport->selected_fcf.fka_adv_period); + mod_timer(&fnic->enode_ka_timer, + round_jiffies(tov)); + + tov = + jiffies + + msecs_to_jiffies + (FCOE_CTLR_VN_KA_TOV); + mod_timer(&fnic->vn_ka_timer, + round_jiffies(tov)); + } + } + } + } + break; + default: + break; + } /* end switch */ +} + +/** + * fnic_fcoe_start_flogi - Send FIP FLOGI to the selected FCF + * @fnic: Handle to fnic driver instance + */ +void fnic_fcoe_start_flogi(struct fnic *fnic) +{ + struct fnic_iport_s *iport = &fnic->iport; + + int fr_len; + struct fip_flogi_s flogi_req; + u64 flogi_tov; + uint16_t oxid; + struct fc_frame_header *fchdr = &flogi_req.flogi_desc.flogi.fchdr; + + fr_len = sizeof(struct fip_flogi_s); + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic 0x%p Start fip FLOGI\n", fnic); + + memcpy(&flogi_req, &fip_flogi_tmpl, fr_len); + memcpy(flogi_req.eth.smac, iport->hwmac, ETH_ALEN); + if (iport->usefip) + memcpy(flogi_req.eth.dmac, iport->selected_fcf.fcf_mac, + ETH_ALEN); + + oxid = fdls_alloc_fabric_oxid(iport, &iport->fabric_oxid_pool, + FNIC_FABRIC_FLOGI_RSP); + FNIC_STD_SET_OX_ID(fchdr, htons(oxid)); + + flogi_req.flogi_desc.flogi.els.fl_wwpn = + get_unaligned_be64(&iport->wwpn); + flogi_req.flogi_desc.flogi.els.fl_wwnn = + 
get_unaligned_be64(&iport->wwnn); + + fnic_send_fip_frame(iport, &flogi_req, fr_len); + iport->fip.flogi_retry++; + + iport->fip.state = FDLS_FIP_FLOGI_STARTED; + flogi_tov = jiffies + msecs_to_jiffies(fnic->config.flogi_timeout); + mod_timer(&fnic->retry_fip_timer, round_jiffies(flogi_tov)); +} + +/** + * fnic_fcoe_process_flogi_resp - Processes FLOGI response from FCF. + * @fnic: Handle to fnic driver instance + * @fiph: Received frame + * + * If successful save assigned fc_id and MAC, program firmware + * and start fdls discovery, else restart vlan discovery. + */ +void fnic_fcoe_process_flogi_resp(struct fnic *fnic, struct fip_header_s *fiph) +{ + struct fnic_iport_s *iport = &fnic->iport; + struct fip_flogi_rsp_s *flogi_rsp = (struct fip_flogi_rsp_s *)fiph; + int desc_len; + uint32_t s_id; + int exp_rsp_type; + + struct fnic_stats *fnic_stats = &fnic->fnic_stats; + struct fc_frame_header *fchdr = &flogi_rsp->rsp_desc.flogi.fchdr; + + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic 0x%p FIP FLOGI rsp\n", fnic); + desc_len = ntohs(flogi_rsp->fip.desc_len); + if (desc_len != 38) { + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Invalid Descriptor List len (%x). Dropping frame\n", + desc_len); + return; + } + + if (!((flogi_rsp->rsp_desc.type == 7) + && (flogi_rsp->rsp_desc.len == 36)) + || !((flogi_rsp->mac_desc.type == 2) + && (flogi_rsp->mac_desc.len == 2))) { + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Dropping frame invalid type and len mix\n"); + return; + } + + exp_rsp_type = fnic_fdls_expected_rsp(iport, ntohs(fchdr->fh_ox_id)); + + s_id = ntoh24(fchdr->fh_s_id); + if ((fchdr->fh_f_ctl[0] != 0x98) + || (fchdr->fh_r_ctl != 0x23) + || (s_id != 0xFFFFFE) + || (exp_rsp_type != FNIC_FABRIC_FLOGI_RSP) + || (fchdr->fh_type != 0x01)) { + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Dropping invalid frame: s_id %x F %x R %x t %x OX_ID %x\n", + s_id, fchdr->fh_f_ctl[0], fchdr->fh_r_ctl, + fchdr->fh_type, fchdr->fh_ox_id); + return; + } + + if (iport->fip.state == FDLS_FIP_FLOGI_STARTED) { + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic 0x%p rsp for pending FLOGI\n", fnic); + + del_timer_sync(&fnic->retry_fip_timer); + + if ((ntohs(flogi_rsp->fip.desc_len) == 38) + && (flogi_rsp->rsp_desc.flogi.els.fl_cmd == ELS_LS_ACC)) { + + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, + fnic->fnic_num, + "fnic 0x%p FLOGI success\n", fnic); + memcpy(iport->fpma, flogi_rsp->mac_desc.mac, ETH_ALEN); + iport->fcid = + ntoh24(flogi_rsp->rsp_desc.flogi.fchdr.fh_d_id); + + iport->r_a_tov = + ntohl(flogi_rsp->rsp_desc.flogi.els.fl_csp.sp_r_a_tov); + iport->e_d_tov = + ntohl(flogi_rsp->rsp_desc.flogi.els.fl_csp.sp_e_d_tov); + memcpy(fnic->iport.fcfmac, iport->selected_fcf.fcf_mac, + ETH_ALEN); + vnic_dev_add_addr(fnic->vdev, flogi_rsp->mac_desc.mac); + + if (fnic_fdls_register_portid(iport, iport->fcid, NULL) + != 0) { + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, + fnic->fnic_num, + "fnic 0x%p flogi registration failed\n", + fnic); + return; + } + + iport->fip.state = FDLS_FIP_FLOGI_COMPLETE; + iport->state = FNIC_IPORT_STATE_FABRIC_DISC; + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, + fnic->fnic_num, "iport->state:%d\n", + iport->state); + if (!((iport->selected_fcf.ka_disabled) + || (iport->selected_fcf.fka_adv_period == 0))) { + u64 tov; + + tov = jiffies + + + msecs_to_jiffies(iport->selected_fcf.fka_adv_period); + mod_timer(&fnic->enode_ka_timer, + round_jiffies(tov)); + + tov = + jiffies + + 
msecs_to_jiffies(FCOE_CTLR_VN_KA_TOV); + mod_timer(&fnic->vn_ka_timer, + round_jiffies(tov)); + + } + } else { + /* + * If there's FLOGI rejects - clear all + * fcf's & restart from scratch + */ + atomic64_inc(&fnic_stats->vlan_stats.flogi_rejects); + /* start FCoE VLAN discovery */ + fnic_fcoe_send_vlan_req(fnic); + + iport->fip.state = FDLS_FIP_VLAN_DISCOVERY_STARTED; + } + } +} + +/** + * fnic_common_fip_cleanup - Clean up FCF info and timers in case of + * link down/CVL + * @fnic: Handle to fnic driver instance + */ +void fnic_common_fip_cleanup(struct fnic *fnic) +{ + + struct fnic_iport_s *iport = &fnic->iport; + + if (!iport->usefip) + return; + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic 0x%p fip cleanup\n", fnic); + + iport->fip.state = FDLS_FIP_INIT; + + del_timer_sync(&fnic->retry_fip_timer); + del_timer_sync(&fnic->fcs_ka_timer); + del_timer_sync(&fnic->enode_ka_timer); + del_timer_sync(&fnic->vn_ka_timer); + + if (!is_zero_ether_addr(iport->fpma)) + vnic_dev_del_addr(fnic->vdev, iport->fpma); + + memset(iport->fpma, 0, ETH_ALEN); + iport->fcid = 0; + iport->r_a_tov = 0; + iport->e_d_tov = 0; + memset(fnic->iport.fcfmac, 0, ETH_ALEN); + memset(iport->selected_fcf.fcf_mac, 0, ETH_ALEN); + iport->selected_fcf.fcf_priority = 0; + iport->selected_fcf.fka_adv_period = 0; + iport->selected_fcf.ka_disabled = 0; + + fnic_fcoe_reset_vlans(fnic); +} + +/** + * fnic_fcoe_process_cvl - Processes Clear Virtual Link from FCF. + * @fnic: Handle to fnic driver instance + * @fiph: Received frame + * + * Verify that cvl is received from our current FCF for our assigned MAC + * and clean up and restart the vlan discovery. + */ +void fnic_fcoe_process_cvl(struct fnic *fnic, struct fip_header_s *fiph) +{ + struct fnic_iport_s *iport = &fnic->iport; + struct fip_cvl_s *cvl_msg = (struct fip_cvl_s *)fiph; + int i; + int found = false; + + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic 0x%p clear virtual link handler\n", fnic); + + if (!((cvl_msg->fcf_mac_desc.type == 2) + && (cvl_msg->fcf_mac_desc.len == 2)) + || !((cvl_msg->name_desc.type == 4) + && (cvl_msg->name_desc.len == 3))) { + + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "invalid mix: ft %x fl %x ndt %x ndl %x", + cvl_msg->fcf_mac_desc.type, + cvl_msg->fcf_mac_desc.len, cvl_msg->name_desc.type, + cvl_msg->name_desc.len); + } + + if (memcmp + (iport->selected_fcf.fcf_mac, cvl_msg->fcf_mac_desc.mac, ETH_ALEN) + == 0) { + for (i = 0; i < ((ntohs(fiph->desc_len) / 5) - 1); i++) { + if (!((cvl_msg->vn_ports_desc[i].type == 11) + && (cvl_msg->vn_ports_desc[i].len == 5))) { + + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, + fnic->fnic_num, + "Invalid type and len mix type: %d len: %d\n", + cvl_msg->vn_ports_desc[i].type, + cvl_msg->vn_ports_desc[i].len); + } + if (memcmp + (iport->fpma, cvl_msg->vn_ports_desc[i].vn_port_mac, + ETH_ALEN) == 0) { + found = true; + break; + } + } + if (!found) + return; + fnic_common_fip_cleanup(fnic); + + fnic_fcoe_send_vlan_req(fnic); + } +} + +/** + * fdls_fip_recv_frame - Demultiplexer for FIP frames + * @fnic: Handle to fnic driver instance + * @frame: Received ethernet frame + */ +int fdls_fip_recv_frame(struct fnic *fnic, void *frame) +{ + struct eth_hdr_s *eth = (struct eth_hdr_s *)frame; + struct fip_header_s *fiph; + u16 protocol; + u8 sub; + + if (eth->eth_type == ntohs(FIP_ETH_TYPE)) { + fiph = (struct fip_header_s *)(eth + 1); + protocol = ntohs(fiph->protocol); + sub = ntohs(fiph->subcode); + + if (protocol == FIP_DISCOVERY && sub == 
FIP_SUBCODE_RESP) + fnic_fcoe_fip_discovery_resp(fnic, fiph); + else if (protocol == FIP_VLAN_DISC && sub == FIP_SUBCODE_RESP) + fnic_fcoe_process_vlan_resp(fnic, fiph); + else if (protocol == FIP_KA_CVL && sub == FIP_SUBCODE_RESP) + fnic_fcoe_process_cvl(fnic, fiph); + else if (protocol == FIP_FLOGI && sub == FIP_SUBCODE_RESP) + fnic_fcoe_process_flogi_resp(fnic, fiph); + + /* Return true if the frame was a FIP frame */ + return true; + } + return false; +} + +void fnic_work_on_fip_timer(struct work_struct *work) +{ + struct fnic *fnic = container_of(work, struct fnic, fip_timer_work); + struct fnic_iport_s *iport = &fnic->iport; + + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FIP timeout\n"); + + if (iport->fip.state == FDLS_FIP_VLAN_DISCOVERY_STARTED) { + fnic_vlan_discovery_timeout(fnic); + } else if (iport->fip.state == FDLS_FIP_FCF_DISCOVERY_STARTED) { + u8 zmac[ETH_ALEN] = { 0, 0, 0, 0, 0, 0 }; + + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FCF Discovery timeout\n"); + if (memcmp(iport->selected_fcf.fcf_mac, zmac, ETH_ALEN) != 0) { + + if (iport->flags & FNIC_FIRST_LINK_UP) + iport->flags &= ~FNIC_FIRST_LINK_UP; + + fnic_fcoe_start_flogi(fnic); + if (!((iport->selected_fcf.ka_disabled) + || (iport->selected_fcf.fka_adv_period == 0))) { + u64 fcf_tov; + + fcf_tov = jiffies + + 3 + * + msecs_to_jiffies(iport->selected_fcf.fka_adv_period); + mod_timer(&fnic->fcs_ka_timer, + round_jiffies(fcf_tov)); + } + } else { + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, + fnic->fnic_num, "FCF Discovery timeout\n"); + fnic_vlan_discovery_timeout(fnic); + } + } else if (iport->fip.state == FDLS_FIP_FLOGI_STARTED) { + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FLOGI timeout\n"); + if (iport->fip.flogi_retry < fnic->config.flogi_retries) + fnic_fcoe_start_flogi(fnic); + else + fnic_vlan_discovery_timeout(fnic); + } +} + +/** + * fnic_handle_fip_timer - Timeout handler for FIP discover phase. + * @t: Handle to the timer list + * + * Based on the current state, start next phase or restart discovery. + */ +void fnic_handle_fip_timer(struct timer_list *t) +{ + struct fnic *fnic = from_timer(fnic, t, retry_fip_timer); + + INIT_WORK(&fnic->fip_timer_work, fnic_work_on_fip_timer); + queue_work(fnic_fip_queue, &fnic->fip_timer_work); +} + +/** + * fnic_handle_enode_ka_timer - FIP node keep alive. + * @t: Handle to the timer list + */ +void fnic_handle_enode_ka_timer(struct timer_list *t) +{ + struct fnic *fnic = from_timer(fnic, t, enode_ka_timer); + + struct fnic_iport_s *iport = &fnic->iport; + int fr_len; + struct fip_enode_ka_s enode_ka; + u64 enode_ka_tov; + + if (iport->fip.state != FDLS_FIP_FLOGI_COMPLETE) + return; + + if ((iport->selected_fcf.ka_disabled) + || (iport->selected_fcf.fka_adv_period == 0)) { + return; + } + + fr_len = sizeof(struct fip_enode_ka_s); + + memcpy(&enode_ka, &fip_enode_ka_tmpl, fr_len); + memcpy(enode_ka.eth.smac, iport->hwmac, ETH_ALEN); + memcpy(enode_ka.eth.dmac, iport->selected_fcf.fcf_mac, ETH_ALEN); + memcpy(enode_ka.mac_desc.mac, iport->hwmac, ETH_ALEN); + + fnic_send_fip_frame(iport, &enode_ka, fr_len); + enode_ka_tov = jiffies + + msecs_to_jiffies(iport->selected_fcf.fka_adv_period); + mod_timer(&fnic->enode_ka_timer, round_jiffies(enode_ka_tov)); +} + +/** + * fnic_handle_vn_ka_timer - FIP virtual port keep alive. 
+ * @t: Handle to the timer list + */ +void fnic_handle_vn_ka_timer(struct timer_list *t) +{ + struct fnic *fnic = from_timer(fnic, t, vn_ka_timer); + + struct fnic_iport_s *iport = &fnic->iport; + int fr_len; + struct fip_vn_port_ka_s vn_port_ka; + u64 vn_ka_tov; + uint8_t fcid[3]; + + if (iport->fip.state != FDLS_FIP_FLOGI_COMPLETE) + return; + + if ((iport->selected_fcf.ka_disabled) + || (iport->selected_fcf.fka_adv_period == 0)) { + return; + } + + fr_len = sizeof(struct fip_vn_port_ka_s); + + memcpy(&vn_port_ka, &fip_vn_port_ka_tmpl, fr_len); + memcpy(vn_port_ka.eth.smac, iport->fpma, ETH_ALEN); + memcpy(vn_port_ka.eth.dmac, iport->selected_fcf.fcf_mac, ETH_ALEN); + memcpy(vn_port_ka.mac_desc.mac, iport->hwmac, ETH_ALEN); + memcpy(vn_port_ka.vn_port_desc.vn_port_mac, iport->fpma, ETH_ALEN); + hton24(fcid, iport->fcid); + memcpy(vn_port_ka.vn_port_desc.vn_port_id, fcid, 3); + vn_port_ka.vn_port_desc.vn_port_name = get_unaligned_be64(&iport->wwpn); + + fnic_send_fip_frame(iport, &vn_port_ka, fr_len); + vn_ka_tov = jiffies + msecs_to_jiffies(FCOE_CTLR_VN_KA_TOV); + mod_timer(&fnic->vn_ka_timer, round_jiffies(vn_ka_tov)); +} + +/** + * fnic_vlan_discovery_timeout - Handle vlan discovery timeout + * @fnic: Handle to fnic driver instance + * + * End of VLAN discovery or FCF discovery time window. + * Start the FCF discovery if VLAN was never used. + */ +void fnic_vlan_discovery_timeout(struct fnic *fnic) +{ + struct fcoe_vlan *vlan; + struct fnic_iport_s *iport = &fnic->iport; + struct fnic_stats *fnic_stats = &fnic->fnic_stats; + unsigned long flags; + + spin_lock_irqsave(&fnic->fnic_lock, flags); + if (fnic->stop_rx_link_events) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + return; + } + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + + if (!iport->usefip) + return; + + spin_lock_irqsave(&fnic->vlans_lock, flags); + if (list_empty(&fnic->vlan_list)) { + /* no vlans available, try again */ + spin_unlock_irqrestore(&fnic->vlans_lock, flags); + fnic_fcoe_send_vlan_req(fnic); + return; + } + + vlan = list_first_entry(&fnic->vlan_list, struct fcoe_vlan, list); + + if (vlan->state == FIP_VLAN_SENT) { + if (vlan->sol_count >= FCOE_CTLR_MAX_SOL) { + /* + * no response on this vlan, remove from the list. + * Try the next vlan + */ + list_del(&vlan->list); + kfree(vlan); + vlan = NULL; + if (list_empty(&fnic->vlan_list)) { + /* we exhausted all vlans, restart vlan disc */ + spin_unlock_irqrestore(&fnic->vlans_lock, + flags); + fnic_fcoe_send_vlan_req(fnic); + return; + } + /* check the next vlan */ + vlan = + list_first_entry(&fnic->vlan_list, struct fcoe_vlan, + list); + + fnic->set_vlan(fnic, vlan->vid); + vlan->state = FIP_VLAN_SENT; /* sent now */ + + } + atomic64_inc(&fnic_stats->vlan_stats.sol_expiry_count); + + } else { + fnic->set_vlan(fnic, vlan->vid); + vlan->state = FIP_VLAN_SENT; /* sent now */ + } + vlan->sol_count++; + spin_unlock_irqrestore(&fnic->vlans_lock, flags); + fnic_fcoe_start_fcf_discovery(fnic); +} + +/** + * fnic_work_on_fcs_ka_timer - Handle work on FCS keep alive timer. + * @work: the work queue to be serviced + * + * Finish handling fcs_ka_timer in process context. + * Clean up, bring the link down, and restart all FIP discovery. 
+ */ +void fnic_work_on_fcs_ka_timer(struct work_struct *work) +{ + struct fnic + *fnic = container_of(work, struct fnic, fip_timer_work); + struct fnic_iport_s *iport = &fnic->iport; + + FNIC_FIP_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic 0x%p fcs ka timeout\n", fnic); + + fnic_common_fip_cleanup(fnic); + spin_lock_irqsave(&fnic->fnic_lock, fnic->lock_flags); + iport->state = FNIC_IPORT_STATE_FIP; + spin_unlock_irqrestore(&fnic->fnic_lock, fnic->lock_flags); + + fnic_fcoe_send_vlan_req(fnic); +} + +/** + * fnic_handle_fcs_ka_timer - Handle FCS keep alive timer. + * @t: Handle to the timer list + * + * No keep alives received from FCF. Clean up, bring the link down + * and restart all the FIP discovery. + */ +void fnic_handle_fcs_ka_timer(struct timer_list *t) +{ + struct fnic *fnic = from_timer(fnic, t, fcs_ka_timer); + + INIT_WORK(&fnic->fip_timer_work, fnic_work_on_fcs_ka_timer); + queue_work(fnic_fip_queue, &fnic->fip_timer_work); +} diff --git a/drivers/scsi/fnic/fip.h b/drivers/scsi/fnic/fip.h new file mode 100644 index 000000000000..acc50ee79997 --- /dev/null +++ b/drivers/scsi/fnic/fip.h @@ -0,0 +1,345 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright 2008 Cisco Systems, Inc. All rights reserved. + * Copyright 2007 Nuova Systems, Inc. All rights reserved. + */ +#ifndef _FIP_H_ +#define _FIP_H_ + +#include "fdls_fc.h" +#include "fnic_fdls.h" + +#define FCOE_ALL_FCFS_MAC {0x01, 0x10, 0x18, 0x01, 0x00, 0x02} +#define FIP_ETH_TYPE 0x8914 + +#define FIP_ETH_TYPE_LE 0x1489 +#define FCOE_MAX_SIZE_LE 0x2E08 + +#define WWNN_LEN 8 + +#define FCOE_CTLR_FIPVLAN_TOV (3*1000) +#define FCOE_CTLR_FCS_TOV (3*1000) +#define FCOE_CTLR_VN_KA_TOV (90*1000) +#define FCOE_CTLR_MAX_SOL (5*1000) + +#define FIP_SUBCODE_REQ 1 +#define FIP_SUBCODE_RESP 2 + +#define FIP_FLAG_S 0x2 +#define FIP_FLAG_A 0x4 + +/* + * VLAN entry. + */ +struct fcoe_vlan { + struct list_head list; + uint16_t vid; /* vlan ID */ + uint16_t sol_count; /* no. 
of sols sent */ + uint16_t state; /* state */ +}; + +enum fdls_vlan_state_e { + FIP_VLAN_AVAIL, + FIP_VLAN_SENT +}; + +enum fdls_fip_state_e { + FDLS_FIP_INIT, + FDLS_FIP_VLAN_DISCOVERY_STARTED, + FDLS_FIP_FCF_DISCOVERY_STARTED, + FDLS_FIP_FLOGI_STARTED, + FDLS_FIP_FLOGI_COMPLETE, +}; + +enum fip_protocol_code_e { + FIP_DISCOVERY = 1, + FIP_FLOGI, + FIP_KA_CVL, + FIP_VLAN_DISC +}; + +struct eth_hdr_s { + uint8_t dmac[6]; + uint8_t smac[6]; + uint16_t eth_type; +}; + +struct fip_header_s { + uint32_t ver:16; + + uint32_t protocol:16; + uint32_t subcode:16; + + uint32_t desc_len:16; + uint32_t flags:16; +} __packed; + +enum fip_desc_type_e { + FIP_TYPE_PRIORITY = 1, + FIP_TYPE_MAC, + FIP_TYPE_FCMAP, + FIP_TYPE_NAME_ID, + FIP_TYPE_FABRIC, + FIP_TYPE_MAX_FCOE, + FIP_TYPE_FLOGI, + FIP_TYPE_FDISC, + FIP_TYPE_LOGO, + FIP_TYPE_ELP, + FIP_TYPE_VX_PORT, + FIP_TYPE_FKA_ADV, + FIP_TYPE_VENDOR_ID, + FIP_TYPE_VLAN +}; + +struct fip_mac_desc_s { + uint8_t type; + uint8_t len; + uint8_t mac[6]; +} __packed; + +struct fip_vlan_desc_s { + uint8_t type; + uint8_t len; + uint16_t vlan; +} __packed; + +struct fip_vlan_req_s { + struct eth_hdr_s eth; + struct fip_header_s fip; + struct fip_mac_desc_s mac_desc; +} __packed; + + /* + * Variables: + * eth.smac, mac_desc.mac + */ +struct fip_vlan_req_s fip_vlan_req_tmpl = { + .eth = {.dmac = FCOE_ALL_FCFS_MAC, + .eth_type = FIP_ETH_TYPE_LE}, + .fip = {.ver = 0x10, + .protocol = FIP_VLAN_DISC << 8, + .subcode = FIP_SUBCODE_REQ << 8, + .desc_len = 2 << 8}, + .mac_desc = {.type = FIP_TYPE_MAC, .len = 2} +}; + +struct fip_vlan_notif_s { + struct fip_header_s fip; + struct fip_vlan_desc_s vlans_desc[]; +} __packed; + +struct fip_vn_port_desc_s { + uint8_t type; + uint8_t len; + uint8_t vn_port_mac[6]; + uint8_t rsvd[1]; + uint8_t vn_port_id[3]; + uint64_t vn_port_name; +} __packed; + +struct fip_vn_port_ka_s { + struct eth_hdr_s eth; + struct fip_header_s fip; + struct fip_mac_desc_s mac_desc; + struct fip_vn_port_desc_s vn_port_desc; +} __packed; + +/* + * Variables: + * fcf_mac, eth.smac, mac_desc.enode_mac + * vn_port_desc:mac, id, port_name + */ +struct fip_vn_port_ka_s fip_vn_port_ka_tmpl = { + .eth = { + .eth_type = FIP_ETH_TYPE_LE}, + .fip = { + .ver = 0x10, + .protocol = FIP_KA_CVL << 8, + .subcode = FIP_SUBCODE_REQ << 8, + .desc_len = 7 << 8}, + .mac_desc = {.type = FIP_TYPE_MAC, .len = 2}, + .vn_port_desc = {.type = FIP_TYPE_VX_PORT, .len = 5} +}; + +struct fip_enode_ka_s { + struct eth_hdr_s eth; + struct fip_header_s fip; + struct fip_mac_desc_s mac_desc; +} __packed; + +/* + * Variables: + * fcf_mac, eth.smac, mac_desc.enode_mac + */ +struct fip_enode_ka_s fip_enode_ka_tmpl = { + .eth = { + .eth_type = FIP_ETH_TYPE_LE}, + .fip = { + .ver = 0x10, + .protocol = FIP_KA_CVL << 8, + .subcode = FIP_SUBCODE_REQ << 8, + .desc_len = 2 << 8}, + .mac_desc = {.type = FIP_TYPE_MAC, .len = 2} +}; + +struct fip_name_desc_s { + uint8_t type; + uint8_t len; + uint8_t rsvd[2]; + uint64_t name; +} __packed; + +struct fip_cvl_s { + struct fip_header_s fip; + struct fip_mac_desc_s fcf_mac_desc; + struct fip_name_desc_s name_desc; + struct fip_vn_port_desc_s vn_ports_desc[]; +} __packed; + +struct fip_flogi_desc_s { + uint8_t type; + uint8_t len; + uint16_t rsvd; + struct fc_std_flogi flogi; +} __packed; + +struct fip_flogi_rsp_desc_s { + uint8_t type; + uint8_t len; + uint16_t rsvd; + struct fc_std_flogi flogi; +} __packed; + +struct fip_flogi_s { + struct eth_hdr_s eth; + struct fip_header_s fip; + struct fip_flogi_desc_s flogi_desc; + struct fip_mac_desc_s mac_desc; +} 
__packed; + +struct fip_flogi_rsp_s { + struct fip_header_s fip; + struct fip_flogi_rsp_desc_s rsp_desc; + struct fip_mac_desc_s mac_desc; +} __packed; + +/* + * Variables: + * fcf_mac, eth.smac, mac_desc.enode_mac + */ +struct fip_flogi_s fip_flogi_tmpl = { + .eth = { + .eth_type = FIP_ETH_TYPE_LE}, + .fip = { + .ver = 0x10, + .protocol = FIP_FLOGI << 8, + .subcode = FIP_SUBCODE_REQ << 8, + .desc_len = 38 << 8, + .flags = 0x80}, + .flogi_desc = { + .type = FIP_TYPE_FLOGI, .len = 36, + .flogi = { + .fchdr = { + .fh_r_ctl = FC_RCTL_ELS_REQ, + .fh_d_id = {0xFF, 0xFF, 0xFE}, + .fh_type = FC_TYPE_ELS, + .fh_f_ctl = {FNIC_ELS_REQ_FCTL, 0, 0}, + .fh_rx_id = 0xFFFF}, + .els = { + .fl_cmd = ELS_FLOGI, + .fl_csp = { + .sp_hi_ver = + FNIC_FC_PH_VER_HI, + .sp_lo_ver = + FNIC_FC_PH_VER_LO, + .sp_bb_cred = + cpu_to_be16 + (FNIC_FC_B2B_CREDIT), + .sp_bb_data = + cpu_to_be16 + (FNIC_FC_B2B_RDF_SZ)}, + .fl_cssp[2].cp_class = + cpu_to_be16(FC_CPC_VALID | FC_CPC_SEQ) + }, + } + }, + .mac_desc = {.type = FIP_TYPE_MAC, .len = 2} +}; + +struct fip_fcoe_desc_s { + uint8_t type; + uint8_t len; + uint16_t max_fcoe_size; +} __packed; + +struct fip_discovery_s { + struct eth_hdr_s eth; + struct fip_header_s fip; + struct fip_mac_desc_s mac_desc; + struct fip_name_desc_s name_desc; + struct fip_fcoe_desc_s fcoe_desc; +} __packed; + +/* + * Variables: + * eth.smac, mac_desc.enode_mac, node_name + */ +struct fip_discovery_s fip_discovery_tmpl = { + .eth = {.dmac = FCOE_ALL_FCFS_MAC, + .eth_type = FIP_ETH_TYPE_LE}, + .fip = { + .ver = 0x10, .protocol = FIP_DISCOVERY << 8, + .subcode = FIP_SUBCODE_REQ << 8, .desc_len = 6 << 8, + .flags = 0x80}, + .mac_desc = {.type = FIP_TYPE_MAC, .len = 2}, + .name_desc = {.type = FIP_TYPE_NAME_ID, .len = 3}, + .fcoe_desc = { + .type = FIP_TYPE_MAX_FCOE, .len = 1, + .max_fcoe_size = FCOE_MAX_SIZE_LE} +}; + +struct fip_prio_desc_s { + uint8_t type; + uint8_t len; + uint8_t rsvd; + uint8_t priority; +} __packed; + +struct fip_fabric_desc_s { + uint8_t type; + uint8_t len; + uint16_t vf_id; + uint8_t rsvd; + uint8_t fc_map[3]; + uint64_t fabric_name; +} __packed; + +struct fip_fka_adv_desc_s { + uint8_t type; + uint8_t len; + uint8_t rsvd; + uint8_t rsvd_D; + uint32_t fka_adv; +} __packed; + +struct fip_disc_adv_s { + struct fip_header_s fip; + struct fip_prio_desc_s prio_desc; + struct fip_mac_desc_s mac_desc; + struct fip_name_desc_s name_desc; + struct fip_fabric_desc_s fabric_desc; + struct fip_fka_adv_desc_s fka_adv_desc; +} __packed; + +void fnic_fcoe_process_vlan_resp(struct fnic *fnic, struct fip_header_s *fiph); +void fnic_fcoe_fip_discovery_resp(struct fnic *fnic, struct fip_header_s *fiph); +void fnic_fcoe_process_flogi_resp(struct fnic *fnic, struct fip_header_s *fiph); +void fnic_work_on_fip_timer(struct work_struct *work); +void fnic_work_on_fcs_ka_timer(struct work_struct *work); +void fnic_fcoe_send_vlan_req(struct fnic *fnic); +void fnic_fcoe_start_fcf_discovery(struct fnic *fnic); +void fnic_fcoe_start_flogi(struct fnic *fnic); +void fnic_fcoe_process_cvl(struct fnic *fnic, struct fip_header_s *fiph); +void fnic_vlan_discovery_timeout(struct fnic *fnic); + +#endif /* _FIP_H_ */ diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h index 5d8315b24085..45bee896153e 100644 --- a/drivers/scsi/fnic/fnic.h +++ b/drivers/scsi/fnic/fnic.h @@ -230,6 +230,12 @@ do { \ "fnic<%d>: %s: %d: " fmt, fnic_num,\ __func__, __LINE__, ##args);) +#define FNIC_FIP_DBG(kern_level, host, fnic_num, fmt, args...) 
\ + FNIC_CHECK_LOGGING(FNIC_FCS_LOGGING, \ + shost_printk(kern_level, host, \ + "fnic<%d>: %s: %d: " fmt, fnic_num,\ + __func__, __LINE__, ##args);) + #define FNIC_SCSI_DBG(kern_level, host, fnic_num, fmt, args...) \ FNIC_CHECK_LOGGING(FNIC_SCSI_LOGGING, \ shost_printk(kern_level, host, \ @@ -409,13 +415,15 @@ struct fnic { /*** FIP related data members -- start ***/ void (*set_vlan)(struct fnic *, u16 vlan); struct work_struct fip_frame_work; - struct sk_buff_head fip_frame_queue; + struct work_struct fip_timer_work; + struct list_head fip_frame_queue; struct timer_list fip_timer; - struct list_head vlans; spinlock_t vlans_lock; - - struct work_struct event_work; - struct list_head evlist; + struct timer_list retry_fip_timer; + struct timer_list fcs_ka_timer; + struct timer_list enode_ka_timer; + struct timer_list vn_ka_timer; + struct list_head vlan_list; /*** FIP related data members -- end ***/ /* copy work queue cache line section */ @@ -462,9 +470,6 @@ int fnic_rq_cmpl_handler(struct fnic *fnic, int); int fnic_alloc_rq_frame(struct vnic_rq *rq); void fnic_free_rq_buf(struct vnic_rq *rq, struct vnic_rq_buf *buf); void fnic_flush_tx(struct work_struct *work); -void fnic_eth_send(struct fcoe_ctlr *, struct sk_buff *skb); -void fnic_set_port_id(struct fc_lport *, u32, struct fc_frame *); -void fnic_update_mac(struct fc_lport *, u8 *new); void fnic_update_mac_locked(struct fnic *, u8 *new); int fnic_queuecommand(struct Scsi_Host *, struct scsi_cmnd *); @@ -493,7 +498,7 @@ int fnic_is_abts_pending(struct fnic *, struct scsi_cmnd *); void fnic_handle_fip_frame(struct work_struct *work); void fnic_handle_fip_event(struct fnic *fnic); void fnic_fcoe_reset_vlans(struct fnic *fnic); -void fnic_fcoe_evlist_free(struct fnic *fnic); +extern void fnic_handle_fip_timer(struct timer_list *t); static inline int fnic_chk_state_flags_locked(struct fnic *fnic, unsigned long st_flags) diff --git a/drivers/scsi/fnic/fnic_fcs.c b/drivers/scsi/fnic/fnic_fcs.c index e5d5a63e0281..802d8e2965a9 100644 --- a/drivers/scsi/fnic/fnic_fcs.c +++ b/drivers/scsi/fnic/fnic_fcs.c @@ -25,16 +25,9 @@ #include "cq_enet_desc.h" #include "cq_exch_desc.h" -static u8 fcoe_all_fcfs[ETH_ALEN] = FIP_ALL_FCF_MACS; -struct workqueue_struct *fnic_fip_queue; +extern struct workqueue_struct *fnic_fip_queue; struct workqueue_struct *fnic_event_queue; -static void fnic_set_eth_mode(struct fnic *); -static void fnic_fcoe_start_fcf_disc(struct fnic *fnic); -static void fnic_fcoe_process_vlan_resp(struct fnic *fnic, struct sk_buff *); -static int fnic_fcoe_vlan_check(struct fnic *fnic, u16 flag); -static int fnic_fcoe_handle_fip_frame(struct fnic *fnic, struct sk_buff *skb); - /* Frame initialization */ /* * Variables: @@ -252,11 +245,6 @@ void fnic_handle_link(struct work_struct *work) fnic->lport->host->host_no, FNIC_FC_LE, "Link Status: UP_DOWN", strlen("Link Status: UP_DOWN")); - if (fnic->config.flags & VFCF_FIP_CAPABLE) { - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "deleting fip-timer during link-down\n"); - del_timer_sync(&fnic->fip_timer); - } fcoe_ctlr_link_down(&fnic->ctlr); } @@ -299,496 +287,70 @@ void fnic_handle_frame(struct work_struct *work) } } -void fnic_fcoe_evlist_free(struct fnic *fnic) -{ - struct fnic_event *fevt = NULL; - struct fnic_event *next = NULL; - unsigned long flags; - - spin_lock_irqsave(&fnic->fnic_lock, flags); - if (list_empty(&fnic->evlist)) { - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - return; - } - - list_for_each_entry_safe(fevt, next, &fnic->evlist, list) { - 
list_del(&fevt->list); - kfree(fevt); - } - spin_unlock_irqrestore(&fnic->fnic_lock, flags); -} - -void fnic_handle_event(struct work_struct *work) +void fnic_handle_fip_frame(struct work_struct *work) { - struct fnic *fnic = container_of(work, struct fnic, event_work); - struct fnic_event *fevt = NULL; - struct fnic_event *next = NULL; - unsigned long flags; + struct fnic_frame_list *cur_frame, *next; + struct fnic *fnic = container_of(work, struct fnic, fip_frame_work); - spin_lock_irqsave(&fnic->fnic_lock, flags); - if (list_empty(&fnic->evlist)) { - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - return; - } + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Processing FIP frame\n"); - list_for_each_entry_safe(fevt, next, &fnic->evlist, list) { + spin_lock_irqsave(&fnic->fnic_lock, fnic->lock_flags); + list_for_each_entry_safe(cur_frame, next, &fnic->fip_frame_queue, + links) { if (fnic->stop_rx_link_events) { - list_del(&fevt->list); - kfree(fevt); - spin_unlock_irqrestore(&fnic->fnic_lock, flags); + list_del(&cur_frame->links); + spin_unlock_irqrestore(&fnic->fnic_lock, fnic->lock_flags); + kfree(cur_frame->fp); + kfree(cur_frame); return; } + /* * If we're in a transitional state, just re-queue and return. * The queue will be serviced when we get to a stable state. */ if (fnic->state != FNIC_IN_FC_MODE && - fnic->state != FNIC_IN_ETH_MODE) { - spin_unlock_irqrestore(&fnic->fnic_lock, flags); + fnic->state != FNIC_IN_ETH_MODE) { + spin_unlock_irqrestore(&fnic->fnic_lock, fnic->lock_flags); return; } - list_del(&fevt->list); - switch (fevt->event) { - case FNIC_EVT_START_VLAN_DISC: - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - fnic_fcoe_send_vlan_req(fnic); - spin_lock_irqsave(&fnic->fnic_lock, flags); - break; - case FNIC_EVT_START_FCF_DISC: - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "Start FCF Discovery\n"); - fnic_fcoe_start_fcf_disc(fnic); - break; - default: - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "Unknown event 0x%x\n", fevt->event); - break; - } - kfree(fevt); - } - spin_unlock_irqrestore(&fnic->fnic_lock, flags); -} - -/** - * is_fnic_fip_flogi_reject() - Check if the Received FIP FLOGI frame is rejected - * @fip: The FCoE controller that received the frame - * @skb: The received FIP frame - * - * Returns non-zero if the frame is rejected with unsupported cmd with - * insufficient resource els explanation. 
- */ -static inline int is_fnic_fip_flogi_reject(struct fcoe_ctlr *fip, - struct sk_buff *skb) -{ - struct fc_lport *lport = fip->lp; - struct fip_header *fiph; - struct fc_frame_header *fh = NULL; - struct fip_desc *desc; - struct fip_encaps *els; - u16 op; - u8 els_op; - u8 sub; - - size_t rlen; - size_t dlen = 0; - - if (skb_linearize(skb)) - return 0; - - if (skb->len < sizeof(*fiph)) - return 0; - - fiph = (struct fip_header *)skb->data; - op = ntohs(fiph->fip_op); - sub = fiph->fip_subcode; - - if (op != FIP_OP_LS) - return 0; - - if (sub != FIP_SC_REP) - return 0; - - rlen = ntohs(fiph->fip_dl_len) * 4; - if (rlen + sizeof(*fiph) > skb->len) - return 0; - - desc = (struct fip_desc *)(fiph + 1); - dlen = desc->fip_dlen * FIP_BPW; - - if (desc->fip_dtype == FIP_DT_FLOGI) { - - if (dlen < sizeof(*els) + sizeof(*fh) + 1) - return 0; - - els = (struct fip_encaps *)desc; - fh = (struct fc_frame_header *)(els + 1); - - if (!fh) - return 0; - - /* - * ELS command code, reason and explanation should be = Reject, - * unsupported command and insufficient resource - */ - els_op = *(u8 *)(fh + 1); - if (els_op == ELS_LS_RJT) { - shost_printk(KERN_INFO, lport->host, - "Flogi Request Rejected by Switch\n"); - return 1; - } - shost_printk(KERN_INFO, lport->host, - "Flogi Request Accepted by Switch\n"); - } - return 0; -} - -void fnic_fcoe_send_vlan_req(struct fnic *fnic) -{ - struct fcoe_ctlr *fip = &fnic->ctlr; - struct fnic_stats *fnic_stats = &fnic->fnic_stats; - struct sk_buff *skb; - char *eth_fr; - struct fip_vlan *vlan; - u64 vlan_tov; - - fnic_fcoe_reset_vlans(fnic); - fnic->set_vlan(fnic, 0); - - if (printk_ratelimit()) - FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, - "Sending VLAN request...\n"); - - skb = dev_alloc_skb(sizeof(struct fip_vlan)); - if (!skb) - return; - - eth_fr = (char *)skb->data; - vlan = (struct fip_vlan *)eth_fr; - - memset(vlan, 0, sizeof(*vlan)); - memcpy(vlan->eth.h_source, fip->ctl_src_addr, ETH_ALEN); - memcpy(vlan->eth.h_dest, fcoe_all_fcfs, ETH_ALEN); - vlan->eth.h_proto = htons(ETH_P_FIP); - - vlan->fip.fip_ver = FIP_VER_ENCAPS(FIP_VER); - vlan->fip.fip_op = htons(FIP_OP_VLAN); - vlan->fip.fip_subcode = FIP_SC_VL_REQ; - vlan->fip.fip_dl_len = htons(sizeof(vlan->desc) / FIP_BPW); - - vlan->desc.mac.fd_desc.fip_dtype = FIP_DT_MAC; - vlan->desc.mac.fd_desc.fip_dlen = sizeof(vlan->desc.mac) / FIP_BPW; - memcpy(&vlan->desc.mac.fd_mac, fip->ctl_src_addr, ETH_ALEN); - - vlan->desc.wwnn.fd_desc.fip_dtype = FIP_DT_NAME; - vlan->desc.wwnn.fd_desc.fip_dlen = sizeof(vlan->desc.wwnn) / FIP_BPW; - put_unaligned_be64(fip->lp->wwnn, &vlan->desc.wwnn.fd_wwn); - atomic64_inc(&fnic_stats->vlan_stats.vlan_disc_reqs); - - skb_put(skb, sizeof(*vlan)); - skb->protocol = htons(ETH_P_FIP); - skb_reset_mac_header(skb); - skb_reset_network_header(skb); - fip->send(fip, skb); - - /* set a timer so that we can retry if there no response */ - vlan_tov = jiffies + msecs_to_jiffies(FCOE_CTLR_FIPVLAN_TOV); - mod_timer(&fnic->fip_timer, round_jiffies(vlan_tov)); -} - -static void fnic_fcoe_process_vlan_resp(struct fnic *fnic, struct sk_buff *skb) -{ - struct fcoe_ctlr *fip = &fnic->ctlr; - struct fip_header *fiph; - struct fip_desc *desc; - struct fnic_stats *fnic_stats = &fnic->fnic_stats; - u16 vid; - size_t rlen; - size_t dlen; - struct fcoe_vlan *vlan; - u64 sol_time; - unsigned long flags; - - FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, - "Received VLAN response...\n"); - - fiph = (struct fip_header *) skb->data; + list_del(&cur_frame->links); - 
FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, - "Received VLAN response... OP 0x%x SUB_OP 0x%x\n", - ntohs(fiph->fip_op), fiph->fip_subcode); - - rlen = ntohs(fiph->fip_dl_len) * 4; - fnic_fcoe_reset_vlans(fnic); - spin_lock_irqsave(&fnic->vlans_lock, flags); - desc = (struct fip_desc *)(fiph + 1); - while (rlen > 0) { - dlen = desc->fip_dlen * FIP_BPW; - switch (desc->fip_dtype) { - case FIP_DT_VLAN: - vid = ntohs(((struct fip_vlan_desc *)desc)->fd_vlan); - shost_printk(KERN_INFO, fnic->lport->host, - "process_vlan_resp: FIP VLAN %d\n", vid); - vlan = kzalloc(sizeof(*vlan), GFP_ATOMIC); - if (!vlan) { - /* retry from timer */ - spin_unlock_irqrestore(&fnic->vlans_lock, - flags); - goto out; - } - vlan->vid = vid & 0x0fff; - vlan->state = FIP_VLAN_AVAIL; - list_add_tail(&vlan->list, &fnic->vlans); - break; - } - desc = (struct fip_desc *)((char *)desc + dlen); - rlen -= dlen; - } - - /* any VLAN descriptors present ? */ - if (list_empty(&fnic->vlans)) { - /* retry from timer */ - atomic64_inc(&fnic_stats->vlan_stats.resp_withno_vlanID); - FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, - "No VLAN descriptors in FIP VLAN response\n"); - spin_unlock_irqrestore(&fnic->vlans_lock, flags); - goto out; - } - - vlan = list_first_entry(&fnic->vlans, struct fcoe_vlan, list); - fnic->set_vlan(fnic, vlan->vid); - vlan->state = FIP_VLAN_SENT; /* sent now */ - vlan->sol_count++; - spin_unlock_irqrestore(&fnic->vlans_lock, flags); - - /* start the solicitation */ - fcoe_ctlr_link_up(fip); - - sol_time = jiffies + msecs_to_jiffies(FCOE_CTLR_START_DELAY); - mod_timer(&fnic->fip_timer, round_jiffies(sol_time)); -out: - return; -} - -static void fnic_fcoe_start_fcf_disc(struct fnic *fnic) -{ - unsigned long flags; - struct fcoe_vlan *vlan; - u64 sol_time; - - spin_lock_irqsave(&fnic->vlans_lock, flags); - vlan = list_first_entry(&fnic->vlans, struct fcoe_vlan, list); - fnic->set_vlan(fnic, vlan->vid); - vlan->state = FIP_VLAN_SENT; /* sent now */ - vlan->sol_count = 1; - spin_unlock_irqrestore(&fnic->vlans_lock, flags); - - /* start the solicitation */ - fcoe_ctlr_link_up(&fnic->ctlr); - - sol_time = jiffies + msecs_to_jiffies(FCOE_CTLR_START_DELAY); - mod_timer(&fnic->fip_timer, round_jiffies(sol_time)); -} - -static int fnic_fcoe_vlan_check(struct fnic *fnic, u16 flag) -{ - unsigned long flags; - struct fcoe_vlan *fvlan; - - spin_lock_irqsave(&fnic->vlans_lock, flags); - if (list_empty(&fnic->vlans)) { - spin_unlock_irqrestore(&fnic->vlans_lock, flags); - return -EINVAL; - } - - fvlan = list_first_entry(&fnic->vlans, struct fcoe_vlan, list); - if (fvlan->state == FIP_VLAN_USED) { - spin_unlock_irqrestore(&fnic->vlans_lock, flags); - return 0; - } - - if (fvlan->state == FIP_VLAN_SENT) { - fvlan->state = FIP_VLAN_USED; - spin_unlock_irqrestore(&fnic->vlans_lock, flags); - return 0; - } - spin_unlock_irqrestore(&fnic->vlans_lock, flags); - return -EINVAL; -} - -static void fnic_event_enq(struct fnic *fnic, enum fnic_evt ev) -{ - struct fnic_event *fevt; - unsigned long flags; - - fevt = kmalloc(sizeof(*fevt), GFP_ATOMIC); - if (!fevt) - return; - - fevt->fnic = fnic; - fevt->event = ev; - - spin_lock_irqsave(&fnic->fnic_lock, flags); - list_add_tail(&fevt->list, &fnic->evlist); - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - - schedule_work(&fnic->event_work); -} - -static int fnic_fcoe_handle_fip_frame(struct fnic *fnic, struct sk_buff *skb) -{ - struct fip_header *fiph; - int ret = 1; - u16 op; - u8 sub; - - if (!skb || !(skb->data)) - return -1; - - if (skb_linearize(skb)) - 
goto drop; - - fiph = (struct fip_header *)skb->data; - op = ntohs(fiph->fip_op); - sub = fiph->fip_subcode; - - if (FIP_VER_DECAPS(fiph->fip_ver) != FIP_VER) - goto drop; - - if (ntohs(fiph->fip_dl_len) * FIP_BPW + sizeof(*fiph) > skb->len) - goto drop; - - if (op == FIP_OP_DISC && sub == FIP_SC_ADV) { - if (fnic_fcoe_vlan_check(fnic, ntohs(fiph->fip_flags))) - goto drop; - /* pass it on to fcoe */ - ret = 1; - } else if (op == FIP_OP_VLAN && sub == FIP_SC_VL_NOTE) { - /* set the vlan as used */ - fnic_fcoe_process_vlan_resp(fnic, skb); - ret = 0; - } else if (op == FIP_OP_CTRL && sub == FIP_SC_CLR_VLINK) { - /* received CVL request, restart vlan disc */ - fnic_event_enq(fnic, FNIC_EVT_START_VLAN_DISC); - /* pass it on to fcoe */ - ret = 1; - } -drop: - return ret; -} - -void fnic_handle_fip_frame(struct work_struct *work) -{ - struct fnic *fnic = container_of(work, struct fnic, fip_frame_work); - struct fnic_stats *fnic_stats = &fnic->fnic_stats; - unsigned long flags; - struct sk_buff *skb; - struct ethhdr *eh; - - while ((skb = skb_dequeue(&fnic->fip_frame_queue))) { - spin_lock_irqsave(&fnic->fnic_lock, flags); - if (fnic->stop_rx_link_events) { - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - dev_kfree_skb(skb); - return; - } - /* - * If we're in a transitional state, just re-queue and return. - * The queue will be serviced when we get to a stable state. - */ - if (fnic->state != FNIC_IN_FC_MODE && - fnic->state != FNIC_IN_ETH_MODE) { - skb_queue_head(&fnic->fip_frame_queue, skb); - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - return; - } - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - eh = (struct ethhdr *)skb->data; - if (eh->h_proto == htons(ETH_P_FIP)) { - skb_pull(skb, sizeof(*eh)); - if (fnic_fcoe_handle_fip_frame(fnic, skb) <= 0) { - dev_kfree_skb(skb); - continue; - } - /* - * If there's FLOGI rejects - clear all - * fcf's & restart from scratch - */ - if (is_fnic_fip_flogi_reject(&fnic->ctlr, skb)) { - atomic64_inc( - &fnic_stats->vlan_stats.flogi_rejects); - shost_printk(KERN_INFO, fnic->lport->host, - "Trigger a Link down - VLAN Disc\n"); - fcoe_ctlr_link_down(&fnic->ctlr); - /* start FCoE VLAN discovery */ - fnic_fcoe_send_vlan_req(fnic); - dev_kfree_skb(skb); - continue; - } - fcoe_ctlr_recv(&fnic->ctlr, skb); - continue; + if (fdls_fip_recv_frame(fnic, cur_frame->fp)) { + kfree(cur_frame->fp); + kfree(cur_frame); } } + spin_unlock_irqrestore(&fnic->fnic_lock, fnic->lock_flags); } /** * fnic_import_rq_eth_pkt() - handle received FCoE or FIP frame. * @fnic: fnic instance. - * @skb: Ethernet Frame. + * @fp: Ethernet Frame. */ -static inline int fnic_import_rq_eth_pkt(struct fnic *fnic, struct sk_buff *skb) +static inline int fnic_import_rq_eth_pkt(struct fnic *fnic, void *fp) { - struct fc_frame *fp; - struct ethhdr *eh; - struct fcoe_hdr *fcoe_hdr; - struct fcoe_crc_eof *ft; + struct fnic_eth_hdr_s *eh; + struct fnic_frame_list *fip_fr_elem; + unsigned long flags; - /* - * Undo VLAN encapsulation if present. 
- */ - eh = (struct ethhdr *)skb->data; - if (eh->h_proto == htons(ETH_P_8021Q)) { - memmove((u8 *)eh + VLAN_HLEN, eh, ETH_ALEN * 2); - eh = skb_pull(skb, VLAN_HLEN); - skb_reset_mac_header(skb); - } - if (eh->h_proto == htons(ETH_P_FIP)) { - if (!(fnic->config.flags & VFCF_FIP_CAPABLE)) { - printk(KERN_ERR "Dropped FIP frame, as firmware " - "uses non-FIP mode, Enable FIP " - "using UCSM\n"); - goto drop; - } - if ((fnic_fc_trace_set_data(fnic->lport->host->host_no, - FNIC_FC_RECV|0x80, (char *)skb->data, skb->len)) != 0) { - printk(KERN_ERR "fnic ctlr frame trace error!!!"); - } - skb_queue_tail(&fnic->fip_frame_queue, skb); + eh = (struct fnic_eth_hdr_s *) fp; + if ((eh->ether_type == htons(ETH_TYPE_FIP)) && (fnic->iport.usefip)) { + fip_fr_elem = (struct fnic_frame_list *) + kzalloc(sizeof(struct fnic_frame_list), GFP_ATOMIC); + if (!fip_fr_elem) + return 0; + fip_fr_elem->fp = fp; + spin_lock_irqsave(&fnic->fnic_lock, flags); + list_add_tail(&fip_fr_elem->links, &fnic->fip_frame_queue); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); queue_work(fnic_fip_queue, &fnic->fip_frame_work); - return 1; /* let caller know packet was used */ - } - if (eh->h_proto != htons(ETH_P_FCOE)) - goto drop; - skb_set_network_header(skb, sizeof(*eh)); - skb_pull(skb, sizeof(*eh)); - - fcoe_hdr = (struct fcoe_hdr *)skb->data; - if (FC_FCOE_DECAPS_VER(fcoe_hdr) != FC_FCOE_VER) - goto drop; - - fp = (struct fc_frame *)skb; - fc_frame_init(fp); - fr_sof(fp) = fcoe_hdr->fcoe_sof; - skb_pull(skb, sizeof(struct fcoe_hdr)); - skb_reset_transport_header(skb); - - ft = (struct fcoe_crc_eof *)(skb->data + skb->len - sizeof(*ft)); - fr_eof(fp) = ft->fcoe_eof; - skb_trim(skb, skb->len - sizeof(*ft)); - return 0; -drop: - dev_kfree_skb_irq(skb); - return -1; + return 1; /* let caller know packet was used */ + } else + return 0; } /** @@ -800,206 +362,145 @@ static inline int fnic_import_rq_eth_pkt(struct fnic *fnic, struct sk_buff *skb) */ void fnic_update_mac_locked(struct fnic *fnic, u8 *new) { - u8 *ctl = fnic->ctlr.ctl_src_addr; + struct fnic_iport_s *iport = &fnic->iport; + u8 *ctl = iport->hwmac; u8 *data = fnic->data_src_addr; if (is_zero_ether_addr(new)) new = ctl; if (ether_addr_equal(data, new)) return; - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "update_mac %pM\n", new); + + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Update MAC: %u\n", *new); + if (!is_zero_ether_addr(data) && !ether_addr_equal(data, ctl)) vnic_dev_del_addr(fnic->vdev, data); + memcpy(data, new, ETH_ALEN); if (!ether_addr_equal(new, ctl)) vnic_dev_add_addr(fnic->vdev, new); } -/** - * fnic_update_mac() - set data MAC address and filters. - * @lport: local port. - * @new: newly-assigned FCoE MAC address. - */ -void fnic_update_mac(struct fc_lport *lport, u8 *new) -{ - struct fnic *fnic = lport_priv(lport); - - spin_lock_irq(&fnic->fnic_lock); - fnic_update_mac_locked(fnic, new); - spin_unlock_irq(&fnic->fnic_lock); -} - -/** - * fnic_set_port_id() - set the port_ID after successful FLOGI. - * @lport: local port. - * @port_id: assigned FC_ID. - * @fp: received frame containing the FLOGI accept or NULL. - * - * This is called from libfc when a new FC_ID has been assigned. - * This causes us to reset the firmware to FC_MODE and setup the new MAC - * address and FC_ID. - * - * It is also called with FC_ID 0 when we're logged off. - * - * If the FC_ID is due to point-to-point, fp may be NULL. 
- */ -void fnic_set_port_id(struct fc_lport *lport, u32 port_id, struct fc_frame *fp) -{ - struct fnic *fnic = lport_priv(lport); - u8 *mac; - int ret; - - FNIC_FCS_DBG(KERN_DEBUG, lport->host, fnic->fnic_num, - "set port_id 0x%x fp 0x%p\n", - port_id, fp); - - /* - * If we're clearing the FC_ID, change to use the ctl_src_addr. - * Set ethernet mode to send FLOGI. - */ - if (!port_id) { - fnic_update_mac(lport, fnic->ctlr.ctl_src_addr); - fnic_set_eth_mode(fnic); - return; - } - - if (fp) { - mac = fr_cb(fp)->granted_mac; - if (is_zero_ether_addr(mac)) { - /* non-FIP - FLOGI already accepted - ignore return */ - fcoe_ctlr_recv_flogi(&fnic->ctlr, lport, fp); - } - fnic_update_mac(lport, mac); - } - - /* Change state to reflect transition to FC mode */ - spin_lock_irq(&fnic->fnic_lock); - if (fnic->state == FNIC_IN_ETH_MODE || fnic->state == FNIC_IN_FC_MODE) - fnic->state = FNIC_IN_ETH_TRANS_FC_MODE; - else { - FNIC_FCS_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, - "Unexpected fnic state: %s processing FLOGI response", - fnic_state_to_str(fnic->state)); - spin_unlock_irq(&fnic->fnic_lock); - return; - } - spin_unlock_irq(&fnic->fnic_lock); - - /* - * Send FLOGI registration to firmware to set up FC mode. - * The new address will be set up when registration completes. - */ - ret = fnic_flogi_reg_handler(fnic, port_id); - - if (ret < 0) { - spin_lock_irq(&fnic->fnic_lock); - if (fnic->state == FNIC_IN_ETH_TRANS_FC_MODE) - fnic->state = FNIC_IN_ETH_MODE; - spin_unlock_irq(&fnic->fnic_lock); - } -} - static void fnic_rq_cmpl_frame_recv(struct vnic_rq *rq, struct cq_desc *cq_desc, struct vnic_rq_buf *buf, int skipped __attribute__((unused)), void *opaque) { struct fnic *fnic = vnic_dev_priv(rq->vdev); - struct sk_buff *skb; - struct fc_frame *fp; + uint8_t *fp; struct fnic_stats *fnic_stats = &fnic->fnic_stats; + unsigned int ethhdr_stripped; u8 type, color, eop, sop, ingress_port, vlan_stripped; - u8 fcoe = 0, fcoe_sof, fcoe_eof; - u8 fcoe_fc_crc_ok = 1, fcoe_enc_error = 0; - u8 tcp_udp_csum_ok, udp, tcp, ipv4_csum_ok; - u8 ipv6, ipv4, ipv4_fragment, rss_type, csum_not_calc; + u8 fcoe_fnic_crc_ok = 1, fcoe_enc_error = 0; u8 fcs_ok = 1, packet_error = 0; - u16 q_number, completed_index, bytes_written = 0, vlan, checksum; + u16 q_number, completed_index, vlan; u32 rss_hash; + u16 checksum; + u8 csum_not_calc, rss_type, ipv4, ipv6, ipv4_fragment; + u8 tcp_udp_csum_ok, udp, tcp, ipv4_csum_ok; + u8 fcoe = 0, fcoe_sof, fcoe_eof; u16 exchange_id, tmpl; u8 sof = 0; u8 eof = 0; u32 fcp_bytes_written = 0; + u16 enet_bytes_written = 0; + u32 bytes_written = 0; unsigned long flags; + struct fnic_frame_list *frame_elem = NULL; + struct fnic_eth_hdr_s *eh; dma_unmap_single(&fnic->pdev->dev, buf->dma_addr, buf->len, - DMA_FROM_DEVICE); - skb = buf->os_buf; - fp = (struct fc_frame *)skb; + DMA_FROM_DEVICE); + fp = (uint8_t *) buf->os_buf; buf->os_buf = NULL; cq_desc_dec(cq_desc, &type, &color, &q_number, &completed_index); if (type == CQ_DESC_TYPE_RQ_FCP) { - cq_fcp_rq_desc_dec((struct cq_fcp_rq_desc *)cq_desc, - &type, &color, &q_number, &completed_index, - &eop, &sop, &fcoe_fc_crc_ok, &exchange_id, - &tmpl, &fcp_bytes_written, &sof, &eof, - &ingress_port, &packet_error, - &fcoe_enc_error, &fcs_ok, &vlan_stripped, - &vlan); - skb_trim(skb, fcp_bytes_written); - fr_sof(fp) = sof; - fr_eof(fp) = eof; - + cq_fcp_rq_desc_dec((struct cq_fcp_rq_desc *) cq_desc, &type, + &color, &q_number, &completed_index, &eop, &sop, + &fcoe_fnic_crc_ok, &exchange_id, &tmpl, + &fcp_bytes_written, &sof, &eof, &ingress_port, + 
&packet_error, &fcoe_enc_error, &fcs_ok, + &vlan_stripped, &vlan); + ethhdr_stripped = 1; + bytes_written = fcp_bytes_written; } else if (type == CQ_DESC_TYPE_RQ_ENET) { - cq_enet_rq_desc_dec((struct cq_enet_rq_desc *)cq_desc, - &type, &color, &q_number, &completed_index, - &ingress_port, &fcoe, &eop, &sop, - &rss_type, &csum_not_calc, &rss_hash, - &bytes_written, &packet_error, - &vlan_stripped, &vlan, &checksum, - &fcoe_sof, &fcoe_fc_crc_ok, - &fcoe_enc_error, &fcoe_eof, - &tcp_udp_csum_ok, &udp, &tcp, - &ipv4_csum_ok, &ipv6, &ipv4, - &ipv4_fragment, &fcs_ok); - skb_trim(skb, bytes_written); + cq_enet_rq_desc_dec((struct cq_enet_rq_desc *) cq_desc, &type, + &color, &q_number, &completed_index, + &ingress_port, &fcoe, &eop, &sop, &rss_type, + &csum_not_calc, &rss_hash, &enet_bytes_written, + &packet_error, &vlan_stripped, &vlan, + &checksum, &fcoe_sof, &fcoe_fnic_crc_ok, + &fcoe_enc_error, &fcoe_eof, &tcp_udp_csum_ok, + &udp, &tcp, &ipv4_csum_ok, &ipv6, &ipv4, + &ipv4_fragment, &fcs_ok); + + ethhdr_stripped = 0; + bytes_written = enet_bytes_written; + if (!fcs_ok) { atomic64_inc(&fnic_stats->misc_stats.frame_errors); - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "fcs error. dropping packet.\n"); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic 0x%p fcs error. Dropping packet.\n", fnic); goto drop; } - if (fnic_import_rq_eth_pkt(fnic, skb)) - return; + eh = (struct fnic_eth_hdr_s *) fp; + if (eh->ether_type != htons(ETH_TYPE_FCOE)) { + + if (fnic_import_rq_eth_pkt(fnic, fp)) + return; + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Dropping ether_type 0x%x", + ntohs(eh->ether_type)); + goto drop; + } } else { - /* wrong CQ type*/ - shost_printk(KERN_ERR, fnic->lport->host, - "fnic rq_cmpl wrong cq type x%x\n", type); + /* wrong CQ type */ + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic rq_cmpl wrong cq type x%x\n", type); goto drop; } - if (!fcs_ok || packet_error || !fcoe_fc_crc_ok || fcoe_enc_error) { + if (!fcs_ok || packet_error || !fcoe_fnic_crc_ok || fcoe_enc_error) { atomic64_inc(&fnic_stats->misc_stats.frame_errors); - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "fnic rq_cmpl fcoe x%x fcsok x%x" - " pkterr x%x fcoe_fc_crc_ok x%x, fcoe_enc_err" - " x%x\n", - fcoe, fcs_ok, packet_error, - fcoe_fc_crc_ok, fcoe_enc_error); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fcoe %x fcsok %x pkterr %x ffco %x fee %x\n", + fcoe, fcs_ok, packet_error, + fcoe_fnic_crc_ok, fcoe_enc_error); goto drop; } spin_lock_irqsave(&fnic->fnic_lock, flags); if (fnic->stop_rx_link_events) { spin_unlock_irqrestore(&fnic->fnic_lock, flags); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic->stop_rx_link_events: %d\n", + fnic->stop_rx_link_events); goto drop; } - fr_dev(fp) = fnic->lport; + spin_unlock_irqrestore(&fnic->fnic_lock, flags); - if ((fnic_fc_trace_set_data(fnic->lport->host->host_no, FNIC_FC_RECV, - (char *)skb->data, skb->len)) != 0) { - printk(KERN_ERR "fnic ctlr frame trace error!!!"); + + frame_elem = + kzalloc(sizeof(struct fnic_frame_list), + GFP_ATOMIC); + if (!frame_elem) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Unable to alloc frame element of size: %ld\n", + sizeof(struct fnic_frame_list)); + goto drop; } + frame_elem->fp = fp; + frame_elem->rx_ethhdr_stripped = ethhdr_stripped; + frame_elem->frame_len = bytes_written; - skb_queue_tail(&fnic->frame_queue, skb); queue_work(fnic_event_queue, &fnic->frame_work); - return; + drop: - 
dev_kfree_skb_irq(skb); + kfree(fp); } static int fnic_rq_cmpl_handler_cont(struct vnic_dev *vdev, @@ -1089,62 +590,6 @@ void fnic_free_rq_buf(struct vnic_rq *rq, struct vnic_rq_buf *buf) buf->os_buf = NULL; } -/** - * fnic_eth_send() - Send Ethernet frame. - * @fip: fcoe_ctlr instance. - * @skb: Ethernet Frame, FIP, without VLAN encapsulation. - */ -void fnic_eth_send(struct fcoe_ctlr *fip, struct sk_buff *skb) -{ - struct fnic *fnic = fnic_from_ctlr(fip); - struct vnic_wq *wq = &fnic->wq[0]; - dma_addr_t pa; - struct ethhdr *eth_hdr; - struct vlan_ethhdr *vlan_hdr; - unsigned long flags; - - if (!fnic->vlan_hw_insert) { - eth_hdr = (struct ethhdr *)skb_mac_header(skb); - vlan_hdr = skb_push(skb, sizeof(*vlan_hdr) - sizeof(*eth_hdr)); - memcpy(vlan_hdr, eth_hdr, 2 * ETH_ALEN); - vlan_hdr->h_vlan_proto = htons(ETH_P_8021Q); - vlan_hdr->h_vlan_encapsulated_proto = eth_hdr->h_proto; - vlan_hdr->h_vlan_TCI = htons(fnic->vlan_id); - if ((fnic_fc_trace_set_data(fnic->lport->host->host_no, - FNIC_FC_SEND|0x80, (char *)eth_hdr, skb->len)) != 0) { - printk(KERN_ERR "fnic ctlr frame trace error!!!"); - } - } else { - if ((fnic_fc_trace_set_data(fnic->lport->host->host_no, - FNIC_FC_SEND|0x80, (char *)skb->data, skb->len)) != 0) { - printk(KERN_ERR "fnic ctlr frame trace error!!!"); - } - } - - pa = dma_map_single(&fnic->pdev->dev, skb->data, skb->len, - DMA_TO_DEVICE); - if (dma_mapping_error(&fnic->pdev->dev, pa)) { - printk(KERN_ERR "DMA mapping failed\n"); - goto free_skb; - } - - spin_lock_irqsave(&fnic->wq_lock[0], flags); - if (!vnic_wq_desc_avail(wq)) - goto irq_restore; - - fnic_queue_wq_eth_desc(wq, skb, pa, skb->len, - 0 /* hw inserts cos value */, - fnic->vlan_id, 1); - spin_unlock_irqrestore(&fnic->wq_lock[0], flags); - return; - -irq_restore: - spin_unlock_irqrestore(&fnic->wq_lock[0], flags); - dma_unmap_single(&fnic->pdev->dev, pa, skb->len, DMA_TO_DEVICE); -free_skb: - kfree_skb(skb); -} - /* * Send FC frame. */ @@ -1242,6 +687,25 @@ fdls_send_fcoe_frame(struct fnic *fnic, void *payload, int payload_sz, return ret; } +static int +fdls_send_fip_frame(struct fnic *fnic, void *payload, int payload_sz) +{ + uint8_t *frame; + int max_framesz = FNIC_FCOE_FRAME_MAXSZ; + int ret; + + frame = kzalloc(max_framesz, GFP_ATOMIC); + if (!frame) { + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic 0x%p Failed to allocate fip frame\n", fnic); + return -1; + } + memcpy(frame, (uint8_t *) payload, payload_sz); + ret = fnic_send_frame(fnic, frame, payload_sz); + + return ret; +} + void fnic_send_fcoe_frame(struct fnic_iport_s *iport, void *payload, int payload_sz) { @@ -1263,6 +727,18 @@ void fnic_send_fcoe_frame(struct fnic_iport_s *iport, void *payload, fdls_send_fcoe_frame(fnic, payload, payload_sz, srcmac, dstmac); } +int +fnic_send_fip_frame(struct fnic_iport_s *iport, void *payload, + int payload_sz) +{ + struct fnic *fnic = iport->fnic; + + if (fnic->in_remove) + return -1; + + return fdls_send_fip_frame(fnic, payload, payload_sz); +} + /** * fnic_flush_tx() - send queued frames. * @work: pointer to work element @@ -1336,44 +812,6 @@ fnic_fdls_register_portid(struct fnic_iport_s *iport, u32 port_id, return 0; } -/** - * fnic_set_eth_mode() - put fnic into ethernet mode. - * @fnic: fnic device - * - * Called without fnic lock held. 
- */ -static void fnic_set_eth_mode(struct fnic *fnic) -{ - unsigned long flags; - enum fnic_state old_state; - int ret; - - spin_lock_irqsave(&fnic->fnic_lock, flags); -again: - old_state = fnic->state; - switch (old_state) { - case FNIC_IN_FC_MODE: - case FNIC_IN_ETH_TRANS_FC_MODE: - default: - fnic->state = FNIC_IN_FC_TRANS_ETH_MODE; - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - - ret = fnic_fw_reset_handler(fnic); - - spin_lock_irqsave(&fnic->fnic_lock, flags); - if (fnic->state != FNIC_IN_FC_TRANS_ETH_MODE) - goto again; - if (ret) - fnic->state = old_state; - break; - - case FNIC_IN_FC_TRANS_ETH_MODE: - case FNIC_IN_ETH_MODE: - break; - } - spin_unlock_irqrestore(&fnic->fnic_lock, flags); -} - void fnic_free_txq(struct list_head *head) { struct fnic_frame_list *cur_frame, *next; @@ -1443,24 +881,3 @@ void fnic_free_wq_buf(struct vnic_wq *wq, struct vnic_wq_buf *buf) buf->os_buf = NULL; } -void fnic_fcoe_reset_vlans(struct fnic *fnic) -{ - unsigned long flags; - struct fcoe_vlan *vlan; - struct fcoe_vlan *next; - - /* - * indicate a link down to fcoe so that all fcf's are free'd - * might not be required since we did this before sending vlan - * discovery request - */ - spin_lock_irqsave(&fnic->vlans_lock, flags); - if (!list_empty(&fnic->vlans)) { - list_for_each_entry_safe(vlan, next, &fnic->vlans, list) { - list_del(&vlan->list); - kfree(vlan); - } - } - spin_unlock_irqrestore(&fnic->vlans_lock, flags); -} - diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c index 51076166431c..aa13425a010b 100644 --- a/drivers/scsi/fnic/fnic_main.c +++ b/drivers/scsi/fnic/fnic_main.c @@ -85,12 +85,13 @@ module_param(fnic_max_qdepth, uint, S_IRUGO|S_IWUSR); MODULE_PARM_DESC(fnic_max_qdepth, "Queue depth to report for each LUN"); static struct libfc_function_template fnic_transport_template = { - .lport_set_port_id = fnic_set_port_id, .fcp_abort_io = fnic_empty_scsi_cleanup, .fcp_cleanup = fnic_empty_scsi_cleanup, .exch_mgr_reset = fnic_exch_mgr_reset }; +struct workqueue_struct *fnic_fip_queue; + static int fnic_slave_alloc(struct scsi_device *sdev) { struct fc_rport *rport = starget_to_rport(scsi_target(sdev)); @@ -413,13 +414,6 @@ static void fnic_notify_timer(struct timer_list *t) round_jiffies(jiffies + FNIC_NOTIFY_TIMER_PERIOD)); } -static void fnic_fip_notify_timer(struct timer_list *t) -{ - struct fnic *fnic = from_timer(fnic, t, fip_timer); - - /* Placeholder function */ -} - static void fnic_notify_timer_start(struct fnic *fnic) { switch (vnic_dev_get_intr_mode(fnic->vdev)) { @@ -531,17 +525,6 @@ static void fnic_iounmap(struct fnic *fnic) iounmap(fnic->bar0.vaddr); } -/** - * fnic_get_mac() - get assigned data MAC address for FIP code. - * @lport: local port. 
- */ -static u8 *fnic_get_mac(struct fc_lport *lport) -{ - struct fnic *fnic = lport_priv(lport); - - return fnic->data_src_addr; -} - static void fnic_set_vlan(struct fnic *fnic, u16 vlan_id) { vnic_dev_set_default_vlan(fnic->vdev, vlan_id); @@ -814,26 +797,23 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) fnic->vlan_hw_insert = 1; fnic->vlan_id = 0; - /* Initialize the FIP fcoe_ctrl struct */ - fnic->ctlr.send = fnic_eth_send; - fnic->ctlr.update_mac = fnic_update_mac; - fnic->ctlr.get_src_addr = fnic_get_mac; if (fnic->config.flags & VFCF_FIP_CAPABLE) { dev_info(&fnic->pdev->dev, "firmware supports FIP\n"); /* enable directed and multicast */ vnic_dev_packet_filter(fnic->vdev, 1, 1, 0, 0, 0); vnic_dev_add_addr(fnic->vdev, FIP_ALL_ENODE_MACS); vnic_dev_add_addr(fnic->vdev, fnic->ctlr.ctl_src_addr); - fnic->set_vlan = fnic_set_vlan; fcoe_ctlr_init(&fnic->ctlr, FIP_MODE_AUTO); - timer_setup(&fnic->fip_timer, fnic_fip_notify_timer, 0); spin_lock_init(&fnic->vlans_lock); INIT_WORK(&fnic->fip_frame_work, fnic_handle_fip_frame); - INIT_WORK(&fnic->event_work, fnic_handle_event); INIT_WORK(&fnic->flush_work, fnic_flush_tx); - skb_queue_head_init(&fnic->fip_frame_queue); - INIT_LIST_HEAD(&fnic->evlist); - INIT_LIST_HEAD(&fnic->vlans); + INIT_LIST_HEAD(&fnic->fip_frame_queue); + INIT_LIST_HEAD(&fnic->vlan_list); + timer_setup(&fnic->retry_fip_timer, fnic_handle_fip_timer, 0); + timer_setup(&fnic->fcs_ka_timer, fnic_handle_fcs_ka_timer, 0); + timer_setup(&fnic->enode_ka_timer, fnic_handle_enode_ka_timer, 0); + timer_setup(&fnic->vn_ka_timer, fnic_handle_vn_ka_timer, 0); + fnic->set_vlan = fnic_set_vlan; } else { dev_info(&fnic->pdev->dev, "firmware uses non-FIP mode\n"); fcoe_ctlr_init(&fnic->ctlr, FIP_MODE_NON_FIP); @@ -1030,10 +1010,13 @@ static void fnic_remove(struct pci_dev *pdev) fnic_free_txq(&fnic->tx_queue); if (fnic->config.flags & VFCF_FIP_CAPABLE) { - del_timer_sync(&fnic->fip_timer); - skb_queue_purge(&fnic->fip_frame_queue); + del_timer_sync(&fnic->retry_fip_timer); + del_timer_sync(&fnic->fcs_ka_timer); + del_timer_sync(&fnic->enode_ka_timer); + del_timer_sync(&fnic->vn_ka_timer); + + fnic_free_txq(&fnic->fip_frame_queue); fnic_fcoe_reset_vlans(fnic); - fnic_fcoe_evlist_free(fnic); } if ((fnic_fdmi_support == 1) && (fnic->iport.fabric.fdmi_pending > 0)) From patchwork Wed Oct 2 18:24:05 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Karan Tilak Kumar X-Patchwork-Id: 832357 Received: from rcdn-iport-3.cisco.com (rcdn-iport-3.cisco.com [173.37.86.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D5240212F0E; Wed, 2 Oct 2024 18:29:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=173.37.86.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727893749; cv=none; b=mSUMnVJfWztVLMBJ39VjsjaatsfwYHNyMA+8WJ5Itr0pMzwC8TYqqOqMrQCv2w980+ABxwZPPodqD4zWdcgQxmeCpJwSIKNoP8bwPupelLu3eaBmznvDG0uTBOnwyVoS5HM1O0aftIYCq6D8ac8IlaQKCK1CTawpo5WAX5D05+o= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727893749; c=relaxed/simple; bh=cEmzRh1lrN2yFgWWkotXrB153IRi0Ei/0ZPu99fYZLg=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; 
b=i2TgRRqZeMPY1FzMZ1TepH3lzjANFy7T+i6/y6Vh6fDM7D7HppXrty1X8UOfWc1v8BBWaNiDo14OFE6fmIbHYSVUjA//cnjpSwDO02BuBwv5o7YgB0k2O3swLNGpYG7tq6lRlGU3njfmvp3bFSS5aNb2GYtRUMKR0XQbYw1ecVE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=cisco.com; spf=pass smtp.mailfrom=cisco.com; dkim=pass (1024-bit key) header.d=cisco.com header.i=@cisco.com header.b=lDMfOaVh; arc=none smtp.client-ip=173.37.86.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=cisco.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=cisco.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=cisco.com header.i=@cisco.com header.b="lDMfOaVh" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cisco.com; i=@cisco.com; l=51200; q=dns/txt; s=iport; t=1727893746; x=1729103346; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=wDVbWvcIORvOiWAQD2cAvatvPjo8ooWdhvDVbaYqhWg=; b=lDMfOaVhJZtrQKmQ5qjvVxZyyOw1kDBSVxk4wv4Awar/j8ukU5UwI7G6 clKqKETOgHqPqy0aTEI5gh0+hpfPLk0oU26m+pPtNvIkOsDub6QF1Isgu wbg6lQ6hL2CmW99NEfcecHPrmqdMwcwvNTwxdvm157GXjTqCnwFjqehY9 w=; X-CSE-ConnectionGUID: P786mfZyRau43K+GmGJopg== X-CSE-MsgGUID: JhGD3ZTpSmeMfgRH5S1aNw== X-IronPort-AV: E=Sophos;i="6.11,172,1725321600"; d="scan'208";a="269301339" Received: from rcdn-core-8.cisco.com ([173.37.93.144]) by rcdn-iport-3.cisco.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Oct 2024 18:29:04 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by rcdn-core-8.cisco.com (8.15.2/8.15.2) with ESMTPSA id 492IOHQs009807 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Wed, 2 Oct 2024 18:29:04 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, aeasi@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar Subject: [PATCH v4 09/14] scsi: fnic: Modify IO path to use FDLS Date: Wed, 2 Oct 2024 11:24:05 -0700 Message-Id: <20241002182410.68093-10-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20241002182410.68093-1-kartilak@cisco.com> References: <20241002182410.68093-1-kartilak@cisco.com> Precedence: bulk X-Mailing-List: linux-scsi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: rcdn-core-8.cisco.com Modify IO path to use FDLS. Add helper functions to process IOs. Remove unused template functions. Cleanup obsolete code. Refactor old function definitions. Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Reviewed-by: Gian Carlo Boffa Reviewed-by: Arun Easi Signed-off-by: Karan Tilak Kumar --- Changes between v2 and v3: Replace fnic->host with fnic->lport->host to prevent compilation errors. Changes between v1 and v2: Replace pr_err with printk. Incorporate review comments by Hannes: Restore usage of tagset iterators. 
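Note (illustration only, not part of the patch): the new I/O accounting helpers added below -- fnic_count_ioreqs_wq(), fnic_count_ioreqs() and fnic_count_all_ioreqs() -- walk each hardware copy WQ's io_req_table under that queue's lock and count the non-NULL slots, optionally filtered by port id; fnic_cleanup_io() and fnic_rport_exch_reset() then poll these counts until outstanding I/O drains. The standalone sketch below models only that counting logic, with made-up queue and table sizes so it can be compiled and run outside the driver; it is not driver code and omits the locking.

#include <stdio.h>

#define NUM_HWQ   4    /* hypothetical number of hardware copy WQs */
#define TABLE_SZ  64   /* hypothetical per-queue ioreq table size */

struct io_req {
	unsigned int port_id;
};

/* One slot table per hardware queue; a NULL slot means "free". */
static struct io_req *io_req_table[NUM_HWQ][TABLE_SZ];

/* Count in-flight requests on one queue; port_id == 0 counts all. */
static unsigned int count_ioreqs_wq(unsigned int hwq, unsigned int port_id)
{
	unsigned int i, count = 0;

	for (i = 0; i < TABLE_SZ; i++)
		if (io_req_table[hwq][i] &&
		    (!port_id || io_req_table[hwq][i]->port_id == port_id))
			count++;
	return count;
}

/* Sum across all hardware queues, as fnic_count_ioreqs() does. */
static unsigned int count_ioreqs(unsigned int port_id)
{
	unsigned int hwq, count = 0;

	for (hwq = 0; hwq < NUM_HWQ; hwq++)
		count += count_ioreqs_wq(hwq, port_id);
	return count;
}

int main(void)
{
	static struct io_req req = { .port_id = 0x010203 };

	io_req_table[2][5] = &req;
	printf("outstanding for port 0x010203: %u\n", count_ioreqs(0x010203));
	printf("outstanding total:             %u\n", count_ioreqs(0));
	return 0;
}

A port id of zero counts every outstanding request, which is how fnic_count_all_ioreqs() is built on top of fnic_count_ioreqs() in the hunks that follow.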
--- drivers/scsi/fnic/fnic.h | 20 +- drivers/scsi/fnic/fnic_io.h | 3 + drivers/scsi/fnic/fnic_main.c | 5 +- drivers/scsi/fnic/fnic_scsi.c | 846 +++++++++++++++++++-------------- drivers/scsi/fnic/fnic_stats.h | 2 - 5 files changed, 513 insertions(+), 363 deletions(-) diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h index 4f38cbae1994..c6fe9eec9a0c 100644 --- a/drivers/scsi/fnic/fnic.h +++ b/drivers/scsi/fnic/fnic.h @@ -40,6 +40,7 @@ #define FNIC_DFLT_IO_REQ 256 /* Default scsi_cmnd tag map entries */ #define FNIC_DFLT_QUEUE_DEPTH 256 #define FNIC_STATS_RATE_LIMIT 4 /* limit rate at which stats are pulled up */ +#define LUN0_DELAY_TIME 9 /* * Tag bits used for special requests. @@ -472,7 +473,6 @@ int fnic_set_intr_mode_msix(struct fnic *fnic); void fnic_free_intr(struct fnic *fnic); int fnic_request_intr(struct fnic *fnic); -int fnic_send(struct fc_lport *, struct fc_frame *); void fnic_free_wq_buf(struct vnic_wq *wq, struct vnic_wq_buf *buf); void fnic_handle_frame(struct work_struct *work); void fnic_tport_event_handler(struct work_struct *work); @@ -489,11 +489,9 @@ int fnic_abort_cmd(struct scsi_cmnd *); int fnic_device_reset(struct scsi_cmnd *); int fnic_eh_host_reset_handler(struct scsi_cmnd *sc); int fnic_host_reset(struct Scsi_Host *shost); -int fnic_reset(struct Scsi_Host *); -void fnic_scsi_cleanup(struct fc_lport *); -void fnic_scsi_abort_io(struct fc_lport *); -void fnic_empty_scsi_cleanup(struct fc_lport *); -void fnic_exch_mgr_reset(struct fc_lport *, u32, u32); +void fnic_reset(struct Scsi_Host *shost); +int fnic_issue_fc_host_lip(struct Scsi_Host *shost); +void fnic_scsi_fcpio_reset(struct fnic *fnic); int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do, unsigned int cq_index); int fnic_wq_cmpl_handler(struct fnic *fnic, int); int fnic_flogi_reg_handler(struct fnic *fnic, u32); @@ -505,7 +503,8 @@ const char *fnic_state_to_str(unsigned int state); void fnic_mq_map_queues_cpus(struct Scsi_Host *host); void fnic_log_q_error(struct fnic *fnic); void fnic_handle_link_event(struct fnic *fnic); - +void fnic_stats_debugfs_init(struct fnic *fnic); +void fnic_stats_debugfs_remove(struct fnic *fnic); int fnic_is_abts_pending(struct fnic *, struct scsi_cmnd *); void fnic_handle_fip_frame(struct work_struct *work); @@ -526,5 +525,12 @@ int fnic_get_desc_by_devid(struct pci_dev *pdev, char **desc, void fnic_fdls_link_status_change(struct fnic *fnic, int linkup); void fnic_delete_fcp_tports(struct fnic *fnic); void fnic_flush_tport_event_list(struct fnic *fnic); +int fnic_count_ioreqs_wq(struct fnic *fnic, u32 hwq, u32 portid); +unsigned int fnic_count_ioreqs(struct fnic *fnic, u32 portid); +unsigned int fnic_count_all_ioreqs(struct fnic *fnic); +unsigned int fnic_count_lun_ioreqs_wq(struct fnic *fnic, u32 hwq, + struct scsi_device *device); +unsigned int fnic_count_lun_ioreqs(struct fnic *fnic, + struct scsi_device *device); #endif /* _FNIC_H_ */ diff --git a/drivers/scsi/fnic/fnic_io.h b/drivers/scsi/fnic/fnic_io.h index 6fe642cb387b..0d974e040ab7 100644 --- a/drivers/scsi/fnic/fnic_io.h +++ b/drivers/scsi/fnic/fnic_io.h @@ -7,6 +7,7 @@ #define _FNIC_IO_H_ #include +#include "fnic_fdls.h" #define FNIC_DFLT_SG_DESC_CNT 32 #define FNIC_MAX_SG_DESC_CNT 256 /* Maximum descriptors per sgl */ @@ -41,6 +42,8 @@ enum fnic_ioreq_state { }; struct fnic_io_req { + struct fnic_iport_s *iport; + struct fnic_tport_s *tport; struct host_sg_desc *sgl_list; /* sgl list */ void *sgl_list_alloc; /* sgl list address used for free */ dma_addr_t sense_buf_pa; /* dma address 
for sense buffer*/ diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c index 0a189714cebf..bff2f125db9e 100644 --- a/drivers/scsi/fnic/fnic_main.c +++ b/drivers/scsi/fnic/fnic_main.c @@ -85,9 +85,6 @@ module_param(fnic_max_qdepth, uint, S_IRUGO|S_IWUSR); MODULE_PARM_DESC(fnic_max_qdepth, "Queue depth to report for each LUN"); static struct libfc_function_template fnic_transport_template = { - .fcp_abort_io = fnic_empty_scsi_cleanup, - .fcp_cleanup = fnic_empty_scsi_cleanup, - .exch_mgr_reset = fnic_exch_mgr_reset }; struct workqueue_struct *fnic_fip_queue; @@ -162,7 +159,7 @@ static struct fc_function_template fnic_fc_functions = { .show_starget_port_id = 1, .show_rport_dev_loss_tmo = 1, .set_rport_dev_loss_tmo = fnic_set_rport_dev_loss_tmo, - .issue_fc_host_lip = fnic_reset, + .issue_fc_host_lip = fnic_issue_fc_host_lip, .get_fc_host_stats = fnic_get_stats, .reset_fc_host_stats = fnic_reset_host_stats, .dd_fcrport_size = sizeof(struct fc_rport_libfc_priv), diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c index e0a38189764e..c9907e05b54f 100644 --- a/drivers/scsi/fnic/fnic_scsi.c +++ b/drivers/scsi/fnic/fnic_scsi.c @@ -25,9 +25,12 @@ #include #include #include +#include #include "fnic_io.h" #include "fnic.h" +static void fnic_cleanup_io(struct fnic *fnic, int exclude_id); + const char *fnic_state_str[] = { [FNIC_IN_FC_MODE] = "FNIC_IN_FC_MODE", [FNIC_IN_FC_TRANS_ETH_MODE] = "FNIC_IN_FC_TRANS_ETH_MODE", @@ -65,6 +68,18 @@ static const char *fcpio_status_str[] = { [FCPIO_LUNMAP_CHNG_PEND] = "FCPIO_LUNHMAP_CHNG_PEND", }; +enum terminate_io_return { + TERM_SUCCESS = 0, + TERM_NO_SC = 1, + TERM_IO_REQ_NOT_FOUND, + TERM_ANOTHER_PORT, + TERM_GSTATE, + TERM_IO_BLOCKED, + TERM_OUT_OF_WQ_DESC, + TERM_TIMED_OUT, + TERM_MISC, +}; + const char *fnic_state_to_str(unsigned int state) { if (state >= ARRAY_SIZE(fnic_state_str) || !fnic_state_str[state]) @@ -90,8 +105,6 @@ static const char *fnic_fcpio_status_to_str(unsigned int status) return fcpio_status_str[status]; } -static void fnic_cleanup_io(struct fnic *fnic); - /* * Unmap the data buffer and sense buffer for an io_req, * also unmap and free the device-private scatter/gather list. 
@@ -114,6 +127,80 @@ static void fnic_release_ioreq_buf(struct fnic *fnic, SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE); } +int fnic_count_ioreqs_wq(struct fnic *fnic, u32 hwq, u32 portid) +{ + unsigned long flags = 0; + int i = 0, count = 0; + + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); + for (i = 0; i != fnic->sw_copy_wq[hwq].ioreq_table_size; ++i) { + if (fnic->sw_copy_wq[hwq].io_req_table[i] != NULL && + (!portid + || fnic->sw_copy_wq[hwq].io_req_table[i]->port_id == portid)) + count++; + } + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); + + return count; +} + +unsigned int fnic_count_ioreqs(struct fnic *fnic, u32 portid) +{ + int i; + unsigned int count = 0; + + for (i = 0; i < fnic->wq_copy_count; i++) + count += fnic_count_ioreqs_wq(fnic, i, portid); + + return count; +} + +unsigned int fnic_count_all_ioreqs(struct fnic *fnic) +{ + return fnic_count_ioreqs(fnic, 0); +} + +unsigned int fnic_count_lun_ioreqs_wq(struct fnic *fnic, u32 hwq, + struct scsi_device *device) +{ + struct fnic_io_req *io_req; + int i; + unsigned int count = 0; + unsigned long flags = 0; + + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); + for (i = 0; i != fnic->sw_copy_wq[hwq].ioreq_table_size; ++i) { + io_req = fnic->sw_copy_wq[hwq].io_req_table[i]; + + if (io_req != NULL) { + struct scsi_cmnd *sc = + scsi_host_find_tag(fnic->lport->host, io_req->tag); + + if (!sc) + continue; + + if (sc->device == device) + count++; + } + } + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); + + return count; +} + +unsigned int fnic_count_lun_ioreqs(struct fnic *fnic, + struct scsi_device *device) +{ + int hwq; + unsigned int count = 0; + + /*count if any pending IOs on this lun */ + for (hwq = 0; hwq < fnic->wq_copy_count; hwq++) + count += fnic_count_lun_ioreqs_wq(fnic, hwq, device); + + return count; +} + /* Free up Copy Wq descriptors. 
Called with copy_wq lock held */ static int free_wq_copy_descs(struct fnic *fnic, struct vnic_wq_copy *wq, unsigned int hwq) { @@ -179,12 +266,11 @@ int fnic_fw_reset_handler(struct fnic *fnic) struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0]; int ret = 0; unsigned long flags; + unsigned int ioreq_count; /* indicate fwreset to io path */ fnic_set_state_flags(fnic, FNIC_FLAGS_FWRESET); - - fnic_free_txq(&fnic->frame_queue); - fnic_free_txq(&fnic->tx_queue); + ioreq_count = fnic_count_all_ioreqs(fnic); /* wait for io cmpl */ while (atomic_read(&fnic->in_flight)) @@ -231,10 +317,10 @@ int fnic_flogi_reg_handler(struct fnic *fnic, u32 fc_id) { struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0]; enum fcpio_flogi_reg_format_type format; - struct fc_lport *lp = fnic->lport; u8 gw_mac[ETH_ALEN]; int ret = 0; unsigned long flags; + struct fnic_iport_s *iport = &fnic->iport; spin_lock_irqsave(&fnic->wq_copy_lock[0], flags); @@ -246,28 +332,23 @@ int fnic_flogi_reg_handler(struct fnic *fnic, u32 fc_id) goto flogi_reg_ioreq_end; } - if (fnic->ctlr.map_dest) { - eth_broadcast_addr(gw_mac); - format = FCPIO_FLOGI_REG_DEF_DEST; - } else { - memcpy(gw_mac, fnic->ctlr.dest_addr, ETH_ALEN); - format = FCPIO_FLOGI_REG_GW_DEST; - } + memcpy(gw_mac, fnic->iport.fcfmac, ETH_ALEN); + format = FCPIO_FLOGI_REG_GW_DEST; - if ((fnic->config.flags & VFCF_FIP_CAPABLE) && !fnic->ctlr.map_dest) { + if (fnic->config.flags & VFCF_FIP_CAPABLE) { fnic_queue_wq_copy_desc_fip_reg(wq, SCSI_NO_TAG, fc_id, gw_mac, - fnic->data_src_addr, - lp->r_a_tov, lp->e_d_tov); - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "FLOGI FIP reg issued fcid %x src %pM dest %pM\n", - fc_id, fnic->data_src_addr, gw_mac); + fnic->iport.fpma, + iport->r_a_tov, iport->e_d_tov); + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FLOGI FIP reg issued fcid: 0x%x src %p dest %p\n", + fc_id, fnic->iport.fpma, gw_mac); } else { fnic_queue_wq_copy_desc_flogi_reg(wq, SCSI_NO_TAG, format, fc_id, gw_mac); FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, - "FLOGI reg issued fcid 0x%x map %d dest 0x%p\n", - fc_id, fnic->ctlr.map_dest, gw_mac); + "FLOGI reg issued fcid 0x%x dest %p\n", + fc_id, gw_mac); } atomic64_inc(&fnic->fnic_stats.fw_stats.active_fw_reqs); @@ -295,13 +376,17 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic, { struct scatterlist *sg; struct fc_rport *rport = starget_to_rport(scsi_target(sc->device)); - struct fc_rport_libfc_priv *rp = rport->dd_data; struct host_sg_desc *desc; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; unsigned int i; int flags; u8 exch_flags; struct scsi_lun fc_lun; + struct fnic_tport_s *tport; + struct rport_dd_data_s *rdd_data; + + rdd_data = rport->dd_data; + tport = rdd_data->tport; if (sg_count) { /* For each SGE, create a device desc entry */ @@ -356,7 +441,7 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic, exch_flags = 0; if ((fnic->config.flags & VFCF_FCP_SEQ_LVL_ERR) && - (rp->flags & FC_RP_FLAGS_RETRY)) + (tport->tgt_flags & FDLS_FC_RP_FLAGS_RETRY)) exch_flags |= FCPIO_ICMND_SRFLAG_RETRY; fnic_queue_wq_copy_desc_icmnd_16(wq, mqtag, @@ -371,8 +456,8 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic, sc->cmnd, sc->cmd_len, scsi_bufflen(sc), fc_lun.scsi_lun, io_req->port_id, - rport->maxframe_size, rp->r_a_tov, - rp->e_d_tov); + tport->max_payload_size, + tport->r_a_tov, tport->e_d_tov); atomic64_inc(&fnic->fnic_stats.fw_stats.active_fw_reqs); if (atomic64_read(&fnic->fnic_stats.fw_stats.active_fw_reqs) > @@ -388,10 +473,10 
@@ int fnic_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc) struct request *const rq = scsi_cmd_to_rq(sc); uint32_t mqtag = 0; void (*done)(struct scsi_cmnd *) = scsi_done; - struct fc_lport *lp = shost_priv(sc->device->host); struct fc_rport *rport; struct fnic_io_req *io_req = NULL; - struct fnic *fnic = lport_priv(lp); + struct fnic *fnic = *((struct fnic **) shost_priv(sc->device->host)); + struct fnic_iport_s *iport = NULL; struct fnic_stats *fnic_stats = &fnic->fnic_stats; struct vnic_wq_copy *wq; int ret = 1; @@ -400,32 +485,14 @@ int fnic_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc) unsigned long flags = 0; unsigned long ptr; int io_lock_acquired = 0; - struct fc_rport_libfc_priv *rp; uint16_t hwq = 0; - - mqtag = blk_mq_unique_tag(rq); - spin_lock_irqsave(&fnic->fnic_lock, flags); - - if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) { - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, - "fnic IO blocked flags: 0x%lx. Returning SCSI_MLQUEUE_HOST_BUSY\n", - fnic->state_flags); - return SCSI_MLQUEUE_HOST_BUSY; - } - - if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_FWRESET))) { - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, - "fnic flags: 0x%lx. Returning SCSI_MLQUEUE_HOST_BUSY\n", - fnic->state_flags); - return SCSI_MLQUEUE_HOST_BUSY; - } + struct fnic_tport_s *tport = NULL; + struct rport_dd_data_s *rdd_data; + uint16_t lun0_delay = 0; rport = starget_to_rport(scsi_target(sc->device)); if (!rport) { - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, "returning DID_NO_CONNECT for IO as rport is NULL\n"); sc->result = DID_NO_CONNECT << 16; done(sc); @@ -434,50 +501,95 @@ int fnic_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc) ret = fc_remote_port_chkready(rport); if (ret) { - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, "rport is not ready\n"); - atomic64_inc(&fnic_stats->misc_stats.rport_not_ready); sc->result = ret; done(sc); return 0; } - rp = rport->dd_data; - if (!rp || rp->rp_state == RPORT_ST_DELETE) { - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "rport 0x%x removed, returning DID_NO_CONNECT\n", - rport->port_id); + mqtag = blk_mq_unique_tag(rq); + spin_lock_irqsave(&fnic->fnic_lock, flags); + iport = &fnic->iport; - atomic64_inc(&fnic_stats->misc_stats.rport_not_ready); - sc->result = DID_NO_CONNECT<<16; + if (iport->state != FNIC_IPORT_STATE_READY) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "returning DID_NO_CONNECT for IO as iport state: %d\n", + iport->state); + sc->result = DID_NO_CONNECT << 16; done(sc); return 0; } - if (rp->rp_state != RPORT_ST_READY) { - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "rport 0x%x in state 0x%x, returning DID_IMM_RETRY\n", - rport->port_id, rp->rp_state); + /* fc_remote_port_add() may have added the tport to + * fc_transport but dd_data not yet set + */ + rdd_data = rport->dd_data; + tport = rdd_data->tport; + if (!tport || (rdd_data->iport != iport)) { + FNIC_SCSI_DBG(KERN_ERR, 
fnic->lport->host, fnic->fnic_num, + "dd_data not yet set in SCSI for rport portid: 0x%x\n", + rport->port_id); + tport = fnic_find_tport_by_fcid(iport, rport->port_id); + if (!tport) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "returning DID_BUS_BUSY for IO as tport not found for: 0x%x\n", + rport->port_id); + sc->result = DID_BUS_BUSY << 16; + done(sc); + return 0; + } + + /* Re-assign same params as in fnic_fdls_add_tport */ + rport->maxframe_size = FNIC_FC_MAX_PAYLOAD_LEN; + rport->supported_classes = + FC_COS_CLASS3 | FC_RPORT_ROLE_FCP_TARGET; + /* the dd_data is allocated by fctransport of size dd_fcrport_size */ + rdd_data = rport->dd_data; + rdd_data->tport = tport; + rdd_data->iport = iport; + tport->rport = rport; + tport->flags |= FNIC_FDLS_SCSI_REGISTERED; + } - sc->result = DID_IMM_RETRY << 16; + if ((tport->state != FDLS_TGT_STATE_READY) + && (tport->state != FDLS_TGT_STATE_ADISC)) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "returning DID_NO_CONNECT for IO as tport state: %d\n", + tport->state); + sc->result = DID_NO_CONNECT << 16; done(sc); return 0; } - if (lp->state != LPORT_ST_READY || !(lp->link_up)) { + atomic_inc(&fnic->in_flight); + atomic_inc(&tport->in_flight); + + if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) { + atomic_dec(&fnic->in_flight); + atomic_dec(&tport->in_flight); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + return SCSI_MLQUEUE_HOST_BUSY; + } + + if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_FWRESET))) { spin_unlock_irqrestore(&fnic->fnic_lock, flags); FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, - "state not ready: %d/link not up: %d Returning HOST_BUSY\n", - lp->state, lp->link_up); + "fnic flags FW reset: 0x%lx. 
Returning SCSI_MLQUEUE_HOST_BUSY\n", + fnic->state_flags); return SCSI_MLQUEUE_HOST_BUSY; } - atomic_inc(&fnic->in_flight); + if (!tport->lun0_delay) { + lun0_delay = 1; + tport->lun0_delay++; + } spin_unlock_irqrestore(&fnic->fnic_lock, flags); + fnic_priv(sc)->state = FNIC_IOREQ_NOT_INITED; fnic_priv(sc)->flags = FNIC_NO_FLAGS; @@ -499,6 +611,7 @@ int fnic_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc) goto out; } + io_req->tport = tport; /* Determine the type of scatter/gather list we need */ io_req->sgl_cnt = sg_count; io_req->sgl_type = FNIC_SGL_CACHE_DFLT; @@ -575,6 +688,7 @@ int fnic_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc) mempool_free(io_req, fnic->io_req_pool); } atomic_dec(&fnic->in_flight); + atomic_dec(&tport->in_flight); return ret; } else { atomic64_inc(&fnic_stats->io_stats.active_ios); @@ -602,6 +716,14 @@ int fnic_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc) spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); atomic_dec(&fnic->in_flight); + atomic_dec(&tport->in_flight); + + if (lun0_delay) { + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "LUN0 delay\n"); + mdelay(LUN0_DELAY_TIME); + } + return ret; } @@ -625,7 +747,7 @@ static int fnic_fcpio_fw_reset_cmpl_handler(struct fnic *fnic, atomic64_inc(&reset_stats->fw_reset_completions); /* Clean up all outstanding io requests */ - fnic_cleanup_io(fnic); + fnic_cleanup_io(fnic, SCSI_NO_TAG); atomic64_set(&fnic->fnic_stats.fw_stats.active_fw_reqs, 0); atomic64_set(&fnic->fnic_stats.io_stats.active_ios, 0); @@ -646,12 +768,6 @@ static int fnic_fcpio_fw_reset_cmpl_handler(struct fnic *fnic, "reset failed with header status: %s\n", fnic_fcpio_status_to_str(hdr_status)); - /* - * Unable to change to eth mode, cannot send out flogi - * Change state to fc mode, so that subsequent Flogi - * requests from libFC will cause more attempts to - * reset the firmware. Free the cached flogi - */ fnic->state = FNIC_IN_FC_MODE; atomic64_inc(&reset_stats->fw_reset_failures); ret = -1; @@ -664,15 +780,14 @@ static int fnic_fcpio_fw_reset_cmpl_handler(struct fnic *fnic, ret = -1; } - /* Thread removing device blocks till firmware reset is complete */ - if (fnic->remove_wait) - complete(fnic->remove_wait); + if (fnic->fw_reset_done) + complete(fnic->fw_reset_done); /* * If fnic is being removed, or fw reset failed * free the flogi frame. 
Else, send it out */ - if (fnic->remove_wait || ret) { + if (ret) { spin_unlock_irqrestore(&fnic->fnic_lock, flags); fnic_free_txq(&fnic->tx_queue); goto reset_cmpl_handler_end; @@ -711,12 +826,12 @@ static int fnic_fcpio_flogi_reg_cmpl_handler(struct fnic *fnic, /* Check flogi registration completion status */ if (!hdr_status) { FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "flog reg succeeded\n"); + "FLOGI reg succeeded\n"); fnic->state = FNIC_IN_FC_MODE; } else { FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "fnic flogi reg :failed %s\n", + "fnic flogi reg failed: %s\n", fnic_fcpio_status_to_str(hdr_status)); fnic->state = FNIC_IN_ETH_MODE; ret = -1; @@ -1023,15 +1138,6 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, unsigned int cq_ind jiffies_to_msecs(jiffies - start_time)), desc, cmd_trace, fnic_flags_and_state(sc)); - if (sc->sc_data_direction == DMA_FROM_DEVICE) { - fnic->lport->host_stats.fcp_input_requests++; - fnic->fcp_input_bytes += xfer_len; - } else if (sc->sc_data_direction == DMA_TO_DEVICE) { - fnic->lport->host_stats.fcp_output_requests++; - fnic->fcp_output_bytes += xfer_len; - } else - fnic->lport->host_stats.fcp_control_requests++; - /* Call SCSI completion function to complete the IO */ scsi_done(sc); @@ -1414,8 +1520,8 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) struct request *const rq = scsi_cmd_to_rq(sc); struct fnic *fnic = data; struct fnic_io_req *io_req; - unsigned long flags = 0; unsigned long start_time = 0; + unsigned long flags; struct fnic_stats *fnic_stats = &fnic->fnic_stats; uint16_t hwq = 0; int tag; @@ -1439,7 +1545,7 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) } if ((fnic_priv(sc)->flags & FNIC_DEVICE_RESET) && - !(fnic_priv(sc)->flags & FNIC_DEV_RST_DONE)) { + !(fnic_priv(sc)->flags & FNIC_DEV_RST_DONE)) { /* * We will be here only when FW completes reset * without sending completions for outstanding ios. 
@@ -1449,6 +1555,7 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) complete(io_req->dr_done); else if (io_req && io_req->abts_done) complete(io_req->abts_done); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } else if (fnic_priv(sc)->flags & FNIC_DEVICE_RESET) { @@ -1458,19 +1565,19 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) fnic_priv(sc)->io_req = NULL; io_req->sc = NULL; + start_time = io_req->start_time; spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); /* * If there is a scsi_cmnd associated with this io_req, then * free the corresponding state */ - start_time = io_req->start_time; fnic_release_ioreq_buf(fnic, io_req, sc); mempool_free(io_req, fnic->io_req_pool); sc->result = DID_TRANSPORT_DISRUPTED << 16; FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, - "mqtag:0x%x tag: 0x%x sc:0x%p duration = %lu DID_TRANSPORT_DISRUPTED\n", + "mqtag: 0x%x tag: 0x%x sc: 0x%p duration = %lu DID_TRANSPORT_DISRUPTED\n", mqtag, tag, sc, (jiffies - start_time)); if (atomic64_read(&fnic->io_cmpl_skip)) @@ -1479,23 +1586,60 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) atomic64_inc(&fnic_stats->io_stats.io_completions); FNIC_TRACE(fnic_cleanup_io, - sc->device->host->host_no, tag, sc, - jiffies_to_msecs(jiffies - start_time), - 0, ((u64)sc->cmnd[0] << 32 | - (u64)sc->cmnd[2] << 24 | - (u64)sc->cmnd[3] << 16 | - (u64)sc->cmnd[4] << 8 | sc->cmnd[5]), - fnic_flags_and_state(sc)); - + sc->device->host->host_no, tag, sc, + jiffies_to_msecs(jiffies - start_time), + 0, ((u64) sc->cmnd[0] << 32 | + (u64) sc->cmnd[2] << 24 | + (u64) sc->cmnd[3] << 16 | + (u64) sc->cmnd[4] << 8 | sc->cmnd[5]), + (((u64) fnic_priv(sc)->flags << 32) | fnic_priv(sc)-> + state)); + + /* Complete the command to SCSI */ scsi_done(sc); - return true; } -static void fnic_cleanup_io(struct fnic *fnic) +static void fnic_cleanup_io(struct fnic *fnic, int exclude_id) { + unsigned int io_count = 0; + unsigned long flags; + struct fnic_io_req *io_req = NULL; + struct scsi_cmnd *sc = NULL; + + io_count = fnic_count_all_ioreqs(fnic); + FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, + "Outstanding ioreq count: %d active io count: %lld Waiting\n", + io_count, + atomic64_read(&fnic->fnic_stats.io_stats.active_ios)); + scsi_host_busy_iter(fnic->lport->host, - fnic_cleanup_io_iter, fnic); + fnic_cleanup_io_iter, fnic); + + /* with sg3utils device reset, SC needs to be retrieved from ioreq */ + spin_lock_irqsave(&fnic->wq_copy_lock[0], flags); + io_req = fnic->sw_copy_wq[0].io_req_table[fnic->fnic_max_tag_id]; + if (io_req) { + sc = io_req->sc; + if (sc) { + if ((fnic_priv(sc)->flags & FNIC_DEVICE_RESET) + && !(fnic_priv(sc)->flags & FNIC_DEV_RST_DONE)) { + fnic_priv(sc)->flags |= FNIC_DEV_RST_DONE; + if (io_req && io_req->dr_done) + complete(io_req->dr_done); + } + } + } + spin_unlock_irqrestore(&fnic->wq_copy_lock[0], flags); + + while ((io_count = fnic_count_all_ioreqs(fnic))) { + FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, + "Outstanding ioreq count: %d active io count: %lld Waiting\n", + io_count, + atomic64_read(&fnic->fnic_stats.io_stats.active_ios)); + + schedule_timeout(msecs_to_jiffies(100)); + } } void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq, @@ -1567,10 +1711,13 @@ static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag, struct vnic_wq_copy *wq = &fnic->hw_copy_wq[hwq]; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; unsigned long flags; + struct fnic_tport_s 
*tport = io_req->tport; spin_lock_irqsave(&fnic->fnic_lock, flags); if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) { + atomic_dec(&fnic->in_flight); + atomic_dec(&tport->in_flight); spin_unlock_irqrestore(&fnic->fnic_lock, flags); return 1; } else @@ -1585,6 +1732,7 @@ static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag, if (!vnic_wq_copy_desc_avail(wq)) { spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); atomic_dec(&fnic->in_flight); + atomic_dec(&tport->in_flight); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "fnic_queue_abort_io_req: failure: no descriptors\n"); atomic64_inc(&misc_stats->abts_cpwq_alloc_failures); @@ -1619,20 +1767,24 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data) struct fnic *fnic = iter_data->fnic; int abt_tag = 0; struct fnic_io_req *io_req; - unsigned long flags; struct reset_stats *reset_stats = &fnic->fnic_stats.reset_stats; struct terminate_stats *term_stats = &fnic->fnic_stats.term_stats; struct scsi_lun fc_lun; enum fnic_ioreq_state old_ioreq_state; uint16_t hwq = 0; + unsigned long flags; abt_tag = blk_mq_unique_tag(rq); hwq = blk_mq_unique_tag_to_hwq(abt_tag); - spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); + if (!sc) { + FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, + "sc is NULL abt_tag: 0x%x hwq: %d\n", abt_tag, hwq); + return true; + } + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; - if (!io_req || io_req->port_id != iter_data->port_id) { spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; @@ -1655,37 +1807,38 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data) spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } + if (io_req->abts_done) { shost_printk(KERN_ERR, fnic->lport->host, - "fnic_rport_exch_reset: io_req->abts_done is set " - "state is %s\n", + "fnic_rport_exch_reset: io_req->abts_done is set state is %s\n", fnic_ioreq_state_to_str(fnic_priv(sc)->state)); } if (!(fnic_priv(sc)->flags & FNIC_IO_ISSUED)) { shost_printk(KERN_ERR, fnic->lport->host, - "rport_exch_reset " - "IO not yet issued %p tag 0x%x flags " - "%x state %d\n", - sc, abt_tag, fnic_priv(sc)->flags, fnic_priv(sc)->state); + "rport_exch_reset IO not yet issued %p abt_tag 0x%x", + sc, abt_tag); + shost_printk(KERN_ERR, fnic->lport->host, + "flags %x state %d\n", fnic_priv(sc)->flags, + fnic_priv(sc)->state); } old_ioreq_state = fnic_priv(sc)->state; fnic_priv(sc)->state = FNIC_IOREQ_ABTS_PENDING; fnic_priv(sc)->abts_status = FCPIO_INVALID_CODE; + if (fnic_priv(sc)->flags & FNIC_DEVICE_RESET) { atomic64_inc(&reset_stats->device_reset_terminates); abt_tag |= FNIC_TAG_DEV_RST; } FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "fnic_rport_exch_reset dev rst sc 0x%p\n", sc); - BUG_ON(io_req->abts_done); - + "fnic_rport_exch_reset: dev rst sc 0x%p\n", sc); + WARN_ON_ONCE(io_req->abts_done); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "fnic_rport_reset_exch: Issuing abts\n"); spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); - /* Now queue the abort command to firmware */ + /* Queue the abort command to firmware */ int_to_scsilun(sc->device->lun, &fc_lun); if (fnic_queue_abort_io_req(fnic, abt_tag, @@ -1714,11 +1867,14 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data) atomic64_inc(&term_stats->terminates); iter_data->term_cnt++; } + return true; } void fnic_rport_exch_reset(struct fnic *fnic, u32 port_id) { + unsigned int 
io_count = 0; + unsigned long flags; struct terminate_stats *term_stats = &fnic->fnic_stats.term_stats; struct fnic_rport_abort_io_iter_data iter_data = { .fnic = fnic, @@ -1726,53 +1882,75 @@ void fnic_rport_exch_reset(struct fnic *fnic, u32 port_id) .term_cnt = 0, }; - FNIC_SCSI_DBG(KERN_DEBUG, - fnic->lport->host, fnic->fnic_num, - "fnic_rport_exch_reset called portid 0x%06x\n", - port_id); + FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, + "fnic rport exchange reset for tport: 0x%06x\n", + port_id); if (fnic->in_remove) return; + io_count = fnic_count_ioreqs(fnic, port_id); + FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, + "Starting terminates: rport:0x%x portid-io-count: %d active-io-count: %lld\n", + port_id, io_count, + atomic64_read(&fnic->fnic_stats.io_stats.active_ios)); + + spin_lock_irqsave(&fnic->fnic_lock, flags); + atomic_inc(&fnic->in_flight); + if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) { + atomic_dec(&fnic->in_flight); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + return; + } + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + scsi_host_busy_iter(fnic->lport->host, fnic_rport_abort_io_iter, &iter_data); + if (iter_data.term_cnt > atomic64_read(&term_stats->max_terminates)) atomic64_set(&term_stats->max_terminates, iter_data.term_cnt); + atomic_dec(&fnic->in_flight); + + while ((io_count = fnic_count_ioreqs(fnic, port_id))) + schedule_timeout(msecs_to_jiffies(1000)); + + FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, + "rport: 0x%x remaining portid-io-count: %d ", + port_id, io_count); } void fnic_terminate_rport_io(struct fc_rport *rport) { - struct fc_rport_libfc_priv *rdata; - struct fc_lport *lport; - struct fnic *fnic; + struct fnic_tport_s *tport; + struct rport_dd_data_s *rdd_data; + struct fnic_iport_s *iport = NULL; + struct fnic *fnic = NULL; if (!rport) { - printk(KERN_ERR "fnic_terminate_rport_io: rport is NULL\n"); + pr_err("rport is NULL\n"); return; } - rdata = rport->dd_data; - if (!rdata) { - printk(KERN_ERR "fnic_terminate_rport_io: rdata is NULL\n"); - return; - } - lport = rdata->local_port; - - if (!lport) { - printk(KERN_ERR "fnic_terminate_rport_io: lport is NULL\n"); - return; + rdd_data = rport->dd_data; + if (rdd_data) { + tport = rdd_data->tport; + if (!tport) { + pr_err( + "term rport io called after tport is deleted. 
Returning 0x%8x\n", + rport->port_id); + } else { + pr_err( + "term rport io called after tport is set 0x%8x\n", + rport->port_id); + pr_err( + "tport maybe rediscovered\n"); + + iport = (struct fnic_iport_s *) tport->iport; + fnic = iport->fnic; + fnic_rport_exch_reset(fnic, rport->port_id); + } } - fnic = lport_priv(lport); - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "wwpn 0x%llx, wwnn0x%llx, rport 0x%p, portid 0x%06x\n", - rport->port_name, rport->node_name, rport, - rport->port_id); - - if (fnic->in_remove) - return; - - fnic_rport_exch_reset(fnic, rport->port_id); } /* @@ -1783,10 +1961,12 @@ void fnic_terminate_rport_io(struct fc_rport *rport) int fnic_abort_cmd(struct scsi_cmnd *sc) { struct request *const rq = scsi_cmd_to_rq(sc); - struct fc_lport *lp; + struct fnic_iport_s *iport; + struct fnic_tport_s *tport; struct fnic *fnic; struct fnic_io_req *io_req = NULL; struct fc_rport *rport; + struct rport_dd_data_s *rdd_data; unsigned long flags; unsigned long start_time = 0; int ret = SUCCESS; @@ -1806,11 +1986,11 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) fc_block_scsi_eh(sc); /* Get local-port, check ready and link up */ - lp = shost_priv(sc->device->host); - - fnic = lport_priv(lp); + fnic = *((struct fnic **) shost_priv(sc->device->host)); spin_lock_irqsave(&fnic->fnic_lock, flags); + iport = &fnic->iport; + fnic_stats = &fnic->fnic_stats; abts_stats = &fnic->fnic_stats.abts_stats; term_stats = &fnic->fnic_stats.term_stats; @@ -1821,7 +2001,43 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) fnic_priv(sc)->flags = FNIC_NO_FLAGS; - if (lp->state != LPORT_ST_READY || !(lp->link_up)) { + rdd_data = rport->dd_data; + tport = rdd_data->tport; + + if (!tport) { + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "Abort cmd called after tport delete! 
rport fcid: 0x%x", + rport->port_id); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "lun: %llu hwq: 0x%x mqtag: 0x%x Op: 0x%x flags: 0x%x\n", + sc->device->lun, hwq, mqtag, + sc->cmnd[0], fnic_priv(sc)->flags); + ret = FAILED; + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + goto fnic_abort_cmd_end; + } + + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Abort cmd called rport fcid: 0x%x lun: %llu hwq: 0x%x mqtag: 0x%x", + rport->port_id, sc->device->lun, hwq, mqtag); + + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Op: 0x%x flags: 0x%x\n", + sc->cmnd[0], + fnic_priv(sc)->flags); + + if (iport->state != FNIC_IPORT_STATE_READY) { + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "iport NOT in READY state"); + ret = FAILED; + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + goto fnic_abort_cmd_end; + } + + if ((tport->state != FDLS_TGT_STATE_READY) && + (tport->state != FDLS_TGT_STATE_ADISC)) { + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "tport state: %d\n", tport->state); ret = FAILED; spin_unlock_irqrestore(&fnic->fnic_lock, flags); goto fnic_abort_cmd_end; @@ -1843,6 +2059,7 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req) { + ret = FAILED; spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); goto fnic_abort_cmd_end; } @@ -1893,7 +2110,6 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) if (fc_remote_port_chkready(rport) == 0) task_req = FCPIO_ITMF_ABT_TASK; else { - atomic64_inc(&fnic_stats->misc_stats.rport_not_ready); task_req = FCPIO_ITMF_ABT_TASK_TERM; } @@ -2027,6 +2243,7 @@ static inline int fnic_queue_dr_io_req(struct fnic *fnic, unsigned long flags; uint16_t hwq = 0; uint32_t tag = 0; + struct fnic_tport_s *tport = io_req->tport; tag = io_req->tag; hwq = blk_mq_unique_tag_to_hwq(tag); @@ -2037,8 +2254,10 @@ static inline int fnic_queue_dr_io_req(struct fnic *fnic, FNIC_FLAGS_IO_BLOCKED))) { spin_unlock_irqrestore(&fnic->fnic_lock, flags); return FAILED; - } else + } else { atomic_inc(&fnic->in_flight); + atomic_inc(&tport->in_flight); + } spin_unlock_irqrestore(&fnic->fnic_lock, flags); spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); @@ -2072,6 +2291,7 @@ static inline int fnic_queue_dr_io_req(struct fnic *fnic, lr_io_req_end: spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); atomic_dec(&fnic->in_flight); + atomic_inc(&tport->in_flight); return ret; } @@ -2274,11 +2494,11 @@ static int fnic_clean_pending_aborts(struct fnic *fnic, int fnic_device_reset(struct scsi_cmnd *sc) { struct request *rq = scsi_cmd_to_rq(sc); - struct fc_lport *lp; struct fnic *fnic; struct fnic_io_req *io_req = NULL; struct fc_rport *rport; int status; + int count = 0; int ret = FAILED; unsigned long flags; unsigned long start_time = 0; @@ -2289,31 +2509,61 @@ int fnic_device_reset(struct scsi_cmnd *sc) DECLARE_COMPLETION_ONSTACK(tm_done); bool new_sc = 0; uint16_t hwq = 0; + struct fnic_iport_s *iport = NULL; + struct rport_dd_data_s *rdd_data; + struct fnic_tport_s *tport; + u32 old_soft_reset_count; + u32 old_link_down_cnt; + int exit_dr = 0; /* Wait for rport to unblock */ fc_block_scsi_eh(sc); /* Get local-port, check ready and link up */ - lp = shost_priv(sc->device->host); + fnic = *((struct fnic **) shost_priv(sc->device->host)); + iport = &fnic->iport; - fnic = lport_priv(lp); fnic_stats = &fnic->fnic_stats; reset_stats = &fnic->fnic_stats.reset_stats; atomic64_inc(&reset_stats->device_resets); rport = 
starget_to_rport(scsi_target(sc->device)); + + spin_lock_irqsave(&fnic->fnic_lock, flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "fcid: 0x%x lun: 0x%llx hwq: %d mqtag: 0x%x flags: 0x%x Device reset\n", + "fcid: 0x%x lun: %llu hwq: %d mqtag: 0x%x flags: 0x%x Device reset\n", rport->port_id, sc->device->lun, hwq, mqtag, fnic_priv(sc)->flags); - if (lp->state != LPORT_ST_READY || !(lp->link_up)) + rdd_data = rport->dd_data; + tport = rdd_data->tport; + if (!tport) { + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "Dev rst called after tport delete! rport fcid: 0x%x lun: %llu\n", + rport->port_id, sc->device->lun); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + goto fnic_device_reset_end; + } + + if (iport->state != FNIC_IPORT_STATE_READY) { + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "iport NOT in READY state"); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); goto fnic_device_reset_end; + } + + if ((tport->state != FDLS_TGT_STATE_READY) && + (tport->state != FDLS_TGT_STATE_ADISC)) { + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "tport state: %d\n", tport->state); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + goto fnic_device_reset_end; + } + spin_unlock_irqrestore(&fnic->fnic_lock, flags); /* Check if remote port up */ if (fc_remote_port_chkready(rport)) { - atomic64_inc(&fnic_stats->misc_stats.rport_not_ready); goto fnic_device_reset_end; } @@ -2352,6 +2602,7 @@ int fnic_device_reset(struct scsi_cmnd *sc) io_req->port_id = rport->port_id; io_req->tag = mqtag; fnic_priv(sc)->io_req = io_req; + io_req->tport = tport; io_req->sc = sc; if (fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] != NULL) @@ -2383,6 +2634,11 @@ int fnic_device_reset(struct scsi_cmnd *sc) fnic_priv(sc)->flags |= FNIC_DEV_RST_ISSUED; spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); + spin_lock_irqsave(&fnic->fnic_lock, flags); + old_link_down_cnt = iport->fnic->link_down_cnt; + old_soft_reset_count = fnic->soft_reset_count; + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + /* * Wait on the local completion for LUN reset. The io_req may be * freed while we wait since we hold no lock. @@ -2390,6 +2646,24 @@ int fnic_device_reset(struct scsi_cmnd *sc) wait_for_completion_timeout(&tm_done, msecs_to_jiffies(FNIC_LUN_RESET_TIMEOUT)); + /* + * Wake up can be due to the following reasons: + * 1) The device reset completed from target. + * 2) Device reset timed out. + * 3) A link-down/host_reset may have happened in between. + * 4) The device reset was aborted and io_req->dr_done was called. + */ + + exit_dr = 0; + spin_lock_irqsave(&fnic->fnic_lock, flags); + if ((old_link_down_cnt != fnic->link_down_cnt) || + (fnic->reset_in_progress) || + (fnic->soft_reset_count != old_soft_reset_count) || + (iport->state != FNIC_IPORT_STATE_READY)) + exit_dr = 1; + + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req) { @@ -2398,6 +2672,13 @@ int fnic_device_reset(struct scsi_cmnd *sc) "io_req is null mqtag 0x%x sc 0x%p\n", mqtag, sc); goto fnic_device_reset_end; } + + if (exit_dr) { + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "Host reset called for fnic. 
Exit device reset\n"); + io_req->dr_done = NULL; + goto fnic_device_reset_clean; + } io_req->dr_done = NULL; status = fnic_priv(sc)->lr_status; @@ -2411,50 +2692,8 @@ int fnic_device_reset(struct scsi_cmnd *sc) FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "Device reset timed out\n"); fnic_priv(sc)->flags |= FNIC_DEV_RST_TIMED_OUT; - spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); int_to_scsilun(sc->device->lun, &fc_lun); - /* - * Issue abort and terminate on device reset request. - * If q'ing of terminate fails, retry it after a delay. - */ - while (1) { - spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); - if (fnic_priv(sc)->flags & FNIC_DEV_RST_TERM_ISSUED) { - spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); - break; - } - spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); - if (fnic_queue_abort_io_req(fnic, - mqtag | FNIC_TAG_DEV_RST, - FCPIO_ITMF_ABT_TASK_TERM, - fc_lun.scsi_lun, io_req, hwq)) { - wait_for_completion_timeout(&tm_done, - msecs_to_jiffies(FNIC_ABT_TERM_DELAY_TIMEOUT)); - } else { - spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); - fnic_priv(sc)->flags |= FNIC_DEV_RST_TERM_ISSUED; - fnic_priv(sc)->state = FNIC_IOREQ_ABTS_PENDING; - io_req->abts_done = &tm_done; - spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "Abort and terminate issued on Device reset mqtag 0x%x sc 0x%p\n", - mqtag, sc); - break; - } - } - while (1) { - spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); - if (!(fnic_priv(sc)->flags & FNIC_DEV_RST_DONE)) { - spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); - wait_for_completion_timeout(&tm_done, - msecs_to_jiffies(FNIC_LUN_RESET_TIMEOUT)); - break; - } else { - io_req = fnic_priv(sc)->io_req; - io_req->abts_done = NULL; - goto fnic_device_reset_clean; - } - } + goto fnic_device_reset_clean; } else { spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } @@ -2480,8 +2719,7 @@ int fnic_device_reset(struct scsi_cmnd *sc) spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "Device reset failed" - " since could not abort all IOs\n"); + "Device reset failed: Cannot abort all IOs\n"); goto fnic_device_reset_clean; } @@ -2507,6 +2745,15 @@ int fnic_device_reset(struct scsi_cmnd *sc) mempool_free(io_req, fnic->io_req_pool); } + /* + * If link-event is seen while LUN reset is issued we need + * to complete the LUN reset here + */ + if (!new_sc) { + sc->result = DID_RESET << 16; + scsi_done(sc); + } + fnic_device_reset_end: FNIC_TRACE(fnic_device_reset, sc->device->host->host_no, rq->tag, sc, jiffies_to_msecs(jiffies - start_time), @@ -2520,6 +2767,17 @@ int fnic_device_reset(struct scsi_cmnd *sc) mutex_unlock(&fnic->sgreset_mutex); } + while ((ret == SUCCESS) && fnic_count_lun_ioreqs(fnic, sc->device)) { + if (count >= 2) { + ret = FAILED; + break; + } + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "Cannot clean up all IOs for the LUN\n"); + schedule_timeout(msecs_to_jiffies(1000)); + count++; + } + FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "Returning from device reset %s\n", (ret == SUCCESS) ? 
@@ -2531,50 +2789,54 @@ int fnic_device_reset(struct scsi_cmnd *sc) return ret; } -/* Clean up all IOs, clean up libFC local port */ -int fnic_reset(struct Scsi_Host *shost) +static void fnic_post_flogo_linkflap(struct fnic *fnic) +{ + unsigned long flags; + + fnic_fdls_link_status_change(fnic, 0); + spin_lock_irqsave(&fnic->fnic_lock, flags); + + if (fnic->link_status) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + fnic_fdls_link_status_change(fnic, 1); + return; + } + spin_unlock_irqrestore(&fnic->fnic_lock, flags); +} + +/* Logout from all the targets and simulate link flap */ +void fnic_reset(struct Scsi_Host *shost) { - struct fc_lport *lp; struct fnic *fnic; - int ret = 0; struct reset_stats *reset_stats; - lp = shost_priv(shost); - fnic = lport_priv(lp); + fnic = *((struct fnic **) shost_priv(shost)); reset_stats = &fnic->fnic_stats.reset_stats; FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, - "Issuing fnic reset\n"); + "Issuing fnic reset\n"); atomic64_inc(&reset_stats->fnic_resets); - - /* - * Reset local port, this will clean up libFC exchanges, - * reset remote port sessions, and if link is up, begin flogi - */ - ret = fc_lport_reset(lp); + fnic_post_flogo_linkflap(fnic); FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, - "Returning from fnic reset with: %s\n", - (ret == 0) ? "SUCCESS" : "FAILED"); + "Returning from fnic reset"); - if (ret == 0) - atomic64_inc(&reset_stats->fnic_reset_completions); - else - atomic64_inc(&reset_stats->fnic_reset_failures); + atomic64_inc(&reset_stats->fnic_reset_completions); +} + +int fnic_issue_fc_host_lip(struct Scsi_Host *shost) +{ + int ret = 0; + struct fnic *fnic = *((struct fnic **) shost_priv(shost)); + + FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "FC host lip issued"); + ret = fnic_host_reset(shost); return ret; } -/* - * SCSI Error handling calls driver's eh_host_reset if all prior - * error handling levels return FAILED. If host reset completes - * successfully, and if link is up, then Fabric login begins. - * - * Host Reset is the highest level of error recovery. If this fails, then - * host is offlined by SCSI. 
- * - */ int fnic_host_reset(struct Scsi_Host *shost) { int ret = SUCCESS; @@ -2637,122 +2899,6 @@ int fnic_host_reset(struct Scsi_Host *shost) return ret; } -/* - * This fxn is called from libFC when host is removed - */ -void fnic_scsi_abort_io(struct fc_lport *lp) -{ - int err = 0; - unsigned long flags; - enum fnic_state old_state; - struct fnic *fnic = lport_priv(lp); - DECLARE_COMPLETION_ONSTACK(remove_wait); - - /* Issue firmware reset for fnic, wait for reset to complete */ -retry_fw_reset: - spin_lock_irqsave(&fnic->fnic_lock, flags); - if (unlikely(fnic->state == FNIC_IN_FC_TRANS_ETH_MODE) && - fnic->link_events) { - /* fw reset is in progress, poll for its completion */ - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - schedule_timeout(msecs_to_jiffies(100)); - goto retry_fw_reset; - } - - fnic->remove_wait = &remove_wait; - old_state = fnic->state; - fnic->state = FNIC_IN_FC_TRANS_ETH_MODE; - fnic_update_mac_locked(fnic, fnic->ctlr.ctl_src_addr); - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - - err = fnic_fw_reset_handler(fnic); - if (err) { - spin_lock_irqsave(&fnic->fnic_lock, flags); - if (fnic->state == FNIC_IN_FC_TRANS_ETH_MODE) - fnic->state = old_state; - fnic->remove_wait = NULL; - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - return; - } - - /* Wait for firmware reset to complete */ - wait_for_completion_timeout(&remove_wait, - msecs_to_jiffies(FNIC_RMDEVICE_TIMEOUT)); - - spin_lock_irqsave(&fnic->fnic_lock, flags); - fnic->remove_wait = NULL; - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, - "fnic_scsi_abort_io %s\n", - (fnic->state == FNIC_IN_ETH_MODE) ? - "SUCCESS" : "FAILED"); - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - -} - -/* - * This fxn called from libFC to clean up driver IO state on link down - */ -void fnic_scsi_cleanup(struct fc_lport *lp) -{ - unsigned long flags; - enum fnic_state old_state; - struct fnic *fnic = lport_priv(lp); - - /* issue fw reset */ -retry_fw_reset: - spin_lock_irqsave(&fnic->fnic_lock, flags); - if (unlikely(fnic->state == FNIC_IN_FC_TRANS_ETH_MODE)) { - /* fw reset is in progress, poll for its completion */ - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - schedule_timeout(msecs_to_jiffies(100)); - goto retry_fw_reset; - } - old_state = fnic->state; - fnic->state = FNIC_IN_FC_TRANS_ETH_MODE; - fnic_update_mac_locked(fnic, fnic->ctlr.ctl_src_addr); - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - - if (fnic_fw_reset_handler(fnic)) { - spin_lock_irqsave(&fnic->fnic_lock, flags); - if (fnic->state == FNIC_IN_FC_TRANS_ETH_MODE) - fnic->state = old_state; - spin_unlock_irqrestore(&fnic->fnic_lock, flags); - } - -} - -void fnic_empty_scsi_cleanup(struct fc_lport *lp) -{ -} - -void fnic_exch_mgr_reset(struct fc_lport *lp, u32 sid, u32 did) -{ - struct fnic *fnic = lport_priv(lp); - - /* Non-zero sid, nothing to do */ - if (sid) - goto call_fc_exch_mgr_reset; - - if (did) { - fnic_rport_exch_reset(fnic, did); - goto call_fc_exch_mgr_reset; - } - - /* - * sid = 0, did = 0 - * link down or device being removed - */ - if (!fnic->in_remove) - fnic_scsi_cleanup(lp); - else - fnic_scsi_abort_io(lp); - - /* call libFC exch mgr reset to reset its exchanges */ -call_fc_exch_mgr_reset: - fc_exch_mgr_reset(lp, sid, did); - -} - static bool fnic_abts_pending_iter(struct scsi_cmnd *sc, void *data) { struct request *const rq = scsi_cmd_to_rq(sc); diff --git a/drivers/scsi/fnic/fnic_stats.h b/drivers/scsi/fnic/fnic_stats.h index 9d7f98c452dd..1f1a1ec90a23 100644 --- a/drivers/scsi/fnic/fnic_stats.h +++ 
b/drivers/scsi/fnic/fnic_stats.h @@ -127,6 +127,4 @@ struct stats_debug_info { }; int fnic_get_stats_data(struct stats_debug_info *, struct fnic_stats *); -void fnic_stats_debugfs_init(struct fnic *); -void fnic_stats_debugfs_remove(struct fnic *); #endif /* _FNIC_STATS_H_ */ From patchwork Wed Oct 2 18:24:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Karan Tilak Kumar X-Patchwork-Id: 832356 Received: from rcdn-iport-5.cisco.com (rcdn-iport-5.cisco.com [173.37.86.76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 91777215F4F; Wed, 2 Oct 2024 18:30:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=173.37.86.76 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727893833; cv=none; b=KEVETUj4Ln8hIuTRcVHU5lgoaSnrsUCavBiZFUmbrzlcUA/32VtsjXgU2lVEMY9k6xz1JiwiedLtvZ9SLG/GuBcYppc+SMkLBmzAytkbKBGFhcwbuRwyzYHaXT1t76nNlpnUyLxhUaaI9rcYZh3x8mQQSdoerrBA/4vYNQnRCv0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727893833; c=relaxed/simple; bh=za9mRmFtYCpfipWUQjIdQzNGqacgx0dGYVHRBgxlRXA=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=mhs1rMQgTC6hLj0dTCjk32tl0U/iuHgt6DQd9lQx5o2Yvob3nR4oImfPE0UNy2eOpOuvNDc/5MBWz3Y2XG9psUGo6j6kSn605y21LR+pHV2M4Y+AZuyepRlw8vBVrdGPxuTTnMgMEtbgP7TRocVfJQIwHRnMGor5KG/pBIdyWdA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=cisco.com; spf=pass smtp.mailfrom=cisco.com; dkim=pass (1024-bit key) header.d=cisco.com header.i=@cisco.com header.b=QhdCaw2f; arc=none smtp.client-ip=173.37.86.76 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=cisco.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=cisco.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=cisco.com header.i=@cisco.com header.b="QhdCaw2f" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cisco.com; i=@cisco.com; l=24030; q=dns/txt; s=iport; t=1727893830; x=1729103430; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=2q0knVkS+8YksJ9pnBBhgmxX5pYW7sN01BzHPBM3BPc=; b=QhdCaw2fSm4nYQYcu8NxBUZeJPzlqq7vwC+U+3kSYqONRglvaS5DpLM7 k4Aoe1wFIhsd0uOGYlVavHjHqCEV/VXRkr38YcQce4JHKnSlKtjtRi66s gcQOL/uYjq+6LLhV8LK1C0DsTvmNdrTedhskjumHQmIrJy4hEhhxC2Ncj o=; X-CSE-ConnectionGUID: 85q4HT7FQsKgVNL75F+uhw== X-CSE-MsgGUID: SO7La2meQAqMVcS6Y6XzUA== X-IronPort-AV: E=Sophos;i="6.11,172,1725321600"; d="scan'208";a="269112447" Received: from rcdn-core-8.cisco.com ([173.37.93.144]) by rcdn-iport-5.cisco.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Oct 2024 18:30:29 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by rcdn-core-8.cisco.com (8.15.2/8.15.2) with ESMTPSA id 492IOHQu009807 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Wed, 2 Oct 2024 18:30:28 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, aeasi@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar Subject: [PATCH v4 11/14] scsi: fnic: Add stats and related functionality Date: Wed, 2 Oct 2024 11:24:07 -0700 Message-Id: 
<20241002182410.68093-12-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20241002182410.68093-1-kartilak@cisco.com> References: <20241002182410.68093-1-kartilak@cisco.com> Precedence: bulk X-Mailing-List: linux-scsi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: rcdn-core-8.cisco.com Add statistics and related functionality for FDLS. Add supporting functions to display stats. Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Reviewed-by: Gian Carlo Boffa Signed-off-by: Karan Tilak Kumar ---- Changes between v2 and v3: Replace fnic->host with fnic->lport->host to prevent compilation errors. --- drivers/scsi/fnic/fdls_disc.c | 33 +++++++++++--- drivers/scsi/fnic/fnic.h | 2 + drivers/scsi/fnic/fnic_fdls.h | 6 ++- drivers/scsi/fnic/fnic_main.c | 37 +++++++++++++++- drivers/scsi/fnic/fnic_scsi.c | 19 +++++++- drivers/scsi/fnic/fnic_stats.h | 46 ++++++++++++++++++- drivers/scsi/fnic/fnic_trace.c | 81 +++++++++++++++++++++++++++++----- 7 files changed, 200 insertions(+), 24 deletions(-) diff --git a/drivers/scsi/fnic/fdls_disc.c b/drivers/scsi/fnic/fdls_disc.c index 2623aa011cd6..0ff4ce683dcb 100644 --- a/drivers/scsi/fnic/fdls_disc.c +++ b/drivers/scsi/fnic/fdls_disc.c @@ -843,6 +843,7 @@ static void fdls_send_fabric_flogi(struct fnic_iport_s *iport) fnic_send_fcoe_frame(iport, &flogi, sizeof(struct fc_std_flogi)); /* Even if fnic_send_fcoe_frame() fails we want to retry after timeout */ + atomic64_inc(&iport->iport_stats.fabric_flogi_sent); fdls_start_fabric_timer(iport, 2 * iport->e_d_tov); } @@ -877,6 +878,7 @@ static void fdls_send_fabric_plogi(struct fnic_iport_s *iport) "0x%x: FDLS send fabric PLOGI with oxid:%x", iport->fcid, oxid); + atomic64_inc(&iport->iport_stats.fabric_plogi_sent); fnic_send_fcoe_frame(iport, &plogi, sizeof(struct fc_std_flogi)); /* Even if fnic_send_fcoe_frame() fails we want to retry after timeout */ fdls_start_fabric_timer(iport, 2 * iport->e_d_tov); @@ -979,7 +981,7 @@ static void fdls_send_scr(struct fnic_iport_s *iport) FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "0x%x: FDLS send SCR with oxid:%x", iport->fcid, oxid); - + atomic64_inc(&iport->iport_stats.fabric_scr_sent); fnic_send_fcoe_frame(iport, &scr, sizeof(struct fc_std_scr)); /* Even if fnic_send_fcoe_frame() fails we want to retry after timeout */ fdls_start_fabric_timer(iport, 2 * iport->e_d_tov); @@ -1056,7 +1058,7 @@ fdls_send_tgt_adisc(struct fnic_iport_s *iport, struct fnic_tport_s *tport) FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "sending ADISC to tgt fcid: 0x%x", tport->fcid); - + atomic64_inc(&iport->iport_stats.tport_adisc_sent); fnic_send_fcoe_frame(iport, &adisc, sizeof(struct fc_std_els_adisc)); /* Even if fnic_send_fcoe_frame() fails we want to retry after timeout */ fdls_start_tport_timer(iport, tport, 2 * iport->e_d_tov); @@ -1162,7 +1164,7 @@ fdls_send_tgt_plogi(struct fnic_iport_s *iport, struct fnic_tport_s *tport) timeout = max(2 * iport->e_d_tov, iport->plogi_timeout); - + atomic64_inc(&iport->iport_stats.tport_plogi_sent); fnic_send_fcoe_frame(iport, &plogi, sizeof(struct fc_std_flogi)); /* Even if fnic_send_fcoe_frame() fails we want to retry after timeout */ fdls_start_tport_timer(iport, tport, timeout); @@ -1297,7 +1299,7 @@ fdls_send_tgt_prli(struct fnic_iport_s *iport, struct fnic_tport_s *tport) timeout = max(2 * iport->e_d_tov, iport->plogi_timeout); - + 
atomic64_inc(&iport->iport_stats.tport_prli_sent); fnic_send_fcoe_frame(iport, &prli, sizeof(struct fc_std_els_prli)); /* Even if fnic_send_fcoe_frame() fails we want to retry after timeout */ fdls_start_tport_timer(iport, tport, timeout); @@ -1391,7 +1393,7 @@ void fdls_tgt_logout(struct fnic_iport_s *iport, struct fnic_tport_s *tport) FNIC_STD_SET_NPORT_NAME(&logo.els.fl_n_port_wwn, le64_to_cpu(iport->wwpn)); - + atomic64_inc(&iport->iport_stats.tport_logo_sent); fnic_send_fcoe_frame(iport, &logo, sizeof(struct fc_std_logo)); } @@ -2098,6 +2100,7 @@ fdls_process_tgt_adisc_rsp(struct fnic_iport_s *iport, switch (adisc_rsp->els.adisc_cmd) { case ELS_LS_ACC: + atomic64_inc(&iport->iport_stats.tport_adisc_ls_accepts); if (tport->timer_pending) { FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "tport 0x%p Canceling fabric disc timer\n", @@ -2120,6 +2123,7 @@ fdls_process_tgt_adisc_rsp(struct fnic_iport_s *iport, break; case ELS_LS_RJT: + atomic64_inc(&iport->iport_stats.tport_adisc_ls_rejects); if (((els_rjt->u.rej.er_reason == ELS_RJT_BUSY) || (els_rjt->u.rej.er_reason == ELS_RJT_UNAB)) && (tport->retry_counter < FDLS_RETRY_COUNT)) { @@ -2193,11 +2197,13 @@ fdls_process_tgt_plogi_rsp(struct fnic_iport_s *iport, switch (plogi_rsp->els.fl_cmd) { case ELS_LS_ACC: + atomic64_inc(&iport->iport_stats.tport_plogi_ls_accepts); FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "PLOGI accepted by target: 0x%x", tgt_fcid); break; case ELS_LS_RJT: + atomic64_inc(&iport->iport_stats.tport_plogi_ls_rejects); if (((els_rjt->u.rej.er_reason == ELS_RJT_BUSY) || (els_rjt->u.rej.er_reason == ELS_RJT_UNAB)) && (tport->retry_counter < iport->max_plogi_retries)) { @@ -2215,6 +2221,7 @@ fdls_process_tgt_plogi_rsp(struct fnic_iport_s *iport, return; default: + atomic64_inc(&iport->iport_stats.tport_plogi_misc_rejects); FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "PLOGI not accepted from target fcid: 0x%x", tgt_fcid); @@ -2319,6 +2326,7 @@ fdls_process_tgt_prli_rsp(struct fnic_iport_s *iport, switch (prli_rsp->els_prli.prli_cmd) { case ELS_LS_ACC: + atomic64_inc(&iport->iport_stats.tport_prli_ls_accepts); FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "PRLI accepted from target: 0x%x", tgt_fcid); @@ -2335,6 +2343,7 @@ fdls_process_tgt_prli_rsp(struct fnic_iport_s *iport, } break; case ELS_LS_RJT: + atomic64_inc(&iport->iport_stats.tport_prli_ls_rejects); if (((els_rjt->u.rej.er_reason == ELS_RJT_BUSY) || (els_rjt->u.rej.er_reason == ELS_RJT_UNAB)) && (tport->retry_counter < FDLS_RETRY_COUNT)) { @@ -2358,6 +2367,7 @@ fdls_process_tgt_prli_rsp(struct fnic_iport_s *iport, break; default: + atomic64_inc(&iport->iport_stats.tport_prli_misc_rejects); FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "PRLI not accepted from target: 0x%x", tgt_fcid); return; @@ -2636,6 +2646,7 @@ fdls_process_scr_rsp(struct fnic_iport_s *iport, switch (scr_rsp->scr.scr_cmd) { case ELS_LS_ACC: + atomic64_inc(&iport->iport_stats.fabric_scr_ls_accepts); if (iport->fabric.timer_pending) { FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "Canceling fabric disc timer %p\n", iport); @@ -2647,6 +2658,7 @@ fdls_process_scr_rsp(struct fnic_iport_s *iport, break; case ELS_LS_RJT: + atomic64_inc(&iport->iport_stats.fabric_scr_ls_rejects); if (((els_rjt->u.rej.er_reason == ELS_RJT_BUSY) || (els_rjt->u.rej.er_reason == ELS_RJT_UNAB)) && (fdls->retry_counter < FDLS_RETRY_COUNT)) { @@ -2671,6 +2683,7 @@ fdls_process_scr_rsp(struct fnic_iport_s *iport, break; default: + 
atomic64_inc(&iport->iport_stats.fabric_scr_misc_rejects); break; } } @@ -2976,6 +2989,7 @@ fdls_process_flogi_rsp(struct fnic_iport_s *iport, switch (flogi_rsp->els.fl_cmd) { case ELS_LS_ACC: + atomic64_inc(&iport->iport_stats.fabric_flogi_ls_accepts); if (iport->fabric.timer_pending) { FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "iport fcid: 0x%x Canceling fabric disc timer\n", @@ -3049,6 +3063,7 @@ fdls_process_flogi_rsp(struct fnic_iport_s *iport, break; case ELS_LS_RJT: + atomic64_inc(&iport->iport_stats.fabric_flogi_ls_rejects); if (fabric->retry_counter < iport->max_flogi_retries) { FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "FLOGI returned ELS_LS_RJT BUSY. Retry from timer routine %p", @@ -3076,6 +3091,7 @@ fdls_process_flogi_rsp(struct fnic_iport_s *iport, FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "FLOGI response not accepted: 0x%x", flogi_rsp->els.fl_cmd); + atomic64_inc(&iport->iport_stats.fabric_flogi_misc_rejects); break; } } @@ -3100,6 +3116,7 @@ fdls_process_fabric_plogi_rsp(struct fnic_iport_s *iport, switch (plogi_rsp->els.fl_cmd) { case ELS_LS_ACC: + atomic64_inc(&iport->iport_stats.fabric_plogi_ls_accepts); if (iport->fabric.timer_pending) { FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "iport fcid: 0x%x fabric PLOGI response: Accepted\n", @@ -3112,6 +3129,7 @@ fdls_process_fabric_plogi_rsp(struct fnic_iport_s *iport, fdls_send_rpn_id(iport); break; case ELS_LS_RJT: + atomic64_inc(&iport->iport_stats.fabric_plogi_ls_rejects); if (((els_rjt->u.rej.er_reason == ELS_RJT_BUSY) || (els_rjt->u.rej.er_reason == ELS_RJT_UNAB)) && (iport->fabric.retry_counter < iport->max_plogi_retries)) { @@ -3137,6 +3155,7 @@ fdls_process_fabric_plogi_rsp(struct fnic_iport_s *iport, FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "PLOGI response not accepted: 0x%x", plogi_rsp->els.fl_cmd); + atomic64_inc(&iport->iport_stats.fabric_plogi_misc_rejects); break; } } @@ -3465,6 +3484,7 @@ fdls_process_unsupported_els_req(struct fnic_iport_s *iport, FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "Dropping unsupported ELS with illegal frame bits 0x%x\n", d_id); + atomic64_inc(&iport->iport_stats.unsupported_frames_dropped); return; } @@ -3473,6 +3493,7 @@ fdls_process_unsupported_els_req(struct fnic_iport_s *iport, FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "Dropping unsupported ELS request in iport state: %d", iport->state); + atomic64_inc(&iport->iport_stats.unsupported_frames_dropped); return; } @@ -3877,6 +3898,8 @@ fdls_process_rscn(struct fnic_iport_s *iport, struct fc_frame_header *fchdr) struct fnic *fnic = iport->fnic; uint16_t rscn_payload_len; + atomic64_inc(&iport->iport_stats.num_rscns); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "FDLS process RSCN %p", iport); diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h index 58d864b59c89..33256d99023a 100644 --- a/drivers/scsi/fnic/fnic.h +++ b/drivers/scsi/fnic/fnic.h @@ -532,5 +532,7 @@ unsigned int fnic_count_lun_ioreqs(struct fnic *fnic, struct scsi_device *device); void fnic_scsi_unload(struct fnic *fnic); void fnic_scsi_unload_cleanup(struct fnic *fnic); +int fnic_get_debug_info(struct stats_debug_info *info, + struct fnic *fnic); #endif /* _FNIC_H_ */ diff --git a/drivers/scsi/fnic/fnic_fdls.h b/drivers/scsi/fnic/fnic_fdls.h index d2c3ebce3209..27366d320639 100644 --- a/drivers/scsi/fnic/fnic_fdls.h +++ b/drivers/scsi/fnic/fnic_fdls.h @@ -300,10 +300,12 @@ struct fnic_iport_s { uint16_t max_payload_size; 
spinlock_t deleted_tport_lst_lock; struct completion *flogi_reg_done; + struct fnic_iport_stats iport_stats; char str_wwpn[20]; char str_wwnn[20]; - }; - struct rport_dd_data_s { +}; + +struct rport_dd_data_s { struct fnic_tport_s *tport; struct fnic_iport_s *iport; }; diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c index e18bc9916e55..a1c7f666093b 100644 --- a/drivers/scsi/fnic/fnic_main.c +++ b/drivers/scsi/fnic/fnic_main.c @@ -162,7 +162,7 @@ static struct fc_function_template fnic_fc_functions = { .show_rport_dev_loss_tmo = 1, .set_rport_dev_loss_tmo = fnic_set_rport_dev_loss_tmo, .issue_fc_host_lip = fnic_issue_fc_host_lip, - .get_fc_host_stats = NULL, + .get_fc_host_stats = fnic_get_stats, .reset_fc_host_stats = fnic_reset_host_stats, .dd_fcrport_size = sizeof(struct rport_dd_data_s), .terminate_rport_io = fnic_terminate_rport_io, @@ -173,9 +173,11 @@ static void fnic_get_host_speed(struct Scsi_Host *shost) { struct fnic *fnic = *((struct fnic **) shost_priv(shost)); u32 port_speed = vnic_dev_port_speed(fnic->vdev); + struct fnic_stats *fnic_stats = &fnic->fnic_stats; FNIC_MAIN_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "port_speed: %d Mbps", port_speed); + atomic64_set(&fnic_stats->misc_stats.port_speed_in_mbps, port_speed); /* Add in other values as they get defined in fw */ switch (port_speed) { @@ -233,7 +235,38 @@ static void fnic_get_host_speed(struct Scsi_Host *shost) /* Placeholder function */ static struct fc_host_statistics *fnic_get_stats(struct Scsi_Host *host) { - struct fc_host_statistics *stats; + int ret; + struct fnic *fnic = *((struct fnic **) shost_priv(host)); + struct fc_host_statistics *stats = &fnic->fnic_stats.host_stats; + struct vnic_stats *vs; + unsigned long flags; + + if (time_before + (jiffies, fnic->stats_time + HZ / FNIC_STATS_RATE_LIMIT)) + return stats; + fnic->stats_time = jiffies; + + spin_lock_irqsave(&fnic->fnic_lock, flags); + ret = vnic_dev_stats_dump(fnic->vdev, &fnic->stats); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + + if (ret) { + FNIC_MAIN_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, + "fnic: Get vnic stats failed: 0x%x", ret); + return stats; + } + vs = fnic->stats; + stats->tx_frames = vs->tx.tx_unicast_frames_ok; + stats->tx_words = vs->tx.tx_unicast_bytes_ok / 4; + stats->rx_frames = vs->rx.rx_unicast_frames_ok; + stats->rx_words = vs->rx.rx_unicast_bytes_ok / 4; + stats->error_frames = vs->tx.tx_errors + vs->rx.rx_errors; + stats->dumped_frames = vs->tx.tx_drops + vs->rx.rx_drop; + stats->invalid_crc_count = vs->rx.rx_crc_errors; + stats->seconds_since_last_reset = + (jiffies - fnic->stats_reset_time) / HZ; + stats->fcp_input_megabytes = div_u64(fnic->fcp_input_bytes, 1000000); + stats->fcp_output_megabytes = div_u64(fnic->fcp_output_bytes, 1000000); return stats; } diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c index 21bb5d92f04f..aa463ed652c0 100644 --- a/drivers/scsi/fnic/fnic_scsi.c +++ b/drivers/scsi/fnic/fnic_scsi.c @@ -503,6 +503,7 @@ int fnic_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc) if (ret) { FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, "rport is not ready\n"); + atomic64_inc(&fnic_stats->misc_stats.tport_not_ready); sc->result = ret; done(sc); return 0; @@ -1138,6 +1139,15 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, unsigned int cq_ind jiffies_to_msecs(jiffies - start_time)), desc, cmd_trace, fnic_flags_and_state(sc)); + if (sc->sc_data_direction == DMA_FROM_DEVICE) { + 
fnic_stats->host_stats.fcp_input_requests++; + fnic->fcp_input_bytes += xfer_len; + } else if (sc->sc_data_direction == DMA_TO_DEVICE) { + fnic_stats->host_stats.fcp_output_requests++; + fnic->fcp_output_bytes += xfer_len; + } else + fnic_stats->host_stats.fcp_control_requests++; + /* Call SCSI completion function to complete the IO */ scsi_done(sc); @@ -1986,8 +1996,8 @@ void fnic_scsi_unload_cleanup(struct fnic *fnic) { int hwq = 0; - fc_remove_host(fnic->host); - scsi_remove_host(fnic->host); + fc_remove_host(fnic->lport->host); + scsi_remove_host(fnic->lport->host); for (hwq = 0; hwq < fnic->wq_copy_count; hwq++) kfree(fnic->sw_copy_wq[hwq].io_req_table); } @@ -2066,6 +2076,7 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) fnic_priv(sc)->flags); if (iport->state != FNIC_IPORT_STATE_READY) { + atomic64_inc(&fnic_stats->misc_stats.iport_not_ready); FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "iport NOT in READY state"); ret = FAILED; @@ -2149,6 +2160,7 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) if (fc_remote_port_chkready(rport) == 0) task_req = FCPIO_ITMF_ABT_TASK; else { + atomic64_inc(&fnic_stats->misc_stats.tport_not_ready); task_req = FCPIO_ITMF_ABT_TASK_TERM; } @@ -2586,6 +2598,7 @@ int fnic_device_reset(struct scsi_cmnd *sc) } if (iport->state != FNIC_IPORT_STATE_READY) { + atomic64_inc(&fnic_stats->misc_stats.iport_not_ready); FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "iport NOT in READY state"); spin_unlock_irqrestore(&fnic->fnic_lock, flags); @@ -2603,6 +2616,7 @@ int fnic_device_reset(struct scsi_cmnd *sc) /* Check if remote port up */ if (fc_remote_port_chkready(rport)) { + atomic64_inc(&fnic_stats->misc_stats.tport_not_ready); goto fnic_device_reset_end; } @@ -3080,6 +3094,7 @@ void fnic_scsi_fcpio_reset(struct fnic *fnic) "FW reset completion timed out after %d ms)\n", FNIC_FW_RESET_TIMEOUT); } + atomic64_inc(&fnic->fnic_stats.reset_stats.fw_reset_timeouts); } fnic->fw_reset_done = NULL; } diff --git a/drivers/scsi/fnic/fnic_stats.h b/drivers/scsi/fnic/fnic_stats.h index 1f1a1ec90a23..ef7cdc909940 100644 --- a/drivers/scsi/fnic/fnic_stats.h +++ b/drivers/scsi/fnic/fnic_stats.h @@ -63,6 +63,7 @@ struct reset_stats { atomic64_t fw_resets; atomic64_t fw_reset_completions; atomic64_t fw_reset_failures; + atomic64_t fw_reset_timeouts; atomic64_t fnic_resets; atomic64_t fnic_reset_completions; atomic64_t fnic_reset_failures; @@ -102,10 +103,51 @@ struct misc_stats { atomic64_t no_icmnd_itmf_cmpls; atomic64_t check_condition; atomic64_t queue_fulls; - atomic64_t rport_not_ready; + atomic64_t tport_not_ready; + atomic64_t iport_not_ready; atomic64_t frame_errors; atomic64_t current_port_speed; atomic64_t intx_dummy; + atomic64_t port_speed_in_mbps; +}; + +struct fnic_iport_stats { + atomic64_t num_linkdn; + atomic64_t num_linkup; + atomic64_t link_failure_count; + atomic64_t num_rscns; + atomic64_t rscn_redisc; + atomic64_t rscn_not_redisc; + atomic64_t frame_err; + atomic64_t num_rnid; + atomic64_t fabric_flogi_sent; + atomic64_t fabric_flogi_ls_accepts; + atomic64_t fabric_flogi_ls_rejects; + atomic64_t fabric_flogi_misc_rejects; + atomic64_t fabric_plogi_sent; + atomic64_t fabric_plogi_ls_accepts; + atomic64_t fabric_plogi_ls_rejects; + atomic64_t fabric_plogi_misc_rejects; + atomic64_t fabric_scr_sent; + atomic64_t fabric_scr_ls_accepts; + atomic64_t fabric_scr_ls_rejects; + atomic64_t fabric_scr_misc_rejects; + atomic64_t fabric_logo_sent; + atomic64_t tport_alive; + atomic64_t tport_plogi_sent; + atomic64_t tport_plogi_ls_accepts; + atomic64_t 
tport_plogi_ls_rejects; + atomic64_t tport_plogi_misc_rejects; + atomic64_t tport_prli_sent; + atomic64_t tport_prli_ls_accepts; + atomic64_t tport_prli_ls_rejects; + atomic64_t tport_prli_misc_rejects; + atomic64_t tport_adisc_sent; + atomic64_t tport_adisc_ls_accepts; + atomic64_t tport_adisc_ls_rejects; + atomic64_t tport_logo_sent; + atomic64_t unsupported_frames_ls_rejects; + atomic64_t unsupported_frames_dropped; }; struct fnic_stats { @@ -116,6 +158,7 @@ struct fnic_stats { struct reset_stats reset_stats; struct fw_stats fw_stats; struct vlan_stats vlan_stats; + struct fc_host_statistics host_stats; struct misc_stats misc_stats; }; @@ -127,4 +170,5 @@ struct stats_debug_info { }; int fnic_get_stats_data(struct stats_debug_info *, struct fnic_stats *); +const char *fnic_role_to_str(unsigned int role); #endif /* _FNIC_STATS_H_ */ diff --git a/drivers/scsi/fnic/fnic_trace.c b/drivers/scsi/fnic/fnic_trace.c index d717886808df..6729b988c541 100644 --- a/drivers/scsi/fnic/fnic_trace.c +++ b/drivers/scsi/fnic/fnic_trace.c @@ -8,6 +8,7 @@ #include #include #include +#include #include "fnic_io.h" #include "fnic.h" @@ -29,6 +30,17 @@ int fnic_fc_tracing_enabled = 1; int fnic_fc_trace_cleared = 1; static DEFINE_SPINLOCK(fnic_fc_trace_lock); +const char *fnic_role_str[] = { + [FNIC_ROLE_FCP_INITIATOR] = "FCP_Initiator", +}; + +const char *fnic_role_to_str(unsigned int role) +{ + if (role >= ARRAY_SIZE(fnic_role_str) || !fnic_role_str[role]) + return "Unknown"; + + return fnic_role_str[role]; +} /* * fnic_trace_get_buf - Give buffer pointer to user to fill up trace information @@ -423,7 +435,8 @@ int fnic_get_stats_data(struct stats_debug_info *debug, "Number of Check Conditions encountered: %lld\n" "Number of QUEUE Fulls: %lld\n" "Number of rport not ready: %lld\n" - "Number of receive frame errors: %lld\n", + "Number of receive frame errors: %lld\n" + "Port speed (in Mbps): %lld\n", (u64)stats->misc_stats.last_isr_time, (s64)val1.tv_sec, val1.tv_nsec, (u64)stats->misc_stats.last_ack_time, @@ -446,13 +459,9 @@ int fnic_get_stats_data(struct stats_debug_info *debug, (u64)atomic64_read(&stats->misc_stats.no_icmnd_itmf_cmpls), (u64)atomic64_read(&stats->misc_stats.check_condition), (u64)atomic64_read(&stats->misc_stats.queue_fulls), - (u64)atomic64_read(&stats->misc_stats.rport_not_ready), - (u64)atomic64_read(&stats->misc_stats.frame_errors)); - - len += scnprintf(debug->debug_buffer + len, buf_size - len, - "Firmware reported port speed: %llu\n", - (u64)atomic64_read( - &stats->misc_stats.current_port_speed)); + (u64)atomic64_read(&stats->misc_stats.tport_not_ready), + (u64)atomic64_read(&stats->misc_stats.frame_errors), + (u64)atomic64_read(&stats->misc_stats.port_speed_in_mbps)); return len; @@ -460,8 +469,56 @@ int fnic_get_stats_data(struct stats_debug_info *debug, int fnic_get_debug_info(struct stats_debug_info *info, struct fnic *fnic) { - /* Placeholder function */ - return 0; + struct fnic_iport_s *iport = &fnic->iport; + int buf_size = info->buf_size; + int len = info->buffer_len; + struct fnic_tport_s *tport, *next; + unsigned long flags; + + len += snprintf(info->debug_buffer + len, buf_size - len, + "------------------------------------------\n" + "\t\t Debug Info\n" + "------------------------------------------\n"); + len += snprintf(info->debug_buffer + len, buf_size - len, + "fnic Name:%s number:%d Role:%s State:%s\n", + fnic->name, fnic->fnic_num, + fnic_role_to_str(fnic->role), + fnic_state_to_str(fnic->state)); + len += + snprintf(info->debug_buffer + len, buf_size - len, + 
"iport State:%d Flags:0x%x vlan_id:%d fcid:0x%x\n", + iport->state, iport->flags, iport->vlan_id, iport->fcid); + len += + snprintf(info->debug_buffer + len, buf_size - len, + "usefip:%d fip_state:%d fip_flogi_retry:%d\n", + iport->usefip, iport->fip.state, iport->fip.flogi_retry); + len += + snprintf(info->debug_buffer + len, buf_size - len, + "fpma %02x:%02x:%02x:%02x:%02x:%02x", + iport->fpma[5], iport->fpma[4], iport->fpma[3], + iport->fpma[2], iport->fpma[1], iport->fpma[0]); + len += + snprintf(info->debug_buffer + len, buf_size - len, + "fcfmac %02x:%02x:%02x:%02x:%02x:%02x\n", + iport->fcfmac[5], iport->fcfmac[4], iport->fcfmac[3], + iport->fcfmac[2], iport->fcfmac[1], iport->fcfmac[0]); + len += + snprintf(info->debug_buffer + len, buf_size - len, + "fabric state:%d flags:0x%x retry_counter:%d e_d_tov:%d r_a_tov:%d\n", + iport->fabric.state, iport->fabric.flags, + iport->fabric.retry_counter, iport->e_d_tov, + iport->r_a_tov); + + spin_lock_irqsave(&fnic->fnic_lock, flags); + list_for_each_entry_safe(tport, next, &iport->tport_list, links) { + len += snprintf(info->debug_buffer + len, buf_size - len, + "tport fcid:0x%x state:%d flags:0x%x inflight:%d retry_counter:%d\n", + tport->fcid, tport->state, tport->flags, + atomic_read(&tport->in_flight), + tport->retry_counter); + } + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + return len; } /* @@ -694,7 +751,7 @@ int fnic_fc_trace_set_data(u32 host_no, u8 frame_type, */ if (frame_type == FNIC_FC_RECV) { eth_fcoe_hdr_len = sizeof(struct ethhdr) + - sizeof(struct fcoe_hdr); + sizeof(struct fnic_fcoe_hdr_s); memset((char *)fc_trace, 0xff, eth_fcoe_hdr_len); /* Copy the rest of data frame */ memcpy((char *)(fc_trace + eth_fcoe_hdr_len), (void *)frame, @@ -800,7 +857,7 @@ void copy_and_format_trace_data(struct fc_trace_hdr *tdata, { int j, i = 1, len; int ethhdr_len = sizeof(struct ethhdr) - 1; - int fcoehdr_len = sizeof(struct fcoe_hdr); + int fcoehdr_len = sizeof(struct fnic_fcoe_hdr_s); int fchdr_len = sizeof(struct fc_frame_header); int max_size = fnic_fc_trace_max_pages * PAGE_SIZE * 3; char *fc_trace; From patchwork Wed Oct 2 18:24:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Karan Tilak Kumar X-Patchwork-Id: 832355 Received: from rcdn-iport-6.cisco.com (rcdn-iport-6.cisco.com [173.37.86.77]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D55A22141BB; Wed, 2 Oct 2024 18:31:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=173.37.86.77 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727893904; cv=none; b=CGHGfJz0iEiXFUs53JiNR4uluy1Z7x0+R/eolcxx6hrnA7W623/x86B8G2Hcm52xEkluVa7yupFUxaJxfHJsgbCetvA7BiTsQKoOvBZnTszPyKR4981JoeMRoIvS5uG71i1YK5KVvyM8s4IbbNuSzYkYsOIYxcQspD274fOROP0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727893904; c=relaxed/simple; bh=P5E9ZJI1OLoV6GRocE0mATUBrw1oeuL8iFJuf+baB/c=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=DMVEIpPLABMZYEUOrNpqRLMh68QpMgopaMe/HAMlQP8nkSMn2o/vbFbzxxLjFdco1GimDSYP8mTuUj8KP7LAn6xE5AeVPBVxfLjAKC9k8dDzr0RhN3oiAxmBVFBxt3kQ/bvX66ZpNAbcDWVRQO2V4N53LJZON9BdtVM8inv6Yhs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=cisco.com; spf=pass smtp.mailfrom=cisco.com; dkim=pass (1024-bit key) header.d=cisco.com header.i=@cisco.com 
header.b=m/dTl4m5; arc=none smtp.client-ip=173.37.86.77 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=cisco.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=cisco.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=cisco.com header.i=@cisco.com header.b="m/dTl4m5" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cisco.com; i=@cisco.com; l=11009; q=dns/txt; s=iport; t=1727893903; x=1729103503; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=DpbmHys3I7Jy/90z/2w+QF1kYz6E0NZTRLq5C/o4fA4=; b=m/dTl4m5X/b2sPkTk5/7R3xNxEsf7YWfOboFNvFtp/bjlmWdaWVI4iio T/+iE9W7xnXBXA+pTCRSw0XxqVtXN8hBTVu8y3driGWxaV2s0ZgZM960P EvJmDueFWV1Dtar3p1Kk5AaHYovyYroLAClUqrGCjMVFsmjL92tSYM0oP M=; X-CSE-ConnectionGUID: ODlYHfyfRNW1twN4L0iWOw== X-CSE-MsgGUID: DTj6yqzRRuSsBGzuj+iGnQ== X-IronPort-AV: E=Sophos;i="6.11,172,1725321600"; d="scan'208";a="268950566" Received: from rcdn-core-8.cisco.com ([173.37.93.144]) by rcdn-iport-6.cisco.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Oct 2024 18:31:41 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by rcdn-core-8.cisco.com (8.15.2/8.15.2) with ESMTPSA id 492IOHQw009807 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Wed, 2 Oct 2024 18:31:41 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, aeasi@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar Subject: [PATCH v4 13/14] scsi: fnic: Add support to handle port channel RSCN Date: Wed, 2 Oct 2024 11:24:09 -0700 Message-Id: <20241002182410.68093-14-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20241002182410.68093-1-kartilak@cisco.com> References: <20241002182410.68093-1-kartilak@cisco.com> Precedence: bulk X-Mailing-List: linux-scsi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: rcdn-core-8.cisco.com Add support to handle port channel RSCN. Port channel RSCN is a Cisco vendor specific RSCN event. It is applicable only to Cisco UCS fabrics. If there's a change in the port channel configuration, an RSCN is sent to fnic. This is used to serially reset the SCSI initiator fnics so that there's no all paths down scenario. The affected fnics are added to a list and are reset with a small time gap between them. Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Reviewed-by: Gian Carlo Boffa Signed-off-by: Karan Tilak Kumar --- Changes between v2 and v3: Incorporate review comments from Hannes from other patches: Replace redundant definitions with standard definitions. Changes between v1 and v2: Replace pr_err with printk. Incorporate review comments from Hannes from other patches: Replace pr_err with dev_err.
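For readers who want to see the serialization idea in isolation before reading the hunks: the stand-alone, kernel-module-style sketch below mirrors the pattern described above (a spinlock-protected list of affected adapters drained by a single-threaded workqueue, with a per-adapter flag so the same adapter is not queued twice). It is an illustration only, not fnic code; every demo_* identifier is invented for this example, and the actual implementation is in the hunks that follow the diffstat.

/*
 * Minimal sketch of the serialized-reset pattern.  Not fnic code; all
 * demo_* names are made up for illustration.
 */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/delay.h>

struct demo_dev {
	int id;
	bool reset_pending;
	struct list_head links;
};

static LIST_HEAD(demo_reset_list);
static DEFINE_SPINLOCK(demo_reset_list_lock);
static struct workqueue_struct *demo_reset_wq;	/* single threaded on purpose */
static struct work_struct demo_reset_work;
static struct demo_dev demo_dev0 = { .id = 0 };

static void demo_reset_work_handler(struct work_struct *work)
{
	struct demo_dev *dev;
	unsigned long flags;

	/* Drain the list one entry at a time; resets never overlap. */
	spin_lock_irqsave(&demo_reset_list_lock, flags);
	while ((dev = list_first_entry_or_null(&demo_reset_list,
					       struct demo_dev, links))) {
		list_del(&dev->links);
		spin_unlock_irqrestore(&demo_reset_list_lock, flags);

		pr_info("demo: resetting adapter %d\n", dev->id);
		msleep(100);	/* stands in for a host reset plus the gap */

		spin_lock_irqsave(&demo_reset_list_lock, flags);
		dev->reset_pending = false;
	}
	spin_unlock_irqrestore(&demo_reset_list_lock, flags);
}

/* Event path: queue an adapter for reset unless one is already pending. */
static void demo_queue_reset(struct demo_dev *dev)
{
	unsigned long flags;

	spin_lock_irqsave(&demo_reset_list_lock, flags);
	if (dev->reset_pending) {
		spin_unlock_irqrestore(&demo_reset_list_lock, flags);
		return;
	}
	dev->reset_pending = true;
	list_add_tail(&dev->links, &demo_reset_list);
	spin_unlock_irqrestore(&demo_reset_list_lock, flags);

	queue_work(demo_reset_wq, &demo_reset_work);
}

static int __init demo_init(void)
{
	demo_reset_wq = create_singlethread_workqueue("demo_reset_wq");
	if (!demo_reset_wq)
		return -ENOMEM;
	INIT_WORK(&demo_reset_work, demo_reset_work_handler);
	demo_queue_reset(&demo_dev0);	/* usage example */
	return 0;
}

static void __exit demo_exit(void)
{
	destroy_workqueue(demo_reset_wq);	/* flushes any queued work */
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

The single-threaded workqueue is what keeps the resets strictly serial; the spinlock only protects the list itself, and the msleep() stands in for the host reset and the small gap between consecutive resets.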
---
 drivers/scsi/fnic/fdls_disc.c | 62 ++++++++++++++++++++++++++++-------
 drivers/scsi/fnic/fnic.h      | 25 ++++++++++++++
 drivers/scsi/fnic/fnic_fcs.c  | 35 ++++++++++++++++++++
 drivers/scsi/fnic/fnic_main.c | 30 +++++++++++++++++
 4 files changed, 140 insertions(+), 12 deletions(-)

diff --git a/drivers/scsi/fnic/fdls_disc.c b/drivers/scsi/fnic/fdls_disc.c
index 383f61670ea7..d721b014c088 100644
--- a/drivers/scsi/fnic/fdls_disc.c
+++ b/drivers/scsi/fnic/fdls_disc.c
@@ -3904,6 +3904,9 @@ fdls_process_rscn(struct fnic_iport_s *iport, struct fc_frame_header *fchdr)
 	int newports = 0;
 	struct fnic_fdls_fabric_s *fdls = &iport->fabric;
 	struct fnic *fnic = iport->fnic;
+	int rscn_type = NOT_PC_RSCN;
+	uint32_t sid = ntoh24(fchdr->fh_s_id);
+	unsigned long reset_fnic_list_lock_flags = 0;
 	uint16_t rscn_payload_len;
 
 	atomic64_inc(&iport->iport_stats.num_rscns);
@@ -3926,16 +3929,24 @@ fdls_process_rscn(struct fnic_iport_s *iport, struct fc_frame_header *fchdr)
 	    || (rscn_payload_len > 1024)
 	    || (rscn->els.rscn_page_len != 4)) {
 		num_ports = 0;
-		FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
-			"RSCN payload_len: 0x%x page_len: 0x%x",
-			rscn_payload_len, rscn->els.rscn_page_len);
-		/* if this happens then we need to send ADISC to all the tports. */
-		list_for_each_entry_safe(tport, next, &iport->tport_list, links) {
-			if (tport->state == FDLS_TGT_STATE_READY)
-				tport->flags |= FNIC_FDLS_TPORT_SEND_ADISC;
+		if ((rscn_payload_len == 0xFFFF)
+			&& (sid == FC_FABRIC_CONTROLLER)) {
+			rscn_type = PC_RSCN;
 			FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
-				"RSCN for port id: 0x%x", tport->fcid);
-		}
+				"pcrscn: PCRSCN received. sid: 0x%x payload len: 0x%x",
+				sid, rscn_payload_len);
+		} else {
+			FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
+				"RSCN payload_len: 0x%x page_len: 0x%x",
+				rscn_payload_len, rscn->els.rscn_page_len);
+			/* if this happens then we need to send ADISC to all the tports. */
+			list_for_each_entry_safe(tport, next, &iport->tport_list, links) {
+				if (tport->state == FDLS_TGT_STATE_READY)
+					tport->flags |= FNIC_FDLS_TPORT_SEND_ADISC;
+				FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
+					"RSCN for port id: 0x%x", tport->fcid);
+			}
+		} /* end else */
 	} else {
 		num_ports = (rscn_payload_len - 4) / rscn->els.rscn_page_len;
 		rscn_port = (struct fc_els_rscn_page *)(rscn + 1);
@@ -3982,10 +3993,37 @@ fdls_process_rscn(struct fnic_iport_s *iport, struct fc_frame_header *fchdr)
 				tport->flags |= FNIC_FDLS_TPORT_SEND_ADISC;
 	}
 
-	FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
+	if (pc_rscn_handling_feature_flag == PC_RSCN_HANDLING_FEATURE_ON &&
+		rscn_type == PC_RSCN && fnic->role == FNIC_ROLE_FCP_INITIATOR) {
+
+		if (fnic->pc_rscn_handling_status == PC_RSCN_HANDLING_IN_PROGRESS) {
+			FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
+				"PCRSCN handling already in progress. Skip host reset: %d",
+				iport->fnic->fnic_num);
+			return;
+		}
+
+		FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
+			"Processing PCRSCN. Queuing fnic for host reset: %d",
+			iport->fnic->fnic_num);
+		fnic->pc_rscn_handling_status = PC_RSCN_HANDLING_IN_PROGRESS;
+
+		spin_unlock_irqrestore(&fnic->fnic_lock, fnic->lock_flags);
+
+		spin_lock_irqsave(&reset_fnic_list_lock,
+			reset_fnic_list_lock_flags);
+		list_add_tail(&fnic->links, &reset_fnic_list);
+		spin_unlock_irqrestore(&reset_fnic_list_lock,
+			reset_fnic_list_lock_flags);
+
+		queue_work(reset_fnic_work_queue, &reset_fnic_work);
+		spin_lock_irqsave(&fnic->fnic_lock, fnic->lock_flags);
+	} else {
+		FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
 		"FDLS process RSCN sending GPN_FT: newports: %d", newports);
-	fdls_send_gpn_ft(iport, FDLS_STATE_RSCN_GPN_FT);
-	fdls_send_rscn_resp(iport, fchdr);
+		fdls_send_gpn_ft(iport, FDLS_STATE_RSCN_GPN_FT);
+		fdls_send_rscn_resp(iport, fchdr);
+	}
 }
 
 void fnic_fdls_disc_start(struct fnic_iport_s *iport)
diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h
index 5ad9d6df92a8..5fd506fcea35 100644
--- a/drivers/scsi/fnic/fnic.h
+++ b/drivers/scsi/fnic/fnic.h
@@ -211,11 +211,33 @@ enum reset_states {
 	RESET_ERROR
 };
 
+enum rscn_type {
+	NOT_PC_RSCN = 0,
+	PC_RSCN
+};
+
+enum pc_rscn_handling_status {
+	PC_RSCN_HANDLING_NOT_IN_PROGRESS = 0,
+	PC_RSCN_HANDLING_IN_PROGRESS
+};
+
+enum pc_rscn_handling_feature {
+	PC_RSCN_HANDLING_FEATURE_OFF = 0,
+	PC_RSCN_HANDLING_FEATURE_ON
+};
+
 extern unsigned int fnic_fdmi_support;
 extern unsigned int fnic_log_level;
 extern unsigned int io_completions;
 extern struct workqueue_struct *fnic_event_queue;
 
+extern unsigned int pc_rscn_handling_feature_flag;
+extern spinlock_t reset_fnic_list_lock;
+extern struct list_head reset_fnic_list;
+extern struct workqueue_struct *reset_fnic_work_queue;
+extern struct work_struct reset_fnic_work;
+
+
 #define FNIC_MAIN_LOGGING 0x01
 #define FNIC_FCS_LOGGING 0x02
 #define FNIC_SCSI_LOGGING 0x04
@@ -396,6 +418,7 @@ struct fnic {
 	int link_status;
 
 	struct list_head list;
+	struct list_head links;
 	struct pci_dev *pdev;
 	struct vnic_fc_config config;
 	struct vnic_dev *vdev;
@@ -422,6 +445,7 @@ struct fnic {
 
 	char subsys_desc[14];
 	int subsys_desc_len;
+	int pc_rscn_handling_status;
 
 	/*** FIP related data members -- start ***/
 	void (*set_vlan)(struct fnic *, u16 vlan);
@@ -503,6 +527,7 @@ void fnic_stats_debugfs_remove(struct fnic *fnic);
 int fnic_is_abts_pending(struct fnic *, struct scsi_cmnd *);
 
 void fnic_handle_fip_frame(struct work_struct *work);
+void fnic_reset_work_handler(struct work_struct *work);
 void fnic_handle_fip_event(struct fnic *fnic);
 void fnic_fcoe_reset_vlans(struct fnic *fnic);
 extern void fnic_handle_fip_timer(struct timer_list *t);
diff --git a/drivers/scsi/fnic/fnic_fcs.c b/drivers/scsi/fnic/fnic_fcs.c
index 314ed3dd0a7a..018d9e34eaa5 100644
--- a/drivers/scsi/fnic/fnic_fcs.c
+++ b/drivers/scsi/fnic/fnic_fcs.c
@@ -1109,4 +1109,39 @@ void fnic_flush_tport_event_list(struct fnic *fnic)
 
 	spin_unlock_irqrestore(&fnic->fnic_lock, flags);
 }
 
+void fnic_reset_work_handler(struct work_struct *work)
+{
+	struct fnic *cur_fnic, *next_fnic;
+	unsigned long reset_fnic_list_lock_flags;
+	int host_reset_ret_code;
+
+	/*
+	 * This is a single thread. It is per fnic module, not per fnic.
+	 * All the fnics that need to be reset
+	 * have been serialized via the reset fnic list.
+	 */
+	spin_lock_irqsave(&reset_fnic_list_lock, reset_fnic_list_lock_flags);
+	list_for_each_entry_safe(cur_fnic, next_fnic, &reset_fnic_list, links) {
+		list_del(&cur_fnic->links);
+		spin_unlock_irqrestore(&reset_fnic_list_lock,
+			reset_fnic_list_lock_flags);
+
+		dev_err(&cur_fnic->pdev->dev, "fnic: <%d>: issuing a host reset\n",
+			cur_fnic->fnic_num);
+		host_reset_ret_code = fnic_host_reset(cur_fnic->host);
+		dev_err(&cur_fnic->pdev->dev,
+			"fnic: <%d>: returned from host reset with status: %d\n",
+			cur_fnic->fnic_num, host_reset_ret_code);
+
+		spin_lock_irqsave(&cur_fnic->fnic_lock, cur_fnic->lock_flags);
+		cur_fnic->pc_rscn_handling_status =
+			PC_RSCN_HANDLING_NOT_IN_PROGRESS;
+		spin_unlock_irqrestore(&cur_fnic->fnic_lock, cur_fnic->lock_flags);
+
+		spin_lock_irqsave(&reset_fnic_list_lock,
+			reset_fnic_list_lock_flags);
+	}
+	spin_unlock_irqrestore(&reset_fnic_list_lock,
+		reset_fnic_list_lock_flags);
+}
diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c
index 8fcfcaf8a527..daa1abf7dbea 100644
--- a/drivers/scsi/fnic/fnic_main.c
+++ b/drivers/scsi/fnic/fnic_main.c
@@ -44,6 +44,10 @@ static LIST_HEAD(fnic_list);
 static DEFINE_SPINLOCK(fnic_list_lock);
 static DEFINE_IDA(fnic_ida);
 
+struct work_struct reset_fnic_work;
+LIST_HEAD(reset_fnic_list);
+DEFINE_SPINLOCK(reset_fnic_list_lock);
+
 /* Supported devices by fnic module */
 static struct pci_device_id fnic_id_table[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_CISCO, PCI_DEVICE_ID_CISCO_FNIC) },
@@ -88,6 +92,12 @@ static unsigned int fnic_max_qdepth = FNIC_DFLT_QUEUE_DEPTH;
 module_param(fnic_max_qdepth, uint, S_IRUGO|S_IWUSR);
 MODULE_PARM_DESC(fnic_max_qdepth, "Queue depth to report for each LUN");
 
+unsigned int pc_rscn_handling_feature_flag = PC_RSCN_HANDLING_FEATURE_ON;
+module_param(pc_rscn_handling_feature_flag, uint, 0644);
+MODULE_PARM_DESC(pc_rscn_handling_feature_flag,
+	"PCRSCN handling (0 for none. 1 to handle PCRSCN (default))");
+
+struct workqueue_struct *reset_fnic_work_queue;
 struct workqueue_struct *fnic_fip_queue;
 
 static int fnic_slave_alloc(struct scsi_device *sdev)
@@ -1223,6 +1233,19 @@ static int __init fnic_init_module(void)
 		goto err_create_fip_workq;
 	}
 
+	if (pc_rscn_handling_feature_flag == PC_RSCN_HANDLING_FEATURE_ON) {
+		reset_fnic_work_queue =
+			create_singlethread_workqueue("reset_fnic_work_queue");
+		if (!reset_fnic_work_queue) {
+			pr_err("reset fnic work queue create failed\n");
+			err = -ENOMEM;
+			goto err_create_reset_fnic_workq;
+		}
+		spin_lock_init(&reset_fnic_list_lock);
+		INIT_LIST_HEAD(&reset_fnic_list);
+		INIT_WORK(&reset_fnic_work, fnic_reset_work_handler);
+	}
+
 	fnic_fc_transport = fc_attach_transport(&fnic_fc_functions);
 	if (!fnic_fc_transport) {
 		printk(KERN_ERR PFX "fc_attach_transport error\n");
@@ -1243,6 +1266,9 @@ static int __init fnic_init_module(void)
 err_fc_transport:
 	destroy_workqueue(fnic_fip_queue);
 err_create_fip_workq:
+	if (pc_rscn_handling_feature_flag == PC_RSCN_HANDLING_FEATURE_ON)
+		destroy_workqueue(reset_fnic_work_queue);
+err_create_reset_fnic_workq:
 	destroy_workqueue(fnic_event_queue);
 err_create_fnic_workq:
 	kmem_cache_destroy(fnic_io_req_cache);
@@ -1261,6 +1287,10 @@ static void __exit fnic_cleanup_module(void)
 {
 	pci_unregister_driver(&fnic_driver);
 	destroy_workqueue(fnic_event_queue);
+
+	if (pc_rscn_handling_feature_flag == PC_RSCN_HANDLING_FEATURE_ON)
+		destroy_workqueue(reset_fnic_work_queue);
+
 	if (fnic_fip_queue) {
 		flush_workqueue(fnic_fip_queue);
 		destroy_workqueue(fnic_fip_queue);