From patchwork Thu Aug 17 13:12:24 2023
X-Patchwork-Submitter: Don Brace
X-Patchwork-Id: 715347
From: Don Brace
Subject: [PATCH 1/9] smartpqi: reformat to align with oob driver
Date: Thu, 17 Aug 2023 08:12:24 -0500
Message-ID: <20230817131232.86754-2-don.brace@microchip.com>
X-Mailer: git-send-email 2.42.0.rc2
In-Reply-To: <20230817131232.86754-1-don.brace@microchip.com>
References: <20230817131232.86754-1-don.brace@microchip.com>
X-Mailing-List: linux-scsi@vger.kernel.org

From: Kevin Barnett

Align with our oob driver to simplify patch management.

No functional changes.

Reviewed-by: Justin Lindley
Reviewed-by: Scott Teel
Reviewed-by: Scott Benesh
Reviewed-by: Mike McGowen
Signed-off-by: Kevin Barnett
Signed-off-by: Don Brace
---
 drivers/scsi/smartpqi/smartpqi_init.c | 1598 +++++++++----------
 1 file changed, 540 insertions(+), 1058 deletions(-)

diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
index 6aaaa7ebca37..4486259f85ab 100644
--- a/drivers/scsi/smartpqi/smartpqi_init.c
+++ b/drivers/scsi/smartpqi/smartpqi_init.c
@@ -527,8 +527,7 @@ static inline bool pqi_is_io_high_priority(struct pqi_scsi_dev *device, struct s
 	io_high_prio = false;
 
 	if (device->ncq_prio_enable) {
-		priority_class =
-			IOPRIO_PRIO_CLASS(req_get_ioprio(scsi_cmd_to_rq(scmd)));
+		priority_class = IOPRIO_PRIO_CLASS(req_get_ioprio(scsi_cmd_to_rq(scmd)));
 		if (priority_class == IOPRIO_CLASS_RT) {
 			/* Set NCQ priority for read/write commands.
 			 */
 			switch (scmd->cmnd[0]) {
@@ -558,8 +557,7 @@ static int pqi_map_single(struct pci_dev *pci_dev,
 	if (!buffer || buffer_length == 0 || data_direction == DMA_NONE)
 		return 0;
 
-	bus_address = dma_map_single(&pci_dev->dev, buffer, buffer_length,
-		data_direction);
+	bus_address = dma_map_single(&pci_dev->dev, buffer, buffer_length, data_direction);
 	if (dma_mapping_error(&pci_dev->dev, bus_address))
 		return -ENOMEM;
 
@@ -939,8 +937,7 @@ static int pqi_flush_cache(struct pqi_ctrl_info *ctrl_info,
 
 	flush_cache->shutdown_event = shutdown_event;
 
-	rc = pqi_send_ctrl_raid_request(ctrl_info, SA_FLUSH_CACHE, flush_cache,
-		sizeof(*flush_cache));
+	rc = pqi_send_ctrl_raid_request(ctrl_info, SA_FLUSH_CACHE, flush_cache, sizeof(*flush_cache));
 
 	kfree(flush_cache);
 
@@ -1197,7 +1194,7 @@ static int pqi_report_phys_logical_luns(struct pqi_ctrl_info *ctrl_info, u8 cmd,
 	return rc;
 }
 
-static inline int pqi_report_phys_luns(struct pqi_ctrl_info *ctrl_info, void **buffer)
+static int pqi_report_phys_luns(struct pqi_ctrl_info *ctrl_info, void **buffer)
 {
 	int rc;
 	unsigned int i;
@@ -1292,20 +1289,16 @@ static int pqi_get_device_lists(struct pqi_ctrl_info *ctrl_info,
 	logdev_data = *logdev_list;
 
 	if (logdev_data) {
-		logdev_list_length =
-			get_unaligned_be32(&logdev_data->header.list_length);
+		logdev_list_length = get_unaligned_be32(&logdev_data->header.list_length);
 	} else {
 		memset(&report_lun_header, 0, sizeof(report_lun_header));
-		logdev_data =
-			(struct report_log_lun_list *)&report_lun_header;
+		logdev_data = (struct report_log_lun_list *)&report_lun_header;
 		logdev_list_length = 0;
 	}
 
-	logdev_data_length = sizeof(struct report_lun_header) +
-		logdev_list_length;
+	logdev_data_length = sizeof(struct report_lun_header) + logdev_list_length;
 
-	internal_logdev_list = kmalloc(logdev_data_length +
-		sizeof(struct report_log_lun), GFP_KERNEL);
+	internal_logdev_list = kmalloc(logdev_data_length + sizeof(struct report_log_lun), GFP_KERNEL);
 	if (!internal_logdev_list) {
 		kfree(*logdev_list);
 		*logdev_list = NULL;
@@ -1313,10 +1306,8 @@ static int pqi_get_device_lists(struct pqi_ctrl_info *ctrl_info,
 	}
 
 	memcpy(internal_logdev_list, logdev_data, logdev_data_length);
-	memset((u8 *)internal_logdev_list + logdev_data_length, 0,
-		sizeof(struct report_log_lun));
-	put_unaligned_be32(logdev_list_length +
-		sizeof(struct report_log_lun),
+	memset((u8 *)internal_logdev_list + logdev_data_length, 0, sizeof(struct report_log_lun));
+	put_unaligned_be32(logdev_list_length + sizeof(struct report_log_lun),
 		&internal_logdev_list->header.list_length);
 
 	kfree(*logdev_list);
@@ -1445,8 +1436,7 @@ static int pqi_validate_raid_map(struct pqi_ctrl_info *ctrl_info,
 	return -EINVAL;
 }
 
-static int pqi_get_raid_map(struct pqi_ctrl_info *ctrl_info,
-	struct pqi_scsi_dev *device)
+static int pqi_get_raid_map(struct pqi_ctrl_info *ctrl_info, struct pqi_scsi_dev *device)
 {
 	int rc;
 	u32 raid_map_size;
@@ -1624,8 +1614,7 @@ static int pqi_get_physical_device_info(struct pqi_ctrl_info *ctrl_info,
 	memset(id_phys, 0, sizeof(*id_phys));
 
-	rc = pqi_identify_physical_device(ctrl_info, device,
-		id_phys, sizeof(*id_phys));
+	rc = pqi_identify_physical_device(ctrl_info, device, id_phys, sizeof(*id_phys));
 	if (rc) {
 		device->queue_depth = PQI_PHYSICAL_DISK_DEFAULT_MAX_QUEUE_DEPTH;
 		return rc;
@@ -1640,8 +1629,7 @@ static int pqi_get_physical_device_info(struct pqi_ctrl_info *ctrl_info,
 	device->box_index = id_phys->box_index;
 	device->phys_box_on_bus = id_phys->phys_box_on_bus;
 	device->phy_connected_dev_type = id_phys->phy_connected_dev_type[0];
-	device->queue_depth =
-		get_unaligned_le16(&id_phys->current_queue_depth_limit);
+	device->queue_depth = get_unaligned_le16(&id_phys->current_queue_depth_limit);
 	device->active_path_index = id_phys->active_path_number;
 	device->path_map = id_phys->redundant_path_present_map;
 	memcpy(&device->box,
@@ -1652,10 +1640,8 @@ static int pqi_get_physical_device_info(struct pqi_ctrl_info *ctrl_info,
 		sizeof(device->phys_connector));
 	device->bay =
 		id_phys->phys_bay_in_box;
 	device->lun_count = id_phys->multi_lun_device_lun_count;
-	if ((id_phys->even_more_flags & PQI_DEVICE_PHY_MAP_SUPPORTED) &&
-		id_phys->phy_count)
-		device->phy_id =
-			id_phys->phy_to_phy_map[device->active_path_index];
+	if ((id_phys->even_more_flags & PQI_DEVICE_PHY_MAP_SUPPORTED) && id_phys->phy_count)
+		device->phy_id = id_phys->phy_to_phy_map[device->active_path_index];
 	else
 		device->phy_id = 0xFF;
@@ -1758,8 +1744,7 @@ static void pqi_show_volume_status(struct pqi_ctrl_info *ctrl_info,
 	struct pqi_scsi_dev *device)
 {
 	char *status;
-	static const char unknown_state_str[] =
-		"Volume is in an unknown state (%u)";
+	static const char unknown_state_str[] = "Volume is in an unknown state (%u)";
 	char unknown_state_buffer[sizeof(unknown_state_str) + 10];
 
 	switch (device->volume_status) {
@@ -2281,7 +2266,6 @@ static void pqi_update_device_list(struct pqi_ctrl_info *ctrl_info,
 	}
 
 	ctrl_info->logical_volume_rescan_needed = false;
-
 }
 
 static inline bool pqi_is_supported_device(struct pqi_scsi_dev *device)
@@ -2394,15 +2378,12 @@ static int pqi_update_scsi_devices(struct pqi_ctrl_info *ctrl_info)
 		}
 	}
 
-	if (num_logicals &&
-		(logdev_list->header.flags & CISS_REPORT_LOG_FLAG_DRIVE_TYPE_MIX))
+	if (num_logicals && (logdev_list->header.flags & CISS_REPORT_LOG_FLAG_DRIVE_TYPE_MIX))
 		ctrl_info->lv_drive_type_mix_valid = true;
 
 	num_new_devices = num_physicals + num_logicals;
 
-	new_device_list = kmalloc_array(num_new_devices,
-		sizeof(*new_device_list),
-		GFP_KERNEL);
+	new_device_list = kmalloc_array(num_new_devices, sizeof(*new_device_list), GFP_KERNEL);
 	if (!new_device_list) {
 		dev_warn(&ctrl_info->pci_dev->dev, "%s\n", out_of_memory_msg);
 		rc = -ENOMEM;
@@ -2412,13 +2393,11 @@ static int pqi_update_scsi_devices(struct pqi_ctrl_info *ctrl_info)
 	for (i = 0; i < num_new_devices; i++) {
 		device = kzalloc(sizeof(*device), GFP_KERNEL);
 		if (!device) {
-			dev_warn(&ctrl_info->pci_dev->dev, "%s\n",
-				out_of_memory_msg);
+			dev_warn(&ctrl_info->pci_dev->dev, "%s\n", out_of_memory_msg);
 			rc = -ENOMEM;
 			goto out;
 		}
-		list_add_tail(&device->new_device_list_entry,
-			&new_device_list_head);
+		list_add_tail(&device->new_device_list_entry, &new_device_list_head);
 	}
 
 	device = NULL;
@@ -2457,8 +2436,7 @@ static int pqi_update_scsi_devices(struct pqi_ctrl_info *ctrl_info)
 		if (device->device_type == SA_DEVICE_TYPE_EXPANDER_SMP)
 			device->is_expander_smp_device = true;
 	} else {
-		device->is_external_raid_device =
-			pqi_is_external_raid_addr(scsi3addr);
+		device->is_external_raid_device = pqi_is_external_raid_addr(scsi3addr);
 	}
 
 	if (!pqi_is_supported_device(device))
@@ -2467,8 +2445,7 @@ static int pqi_update_scsi_devices(struct pqi_ctrl_info *ctrl_info)
 	/* Gather information about the device. */
 	rc = pqi_get_device_info(ctrl_info, device, id_phys);
 	if (rc == -ENOMEM) {
-		dev_warn(&ctrl_info->pci_dev->dev, "%s\n",
-			out_of_memory_msg);
+		dev_warn(&ctrl_info->pci_dev->dev, "%s\n", out_of_memory_msg);
 		goto out;
 	}
 	if (rc) {
@@ -2494,16 +2471,12 @@ static int pqi_update_scsi_devices(struct pqi_ctrl_info *ctrl_info)
 	if (device->is_physical_device) {
 		memcpy(device->wwid, phys_lun->wwid, sizeof(device->wwid));
-		if ((phys_lun->device_flags &
-			CISS_REPORT_PHYS_DEV_FLAG_AIO_ENABLED) &&
-			phys_lun->aio_handle) {
-			device->aio_enabled = true;
-			device->aio_handle =
-				phys_lun->aio_handle;
+		if ((phys_lun->device_flags & CISS_REPORT_PHYS_DEV_FLAG_AIO_ENABLED) && phys_lun->aio_handle) {
+			device->aio_enabled = true;
+			device->aio_handle = phys_lun->aio_handle;
 		}
 	} else {
-		memcpy(device->volume_id, log_lun->volume_id,
-			sizeof(device->volume_id));
+		memcpy(device->volume_id, log_lun->volume_id, sizeof(device->volume_id));
 	}
 
 	device->sas_address = get_unaligned_be64(&device->wwid[0]);
@@ -2514,8 +2487,7 @@ static int pqi_update_scsi_devices(struct pqi_ctrl_info *ctrl_info)
 	pqi_update_device_list(ctrl_info, new_device_list, num_valid_devices);
 
 out:
-
 	list_for_each_entry_safe(device, next, &new_device_list_head, new_device_list_entry) {
 		if (device->keep_device)
 			continue;
 		list_del(&device->new_device_list_entry);
@@ -2591,8 +2563,7 @@ static inline void pqi_set_encryption_info(struct pqi_encryption_info *encryptio
 	if (volume_blk_size != 512)
 		first_block = (first_block * volume_blk_size) / 512;
 
-	encryption_info->data_encryption_key_index =
-		get_unaligned_le16(&raid_map->data_encryption_key_index);
+	encryption_info->data_encryption_key_index = get_unaligned_le16(&raid_map->data_encryption_key_index);
 	encryption_info->encrypt_tweak_lower = lower_32_bits(first_block);
 	encryption_info->encrypt_tweak_upper = upper_32_bits(first_block);
 }
@@ -2695,13 +2666,11 @@ static int pci_get_aio_common_raid_map_values(struct pqi_ctrl_info *ctrl_info,
 	rmd->last_block = rmd->first_block + rmd->block_cnt - 1;
 
 	/* Check for invalid block or wraparound. */
-	if (rmd->last_block >=
-		get_unaligned_le64(&raid_map->volume_blk_cnt) ||
+	if (rmd->last_block >= get_unaligned_le64(&raid_map->volume_blk_cnt) ||
 		rmd->last_block < rmd->first_block)
 		return PQI_RAID_BYPASS_INELIGIBLE;
 
-	rmd->data_disks_per_row =
-		get_unaligned_le16(&raid_map->data_disks_per_row);
+	rmd->data_disks_per_row = get_unaligned_le16(&raid_map->data_disks_per_row);
 	rmd->strip_size = get_unaligned_le16(&raid_map->strip_size);
 	rmd->layout_map_count = get_unaligned_le16(&raid_map->layout_map_count);
@@ -2727,27 +2696,22 @@ static int pci_get_aio_common_raid_map_values(struct pqi_ctrl_info *ctrl_info,
 #else
 	rmd->first_row = rmd->first_block / rmd->blocks_per_row;
 	rmd->last_row = rmd->last_block / rmd->blocks_per_row;
-	rmd->first_row_offset = (u32)(rmd->first_block -
-		(rmd->first_row * rmd->blocks_per_row));
-	rmd->last_row_offset = (u32)(rmd->last_block - (rmd->last_row *
-		rmd->blocks_per_row));
+	rmd->first_row_offset = (u32)(rmd->first_block - (rmd->first_row * rmd->blocks_per_row));
+	rmd->last_row_offset = (u32)(rmd->last_block - (rmd->last_row * rmd->blocks_per_row));
 	rmd->first_column = rmd->first_row_offset / rmd->strip_size;
 	rmd->last_column = rmd->last_row_offset / rmd->strip_size;
 #endif
 
 	/* If this isn't a single row/column then give to the controller. */
-	if (rmd->first_row != rmd->last_row ||
-		rmd->first_column != rmd->last_column)
+	if (rmd->first_row != rmd->last_row || rmd->first_column != rmd->last_column)
 		return PQI_RAID_BYPASS_INELIGIBLE;
 
 	/* Proceeding with driver mapping. */
 	rmd->total_disks_per_row = rmd->data_disks_per_row +
 		get_unaligned_le16(&raid_map->metadata_disks_per_row);
-	rmd->map_row = ((u32)(rmd->first_row >>
-		raid_map->parity_rotation_shift)) %
+	rmd->map_row = ((u32)(rmd->first_row >> raid_map->parity_rotation_shift)) %
 		get_unaligned_le16(&raid_map->row_cnt);
-	rmd->map_index = (rmd->map_row * rmd->total_disks_per_row) +
-		rmd->first_column;
+	rmd->map_index = (rmd->map_row * rmd->total_disks_per_row) + rmd->first_column;
 
 	return 0;
 }
@@ -2819,15 +2783,12 @@ static int pqi_calc_aio_r5_or_r6(struct pqi_scsi_dev_raid_map_data *rmd,
 	rmd->r5or6_last_column = tmpdiv;
 #else
 	rmd->first_row_offset = rmd->r5or6_first_row_offset =
-		(u32)((rmd->first_block % rmd->stripesize) %
-		rmd->blocks_per_row);
+		(u32)((rmd->first_block % rmd->stripesize) % rmd->blocks_per_row);
 
 	rmd->r5or6_last_row_offset =
-		(u32)((rmd->last_block % rmd->stripesize) %
-		rmd->blocks_per_row);
+		(u32)((rmd->last_block % rmd->stripesize) % rmd->blocks_per_row);
 
-	rmd->first_column =
-		rmd->r5or6_first_row_offset / rmd->strip_size;
+	rmd->first_column = rmd->r5or6_first_row_offset / rmd->strip_size;
 	rmd->r5or6_first_column = rmd->first_column;
 	rmd->r5or6_last_column = rmd->r5or6_last_row_offset / rmd->strip_size;
 #endif
@@ -2835,13 +2796,10 @@ static int pqi_calc_aio_r5_or_r6(struct pqi_scsi_dev_raid_map_data *rmd,
 		return PQI_RAID_BYPASS_INELIGIBLE;
 
 	/* Request is eligible.
 	 */
-	rmd->map_row =
-		((u32)(rmd->first_row >> raid_map->parity_rotation_shift)) %
+	rmd->map_row = ((u32)(rmd->first_row >> raid_map->parity_rotation_shift)) %
 		get_unaligned_le16(&raid_map->row_cnt);
-	rmd->map_index = (rmd->first_group *
-		(get_unaligned_le16(&raid_map->row_cnt) *
-		rmd->total_disks_per_row)) +
+	rmd->map_index = (rmd->first_group * (get_unaligned_le16(&raid_map->row_cnt) * rmd->total_disks_per_row)) +
 		(rmd->map_row * rmd->total_disks_per_row) + rmd->first_column;
 
 	if (rmd->is_write) {
@@ -2949,8 +2907,7 @@ static int pqi_raid_bypass_submit_scsi_cmd(struct pqi_ctrl_info *ctrl_info,
 	if (rc)
 		return PQI_RAID_BYPASS_INELIGIBLE;
 
-	if (device->raid_level == SA_RAID_1 ||
-		device->raid_level == SA_RAID_TRIPLE) {
+	if (device->raid_level == SA_RAID_1 || device->raid_level == SA_RAID_TRIPLE) {
 		if (rmd.is_write) {
 			pqi_calc_aio_r1_nexus(raid_map, &rmd);
 		} else {
@@ -2961,8 +2918,7 @@ static int pqi_raid_bypass_submit_scsi_cmd(struct pqi_ctrl_info *ctrl_info,
 			device->next_bypass_group[rmd.map_index] = next_bypass_group;
 			rmd.map_index += group * rmd.data_disks_per_row;
 		}
-	} else if ((device->raid_level == SA_RAID_5 ||
-		device->raid_level == SA_RAID_6) &&
+	} else if ((device->raid_level == SA_RAID_5 || device->raid_level == SA_RAID_6) &&
 		(rmd.layout_map_count > 1 || rmd.is_write)) {
 		rc = pqi_calc_aio_r5_or_r6(&rmd, raid_map);
 		if (rc)
@@ -3129,8 +3085,7 @@ static void pqi_process_raid_io_error(struct pqi_io_request *io_request)
 	case PQI_DATA_IN_OUT_GOOD:
 		break;
 	case PQI_DATA_IN_OUT_UNDERFLOW:
-		xfer_count =
-			get_unaligned_le32(&error_info->data_out_transferred);
+		xfer_count = get_unaligned_le32(&error_info->data_out_transferred);
 		residual_count = scsi_bufflen(scmd) - xfer_count;
 		scsi_set_resid(scmd, residual_count);
 		if (xfer_count < scmd->underflow)
@@ -3166,8 +3121,7 @@ static void pqi_process_raid_io_error(struct pqi_io_request *io_request)
 		sense_data_length = get_unaligned_le16(&error_info->sense_data_length);
 		if (sense_data_length == 0)
-			sense_data_length =
-				get_unaligned_le16(&error_info->response_data_length);
+			sense_data_length = get_unaligned_le16(&error_info->response_data_length);
 		if (sense_data_length) {
 			if (sense_data_length > sizeof(error_info->data))
 				sense_data_length = sizeof(error_info->data);
@@ -3199,8 +3153,7 @@ static void pqi_process_raid_io_error(struct pqi_io_request *io_request)
 			if (sense_data_length > SCSI_SENSE_BUFFERSIZE)
 				sense_data_length = SCSI_SENSE_BUFFERSIZE;
 
-			memcpy(scmd->sense_buffer, error_info->data,
-				sense_data_length);
+			memcpy(scmd->sense_buffer, error_info->data, sense_data_length);
 		}
 
 	scmd->result = scsi_status;
@@ -3237,8 +3190,7 @@ static void pqi_process_aio_io_error(struct pqi_io_request *io_request)
 		break;
 	case PQI_AIO_STATUS_UNDERRUN:
 		scsi_status = SAM_STAT_GOOD;
-		residual_count = get_unaligned_le32(
-			&error_info->residual_count);
+		residual_count = get_unaligned_le32(&error_info->residual_count);
 		scsi_set_resid(scmd, residual_count);
 		xfer_count = scsi_bufflen(scmd) - residual_count;
 		if (xfer_count < scmd->underflow)
@@ -3285,15 +3237,13 @@ static void pqi_process_aio_io_error(struct pqi_io_request *io_request)
 	}
 
 	if (error_info->data_present) {
-		sense_data_length =
-			get_unaligned_le16(&error_info->data_length);
+		sense_data_length = get_unaligned_le16(&error_info->data_length);
 		if (sense_data_length) {
 			if (sense_data_length > sizeof(error_info->data))
 				sense_data_length = sizeof(error_info->data);
 			if (sense_data_length > SCSI_SENSE_BUFFERSIZE)
 				sense_data_length = SCSI_SENSE_BUFFERSIZE;
-			memcpy(scmd->sense_buffer, error_info->data,
-				sense_data_length);
+			memcpy(scmd->sense_buffer, error_info->data, sense_data_length);
 		}
 	}
 
@@ -3376,8 +3326,7 @@ static int pqi_process_io_intr(struct pqi_ctrl_info *ctrl_info, struct pqi_queue
 			break;
 
 		num_responses++;
-		response = queue_group->oq_element_array +
-			(oq_ci * PQI_OPERATIONAL_OQ_ELEMENT_LENGTH);
+		response = queue_group->oq_element_array + (oq_ci * PQI_OPERATIONAL_OQ_ELEMENT_LENGTH);
 
 		request_id =
 			get_unaligned_le16(&response->request_id);
 
 		if (request_id >= ctrl_info->max_io_slots) {
@@ -3406,13 +3355,10 @@ static int pqi_process_io_intr(struct pqi_ctrl_info *ctrl_info, struct pqi_queue
 		case PQI_RESPONSE_IU_GENERAL_MANAGEMENT:
 			break;
 		case PQI_RESPONSE_IU_VENDOR_GENERAL:
-			io_request->status =
-				get_unaligned_le16(
-					&((struct pqi_vendor_general_response *)response)->status);
+			io_request->status = get_unaligned_le16(&((struct pqi_vendor_general_response *)response)->status);
 			break;
 		case PQI_RESPONSE_IU_TASK_MANAGEMENT:
-			io_request->status = pqi_interpret_task_management_response(ctrl_info,
-				(void *)response);
+			io_request->status = pqi_interpret_task_management_response(ctrl_info, (void *)response);
 			break;
 		case PQI_RESPONSE_IU_AIO_PATH_DISABLED:
 			pqi_aio_path_disabled(io_request);
@@ -3421,8 +3367,7 @@ static int pqi_process_io_intr(struct pqi_ctrl_info *ctrl_info, struct pqi_queue
 		case PQI_RESPONSE_IU_RAID_PATH_IO_ERROR:
 		case PQI_RESPONSE_IU_AIO_PATH_IO_ERROR:
 			io_request->error_info = ctrl_info->error_buffer +
-				(get_unaligned_le16(&response->error_index) *
-				PQI_ERROR_BUFFER_ELEMENT_LENGTH);
+				(get_unaligned_le16(&response->error_index) * PQI_ERROR_BUFFER_ELEMENT_LENGTH);
 			pqi_process_io_error(response->header.iu_type, io_request);
 			break;
 		default:
@@ -3481,12 +3426,10 @@ static void pqi_send_event_ack(struct pqi_ctrl_info *ctrl_info,
 		iq_pi = queue_group->iq_pi_copy[RAID_PATH];
 		iq_ci = readl(queue_group->iq_ci[RAID_PATH]);
 
-		if (pqi_num_elements_free(iq_pi, iq_ci,
-			ctrl_info->num_elements_per_iq))
+		if (pqi_num_elements_free(iq_pi, iq_ci, ctrl_info->num_elements_per_iq))
 			break;
 
-		spin_unlock_irqrestore(
-			&queue_group->submit_lock[RAID_PATH], flags);
+		spin_unlock_irqrestore(&queue_group->submit_lock[RAID_PATH], flags);
 
 		if (pqi_ctrl_offline(ctrl_info))
 			return;
@@ -3517,8 +3460,7 @@ static void pqi_acknowledge_event(struct pqi_ctrl_info *ctrl_info,
 
 	memset(&request, 0, sizeof(request));
 	request.header.iu_type = PQI_REQUEST_IU_ACKNOWLEDGE_VENDOR_EVENT;
-	put_unaligned_le16(sizeof(request) - PQI_REQUEST_HEADER_LENGTH,
-		&request.header.iu_length);
+	put_unaligned_le16(sizeof(request) - PQI_REQUEST_HEADER_LENGTH, &request.header.iu_length);
 	request.event_type = event->event_type;
 	put_unaligned_le16(event->event_id, &request.event_id);
 	put_unaligned_le32(event->additional_event_id, &request.additional_event_id);
@@ -3577,7 +3519,7 @@ static void pqi_process_soft_reset(struct pqi_ctrl_info *ctrl_info)
 		fallthrough;
 	case RESET_INITIATE_DRIVER:
 		dev_info(&ctrl_info->pci_dev->dev,
-				"Online Firmware Activation: resetting controller\n");
+			"Online Firmware Activation: resetting controller\n");
 		sis_soft_reset(ctrl_info);
 		fallthrough;
 	case RESET_INITIATE_FIRMWARE:
@@ -3587,12 +3529,12 @@ static void pqi_process_soft_reset(struct pqi_ctrl_info *ctrl_info)
 		pqi_ofa_free_host_buffer(ctrl_info);
 		pqi_ctrl_ofa_done(ctrl_info);
 		dev_info(&ctrl_info->pci_dev->dev,
-				"Online Firmware Activation: %s\n",
-				rc == 0 ? "SUCCESS" : "FAILED");
+			"Online Firmware Activation: %s\n",
+			rc == 0 ? "SUCCESS" : "FAILED");
 		break;
 	case RESET_ABORT:
 		dev_info(&ctrl_info->pci_dev->dev,
-				"Online Firmware Activation ABORTED\n");
+			"Online Firmware Activation ABORTED\n");
 		if (ctrl_info->soft_reset_handshake_supported)
 			pqi_clear_soft_reset_status(ctrl_info);
 		pqi_ofa_free_host_buffer(ctrl_info);
@@ -3762,8 +3704,7 @@ static void pqi_heartbeat_timer_handler(struct timer_list *t)
 	}
 
 	ctrl_info->previous_heartbeat_count = heartbeat_count;
-	mod_timer(&ctrl_info->heartbeat_timer,
-		jiffies + PQI_HEARTBEAT_TIMER_INTERVAL);
+	mod_timer(&ctrl_info->heartbeat_timer, jiffies + PQI_HEARTBEAT_TIMER_INTERVAL);
 }
 
 static void pqi_start_heartbeat_timer(struct pqi_ctrl_info *ctrl_info)
@@ -3771,13 +3712,10 @@ static void pqi_start_heartbeat_timer(struct pqi_ctrl_info *ctrl_info)
 	if (!ctrl_info->heartbeat_counter)
 		return;
 
-	ctrl_info->previous_num_interrupts =
-		atomic_read(&ctrl_info->num_interrupts);
-	ctrl_info->previous_heartbeat_count =
-		pqi_read_heartbeat_counter(ctrl_info);
+	ctrl_info->previous_num_interrupts = atomic_read(&ctrl_info->num_interrupts);
+	ctrl_info->previous_heartbeat_count = pqi_read_heartbeat_counter(ctrl_info);
 
-	ctrl_info->heartbeat_timer.expires =
-		jiffies + PQI_HEARTBEAT_TIMER_INTERVAL;
+	ctrl_info->heartbeat_timer.expires = jiffies + PQI_HEARTBEAT_TIMER_INTERVAL;
 	add_timer(&ctrl_info->heartbeat_timer);
 }
 
@@ -3791,12 +3729,10 @@ static void pqi_ofa_capture_event_payload(struct pqi_ctrl_info *ctrl_info,
 {
 	switch (event->event_id) {
 	case PQI_EVENT_OFA_MEMORY_ALLOCATION:
-		ctrl_info->ofa_bytes_requested =
-			get_unaligned_le32(&response->data.ofa_memory_allocation.bytes_requested);
+		ctrl_info->ofa_bytes_requested = get_unaligned_le32(&response->data.ofa_memory_allocation.bytes_requested);
 		break;
 	case PQI_EVENT_OFA_CANCELED:
-		ctrl_info->ofa_cancel_reason =
-			get_unaligned_le16(&response->data.ofa_cancelled.reason);
+		ctrl_info->ofa_cancel_reason = get_unaligned_le16(&response->data.ofa_cancelled.reason);
 		break;
 	}
 }
@@ -3838,8 +3774,7 @@ static int
 pqi_process_event_intr(struct pqi_ctrl_info *ctrl_info)
 			event->pending = true;
 			event->event_type = response->event_type;
 			event->event_id = get_unaligned_le16(&response->event_id);
-			event->additional_event_id =
-				get_unaligned_le32(&response->additional_event_id);
+			event->additional_event_id = get_unaligned_le32(&response->additional_event_id);
 			if (event->event_type == PQI_EVENT_TYPE_OFA)
 				pqi_ofa_capture_event_payload(ctrl_info, event, response);
 		}
@@ -4030,6 +3965,7 @@ static int pqi_enable_msix_interrupts(struct pqi_ctrl_info *ctrl_info)
 	num_vectors_enabled = pci_alloc_irq_vectors(ctrl_info->pci_dev,
 		PQI_MIN_MSIX_VECTORS, ctrl_info->num_queue_groups, flags);
+
 	if (num_vectors_enabled < 0) {
 		dev_err(&ctrl_info->pci_dev->dev,
 			"MSI-X init failed with error %d\n",
@@ -4039,6 +3975,7 @@ static int pqi_enable_msix_interrupts(struct pqi_ctrl_info *ctrl_info)
 
 	ctrl_info->num_msix_vectors_enabled = num_vectors_enabled;
 	ctrl_info->irq_mode = IRQ_MODE_MSIX;
+
 	return 0;
 }
 
@@ -4064,12 +4001,8 @@ static int pqi_alloc_operational_queues(struct pqi_ctrl_info *ctrl_info)
 	unsigned int num_queue_indexes;
 	struct pqi_queue_group *queue_group;
 
-	element_array_length_per_iq =
-		PQI_OPERATIONAL_IQ_ELEMENT_LENGTH *
-		ctrl_info->num_elements_per_iq;
-	element_array_length_per_oq =
-		PQI_OPERATIONAL_OQ_ELEMENT_LENGTH *
-		ctrl_info->num_elements_per_oq;
+	element_array_length_per_iq = PQI_OPERATIONAL_IQ_ELEMENT_LENGTH * ctrl_info->num_elements_per_iq;
+	element_array_length_per_oq = PQI_OPERATIONAL_OQ_ELEMENT_LENGTH * ctrl_info->num_elements_per_oq;
 	num_inbound_queues = ctrl_info->num_queue_groups * 2;
 	num_outbound_queues = ctrl_info->num_queue_groups;
 	num_queue_indexes = (ctrl_info->num_queue_groups * 3) + 1;
@@ -4077,30 +4010,24 @@ static int pqi_alloc_operational_queues(struct pqi_ctrl_info *ctrl_info)
 	aligned_pointer = NULL;
 
 	for (i = 0; i < num_inbound_queues; i++) {
-		aligned_pointer = PTR_ALIGN(aligned_pointer,
-			PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT);
+		aligned_pointer = PTR_ALIGN(aligned_pointer, PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT);
 		aligned_pointer += element_array_length_per_iq;
 	}
 
 	for (i = 0; i < num_outbound_queues; i++) {
-		aligned_pointer = PTR_ALIGN(aligned_pointer,
-			PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT);
+		aligned_pointer = PTR_ALIGN(aligned_pointer, PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT);
 		aligned_pointer += element_array_length_per_oq;
 	}
 
-	aligned_pointer = PTR_ALIGN(aligned_pointer,
-		PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT);
-	aligned_pointer += PQI_NUM_EVENT_QUEUE_ELEMENTS *
-		PQI_EVENT_OQ_ELEMENT_LENGTH;
+	aligned_pointer = PTR_ALIGN(aligned_pointer, PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT);
+	aligned_pointer += PQI_NUM_EVENT_QUEUE_ELEMENTS * PQI_EVENT_OQ_ELEMENT_LENGTH;
 
 	for (i = 0; i < num_queue_indexes; i++) {
-		aligned_pointer = PTR_ALIGN(aligned_pointer,
-			PQI_OPERATIONAL_INDEX_ALIGNMENT);
+		aligned_pointer = PTR_ALIGN(aligned_pointer, PQI_OPERATIONAL_INDEX_ALIGNMENT);
 		aligned_pointer += sizeof(pqi_index_t);
 	}
 
-	alloc_length = (size_t)aligned_pointer +
-		PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT;
+	alloc_length = (size_t)aligned_pointer + PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT;
 
 	alloc_length += PQI_EXTRA_SGL_MEMORY;
 
@@ -4114,8 +4041,7 @@ static int pqi_alloc_operational_queues(struct pqi_ctrl_info *ctrl_info)
 
 	ctrl_info->queue_memory_length = alloc_length;
 
-	element_array = PTR_ALIGN(ctrl_info->queue_memory_base,
-		PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT);
+	element_array = PTR_ALIGN(ctrl_info->queue_memory_base, PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT);
 
 	for (i = 0; i < ctrl_info->num_queue_groups; i++) {
 		queue_group = &ctrl_info->queue_groups[i];
@@ -4124,71 +4050,52 @@ static int pqi_alloc_operational_queues(struct pqi_ctrl_info *ctrl_info)
 			ctrl_info->queue_memory_base_dma_handle +
 			(element_array - ctrl_info->queue_memory_base);
 		element_array += element_array_length_per_iq;
-		element_array = PTR_ALIGN(element_array,
-			PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT);
+		element_array = PTR_ALIGN(element_array, PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT);
 		queue_group->iq_element_array[AIO_PATH] = element_array;
-		queue_group->iq_element_array_bus_addr[AIO_PATH] =
-			ctrl_info->queue_memory_base_dma_handle +
+		queue_group->iq_element_array_bus_addr[AIO_PATH] = ctrl_info->queue_memory_base_dma_handle +
 			(element_array - ctrl_info->queue_memory_base);
 		element_array += element_array_length_per_iq;
-		element_array = PTR_ALIGN(element_array,
-			PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT);
+		element_array = PTR_ALIGN(element_array, PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT);
 	}
 
 	for (i = 0; i < ctrl_info->num_queue_groups; i++) {
 		queue_group = &ctrl_info->queue_groups[i];
 		queue_group->oq_element_array = element_array;
-		queue_group->oq_element_array_bus_addr =
-			ctrl_info->queue_memory_base_dma_handle +
+		queue_group->oq_element_array_bus_addr = ctrl_info->queue_memory_base_dma_handle +
 			(element_array - ctrl_info->queue_memory_base);
 		element_array += element_array_length_per_oq;
-		element_array = PTR_ALIGN(element_array,
-			PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT);
+		element_array = PTR_ALIGN(element_array, PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT);
 	}
 
 	ctrl_info->event_queue.oq_element_array = element_array;
-	ctrl_info->event_queue.oq_element_array_bus_addr =
-		ctrl_info->queue_memory_base_dma_handle +
+	ctrl_info->event_queue.oq_element_array_bus_addr = ctrl_info->queue_memory_base_dma_handle +
 		(element_array - ctrl_info->queue_memory_base);
-	element_array += PQI_NUM_EVENT_QUEUE_ELEMENTS *
-		PQI_EVENT_OQ_ELEMENT_LENGTH;
+	element_array += PQI_NUM_EVENT_QUEUE_ELEMENTS * PQI_EVENT_OQ_ELEMENT_LENGTH;
 
-	next_queue_index = (void __iomem *)PTR_ALIGN(element_array,
-		PQI_OPERATIONAL_INDEX_ALIGNMENT);
+	next_queue_index = (void __iomem *)PTR_ALIGN(element_array, PQI_OPERATIONAL_INDEX_ALIGNMENT);
 
 	for (i = 0; i < ctrl_info->num_queue_groups; i++) {
 		queue_group = &ctrl_info->queue_groups[i];
 		queue_group->iq_ci[RAID_PATH] = next_queue_index;
-		queue_group->iq_ci_bus_addr[RAID_PATH] =
-			ctrl_info->queue_memory_base_dma_handle +
-			(next_queue_index -
-			(void __iomem *)ctrl_info->queue_memory_base);
+		queue_group->iq_ci_bus_addr[RAID_PATH] = ctrl_info->queue_memory_base_dma_handle +
+			(next_queue_index - (void __iomem *)ctrl_info->queue_memory_base);
 		next_queue_index += sizeof(pqi_index_t);
-		next_queue_index = PTR_ALIGN(next_queue_index,
-			PQI_OPERATIONAL_INDEX_ALIGNMENT);
+		next_queue_index = PTR_ALIGN(next_queue_index, PQI_OPERATIONAL_INDEX_ALIGNMENT);
 		queue_group->iq_ci[AIO_PATH] = next_queue_index;
-		queue_group->iq_ci_bus_addr[AIO_PATH] =
-			ctrl_info->queue_memory_base_dma_handle +
-			(next_queue_index -
-			(void __iomem *)ctrl_info->queue_memory_base);
+		queue_group->iq_ci_bus_addr[AIO_PATH] = ctrl_info->queue_memory_base_dma_handle +
+			(next_queue_index - (void __iomem *)ctrl_info->queue_memory_base);
 		next_queue_index += sizeof(pqi_index_t);
-		next_queue_index = PTR_ALIGN(next_queue_index,
-			PQI_OPERATIONAL_INDEX_ALIGNMENT);
+		next_queue_index = PTR_ALIGN(next_queue_index, PQI_OPERATIONAL_INDEX_ALIGNMENT);
 		queue_group->oq_pi = next_queue_index;
-		queue_group->oq_pi_bus_addr =
-			ctrl_info->queue_memory_base_dma_handle +
-			(next_queue_index -
-			(void __iomem *)ctrl_info->queue_memory_base);
+		queue_group->oq_pi_bus_addr = ctrl_info->queue_memory_base_dma_handle +
+			(next_queue_index - (void __iomem *)ctrl_info->queue_memory_base);
 		next_queue_index += sizeof(pqi_index_t);
-		next_queue_index = PTR_ALIGN(next_queue_index,
-			PQI_OPERATIONAL_INDEX_ALIGNMENT);
+		next_queue_index = PTR_ALIGN(next_queue_index, PQI_OPERATIONAL_INDEX_ALIGNMENT);
 	}
 
 	ctrl_info->event_queue.oq_pi = next_queue_index;
-	ctrl_info->event_queue.oq_pi_bus_addr =
-		ctrl_info->queue_memory_base_dma_handle +
-		(next_queue_index -
-		(void __iomem *)ctrl_info->queue_memory_base);
+	ctrl_info->event_queue.oq_pi_bus_addr = ctrl_info->queue_memory_base_dma_handle +
+		(next_queue_index - (void __iomem *)ctrl_info->queue_memory_base);
 
 	return 0;
 }
@@ -4240,8 +4147,7 @@ static int pqi_alloc_admin_queues(struct pqi_ctrl_info *ctrl_info)
 	struct pqi_admin_queues_aligned
*admin_queues_aligned; struct pqi_admin_queues *admin_queues; - alloc_length = sizeof(struct pqi_admin_queues_aligned) + - PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT; + alloc_length = sizeof(struct pqi_admin_queues_aligned) + PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT; ctrl_info->admin_queue_memory_base = dma_alloc_coherent(&ctrl_info->pci_dev->dev, alloc_length, @@ -4254,33 +4160,20 @@ static int pqi_alloc_admin_queues(struct pqi_ctrl_info *ctrl_info) ctrl_info->admin_queue_memory_length = alloc_length; admin_queues = &ctrl_info->admin_queues; - admin_queues_aligned = PTR_ALIGN(ctrl_info->admin_queue_memory_base, - PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT); - admin_queues->iq_element_array = - &admin_queues_aligned->iq_element_array; - admin_queues->oq_element_array = - &admin_queues_aligned->oq_element_array; - admin_queues->iq_ci = - (pqi_index_t __iomem *)&admin_queues_aligned->iq_ci; - admin_queues->oq_pi = - (pqi_index_t __iomem *)&admin_queues_aligned->oq_pi; - - admin_queues->iq_element_array_bus_addr = - ctrl_info->admin_queue_memory_base_dma_handle + - (admin_queues->iq_element_array - - ctrl_info->admin_queue_memory_base); - admin_queues->oq_element_array_bus_addr = - ctrl_info->admin_queue_memory_base_dma_handle + - (admin_queues->oq_element_array - - ctrl_info->admin_queue_memory_base); - admin_queues->iq_ci_bus_addr = - ctrl_info->admin_queue_memory_base_dma_handle + - ((void __iomem *)admin_queues->iq_ci - - (void __iomem *)ctrl_info->admin_queue_memory_base); - admin_queues->oq_pi_bus_addr = - ctrl_info->admin_queue_memory_base_dma_handle + - ((void __iomem *)admin_queues->oq_pi - - (void __iomem *)ctrl_info->admin_queue_memory_base); + admin_queues_aligned = PTR_ALIGN(ctrl_info->admin_queue_memory_base, PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT); + admin_queues->iq_element_array = &admin_queues_aligned->iq_element_array; + admin_queues->oq_element_array = &admin_queues_aligned->oq_element_array; + admin_queues->iq_ci = (pqi_index_t __iomem *)&admin_queues_aligned->iq_ci; + 
admin_queues->oq_pi = (pqi_index_t __iomem *)&admin_queues_aligned->oq_pi; + + admin_queues->iq_element_array_bus_addr = ctrl_info->admin_queue_memory_base_dma_handle + + (admin_queues->iq_element_array - ctrl_info->admin_queue_memory_base); + admin_queues->oq_element_array_bus_addr = ctrl_info->admin_queue_memory_base_dma_handle + + (admin_queues->oq_element_array - ctrl_info->admin_queue_memory_base); + admin_queues->iq_ci_bus_addr = ctrl_info->admin_queue_memory_base_dma_handle + + ((void __iomem *)admin_queues->iq_ci - (void __iomem *)ctrl_info->admin_queue_memory_base); + admin_queues->oq_pi_bus_addr = ctrl_info->admin_queue_memory_base_dma_handle + + ((void __iomem *)admin_queues->oq_pi - (void __iomem *)ctrl_info->admin_queue_memory_base); return 0; } @@ -4299,22 +4192,15 @@ static int pqi_create_admin_queues(struct pqi_ctrl_info *ctrl_info) pqi_registers = ctrl_info->pqi_registers; admin_queues = &ctrl_info->admin_queues; - writeq((u64)admin_queues->iq_element_array_bus_addr, - &pqi_registers->admin_iq_element_array_addr); - writeq((u64)admin_queues->oq_element_array_bus_addr, - &pqi_registers->admin_oq_element_array_addr); - writeq((u64)admin_queues->iq_ci_bus_addr, - &pqi_registers->admin_iq_ci_addr); - writeq((u64)admin_queues->oq_pi_bus_addr, - &pqi_registers->admin_oq_pi_addr); - - reg = PQI_ADMIN_IQ_NUM_ELEMENTS | - (PQI_ADMIN_OQ_NUM_ELEMENTS << 8) | - (admin_queues->int_msg_num << 16); + writeq((u64)admin_queues->iq_element_array_bus_addr, &pqi_registers->admin_iq_element_array_addr); + writeq((u64)admin_queues->oq_element_array_bus_addr, &pqi_registers->admin_oq_element_array_addr); + writeq((u64)admin_queues->iq_ci_bus_addr, &pqi_registers->admin_iq_ci_addr); + writeq((u64)admin_queues->oq_pi_bus_addr, &pqi_registers->admin_oq_pi_addr); + + reg = PQI_ADMIN_IQ_NUM_ELEMENTS | (PQI_ADMIN_OQ_NUM_ELEMENTS << 8) | (admin_queues->int_msg_num << 16); writel(reg, &pqi_registers->admin_iq_num_elements); - writel(PQI_CREATE_ADMIN_QUEUE_PAIR, - 
&pqi_registers->function_and_status_code); + writel(PQI_CREATE_ADMIN_QUEUE_PAIR, &pqi_registers->function_and_status_code); timeout = PQI_ADMIN_QUEUE_CREATE_TIMEOUT_JIFFIES + jiffies; while (1) { @@ -4331,11 +4217,9 @@ static int pqi_create_admin_queues(struct pqi_ctrl_info *ctrl_info) * offsets until *after* the create admin queue pair command * completes successfully. */ - admin_queues->iq_pi = ctrl_info->iomem_base + - PQI_DEVICE_REGISTERS_OFFSET + + admin_queues->iq_pi = ctrl_info->iomem_base + PQI_DEVICE_REGISTERS_OFFSET + readq(&pqi_registers->admin_iq_pi_offset); - admin_queues->oq_ci = ctrl_info->iomem_base + - PQI_DEVICE_REGISTERS_OFFSET + + admin_queues->oq_ci = ctrl_info->iomem_base + PQI_DEVICE_REGISTERS_OFFSET + readq(&pqi_registers->admin_oq_ci_offset); return 0; @@ -4366,7 +4250,7 @@ static void pqi_submit_admin_request(struct pqi_ctrl_info *ctrl_info, writel(iq_pi, admin_queues->iq_pi); } -#define PQI_ADMIN_REQUEST_TIMEOUT_SECS 60 +#define PQI_ADMIN_REQUEST_TIMEOUT_SECS 60 static int pqi_poll_for_admin_response(struct pqi_ctrl_info *ctrl_info, struct pqi_general_admin_response *response) @@ -4430,45 +4314,35 @@ static void pqi_start_io(struct pqi_ctrl_info *ctrl_info, iq_pi = queue_group->iq_pi_copy[path]; - list_for_each_entry_safe(io_request, next, - &queue_group->request_list[path], request_list_entry) { - + list_for_each_entry_safe(io_request, next, &queue_group->request_list[path], request_list_entry) { request = io_request->iu; - iu_length = get_unaligned_le16(&request->iu_length) + - PQI_REQUEST_HEADER_LENGTH; - num_elements_needed = - DIV_ROUND_UP(iu_length, - PQI_OPERATIONAL_IQ_ELEMENT_LENGTH); + iu_length = get_unaligned_le16(&request->iu_length) + PQI_REQUEST_HEADER_LENGTH; + num_elements_needed = DIV_ROUND_UP(iu_length, PQI_OPERATIONAL_IQ_ELEMENT_LENGTH); iq_ci = readl(queue_group->iq_ci[path]); - if (num_elements_needed > pqi_num_elements_free(iq_pi, iq_ci, - ctrl_info->num_elements_per_iq)) + if (num_elements_needed > 
pqi_num_elements_free(iq_pi, iq_ci, ctrl_info->num_elements_per_iq)) break; - put_unaligned_le16(queue_group->oq_id, - &request->response_queue_id); + put_unaligned_le16(queue_group->oq_id, &request->response_queue_id); next_element = queue_group->iq_element_array[path] + (iq_pi * PQI_OPERATIONAL_IQ_ELEMENT_LENGTH); - num_elements_to_end_of_queue = - ctrl_info->num_elements_per_iq - iq_pi; + num_elements_to_end_of_queue = ctrl_info->num_elements_per_iq - iq_pi; if (num_elements_needed <= num_elements_to_end_of_queue) { memcpy(next_element, request, iu_length); } else { - copy_count = num_elements_to_end_of_queue * - PQI_OPERATIONAL_IQ_ELEMENT_LENGTH; + copy_count = num_elements_to_end_of_queue * PQI_OPERATIONAL_IQ_ELEMENT_LENGTH; memcpy(next_element, request, copy_count); memcpy(queue_group->iq_element_array[path], (u8 *)request + copy_count, iu_length - copy_count); } - iq_pi = (iq_pi + num_elements_needed) % - ctrl_info->num_elements_per_iq; + iq_pi = (iq_pi + num_elements_needed) % ctrl_info->num_elements_per_iq; list_del(&io_request->request_list_entry); } @@ -4528,8 +4402,7 @@ static int pqi_process_raid_io_error_synchronous( rc = 0; break; case PQI_DATA_IN_OUT_UNDERFLOW: - if (error_info->status == SAM_STAT_GOOD || - error_info->status == SAM_STAT_CHECK_CONDITION) + if (error_info->status == SAM_STAT_GOOD || error_info->status == SAM_STAT_CHECK_CONDITION) rc = 0; break; case PQI_DATA_IN_OUT_ABORTED: @@ -4562,9 +4435,10 @@ static int pqi_submit_raid_request_synchronous(struct pqi_ctrl_info *ctrl_info, } pqi_ctrl_busy(ctrl_info); + /* - * Wait for other admin queue updates such as; - * config table changes, OFA memory updates, ... + * Wait for other admin queue updates such as + * config table changes, OFA memory updates, etc. 
*/ if (pqi_is_blockable_request(request)) pqi_wait_if_ctrl_blocked(ctrl_info); @@ -4576,22 +4450,19 @@ static int pqi_submit_raid_request_synchronous(struct pqi_ctrl_info *ctrl_info, io_request = pqi_alloc_io_request(ctrl_info, NULL); - put_unaligned_le16(io_request->index, - &(((struct pqi_raid_path_request *)request)->request_id)); + put_unaligned_le16(io_request->index, &(((struct pqi_raid_path_request *)request)->request_id)); if (request->iu_type == PQI_REQUEST_IU_RAID_PATH_IO) ((struct pqi_raid_path_request *)request)->error_index = ((struct pqi_raid_path_request *)request)->request_id; - iu_length = get_unaligned_le16(&request->iu_length) + - PQI_REQUEST_HEADER_LENGTH; + iu_length = get_unaligned_le16(&request->iu_length) + PQI_REQUEST_HEADER_LENGTH; memcpy(io_request->iu, request, iu_length); io_request->io_complete_callback = pqi_raid_synchronous_complete; io_request->context = &wait; - pqi_start_io(ctrl_info, &ctrl_info->queue_groups[PQI_DEFAULT_QUEUE_GROUP], RAID_PATH, - io_request); + pqi_start_io(ctrl_info, &ctrl_info->queue_groups[PQI_DEFAULT_QUEUE_GROUP], RAID_PATH, io_request); pqi_wait_for_completion_io(ctrl_info, &wait); @@ -4613,14 +4484,13 @@ static int pqi_submit_raid_request_synchronous(struct pqi_ctrl_info *ctrl_info, return rc; } -static int pqi_validate_admin_response( - struct pqi_general_admin_response *response, u8 expected_function_code) +static int pqi_validate_admin_response(struct pqi_general_admin_response *response, + u8 expected_function_code) { if (response->header.iu_type != PQI_RESPONSE_IU_GENERAL_ADMIN) return -EINVAL; - if (get_unaligned_le16(&response->header.iu_length) != - PQI_GENERAL_ADMIN_IU_LENGTH) + if (get_unaligned_le16(&response->header.iu_length) != PQI_GENERAL_ADMIN_IU_LENGTH) return -EINVAL; if (response->function_code != expected_function_code) @@ -4632,10 +4502,8 @@ static int pqi_validate_admin_response( return 0; } -static int pqi_submit_admin_request_synchronous( - struct pqi_ctrl_info *ctrl_info, - struct 
pqi_general_admin_request *request, - struct pqi_general_admin_response *response) +static int pqi_submit_admin_request_synchronous(struct pqi_ctrl_info *ctrl_info, + struct pqi_general_admin_request *request, struct pqi_general_admin_response *response) { int rc; @@ -4664,17 +4532,13 @@ static int pqi_report_device_capability(struct pqi_ctrl_info *ctrl_info) memset(&request, 0, sizeof(request)); request.header.iu_type = PQI_REQUEST_IU_GENERAL_ADMIN; - put_unaligned_le16(PQI_GENERAL_ADMIN_IU_LENGTH, - &request.header.iu_length); - request.function_code = - PQI_GENERAL_ADMIN_FUNCTION_REPORT_DEVICE_CAPABILITY; - put_unaligned_le32(sizeof(*capability), - &request.data.report_device_capability.buffer_length); + put_unaligned_le16(PQI_GENERAL_ADMIN_IU_LENGTH, &request.header.iu_length); + request.function_code = PQI_GENERAL_ADMIN_FUNCTION_REPORT_DEVICE_CAPABILITY; + put_unaligned_le32(sizeof(*capability), &request.data.report_device_capability.buffer_length); rc = pqi_map_single(ctrl_info->pci_dev, &request.data.report_device_capability.sg_descriptor, - capability, sizeof(*capability), - DMA_FROM_DEVICE); + capability, sizeof(*capability), DMA_FROM_DEVICE); if (rc) goto out; @@ -4692,31 +4556,18 @@ static int pqi_report_device_capability(struct pqi_ctrl_info *ctrl_info) goto out; } - ctrl_info->max_inbound_queues = - get_unaligned_le16(&capability->max_inbound_queues); - ctrl_info->max_elements_per_iq = - get_unaligned_le16(&capability->max_elements_per_iq); - ctrl_info->max_iq_element_length = - get_unaligned_le16(&capability->max_iq_element_length) - * 16; - ctrl_info->max_outbound_queues = - get_unaligned_le16(&capability->max_outbound_queues); - ctrl_info->max_elements_per_oq = - get_unaligned_le16(&capability->max_elements_per_oq); - ctrl_info->max_oq_element_length = - get_unaligned_le16(&capability->max_oq_element_length) - * 16; - - sop_iu_layer_descriptor = - &capability->iu_layer_descriptors[PQI_PROTOCOL_SOP]; - - ctrl_info->max_inbound_iu_length_per_firmware 
= - get_unaligned_le16( - &sop_iu_layer_descriptor->max_inbound_iu_length); - ctrl_info->inbound_spanning_supported = - sop_iu_layer_descriptor->inbound_spanning_supported; - ctrl_info->outbound_spanning_supported = - sop_iu_layer_descriptor->outbound_spanning_supported; + ctrl_info->max_inbound_queues = get_unaligned_le16(&capability->max_inbound_queues); + ctrl_info->max_elements_per_iq = get_unaligned_le16(&capability->max_elements_per_iq); + ctrl_info->max_iq_element_length = get_unaligned_le16(&capability->max_iq_element_length) * 16; + ctrl_info->max_outbound_queues = get_unaligned_le16(&capability->max_outbound_queues); + ctrl_info->max_elements_per_oq = get_unaligned_le16(&capability->max_elements_per_oq); + ctrl_info->max_oq_element_length = get_unaligned_le16(&capability->max_oq_element_length) * 16; + + sop_iu_layer_descriptor = &capability->iu_layer_descriptors[PQI_PROTOCOL_SOP]; + + ctrl_info->max_inbound_iu_length_per_firmware = get_unaligned_le16(&sop_iu_layer_descriptor->max_inbound_iu_length); + ctrl_info->inbound_spanning_supported = sop_iu_layer_descriptor->inbound_spanning_supported; + ctrl_info->outbound_spanning_supported = sop_iu_layer_descriptor->outbound_spanning_supported; out: kfree(capability); @@ -4726,8 +4577,7 @@ static int pqi_report_device_capability(struct pqi_ctrl_info *ctrl_info) static int pqi_validate_device_capability(struct pqi_ctrl_info *ctrl_info) { - if (ctrl_info->max_iq_element_length < - PQI_OPERATIONAL_IQ_ELEMENT_LENGTH) { + if (ctrl_info->max_iq_element_length < PQI_OPERATIONAL_IQ_ELEMENT_LENGTH) { dev_err(&ctrl_info->pci_dev->dev, "max. 
inbound queue element length of %d is less than the required length of %d\n", ctrl_info->max_iq_element_length, @@ -4735,8 +4585,7 @@ static int pqi_validate_device_capability(struct pqi_ctrl_info *ctrl_info) return -EINVAL; } - if (ctrl_info->max_oq_element_length < - PQI_OPERATIONAL_OQ_ELEMENT_LENGTH) { + if (ctrl_info->max_oq_element_length < PQI_OPERATIONAL_OQ_ELEMENT_LENGTH) { dev_err(&ctrl_info->pci_dev->dev, "max. outbound queue element length of %d is less than the required length of %d\n", ctrl_info->max_oq_element_length, @@ -4744,8 +4593,7 @@ static int pqi_validate_device_capability(struct pqi_ctrl_info *ctrl_info) return -EINVAL; } - if (ctrl_info->max_inbound_iu_length_per_firmware < - PQI_OPERATIONAL_IQ_ELEMENT_LENGTH) { + if (ctrl_info->max_inbound_iu_length_per_firmware < PQI_OPERATIONAL_IQ_ELEMENT_LENGTH) { dev_err(&ctrl_info->pci_dev->dev, "max. inbound IU length of %u is less than the min. required length of %d\n", ctrl_info->max_inbound_iu_length_per_firmware, @@ -4783,32 +4631,22 @@ static int pqi_create_event_queue(struct pqi_ctrl_info *ctrl_info) */ memset(&request, 0, sizeof(request)); request.header.iu_type = PQI_REQUEST_IU_GENERAL_ADMIN; - put_unaligned_le16(PQI_GENERAL_ADMIN_IU_LENGTH, - &request.header.iu_length); + put_unaligned_le16(PQI_GENERAL_ADMIN_IU_LENGTH, &request.header.iu_length); request.function_code = PQI_GENERAL_ADMIN_FUNCTION_CREATE_OQ; - put_unaligned_le16(event_queue->oq_id, - &request.data.create_operational_oq.queue_id); - put_unaligned_le64((u64)event_queue->oq_element_array_bus_addr, - &request.data.create_operational_oq.element_array_addr); - put_unaligned_le64((u64)event_queue->oq_pi_bus_addr, - &request.data.create_operational_oq.pi_addr); - put_unaligned_le16(PQI_NUM_EVENT_QUEUE_ELEMENTS, - &request.data.create_operational_oq.num_elements); - put_unaligned_le16(PQI_EVENT_OQ_ELEMENT_LENGTH / 16, - &request.data.create_operational_oq.element_length); + put_unaligned_le16(event_queue->oq_id, 
&request.data.create_operational_oq.queue_id); + put_unaligned_le64((u64)event_queue->oq_element_array_bus_addr, &request.data.create_operational_oq.element_array_addr); + put_unaligned_le64((u64)event_queue->oq_pi_bus_addr, &request.data.create_operational_oq.pi_addr); + put_unaligned_le16(PQI_NUM_EVENT_QUEUE_ELEMENTS, &request.data.create_operational_oq.num_elements); + put_unaligned_le16(PQI_EVENT_OQ_ELEMENT_LENGTH / 16, &request.data.create_operational_oq.element_length); request.data.create_operational_oq.queue_protocol = PQI_PROTOCOL_SOP; - put_unaligned_le16(event_queue->int_msg_num, - &request.data.create_operational_oq.int_msg_num); + put_unaligned_le16(event_queue->int_msg_num, &request.data.create_operational_oq.int_msg_num); - rc = pqi_submit_admin_request_synchronous(ctrl_info, &request, - &response); + rc = pqi_submit_admin_request_synchronous(ctrl_info, &request, &response); if (rc) return rc; - event_queue->oq_ci = ctrl_info->iomem_base + - PQI_DEVICE_REGISTERS_OFFSET + - get_unaligned_le64( - &response.data.create_operational_oq.oq_ci_offset); + event_queue->oq_ci = ctrl_info->iomem_base + PQI_DEVICE_REGISTERS_OFFSET + + get_unaligned_le64(&response.data.create_operational_oq.oq_ci_offset); return 0; } @@ -4829,34 +4667,24 @@ static int pqi_create_queue_group(struct pqi_ctrl_info *ctrl_info, */ memset(&request, 0, sizeof(request)); request.header.iu_type = PQI_REQUEST_IU_GENERAL_ADMIN; - put_unaligned_le16(PQI_GENERAL_ADMIN_IU_LENGTH, - &request.header.iu_length); + put_unaligned_le16(PQI_GENERAL_ADMIN_IU_LENGTH, &request.header.iu_length); request.function_code = PQI_GENERAL_ADMIN_FUNCTION_CREATE_IQ; - put_unaligned_le16(queue_group->iq_id[RAID_PATH], - &request.data.create_operational_iq.queue_id); - put_unaligned_le64( - (u64)queue_group->iq_element_array_bus_addr[RAID_PATH], - &request.data.create_operational_iq.element_array_addr); - put_unaligned_le64((u64)queue_group->iq_ci_bus_addr[RAID_PATH], - &request.data.create_operational_iq.ci_addr); 
- put_unaligned_le16(ctrl_info->num_elements_per_iq, - &request.data.create_operational_iq.num_elements); - put_unaligned_le16(PQI_OPERATIONAL_IQ_ELEMENT_LENGTH / 16, - &request.data.create_operational_iq.element_length); + put_unaligned_le16(queue_group->iq_id[RAID_PATH], &request.data.create_operational_iq.queue_id); + put_unaligned_le64((u64)queue_group->iq_element_array_bus_addr[RAID_PATH], &request.data.create_operational_iq.element_array_addr); + put_unaligned_le64((u64)queue_group->iq_ci_bus_addr[RAID_PATH], &request.data.create_operational_iq.ci_addr); + put_unaligned_le16(ctrl_info->num_elements_per_iq, &request.data.create_operational_iq.num_elements); + put_unaligned_le16(PQI_OPERATIONAL_IQ_ELEMENT_LENGTH / 16, &request.data.create_operational_iq.element_length); request.data.create_operational_iq.queue_protocol = PQI_PROTOCOL_SOP; - rc = pqi_submit_admin_request_synchronous(ctrl_info, &request, - &response); + rc = pqi_submit_admin_request_synchronous(ctrl_info, &request, &response); if (rc) { dev_err(&ctrl_info->pci_dev->dev, "error creating inbound RAID queue\n"); return rc; } - queue_group->iq_pi[RAID_PATH] = ctrl_info->iomem_base + - PQI_DEVICE_REGISTERS_OFFSET + - get_unaligned_le64( - &response.data.create_operational_iq.iq_pi_offset); + queue_group->iq_pi[RAID_PATH] = ctrl_info->iomem_base + PQI_DEVICE_REGISTERS_OFFSET + + get_unaligned_le64(&response.data.create_operational_iq.iq_pi_offset); /* * Create IQ (Inbound Queue - host to device queue) for @@ -4864,34 +4692,24 @@ static int pqi_create_queue_group(struct pqi_ctrl_info *ctrl_info, */ memset(&request, 0, sizeof(request)); request.header.iu_type = PQI_REQUEST_IU_GENERAL_ADMIN; - put_unaligned_le16(PQI_GENERAL_ADMIN_IU_LENGTH, - &request.header.iu_length); + put_unaligned_le16(PQI_GENERAL_ADMIN_IU_LENGTH, &request.header.iu_length); request.function_code = PQI_GENERAL_ADMIN_FUNCTION_CREATE_IQ; - put_unaligned_le16(queue_group->iq_id[AIO_PATH], - &request.data.create_operational_iq.queue_id); 
- put_unaligned_le64((u64)queue_group-> - iq_element_array_bus_addr[AIO_PATH], - &request.data.create_operational_iq.element_array_addr); - put_unaligned_le64((u64)queue_group->iq_ci_bus_addr[AIO_PATH], - &request.data.create_operational_iq.ci_addr); - put_unaligned_le16(ctrl_info->num_elements_per_iq, - &request.data.create_operational_iq.num_elements); - put_unaligned_le16(PQI_OPERATIONAL_IQ_ELEMENT_LENGTH / 16, - &request.data.create_operational_iq.element_length); + put_unaligned_le16(queue_group->iq_id[AIO_PATH], &request.data.create_operational_iq.queue_id); + put_unaligned_le64((u64)queue_group->iq_element_array_bus_addr[AIO_PATH], &request.data.create_operational_iq.element_array_addr); + put_unaligned_le64((u64)queue_group->iq_ci_bus_addr[AIO_PATH], &request.data.create_operational_iq.ci_addr); + put_unaligned_le16(ctrl_info->num_elements_per_iq, &request.data.create_operational_iq.num_elements); + put_unaligned_le16(PQI_OPERATIONAL_IQ_ELEMENT_LENGTH / 16, &request.data.create_operational_iq.element_length); request.data.create_operational_iq.queue_protocol = PQI_PROTOCOL_SOP; - rc = pqi_submit_admin_request_synchronous(ctrl_info, &request, - &response); + rc = pqi_submit_admin_request_synchronous(ctrl_info, &request, &response); if (rc) { dev_err(&ctrl_info->pci_dev->dev, "error creating inbound AIO queue\n"); return rc; } - queue_group->iq_pi[AIO_PATH] = ctrl_info->iomem_base + - PQI_DEVICE_REGISTERS_OFFSET + - get_unaligned_le64( - &response.data.create_operational_iq.iq_pi_offset); + queue_group->iq_pi[AIO_PATH] = ctrl_info->iomem_base + PQI_DEVICE_REGISTERS_OFFSET + + get_unaligned_le64(&response.data.create_operational_iq.iq_pi_offset); /* * Designate the 2nd IQ as the AIO path. 
By default, all IQs are @@ -4900,16 +4718,12 @@ static int pqi_create_queue_group(struct pqi_ctrl_info *ctrl_info, */ memset(&request, 0, sizeof(request)); request.header.iu_type = PQI_REQUEST_IU_GENERAL_ADMIN; - put_unaligned_le16(PQI_GENERAL_ADMIN_IU_LENGTH, - &request.header.iu_length); + put_unaligned_le16(PQI_GENERAL_ADMIN_IU_LENGTH, &request.header.iu_length); request.function_code = PQI_GENERAL_ADMIN_FUNCTION_CHANGE_IQ_PROPERTY; - put_unaligned_le16(queue_group->iq_id[AIO_PATH], - &request.data.change_operational_iq_properties.queue_id); - put_unaligned_le32(PQI_IQ_PROPERTY_IS_AIO_QUEUE, - &request.data.change_operational_iq_properties.vendor_specific); + put_unaligned_le16(queue_group->iq_id[AIO_PATH], &request.data.change_operational_iq_properties.queue_id); + put_unaligned_le32(PQI_IQ_PROPERTY_IS_AIO_QUEUE, &request.data.change_operational_iq_properties.vendor_specific); - rc = pqi_submit_admin_request_synchronous(ctrl_info, &request, - &response); + rc = pqi_submit_admin_request_synchronous(ctrl_info, &request, &response); if (rc) { dev_err(&ctrl_info->pci_dev->dev, "error changing queue property\n"); @@ -4921,35 +4735,25 @@ static int pqi_create_queue_group(struct pqi_ctrl_info *ctrl_info, */ memset(&request, 0, sizeof(request)); request.header.iu_type = PQI_REQUEST_IU_GENERAL_ADMIN; - put_unaligned_le16(PQI_GENERAL_ADMIN_IU_LENGTH, - &request.header.iu_length); + put_unaligned_le16(PQI_GENERAL_ADMIN_IU_LENGTH, &request.header.iu_length); request.function_code = PQI_GENERAL_ADMIN_FUNCTION_CREATE_OQ; - put_unaligned_le16(queue_group->oq_id, - &request.data.create_operational_oq.queue_id); - put_unaligned_le64((u64)queue_group->oq_element_array_bus_addr, - &request.data.create_operational_oq.element_array_addr); - put_unaligned_le64((u64)queue_group->oq_pi_bus_addr, - &request.data.create_operational_oq.pi_addr); - put_unaligned_le16(ctrl_info->num_elements_per_oq, - &request.data.create_operational_oq.num_elements); - 
put_unaligned_le16(PQI_OPERATIONAL_OQ_ELEMENT_LENGTH / 16, - &request.data.create_operational_oq.element_length); + put_unaligned_le16(queue_group->oq_id, &request.data.create_operational_oq.queue_id); + put_unaligned_le64((u64)queue_group->oq_element_array_bus_addr, &request.data.create_operational_oq.element_array_addr); + put_unaligned_le64((u64)queue_group->oq_pi_bus_addr, &request.data.create_operational_oq.pi_addr); + put_unaligned_le16(ctrl_info->num_elements_per_oq, &request.data.create_operational_oq.num_elements); + put_unaligned_le16(PQI_OPERATIONAL_OQ_ELEMENT_LENGTH / 16, &request.data.create_operational_oq.element_length); request.data.create_operational_oq.queue_protocol = PQI_PROTOCOL_SOP; - put_unaligned_le16(queue_group->int_msg_num, - &request.data.create_operational_oq.int_msg_num); + put_unaligned_le16(queue_group->int_msg_num, &request.data.create_operational_oq.int_msg_num); - rc = pqi_submit_admin_request_synchronous(ctrl_info, &request, - &response); + rc = pqi_submit_admin_request_synchronous(ctrl_info, &request, &response); if (rc) { dev_err(&ctrl_info->pci_dev->dev, "error creating outbound queue\n"); return rc; } - queue_group->oq_ci = ctrl_info->iomem_base + - PQI_DEVICE_REGISTERS_OFFSET + - get_unaligned_le64( - &response.data.create_operational_oq.oq_ci_offset); + queue_group->oq_ci = ctrl_info->iomem_base + PQI_DEVICE_REGISTERS_OFFSET + + get_unaligned_le64(&response.data.create_operational_oq.oq_ci_offset); return 0; } @@ -4991,8 +4795,7 @@ static int pqi_configure_events(struct pqi_ctrl_info *ctrl_info, struct pqi_event_descriptor *event_descriptor; struct pqi_general_management_request request; - event_config = kmalloc(PQI_REPORT_EVENT_CONFIG_BUFFER_LENGTH, - GFP_KERNEL); + event_config = kmalloc(PQI_REPORT_EVENT_CONFIG_BUFFER_LENGTH, GFP_KERNEL); if (!event_config) return -ENOMEM; @@ -5023,10 +4826,8 @@ static int pqi_configure_events(struct pqi_ctrl_info *ctrl_info, for (i = 0; i < event_config->num_event_descriptors; i++) { 
event_descriptor = &event_config->descriptors[i]; - if (enable_events && - pqi_is_supported_event(event_descriptor->event_type)) - put_unaligned_le16(ctrl_info->event_queue.oq_id, - &event_descriptor->oq_id); + if (enable_events && pqi_is_supported_event(event_descriptor->event_type)) + put_unaligned_le16(ctrl_info->event_queue.oq_id, &event_descriptor->oq_id); else put_unaligned_le16(0, &event_descriptor->oq_id); } @@ -5037,8 +4838,7 @@ static int pqi_configure_events(struct pqi_ctrl_info *ctrl_info, put_unaligned_le16(offsetof(struct pqi_general_management_request, data.report_event_configuration.sg_descriptors[1]) - PQI_REQUEST_HEADER_LENGTH, &request.header.iu_length); - put_unaligned_le32(PQI_REPORT_EVENT_CONFIG_BUFFER_LENGTH, - &request.data.report_event_configuration.buffer_length); + put_unaligned_le32(PQI_REPORT_EVENT_CONFIG_BUFFER_LENGTH, &request.data.report_event_configuration.buffer_length); rc = pqi_map_single(ctrl_info->pci_dev, request.data.report_event_configuration.sg_descriptors, @@ -5049,9 +4849,7 @@ static int pqi_configure_events(struct pqi_ctrl_info *ctrl_info, rc = pqi_submit_raid_request_synchronous(ctrl_info, &request.header, 0, NULL); - pqi_pci_unmap(ctrl_info->pci_dev, - request.data.report_event_configuration.sg_descriptors, 1, - DMA_TO_DEVICE); + pqi_pci_unmap(ctrl_info->pci_dev, request.data.report_event_configuration.sg_descriptors, 1, DMA_TO_DEVICE); out: kfree(event_config); @@ -5169,19 +4967,15 @@ static void pqi_calculate_io_resources(struct pqi_ctrl_info *ctrl_info) u32 max_transfer_size; u32 max_sg_entries; - ctrl_info->scsi_ml_can_queue = - ctrl_info->max_outstanding_requests - PQI_RESERVED_IO_SLOTS; + ctrl_info->scsi_ml_can_queue = ctrl_info->max_outstanding_requests - PQI_RESERVED_IO_SLOTS; ctrl_info->max_io_slots = ctrl_info->max_outstanding_requests; - ctrl_info->error_buffer_length = - ctrl_info->max_io_slots * PQI_ERROR_BUFFER_ELEMENT_LENGTH; + ctrl_info->error_buffer_length = ctrl_info->max_io_slots * 
PQI_ERROR_BUFFER_ELEMENT_LENGTH; if (reset_devices) - max_transfer_size = min(ctrl_info->max_transfer_size, - PQI_MAX_TRANSFER_SIZE_KDUMP); + max_transfer_size = min(ctrl_info->max_transfer_size, PQI_MAX_TRANSFER_SIZE_KDUMP); else - max_transfer_size = min(ctrl_info->max_transfer_size, - PQI_MAX_TRANSFER_SIZE); + max_transfer_size = min(ctrl_info->max_transfer_size, PQI_MAX_TRANSFER_SIZE); max_sg_entries = max_transfer_size / PAGE_SIZE; @@ -5192,9 +4986,7 @@ static void pqi_calculate_io_resources(struct pqi_ctrl_info *ctrl_info) max_transfer_size = (max_sg_entries - 1) * PAGE_SIZE; - ctrl_info->sg_chain_buffer_length = - (max_sg_entries * sizeof(struct pqi_sg_descriptor)) + - PQI_EXTRA_SGL_MEMORY; + ctrl_info->sg_chain_buffer_length = (max_sg_entries * sizeof(struct pqi_sg_descriptor)) + PQI_EXTRA_SGL_MEMORY; ctrl_info->sg_tablesize = max_sg_entries; ctrl_info->max_sectors = max_transfer_size / 512; } @@ -5211,8 +5003,7 @@ static void pqi_calculate_queue_resources(struct pqi_ctrl_info *ctrl_info) int num_cpus; int max_queue_groups; - max_queue_groups = min(ctrl_info->max_inbound_queues / 2, - ctrl_info->max_outbound_queues - 1); + max_queue_groups = min(ctrl_info->max_inbound_queues / 2, ctrl_info->max_outbound_queues - 1); max_queue_groups = min(max_queue_groups, PQI_MAX_QUEUE_GROUPS); num_cpus = num_online_cpus(); @@ -5226,39 +5017,27 @@ static void pqi_calculate_queue_resources(struct pqi_ctrl_info *ctrl_info) * Make sure that the max. inbound IU length is an even multiple * of our inbound element length. 
 */
-	ctrl_info->max_inbound_iu_length =
-		(ctrl_info->max_inbound_iu_length_per_firmware /
-		PQI_OPERATIONAL_IQ_ELEMENT_LENGTH) *
+	ctrl_info->max_inbound_iu_length = (ctrl_info->max_inbound_iu_length_per_firmware / PQI_OPERATIONAL_IQ_ELEMENT_LENGTH) *
 		PQI_OPERATIONAL_IQ_ELEMENT_LENGTH;
-	num_elements_per_iq =
-		(ctrl_info->max_inbound_iu_length /
-		PQI_OPERATIONAL_IQ_ELEMENT_LENGTH);
+	num_elements_per_iq = (ctrl_info->max_inbound_iu_length / PQI_OPERATIONAL_IQ_ELEMENT_LENGTH);
 
 	/* Add one because one element in each queue is unusable. */
 	num_elements_per_iq++;
 
-	num_elements_per_iq = min(num_elements_per_iq,
-		ctrl_info->max_elements_per_iq);
+	num_elements_per_iq = min(num_elements_per_iq, ctrl_info->max_elements_per_iq);
 	num_elements_per_oq = ((num_elements_per_iq - 1) * 2) + 1;
-	num_elements_per_oq = min(num_elements_per_oq,
-		ctrl_info->max_elements_per_oq);
+	num_elements_per_oq = min(num_elements_per_oq, ctrl_info->max_elements_per_oq);
 
 	ctrl_info->num_elements_per_iq = num_elements_per_iq;
 	ctrl_info->num_elements_per_oq = num_elements_per_oq;
 
-	ctrl_info->max_sg_per_iu =
-		((ctrl_info->max_inbound_iu_length -
-		PQI_OPERATIONAL_IQ_ELEMENT_LENGTH) /
-		sizeof(struct pqi_sg_descriptor)) +
-		PQI_MAX_EMBEDDED_SG_DESCRIPTORS;
+	ctrl_info->max_sg_per_iu = ((ctrl_info->max_inbound_iu_length - PQI_OPERATIONAL_IQ_ELEMENT_LENGTH) /
+		sizeof(struct pqi_sg_descriptor)) + PQI_MAX_EMBEDDED_SG_DESCRIPTORS;
 
-	ctrl_info->max_sg_per_r56_iu =
-		((ctrl_info->max_inbound_iu_length -
-		PQI_OPERATIONAL_IQ_ELEMENT_LENGTH) /
-		sizeof(struct pqi_sg_descriptor)) +
-		PQI_MAX_EMBEDDED_R56_SG_DESCRIPTORS;
+	ctrl_info->max_sg_per_r56_iu = ((ctrl_info->max_inbound_iu_length - PQI_OPERATIONAL_IQ_ELEMENT_LENGTH) /
+		sizeof(struct pqi_sg_descriptor)) + PQI_MAX_EMBEDDED_R56_SG_DESCRIPTORS;
 }
 
 static inline void pqi_set_sg_descriptor(struct pqi_sg_descriptor *sg_descriptor,
@@ -5293,10 +5072,8 @@ static unsigned int pqi_build_sg_list(struct pqi_sg_descriptor *sg_descriptor,
 			break;
 		sg_descriptor++;
 		if (i == max_sg_per_iu) {
-			put_unaligned_le64((u64)io_request->sg_chain_buffer_dma_handle,
-				&sg_descriptor->address);
-			put_unaligned_le32((sg_count - num_sg_in_iu) * sizeof(*sg_descriptor),
-				&sg_descriptor->length);
+			put_unaligned_le64((u64)io_request->sg_chain_buffer_dma_handle, &sg_descriptor->address);
+			put_unaligned_le32((sg_count - num_sg_in_iu) * sizeof(*sg_descriptor), &sg_descriptor->length);
 			put_unaligned_le32(CISS_SG_CHAIN, &sg_descriptor->flags);
 			*chained = true;
 			num_sg_in_iu++;
@@ -5325,8 +5102,7 @@ static int pqi_build_raid_sg_list(struct pqi_ctrl_info *ctrl_info,
 	if (sg_count < 0)
 		return sg_count;
 
-	iu_length = offsetof(struct pqi_raid_path_request, sg_descriptors) -
-		PQI_REQUEST_HEADER_LENGTH;
+	iu_length = offsetof(struct pqi_raid_path_request, sg_descriptors) - PQI_REQUEST_HEADER_LENGTH;
 
 	if (sg_count == 0)
 		goto out;
@@ -5361,8 +5137,7 @@ static int pqi_build_aio_r1_sg_list(struct pqi_ctrl_info *ctrl_info,
 	if (sg_count < 0)
 		return sg_count;
 
-	iu_length = offsetof(struct pqi_aio_r1_path_request, sg_descriptors) -
-		PQI_REQUEST_HEADER_LENGTH;
+	iu_length = offsetof(struct pqi_aio_r1_path_request, sg_descriptors) - PQI_REQUEST_HEADER_LENGTH;
 
 	num_sg_in_iu = 0;
 
 	if (sg_count == 0)
@@ -5399,8 +5174,7 @@ static int pqi_build_aio_r56_sg_list(struct pqi_ctrl_info *ctrl_info,
 	if (sg_count < 0)
 		return sg_count;
 
-	iu_length = offsetof(struct pqi_aio_r56_path_request, sg_descriptors) -
-		PQI_REQUEST_HEADER_LENGTH;
+	iu_length = offsetof(struct pqi_aio_r56_path_request, sg_descriptors) - PQI_REQUEST_HEADER_LENGTH;
 
 	num_sg_in_iu = 0;
 
 	if (sg_count != 0) {
@@ -5435,8 +5209,7 @@ static int pqi_build_aio_sg_list(struct pqi_ctrl_info *ctrl_info,
 	if (sg_count < 0)
 		return sg_count;
 
-	iu_length = offsetof(struct pqi_aio_path_request, sg_descriptors) -
-		PQI_REQUEST_HEADER_LENGTH;
+	iu_length = offsetof(struct pqi_aio_path_request, sg_descriptors) - PQI_REQUEST_HEADER_LENGTH;
 
 	num_sg_in_iu = 0;
 
 	if (sg_count == 0)
@@ -5911,12 +5684,10 @@ static bool pqi_is_parity_write_stream(struct pqi_ctrl_info *ctrl_info,
 			 */
 			if ((pqi_stream_data->next_lba &&
 				rmd.first_block >= pqi_stream_data->next_lba) &&
-				rmd.first_block <= pqi_stream_data->next_lba +
-					rmd.block_cnt) {
-				pqi_stream_data->next_lba = rmd.first_block +
-					rmd.block_cnt;
-				pqi_stream_data->last_accessed = jiffies;
-				return true;
+				rmd.first_block <= pqi_stream_data->next_lba + rmd.block_cnt) {
+				pqi_stream_data->next_lba = rmd.first_block + rmd.block_cnt;
+				pqi_stream_data->last_accessed = jiffies;
+				return true;
 			}
 
 		/* unused entry */
@@ -6112,13 +5883,9 @@ static void pqi_fail_io_queued_for_device(struct pqi_ctrl_info *ctrl_info,
 		queue_group = &ctrl_info->queue_groups[i];
 		for (path = 0; path < 2; path++) {
-			spin_lock_irqsave(
-				&queue_group->submit_lock[path], flags);
-
-			list_for_each_entry_safe(io_request, next,
-				&queue_group->request_list[path],
-				request_list_entry) {
+			spin_lock_irqsave(&queue_group->submit_lock[path], flags);
+			list_for_each_entry_safe(io_request, next, &queue_group->request_list[path], request_list_entry) {
 				scmd = io_request->scmd;
 				if (!scmd)
 					continue;
@@ -6134,8 +5901,7 @@ static void pqi_fail_io_queued_for_device(struct pqi_ctrl_info *ctrl_info,
 				pqi_scsi_done(scmd);
 			}
 
-			spin_unlock_irqrestore(
-				&queue_group->submit_lock[path], flags);
+			spin_unlock_irqrestore(&queue_group->submit_lock[path], flags);
 		}
 	}
 }
@@ -6241,19 +6007,16 @@ static int pqi_lun_reset(struct pqi_ctrl_info *ctrl_info, struct scsi_cmnd *scmd
 	memset(request, 0, sizeof(*request));
 	request->header.iu_type = PQI_REQUEST_IU_TASK_MANAGEMENT;
-	put_unaligned_le16(sizeof(*request) - PQI_REQUEST_HEADER_LENGTH,
-		&request->header.iu_length);
+	put_unaligned_le16(sizeof(*request) - PQI_REQUEST_HEADER_LENGTH, &request->header.iu_length);
 	put_unaligned_le16(io_request->index, &request->request_id);
-	memcpy(request->lun_number, device->scsi3addr,
-		sizeof(request->lun_number));
+	memcpy(request->lun_number, device->scsi3addr, sizeof(request->lun_number));
 	if (!pqi_is_logical_device(device) && ctrl_info->multi_lun_device_supported)
 		request->ml_device_lun_number = (u8)scmd->device->lun;
 	request->task_management_function = SOP_TASK_MANAGEMENT_LUN_RESET;
 	if (ctrl_info->tmf_iu_timeout_supported)
 		put_unaligned_le16(PQI_LUN_RESET_FIRMWARE_TIMEOUT_SECS, &request->timeout);
 
-	pqi_start_io(ctrl_info, &ctrl_info->queue_groups[PQI_DEFAULT_QUEUE_GROUP], RAID_PATH,
-		io_request);
+	pqi_start_io(ctrl_info, &ctrl_info->queue_groups[PQI_DEFAULT_QUEUE_GROUP], RAID_PATH, io_request);
 
 	rc = pqi_wait_for_lun_reset_completion(ctrl_info, device, (u8)scmd->device->lun, &wait);
 	if (rc == 0)
@@ -6384,8 +6147,7 @@ static int pqi_slave_alloc(struct scsi_device *sdev)
 			device->sdev = sdev;
 			if (device->queue_depth) {
 				device->advertised_queue_depth = device->queue_depth;
-				scsi_change_queue_depth(sdev,
-					device->advertised_queue_depth);
+				scsi_change_queue_depth(sdev, device->advertised_queue_depth);
 			}
 			if (pqi_is_logical_device(device)) {
 				pqi_disable_write_same(sdev);
@@ -6561,11 +6323,9 @@ static void pqi_error_info_to_ciss(struct pqi_raid_error_info *pqi_error_info,
 		break;
 	}
 
-	sense_data_length =
-		get_unaligned_le16(&pqi_error_info->sense_data_length);
+	sense_data_length = get_unaligned_le16(&pqi_error_info->sense_data_length);
 	if (sense_data_length == 0)
-		sense_data_length =
-			get_unaligned_le16(&pqi_error_info->response_data_length);
+		sense_data_length = get_unaligned_le16(&pqi_error_info->response_data_length);
 	if (sense_data_length)
 		if (sense_data_length > sizeof(pqi_error_info->data))
 			sense_data_length = sizeof(pqi_error_info->data);
@@ -6632,10 +6392,8 @@ static int pqi_passthru_ioctl(struct pqi_ctrl_info *ctrl_info, void __user *arg)
 	memset(&request, 0, sizeof(request));
 	request.header.iu_type = PQI_REQUEST_IU_RAID_PATH_IO;
-	iu_length = offsetof(struct pqi_raid_path_request, sg_descriptors) -
-		PQI_REQUEST_HEADER_LENGTH;
-	memcpy(request.lun_number, iocommand.LUN_info.LunAddrBytes,
-		sizeof(request.lun_number));
+	iu_length = offsetof(struct pqi_raid_path_request, sg_descriptors) - PQI_REQUEST_HEADER_LENGTH;
+	memcpy(request.lun_number, iocommand.LUN_info.LunAddrBytes, sizeof(request.lun_number));
 	memcpy(request.cdb, iocommand.Request.CDB, iocommand.Request.CDBLen);
 	request.additional_cdb_bytes_usage = SOP_ADDITIONAL_CDB_BYTES_0;
@@ -6677,24 +6435,19 @@ static int pqi_passthru_ioctl(struct pqi_ctrl_info *ctrl_info, void __user *arg)
 		PQI_SYNC_FLAGS_INTERRUPTABLE, &pqi_error_info);
 
 	if (iocommand.buf_size > 0)
-		pqi_pci_unmap(ctrl_info->pci_dev, request.sg_descriptors, 1,
-			DMA_BIDIRECTIONAL);
+		pqi_pci_unmap(ctrl_info->pci_dev, request.sg_descriptors, 1, DMA_BIDIRECTIONAL);
 
 	memset(&iocommand.error_info, 0, sizeof(iocommand.error_info));
 
 	if (rc == 0) {
 		pqi_error_info_to_ciss(&pqi_error_info, &ciss_error_info);
 		iocommand.error_info.ScsiStatus = ciss_error_info.scsi_status;
-		iocommand.error_info.CommandStatus =
-			ciss_error_info.command_status;
+		iocommand.error_info.CommandStatus = ciss_error_info.command_status;
 		sense_data_length = ciss_error_info.sense_data_length;
 		if (sense_data_length) {
-			if (sense_data_length >
-				sizeof(iocommand.error_info.SenseInfo))
-				sense_data_length =
-					sizeof(iocommand.error_info.SenseInfo);
-			memcpy(iocommand.error_info.SenseInfo,
-				pqi_error_info.data, sense_data_length);
+			if (sense_data_length > sizeof(iocommand.error_info.SenseInfo))
+				sense_data_length = sizeof(iocommand.error_info.SenseInfo);
+			memcpy(iocommand.error_info.SenseInfo, pqi_error_info.data, sense_data_length);
 			iocommand.error_info.SenseLen = sense_data_length;
 		}
 	}
@@ -7085,38 +6838,30 @@ static ssize_t pqi_path_info_show(struct device *dev,
 			device->lun, scsi_device_type(device->devtype));
 
-		if (device->devtype == TYPE_RAID ||
-			pqi_is_logical_device(device))
+		if (device->devtype == TYPE_RAID || pqi_is_logical_device(device))
 			goto end_buffer;
 
-		memcpy(&phys_connector, &device->phys_connector[i],
-			sizeof(phys_connector));
+		memcpy(&phys_connector, &device->phys_connector[i], sizeof(phys_connector));
 		if (phys_connector[0] < '0')
 			phys_connector[0] = '0';
 		if (phys_connector[1] < '0')
 			phys_connector[1] = '0';
 
 		output_len += scnprintf(buf + output_len,
-			PAGE_SIZE - output_len,
-			"PORT: %.2s ", phys_connector);
+			PAGE_SIZE - output_len, "PORT: %.2s ", phys_connector);
 
 		box = device->box[i];
 		if (box != 0 && box != 0xFF)
 			output_len += scnprintf(buf + output_len,
-				PAGE_SIZE - output_len,
-				"BOX: %hhu ", box);
+				PAGE_SIZE - output_len, "BOX: %hhu ", box);
 
-		if ((device->devtype == TYPE_DISK ||
-			device->devtype == TYPE_ZBC) &&
-			pqi_expose_device(device))
+		if ((device->devtype == TYPE_DISK || device->devtype == TYPE_ZBC) && pqi_expose_device(device))
 			output_len += scnprintf(buf + output_len,
-				PAGE_SIZE - output_len,
-				"BAY: %hhu ", bay);
+				PAGE_SIZE - output_len, "BAY: %hhu ", bay);
 
 end_buffer:
 		output_len += scnprintf(buf + output_len,
-			PAGE_SIZE - output_len,
-			"%s\n", active);
+			PAGE_SIZE - output_len, "%s\n", active);
 	}
 
 	spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags);
@@ -7297,7 +7042,6 @@ static ssize_t pqi_sas_ncq_prio_enable_store(struct device *dev,
 	spin_lock_irqsave(&ctrl_info->scsi_device_list_lock, flags);
 
 	device = sdev->hostdata;
-
 	if (!device) {
 		spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags);
 		return -ENODEV;
@@ -7335,7 +7079,7 @@ static DEVICE_ATTR(ssd_smart_path_enabled, 0444, pqi_ssd_smart_path_enabled_show
 static DEVICE_ATTR(raid_level, 0444, pqi_raid_level_show, NULL);
 static DEVICE_ATTR(raid_bypass_cnt, 0444, pqi_raid_bypass_cnt_show, NULL);
 static DEVICE_ATTR(sas_ncq_prio_enable, 0644,
-	pqi_sas_ncq_prio_enable_show, pqi_sas_ncq_prio_enable_store);
+	pqi_sas_ncq_prio_enable_show, pqi_sas_ncq_prio_enable_store);
 static DEVICE_ATTR(numa_node, 0444, pqi_numa_node_show, NULL);
 
 static struct attribute *pqi_sdev_attrs[] = {
@@ -7510,8 +7254,7 @@ static int pqi_get_ctrl_serial_number(struct pqi_ctrl_info *ctrl_info)
 	if (rc)
 		goto out;
 
-	memcpy(ctrl_info->serial_number, sense_info->ctrl_serial_number,
-		sizeof(sense_info->ctrl_serial_number));
+	memcpy(ctrl_info->serial_number, sense_info->ctrl_serial_number, sizeof(sense_info->ctrl_serial_number));
 	ctrl_info->serial_number[sizeof(sense_info->ctrl_serial_number)] = '\0';
 
 out:
@@ -7542,8 +7285,7 @@ static int pqi_get_ctrl_product_details(struct pqi_ctrl_info *ctrl_info)
 		memcpy(ctrl_info->firmware_version,
 			identify->firmware_version_short,
 			sizeof(identify->firmware_version_short));
-		ctrl_info->firmware_version
-			[sizeof(identify->firmware_version_short)] = '\0';
+		ctrl_info->firmware_version[sizeof(identify->firmware_version_short)] = '\0';
 		snprintf(ctrl_info->firmware_version +
 			strlen(ctrl_info->firmware_version),
 			sizeof(ctrl_info->firmware_version) -
@@ -7552,16 +7294,13 @@ static int pqi_get_ctrl_product_details(struct pqi_ctrl_info *ctrl_info)
 			get_unaligned_le16(&identify->firmware_build_number));
 	}
 
-	memcpy(ctrl_info->model, identify->product_id,
-		sizeof(identify->product_id));
+	memcpy(ctrl_info->model, identify->product_id, sizeof(identify->product_id));
 	ctrl_info->model[sizeof(identify->product_id)] = '\0';
 
-	memcpy(ctrl_info->vendor, identify->vendor_id,
-		sizeof(identify->vendor_id));
+	memcpy(ctrl_info->vendor, identify->vendor_id, sizeof(identify->vendor_id));
 	ctrl_info->vendor[sizeof(identify->vendor_id)] = '\0';
 
-	dev_info(&ctrl_info->pci_dev->dev,
-		"Firmware version: %s\n", ctrl_info->firmware_version);
+	dev_info(&ctrl_info->pci_dev->dev, "Firmware version: %s\n", ctrl_info->firmware_version);
 
 out:
 	kfree(identify);
@@ -7631,14 +7370,10 @@ static int pqi_config_table_update(struct pqi_ctrl_info *ctrl_info,
 	memset(&request, 0, sizeof(request));
 	request.header.iu_type = PQI_REQUEST_IU_VENDOR_GENERAL;
-	put_unaligned_le16(sizeof(request) - PQI_REQUEST_HEADER_LENGTH,
-		&request.header.iu_length);
-	put_unaligned_le16(PQI_VENDOR_GENERAL_CONFIG_TABLE_UPDATE,
-		&request.function_code);
-	put_unaligned_le16(first_section,
-		&request.data.config_table_update.first_section);
-	put_unaligned_le16(last_section,
-		&request.data.config_table_update.last_section);
+	put_unaligned_le16(sizeof(request) - PQI_REQUEST_HEADER_LENGTH, &request.header.iu_length);
+	put_unaligned_le16(PQI_VENDOR_GENERAL_CONFIG_TABLE_UPDATE, &request.function_code);
+	put_unaligned_le16(first_section, &request.data.config_table_update.first_section);
+	put_unaligned_le16(last_section, &request.data.config_table_update.last_section);
 
 	return pqi_submit_raid_request_synchronous(ctrl_info, &request.header, 0, NULL);
 }
@@ -7858,8 +7593,7 @@ static void pqi_process_firmware_features(
 	firmware_features = section_info->section;
 	firmware_features_iomem_addr = section_info->section_iomem_addr;
 
-	for (i = 0, num_features_supported = 0;
-		i < ARRAY_SIZE(pqi_firmware_features); i++) {
+	for (i = 0, num_features_supported = 0; i < ARRAY_SIZE(pqi_firmware_features); i++) {
 		if (pqi_is_firmware_feature_supported(firmware_features,
 			pqi_firmware_features[i].feature_bit)) {
 			pqi_firmware_features[i].supported = true;
@@ -7880,16 +7614,14 @@ static void pqi_process_firmware_features(
 			pqi_firmware_features[i].feature_bit);
 	}
 
-	rc = pqi_enable_firmware_features(ctrl_info, firmware_features,
-		firmware_features_iomem_addr);
+	rc = pqi_enable_firmware_features(ctrl_info, firmware_features, firmware_features_iomem_addr);
 	if (rc) {
 		dev_err(&ctrl_info->pci_dev->dev,
 			"failed to enable firmware features in PQI configuration table\n");
 		for (i = 0; i < ARRAY_SIZE(pqi_firmware_features); i++) {
 			if (!pqi_firmware_features[i].supported)
 				continue;
-			pqi_firmware_feature_update(ctrl_info,
-				&pqi_firmware_features[i]);
+			pqi_firmware_feature_update(ctrl_info, &pqi_firmware_features[i]);
 		}
 		return;
 	}
@@ -7902,8 +7634,7 @@ static void pqi_process_firmware_features(
 			pqi_firmware_features[i].feature_bit)) {
 			pqi_firmware_features[i].enabled = true;
 		}
-		pqi_firmware_feature_update(ctrl_info,
-			&pqi_firmware_features[i]);
+		pqi_firmware_feature_update(ctrl_info, &pqi_firmware_features[i]);
 	}
 }
@@ -7996,18 +7727,12 @@ static int pqi_process_config_table(struct pqi_ctrl_info *ctrl_info)
 				dev_warn(&ctrl_info->pci_dev->dev,
 					"heartbeat disabled by module parameter\n");
 			else
-				ctrl_info->heartbeat_counter =
-					table_iomem_addr +
-					section_offset +
-					offsetof(struct pqi_config_table_heartbeat,
-						heartbeat_counter);
+				ctrl_info->heartbeat_counter = table_iomem_addr + section_offset +
+					offsetof(struct pqi_config_table_heartbeat, heartbeat_counter);
 			break;
 		case PQI_CONFIG_TABLE_SECTION_SOFT_RESET:
-			ctrl_info->soft_reset_status =
-				table_iomem_addr +
-				section_offset +
-				offsetof(struct pqi_config_table_soft_reset,
-					soft_reset_status);
+			ctrl_info->soft_reset_status = table_iomem_addr + section_offset +
+				offsetof(struct pqi_config_table_soft_reset, soft_reset_status);
 			break;
 		}
@@ -8141,15 +7866,11 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info)
 	ctrl_info->product_revision = (u8)(product_id >> 8);
 
 	if (reset_devices) {
-		if (ctrl_info->max_outstanding_requests >
-			PQI_MAX_OUTSTANDING_REQUESTS_KDUMP)
-			ctrl_info->max_outstanding_requests =
-				PQI_MAX_OUTSTANDING_REQUESTS_KDUMP;
+		if (ctrl_info->max_outstanding_requests > PQI_MAX_OUTSTANDING_REQUESTS_KDUMP)
+			ctrl_info->max_outstanding_requests = PQI_MAX_OUTSTANDING_REQUESTS_KDUMP;
 	} else {
-		if (ctrl_info->max_outstanding_requests >
-			PQI_MAX_OUTSTANDING_REQUESTS)
-			ctrl_info->max_outstanding_requests =
-				PQI_MAX_OUTSTANDING_REQUESTS;
+		if (ctrl_info->max_outstanding_requests > PQI_MAX_OUTSTANDING_REQUESTS)
+			ctrl_info->max_outstanding_requests = PQI_MAX_OUTSTANDING_REQUESTS;
 	}
 
 	pqi_calculate_io_resources(ctrl_info);
@@ -8217,8 +7938,7 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info)
 		return rc;
 
 	if (ctrl_info->num_msix_vectors_enabled < ctrl_info->num_queue_groups) {
-		ctrl_info->max_msix_vectors =
-			ctrl_info->num_msix_vectors_enabled;
+		ctrl_info->max_msix_vectors = ctrl_info->num_msix_vectors_enabled;
 		pqi_calculate_queue_resources(ctrl_info);
 	}
@@ -8260,8 +7980,7 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info)
 				"error obtaining advanced RAID bypass configuration\n");
 			return rc;
 		}
-		ctrl_info->ciss_report_log_flags |=
-			CISS_REPORT_LOG_FLAG_DRIVE_TYPE_MIX;
+		ctrl_info->ciss_report_log_flags |= CISS_REPORT_LOG_FLAG_DRIVE_TYPE_MIX;
 	}
 
 	rc = pqi_enable_events(ctrl_info);
@@ -8428,8 +8147,7 @@ static int pqi_ctrl_init_resume(struct pqi_ctrl_info *ctrl_info)
 				"error obtaining advanced RAID bypass configuration\n");
 			return rc;
 		}
-		ctrl_info->ciss_report_log_flags |=
-			CISS_REPORT_LOG_FLAG_DRIVE_TYPE_MIX;
+		ctrl_info->ciss_report_log_flags |= CISS_REPORT_LOG_FLAG_DRIVE_TYPE_MIX;
 	}
 
 	rc = pqi_enable_events(ctrl_info);
@@ -8593,10 +8311,8 @@ static struct pqi_ctrl_info *pqi_alloc_ctrl_info(int numa_node)
 	ctrl_info->max_msix_vectors = PQI_MAX_MSIX_VECTORS;
 	ctrl_info->ciss_report_log_flags = CISS_REPORT_LOG_FLAG_UNIQUE_LUN_ID;
-	ctrl_info->max_transfer_encrypted_sas_sata =
-		PQI_DEFAULT_MAX_TRANSFER_ENCRYPTED_SAS_SATA;
-	ctrl_info->max_transfer_encrypted_nvme =
-		PQI_DEFAULT_MAX_TRANSFER_ENCRYPTED_NVME;
+	ctrl_info->max_transfer_encrypted_sas_sata = PQI_DEFAULT_MAX_TRANSFER_ENCRYPTED_SAS_SATA;
+	ctrl_info->max_transfer_encrypted_nvme = PQI_DEFAULT_MAX_TRANSFER_ENCRYPTED_NVME;
 	ctrl_info->max_write_raid_5_6 = PQI_DEFAULT_MAX_WRITE_RAID_5_6;
 	ctrl_info->max_write_raid_1_10_2drive = ~0;
 	ctrl_info->max_write_raid_1_10_3drive = ~0;
@@ -8756,8 +8472,7 @@ static void pqi_ofa_setup_host_buffer(struct pqi_ctrl_info *ctrl_info)
 
 	dev = &ctrl_info->pci_dev->dev;
 
-	ofap = dma_alloc_coherent(dev, sizeof(*ofap),
-		&ctrl_info->pqi_ofa_mem_dma_handle, GFP_KERNEL);
+	ofap = dma_alloc_coherent(dev, sizeof(*ofap), &ctrl_info->pqi_ofa_mem_dma_handle, GFP_KERNEL);
 	if (!ofap)
 		return;
@@ -8793,8 +8508,7 @@ static void pqi_ofa_free_host_buffer(struct pqi_ctrl_info *ctrl_info)
 		goto out;
 
 	mem_descriptor = ofap->sg_descriptor;
-	num_memory_descriptors =
-		get_unaligned_le16(&ofap->num_memory_descriptors);
+	num_memory_descriptors = get_unaligned_le16(&ofap->num_memory_descriptors);
 
 	for (i = 0; i < num_memory_descriptors; i++) {
 		dma_free_coherent(dev,
@@ -8805,8 +8519,7 @@ static void pqi_ofa_free_host_buffer(struct pqi_ctrl_info *ctrl_info)
 	kfree(ctrl_info->pqi_ofa_chunk_virt_addr);
 
 out:
-	dma_free_coherent(dev, sizeof(*ofap), ofap,
-		ctrl_info->pqi_ofa_mem_dma_handle);
+	dma_free_coherent(dev, sizeof(*ofap), ofap, ctrl_info->pqi_ofa_mem_dma_handle);
 	ctrl_info->pqi_ofa_mem_virt_addr = NULL;
 }
@@ -8819,10 +8532,8 @@ static int pqi_ofa_host_memory_update(struct pqi_ctrl_info *ctrl_info)
 	memset(&request, 0, sizeof(request));
 
 	request.header.iu_type = PQI_REQUEST_IU_VENDOR_GENERAL;
-	put_unaligned_le16(sizeof(request) - PQI_REQUEST_HEADER_LENGTH,
-		&request.header.iu_length);
-	put_unaligned_le16(PQI_VENDOR_GENERAL_HOST_MEMORY_UPDATE,
-		&request.function_code);
+	put_unaligned_le16(sizeof(request) - PQI_REQUEST_HEADER_LENGTH, &request.header.iu_length);
+	put_unaligned_le16(PQI_VENDOR_GENERAL_HOST_MEMORY_UPDATE, &request.function_code);
 
 	ofap = ctrl_info->pqi_ofa_mem_virt_addr;
@@ -8875,12 +8586,10 @@ static void pqi_fail_all_outstanding_requests(struct pqi_ctrl_info *ctrl_info)
 			}
 		} else {
 			io_request->status = -ENXIO;
-			io_request->error_info =
-				&pqi_ctrl_offline_raid_error_info;
+			io_request->error_info = &pqi_ctrl_offline_raid_error_info;
 		}
 
-		io_request->io_complete_callback(io_request,
-			io_request->context);
+		io_request->io_complete_callback(io_request, io_request->context);
 	}
 }
@@ -9076,8 +8785,7 @@ static void pqi_process_lockup_action_param(void)
 		return;
 
 	for (i = 0; i < ARRAY_SIZE(pqi_lockup_actions); i++) {
-		if (strcmp(pqi_lockup_action_param,
-			pqi_lockup_actions[i].name) == 0) {
+		if (strcmp(pqi_lockup_action_param, pqi_lockup_actions[i].name) == 0) {
 			pqi_lockup_action = pqi_lockup_actions[i].action;
 			return;
 		}
 	}
@@ -9992,63 +9700,63 @@ static const struct pci_device_id pqi_pci_id_table[] = {
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
-			       0x1014, 0x0718)
+			0x1014, 0x0718)
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
-			       0x1e93, 0x1000)
+			0x1e93, 0x1000)
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
-			       0x1e93, 0x1001)
+			0x1e93, 0x1001)
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
-			       0x1e93, 0x1002)
+			0x1e93, 0x1002)
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
-			       0x1e93, 0x1005)
+			0x1e93, 0x1005)
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
-			       0x1f51, 0x1001)
+			0x1f51, 0x1001)
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
-			       0x1f51, 0x1002)
+			0x1f51, 0x1002)
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
-			       0x1f51, 0x1003)
+			0x1f51, 0x1003)
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
-			       0x1f51, 0x1004)
+			0x1f51, 0x1004)
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
-			       0x1f51, 0x1005)
+			0x1f51, 0x1005)
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
-			       0x1f51, 0x1006)
+			0x1f51, 0x1006)
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
-			       0x1f51, 0x1007)
+			0x1f51, 0x1007)
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
-			       0x1f51, 0x1008)
+			0x1f51, 0x1008)
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
-			       0x1f51, 0x1009)
+			0x1f51, 0x1009)
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
-			       0x1f51, 0x100a)
+			0x1f51, 0x100a)
 	},
 	{
 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
@@ -10104,503 +9812,277 @@ module_exit(pqi_cleanup);
 
 static void pqi_verify_structures(void)
 {
-	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers,
-		sis_host_to_ctrl_doorbell) != 0x20);
-	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers,
-		sis_interrupt_mask) != 0x34);
-	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers,
-		sis_ctrl_to_host_doorbell) != 0x9c);
-	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers,
-		sis_ctrl_to_host_doorbell_clear) != 0xa0);
-	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers,
-		sis_driver_scratch) != 0xb0);
-	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers,
-		sis_product_identifier) != 0xb4);
-	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers,
-		sis_firmware_status) != 0xbc);
-	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers,
-		sis_ctrl_shutdown_reason_code) != 0xcc);
-	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers,
-		sis_mailbox) != 0x1000);
-	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers,
-		pqi_registers) != 0x4000);
-
-	BUILD_BUG_ON(offsetof(struct pqi_iu_header,
-		iu_type) != 0x0);
-	BUILD_BUG_ON(offsetof(struct pqi_iu_header,
-		iu_length) != 0x2);
-	BUILD_BUG_ON(offsetof(struct pqi_iu_header,
-		response_queue_id) != 0x4);
-	BUILD_BUG_ON(offsetof(struct pqi_iu_header,
-		driver_flags) != 0x6);
+	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers, sis_host_to_ctrl_doorbell) != 0x20);
+	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers, sis_interrupt_mask) != 0x34);
+	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers, sis_ctrl_to_host_doorbell) != 0x9c);
+	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers, sis_ctrl_to_host_doorbell_clear) != 0xa0);
+	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers, sis_driver_scratch) != 0xb0);
+	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers, sis_product_identifier) != 0xb4);
+	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers, sis_firmware_status) != 0xbc);
+	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers, sis_ctrl_shutdown_reason_code) != 0xcc);
+	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers, sis_mailbox) != 0x1000);
+	BUILD_BUG_ON(offsetof(struct pqi_ctrl_registers, pqi_registers) != 0x4000);
+
+	BUILD_BUG_ON(offsetof(struct pqi_iu_header, iu_type) != 0x0);
+	BUILD_BUG_ON(offsetof(struct pqi_iu_header, iu_length) != 0x2);
+	BUILD_BUG_ON(offsetof(struct pqi_iu_header, response_queue_id) != 0x4);
+	BUILD_BUG_ON(offsetof(struct pqi_iu_header, driver_flags) != 0x6);
 	BUILD_BUG_ON(sizeof(struct pqi_iu_header) != 0x8);
 
-	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info,
-		status) != 0x0);
-	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info,
-		service_response) != 0x1);
-	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info,
-		data_present) != 0x2);
-	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info,
-		reserved) != 0x3);
-	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info,
-		residual_count) != 0x4);
-	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info,
-		data_length) != 0x8);
-	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info,
-		reserved1) != 0xa);
-	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info,
-		data) != 0xc);
+	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info, status) != 0x0);
+	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info, service_response) != 0x1);
+	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info, data_present) != 0x2);
+	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info, reserved) != 0x3);
+	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info, residual_count) != 0x4);
+	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info, data_length) != 0x8);
+	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info, reserved1) != 0xa);
+	BUILD_BUG_ON(offsetof(struct pqi_aio_error_info, data) != 0xc);
 	BUILD_BUG_ON(sizeof(struct pqi_aio_error_info) != 0x10c);
 
-	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info,
-		data_in_result) != 0x0);
-	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info,
-		data_out_result) != 0x1);
-	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info,
-		reserved) != 0x2);
-	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info,
-		status) != 0x5);
-	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info,
-		status_qualifier) != 0x6);
-	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info,
-		sense_data_length) != 0x8);
-	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info,
-		response_data_length) != 0xa);
-	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info,
-		data_in_transferred) != 0xc);
-	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info,
-		data_out_transferred) != 0x10);
-	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info,
-		data) != 0x14);
+	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info, data_in_result) != 0x0);
+	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info, data_out_result) != 0x1);
+	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info, reserved) != 0x2);
+	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info, status) != 0x5);
+	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info, status_qualifier) != 0x6);
+	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info, sense_data_length) != 0x8);
+	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info, response_data_length) != 0xa);
+	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info, data_in_transferred) != 0xc);
+	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info, data_out_transferred) != 0x10);
+	BUILD_BUG_ON(offsetof(struct pqi_raid_error_info, data) != 0x14);
 	BUILD_BUG_ON(sizeof(struct pqi_raid_error_info) != 0x114);
 
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		signature) != 0x0);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		function_and_status_code) != 0x8);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		max_admin_iq_elements) != 0x10);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		max_admin_oq_elements) != 0x11);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		admin_iq_element_length) != 0x12);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		admin_oq_element_length) != 0x13);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		max_reset_timeout) != 0x14);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		legacy_intx_status) != 0x18);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		legacy_intx_mask_set) != 0x1c);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		legacy_intx_mask_clear) != 0x20);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		device_status) != 0x40);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		admin_iq_pi_offset) != 0x48);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		admin_oq_ci_offset) != 0x50);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		admin_iq_element_array_addr) != 0x58);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		admin_oq_element_array_addr) != 0x60);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		admin_iq_ci_addr) != 0x68);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		admin_oq_pi_addr) != 0x70);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		admin_iq_num_elements) != 0x78);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		admin_oq_num_elements) != 0x79);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		admin_queue_int_msg_num) != 0x7a);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		device_error) != 0x80);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		error_details) != 0x88);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		device_reset) != 0x90);
-	BUILD_BUG_ON(offsetof(struct pqi_device_registers,
-		power_action) != 0x94);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, signature) != 0x0);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, function_and_status_code) != 0x8);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, max_admin_iq_elements) != 0x10);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, max_admin_oq_elements) != 0x11);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, admin_iq_element_length) != 0x12);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, admin_oq_element_length) != 0x13);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, max_reset_timeout) != 0x14);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, legacy_intx_status) != 0x18);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, legacy_intx_mask_set) != 0x1c);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, legacy_intx_mask_clear) != 0x20);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, device_status) != 0x40);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, admin_iq_pi_offset) != 0x48);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, admin_oq_ci_offset) != 0x50);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, admin_iq_element_array_addr) != 0x58);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, admin_oq_element_array_addr) != 0x60);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, admin_iq_ci_addr) != 0x68);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, admin_oq_pi_addr) != 0x70);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, admin_iq_num_elements) != 0x78);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, admin_oq_num_elements) != 0x79);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, admin_queue_int_msg_num) != 0x7a);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, device_error) != 0x80);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, error_details) != 0x88);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, device_reset) != 0x90);
+	BUILD_BUG_ON(offsetof(struct pqi_device_registers, power_action) != 0x94);
 	BUILD_BUG_ON(sizeof(struct pqi_device_registers) != 0x100);
 
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		header.iu_type) != 0);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		header.iu_length) != 2);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		header.driver_flags) != 6);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		request_id) != 8);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		function_code) != 10);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.report_device_capability.buffer_length) != 44);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.report_device_capability.sg_descriptor) != 48);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_iq.queue_id) != 12);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_iq.element_array_addr) != 16);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_iq.ci_addr) != 24);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_iq.num_elements) != 32);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_iq.element_length) != 34);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_iq.queue_protocol) != 36);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_oq.queue_id) != 12);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_oq.element_array_addr) != 16);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_oq.pi_addr) != 24);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_oq.num_elements) != 32);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_oq.element_length) != 34);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_oq.queue_protocol) != 36);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_oq.int_msg_num) != 40);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_oq.coalescing_count) != 42);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_oq.min_coalescing_time) != 44);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.create_operational_oq.max_coalescing_time) != 48);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request,
-		data.delete_operational_queue.queue_id) != 12);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, header.iu_type) != 0);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, header.iu_length) != 2);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, header.driver_flags) != 6);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, request_id) != 8);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, function_code) != 10);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.report_device_capability.buffer_length) != 44);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.report_device_capability.sg_descriptor) != 48);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_iq.queue_id) != 12);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_iq.element_array_addr) != 16);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_iq.ci_addr) != 24);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_iq.num_elements) != 32);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_iq.element_length) != 34);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_iq.queue_protocol) != 36);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_oq.queue_id) != 12);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_oq.element_array_addr) != 16);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_oq.pi_addr) != 24);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_oq.num_elements) != 32);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_oq.element_length) != 34);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_oq.queue_protocol) != 36);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_oq.int_msg_num) != 40);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_oq.coalescing_count) != 42);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_oq.min_coalescing_time) != 44);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.create_operational_oq.max_coalescing_time) != 48);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_request, data.delete_operational_queue.queue_id) != 12);
 	BUILD_BUG_ON(sizeof(struct pqi_general_admin_request) != 64);
-	BUILD_BUG_ON(sizeof_field(struct pqi_general_admin_request,
-		data.create_operational_iq) != 64 - 11);
-	BUILD_BUG_ON(sizeof_field(struct pqi_general_admin_request,
-		data.create_operational_oq) != 64 - 11);
-	BUILD_BUG_ON(sizeof_field(struct pqi_general_admin_request,
-		data.delete_operational_queue) != 64 - 11);
-
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response,
-		header.iu_type) != 0);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response,
-		header.iu_length) != 2);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response,
-		header.driver_flags) != 6);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response,
-		request_id) != 8);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response,
-		function_code) != 10);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response,
-		status) != 11);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response,
-		data.create_operational_iq.status_descriptor) != 12);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response,
-		data.create_operational_iq.iq_pi_offset) != 16);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response,
-		data.create_operational_oq.status_descriptor) != 12);
-	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response,
-		data.create_operational_oq.oq_ci_offset) != 16);
+	BUILD_BUG_ON(sizeof_field(struct pqi_general_admin_request, data.create_operational_iq) != 64 - 11);
+	BUILD_BUG_ON(sizeof_field(struct pqi_general_admin_request, data.create_operational_oq) != 64 - 11);
+	BUILD_BUG_ON(sizeof_field(struct pqi_general_admin_request, data.delete_operational_queue) != 64 - 11);
+
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response, header.iu_type) != 0);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response, header.iu_length) != 2);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response, header.driver_flags) != 6);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response, request_id) != 8);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response, function_code) != 10);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response, status) != 11);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response, data.create_operational_iq.status_descriptor) != 12);
+	BUILD_BUG_ON(offsetof(struct pqi_general_admin_response,
data.create_operational_iq.iq_pi_offset) != 16); + BUILD_BUG_ON(offsetof(struct pqi_general_admin_response, data.create_operational_oq.status_descriptor) != 12); + BUILD_BUG_ON(offsetof(struct pqi_general_admin_response, data.create_operational_oq.oq_ci_offset) != 16); BUILD_BUG_ON(sizeof(struct pqi_general_admin_response) != 64); - BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, - header.iu_type) != 0); - BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, - header.iu_length) != 2); - BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, - header.response_queue_id) != 4); - BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, - header.driver_flags) != 6); - BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, - request_id) != 8); - BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, - nexus_id) != 10); - BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, - buffer_length) != 12); - BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, - lun_number) != 16); - BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, - protocol_specific) != 24); - BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, - error_index) != 27); - BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, - cdb) != 32); - BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, - timeout) != 60); - BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, - sg_descriptors) != 64); - BUILD_BUG_ON(sizeof(struct pqi_raid_path_request) != - PQI_OPERATIONAL_IQ_ELEMENT_LENGTH); - - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - header.iu_type) != 0); - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - header.iu_length) != 2); - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - header.response_queue_id) != 4); - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - header.driver_flags) != 6); - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - request_id) != 8); - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - nexus_id) != 12); - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - buffer_length) != 
16); - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - data_encryption_key_index) != 22); - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - encrypt_tweak_lower) != 24); - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - encrypt_tweak_upper) != 28); - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - cdb) != 32); - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - error_index) != 48); - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - num_sg_descriptors) != 50); - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - cdb_length) != 51); - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - lun_number) != 52); - BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, - sg_descriptors) != 64); - BUILD_BUG_ON(sizeof(struct pqi_aio_path_request) != - PQI_OPERATIONAL_IQ_ELEMENT_LENGTH); - - BUILD_BUG_ON(offsetof(struct pqi_io_response, - header.iu_type) != 0); - BUILD_BUG_ON(offsetof(struct pqi_io_response, - header.iu_length) != 2); - BUILD_BUG_ON(offsetof(struct pqi_io_response, - request_id) != 8); - BUILD_BUG_ON(offsetof(struct pqi_io_response, - error_index) != 10); - - BUILD_BUG_ON(offsetof(struct pqi_general_management_request, - header.iu_type) != 0); - BUILD_BUG_ON(offsetof(struct pqi_general_management_request, - header.iu_length) != 2); - BUILD_BUG_ON(offsetof(struct pqi_general_management_request, - header.response_queue_id) != 4); - BUILD_BUG_ON(offsetof(struct pqi_general_management_request, - request_id) != 8); - BUILD_BUG_ON(offsetof(struct pqi_general_management_request, - data.report_event_configuration.buffer_length) != 12); - BUILD_BUG_ON(offsetof(struct pqi_general_management_request, - data.report_event_configuration.sg_descriptors) != 16); - BUILD_BUG_ON(offsetof(struct pqi_general_management_request, - data.set_event_configuration.global_event_oq_id) != 10); - BUILD_BUG_ON(offsetof(struct pqi_general_management_request, - data.set_event_configuration.buffer_length) != 12); - BUILD_BUG_ON(offsetof(struct 
pqi_general_management_request, - data.set_event_configuration.sg_descriptors) != 16); - - BUILD_BUG_ON(offsetof(struct pqi_iu_layer_descriptor, - max_inbound_iu_length) != 6); - BUILD_BUG_ON(offsetof(struct pqi_iu_layer_descriptor, - max_outbound_iu_length) != 14); + BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, header.iu_type) != 0); + BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, header.iu_length) != 2); + BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, header.response_queue_id) != 4); + BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, header.driver_flags) != 6); + BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, request_id) != 8); + BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, nexus_id) != 10); + BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, buffer_length) != 12); + BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, lun_number) != 16); + BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, protocol_specific) != 24); + BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, error_index) != 27); + BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, cdb) != 32); + BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, timeout) != 60); + BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, sg_descriptors) != 64); + BUILD_BUG_ON(sizeof(struct pqi_raid_path_request) != PQI_OPERATIONAL_IQ_ELEMENT_LENGTH); + + BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, header.iu_type) != 0); + BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, header.iu_length) != 2); + BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, header.response_queue_id) != 4); + BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, header.driver_flags) != 6); + BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, request_id) != 8); + BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, nexus_id) != 12); + BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, buffer_length) != 16); + BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, data_encryption_key_index) != 22); + 
BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, encrypt_tweak_lower) != 24); + BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, encrypt_tweak_upper) != 28); + BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, cdb) != 32); + BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, error_index) != 48); + BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, num_sg_descriptors) != 50); + BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, cdb_length) != 51); + BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, lun_number) != 52); + BUILD_BUG_ON(offsetof(struct pqi_aio_path_request, sg_descriptors) != 64); + BUILD_BUG_ON(sizeof(struct pqi_aio_path_request) != PQI_OPERATIONAL_IQ_ELEMENT_LENGTH); + + BUILD_BUG_ON(offsetof(struct pqi_io_response, header.iu_type) != 0); + BUILD_BUG_ON(offsetof(struct pqi_io_response, header.iu_length) != 2); + BUILD_BUG_ON(offsetof(struct pqi_io_response, request_id) != 8); + BUILD_BUG_ON(offsetof(struct pqi_io_response, error_index) != 10); + + BUILD_BUG_ON(offsetof(struct pqi_general_management_request, header.iu_type) != 0); + BUILD_BUG_ON(offsetof(struct pqi_general_management_request, header.iu_length) != 2); + BUILD_BUG_ON(offsetof(struct pqi_general_management_request, header.response_queue_id) != 4); + BUILD_BUG_ON(offsetof(struct pqi_general_management_request, request_id) != 8); + BUILD_BUG_ON(offsetof(struct pqi_general_management_request, data.report_event_configuration.buffer_length) != 12); + BUILD_BUG_ON(offsetof(struct pqi_general_management_request, data.report_event_configuration.sg_descriptors) != 16); + BUILD_BUG_ON(offsetof(struct pqi_general_management_request, data.set_event_configuration.global_event_oq_id) != 10); + BUILD_BUG_ON(offsetof(struct pqi_general_management_request, data.set_event_configuration.buffer_length) != 12); + BUILD_BUG_ON(offsetof(struct pqi_general_management_request, data.set_event_configuration.sg_descriptors) != 16); + + BUILD_BUG_ON(offsetof(struct pqi_iu_layer_descriptor, 
max_inbound_iu_length) != 6); + BUILD_BUG_ON(offsetof(struct pqi_iu_layer_descriptor, max_outbound_iu_length) != 14); BUILD_BUG_ON(sizeof(struct pqi_iu_layer_descriptor) != 16); - BUILD_BUG_ON(offsetof(struct pqi_device_capability, - data_length) != 0); - BUILD_BUG_ON(offsetof(struct pqi_device_capability, - iq_arbitration_priority_support_bitmask) != 8); - BUILD_BUG_ON(offsetof(struct pqi_device_capability, - maximum_aw_a) != 9); - BUILD_BUG_ON(offsetof(struct pqi_device_capability, - maximum_aw_b) != 10); - BUILD_BUG_ON(offsetof(struct pqi_device_capability, - maximum_aw_c) != 11); - BUILD_BUG_ON(offsetof(struct pqi_device_capability, - max_inbound_queues) != 16); - BUILD_BUG_ON(offsetof(struct pqi_device_capability, - max_elements_per_iq) != 18); - BUILD_BUG_ON(offsetof(struct pqi_device_capability, - max_iq_element_length) != 24); - BUILD_BUG_ON(offsetof(struct pqi_device_capability, - min_iq_element_length) != 26); - BUILD_BUG_ON(offsetof(struct pqi_device_capability, - max_outbound_queues) != 30); - BUILD_BUG_ON(offsetof(struct pqi_device_capability, - max_elements_per_oq) != 32); - BUILD_BUG_ON(offsetof(struct pqi_device_capability, - intr_coalescing_time_granularity) != 34); - BUILD_BUG_ON(offsetof(struct pqi_device_capability, - max_oq_element_length) != 36); - BUILD_BUG_ON(offsetof(struct pqi_device_capability, - min_oq_element_length) != 38); - BUILD_BUG_ON(offsetof(struct pqi_device_capability, - iu_layer_descriptors) != 64); + BUILD_BUG_ON(offsetof(struct pqi_device_capability, data_length) != 0); + BUILD_BUG_ON(offsetof(struct pqi_device_capability, iq_arbitration_priority_support_bitmask) != 8); + BUILD_BUG_ON(offsetof(struct pqi_device_capability, maximum_aw_a) != 9); + BUILD_BUG_ON(offsetof(struct pqi_device_capability, maximum_aw_b) != 10); + BUILD_BUG_ON(offsetof(struct pqi_device_capability, maximum_aw_c) != 11); + BUILD_BUG_ON(offsetof(struct pqi_device_capability, max_inbound_queues) != 16); + BUILD_BUG_ON(offsetof(struct 
pqi_device_capability, max_elements_per_iq) != 18); + BUILD_BUG_ON(offsetof(struct pqi_device_capability, max_iq_element_length) != 24); + BUILD_BUG_ON(offsetof(struct pqi_device_capability, min_iq_element_length) != 26); + BUILD_BUG_ON(offsetof(struct pqi_device_capability, max_outbound_queues) != 30); + BUILD_BUG_ON(offsetof(struct pqi_device_capability, max_elements_per_oq) != 32); + BUILD_BUG_ON(offsetof(struct pqi_device_capability, intr_coalescing_time_granularity) != 34); + BUILD_BUG_ON(offsetof(struct pqi_device_capability, max_oq_element_length) != 36); + BUILD_BUG_ON(offsetof(struct pqi_device_capability, min_oq_element_length) != 38); + BUILD_BUG_ON(offsetof(struct pqi_device_capability, iu_layer_descriptors) != 64); BUILD_BUG_ON(sizeof(struct pqi_device_capability) != 576); - BUILD_BUG_ON(offsetof(struct pqi_event_descriptor, - event_type) != 0); - BUILD_BUG_ON(offsetof(struct pqi_event_descriptor, - oq_id) != 2); + BUILD_BUG_ON(offsetof(struct pqi_event_descriptor, event_type) != 0); + BUILD_BUG_ON(offsetof(struct pqi_event_descriptor, oq_id) != 2); BUILD_BUG_ON(sizeof(struct pqi_event_descriptor) != 4); - BUILD_BUG_ON(offsetof(struct pqi_event_config, - num_event_descriptors) != 2); - BUILD_BUG_ON(offsetof(struct pqi_event_config, - descriptors) != 4); - - BUILD_BUG_ON(PQI_NUM_SUPPORTED_EVENTS != - ARRAY_SIZE(pqi_supported_event_types)); - - BUILD_BUG_ON(offsetof(struct pqi_event_response, - header.iu_type) != 0); - BUILD_BUG_ON(offsetof(struct pqi_event_response, - header.iu_length) != 2); - BUILD_BUG_ON(offsetof(struct pqi_event_response, - event_type) != 8); - BUILD_BUG_ON(offsetof(struct pqi_event_response, - event_id) != 10); - BUILD_BUG_ON(offsetof(struct pqi_event_response, - additional_event_id) != 12); - BUILD_BUG_ON(offsetof(struct pqi_event_response, - data) != 16); + BUILD_BUG_ON(offsetof(struct pqi_event_config, num_event_descriptors) != 2); + BUILD_BUG_ON(offsetof(struct pqi_event_config, descriptors) != 4); + + 
BUILD_BUG_ON(PQI_NUM_SUPPORTED_EVENTS != ARRAY_SIZE(pqi_supported_event_types)); + + BUILD_BUG_ON(offsetof(struct pqi_event_response, header.iu_type) != 0); + BUILD_BUG_ON(offsetof(struct pqi_event_response, header.iu_length) != 2); + BUILD_BUG_ON(offsetof(struct pqi_event_response, event_type) != 8); + BUILD_BUG_ON(offsetof(struct pqi_event_response, event_id) != 10); + BUILD_BUG_ON(offsetof(struct pqi_event_response, additional_event_id) != 12); + BUILD_BUG_ON(offsetof(struct pqi_event_response, data) != 16); BUILD_BUG_ON(sizeof(struct pqi_event_response) != 32); - BUILD_BUG_ON(offsetof(struct pqi_event_acknowledge_request, - header.iu_type) != 0); - BUILD_BUG_ON(offsetof(struct pqi_event_acknowledge_request, - header.iu_length) != 2); - BUILD_BUG_ON(offsetof(struct pqi_event_acknowledge_request, - event_type) != 8); - BUILD_BUG_ON(offsetof(struct pqi_event_acknowledge_request, - event_id) != 10); - BUILD_BUG_ON(offsetof(struct pqi_event_acknowledge_request, - additional_event_id) != 12); + BUILD_BUG_ON(offsetof(struct pqi_event_acknowledge_request, header.iu_type) != 0); + BUILD_BUG_ON(offsetof(struct pqi_event_acknowledge_request, header.iu_length) != 2); + BUILD_BUG_ON(offsetof(struct pqi_event_acknowledge_request, event_type) != 8); + BUILD_BUG_ON(offsetof(struct pqi_event_acknowledge_request, event_id) != 10); + BUILD_BUG_ON(offsetof(struct pqi_event_acknowledge_request, additional_event_id) != 12); BUILD_BUG_ON(sizeof(struct pqi_event_acknowledge_request) != 16); - BUILD_BUG_ON(offsetof(struct pqi_task_management_request, - header.iu_type) != 0); - BUILD_BUG_ON(offsetof(struct pqi_task_management_request, - header.iu_length) != 2); - BUILD_BUG_ON(offsetof(struct pqi_task_management_request, - request_id) != 8); - BUILD_BUG_ON(offsetof(struct pqi_task_management_request, - nexus_id) != 10); - BUILD_BUG_ON(offsetof(struct pqi_task_management_request, - timeout) != 14); - BUILD_BUG_ON(offsetof(struct pqi_task_management_request, - lun_number) != 16); - 
BUILD_BUG_ON(offsetof(struct pqi_task_management_request, - protocol_specific) != 24); - BUILD_BUG_ON(offsetof(struct pqi_task_management_request, - outbound_queue_id_to_manage) != 26); - BUILD_BUG_ON(offsetof(struct pqi_task_management_request, - request_id_to_manage) != 28); - BUILD_BUG_ON(offsetof(struct pqi_task_management_request, - task_management_function) != 30); + BUILD_BUG_ON(offsetof(struct pqi_task_management_request, header.iu_type) != 0); + BUILD_BUG_ON(offsetof(struct pqi_task_management_request, header.iu_length) != 2); + BUILD_BUG_ON(offsetof(struct pqi_task_management_request, request_id) != 8); + BUILD_BUG_ON(offsetof(struct pqi_task_management_request, nexus_id) != 10); + BUILD_BUG_ON(offsetof(struct pqi_task_management_request, timeout) != 14); + BUILD_BUG_ON(offsetof(struct pqi_task_management_request, lun_number) != 16); + BUILD_BUG_ON(offsetof(struct pqi_task_management_request, protocol_specific) != 24); + BUILD_BUG_ON(offsetof(struct pqi_task_management_request, outbound_queue_id_to_manage) != 26); + BUILD_BUG_ON(offsetof(struct pqi_task_management_request, request_id_to_manage) != 28); + BUILD_BUG_ON(offsetof(struct pqi_task_management_request, task_management_function) != 30); BUILD_BUG_ON(sizeof(struct pqi_task_management_request) != 32); - BUILD_BUG_ON(offsetof(struct pqi_task_management_response, - header.iu_type) != 0); - BUILD_BUG_ON(offsetof(struct pqi_task_management_response, - header.iu_length) != 2); - BUILD_BUG_ON(offsetof(struct pqi_task_management_response, - request_id) != 8); - BUILD_BUG_ON(offsetof(struct pqi_task_management_response, - nexus_id) != 10); - BUILD_BUG_ON(offsetof(struct pqi_task_management_response, - additional_response_info) != 12); - BUILD_BUG_ON(offsetof(struct pqi_task_management_response, - response_code) != 15); + BUILD_BUG_ON(offsetof(struct pqi_task_management_response, header.iu_type) != 0); + BUILD_BUG_ON(offsetof(struct pqi_task_management_response, header.iu_length) != 2); + 
BUILD_BUG_ON(offsetof(struct pqi_task_management_response, request_id) != 8); + BUILD_BUG_ON(offsetof(struct pqi_task_management_response, nexus_id) != 10); + BUILD_BUG_ON(offsetof(struct pqi_task_management_response, additional_response_info) != 12); + BUILD_BUG_ON(offsetof(struct pqi_task_management_response, response_code) != 15); BUILD_BUG_ON(sizeof(struct pqi_task_management_response) != 16); - BUILD_BUG_ON(offsetof(struct bmic_identify_controller, - configured_logical_drive_count) != 0); - BUILD_BUG_ON(offsetof(struct bmic_identify_controller, - configuration_signature) != 1); - BUILD_BUG_ON(offsetof(struct bmic_identify_controller, - firmware_version_short) != 5); - BUILD_BUG_ON(offsetof(struct bmic_identify_controller, - extended_logical_unit_count) != 154); - BUILD_BUG_ON(offsetof(struct bmic_identify_controller, - firmware_build_number) != 190); - BUILD_BUG_ON(offsetof(struct bmic_identify_controller, - vendor_id) != 200); - BUILD_BUG_ON(offsetof(struct bmic_identify_controller, - product_id) != 208); - BUILD_BUG_ON(offsetof(struct bmic_identify_controller, - extra_controller_flags) != 286); - BUILD_BUG_ON(offsetof(struct bmic_identify_controller, - controller_mode) != 292); - BUILD_BUG_ON(offsetof(struct bmic_identify_controller, - spare_part_number) != 293); - BUILD_BUG_ON(offsetof(struct bmic_identify_controller, - firmware_version_long) != 325); - - BUILD_BUG_ON(offsetof(struct bmic_identify_physical_device, - phys_bay_in_box) != 115); - BUILD_BUG_ON(offsetof(struct bmic_identify_physical_device, - device_type) != 120); - BUILD_BUG_ON(offsetof(struct bmic_identify_physical_device, - redundant_path_present_map) != 1736); - BUILD_BUG_ON(offsetof(struct bmic_identify_physical_device, - active_path_number) != 1738); - BUILD_BUG_ON(offsetof(struct bmic_identify_physical_device, - alternate_paths_phys_connector) != 1739); - BUILD_BUG_ON(offsetof(struct bmic_identify_physical_device, - alternate_paths_phys_box_on_port) != 1755); - 
BUILD_BUG_ON(offsetof(struct bmic_identify_physical_device, - current_queue_depth_limit) != 1796); + BUILD_BUG_ON(offsetof(struct bmic_identify_controller, configured_logical_drive_count) != 0); + BUILD_BUG_ON(offsetof(struct bmic_identify_controller, configuration_signature) != 1); + BUILD_BUG_ON(offsetof(struct bmic_identify_controller, firmware_version_short) != 5); + BUILD_BUG_ON(offsetof(struct bmic_identify_controller, extended_logical_unit_count) != 154); + BUILD_BUG_ON(offsetof(struct bmic_identify_controller, firmware_build_number) != 190); + BUILD_BUG_ON(offsetof(struct bmic_identify_controller, vendor_id) != 200); + BUILD_BUG_ON(offsetof(struct bmic_identify_controller, product_id) != 208); + BUILD_BUG_ON(offsetof(struct bmic_identify_controller, extra_controller_flags) != 286); + BUILD_BUG_ON(offsetof(struct bmic_identify_controller, controller_mode) != 292); + BUILD_BUG_ON(offsetof(struct bmic_identify_controller, spare_part_number) != 293); + BUILD_BUG_ON(offsetof(struct bmic_identify_controller, firmware_version_long) != 325); + + BUILD_BUG_ON(offsetof(struct bmic_identify_physical_device, phys_bay_in_box) != 115); + BUILD_BUG_ON(offsetof(struct bmic_identify_physical_device, device_type) != 120); + BUILD_BUG_ON(offsetof(struct bmic_identify_physical_device, redundant_path_present_map) != 1736); + BUILD_BUG_ON(offsetof(struct bmic_identify_physical_device, active_path_number) != 1738); + BUILD_BUG_ON(offsetof(struct bmic_identify_physical_device, alternate_paths_phys_connector) != 1739); + BUILD_BUG_ON(offsetof(struct bmic_identify_physical_device, alternate_paths_phys_box_on_port) != 1755); + BUILD_BUG_ON(offsetof(struct bmic_identify_physical_device, current_queue_depth_limit) != 1796); BUILD_BUG_ON(sizeof(struct bmic_identify_physical_device) != 2560); BUILD_BUG_ON(sizeof(struct bmic_sense_feature_buffer_header) != 4); - BUILD_BUG_ON(offsetof(struct bmic_sense_feature_buffer_header, - page_code) != 0); - BUILD_BUG_ON(offsetof(struct 
bmic_sense_feature_buffer_header, - subpage_code) != 1); - BUILD_BUG_ON(offsetof(struct bmic_sense_feature_buffer_header, - buffer_length) != 2); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_buffer_header, page_code) != 0); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_buffer_header, subpage_code) != 1); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_buffer_header, buffer_length) != 2); BUILD_BUG_ON(sizeof(struct bmic_sense_feature_page_header) != 4); - BUILD_BUG_ON(offsetof(struct bmic_sense_feature_page_header, - page_code) != 0); - BUILD_BUG_ON(offsetof(struct bmic_sense_feature_page_header, - subpage_code) != 1); - BUILD_BUG_ON(offsetof(struct bmic_sense_feature_page_header, - page_length) != 2); - - BUILD_BUG_ON(sizeof(struct bmic_sense_feature_io_page_aio_subpage) - != 18); - BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, - header) != 0); - BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, - firmware_read_support) != 4); - BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, - driver_read_support) != 5); - BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, - firmware_write_support) != 6); - BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, - driver_write_support) != 7); - BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, - max_transfer_encrypted_sas_sata) != 8); - BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, - max_transfer_encrypted_nvme) != 10); - BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, - max_write_raid_5_6) != 12); - BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, - max_write_raid_1_10_2drive) != 14); - BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, - max_write_raid_1_10_3drive) != 16); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_page_header, page_code) != 0); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_page_header, 
subpage_code) != 1); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_page_header, page_length) != 2); + + BUILD_BUG_ON(sizeof(struct bmic_sense_feature_io_page_aio_subpage) != 18); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, header) != 0); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, firmware_read_support) != 4); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, driver_read_support) != 5); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, firmware_write_support) != 6); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, driver_write_support) != 7); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, max_transfer_encrypted_sas_sata) != 8); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, max_transfer_encrypted_nvme) != 10); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, max_write_raid_5_6) != 12); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, max_write_raid_1_10_2drive) != 14); + BUILD_BUG_ON(offsetof(struct bmic_sense_feature_io_page_aio_subpage, max_write_raid_1_10_3drive) != 16); BUILD_BUG_ON(PQI_ADMIN_IQ_NUM_ELEMENTS > 255); BUILD_BUG_ON(PQI_ADMIN_OQ_NUM_ELEMENTS > 255); - BUILD_BUG_ON(PQI_ADMIN_IQ_ELEMENT_LENGTH % - PQI_QUEUE_ELEMENT_LENGTH_ALIGNMENT != 0); - BUILD_BUG_ON(PQI_ADMIN_OQ_ELEMENT_LENGTH % - PQI_QUEUE_ELEMENT_LENGTH_ALIGNMENT != 0); + BUILD_BUG_ON(PQI_ADMIN_IQ_ELEMENT_LENGTH % PQI_QUEUE_ELEMENT_LENGTH_ALIGNMENT != 0); + BUILD_BUG_ON(PQI_ADMIN_OQ_ELEMENT_LENGTH % PQI_QUEUE_ELEMENT_LENGTH_ALIGNMENT != 0); BUILD_BUG_ON(PQI_OPERATIONAL_IQ_ELEMENT_LENGTH > 1048560); - BUILD_BUG_ON(PQI_OPERATIONAL_IQ_ELEMENT_LENGTH % - PQI_QUEUE_ELEMENT_LENGTH_ALIGNMENT != 0); + BUILD_BUG_ON(PQI_OPERATIONAL_IQ_ELEMENT_LENGTH % PQI_QUEUE_ELEMENT_LENGTH_ALIGNMENT != 0); BUILD_BUG_ON(PQI_OPERATIONAL_OQ_ELEMENT_LENGTH > 1048560); - 
BUILD_BUG_ON(PQI_OPERATIONAL_OQ_ELEMENT_LENGTH % - PQI_QUEUE_ELEMENT_LENGTH_ALIGNMENT != 0); + BUILD_BUG_ON(PQI_OPERATIONAL_OQ_ELEMENT_LENGTH % PQI_QUEUE_ELEMENT_LENGTH_ALIGNMENT != 0); BUILD_BUG_ON(PQI_RESERVED_IO_SLOTS >= PQI_MAX_OUTSTANDING_REQUESTS); - BUILD_BUG_ON(PQI_RESERVED_IO_SLOTS >= - PQI_MAX_OUTSTANDING_REQUESTS_KDUMP); + BUILD_BUG_ON(PQI_RESERVED_IO_SLOTS >= PQI_MAX_OUTSTANDING_REQUESTS_KDUMP); } From patchwork Thu Aug 17 13:12:25 2023 X-Patchwork-Submitter: Don Brace X-Patchwork-Id: 715344 From: Don Brace Subject: [PATCH 2/9] smartpqi: add abort handler Date: Thu, 17 Aug 2023 08:12:25 -0500 Message-ID: <20230817131232.86754-3-don.brace@microchip.com> In-Reply-To: <20230817131232.86754-1-don.brace@microchip.com> References: <20230817131232.86754-1-don.brace@microchip.com> List-ID: X-Mailing-List: linux-scsi@vger.kernel.org From: Kevin Barnett Implement aborts as resets. Avoid I/O stalls across all devices attached to a controller when device I/O requests time out. 
Reviewed-by: Mahesh Rajashekhara Reviewed-by: Scott Teel Reviewed-by: Scott Benesh Reviewed-by: Mike McGowen Signed-off-by: Kevin Barnett Signed-off-by: Don Brace --- drivers/scsi/smartpqi/smartpqi.h | 14 ++- drivers/scsi/smartpqi/smartpqi_init.c | 171 ++++++++++++++++++++------ 2 files changed, 149 insertions(+), 36 deletions(-) diff --git a/drivers/scsi/smartpqi/smartpqi.h b/drivers/scsi/smartpqi/smartpqi.h index e392eaf5b2bf..e560d99efa95 100644 --- a/drivers/scsi/smartpqi/smartpqi.h +++ b/drivers/scsi/smartpqi/smartpqi.h @@ -1085,7 +1085,16 @@ struct pqi_stream_data { u32 last_accessed; }; -#define PQI_MAX_LUNS_PER_DEVICE 256 +#define PQI_MAX_LUNS_PER_DEVICE 256 + +struct pqi_tmf_work { + struct work_struct work_struct; + struct scsi_cmnd *scmd; + struct pqi_ctrl_info *ctrl_info; + struct pqi_scsi_dev *device; + u8 lun; + u8 scsi_opcode; +}; struct pqi_scsi_dev { int devtype; /* as reported by INQUIRY command */ @@ -1111,6 +1120,7 @@ struct pqi_scsi_dev { u8 erase_in_progress : 1; bool aio_enabled; /* only valid for physical disks */ bool in_remove; + bool in_reset[PQI_MAX_LUNS_PER_DEVICE]; bool device_offline; u8 vendor[8]; /* bytes 8-15 of inquiry data */ u8 model[16]; /* bytes 16-31 of inquiry data */ @@ -1149,6 +1159,8 @@ struct pqi_scsi_dev { struct pqi_stream_data stream_data[NUM_STREAMS_PER_LUN]; atomic_t scsi_cmds_outstanding[PQI_MAX_LUNS_PER_DEVICE]; unsigned int raid_bypass_cnt; + + struct pqi_tmf_work tmf_work[PQI_MAX_LUNS_PER_DEVICE]; }; /* VPD inquiry pages */ diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c index 4486259f85ab..ec36896eb08e 100644 --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c @@ -48,6 +48,8 @@ #define PQI_POST_RESET_DELAY_SECS 5 #define PQI_POST_OFA_RESET_DELAY_UPON_TIMEOUT_SECS 10 +#define PQI_NO_COMPLETION ((void *)-1) + MODULE_AUTHOR("Microchip"); MODULE_DESCRIPTION("Driver for Microchip Smart Family Controller version " DRIVER_VERSION); @@ -96,6 
+98,7 @@ static int pqi_ofa_host_memory_update(struct pqi_ctrl_info *ctrl_info); static int pqi_device_wait_for_pending_io(struct pqi_ctrl_info *ctrl_info, struct pqi_scsi_dev *device, u8 lun, unsigned long timeout_msecs); static void pqi_fail_all_outstanding_requests(struct pqi_ctrl_info *ctrl_info); +static void pqi_tmf_worker(struct work_struct *work); /* for flags argument to pqi_submit_raid_request_synchronous() */ #define PQI_SYNC_FLAGS_INTERRUPTABLE 0x1 @@ -455,6 +458,21 @@ static inline bool pqi_device_in_remove(struct pqi_scsi_dev *device) return device->in_remove; } +static inline void pqi_device_reset_start(struct pqi_scsi_dev *device, u8 lun) +{ + device->in_reset[lun] = true; +} + +static inline void pqi_device_reset_done(struct pqi_scsi_dev *device, u8 lun) +{ + device->in_reset[lun] = false; +} + +static inline bool pqi_device_in_reset(struct pqi_scsi_dev *device, u8 lun) +{ + return device->in_reset[lun]; +} + static inline int pqi_event_type_to_event_index(unsigned int event_type) { int index; @@ -2122,6 +2140,15 @@ static inline bool pqi_is_device_added(struct pqi_scsi_dev *device) return device->sdev != NULL; } +static inline void pqi_init_device_tmf_work(struct pqi_scsi_dev *device) +{ + unsigned int lun; + struct pqi_tmf_work *tmf_work; + + for (lun = 0, tmf_work = device->tmf_work; lun < PQI_MAX_LUNS_PER_DEVICE; lun++, tmf_work++) + INIT_WORK(&tmf_work->work_struct, pqi_tmf_worker); +} + static void pqi_update_device_list(struct pqi_ctrl_info *ctrl_info, struct pqi_scsi_dev *new_device_list[], unsigned int num_new_devices) { @@ -2202,6 +2229,7 @@ static void pqi_update_device_list(struct pqi_ctrl_info *ctrl_info, list_add_tail(&device->add_list_entry, &add_list); /* To prevent this device structure from being freed later. 
*/ device->keep_device = true; + pqi_init_device_tmf_work(device); } spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags); @@ -5623,6 +5651,7 @@ static inline bool pqi_is_bypass_eligible_request(struct scsi_cmnd *scmd) void pqi_prep_for_scsi_done(struct scsi_cmnd *scmd) { struct pqi_scsi_dev *device; + struct completion *wait; if (!scmd->device) { set_host_byte(scmd, DID_NO_CONNECT); @@ -5636,6 +5665,10 @@ void pqi_prep_for_scsi_done(struct scsi_cmnd *scmd) } atomic_dec(&device->scsi_cmds_outstanding[scmd->device->lun]); + + wait = (struct completion *)xchg(&scmd->host_scribble, NULL); + if (wait != PQI_NO_COMPLETION) + complete(wait); } static bool pqi_is_parity_write_stream(struct pqi_ctrl_info *ctrl_info, @@ -5719,6 +5752,9 @@ static int pqi_scsi_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scm u16 hw_queue; struct pqi_queue_group *queue_group; bool raid_bypassed; + u8 lun; + + scmd->host_scribble = PQI_NO_COMPLETION; device = scmd->device->hostdata; @@ -5728,7 +5764,9 @@ static int pqi_scsi_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scm return 0; } - atomic_inc(&device->scsi_cmds_outstanding[scmd->device->lun]); + lun = (u8)scmd->device->lun; + + atomic_inc(&device->scsi_cmds_outstanding[lun]); ctrl_info = shost_to_hba(shost); @@ -5738,7 +5776,7 @@ static int pqi_scsi_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scm return 0; } - if (pqi_ctrl_blocked(ctrl_info)) { + if (pqi_ctrl_blocked(ctrl_info) || pqi_device_in_reset(device, lun)) { rc = SCSI_MLQUEUE_HOST_BUSY; goto out; } @@ -5773,8 +5811,10 @@ static int pqi_scsi_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scm } out: - if (rc) - atomic_dec(&device->scsi_cmds_outstanding[scmd->device->lun]); + if (rc) { + scmd->host_scribble = NULL; + atomic_dec(&device->scsi_cmds_outstanding[lun]); + } return rc; } @@ -5868,7 +5908,7 @@ static int pqi_wait_until_inbound_queues_empty(struct pqi_ctrl_info *ctrl_info) } static void 
pqi_fail_io_queued_for_device(struct pqi_ctrl_info *ctrl_info, - struct pqi_scsi_dev *device) + struct pqi_scsi_dev *device, u8 lun) { unsigned int i; unsigned int path; @@ -5894,6 +5934,9 @@ static void pqi_fail_io_queued_for_device(struct pqi_ctrl_info *ctrl_info, if (scsi_device != device) continue; + if ((u8)scmd->device->lun != lun) + continue; + list_del(&io_request->request_list_entry); set_host_byte(scmd, DID_RESET); pqi_free_io_request(io_request); @@ -5990,15 +6033,13 @@ static int pqi_wait_for_lun_reset_completion(struct pqi_ctrl_info *ctrl_info, #define PQI_LUN_RESET_FIRMWARE_TIMEOUT_SECS 30 -static int pqi_lun_reset(struct pqi_ctrl_info *ctrl_info, struct scsi_cmnd *scmd) +static int pqi_lun_reset(struct pqi_ctrl_info *ctrl_info, struct pqi_scsi_dev *device, u8 lun) { int rc; struct pqi_io_request *io_request; DECLARE_COMPLETION_ONSTACK(wait); struct pqi_task_management_request *request; - struct pqi_scsi_dev *device; - device = scmd->device->hostdata; io_request = pqi_alloc_io_request(ctrl_info, NULL); io_request->io_complete_callback = pqi_lun_reset_complete; io_request->context = &wait; @@ -6011,14 +6052,14 @@ static int pqi_lun_reset(struct pqi_ctrl_info *ctrl_info, struct scsi_cmnd *scmd put_unaligned_le16(io_request->index, &request->request_id); memcpy(request->lun_number, device->scsi3addr, sizeof(request->lun_number)); if (!pqi_is_logical_device(device) && ctrl_info->multi_lun_device_supported) - request->ml_device_lun_number = (u8)scmd->device->lun; + request->ml_device_lun_number = lun; request->task_management_function = SOP_TASK_MANAGEMENT_LUN_RESET; if (ctrl_info->tmf_iu_timeout_supported) put_unaligned_le16(PQI_LUN_RESET_FIRMWARE_TIMEOUT_SECS, &request->timeout); pqi_start_io(ctrl_info, &ctrl_info->queue_groups[PQI_DEFAULT_QUEUE_GROUP], RAID_PATH, io_request); - rc = pqi_wait_for_lun_reset_completion(ctrl_info, device, (u8)scmd->device->lun, &wait); + rc = pqi_wait_for_lun_reset_completion(ctrl_info, device, lun, &wait); if (rc == 0) rc 
= io_request->status; @@ -6032,18 +6073,16 @@ static int pqi_lun_reset(struct pqi_ctrl_info *ctrl_info, struct scsi_cmnd *scmd #define PQI_LUN_RESET_PENDING_IO_TIMEOUT_MSECS (10 * 60 * 1000) #define PQI_LUN_RESET_FAILED_PENDING_IO_TIMEOUT_MSECS (2 * 60 * 1000) -static int pqi_lun_reset_with_retries(struct pqi_ctrl_info *ctrl_info, struct scsi_cmnd *scmd) +static int pqi_lun_reset_with_retries(struct pqi_ctrl_info *ctrl_info, struct pqi_scsi_dev *device, u8 lun) { int reset_rc; int wait_rc; unsigned int retries; unsigned long timeout_msecs; - struct pqi_scsi_dev *device; - device = scmd->device->hostdata; for (retries = 0;;) { - reset_rc = pqi_lun_reset(ctrl_info, scmd); - if (reset_rc == 0 || reset_rc == -ENODEV || ++retries > PQI_LUN_RESET_RETRIES) + reset_rc = pqi_lun_reset(ctrl_info, device, lun); + if (reset_rc == 0 || reset_rc == -ENODEV || reset_rc == -ENXIO || ++retries > PQI_LUN_RESET_RETRIES) break; msleep(PQI_LUN_RESET_RETRY_INTERVAL_MSECS); } @@ -6051,60 +6090,53 @@ static int pqi_lun_reset_with_retries(struct pqi_ctrl_info *ctrl_info, struct sc timeout_msecs = reset_rc ? PQI_LUN_RESET_FAILED_PENDING_IO_TIMEOUT_MSECS : PQI_LUN_RESET_PENDING_IO_TIMEOUT_MSECS; - wait_rc = pqi_device_wait_for_pending_io(ctrl_info, device, scmd->device->lun, timeout_msecs); + wait_rc = pqi_device_wait_for_pending_io(ctrl_info, device, lun, timeout_msecs); if (wait_rc && reset_rc == 0) reset_rc = wait_rc; return reset_rc == 0 ? 
SUCCESS : FAILED; } -static int pqi_device_reset(struct pqi_ctrl_info *ctrl_info, struct scsi_cmnd *scmd) +static int pqi_device_reset(struct pqi_ctrl_info *ctrl_info, struct pqi_scsi_dev *device, u8 lun) { int rc; - struct pqi_scsi_dev *device; - device = scmd->device->hostdata; pqi_ctrl_block_requests(ctrl_info); pqi_ctrl_wait_until_quiesced(ctrl_info); - pqi_fail_io_queued_for_device(ctrl_info, device); + pqi_fail_io_queued_for_device(ctrl_info, device, lun); rc = pqi_wait_until_inbound_queues_empty(ctrl_info); + pqi_device_reset_start(device, lun); + pqi_ctrl_unblock_requests(ctrl_info); if (rc) rc = FAILED; else - rc = pqi_lun_reset_with_retries(ctrl_info, scmd); - pqi_ctrl_unblock_requests(ctrl_info); + rc = pqi_lun_reset_with_retries(ctrl_info, device, lun); + pqi_device_reset_done(device, lun); return rc; } -static int pqi_eh_device_reset_handler(struct scsi_cmnd *scmd) +static int pqi_device_reset_handler(struct pqi_ctrl_info *ctrl_info, struct pqi_scsi_dev *device, u8 lun, struct scsi_cmnd *scmd, u8 scsi_opcode) { int rc; - struct Scsi_Host *shost; - struct pqi_ctrl_info *ctrl_info; - struct pqi_scsi_dev *device; - - shost = scmd->device->host; - ctrl_info = shost_to_hba(shost); - device = scmd->device->hostdata; mutex_lock(&ctrl_info->lun_reset_mutex); dev_err(&ctrl_info->pci_dev->dev, "resetting scsi %d:%d:%d:%d due to cmd 0x%02x\n", - shost->host_no, - device->bus, device->target, (u32)scmd->device->lun, + ctrl_info->scsi_host->host_no, + device->bus, device->target, lun, scmd->cmd_len > 0 ? scmd->cmnd[0] : 0xff); pqi_check_ctrl_health(ctrl_info); if (pqi_ctrl_offline(ctrl_info)) rc = FAILED; else - rc = pqi_device_reset(ctrl_info, scmd); + rc = pqi_device_reset(ctrl_info, device, lun); dev_err(&ctrl_info->pci_dev->dev, - "reset of scsi %d:%d:%d:%d: %s\n", - shost->host_no, device->bus, device->target, (u32)scmd->device->lun, + "reset of scsi %d:%d:%d:%u: %s\n", + ctrl_info->scsi_host->host_no, device->bus, device->target, lun, rc == SUCCESS ? 
"SUCCESS" : "FAILED"); mutex_unlock(&ctrl_info->lun_reset_mutex); @@ -6112,6 +6144,74 @@ static int pqi_eh_device_reset_handler(struct scsi_cmnd *scmd) return rc; } +static int pqi_eh_device_reset_handler(struct scsi_cmnd *scmd) +{ + struct Scsi_Host *shost; + struct pqi_ctrl_info *ctrl_info; + struct pqi_scsi_dev *device; + u8 scsi_opcode; + + shost = scmd->device->host; + ctrl_info = shost_to_hba(shost); + device = scmd->device->hostdata; + scsi_opcode = scmd->cmd_len > 0 ? scmd->cmnd[0] : 0xff; + + return pqi_device_reset_handler(ctrl_info, device, (u8)scmd->device->lun, scmd, scsi_opcode); +} + +static void pqi_tmf_worker(struct work_struct *work) +{ + struct pqi_tmf_work *tmf_work; + struct scsi_cmnd *scmd; + + tmf_work = container_of(work, struct pqi_tmf_work, work_struct); + scmd = (struct scsi_cmnd *)xchg(&tmf_work->scmd, NULL); + + pqi_device_reset_handler(tmf_work->ctrl_info, tmf_work->device, tmf_work->lun, scmd, tmf_work->scsi_opcode); +} + +static int pqi_eh_abort_handler(struct scsi_cmnd *scmd) +{ + struct Scsi_Host *shost; + struct pqi_ctrl_info *ctrl_info; + struct pqi_scsi_dev *device; + struct pqi_tmf_work *tmf_work; + DECLARE_COMPLETION_ONSTACK(wait); + + shost = scmd->device->host; + ctrl_info = shost_to_hba(shost); + + dev_err(&ctrl_info->pci_dev->dev, + "attempting TASK ABORT on SCSI cmd at %p\n", scmd); + + if (cmpxchg(&scmd->host_scribble, PQI_NO_COMPLETION, (void *)&wait) == NULL) { + dev_err(&ctrl_info->pci_dev->dev, + "SCSI cmd at %p already completed\n", scmd); + scmd->result = DID_RESET << 16; + goto out; + } + + device = scmd->device->hostdata; + tmf_work = &device->tmf_work[scmd->device->lun]; + + if (cmpxchg(&tmf_work->scmd, NULL, scmd) == NULL) { + tmf_work->ctrl_info = ctrl_info; + tmf_work->device = device; + tmf_work->lun = (u8)scmd->device->lun; + tmf_work->scsi_opcode = scmd->cmd_len > 0 ? 
scmd->cmnd[0] : 0xff; + schedule_work(&tmf_work->work_struct); + } + + wait_for_completion(&wait); + + dev_err(&ctrl_info->pci_dev->dev, + "TASK ABORT on SCSI cmd at %p: SUCCESS\n", scmd); + +out: + + return SUCCESS; +} + static int pqi_slave_alloc(struct scsi_device *sdev) { struct pqi_scsi_dev *device; @@ -7106,6 +7206,7 @@ static const struct scsi_host_template pqi_driver_template = { .scan_finished = pqi_scan_finished, .this_id = -1, .eh_device_reset_handler = pqi_eh_device_reset_handler, + .eh_abort_handler = pqi_eh_abort_handler, .ioctl = pqi_ioctl, .slave_alloc = pqi_slave_alloc, .slave_configure = pqi_slave_configure,

From patchwork Thu Aug 17 13:12:26 2023
From: Don Brace
Subject: [PATCH 3/9] smartpqi: refactor rename MACRO to clarify purpose
Date: Thu, 17 Aug 2023 08:12:26 -0500
Message-ID: <20230817131232.86754-4-don.brace@microchip.com>
In-Reply-To: <20230817131232.86754-1-don.brace@microchip.com>

From: Kevin Barnett

Rename SOP_RC_INCORRECT_LOGICAL_UNIT to SOP_TMF_INCORRECT_LOGICAL_UNIT to clarify the intended purpose.
Reviewed-by: Mahesh Rajashekhara Reviewed-by: Scott Teel Reviewed-by: Scott Benesh Reviewed-by: Mike McGowen Signed-off-by: Kevin Barnett Signed-off-by: Don Brace --- drivers/scsi/smartpqi/smartpqi.h | 2 +- drivers/scsi/smartpqi/smartpqi_init.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/smartpqi/smartpqi.h b/drivers/scsi/smartpqi/smartpqi.h index e560d99efa95..041940183516 100644 --- a/drivers/scsi/smartpqi/smartpqi.h +++ b/drivers/scsi/smartpqi/smartpqi.h @@ -710,7 +710,7 @@ typedef u32 pqi_index_t; #define SOP_TMF_COMPLETE 0x0 #define SOP_TMF_REJECTED 0x4 #define SOP_TMF_FUNCTION_SUCCEEDED 0x8 -#define SOP_RC_INCORRECT_LOGICAL_UNIT 0x9 +#define SOP_TMF_INCORRECT_LOGICAL_UNIT 0x9 /* additional CDB bytes usage field codes */ #define SOP_ADDITIONAL_CDB_BYTES_0 0 /* 16-byte CDB */ diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c index ec36896eb08e..2d695e7cd83f 100644 --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c @@ -3308,7 +3308,7 @@ static int pqi_interpret_task_management_response(struct pqi_ctrl_inf case SOP_TMF_REJECTED: rc = -EAGAIN; break; - case SOP_RC_INCORRECT_LOGICAL_UNIT: + case SOP_TMF_INCORRECT_LOGICAL_UNIT: rc = -ENODEV; break; default:

From patchwork Thu Aug 17 13:12:27 2023
From: Don Brace
Subject: [PATCH 4/9] smartpqi: refactor rename pciinfo to pci_info
Date: Thu, 17 Aug 2023 08:12:27 -0500
Message-ID: <20230817131232.86754-5-don.brace@microchip.com>
In-Reply-To: <20230817131232.86754-1-don.brace@microchip.com>
References: <20230817131232.86754-1-don.brace@microchip.com>

From: Kevin Barnett

Make pci device structure names consistent and readable.

Reviewed-by: Scott Benesh Reviewed-by: Mike McGowen Signed-off-by: Kevin Barnett Signed-off-by: Don Brace --- drivers/scsi/smartpqi/smartpqi_init.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c index 2d695e7cd83f..dedc721b007b 100644 --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c @@ -6332,21 +6332,21 @@ static int pqi_getpciinfo_ioctl(struct pqi_ctrl_info *ctrl_info, void __user *ar struct pci_dev *pci_dev; u32 subsystem_vendor; u32 subsystem_device; - cciss_pci_info_struct pciinfo; + cciss_pci_info_struct pci_info; if (!arg) return -EINVAL; pci_dev = ctrl_info->pci_dev; - pciinfo.domain = pci_domain_nr(pci_dev->bus); - pciinfo.bus = pci_dev->bus->number; - pciinfo.dev_fn = pci_dev->devfn; + pci_info.domain = pci_domain_nr(pci_dev->bus); + pci_info.bus = pci_dev->bus->number; + pci_info.dev_fn = pci_dev->devfn; subsystem_vendor = pci_dev->subsystem_vendor; subsystem_device = pci_dev->subsystem_device; - pciinfo.board_id = ((subsystem_device << 16) & 0xffff0000) | subsystem_vendor; + pci_info.board_id = ((subsystem_device << 16) & 0xffff0000) | subsystem_vendor; - if (copy_to_user(arg, &pciinfo, sizeof(pciinfo))) + if (copy_to_user(arg, &pci_info, sizeof(pci_info))) return -EFAULT; return 0;

From patchwork Thu Aug 17 13:12:28 2023
From: Don Brace
Subject: [PATCH 5/9] smartpqi: simplify lun_number assignment
Date: Thu, 17 Aug 2023 08:12:28 -0500
Message-ID: <20230817131232.86754-6-don.brace@microchip.com>
In-Reply-To: <20230817131232.86754-1-don.brace@microchip.com>

From: David Strahan

Simplify lun_number assignment. lun_number assignment is only required for non-AIO requests.

Reviewed-by: Scott Benesh Reviewed-by: Mike McGowen Reviewed-by: Kevin Barnett Signed-off-by: David Strahan Signed-off-by: Don Brace --- drivers/scsi/smartpqi/smartpqi_init.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c index dedc721b007b..e3ce5b738e15 100644 --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c @@ -5429,7 +5429,6 @@ static int pqi_aio_submit_io(struct pqi_ctrl_info *ctrl_info, int rc; struct pqi_io_request *io_request; struct pqi_aio_path_request *request; - struct pqi_scsi_dev *device; io_request = pqi_alloc_io_request(ctrl_info, scmd); if (!io_request) @@ -5449,9 +5448,8 @@ static int pqi_aio_submit_io(struct pqi_ctrl_info *ctrl_info, request->command_priority = io_high_prio; put_unaligned_le16(io_request->index, &request->request_id); request->error_index = request->request_id; - device = scmd->device->hostdata; - if (!pqi_is_logical_device(device) && ctrl_info->multi_lun_device_supported) - put_unaligned_le64(((scmd->device->lun) << 8), &request->lun_number); + if (!raid_bypass && ctrl_info->multi_lun_device_supported) + put_unaligned_le64(scmd->device->lun << 8, &request->lun_number); if (cdb_length > sizeof(request->cdb)) cdb_length = sizeof(request->cdb); request->cdb_length = cdb_length;

From patchwork Thu Aug 17 13:12:29 2023
From: Don Brace
Subject: [PATCH 6/9] smartpqi: enhance shutdown notification
Date: Thu, 17 Aug 2023 08:12:29 -0500
Message-ID: <20230817131232.86754-7-don.brace@microchip.com>
In-Reply-To: <20230817131232.86754-1-don.brace@microchip.com>

From: David Strahan

Provide more detailed information about cache flush errors during shutdown.

Reviewed-by: Mahesh Rajashekhara Reviewed-by: Scott Benesh Reviewed-by: Scott Teel Reviewed-by: Mike McGowen Reviewed-by: Kevin Barnett Signed-off-by: David Strahan Signed-off-by: Don Brace --- drivers/scsi/smartpqi/smartpqi_init.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c index e3ce5b738e15..2612818d476d 100644 --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c @@ -8870,7 +8870,7 @@ static void pqi_shutdown(struct pci_dev *pci_dev) rc = pqi_flush_cache(ctrl_info, shutdown_event); if (rc) dev_err(&pci_dev->dev, - "unable to flush controller cache\n"); + "unable to flush controller cache during shutdown\n"); pqi_crash_if_pending_command(ctrl_info); pqi_reset(ctrl_info);

From patchwork Thu Aug 17 13:12:30 2023
From: Don Brace
Subject: [PATCH 7/9] smartpqi: enhance controller offline notification
Date: Thu, 17 Aug 2023 08:12:30 -0500
Message-ID: <20230817131232.86754-8-don.brace@microchip.com>
In-Reply-To: <20230817131232.86754-1-don.brace@microchip.com>

From: David Strahan

Add a description for the reason the controller has been taken off-line.

Reviewed-by: Scott Benesh Reviewed-by: Scott Teel Reviewed-by: Mike McGowen Signed-off-by: David Strahan Signed-off-by: Don Brace --- drivers/scsi/smartpqi/smartpqi_init.c | 50 ++++++++++++++++++++++++++- 1 file changed, 49 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c index 2612818d476d..a497a35431b2 100644 --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c @@ -8712,6 +8712,52 @@ static void pqi_ctrl_offline_worker(struct work_struct *work) pqi_take_ctrl_offline_deferred(ctrl_info); } +static char *pqi_ctrl_shutdown_reason_to_string(enum pqi_ctrl_shutdown_reason ctrl_shutdown_reason) +{ + char *string; + + switch (ctrl_shutdown_reason) { + case PQI_IQ_NOT_DRAINED_TIMEOUT: + string = "inbound queue not drained timeout"; + break; + case PQI_LUN_RESET_TIMEOUT: + string = "LUN reset timeout"; + break; + case PQI_IO_PENDING_POST_LUN_RESET_TIMEOUT: + string = "I/O pending timeout after LUN reset"; + break; + case PQI_NO_HEARTBEAT: + string = "no controller heartbeat detected"; + break; + case PQI_FIRMWARE_KERNEL_NOT_UP: + string = "firmware kernel not ready"; + break; + case PQI_OFA_RESPONSE_TIMEOUT: + string = "OFA response timeout"; + break; + case PQI_INVALID_REQ_ID: + string = "invalid request ID"; + break; + case PQI_UNMATCHED_REQ_ID: + string = "unmatched request ID"; + break; + case
PQI_IO_PI_OUT_OF_RANGE: + string = "I/O queue producer index out of range"; + break; + case PQI_EVENT_PI_OUT_OF_RANGE: + string = "event queue producer index out of range"; + break; + case PQI_UNEXPECTED_IU_TYPE: + string = "unexpected IU type"; + break; + default: + string = "unknown reason"; + break; + } + + return string; +} + static void pqi_take_ctrl_offline(struct pqi_ctrl_info *ctrl_info, enum pqi_ctrl_shutdown_reason ctrl_shutdown_reason) { @@ -8724,7 +8770,9 @@ static void pqi_take_ctrl_offline(struct pqi_ctrl_info *ctrl_info, if (!pqi_disable_ctrl_shutdown) sis_shutdown_ctrl(ctrl_info, ctrl_shutdown_reason); pci_disable_device(ctrl_info->pci_dev); - dev_err(&ctrl_info->pci_dev->dev, "controller offline\n"); + dev_err(&ctrl_info->pci_dev->dev, + "controller offline: reason code 0x%x (%s)\n", + ctrl_shutdown_reason, pqi_ctrl_shutdown_reason_to_string(ctrl_shutdown_reason)); schedule_work(&ctrl_info->ctrl_offline_work); }

From patchwork Thu Aug 17 13:12:31 2023
From: Don Brace
Subject: [PATCH 8/9] smartpqi: enhance error messages
Date: Thu, 17 Aug 2023 08:12:31 -0500
Message-ID: <20230817131232.86754-9-don.brace@microchip.com>
In-Reply-To: <20230817131232.86754-1-don.brace@microchip.com>
References: <20230817131232.86754-1-don.brace@microchip.com>
X-Mailing-List: linux-scsi@vger.kernel.org

From: Mahesh Rajashekhara

Add more detail to some TMF messages.
Reviewed-by: Scott Benesh
Reviewed-by: Mike McGowen
Reviewed-by: Kevin Barnett
Signed-off-by: Mahesh Rajashekhara
Signed-off-by: Don Brace
---
 drivers/scsi/smartpqi/smartpqi_init.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
index a497a35431b2..cab44f1f6256 100644
--- a/drivers/scsi/smartpqi/smartpqi_init.c
+++ b/drivers/scsi/smartpqi/smartpqi_init.c
@@ -6121,10 +6121,8 @@ static int pqi_device_reset_handler(struct pqi_ctrl_info *ctrl_info, struct pqi_
 	mutex_lock(&ctrl_info->lun_reset_mutex);
 
 	dev_err(&ctrl_info->pci_dev->dev,
-		"resetting scsi %d:%d:%d:%d due to cmd 0x%02x\n",
-		ctrl_info->scsi_host->host_no,
-		device->bus, device->target, lun,
-		scmd->cmd_len > 0 ? scmd->cmnd[0] : 0xff);
+		"resetting scsi %d:%d:%d:%u SCSI cmd at %p due to cmd opcode 0x%02x\n",
+		ctrl_info->scsi_host->host_no, device->bus, device->target, lun, scmd, scsi_opcode);
 
 	pqi_check_ctrl_health(ctrl_info);
 	if (pqi_ctrl_offline(ctrl_info))
@@ -6178,18 +6176,20 @@ static int pqi_eh_abort_handler(struct scsi_cmnd *scmd)
 
 	shost = scmd->device->host;
 	ctrl_info = shost_to_hba(shost);
+	device = scmd->device->hostdata;
 
 	dev_err(&ctrl_info->pci_dev->dev,
-		"attempting TASK ABORT on SCSI cmd at %p\n", scmd);
+		"attempting TASK ABORT on scsi %d:%d:%d:%d for SCSI cmd at %p\n",
+		shost->host_no, device->bus, device->target, (int)scmd->device->lun, scmd);
 
 	if (cmpxchg(&scmd->host_scribble, PQI_NO_COMPLETION, (void *)&wait) == NULL) {
 		dev_err(&ctrl_info->pci_dev->dev,
-			"SCSI cmd at %p already completed\n", scmd);
+			"scsi %d:%d:%d:%d for SCSI cmd at %p already completed\n",
+			shost->host_no, device->bus, device->target, (int)scmd->device->lun, scmd);
 		scmd->result = DID_RESET << 16;
 		goto out;
 	}
 
-	device = scmd->device->hostdata;
 	tmf_work = &device->tmf_work[scmd->device->lun];
 
 	if (cmpxchg(&tmf_work->scmd, NULL, scmd) == NULL) {
@@ -6203,7 +6203,8 @@ static int pqi_eh_abort_handler(struct scsi_cmnd *scmd)
 	wait_for_completion(&wait);
 
 	dev_err(&ctrl_info->pci_dev->dev,
-		"TASK ABORT on SCSI cmd at %p: SUCCESS\n", scmd);
+		"TASK ABORT on scsi %d:%d:%d:%d for SCSI cmd at %p: SUCCESS\n",
+		shost->host_no, device->bus, device->target, (int)scmd->device->lun, scmd);
 
 out:

From patchwork Thu Aug 17 13:12:32 2023
X-Patchwork-Submitter: Don Brace
X-Patchwork-Id: 714534
From: Don Brace
Subject: [PATCH 9/9] smartpqi: change driver version to 2.1.24-046
Date: Thu, 17 Aug 2023 08:12:32 -0500
Message-ID: <20230817131232.86754-10-don.brace@microchip.com>
In-Reply-To: <20230817131232.86754-1-don.brace@microchip.com>
References: <20230817131232.86754-1-don.brace@microchip.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Reviewed-by: Gerry Morong
Reviewed-by: Scott Benesh
Reviewed-by: Scott Teel
Reviewed-by: Mike McGowen
Reviewed-by: Kevin Barnett
Signed-off-by: Don Brace
---
 drivers/scsi/smartpqi/smartpqi_init.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
index cab44f1f6256..74359e260ef5 100644
--- a/drivers/scsi/smartpqi/smartpqi_init.c
+++ b/drivers/scsi/smartpqi/smartpqi_init.c
@@ -33,11 +33,11 @@
 #define BUILD_TIMESTAMP
 #endif
 
-#define DRIVER_VERSION		"2.1.22-040"
+#define DRIVER_VERSION		"2.1.24-046"
 #define DRIVER_MAJOR		2
 #define DRIVER_MINOR		1
-#define DRIVER_RELEASE		22
-#define DRIVER_REVISION		40
+#define DRIVER_RELEASE		24
+#define DRIVER_REVISION		46
 
 #define DRIVER_NAME		"Microchip SmartPQI Driver (v" \
				DRIVER_VERSION BUILD_TIMESTAMP ")"