
[00/24] Introducing mpi3mr driver

Message ID 20201222101156.98308-1-kashyap.desai@broadcom.com
Series: Introducing mpi3mr driver

Message

Kashyap Desai Dec. 22, 2020, 10:11 a.m. UTC
This patch series covers the logically split patches of the new device driver
for the MPI3MR high-performance storage I/O & RAID controllers (Avenger
series). The mpi3mr driver has true multiple hardware queue interfacing, like
NVMe.

See more info -
https://www.spinics.net/lists/linux-scsi/msg147868.html

The controllers managed by the mpi3mr driver are capable of reaching very
high performance numbers compared to existing controllers due to the new
hardware architecture. This driver has been tested with internal versions of
the MPI3MR I/O & RAID controllers.

The patches are split logically, mainly for easier code review. The full patch
set is required for functional stability of this new driver.

You can find the source at - https://github.com/kadesai16/mpi3mr_v1


Kashyap Desai (24):
  mpi3mr: add mpi30 Rev-R headers and Kconfig
  mpi3mr: base driver code
  mpi3mr: create operational request and reply queue pair
  mpi3mr: add support of queue command processing
  mpi3mr: add support of internal watchdog thread
  mpi3mr: add support of event handling part-1
  mpi3mr: add support of event handling pcie devices part-2
  mpi3mr: add support of event handling part-3
  mpi3mr: add support for recovering controller
  mpi3mr: add support of timestamp sync with firmware
  mpi3mr: print ioc info for debugging
  mpi3mr: add bios_param shost template hook
  mpi3mr: implement scsi error handler hooks
  mpi3mr: add change queue depth support
  mpi3mr: allow certain commands during pci-remove hook
  mpi3mr: hardware workaround for UNMAP commands to nvme drives
  mpi3mr: add support of threaded isr
  mpi3mr: add complete support of soft reset
  mpi3mr: print pending host ios for debug
  mpi3mr: wait for pending IO completions upon detection of VD IO
    timeout
  mpi3mr: add support of PM suspend and resume
  mpi3mr: add support of DSN secure fw check
  mpi3mr: add eedp dif dix support
  mpi3mr: add event handling debug prints

 drivers/scsi/Kconfig                      |    1 +
 drivers/scsi/Makefile                     |    1 +
 drivers/scsi/mpi3mr/Kconfig               |    7 +
 drivers/scsi/mpi3mr/Makefile              |    4 +
 drivers/scsi/mpi3mr/mpi/mpi30_api.h       |   23 +
 drivers/scsi/mpi3mr/mpi/mpi30_cnfg.h      | 2721 ++++++++++++++
 drivers/scsi/mpi3mr/mpi/mpi30_image.h     |  285 ++
 drivers/scsi/mpi3mr/mpi/mpi30_init.h      |  216 ++
 drivers/scsi/mpi3mr/mpi/mpi30_ioc.h       | 1423 +++++++
 drivers/scsi/mpi3mr/mpi/mpi30_sas.h       |   46 +
 drivers/scsi/mpi3mr/mpi/mpi30_transport.h |  675 ++++
 drivers/scsi/mpi3mr/mpi/mpi30_type.h      |   89 +
 drivers/scsi/mpi3mr/mpi3mr.h              |  906 +++++
 drivers/scsi/mpi3mr/mpi3mr_debug.h        |   60 +
 drivers/scsi/mpi3mr/mpi3mr_fw.c           | 3944 ++++++++++++++++++++
 drivers/scsi/mpi3mr/mpi3mr_os.c           | 4148 +++++++++++++++++++++
 16 files changed, 14549 insertions(+)
 create mode 100644 drivers/scsi/mpi3mr/Kconfig
 create mode 100644 drivers/scsi/mpi3mr/Makefile
 create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_api.h
 create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_cnfg.h
 create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_image.h
 create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_init.h
 create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_ioc.h
 create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_sas.h
 create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_transport.h
 create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_type.h
 create mode 100644 drivers/scsi/mpi3mr/mpi3mr.h
 create mode 100644 drivers/scsi/mpi3mr/mpi3mr_debug.h
 create mode 100644 drivers/scsi/mpi3mr/mpi3mr_fw.c
 create mode 100644 drivers/scsi/mpi3mr/mpi3mr_os.c

Comments

Bart Van Assche Dec. 22, 2020, 6:18 p.m. UTC | #1
On 12/22/20 2:11 AM, Kashyap Desai wrote:
> This electronic communication and the information and any files
> transmitted with it, or attached to it, are confidential and are
> intended solely for the use of the individual or entity to whom it is
> addressed and may contain information that is confidential, legally
> privileged, protected by privacy laws, or otherwise restricted from
> disclosure to anyone else. If you are not the intended recipient or
> the person responsible for delivering the e-mail to the intended
> recipient, you are hereby notified that any use, copying,
> distributing, dissemination, forwarding, printing, or copying of
> this e-mail is strictly prohibited. If you received this e-mail in
> error, please return the e-mail to the sender, delete it from your
> computer, and destroy any printed copy of it.

Please make sure that no confidentiality footers are added when posting
to a public mailing list.

Thanks,

Bart.
Kashyap Desai Dec. 23, 2020, 1:16 p.m. UTC | #2
> -----Original Message-----
> From: Bart Van Assche [mailto:bvanassche@acm.org]
> Sent: Tuesday, December 22, 2020 11:49 PM
> To: Kashyap Desai <kashyap.desai@broadcom.com>; linux-
> scsi@vger.kernel.org
> Cc: jejb@linux.ibm.com; martin.petersen@oracle.com;
> steve.hagan@broadcom.com; peter.rivera@broadcom.com; mpi3mr-
> linuxdrv.pdl@broadcom.com; sathya.prakash@broadcom.com
> Subject: Re: [PATCH 03/24] mpi3mr: create operational request and reply
> queue pair
>
> On 12/22/20 2:11 AM, Kashyap Desai wrote:
> > This electronic communication and the information and any files
> > transmitted with it, or attached to it, are confidential and are
> > intended solely for the use of the individual or entity to whom it is
> > addressed and may contain information that is confidential, legally
> > privileged, protected by privacy laws, or otherwise restricted from
> > disclosure to anyone else. If you are not the intended recipient or
> > the person responsible for delivering the e-mail to the intended
> > recipient, you are hereby notified that any use, copying,
> > distributing, dissemination, forwarding, printing, or copying of this
> > e-mail is strictly prohibited. If you received this e-mail in error,
> > please return the e-mail to the sender, delete it from your computer,
> > and destroy any printed copy of it.
>
> Please make sure that no confidentiality footers are added when posting
> to a public mailing list.

Sorry for this. I will take care next time.

> Thanks,
>
> Bart.


-- 
This electronic communication and the information and any files transmitted 
with it, or attached to it, are confidential and are intended solely for 
the use of the individual or entity to whom it is addressed and may contain 
information that is confidential, legally privileged, protected by privacy 
laws, or otherwise restricted from disclosure to anyone else. If you are 
not the intended recipient or the person responsible for delivering the 
e-mail to the intended recipient, you are hereby notified that any use, 
copying, distributing, dissemination, forwarding, printing, or copying of 
this e-mail is strictly prohibited. If you received this e-mail in error, 
please return the e-mail to the sender, delete it from your computer, and 
destroy any printed copy of it.
Tomas Henzl Feb. 22, 2021, 3:31 p.m. UTC | #3
On 12/22/20 11:11 AM, Kashyap Desai wrote:
> Firmware can report various MPI Events.
> Support for certain Events (as listed below) are enabled in the driver
> and their processing in driver is covered in this patch.
> 
> MPI3_EVENT_DEVICE_ADDED
> MPI3_EVENT_DEVICE_INFO_CHANGED
> MPI3_EVENT_DEVICE_STATUS_CHANGE
> MPI3_EVENT_ENCL_DEVICE_STATUS_CHANGE
> MPI3_EVENT_SAS_TOPOLOGY_CHANGE_LIST
> MPI3_EVENT_SAS_DISCOVERY
> MPI3_EVENT_SAS_DEVICE_DISCOVERY_ERROR
> 
> Key support in this patch is device add/removal.
> 
> Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>
> Cc: sathya.prakash@broadcom.com
> ---
...
> + */
> +void mpi3mr_cleanup_fwevt_list(struct mpi3mr_ioc *mrioc)
> +{
> +	struct mpi3mr_fwevt *fwevt = NULL;
> +
> +	if ((list_empty(&mrioc->fwevt_list) && !mrioc->current_event) ||
> +	    !mrioc->fwevt_worker_thread || in_interrupt())

The in_interrupt() macro is deprecated and should not be used in new code.
Is it at all possible for mpi3mr_cleanup_fwevt_list() to be called from
interrupt context?
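
For illustration, a minimal sketch of one way to follow that advice, assuming
the function is only ever reached from process context (hypothetical rewrite,
not the posted patch): drop the in_interrupt() test and assert the expected
context instead.

/*
 * Hypothetical sketch (not the posted code): assume process context,
 * drop the in_interrupt() check and let might_sleep() catch misuse.
 */
void mpi3mr_cleanup_fwevt_list(struct mpi3mr_ioc *mrioc)
{
	struct mpi3mr_fwevt *fwevt = NULL;

	might_sleep();	/* cancel_work_sync() below can sleep anyway */

	if ((list_empty(&mrioc->fwevt_list) && !mrioc->current_event) ||
	    !mrioc->fwevt_worker_thread)
		return;

	while ((fwevt = mpi3mr_dequeue_fwevt(mrioc)) ||
	       (fwevt = mrioc->current_event)) {
		if (cancel_work_sync(&fwevt->work)) {
			/* drop the work's reference and the kref_init reference */
			mpi3mr_fwevt_put(fwevt);
			mpi3mr_fwevt_put(fwevt);
		}
	}
}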

> +		return;
> +
> +	while ((fwevt = mpi3mr_dequeue_fwevt(mrioc)) ||
> +	    (fwevt = mrioc->current_event)) {
> +		/*
> +		 * Wait on the fwevt to complete. If this returns 1, then
> +		 * the event was never executed, and we need a put for the
> +		 * reference the work had on the fwevt.
> +		 *
> +		 * If it did execute, we wait for it to finish, and the put will
> +		 * happen from mpi3mr_process_fwevt()
> +		 */
> +		if (cancel_work_sync(&fwevt->work)) {
> +			/*
> +			 * Put fwevt reference count after
> +			 * dequeuing it from worker queue
> +			 */
> +			mpi3mr_fwevt_put(fwevt);
> +			/*
> +			 * Put fwevt reference count to neutralize
> +			 * kref_init increment
> +			 */
> +			mpi3mr_fwevt_put(fwevt);
> +		}
> +	}
> +}
Tomas Henzl Feb. 22, 2021, 3:39 p.m. UTC | #4
On 12/22/20 11:11 AM, Kashyap Desai wrote:
> This patch series covers logical patches of the new device driver for the

> MPI3MR high performance storage I/O & RAID controllers (Avenger series).

> The mpi3mr has true multiple h/w queue interfacing like nvme.

> 

> See more info -

> https://www.spinics.net/lists/linux-scsi/msg147868.html

> 

> The controllers managed by the mpi3mr driver are capable of reaching a

> very high performance numbers compared to existing controller due to the

> new hardware architectures. This Driver is tested with the internal

> versions of the MPI3MR I/O & RAID controllers.

> 

> Patches are logical split mainly for better code review. Full patch set is

> required for functional stability of this new driver.

> 

> You can find the source at - https://github.com/kadesai16/mpi3mr_v1


This was posted months ago. If I may suggest, sort out the comments and
post a V2 of the set.
Cheers,
tomash
> 

> 

> Kashyap Desai (24):

>   mpi3mr: add mpi30 Rev-R headers and Kconfig

>   mpi3mr: base driver code

>   mpi3mr: create operational request and reply queue pair

>   mpi3mr: add support of queue command processing

>   mpi3mr: add support of internal watchdog thread

>   mpi3mr: add support of event handling part-1

>   mpi3mr: add support of event handling pcie devices part-2

>   mpi3mr: add support of event handling part-3

>   mpi3mr: add support for recovering controller

>   mpi3mr: add support of timestamp sync with firmware

>   mpi3mr: print ioc info for debugging

>   mpi3mr: add bios_param shost template hook

>   mpi3mr: implement scsi error handler hooks

>   mpi3mr: add change queue depth support

>   mpi3mr: allow certain commands during pci-remove hook

>   mpi3mr: hardware workaround for UNMAP commands to nvme drives

>   mpi3mr: add support of threaded isr

>   mpi3mr: add complete support of soft reset

>   mpi3mr: print pending host ios for debug

>   mpi3mr: wait for pending IO completions upon detection of VD IO

>     timeout

>   mpi3mr: add support of PM suspend and resume

>   mpi3mr: add support of DSN secure fw check

>   mpi3mr: add eedp dif dix support

>   mpi3mr: add event handling debug prints

> 

>  drivers/scsi/Kconfig                      |    1 +

>  drivers/scsi/Makefile                     |    1 +

>  drivers/scsi/mpi3mr/Kconfig               |    7 +

>  drivers/scsi/mpi3mr/Makefile              |    4 +

>  drivers/scsi/mpi3mr/mpi/mpi30_api.h       |   23 +

>  drivers/scsi/mpi3mr/mpi/mpi30_cnfg.h      | 2721 ++++++++++++++

>  drivers/scsi/mpi3mr/mpi/mpi30_image.h     |  285 ++

>  drivers/scsi/mpi3mr/mpi/mpi30_init.h      |  216 ++

>  drivers/scsi/mpi3mr/mpi/mpi30_ioc.h       | 1423 +++++++

>  drivers/scsi/mpi3mr/mpi/mpi30_sas.h       |   46 +

>  drivers/scsi/mpi3mr/mpi/mpi30_transport.h |  675 ++++

>  drivers/scsi/mpi3mr/mpi/mpi30_type.h      |   89 +

>  drivers/scsi/mpi3mr/mpi3mr.h              |  906 +++++

>  drivers/scsi/mpi3mr/mpi3mr_debug.h        |   60 +

>  drivers/scsi/mpi3mr/mpi3mr_fw.c           | 3944 ++++++++++++++++++++

>  drivers/scsi/mpi3mr/mpi3mr_os.c           | 4148 +++++++++++++++++++++

>  16 files changed, 14549 insertions(+)

>  create mode 100644 drivers/scsi/mpi3mr/Kconfig

>  create mode 100644 drivers/scsi/mpi3mr/Makefile

>  create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_api.h

>  create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_cnfg.h

>  create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_image.h

>  create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_init.h

>  create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_ioc.h

>  create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_sas.h

>  create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_transport.h

>  create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_type.h

>  create mode 100644 drivers/scsi/mpi3mr/mpi3mr.h

>  create mode 100644 drivers/scsi/mpi3mr/mpi3mr_debug.h

>  create mode 100644 drivers/scsi/mpi3mr/mpi3mr_fw.c

>  create mode 100644 drivers/scsi/mpi3mr/mpi3mr_os.c

>
Hannes Reinecke Feb. 23, 2021, 12:55 p.m. UTC | #5
On 12/22/20 11:11 AM, Kashyap Desai wrote:
> This patch covers basic pci device driver requirements -

> device probe, memory allocation, mapping system registers,

> allocate irq lines etc.

> 

> Source is managed in mainly three different files.

> 

> mpi3mr_fw.c -	Keep common code which interact with underlying fw/hw.

> mpi3mr_os.c -	Keep common code which interact with scsi midlayer.

> mpi3mr_app.c -	Keep common code which interact with application/ioctl.

> 		This is currently work in progress.

> 

> Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>

> Cc: sathya.prakash@broadcom.com

> ---

>  drivers/scsi/mpi3mr/Makefile       |    4 +

>  drivers/scsi/mpi3mr/mpi3mr.h       |  526 ++++++++

>  drivers/scsi/mpi3mr/mpi3mr_debug.h |   60 +

>  drivers/scsi/mpi3mr/mpi3mr_fw.c    | 1819 ++++++++++++++++++++++++++++

>  drivers/scsi/mpi3mr/mpi3mr_os.c    |  368 ++++++

>  5 files changed, 2777 insertions(+)

>  create mode 100644 drivers/scsi/mpi3mr/Makefile

>  create mode 100644 drivers/scsi/mpi3mr/mpi3mr.h

>  create mode 100644 drivers/scsi/mpi3mr/mpi3mr_debug.h

>  create mode 100644 drivers/scsi/mpi3mr/mpi3mr_fw.c

>  create mode 100644 drivers/scsi/mpi3mr/mpi3mr_os.c

> 

> diff --git a/drivers/scsi/mpi3mr/Makefile b/drivers/scsi/mpi3mr/Makefile

> new file mode 100644

> index 000000000000..7c2063e04c81

> --- /dev/null

> +++ b/drivers/scsi/mpi3mr/Makefile

> @@ -0,0 +1,4 @@

> +# mpi3mr makefile

> +obj-m += mpi3mr.o

> +mpi3mr-y +=  mpi3mr_os.o     \

> +		mpi3mr_fw.o \

> diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h

> new file mode 100644

> index 000000000000..dd79b12218e1

> --- /dev/null

> +++ b/drivers/scsi/mpi3mr/mpi3mr.h

> @@ -0,0 +1,526 @@

> +/* SPDX-License-Identifier: GPL-2.0-or-later */

> +/*

> + * Driver for Broadcom MPI3 Storage Controllers

> + *

> + * Copyright (C) 2017-2020 Broadcom Inc.

> + *  (mailto: mpi3mr-linuxdrv.pdl@broadcom.com)

> + *

> + */

> +

> +#ifndef MPI3MR_H_INCLUDED

> +#define MPI3MR_H_INCLUDED

> +

> +#include <linux/blkdev.h>

> +#include <linux/blk-mq.h>

> +#include <linux/blk-mq-pci.h>

> +#include <linux/delay.h>

> +#include <linux/dmapool.h>

> +#include <linux/errno.h>

> +#include <linux/init.h>

> +#include <linux/io.h>

> +#include <linux/interrupt.h>

> +#include <linux/kernel.h>

> +#include <linux/miscdevice.h>

> +#include <linux/module.h>

> +#include <linux/pci.h>

> +#include <linux/poll.h>

> +#include <linux/sched.h>

> +#include <linux/slab.h>

> +#include <linux/types.h>

> +#include <linux/uaccess.h>

> +#include <linux/utsname.h>

> +#include <linux/version.h>

> +#include <linux/workqueue.h>

> +#include <asm/unaligned.h>

> +#include <scsi/scsi.h>

> +#include <scsi/scsi_cmnd.h>

> +#include <scsi/scsi_dbg.h>

> +#include <scsi/scsi_device.h>

> +#include <scsi/scsi_host.h>

> +#include <scsi/scsi_tcq.h>

> +

> +#include "mpi/mpi30_api.h"

> +#include "mpi3mr_debug.h"

> +

> +/* Global list and lock for storing multiple adapters managed by the driver */

> +extern spinlock_t mrioc_list_lock;

> +extern struct list_head mrioc_list;

> +

> +#define MPI3MR_DRIVER_VERSION	"00.255.45.01"

> +#define MPI3MR_DRIVER_RELDATE	"12-December-2020"

> +

> +#define MPI3MR_DRIVER_NAME	"mpi3mr"

> +#define MPI3MR_DRIVER_LICENSE	"GPL"

> +#define MPI3MR_DRIVER_AUTHOR	"Broadcom Inc. <mpi3mr-linuxdrv.pdl@broadcom.com>"

> +#define MPI3MR_DRIVER_DESC	"MPI3 Storage Controller Device Driver"

> +

> +#define MPI3MR_NAME_LENGTH	32

> +#define IOCNAME			"%s: "

> +

> +/* Definitions for internal SGL and Chain SGL buffers */

> +#define MPI3MR_PAGE_SIZE_4K		4096

> +#define MPI3MR_SG_DEPTH		(PAGE_SIZE/sizeof(Mpi3SGESimple_t))

> +

> +/* Definitions for MAX values for shost */

> +#define MPI3MR_MAX_CMDS_LUN	7

> +#define MPI3MR_MAX_CDB_LENGTH	32

> +

> +/* Admin queue management definitions */

> +#define MPI3MR_ADMIN_REQ_Q_SIZE		(2 * MPI3MR_PAGE_SIZE_4K)

> +#define MPI3MR_ADMIN_REPLY_Q_SIZE	(4 * MPI3MR_PAGE_SIZE_4K)

> +#define MPI3MR_ADMIN_REQ_FRAME_SZ	128

> +#define MPI3MR_ADMIN_REPLY_FRAME_SZ	16

> +

> +

> +/* Reserved Host Tag definitions */

> +#define MPI3MR_HOSTTAG_INVALID		0xFFFF

> +#define MPI3MR_HOSTTAG_INITCMDS		1

> +#define MPI3MR_HOSTTAG_IOCTLCMDS	2

> +#define MPI3MR_HOSTTAG_BLK_TMS		5

> +

> +#define MPI3MR_NUM_DEVRMCMD		1

> +#define MPI3MR_HOSTTAG_DEVRMCMD_MIN	(MPI3MR_HOSTTAG_BLK_TMS + 1)

> +#define MPI3MR_HOSTTAG_DEVRMCMD_MAX	(MPI3MR_HOSTTAG_DEVRMCMD_MIN + \

> +						MPI3MR_NUM_DEVRMCMD - 1)

> +

> +#define MPI3MR_INTERNAL_CMDS_RESVD     MPI3MR_HOSTTAG_DEVRMCMD_MAX

> +

> +/* Reduced resource count definition for crash kernel */

> +#define MPI3MR_HOST_IOS_KDUMP		128

> +

> +/* command/controller interaction timeout definitions in seconds */

> +#define MPI3MR_INTADMCMD_TIMEOUT		10

> +#define MPI3MR_RESETTM_TIMEOUT			30

> +#define MPI3MR_DEFAULT_SHUTDOWN_TIME		120

> +

> +#define MPI3MR_WATCHDOG_INTERVAL		1000 /* in milli seconds */

> +

> +/* Internal admin command state definitions*/

> +#define MPI3MR_CMD_NOTUSED	0x8000

> +#define MPI3MR_CMD_COMPLETE	0x0001

> +#define MPI3MR_CMD_PENDING	0x0002

> +#define MPI3MR_CMD_REPLY_VALID	0x0004

> +#define MPI3MR_CMD_RESET	0x0008

> +

> +/* Definitions for Event replies and sense buffer allocated per controller */

> +#define MPI3MR_NUM_EVT_REPLIES	64

> +#define MPI3MR_SENSEBUF_SZ	256

> +#define MPI3MR_SENSEBUF_FACTOR	3

> +#define MPI3MR_CHAINBUF_FACTOR	3

> +

> +/* Invalid target device handle */

> +#define MPI3MR_INVALID_DEV_HANDLE	0xFFFF

> +

> +/* Controller Reset related definitions */

> +#define MPI3MR_HOSTDIAG_UNLOCK_RETRY_COUNT	5

> +#define MPI3MR_MAX_RESET_RETRY_COUNT		3

> +

> +/* ResponseCode definitions */

> +#define MPI3MR_RI_MASK_RESPCODE		(0x000000FF)

> +#define MPI3MR_RSP_TM_COMPLETE		0x00

> +#define MPI3MR_RSP_INVALID_FRAME	0x02

> +#define MPI3MR_RSP_TM_NOT_SUPPORTED	0x04

> +#define MPI3MR_RSP_TM_FAILED		0x05

> +#define MPI3MR_RSP_TM_SUCCEEDED		0x08

> +#define MPI3MR_RSP_TM_INVALID_LUN	0x09

> +#define MPI3MR_RSP_TM_OVERLAPPED_TAG	0x0A

> +#define MPI3MR_RSP_IO_QUEUED_ON_IOC \

> +			MPI3_SCSITASKMGMT_RSPCODE_IO_QUEUED_ON_IOC

> +

> +/* SGE Flag definition */

> +#define MPI3MR_SGEFLAGS_SYSTEM_SIMPLE_END_OF_LIST \

> +	(MPI3_SGE_FLAGS_ELEMENT_TYPE_SIMPLE | MPI3_SGE_FLAGS_DLAS_SYSTEM | \

> +	MPI3_SGE_FLAGS_END_OF_LIST)

> +

> +/* IOC State definitions */

> +enum mpi3mr_iocstate {

> +	MRIOC_STATE_READY = 1,

> +	MRIOC_STATE_RESET,

> +	MRIOC_STATE_FAULT,

> +	MRIOC_STATE_BECOMING_READY,

> +	MRIOC_STATE_RESET_REQUESTED,

> +	MRIOC_STATE_UNRECOVERABLE,

> +};

> +

> +/* Reset reason code definitions*/

> +enum mpi3mr_reset_reason {

> +	MPI3MR_RESET_FROM_BRINGUP = 1,

> +	MPI3MR_RESET_FROM_FAULT_WATCH = 2,

> +	MPI3MR_RESET_FROM_IOCTL = 3,

> +	MPI3MR_RESET_FROM_EH_HOS = 4,

> +	MPI3MR_RESET_FROM_TM_TIMEOUT = 5,

> +	MPI3MR_RESET_FROM_IOCTL_TIMEOUT = 6,

> +	MPI3MR_RESET_FROM_MUR_FAILURE = 7,

> +	MPI3MR_RESET_FROM_CTLR_CLEANUP = 8,

> +	MPI3MR_RESET_FROM_CIACTIV_FAULT = 9,

> +	MPI3MR_RESET_FROM_PE_TIMEOUT = 10,

> +	MPI3MR_RESET_FROM_TSU_TIMEOUT = 11,

> +	MPI3MR_RESET_FROM_DELREQQ_TIMEOUT = 12,

> +	MPI3MR_RESET_FROM_DELREPQ_TIMEOUT = 13,

> +	MPI3MR_RESET_FROM_CREATEREPQ_TIMEOUT = 14,

> +	MPI3MR_RESET_FROM_CREATEREQQ_TIMEOUT = 15,

> +	MPI3MR_RESET_FROM_IOCFACTS_TIMEOUT = 16,

> +	MPI3MR_RESET_FROM_IOCINIT_TIMEOUT = 17,

> +	MPI3MR_RESET_FROM_EVTNOTIFY_TIMEOUT = 18,

> +	MPI3MR_RESET_FROM_EVTACK_TIMEOUT = 19,

> +	MPI3MR_RESET_FROM_CIACTVRST_TIMER = 20,

> +	MPI3MR_RESET_FROM_GETPKGVER_TIMEOUT = 21,

> +};

> +

> +/**

> + * struct mpi3mr_compimg_ver - replica of component image

> + * version defined in mpi30_image.h in host endianness

> + *

> + */

> +struct mpi3mr_compimg_ver {

> +	u16 build_num;

> +	u16 cust_id;

> +	u8 ph_minor;

> +	u8 ph_major;

> +	u8 gen_minor;

> +	u8 gen_major;

> +};

> +

> +/**

> + * struct mpi3mr_ioc_facs - replica of component image version

> + * defined in mpi30_ioc.h in host endianness

> + *

> + */

> +struct mpi3mr_ioc_facts {

> +	u32 ioc_capabilities;

> +	struct mpi3mr_compimg_ver fw_ver;

> +	u32 mpi_version;

> +	u16 max_reqs;

> +	u16 product_id;

> +	u16 op_req_sz;

> +	u16 reply_sz;

> +	u16 exceptions;

> +	u16 max_perids;

> +	u16 max_pds;

> +	u16 max_sasexpanders;

> +	u16 max_sasinitiators;

> +	u16 max_enclosures;

> +	u16 max_pcieswitches;

> +	u16 max_nvme;

> +	u16 max_vds;

> +	u16 max_hpds;

> +	u16 max_advhpds;

> +	u16 max_raidpds;

> +	u16 min_devhandle;

> +	u16 max_devhandle;

> +	u16 max_op_req_q;

> +	u16 max_op_reply_q;

> +	u16 shutdown_timeout;

> +	u8 ioc_num;

> +	u8 who_init;

> +	u16 max_msix_vectors;

> +	u8 personality;

> +	u8 dma_mask;

> +	u8 protocol_flags;

> +	u8 sge_mod_mask;

> +	u8 sge_mod_value;

> +	u8 sge_mod_shift;

> +};

> +

> +/**

> + * struct op_req_qinfo -  Operational Request Queue Information

> + *

> + * @ci: consumer index

> + * @pi: producer index

> + */

> +struct op_req_qinfo {

> +	u16 ci;

> +	u16 pi;

> +};

> +

> +/**

> + * struct op_reply_qinfo -  Operational Reply Queue Information

> + *

> + * @ci: consumer index

> + * @qid: Queue Id starting from 1

> + */

> +struct op_reply_qinfo {

> +	u16 ci;

> +	u16 qid;

> +};

> +

> +/**

> + * struct mpi3mr_intr_info -  Interrupt cookie information

> + *

> + * @mrioc: Adapter instance reference

> + * @msix_index: MSIx index

> + * @op_reply_q: Associated operational reply queue

> + * @name: Dev name for the irq claiming device

> + */

> +struct mpi3mr_intr_info {

> +	struct mpi3mr_ioc *mrioc;

> +	u16 msix_index;

> +	struct op_reply_qinfo *op_reply_q;

> +	char name[MPI3MR_NAME_LENGTH];

> +};

> +

> +

> +typedef struct mpi3mr_drv_cmd DRV_CMD;

> +typedef void (*DRV_CMD_CALLBACK)(struct mpi3mr_ioc *mrioc,

> +	DRV_CMD *drv_cmd);

> +

> +/**

> + * struct mpi3mr_drv_cmd - Internal command tracker

> + *

> + * @mutex: Command mutex

> + * @done: Completeor for wakeup

> + * @reply: Firmware reply for internal commands

> + * @sensebuf: Sensebuf for SCSI IO commands

> + * @state: Command State

> + * @dev_handle: Firmware handle for device specific commands

> + * @ioc_status: IOC status from the firmware

> + * @ioc_loginfo:IOC log info from the firmware

> + * @is_waiting: Is the command issued in block mode

> + * @retry_count: Retry count for retriable commands

> + * @host_tag: Host tag used by the command

> + * @callback: Callback for non blocking commands

> + */

> +struct mpi3mr_drv_cmd {

> +	struct mutex mutex;

> +	struct completion done;

> +	void *reply;

> +	u8 *sensebuf;

> +	u16 state;

> +	u16 dev_handle;

> +	u16 ioc_status;

> +	u32 ioc_loginfo;

> +	u8 is_waiting;

> +	u8 retry_count;

> +	u16 host_tag;

> +	DRV_CMD_CALLBACK callback;

> +};

> +

> +

> +/**

> + * struct chain_element - memory descriptor structure to store

> + * virtual and dma addresses for chain elements.

> + *

> + * @addr: virtual address

> + * @dma_addr: dma address

> + */

> +struct chain_element {

> +	void *addr;

> +	dma_addr_t dma_addr;

> +};

> +

> +/**

> + * struct scmd_priv - SCSI command private data

> + *

> + * @host_tag: Host tag specific to operational queue

> + * @in_lld_scope: Command in LLD scope or not

> + * @scmd: SCSI Command pointer

> + * @req_q_idx: Operational request queue index

> + * @chain_idx: Chain frame index

> + * @mpi3mr_scsiio_req: MPI SCSI IO request

> + */

> +struct scmd_priv {

> +	u16 host_tag;

> +	u8 in_lld_scope;

> +	struct scsi_cmnd *scmd;

> +	u16 req_q_idx;

> +	int chain_idx;

> +	u8 mpi3mr_scsiio_req[MPI3MR_ADMIN_REQ_FRAME_SZ];

> +};

> +

> +/**

> + * struct mpi3mr_ioc - Adapter anchor structure stored in shost

> + * private data

> + *

> + * @list: List pointer

> + * @pdev: PCI device pointer

> + * @shost: Scsi_Host pointer

> + * @id: Controller ID

> + * @cpu_count: Number of online CPUs

> + * @name: Controller ASCII name

> + * @driver_name: Driver ASCII name

> + * @sysif_regs: System interface registers virtual address

> + * @sysif_regs_phys: System interface registers physical address

> + * @bars: PCI BARS

> + * @dma_mask: DMA mask

> + * @msix_count: Number of MSIX vectors used

> + * @intr_enabled: Is interrupts enabled

> + * @num_admin_req: Number of admin requests

> + * @admin_req_q_sz: Admin request queue size

> + * @admin_req_pi: Admin request queue producer index

> + * @admin_req_ci: Admin request queue consumer index

> + * @admin_req_base: Admin request queue base virtual address

> + * @admin_req_dma: Admin request queue base dma address

> + * @admin_req_lock: Admin queue access lock

> + * @num_admin_replies: Number of admin replies

> + * @admin_reply_q_sz: Admin reply queue size

> + * @admin_reply_ci: Admin reply queue consumer index

> + * @admin_reply_ephase:Admin reply queue expected phase

> + * @admin_reply_base: Admin reply queue base virtual address

> + * @admin_reply_dma: Admin reply queue base dma address

> + * @ready_timeout: Controller ready timeout

> + * @intr_info: Interrupt cookie pointer

> + * @intr_info_count: Number of interrupt cookies

> + * @num_queues: Number of operational queues

> + * @num_op_req_q: Number of operational request queues

> + * @req_qinfo: Operational request queue info pointer

> + * @num_op_reply_q: Number of operational reply queues

> + * @op_reply_qinfo: Operational reply queue info pointer

> + * @init_cmds: Command tracker for initialization commands

> + * @facts: Cached IOC facts data

> + * @op_reply_desc_sz: Operational reply descriptor size

> + * @num_reply_bufs: Number of reply buffers allocated

> + * @reply_buf_pool: Reply buffer pool

> + * @reply_buf: Reply buffer base virtual address

> + * @reply_buf_dma: Reply buffer DMA address

> + * @reply_buf_dma_max_address: Reply DMA address max limit

> + * @reply_free_qsz: Reply free queue size

> + * @reply_free_q_pool: Reply free queue pool

> + * @reply_free_q: Reply free queue base virtual address

> + * @reply_free_q_dma: Reply free queue base DMA address

> + * @reply_free_queue_lock: Reply free queue lock

> + * @reply_free_queue_host_index: Reply free queue host index

> + * @num_sense_bufs: Number of sense buffers

> + * @sense_buf_pool: Sense buffer pool

> + * @sense_buf: Sense buffer base virtual address

> + * @sense_buf_dma: Sense buffer base DMA address

> + * @sense_buf_q_sz: Sense buffer queue size

> + * @sense_buf_q_pool: Sense buffer queue pool

> + * @sense_buf_q: Sense buffer queue virtual address

> + * @sense_buf_q_dma: Sense buffer queue DMA address

> + * @sbq_lock: Sense buffer queue lock

> + * @sbq_host_index: Sense buffer queuehost index

> + * @is_driver_loading: Is driver still loading

> + * @max_host_ios: Maximum host I/O count

> + * @chain_buf_count: Chain buffer count

> + * @chain_buf_pool: Chain buffer pool

> + * @chain_sgl_list: Chain SGL list

> + * @chain_bitmap_sz: Chain buffer allocator bitmap size

> + * @chain_bitmap: Chain buffer allocator bitmap

> + * @reset_in_progress: Reset in progress flag

> + * @unrecoverable: Controller unrecoverable flag

> + * @logging_level: Controller debug logging level

> + * @current_event: Firmware event currently in process

> + * @driver_info: Driver, Kernel, OS information to firmware

> + * @change_count: Topology change count

> + */

> +struct mpi3mr_ioc {

> +	struct list_head list;

> +	struct pci_dev *pdev;

> +	struct Scsi_Host *shost;

> +	u8 id;

> +	int cpu_count;

> +

> +	char name[MPI3MR_NAME_LENGTH];

> +	char driver_name[MPI3MR_NAME_LENGTH];

> +

> +	Mpi3SysIfRegs_t __iomem *sysif_regs;

> +	resource_size_t sysif_regs_phys;

> +	int bars;

> +	u64 dma_mask;

> +

> +	u16 msix_count;

> +	u8 intr_enabled;

> +

> +	u16 num_admin_req;

> +	u32 admin_req_q_sz;

> +	u16 admin_req_pi;

> +	u16 admin_req_ci;

> +	void *admin_req_base;

> +	dma_addr_t admin_req_dma;

> +	spinlock_t admin_req_lock;

> +

> +	u16 num_admin_replies;

> +	u32 admin_reply_q_sz;

> +	u16 admin_reply_ci;

> +	u8 admin_reply_ephase;

> +	void *admin_reply_base;

> +	dma_addr_t admin_reply_dma;

> +

> +	u32 ready_timeout;

> +

> +	struct mpi3mr_intr_info *intr_info;


Please be consistent.
If you must introduce typedefs for your internal structures, okay.
But then introduce typedefs for _all_ internal structures.
Or leave out the typedefs and just use 'struct XXX', which is actually the
recommended way for Linux.
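
For instance, the callback member could be declared in the plain-struct style
being suggested (a hypothetical form, for illustration only):

	/* Hypothetical, typedef-free form of the callback declaration */
	void (*callback)(struct mpi3mr_ioc *mrioc,
			 struct mpi3mr_drv_cmd *drv_cmd);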

> +	u16 intr_info_count;

> +

> +	u16 num_queues;

> +	u16 num_op_req_q;

> +	struct op_req_qinfo *req_qinfo;

> +

> +	u16 num_op_reply_q;

> +	struct op_reply_qinfo *op_reply_qinfo;

> +

> +	struct mpi3mr_drv_cmd init_cmds;

> +	struct mpi3mr_ioc_facts facts;

> +	u16 op_reply_desc_sz;

> +

> +	u32 num_reply_bufs;

> +	struct dma_pool *reply_buf_pool;

> +	u8 *reply_buf;

> +	dma_addr_t reply_buf_dma;

> +	dma_addr_t reply_buf_dma_max_address;

> +

> +	u16 reply_free_qsz;

> +	struct dma_pool *reply_free_q_pool;

> +	U64 *reply_free_q;

> +	dma_addr_t reply_free_q_dma;

> +	spinlock_t reply_free_queue_lock;

> +	u32 reply_free_queue_host_index;

> +

> +	u32 num_sense_bufs;

> +	struct dma_pool *sense_buf_pool;

> +	u8 *sense_buf;

> +	dma_addr_t sense_buf_dma;

> +

> +	u16 sense_buf_q_sz;

> +	struct dma_pool *sense_buf_q_pool;

> +	U64 *sense_buf_q;

> +	dma_addr_t sense_buf_q_dma;

> +	spinlock_t sbq_lock;

> +	u32 sbq_host_index;

> +

> +	u8 is_driver_loading;

> +

> +	u16 max_host_ios;

> +

> +	u32 chain_buf_count;

> +	struct dma_pool *chain_buf_pool;

> +	struct chain_element *chain_sgl_list;

> +	u16  chain_bitmap_sz;

> +	void *chain_bitmap;

> +

> +	u8 reset_in_progress;

> +	u8 unrecoverable;

> +

> +	int logging_level;

> +

> +	struct mpi3mr_fwevt *current_event;

> +	Mpi3DriverInfoLayout_t driver_info;


See my comment about struct typedefs above.

> +	u16 change_count;

> +};

> +

> +int mpi3mr_setup_resources(struct mpi3mr_ioc *mrioc);

> +void mpi3mr_cleanup_resources(struct mpi3mr_ioc *mrioc);

> +int mpi3mr_init_ioc(struct mpi3mr_ioc *mrioc);

> +void mpi3mr_cleanup_ioc(struct mpi3mr_ioc *mrioc);

> +int mpi3mr_admin_request_post(struct mpi3mr_ioc *mrioc, void *admin_req,

> +u16 admin_req_sz, u8 ignore_reset);

> +void mpi3mr_add_sg_single(void *paddr, u8 flags, u32 length,

> +			  dma_addr_t dma_addr);

> +void mpi3mr_build_zero_len_sge(void *paddr);

> +void *mpi3mr_get_sensebuf_virt_addr(struct mpi3mr_ioc *mrioc,

> +				     dma_addr_t phys_addr);

> +void *mpi3mr_get_reply_virt_addr(struct mpi3mr_ioc *mrioc,

> +				     dma_addr_t phys_addr);

> +void mpi3mr_repost_sense_buf(struct mpi3mr_ioc *mrioc,

> +				     u64 sense_buf_dma);

> +

> +void mpi3mr_start_watchdog(struct mpi3mr_ioc *mrioc);

> +void mpi3mr_stop_watchdog(struct mpi3mr_ioc *mrioc);

> +

> +int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc,

> +			      u32 reset_reason, u8 snapdump);

> +void mpi3mr_ioc_disable_intr(struct mpi3mr_ioc *mrioc);

> +void mpi3mr_ioc_enable_intr(struct mpi3mr_ioc *mrioc);

> +

> +enum mpi3mr_iocstate mpi3mr_get_iocstate(struct mpi3mr_ioc *mrioc);

> +

> +#endif /*MPI3MR_H_INCLUDED*/

> diff --git a/drivers/scsi/mpi3mr/mpi3mr_debug.h b/drivers/scsi/mpi3mr/mpi3mr_debug.h

> new file mode 100644

> index 000000000000..d35f296d9325

> --- /dev/null

> +++ b/drivers/scsi/mpi3mr/mpi3mr_debug.h

> @@ -0,0 +1,60 @@

> +/* SPDX-License-Identifier: GPL-2.0-or-later */

> +/*

> + * Driver for Broadcom MPI3 Storage Controllers

> + *

> + * Copyright (C) 2017-2020 Broadcom Inc.

> + *  (mailto: mpi3mr-linuxdrv.pdl@broadcom.com)

> + *

> + */

> +

> +#ifndef MPI3SAS_DEBUG_H_INCLUDED

> +

> +#define MPI3SAS_DEBUG_H_INCLUDED

> +

> +/*

> + * debug levels

> + */

> +#define MPI3_DEBUG			0x00000001

> +#define MPI3_DEBUG_MSG_FRAME		0x00000002

> +#define MPI3_DEBUG_SG			0x00000004

> +#define MPI3_DEBUG_EVENTS		0x00000008

> +#define MPI3_DEBUG_EVENT_WORK_TASK	0x00000010

> +#define MPI3_DEBUG_INIT			0x00000020

> +#define MPI3_DEBUG_EXIT			0x00000040

> +#define MPI3_DEBUG_FAIL			0x00000080

> +#define MPI3_DEBUG_TM			0x00000100

> +#define MPI3_DEBUG_REPLY		0x00000200

> +#define MPI3_DEBUG_HANDSHAKE		0x00000400

> +#define MPI3_DEBUG_CONFIG		0x00000800

> +#define MPI3_DEBUG_DL			0x00001000

> +#define MPI3_DEBUG_RESET		0x00002000

> +#define MPI3_DEBUG_SCSI			0x00004000

> +#define MPI3_DEBUG_IOCTL		0x00008000

> +#define MPI3_DEBUG_CSMISAS		0x00010000

> +#define MPI3_DEBUG_SAS			0x00020000

> +#define MPI3_DEBUG_TRANSPORT		0x00040000

> +#define MPI3_DEBUG_TASK_SET_FULL	0x00080000

> +#define MPI3_DEBUG_TRIGGER_DIAG		0x00200000

> +

> +

> +/*

> + * debug macros

> + */

> +

> +#define ioc_err(ioc, fmt, ...) \

> +	pr_err("%s: " fmt, (ioc)->name, ##__VA_ARGS__)

> +#define ioc_notice(ioc, fmt, ...) \

> +	pr_notice("%s: " fmt, (ioc)->name, ##__VA_ARGS__)

> +#define ioc_warn(ioc, fmt, ...) \

> +	pr_warn("%s: " fmt, (ioc)->name, ##__VA_ARGS__)

> +#define ioc_info(ioc, fmt, ...) \

> +	pr_info("%s: " fmt, (ioc)->name, ##__VA_ARGS__)

> +

> +

> +#define dbgprint(IOC, FMT, ...) \

> +	do { \

> +		if (IOC->logging_level & MPI3_DEBUG) \

> +			pr_info("%s: " FMT, (IOC)->name, ##__VA_ARGS__); \

> +	} while (0)

> +

> +#endif /* MPT3SAS_DEBUG_H_INCLUDED */

> diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c

> new file mode 100644

> index 000000000000..97eb7e6ec5c6

> --- /dev/null

> +++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c

> @@ -0,0 +1,1819 @@

> +// SPDX-License-Identifier: GPL-2.0-or-later

> +/*

> + * Driver for Broadcom MPI3 Storage Controllers

> + *

> + * Copyright (C) 2017-2020 Broadcom Inc.

> + *  (mailto: mpi3mr-linuxdrv.pdl@broadcom.com)

> + *

> + */

> +

> +#include "mpi3mr.h"

> +

> +#if defined(writeq) && defined(CONFIG_64BIT)

> +static inline void mpi3mr_writeq(__u64 b, volatile void __iomem *addr)

> +{

> +	writeq(b, addr);

> +}

> +#else

> +static inline void mpi3mr_writeq(__u64 b, volatile void __iomem *addr)

> +{

> +	__u64 data_out = b;

> +

> +	writel((u32)(data_out), addr);

> +	writel((u32)(data_out >> 32), (addr + 4));

> +}

> +#endif

> +

> +static void mpi3mr_sync_irqs(struct mpi3mr_ioc *mrioc)

> +{

> +	u16 i, max_vectors;

> +

> +	max_vectors = mrioc->intr_info_count;

> +

> +	for (i = 0; i < max_vectors; i++)

> +		synchronize_irq(pci_irq_vector(mrioc->pdev, i));

> +}

> +

> +void mpi3mr_ioc_disable_intr(struct mpi3mr_ioc *mrioc)

> +{

> +	mrioc->intr_enabled = 0;

> +	mpi3mr_sync_irqs(mrioc);

> +}

> +

> +void mpi3mr_ioc_enable_intr(struct mpi3mr_ioc *mrioc)

> +{

> +	mrioc->intr_enabled = 1;

> +}

> +

> +static void mpi3mr_cleanup_isr(struct mpi3mr_ioc *mrioc)

> +{

> +	u16 i;

> +

> +	mpi3mr_ioc_disable_intr(mrioc);

> +

> +	if (!mrioc->intr_info)

> +		return;

> +

> +	for (i = 0; i < mrioc->intr_info_count; i++)

> +		free_irq(pci_irq_vector(mrioc->pdev, i),

> +		    (mrioc->intr_info + i));

> +

> +	kfree(mrioc->intr_info);

> +	mrioc->intr_info = NULL;

> +	mrioc->intr_info_count = 0;

> +	pci_free_irq_vectors(mrioc->pdev);

> +}

> +

> +void mpi3mr_add_sg_single(void *paddr, u8 flags, u32 length,

> +	dma_addr_t dma_addr)

> +{

> +	Mpi3SGESimple_t *sgel = paddr;

> +

> +	sgel->Flags = flags;

> +	sgel->Length = cpu_to_le32(length);

> +	sgel->Address = cpu_to_le64(dma_addr);

> +}

> +

> +void mpi3mr_build_zero_len_sge(void *paddr)

> +{

> +	u8 sgl_flags = MPI3MR_SGEFLAGS_SYSTEM_SIMPLE_END_OF_LIST;

> +

> +	mpi3mr_add_sg_single(paddr, sgl_flags, 0, -1);

> +

> +}

> +void *mpi3mr_get_reply_virt_addr(struct mpi3mr_ioc *mrioc,

> +	dma_addr_t phys_addr)

> +{

> +	if (!phys_addr)

> +		return NULL;

> +

> +	if ((phys_addr < mrioc->reply_buf_dma) ||

> +	    (phys_addr > mrioc->reply_buf_dma_max_address))

> +		return NULL;

> +

> +	return mrioc->reply_buf + (phys_addr - mrioc->reply_buf_dma);

> +}

> +

> +void *mpi3mr_get_sensebuf_virt_addr(struct mpi3mr_ioc *mrioc,

> +	dma_addr_t phys_addr)

> +{

> +	if (!phys_addr)

> +		return NULL;

> +

> +	return mrioc->sense_buf + (phys_addr - mrioc->sense_buf_dma);

> +}

> +

> +static void mpi3mr_repost_reply_buf(struct mpi3mr_ioc *mrioc,

> +	u64 reply_dma)

> +{

> +	u32 old_idx = 0;

> +

> +	spin_lock(&mrioc->reply_free_queue_lock);

> +	old_idx  =  mrioc->reply_free_queue_host_index;

> +	mrioc->reply_free_queue_host_index = (

> +	    (mrioc->reply_free_queue_host_index ==

> +	    (mrioc->reply_free_qsz - 1)) ? 0 :

> +	    (mrioc->reply_free_queue_host_index + 1));

> +	mrioc->reply_free_q[old_idx] = cpu_to_le64(reply_dma);

> +	writel(mrioc->reply_free_queue_host_index,

> +	    &mrioc->sysif_regs->ReplyFreeHostIndex);

> +	spin_unlock(&mrioc->reply_free_queue_lock);

> +}

> +

> +void mpi3mr_repost_sense_buf(struct mpi3mr_ioc *mrioc,

> +	u64 sense_buf_dma)

> +{

> +	u32 old_idx = 0;

> +

> +	spin_lock(&mrioc->sbq_lock);

> +	old_idx  =  mrioc->sbq_host_index;

> +	mrioc->sbq_host_index = ((mrioc->sbq_host_index ==

> +	    (mrioc->sense_buf_q_sz - 1)) ? 0 :

> +	    (mrioc->sbq_host_index + 1));

> +	mrioc->sense_buf_q[old_idx] = cpu_to_le64(sense_buf_dma);

> +	writel(mrioc->sbq_host_index,

> +	    &mrioc->sysif_regs->SenseBufferFreeHostIndex);

> +	spin_unlock(&mrioc->sbq_lock);

> +}

> +

> +static void mpi3mr_handle_events(struct mpi3mr_ioc *mrioc,

> +	Mpi3DefaultReply_t *def_reply)

> +{

> +	Mpi3EventNotificationReply_t *event_reply =

> +	    (Mpi3EventNotificationReply_t *)def_reply;

> +

> +	mrioc->change_count = le16_to_cpu(event_reply->IOCChangeCount);

> +}

> +

> +static struct mpi3mr_drv_cmd *

> +mpi3mr_get_drv_cmd(struct mpi3mr_ioc *mrioc, u16 host_tag,

> +	Mpi3DefaultReply_t *def_reply)

> +{

> +	switch (host_tag) {

> +	case MPI3MR_HOSTTAG_INITCMDS:

> +		return &mrioc->init_cmds;

> +	case MPI3MR_HOSTTAG_INVALID:

> +		if (def_reply && def_reply->Function ==

> +		    MPI3_FUNCTION_EVENT_NOTIFICATION)

> +			mpi3mr_handle_events(mrioc, def_reply);

> +		return NULL;

> +	default:

> +		break;

> +	}

> +

> +	return NULL;

> +}

> +

> +static void mpi3mr_process_admin_reply_desc(struct mpi3mr_ioc *mrioc,

> +	Mpi3DefaultReplyDescriptor_t *reply_desc, u64 *reply_dma)

> +{

> +	u16 reply_desc_type, host_tag = 0;

> +	u16 ioc_status = MPI3_IOCSTATUS_SUCCESS;

> +	u32 ioc_loginfo = 0;

> +	Mpi3StatusReplyDescriptor_t *status_desc;

> +	Mpi3AddressReplyDescriptor_t *addr_desc;

> +	Mpi3SuccessReplyDescriptor_t *success_desc;

> +	Mpi3DefaultReply_t *def_reply = NULL;

> +	struct mpi3mr_drv_cmd *cmdptr = NULL;

> +	Mpi3SCSIIOReply_t *scsi_reply;

> +	u8 *sense_buf = NULL;

> +

> +	*reply_dma = 0;

> +	reply_desc_type = le16_to_cpu(reply_desc->ReplyFlags) &

> +	    MPI3_REPLY_DESCRIPT_FLAGS_TYPE_MASK;

> +	switch (reply_desc_type) {

> +	case MPI3_REPLY_DESCRIPT_FLAGS_TYPE_STATUS:

> +		status_desc = (Mpi3StatusReplyDescriptor_t *)reply_desc;

> +		host_tag = le16_to_cpu(status_desc->HostTag);

> +		ioc_status = le16_to_cpu(status_desc->IOCStatus);

> +		if (ioc_status &

> +		    MPI3_REPLY_DESCRIPT_STATUS_IOCSTATUS_LOGINFOAVAIL)

> +			ioc_loginfo = le32_to_cpu(status_desc->IOCLogInfo);

> +		ioc_status &= MPI3_REPLY_DESCRIPT_STATUS_IOCSTATUS_STATUS_MASK;

> +		break;

> +	case MPI3_REPLY_DESCRIPT_FLAGS_TYPE_ADDRESS_REPLY:

> +		addr_desc = (Mpi3AddressReplyDescriptor_t *)reply_desc;

> +		*reply_dma = le64_to_cpu(addr_desc->ReplyFrameAddress);

> +		def_reply = mpi3mr_get_reply_virt_addr(mrioc, *reply_dma);

> +		if (!def_reply)

> +			goto out;

> +		host_tag = le16_to_cpu(def_reply->HostTag);

> +		ioc_status = le16_to_cpu(def_reply->IOCStatus);

> +		if (ioc_status &

> +		    MPI3_REPLY_DESCRIPT_STATUS_IOCSTATUS_LOGINFOAVAIL)

> +			ioc_loginfo = le32_to_cpu(def_reply->IOCLogInfo);

> +		ioc_status &= MPI3_REPLY_DESCRIPT_STATUS_IOCSTATUS_STATUS_MASK;

> +		if (def_reply->Function == MPI3_FUNCTION_SCSI_IO) {

> +			scsi_reply = (Mpi3SCSIIOReply_t *)def_reply;

> +			sense_buf = mpi3mr_get_sensebuf_virt_addr(mrioc,

> +			    le64_to_cpu(scsi_reply->SenseDataBufferAddress));

> +		}

> +		break;

> +	case MPI3_REPLY_DESCRIPT_FLAGS_TYPE_SUCCESS:

> +		success_desc = (Mpi3SuccessReplyDescriptor_t *)reply_desc;

> +		host_tag = le16_to_cpu(success_desc->HostTag);

> +		break;

> +	default:

> +		break;

> +	}

> +

> +	cmdptr = mpi3mr_get_drv_cmd(mrioc, host_tag, def_reply);

> +	if (cmdptr) {

> +		if (cmdptr->state & MPI3MR_CMD_PENDING) {

> +			cmdptr->state |= MPI3MR_CMD_COMPLETE;

> +			cmdptr->ioc_loginfo = ioc_loginfo;

> +			cmdptr->ioc_status = ioc_status;

> +			cmdptr->state &= ~MPI3MR_CMD_PENDING;

> +			if (def_reply) {

> +				cmdptr->state |= MPI3MR_CMD_REPLY_VALID;

> +				memcpy((u8 *)cmdptr->reply, (u8 *)def_reply,

> +				    mrioc->facts.reply_sz);

> +			}

> +			if (cmdptr->is_waiting) {

> +				complete(&cmdptr->done);

> +				cmdptr->is_waiting = 0;

> +			} else if (cmdptr->callback)

> +				cmdptr->callback(mrioc, cmdptr);

> +		}

> +	}

> +out:

> +	if (sense_buf)

> +		mpi3mr_repost_sense_buf(mrioc,

> +		    le64_to_cpu(scsi_reply->SenseDataBufferAddress));

> +}

> +

> +static int mpi3mr_process_admin_reply_q(struct mpi3mr_ioc *mrioc)

> +{

> +	u32 exp_phase = mrioc->admin_reply_ephase;

> +	u32 admin_reply_ci = mrioc->admin_reply_ci;

> +	u32 num_admin_replies = 0;

> +	u64 reply_dma = 0;

> +	Mpi3DefaultReplyDescriptor_t *reply_desc;

> +

> +	reply_desc = (Mpi3DefaultReplyDescriptor_t *)mrioc->admin_reply_base +

> +	    admin_reply_ci;

> +

> +	if ((le16_to_cpu(reply_desc->ReplyFlags) &

> +	    MPI3_REPLY_DESCRIPT_FLAGS_PHASE_MASK) != exp_phase)

> +		return 0;

> +

> +	do {

> +		mrioc->admin_req_ci = le16_to_cpu(reply_desc->RequestQueueCI);

> +		mpi3mr_process_admin_reply_desc(mrioc, reply_desc, &reply_dma);

> +		if (reply_dma)

> +			mpi3mr_repost_reply_buf(mrioc, reply_dma);

> +		num_admin_replies++;

> +		if (++admin_reply_ci == mrioc->num_admin_replies) {

> +			admin_reply_ci = 0;

> +			exp_phase ^= 1;

> +		}

> +		reply_desc =

> +		    (Mpi3DefaultReplyDescriptor_t *)mrioc->admin_reply_base +

> +		    admin_reply_ci;

> +		if ((le16_to_cpu(reply_desc->ReplyFlags) &

> +		    MPI3_REPLY_DESCRIPT_FLAGS_PHASE_MASK) != exp_phase)

> +			break;

> +	} while (1);

> +

> +	writel(admin_reply_ci, &mrioc->sysif_regs->AdminReplyQueueCI);

> +	mrioc->admin_reply_ci = admin_reply_ci;

> +	mrioc->admin_reply_ephase = exp_phase;

> +

> +	return num_admin_replies;

> +}

> +

> +static irqreturn_t mpi3mr_isr_primary(int irq, void *privdata)

> +{

> +	struct mpi3mr_intr_info *intr_info = privdata;

> +	struct mpi3mr_ioc *mrioc;

> +	u16 midx;

> +	u32 num_admin_replies = 0;

> +

> +	if (!intr_info)

> +		return IRQ_NONE;

> +

> +	mrioc = intr_info->mrioc;

> +

> +	if (!mrioc->intr_enabled)

> +		return IRQ_NONE;

> +

> +	midx = intr_info->msix_index;

> +

> +	if (!midx)

> +		num_admin_replies = mpi3mr_process_admin_reply_q(mrioc);

> +

> +	if (num_admin_replies)

> +		return IRQ_HANDLED;

> +	else

> +		return IRQ_NONE;

> +}

> +

> +static irqreturn_t mpi3mr_isr(int irq, void *privdata)

> +{

> +	struct mpi3mr_intr_info *intr_info = privdata;

> +	struct mpi3mr_ioc *mrioc;

> +	u16 midx;

> +	int ret;

> +

> +	if (!intr_info)

> +		return IRQ_NONE;

> +

> +	mrioc = intr_info->mrioc;

> +	midx = intr_info->msix_index;

> +	/* Call primary ISR routine */

> +	ret = mpi3mr_isr_primary(irq, privdata);

> +

> +	return ret;

> +}

> +

> +/**

> + * mpi3mr_isr_poll - Reply queue polling routine

> + * @irq: IRQ

> + * @privdata: Interrupt info

> + *

> + * poll for pending I/O completions in a loop until pending I/Os

> + * present or controller queue depth I/Os are processed.

> + *

> + * Return: IRQ_NONE or IRQ_HANDLED

> + */

> +static irqreturn_t mpi3mr_isr_poll(int irq, void *privdata)

> +{

> +	return IRQ_HANDLED;

> +}

> +

> +/**

> + * mpi3mr_request_irq - Request IRQ and register ISR

> + * @mrioc: Adapter instance reference

> + * @index: IRQ vector index

> + *

> + * Request threaded ISR with primary ISR and secondary

> + *

> + * Return: 0 on success and non zero on failures.

> + */

> +static inline int mpi3mr_request_irq(struct mpi3mr_ioc *mrioc, u16 index)

> +{

> +	struct pci_dev *pdev = mrioc->pdev;

> +	struct mpi3mr_intr_info *intr_info = mrioc->intr_info + index;

> +	int retval = 0;

> +

> +	intr_info->mrioc = mrioc;

> +	intr_info->msix_index = index;

> +	intr_info->op_reply_q = NULL;

> +

> +	snprintf(intr_info->name, MPI3MR_NAME_LENGTH, "%s%d-msix%d",

> +	    mrioc->driver_name, mrioc->id, index);

> +

> +	retval = request_threaded_irq(pci_irq_vector(pdev, index), mpi3mr_isr,

> +	    mpi3mr_isr_poll, IRQF_ONESHOT, intr_info->name, intr_info);

> +	if (retval) {

> +		ioc_err(mrioc, "%s: Unable to allocate interrupt %d!\n",

> +		    intr_info->name, pci_irq_vector(pdev, index));

> +		return retval;

> +	}

> +


The point of having 'mpi3mr_isr_poll()' here is what exactly?

> +	return retval;

> +}

> +

> +/**

> + * mpi3mr_setup_isr - Setup ISR for the controller

> + * @mrioc: Adapter instance reference

> + * @setup_one: Request one IRQ or more

> + *

> + * Allocate IRQ vectors and call mpi3mr_request_irq to setup ISR

> + *

> + * Return: 0 on success and non zero on failures.

> + */

> +static int mpi3mr_setup_isr(struct mpi3mr_ioc *mrioc, u8 setup_one)

> +{

> +	unsigned int irq_flags = PCI_IRQ_MSIX;

> +	u16 max_vectors = 0, i;

> +	int retval = 0;

> +	struct irq_affinity desc = { .pre_vectors =  1};

> +

> +

> +	mpi3mr_cleanup_isr(mrioc);

> +

> +	if (setup_one || reset_devices)

> +		max_vectors = 1;

> +	else {

> +		max_vectors =

> +		    min_t(int, mrioc->cpu_count + 1, mrioc->msix_count);

> +

> +		ioc_info(mrioc,

> +		    "MSI-X vectors supported: %d, no of cores: %d,",

> +		    mrioc->msix_count, mrioc->cpu_count);

> +		ioc_info(mrioc,

> +		    "MSI-x vectors requested: %d\n", max_vectors);

> +	}

> +

> +	irq_flags |= PCI_IRQ_AFFINITY | PCI_IRQ_ALL_TYPES;

> +

> +	i = pci_alloc_irq_vectors_affinity(mrioc->pdev,

> +	    1, max_vectors, irq_flags, &desc);

> +	if (i <= 0) {

> +		ioc_err(mrioc, "Cannot alloc irq vectors\n");

> +		goto out_failed;

> +	}

> +	if (i != max_vectors) {

> +		ioc_info(mrioc,

> +		    "allocated vectors (%d) are less than configured (%d)\n",

> +		    i, max_vectors);

> +

> +		max_vectors = i;

> +	}

> +	mrioc->intr_info = kzalloc(sizeof(struct mpi3mr_intr_info)*max_vectors,

> +	    GFP_KERNEL);

> +	if (!mrioc->intr_info) {

> +		retval = -1;

> +		pci_free_irq_vectors(mrioc->pdev);

> +		goto out_failed;

> +	}

> +	for (i = 0; i < max_vectors; i++) {

> +		retval = mpi3mr_request_irq(mrioc, i);

> +		if (retval) {

> +			mrioc->intr_info_count = i;

> +			goto out_failed;

> +		}

> +	}

> +	mrioc->intr_info_count = max_vectors;

> +	mpi3mr_ioc_enable_intr(mrioc);

> +	return retval;

> +out_failed:

> +	mpi3mr_cleanup_isr(mrioc);

> +

> +	return retval;

> +}

> +

> +static const struct {

> +	enum mpi3mr_iocstate value;

> +	char *name;

> +} mrioc_states[] = {

> +	{ MRIOC_STATE_READY, "ready" },

> +	{ MRIOC_STATE_FAULT, "fault" },

> +	{ MRIOC_STATE_RESET, "reset" },

> +	{ MRIOC_STATE_BECOMING_READY, "becoming ready" },

> +	{ MRIOC_STATE_RESET_REQUESTED, "reset requested" },

> +	{ MRIOC_STATE_UNRECOVERABLE, "unrecoverable error" },

> +};

> +

> +static const char *mpi3mr_iocstate_name(enum mpi3mr_iocstate mrioc_state)

> +{

> +	int i;

> +	char *name = NULL;

> +

> +	for (i = 0; i < ARRAY_SIZE(mrioc_states); i++) {

> +		if (mrioc_states[i].value == mrioc_state) {

> +			name = mrioc_states[i].name;

> +			break;

> +		}

> +	}

> +	return name;

> +}

> +

> +

> +/**

> + * mpi3mr_print_fault_info - Display fault information

> + * @mrioc: Adapter instance reference

> + *

> + * Display the controller fault information if there is a

> + * controller fault.

> + *

> + * Return: Nothing.

> + */

> +static void mpi3mr_print_fault_info(struct mpi3mr_ioc *mrioc)

> +{

> +	u32 ioc_status, code, code1, code2, code3;

> +

> +	ioc_status = readl(&mrioc->sysif_regs->IOCStatus);

> +

> +	if (ioc_status & MPI3_SYSIF_IOC_STATUS_FAULT) {

> +		code = readl(&mrioc->sysif_regs->Fault);

> +		code1 = readl(&mrioc->sysif_regs->FaultInfo[0]);

> +		code2 = readl(&mrioc->sysif_regs->FaultInfo[1]);

> +		code3 = readl(&mrioc->sysif_regs->FaultInfo[2]);

> +

> +		ioc_info(mrioc,

> +		    "fault code(0x%08X): Additional code: (0x%08X:0x%08X:0x%08X)\n",

> +		    code, code1, code2, code3);

> +	}

> +}

> +

> +/**

> + * mpi3mr_get_iocstate - Get IOC State

> + * @mrioc: Adapter instance reference

> + *

> + * Return a proper IOC state enum based on the IOC status and

> + * IOC configuration and unrcoverable state of the controller.

> + *

> + * Return: Current IOC state.

> + */

> +enum mpi3mr_iocstate mpi3mr_get_iocstate(struct mpi3mr_ioc *mrioc)

> +{

> +	u32 ioc_status, ioc_config;

> +	u8 ready, enabled;

> +

> +	ioc_status = readl(&mrioc->sysif_regs->IOCStatus);

> +	ioc_config = readl(&mrioc->sysif_regs->IOCConfiguration);

> +

> +	if (mrioc->unrecoverable)

> +		return MRIOC_STATE_UNRECOVERABLE;

> +	if (ioc_status & MPI3_SYSIF_IOC_STATUS_FAULT)

> +		return MRIOC_STATE_FAULT;

> +

> +	ready = (ioc_status & MPI3_SYSIF_IOC_STATUS_READY);

> +	enabled = (ioc_config & MPI3_SYSIF_IOC_CONFIG_ENABLE_IOC);

> +

> +	if (ready && enabled)

> +		return MRIOC_STATE_READY;

> +	if ((!ready) && (!enabled))

> +		return MRIOC_STATE_RESET;

> +	if ((!ready) && (enabled))

> +		return MRIOC_STATE_BECOMING_READY;

> +

> +	return MRIOC_STATE_RESET_REQUESTED;

> +}

> +

> +/**

> + * mpi3mr_clear_reset_history - Clear reset history

> + * @mrioc: Adapter instance reference

> + *

> + * Write the reset history bit in IOC Status to clear the bit,

> + * if it is already set.

> + *

> + * Return: Nothing.

> + */

> +static inline void mpi3mr_clear_reset_history(struct mpi3mr_ioc *mrioc)

> +{

> +	u32 ioc_status;

> +

> +	ioc_status = readl(&mrioc->sysif_regs->IOCStatus);

> +	if (ioc_status & MPI3_SYSIF_IOC_STATUS_RESET_HISTORY)

> +		writel(ioc_status, &mrioc->sysif_regs->IOCStatus);

> +

> +}

> +

> +/**

> + * mpi3mr_issue_and_process_mur - Message Unit Reset handler

> + * @mrioc: Adapter instance reference

> + * @reset_reason: Reset reason code

> + *

> + * Issue Message Unit Reset to the controller and wait for it to

> + * be complete.

> + *

> + * Return: 0 on success, -1 on failure.

> + */

> +static int mpi3mr_issue_and_process_mur(struct mpi3mr_ioc *mrioc,

> +					u32 reset_reason)

> +{

> +	u32 ioc_config, timeout, ioc_status;

> +	int retval = -1;

> +

> +	ioc_info(mrioc, "Issuing Message Unit Reset(MUR)\n");

> +	if (mrioc->unrecoverable) {

> +		ioc_info(mrioc, "IOC is unrecoverable MUR not issued\n");

> +		return retval;

> +	}

> +	mpi3mr_clear_reset_history(mrioc);

> +	writel(reset_reason, &mrioc->sysif_regs->Scratchpad[0]);

> +	ioc_config = readl(&mrioc->sysif_regs->IOCConfiguration);

> +	ioc_config &= ~MPI3_SYSIF_IOC_CONFIG_ENABLE_IOC;

> +	writel(ioc_config, &mrioc->sysif_regs->IOCConfiguration);

> +

> +	timeout = mrioc->ready_timeout * 10;

> +	do {

> +		ioc_status = readl(&mrioc->sysif_regs->IOCStatus);

> +		if ((ioc_status & MPI3_SYSIF_IOC_STATUS_RESET_HISTORY)) {

> +			mpi3mr_clear_reset_history(mrioc);

> +			ioc_config =

> +			    readl(&mrioc->sysif_regs->IOCConfiguration);

> +			if (!((ioc_status & MPI3_SYSIF_IOC_STATUS_READY) ||

> +			    (ioc_status & MPI3_SYSIF_IOC_STATUS_FAULT) ||

> +			    (ioc_config & MPI3_SYSIF_IOC_CONFIG_ENABLE_IOC))) {

> +				retval = 0;

> +				break;

> +			}

> +		}

> +		msleep(100);

> +	} while (--timeout);

> +

> +	ioc_status = readl(&mrioc->sysif_regs->IOCStatus);

> +	ioc_config = readl(&mrioc->sysif_regs->IOCConfiguration);

> +

> +	ioc_info(mrioc, "Base IOC Sts/Config after %s MUR is (0x%x)/(0x%x)\n",

> +	    (!retval) ? "successful" : "failed", ioc_status, ioc_config);

> +	return retval;

> +}

> +

> +/**

> + * mpi3mr_bring_ioc_ready - Bring controller to ready state

> + * @mrioc: Adapter instance reference

> + *

> + * Set Enable IOC bit in IOC configuration register and wait for

> + * the controller to become ready.

> + *

> + * Return: 0 on success, -1 on failure.

> + */

> +static int mpi3mr_bring_ioc_ready(struct mpi3mr_ioc *mrioc)

> +{

> +	u32 ioc_config, timeout;

> +	enum mpi3mr_iocstate current_state;

> +

> +	ioc_config = readl(&mrioc->sysif_regs->IOCConfiguration);

> +	ioc_config |= MPI3_SYSIF_IOC_CONFIG_ENABLE_IOC;

> +	writel(ioc_config, &mrioc->sysif_regs->IOCConfiguration);

> +

> +	timeout = mrioc->ready_timeout * 10;

> +	do {

> +		current_state = mpi3mr_get_iocstate(mrioc);

> +		if (current_state == MRIOC_STATE_READY)

> +			return 0;

> +		msleep(100);

> +	} while (--timeout);

> +

> +	return -1;

> +}

> +

> +/**

> + * mpi3mr_set_diagsave - Set diag save bit for snapdump

> + * @mrioc: Adapter reference

> + *

> + * Set diag save bit in IOC configuration register to enable

> + * snapdump.

> + *

> + * Return: Nothing.

> + */

> +static inline void mpi3mr_set_diagsave(struct mpi3mr_ioc *mrioc)

> +{

> +	u32 ioc_config;

> +

> +	ioc_config = readl(&mrioc->sysif_regs->IOCConfiguration);

> +	ioc_config |= MPI3_SYSIF_IOC_CONFIG_DIAG_SAVE;

> +	writel(ioc_config, &mrioc->sysif_regs->IOCConfiguration);

> +}

> +

> +/**

> + * mpi3mr_issue_reset - Issue reset to the controller

> + * @mrioc: Adapter reference

> + * @reset_type: Reset type

> + * @reset_reason: Reset reason code

> + *

> + * TBD

> + *

> + * Return: 0 on success, non-zero on failure.

> + */

> +static int mpi3mr_issue_reset(struct mpi3mr_ioc *mrioc, u16 reset_type,

> +	u32 reset_reason)

> +{

> +	return 0;

> +}

> +

> +/**

> + * mpi3mr_admin_request_post - Post request to admin queue

> + * @mrioc: Adapter reference

> + * @admin_req: MPI3 request

> + * @admin_req_sz: Request size

> + * @ignore_reset: Ignore reset in process

> + *

> + * Post the MPI3 request into admin request queue and

> + * inform the controller, if the queue is full return

> + * appropriate error.

> + *

> + * Return: 0 on success, non-zero on failure.

> + */

> +int mpi3mr_admin_request_post(struct mpi3mr_ioc *mrioc, void *admin_req,

> +	u16 admin_req_sz, u8 ignore_reset)

> +{

> +	u16 areq_pi = 0, areq_ci = 0, max_entries = 0;

> +	int retval = 0;

> +	unsigned long flags;

> +	u8 *areq_entry;

> +

> +

> +	if (mrioc->unrecoverable) {

> +		ioc_err(mrioc, "%s : Unrecoverable controller\n", __func__);

> +		return -EFAULT;

> +	}

> +

> +	spin_lock_irqsave(&mrioc->admin_req_lock, flags);

> +	areq_pi = mrioc->admin_req_pi;

> +	areq_ci = mrioc->admin_req_ci;

> +	max_entries = mrioc->num_admin_req;

> +	if ((areq_ci == (areq_pi + 1)) || ((!areq_ci) &&

> +	    (areq_pi == (max_entries - 1)))) {

> +		ioc_err(mrioc, "AdminReqQ full condition detected\n");

> +		retval = -EAGAIN;

> +		goto out;

> +	}

> +	if (!ignore_reset && mrioc->reset_in_progress) {

> +		ioc_err(mrioc, "AdminReqQ submit reset in progress\n");

> +		retval = -EAGAIN;

> +		goto out;

> +	}

> +	areq_entry = (u8 *)mrioc->admin_req_base +

> +	    (areq_pi * MPI3MR_ADMIN_REQ_FRAME_SZ);

> +	memset(areq_entry, 0, MPI3MR_ADMIN_REQ_FRAME_SZ);

> +	memcpy(areq_entry, (u8 *)admin_req, admin_req_sz);

> +

> +	if (++areq_pi == max_entries)

> +		areq_pi = 0;

> +	mrioc->admin_req_pi = areq_pi;

> +

> +	writel(mrioc->admin_req_pi, &mrioc->sysif_regs->AdminRequestQueuePI);

> +

> +out:

> +	spin_unlock_irqrestore(&mrioc->admin_req_lock, flags);

> +

> +	return retval;

> +}

> +


It might be an idea to have an 'admin' queue structure; keeping all of the
values within the main IOC structure might cause cache misses and degraded
performance.
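
A hypothetical sketch of what such a grouping could look like, using field
names derived from the members quoted above (illustrative only, not part of
the posted patch):

/*
 * Hypothetical grouping of the admin queue bookkeeping, so the hot
 * fields sit together instead of being spread across struct mpi3mr_ioc.
 */
struct mpi3mr_admin_queue {
	u16		num_req;
	u32		req_q_sz;
	u16		req_pi;
	u16		req_ci;
	void		*req_base;
	dma_addr_t	req_dma;
	spinlock_t	req_lock;

	u16		num_replies;
	u32		reply_q_sz;
	u16		reply_ci;
	u8		reply_ephase;
	void		*reply_base;
	dma_addr_t	reply_dma;
} ____cacheline_aligned;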

> +

> +/**

> + * mpi3mr_setup_admin_qpair - Setup admin queue pair

> + * @mrioc: Adapter instance reference

> + *

> + * Allocate memory for admin queue pair if required and register

> + * the admin queue with the controller.

> + *

> + * Return: 0 on success, non-zero on failures.

> + */

> +static int mpi3mr_setup_admin_qpair(struct mpi3mr_ioc *mrioc)

> +{

> +	int retval = 0;

> +	u32 num_admin_entries = 0;

> +

> +	mrioc->admin_req_q_sz = MPI3MR_ADMIN_REQ_Q_SIZE;

> +	mrioc->num_admin_req = mrioc->admin_req_q_sz /

> +	    MPI3MR_ADMIN_REQ_FRAME_SZ;

> +	mrioc->admin_req_ci = mrioc->admin_req_pi = 0;

> +	mrioc->admin_req_base = NULL;

> +

> +	mrioc->admin_reply_q_sz = MPI3MR_ADMIN_REPLY_Q_SIZE;

> +	mrioc->num_admin_replies = mrioc->admin_reply_q_sz /

> +	    MPI3MR_ADMIN_REPLY_FRAME_SZ;

> +	mrioc->admin_reply_ci = 0;

> +	mrioc->admin_reply_ephase = 1;

> +	mrioc->admin_reply_base = NULL;

> +

> +	if (!mrioc->admin_req_base) {

> +		mrioc->admin_req_base = dma_alloc_coherent(&mrioc->pdev->dev,

> +		    mrioc->admin_req_q_sz, &mrioc->admin_req_dma, GFP_KERNEL);

> +

> +		if (!mrioc->admin_req_base) {

> +			retval = -1;

> +			goto out_failed;

> +		}

> +

> +		mrioc->admin_reply_base = dma_alloc_coherent(&mrioc->pdev->dev,

> +		    mrioc->admin_reply_q_sz, &mrioc->admin_reply_dma,

> +		    GFP_KERNEL);

> +

> +		if (!mrioc->admin_reply_base) {

> +			retval = -1;

> +			goto out_failed;

> +		}

> +

> +	}

> +

> +	num_admin_entries = (mrioc->num_admin_replies << 16) |

> +	    (mrioc->num_admin_req);

> +	writel(num_admin_entries, &mrioc->sysif_regs->AdminQueueNumEntries);

> +	mpi3mr_writeq(mrioc->admin_req_dma,

> +	    &mrioc->sysif_regs->AdminRequestQueueAddress);

> +	mpi3mr_writeq(mrioc->admin_reply_dma,

> +	    &mrioc->sysif_regs->AdminReplyQueueAddress);

> +	writel(mrioc->admin_req_pi, &mrioc->sysif_regs->AdminRequestQueuePI);

> +	writel(mrioc->admin_reply_ci, &mrioc->sysif_regs->AdminReplyQueueCI);

> +	return retval;

> +

> +out_failed:

> +

> +	if (mrioc->admin_reply_base) {

> +		dma_free_coherent(&mrioc->pdev->dev, mrioc->admin_reply_q_sz,

> +		    mrioc->admin_reply_base, mrioc->admin_reply_dma);

> +		mrioc->admin_reply_base = NULL;

> +	}

> +	if (mrioc->admin_req_base) {

> +		dma_free_coherent(&mrioc->pdev->dev, mrioc->admin_req_q_sz,

> +		    mrioc->admin_req_base, mrioc->admin_req_dma);

> +		mrioc->admin_req_base = NULL;

> +	}

> +	return retval;

> +}

> +

> +/**

> + * mpi3mr_issue_iocfacts - Send IOC Facts

> + * @mrioc: Adapter instance reference

> + *

> + * Issue IOC Facts MPI request through admin queue and wait for

> + * the completion of it or time out.

> + *

> + * Return: 0 on success, non-zero on failures.

> + */

> +static int mpi3mr_issue_iocfacts(struct mpi3mr_ioc *mrioc,

> +	Mpi3IOCFactsData_t *facts_data)

> +{

> +	Mpi3IOCFactsRequest_t iocfacts_req;

> +	void *data = NULL;

> +	dma_addr_t data_dma;

> +	u32 data_len = sizeof(*facts_data);

> +	int retval = 0;

> +	u8 sgl_flags = MPI3MR_SGEFLAGS_SYSTEM_SIMPLE_END_OF_LIST;

> +

> +	data = dma_alloc_coherent(&mrioc->pdev->dev, data_len, &data_dma,

> +	    GFP_KERNEL);

> +

> +	if (!data) {

> +		retval = -1;

> +		goto out;

> +	}

> +

> +	memset(&iocfacts_req, 0, sizeof(iocfacts_req));

> +	mutex_lock(&mrioc->init_cmds.mutex);

> +	if (mrioc->init_cmds.state & MPI3MR_CMD_PENDING) {

> +		retval = -1;

> +		ioc_err(mrioc, "Issue IOCFacts: Init command is in use\n");

> +		mutex_unlock(&mrioc->init_cmds.mutex);

> +		goto out;

> +	}

> +	mrioc->init_cmds.state = MPI3MR_CMD_PENDING;

> +	mrioc->init_cmds.is_waiting = 1;

> +	mrioc->init_cmds.callback = NULL;

> +	iocfacts_req.HostTag = cpu_to_le16(MPI3MR_HOSTTAG_INITCMDS);

> +	iocfacts_req.Function = MPI3_FUNCTION_IOC_FACTS;

> +

> +	mpi3mr_add_sg_single(&iocfacts_req.SGL, sgl_flags, data_len,

> +	    data_dma);

> +

> +	init_completion(&mrioc->init_cmds.done);

> +	retval = mpi3mr_admin_request_post(mrioc, &iocfacts_req,

> +	    sizeof(iocfacts_req), 1);

> +	if (retval) {

> +		ioc_err(mrioc, "Issue IOCFacts: Admin Post failed\n");

> +		goto out_unlock;

> +	}

> +	wait_for_completion_timeout(&mrioc->init_cmds.done,

> +	    (MPI3MR_INTADMCMD_TIMEOUT * HZ));

> +	if (!(mrioc->init_cmds.state & MPI3MR_CMD_COMPLETE)) {

> +		ioc_err(mrioc, "Issue IOCFacts: command timed out\n");

> +		mpi3mr_set_diagsave(mrioc);

> +		mpi3mr_issue_reset(mrioc,

> +		    MPI3_SYSIF_HOST_DIAG_RESET_ACTION_DIAG_FAULT,

> +		    MPI3MR_RESET_FROM_IOCFACTS_TIMEOUT);

> +		mrioc->unrecoverable = 1;

> +		retval = -1;

> +		goto out_unlock;

> +	}

> +	if ((mrioc->init_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK)

> +	    != MPI3_IOCSTATUS_SUCCESS) {

> +		ioc_err(mrioc,

> +		    "Issue IOCFacts: Failed IOCStatus(0x%04x) Loginfo(0x%08x)\n",

> +		    (mrioc->init_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK),

> +		    mrioc->init_cmds.ioc_loginfo);

> +		retval = -1;

> +		goto out_unlock;

> +	}

> +	memcpy(facts_data, (u8 *)data, data_len);

> +out_unlock:

> +	mrioc->init_cmds.state = MPI3MR_CMD_NOTUSED;

> +	mutex_unlock(&mrioc->init_cmds.mutex);

> +

> +out:

> +	if (data)

> +		dma_free_coherent(&mrioc->pdev->dev, data_len, data, data_dma);

> +

> +	return retval;

> +}

> +

> +/**

> + * mpi3mr_check_reset_dma_mask - Check and set DMA mask as per IOC facts

> + * @mrioc: Adapter instance reference

> + *

> + * Check whether the new DMA mask requested through IOCFacts by

> + * firmware needs to be set, and if so, set it.

> + *

> + * Return: 0 on success, non-zero on failure.

> + */

> +static inline int mpi3mr_check_reset_dma_mask(struct mpi3mr_ioc *mrioc)

> +{

> +	struct pci_dev *pdev = mrioc->pdev;

> +	int r;

> +	u64 facts_dma_mask = DMA_BIT_MASK(mrioc->facts.dma_mask);

> +

> +	if (!mrioc->facts.dma_mask || (mrioc->dma_mask <= facts_dma_mask))

> +		return 0;

> +

> +	ioc_info(mrioc, "Changing DMA mask from 0x%016llx to 0x%016llx\n",

> +	    mrioc->dma_mask, facts_dma_mask);

> +

> +	r = dma_set_mask_and_coherent(&pdev->dev, facts_dma_mask);

> +	if (r) {

> +		ioc_err(mrioc, "Setting DMA mask to 0x%016llx failed: %d\n",

> +		    facts_dma_mask, r);

> +		return r;

> +	}

> +	mrioc->dma_mask = facts_dma_mask;

> +	return r;

> +}

> +/**

> + * mpi3mr_process_factsdata - Process IOC facts data

> + * @mrioc: Adapter instance reference

> + *

> + * Convert IOC facts data into cpu endianness and cache it in

> + * the driver.

> + *

> + * Return: Nothing.

> + */

> +static void mpi3mr_process_factsdata(struct mpi3mr_ioc *mrioc,

> +	Mpi3IOCFactsData_t *facts_data)

> +{

> +	u32 ioc_config, req_sz, facts_flags;

> +

> +	if ((le16_to_cpu(facts_data->IOCFactsDataLength)) !=

> +	    (sizeof(*facts_data)/4)) {

> +		ioc_warn(mrioc,

> +		    "IOCFactsdata length mismatch driver_sz(%ld) firmware_sz(%d)\n",

> +		    sizeof(*facts_data),

> +		    le16_to_cpu(facts_data->IOCFactsDataLength) * 4);

> +	}

> +

> +	ioc_config = readl(&mrioc->sysif_regs->IOCConfiguration);

> +	req_sz = 1 << ((ioc_config & MPI3_SYSIF_IOC_CONFIG_OPER_REQ_ENT_SZ) >>

> +	    MPI3_SYSIF_IOC_CONFIG_OPER_REQ_ENT_SZ_SHIFT);

> +	if (le16_to_cpu(facts_data->IOCRequestFrameSize) != (req_sz/4)) {

> +		ioc_err(mrioc,

> +		    "IOCFacts data reqFrameSize mismatch hw_size(%d) firmware_sz(%d)\n",

> +		    req_sz/4, le16_to_cpu(facts_data->IOCRequestFrameSize));

> +	}

> +

> +	memset(&mrioc->facts, 0, sizeof(mrioc->facts));

> +

> +	facts_flags = le32_to_cpu(facts_data->Flags);

> +	mrioc->facts.op_req_sz = req_sz;

> +	mrioc->op_reply_desc_sz = 1 << ((ioc_config &

> +	    MPI3_SYSIF_IOC_CONFIG_OPER_RPY_ENT_SZ) >>

> +	    MPI3_SYSIF_IOC_CONFIG_OPER_RPY_ENT_SZ_SHIFT);

> +

> +	mrioc->facts.ioc_num = facts_data->IOCNumber;

> +	mrioc->facts.who_init = facts_data->WhoInit;

> +	mrioc->facts.max_msix_vectors = le16_to_cpu(facts_data->MaxMSIxVectors);

> +	mrioc->facts.personality = (facts_flags &

> +	    MPI3_IOCFACTS_FLAGS_PERSONALITY_MASK);

> +	mrioc->facts.dma_mask = (facts_flags &

> +	    MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_MASK) >>

> +	    MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_SHIFT;

> +	mrioc->facts.protocol_flags = facts_data->ProtocolFlags;

> +	mrioc->facts.mpi_version = le32_to_cpu(facts_data->MPIVersion.Word);

> +	mrioc->facts.max_reqs = le16_to_cpu(facts_data->MaxOutstandingRequest);

> +	mrioc->facts.product_id = le16_to_cpu(facts_data->ProductID);

> +	mrioc->facts.reply_sz = le16_to_cpu(facts_data->ReplyFrameSize) * 4;

> +	mrioc->facts.exceptions = le16_to_cpu(facts_data->IOCExceptions);

> +	mrioc->facts.max_perids = le16_to_cpu(facts_data->MaxPersistentID);

> +	mrioc->facts.max_pds = le16_to_cpu(facts_data->MaxPDs);

> +	mrioc->facts.max_vds = le16_to_cpu(facts_data->MaxVDs);

> +	mrioc->facts.max_hpds = le16_to_cpu(facts_data->MaxHostPDs);

> +	mrioc->facts.max_advhpds = le16_to_cpu(facts_data->MaxAdvancedHostPDs);

> +	mrioc->facts.max_raidpds = le16_to_cpu(facts_data->MaxRAIDPDs);

> +	mrioc->facts.max_nvme = le16_to_cpu(facts_data->MaxNVMe);

> +	mrioc->facts.max_pcieswitches =

> +	    le16_to_cpu(facts_data->MaxPCIeSwitches);

> +	mrioc->facts.max_sasexpanders =

> +	    le16_to_cpu(facts_data->MaxSASExpanders);

> +	mrioc->facts.max_sasinitiators =

> +	    le16_to_cpu(facts_data->MaxSASInitiators);

> +	mrioc->facts.max_enclosures = le16_to_cpu(facts_data->MaxEnclosures);

> +	mrioc->facts.min_devhandle = le16_to_cpu(facts_data->MinDevHandle);

> +	mrioc->facts.max_devhandle = le16_to_cpu(facts_data->MaxDevHandle);

> +	mrioc->facts.max_op_req_q =

> +	    le16_to_cpu(facts_data->MaxOperationalRequestQueues);

> +	mrioc->facts.max_op_reply_q =

> +	    le16_to_cpu(facts_data->MaxOperationalReplyQueues);

> +	mrioc->facts.ioc_capabilities =

> +	    le32_to_cpu(facts_data->IOCCapabilities);

> +	mrioc->facts.fw_ver.build_num =

> +	    le16_to_cpu(facts_data->FWVersion.BuildNum);

> +	mrioc->facts.fw_ver.cust_id =

> +	    le16_to_cpu(facts_data->FWVersion.CustomerID);

> +	mrioc->facts.fw_ver.ph_minor = facts_data->FWVersion.PhaseMinor;

> +	mrioc->facts.fw_ver.ph_major = facts_data->FWVersion.PhaseMajor;

> +	mrioc->facts.fw_ver.gen_minor = facts_data->FWVersion.GenMinor;

> +	mrioc->facts.fw_ver.gen_major = facts_data->FWVersion.GenMajor;

> +	mrioc->msix_count = min_t(int, mrioc->msix_count,

> +	    mrioc->facts.max_msix_vectors);

> +	mrioc->facts.sge_mod_mask = facts_data->SGEModifierMask;

> +	mrioc->facts.sge_mod_value = facts_data->SGEModifierValue;

> +	mrioc->facts.sge_mod_shift = facts_data->SGEModifierShift;

> +	mrioc->facts.shutdown_timeout =

> +	    le16_to_cpu(facts_data->ShutdownTimeout);

> +

> +	ioc_info(mrioc, "ioc_num(%d), maxopQ(%d), maxopRepQ(%d), maxdh(%d),",

> +	    mrioc->facts.ioc_num, mrioc->facts.max_op_req_q,

> +	    mrioc->facts.max_op_reply_q, mrioc->facts.max_devhandle);

> +	ioc_info(mrioc,

> +	    "maxreqs(%d), mindh(%d) maxPDs(%d) maxvectors(%d) maxperids(%d)\n",

> +	    mrioc->facts.max_reqs, mrioc->facts.min_devhandle,

> +	    mrioc->facts.max_pds, mrioc->facts.max_msix_vectors,

> +	    mrioc->facts.max_perids);

> +	ioc_info(mrioc, "SGEModMask 0x%x SGEModVal 0x%x SGEModShift 0x%x ",

> +	    mrioc->facts.sge_mod_mask, mrioc->facts.sge_mod_value,

> +	    mrioc->facts.sge_mod_shift);

> +	ioc_info(mrioc, "DMA Mask %d InitialPE Status 0x%x\n",

> +	    mrioc->facts.dma_mask, (facts_flags &

> +	    MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_MASK));

> +

> +	mrioc->max_host_ios = mrioc->facts.max_reqs - MPI3MR_INTERNAL_CMDS_RESVD;

> +

> +	if (reset_devices)

> +		mrioc->max_host_ios = min_t(int, mrioc->max_host_ios,

> +		    MPI3MR_HOST_IOS_KDUMP);

> +

> +}

> +

> +/**

> + * mpi3mr_alloc_reply_sense_bufs - Allocate reply and sense buffers

> + * @mrioc: Adapter instance reference

> + *

> + * Allocate and initialize the reply free buffers, sense

> + * buffers, reply free queue and sense buffer queue.

> + *

> + * Return: 0 on success, non-zero on failures.

> + */

> +static int mpi3mr_alloc_reply_sense_bufs(struct mpi3mr_ioc *mrioc)

> +{

> +	int retval = 0;

> +	u32 sz, i;

> +	dma_addr_t phy_addr;

> +

> +	if (mrioc->init_cmds.reply)

> +		goto post_reply_sbuf;

> +

> +	mrioc->init_cmds.reply = kzalloc(mrioc->facts.reply_sz, GFP_KERNEL);

> +	if (!mrioc->init_cmds.reply)

> +		goto out_failed;

> +

> +

> +	mrioc->num_reply_bufs = mrioc->facts.max_reqs + MPI3MR_NUM_EVT_REPLIES;

> +	mrioc->reply_free_qsz = mrioc->num_reply_bufs + 1;

> +	mrioc->num_sense_bufs = mrioc->facts.max_reqs / MPI3MR_SENSEBUF_FACTOR;

> +	mrioc->sense_buf_q_sz = mrioc->num_sense_bufs + 1;

> +

> +	/* reply buffer pool, 16 byte align */

> +	sz = mrioc->num_reply_bufs * mrioc->facts.reply_sz;

> +	mrioc->reply_buf_pool = dma_pool_create("reply_buf pool",

> +	    &mrioc->pdev->dev, sz, 16, 0);

> +	if (!mrioc->reply_buf_pool) {

> +		ioc_err(mrioc, "reply buf pool: dma_pool_create failed\n");

> +		goto out_failed;

> +	}

> +

> +	mrioc->reply_buf = dma_pool_zalloc(mrioc->reply_buf_pool, GFP_KERNEL,

> +	    &mrioc->reply_buf_dma);

> +	if (!mrioc->reply_buf)

> +		goto out_failed;

> +

> +	mrioc->reply_buf_dma_max_address = mrioc->reply_buf_dma + sz;

> +

> +	/* reply free queue, 8 byte align */

> +	sz = mrioc->reply_free_qsz * 8;

> +	mrioc->reply_free_q_pool = dma_pool_create("reply_free_q pool",

> +	    &mrioc->pdev->dev, sz, 8, 0);

> +	if (!mrioc->reply_free_q_pool) {

> +		ioc_err(mrioc, "reply_free_q pool: dma_pool_create failed\n");

> +		goto out_failed;

> +	}

> +	mrioc->reply_free_q = dma_pool_zalloc(mrioc->reply_free_q_pool,

> +	    GFP_KERNEL, &mrioc->reply_free_q_dma);

> +	if (!mrioc->reply_free_q)

> +		goto out_failed;

> +

> +	/* sense buffer pool,  4 byte align */

> +	sz = mrioc->num_sense_bufs * MPI3MR_SENSEBUF_SZ;

> +	mrioc->sense_buf_pool = dma_pool_create("sense_buf pool",

> +	    &mrioc->pdev->dev, sz, 4, 0);

> +	if (!mrioc->sense_buf_pool) {

> +		ioc_err(mrioc, "sense_buf pool: dma_pool_create failed\n");

> +		goto out_failed;

> +	}

> +	mrioc->sense_buf = dma_pool_zalloc(mrioc->sense_buf_pool, GFP_KERNEL,

> +	    &mrioc->sense_buf_dma);

> +	if (!mrioc->sense_buf)

> +		goto out_failed;

> +

> +	/* sense buffer queue, 8 byte align */

> +	sz = mrioc->sense_buf_q_sz * 8;

> +	mrioc->sense_buf_q_pool = dma_pool_create("sense_buf_q pool",

> +	    &mrioc->pdev->dev, sz, 8, 0);

> +	if (!mrioc->sense_buf_q_pool) {

> +		ioc_err(mrioc, "sense_buf_q pool: dma_pool_create failed\n");

> +		goto out_failed;

> +	}

> +	mrioc->sense_buf_q = dma_pool_zalloc(mrioc->sense_buf_q_pool,

> +	    GFP_KERNEL, &mrioc->sense_buf_q_dma);

> +	if (!mrioc->sense_buf_q)

> +		goto out_failed;

> +

> +post_reply_sbuf:

> +	sz = mrioc->num_reply_bufs * mrioc->facts.reply_sz;

> +	ioc_info(mrioc,

> +	    "reply buf pool(0x%p): depth(%d), frame_size(%d), pool_size(%d kB), reply_dma(0x%llx)\n",

> +	    mrioc->reply_buf, mrioc->num_reply_bufs, mrioc->facts.reply_sz,

> +	    (sz / 1024), (unsigned long long)mrioc->reply_buf_dma);

> +	sz = mrioc->reply_free_qsz * 8;

> +	ioc_info(mrioc,

> +	    "reply_free_q pool(0x%p): depth(%d), frame_size(%d), pool_size(%d kB), reply_dma(0x%llx)\n",

> +	    mrioc->reply_free_q, mrioc->reply_free_qsz, 8, (sz / 1024),

> +	    (unsigned long long)mrioc->reply_free_q_dma);

> +	sz = mrioc->num_sense_bufs * MPI3MR_SENSEBUF_SZ;

> +	ioc_info(mrioc,

> +	    "sense_buf pool(0x%p): depth(%d), frame_size(%d), pool_size(%d kB), sense_dma(0x%llx)\n",

> +	    mrioc->sense_buf, mrioc->num_sense_bufs, MPI3MR_SENSEBUF_SZ,

> +	    (sz / 1024), (unsigned long long)mrioc->sense_buf_dma);

> +	sz = mrioc->sense_buf_q_sz * 8;

> +	ioc_info(mrioc,

> +	    "sense_buf_q pool(0x%p): depth(%d), frame_size(%d), pool_size(%d kB), sense_dma(0x%llx)\n",

> +	    mrioc->sense_buf_q, mrioc->sense_buf_q_sz, 8, (sz / 1024),

> +	    (unsigned long long)mrioc->sense_buf_q_dma);

> +

> +	/* initialize Reply buffer Queue */

> +	for (i = 0, phy_addr = mrioc->reply_buf_dma;

> +	    i < mrioc->num_reply_bufs; i++, phy_addr += mrioc->facts.reply_sz)

> +		mrioc->reply_free_q[i] = cpu_to_le64(phy_addr);

> +	mrioc->reply_free_q[i] = cpu_to_le64(0);

> +

> +	/* initialize Sense Buffer Queue */

> +	for (i = 0, phy_addr = mrioc->sense_buf_dma;

> +	    i < mrioc->num_sense_bufs; i++, phy_addr += MPI3MR_SENSEBUF_SZ)

> +		mrioc->sense_buf_q[i] = cpu_to_le64(phy_addr);

> +	mrioc->sense_buf_q[i] = cpu_to_le64(0);

> +	return retval;

> +

> +out_failed:

> +	retval = -1;

> +	return retval;

> +}

> +

> +/**

> + * mpi3mr_issue_iocinit - Send IOC Init

> + * @mrioc: Adapter instance reference

> + *

> + * Issue IOC Init MPI request through admin queue and wait for

> + * the completion of it or time out.

> + *

> + * Return: 0 on success, non-zero on failures.

> + */

> +static int mpi3mr_issue_iocinit(struct mpi3mr_ioc *mrioc)

> +{

> +	Mpi3IOCInitRequest_t iocinit_req;

> +	Mpi3DriverInfoLayout_t *drv_info;

> +	dma_addr_t data_dma;

> +	u32 data_len = sizeof(*drv_info);

> +	int retval = 0;

> +	ktime_t current_time;

> +

> +	drv_info = dma_alloc_coherent(&mrioc->pdev->dev, data_len, &data_dma,

> +	    GFP_KERNEL);

> +	if (!drv_info) {

> +		retval = -1;

> +		goto out;

> +	}

> +	drv_info->InformationLength = cpu_to_le32(data_len);

> +	strcpy(drv_info->DriverSignature, "Broadcom");

> +	strcpy(drv_info->OsName, utsname()->sysname);

> +	drv_info->OsName[sizeof(drv_info->OsName)-1] = 0;

> +	strcpy(drv_info->OsVersion, utsname()->release);

> +	drv_info->OsVersion[sizeof(drv_info->OsVersion)-1] = 0;

> +	strcpy(drv_info->DriverName, MPI3MR_DRIVER_NAME);

> +	strcpy(drv_info->DriverVersion, MPI3MR_DRIVER_VERSION);

> +	strcpy(drv_info->DriverReleaseDate, MPI3MR_DRIVER_RELDATE);

> +	drv_info->DriverCapabilities = 0;

> +	memcpy((u8 *)&mrioc->driver_info, (u8 *)drv_info,

> +	    sizeof(mrioc->driver_info));

> +

> +	memset(&iocinit_req, 0, sizeof(iocinit_req));

> +	mutex_lock(&mrioc->init_cmds.mutex);

> +	if (mrioc->init_cmds.state & MPI3MR_CMD_PENDING) {

> +		retval = -1;

> +		ioc_err(mrioc, "Issue IOCInit: Init command is in use\n");

> +		mutex_unlock(&mrioc->init_cmds.mutex);

> +		goto out;

> +	}

> +	mrioc->init_cmds.state = MPI3MR_CMD_PENDING;

> +	mrioc->init_cmds.is_waiting = 1;

> +	mrioc->init_cmds.callback = NULL;

> +	iocinit_req.HostTag = cpu_to_le16(MPI3MR_HOSTTAG_INITCMDS);

> +	iocinit_req.Function = MPI3_FUNCTION_IOC_INIT;

> +	iocinit_req.MPIVersion.Struct.Dev = MPI3_VERSION_DEV;

> +	iocinit_req.MPIVersion.Struct.Unit = MPI3_VERSION_UNIT;

> +	iocinit_req.MPIVersion.Struct.Major = MPI3_VERSION_MAJOR;

> +	iocinit_req.MPIVersion.Struct.Minor = MPI3_VERSION_MINOR;

> +	iocinit_req.WhoInit = MPI3_WHOINIT_HOST_DRIVER;

> +	iocinit_req.ReplyFreeQueueDepth = cpu_to_le16(mrioc->reply_free_qsz);

> +	iocinit_req.ReplyFreeQueueAddress =

> +	    cpu_to_le64(mrioc->reply_free_q_dma);

> +	iocinit_req.SenseBufferLength = cpu_to_le16(MPI3MR_SENSEBUF_SZ);

> +	iocinit_req.SenseBufferFreeQueueDepth =

> +	    cpu_to_le16(mrioc->sense_buf_q_sz);

> +	iocinit_req.SenseBufferFreeQueueAddress =

> +	    cpu_to_le64(mrioc->sense_buf_q_dma);

> +	iocinit_req.DriverInformationAddress = cpu_to_le64(data_dma);

> +

> +	current_time = ktime_get_real();

> +	iocinit_req.TimeStamp = cpu_to_le64(ktime_to_ms(current_time));

> +

> +	init_completion(&mrioc->init_cmds.done);

> +	retval = mpi3mr_admin_request_post(mrioc, &iocinit_req,

> +	    sizeof(iocinit_req), 1);

> +	if (retval) {

> +		ioc_err(mrioc, "Issue IOCInit: Admin Post failed\n");

> +		goto out_unlock;

> +	}

> +	wait_for_completion_timeout(&mrioc->init_cmds.done,

> +	    (MPI3MR_INTADMCMD_TIMEOUT * HZ));

> +	if (!(mrioc->init_cmds.state & MPI3MR_CMD_COMPLETE)) {

> +		mpi3mr_set_diagsave(mrioc);

> +		mpi3mr_issue_reset(mrioc,

> +		    MPI3_SYSIF_HOST_DIAG_RESET_ACTION_DIAG_FAULT,

> +		    MPI3MR_RESET_FROM_IOCINIT_TIMEOUT);

> +		mrioc->unrecoverable = 1;

> +		ioc_err(mrioc, "Issue IOCInit: command timed out\n");

> +		retval = -1;

> +		goto out_unlock;

> +	}

> +	if ((mrioc->init_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK)

> +	    != MPI3_IOCSTATUS_SUCCESS) {

> +		ioc_err(mrioc,

> +		    "Issue IOCInit: Failed IOCStatus(0x%04x) Loginfo(0x%08x)\n",

> +		    (mrioc->init_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK),

> +		    mrioc->init_cmds.ioc_loginfo);

> +		retval = -1;

> +		goto out_unlock;

> +	}

> +

> +out_unlock:

> +	mrioc->init_cmds.state = MPI3MR_CMD_NOTUSED;

> +	mutex_unlock(&mrioc->init_cmds.mutex);

> +

> +out:

> +	if (drv_info)

> +		dma_free_coherent(&mrioc->pdev->dev, data_len, drv_info,

> +		    data_dma);

> +

> +	return retval;

> +}

> +

> +

> +/**

> + * mpi3mr_alloc_chain_bufs - Allocate chain buffers

> + * @mrioc: Adapter instance reference

> + *

> + * Allocate chain buffers and set a bitmap to indicate free

> + * chain buffers. Chain buffers are used to pass the SGE

> + * information along with MPI3 SCSI IO requests for host I/O.

> + *

> + * Return: 0 on success, non-zero on failure

> + */

> +static int mpi3mr_alloc_chain_bufs(struct mpi3mr_ioc *mrioc)

> +{

> +	int retval = 0;

> +	u32 sz, i;

> +	u16 num_chains;

> +

> +	num_chains = mrioc->max_host_ios/MPI3MR_CHAINBUF_FACTOR;

> +

> +	mrioc->chain_buf_count = num_chains;

> +	sz = sizeof(struct chain_element) * num_chains;

> +	mrioc->chain_sgl_list = kzalloc(sz, GFP_KERNEL);

> +	if (!mrioc->chain_sgl_list)

> +		goto out_failed;

> +

> +	sz = MPI3MR_PAGE_SIZE_4K;

> +	mrioc->chain_buf_pool = dma_pool_create("chain_buf pool",

> +	    &mrioc->pdev->dev, sz, 16, 0);

> +	if (!mrioc->chain_buf_pool) {

> +		ioc_err(mrioc, "chain buf pool: dma_pool_create failed\n");

> +		goto out_failed;

> +	}

> +

> +	for (i = 0; i < num_chains; i++) {

> +		mrioc->chain_sgl_list[i].addr =

> +		    dma_pool_zalloc(mrioc->chain_buf_pool, GFP_KERNEL,

> +		    &mrioc->chain_sgl_list[i].dma_addr);

> +

> +		if (!mrioc->chain_sgl_list[i].addr)

> +			goto out_failed;

> +	}

> +	mrioc->chain_bitmap_sz = num_chains / 8;

> +	if (num_chains % 8)

> +		mrioc->chain_bitmap_sz++;

> +	mrioc->chain_bitmap = kzalloc(mrioc->chain_bitmap_sz, GFP_KERNEL);

> +	if (!mrioc->chain_bitmap)

> +		goto out_failed;

> +	return retval;

> +out_failed:

> +	retval = -1;

> +	return retval;

> +}

> +

> +

> +/**

> + * mpi3mr_cleanup_resources - Free PCI resources

> + * @mrioc: Adapter instance reference

> + *

> + * Unmap PCI device memory and disable PCI device.

> + *

> + * Return: 0 on success and non-zero on failure.

> + */

> +void mpi3mr_cleanup_resources(struct mpi3mr_ioc *mrioc)

> +{

> +	struct pci_dev *pdev = mrioc->pdev;

> +

> +	mpi3mr_cleanup_isr(mrioc);

> +

> +	if (mrioc->sysif_regs) {

> +		iounmap(mrioc->sysif_regs);

> +		mrioc->sysif_regs = NULL;

> +	}

> +

> +	if (pci_is_enabled(pdev)) {

> +		if (mrioc->bars)

> +			pci_release_selected_regions(pdev, mrioc->bars);

> +		pci_disable_device(pdev);

> +	}

> +}

> +

> +/**

> + * mpi3mr_setup_resources - Enable PCI resources

> + * @mrioc: Adapter instance reference

> + *

> + * Enable PCI device memory, MSI-x registers and set DMA mask.

> + *

> + * Return: 0 on success and non-zero on failure.

> + */

> +int mpi3mr_setup_resources(struct mpi3mr_ioc *mrioc)

> +{

> +	struct pci_dev *pdev = mrioc->pdev;

> +	u32 memap_sz = 0;

> +	int i, retval = 0, capb = 0;

> +	u16 message_control;

> +	u64 dma_mask = mrioc->dma_mask ? mrioc->dma_mask :

> +	    (((dma_get_required_mask(&pdev->dev) > DMA_BIT_MASK(32)) &&

> +	    (sizeof(dma_addr_t) > 4)) ? DMA_BIT_MASK(64):DMA_BIT_MASK(32));

> +

> +	if (pci_enable_device_mem(pdev)) {

> +		ioc_err(mrioc, "pci_enable_device_mem: failed\n");

> +		retval = -ENODEV;

> +		goto out_failed;

> +	}

> +

> +	capb = pci_find_capability(pdev, PCI_CAP_ID_MSIX);

> +	if (!capb) {

> +		ioc_err(mrioc, "Unable to find MSI-X Capabilities\n");

> +		retval = -ENODEV;

> +		goto out_failed;

> +	}

> +	mrioc->bars = pci_select_bars(pdev, IORESOURCE_MEM);

> +

> +	if (pci_request_selected_regions(pdev, mrioc->bars,

> +	    mrioc->driver_name)) {

> +		ioc_err(mrioc, "pci_request_selected_regions: failed\n");

> +		retval = -ENODEV;

> +		goto out_failed;

> +	}

> +

> +	for (i = 0; (i < DEVICE_COUNT_RESOURCE); i++) {

> +		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM) {

> +			mrioc->sysif_regs_phys = pci_resource_start(pdev, i);

> +			memap_sz = pci_resource_len(pdev, i);

> +			mrioc->sysif_regs =

> +			    ioremap(mrioc->sysif_regs_phys, memap_sz);

> +			break;

> +		}

> +	}

> +

> +	pci_set_master(pdev);

> +

> +	retval = dma_set_mask_and_coherent(&pdev->dev, dma_mask);

> +	if (retval) {

> +		if (dma_mask != DMA_BIT_MASK(32)) {

> +			ioc_warn(mrioc, "Setting 64 bit DMA mask failed\n");

> +			dma_mask = DMA_BIT_MASK(32);

> +			retval = dma_set_mask_and_coherent(&pdev->dev,

> +			    dma_mask);

> +		}

> +		if (retval) {

> +			mrioc->dma_mask = 0;

> +			ioc_err(mrioc, "Setting 32 bit DMA mask also failed\n");

> +			goto out_failed;

> +		}

> +	}

> +	mrioc->dma_mask = dma_mask;

> +

> +	if (mrioc->sysif_regs == NULL) {

> +		ioc_err(mrioc,

> +		    "Unable to map adapter memory or resource not found\n");

> +		retval = -EINVAL;

> +		goto out_failed;

> +	}

> +

> +	pci_read_config_word(pdev, capb + 2, &message_control);

> +	mrioc->msix_count = (message_control & 0x3FF) + 1;

> +

> +	pci_save_state(pdev);

> +

> +	pci_set_drvdata(pdev, mrioc->shost);

> +

> +	mpi3mr_ioc_disable_intr(mrioc);

> +

> +	ioc_info(mrioc, "iomem(0x%016llx), mapped(0x%p), size(%d)\n",

> +	    (unsigned long long)mrioc->sysif_regs_phys,

> +	    mrioc->sysif_regs, memap_sz);

> +	ioc_info(mrioc, "Number of MSI-X vectors found in capabilities: (%d)\n",

> +	    mrioc->msix_count);

> +	return retval;

> +

> +out_failed:

> +	mpi3mr_cleanup_resources(mrioc);

> +	return retval;

> +}

> +

> +/**

> + * mpi3mr_init_ioc - Initialize the controller

> + * @mrioc: Adapter instance reference

> + *

> + * This is the controller initialization routine, executed either

> + * after soft reset or from pci probe callback.

> + * Setup the required resources, memory map the controller

> + * registers, create admin and operational reply queue pairs,

> + * allocate required memory for reply pool, sense buffer pool,

> + * issue IOC init request to the firmware, unmask the events and

> + * issue port enable to discover SAS/SATA/NVMe devices and RAID

> + * volumes.

> + *

> + * Return: 0 on success and non-zero on failure.

> + */

> +int mpi3mr_init_ioc(struct mpi3mr_ioc *mrioc)

> +{

> +	int retval = 0;

> +	enum mpi3mr_iocstate ioc_state;

> +	u64 base_info;

> +	u32 timeout;

> +	u32 ioc_status, ioc_config;

> +	Mpi3IOCFactsData_t facts_data;

> +

> +	mrioc->change_count = 0;

> +	mrioc->cpu_count = num_online_cpus();


What about CPU hotplug?
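num_online_cpus() is only a snapshot taken at probe time. One option
would be the dynamic CPU hotplug states; a rough sketch only (the
handler names are hypothetical, and a per-adapter variant would rather
use cpuhp_setup_state_multi()):

static int mpi3mr_cpu_online(unsigned int cpu)
{
	/* a real handler would also rebalance the reply queue to
	 * CPU mapping here */
	return 0;
}

static int mpi3mr_cpu_offline(unsigned int cpu)
{
	return 0;
}

	/* during init, instead of a one-shot num_online_cpus(): */
	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "scsi/mpi3mr:online",
				mpi3mr_cpu_online, mpi3mr_cpu_offline);
	if (ret < 0)
		return ret;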

> +	retval = mpi3mr_setup_resources(mrioc);

> +	if (retval) {

> +		ioc_err(mrioc, "Failed to setup resources:error %d\n",

> +		    retval);

> +		goto out_nocleanup;

> +	}

> +	ioc_status = readl(&mrioc->sysif_regs->IOCStatus);

> +	ioc_config = readl(&mrioc->sysif_regs->IOCConfiguration);

> +

> +	ioc_info(mrioc, "SOD status %x configuration %x\n",

> +	    ioc_status, ioc_config);

> +

> +	base_info = readq(&mrioc->sysif_regs->IOCInformation);

> +	ioc_info(mrioc, "SOD base_info %llx\n",	base_info);

> +

> +	/*The timeout value is in 2sec unit, changing it to seconds*/

> +	mrioc->ready_timeout =

> +	    ((base_info & MPI3_SYSIF_IOC_INFO_LOW_TIMEOUT_MASK) >>

> +	    MPI3_SYSIF_IOC_INFO_LOW_TIMEOUT_SHIFT) * 2;

> +

> +	ioc_info(mrioc, "IOC ready timeout %d\n", mrioc->ready_timeout);

> +

> +	ioc_state = mpi3mr_get_iocstate(mrioc);

> +	ioc_info(mrioc, "IOC in %s state during detection\n",

> +	    mpi3mr_iocstate_name(ioc_state));

> +

> +	if (ioc_state == MRIOC_STATE_BECOMING_READY ||

> +			ioc_state == MRIOC_STATE_RESET_REQUESTED) {

> +		timeout = mrioc->ready_timeout * 10;

> +		do {

> +			msleep(100);

> +		} while (--timeout);

> +

> +		ioc_state = mpi3mr_get_iocstate(mrioc);

> +		ioc_info(mrioc,

> +			"IOC in %s state after waiting for reset time\n",

> +			mpi3mr_iocstate_name(ioc_state));

> +	}

> +

> +	if (ioc_state == MRIOC_STATE_READY) {

> +		retval = mpi3mr_issue_and_process_mur(mrioc,

> +		    MPI3MR_RESET_FROM_BRINGUP);

> +		if (retval) {

> +			ioc_err(mrioc, "Failed to MU reset IOC error %d\n",

> +			    retval);

> +		}

> +		ioc_state = mpi3mr_get_iocstate(mrioc);

> +	}

> +	if (ioc_state != MRIOC_STATE_RESET) {

> +		mpi3mr_print_fault_info(mrioc);

> +		retval = mpi3mr_issue_reset(mrioc,

> +		    MPI3_SYSIF_HOST_DIAG_RESET_ACTION_SOFT_RESET,

> +		    MPI3MR_RESET_FROM_BRINGUP);

> +		if (retval) {

> +			ioc_err(mrioc,

> +			    "%s :Failed to soft reset IOC error %d\n",

> +			    __func__, retval);

> +			goto out_failed;

> +		}

> +	}

> +	ioc_state = mpi3mr_get_iocstate(mrioc);

> +	if (ioc_state != MRIOC_STATE_RESET) {

> +		ioc_err(mrioc, "Cannot bring IOC to reset state\n");

> +		goto out_failed;

> +	}

> +

> +	retval = mpi3mr_setup_admin_qpair(mrioc);

> +	if (retval) {

> +		ioc_err(mrioc, "Failed to setup admin Qs: error %d\n",

> +		    retval);

> +		goto out_failed;

> +	}

> +

> +	retval = mpi3mr_bring_ioc_ready(mrioc);

> +	if (retval) {

> +		ioc_err(mrioc, "Failed to bring ioc ready: error %d\n",

> +		    retval);

> +		goto out_failed;

> +	}

> +

> +	retval = mpi3mr_setup_isr(mrioc, 1);

> +	if (retval) {

> +		ioc_err(mrioc, "Failed to setup ISR error %d\n",

> +		    retval);

> +		goto out_failed;

> +	}

> +

> +	retval = mpi3mr_issue_iocfacts(mrioc, &facts_data);

> +	if (retval) {

> +		ioc_err(mrioc, "Failed to Issue IOC Facts %d\n",

> +		    retval);

> +		goto out_failed;

> +	}

> +

> +	mpi3mr_process_factsdata(mrioc, &facts_data);

> +	retval = mpi3mr_check_reset_dma_mask(mrioc);

> +	if (retval) {

> +		ioc_err(mrioc, "Resetting dma mask failed %d\n",

> +		    retval);

> +		goto out_failed;

> +	}

> +

> +	retval = mpi3mr_alloc_reply_sense_bufs(mrioc);

> +	if (retval) {

> +		ioc_err(mrioc,

> +		    "%s :Failed to allocated reply sense buffers %d\n",

> +		    __func__, retval);

> +		goto out_failed;

> +	}

> +

> +	retval = mpi3mr_alloc_chain_bufs(mrioc);

> +	if (retval) {

> +		ioc_err(mrioc, "Failed to allocated chain buffers %d\n",

> +		    retval);

> +		goto out_failed;

> +	}

> +

> +	retval = mpi3mr_issue_iocinit(mrioc);

> +	if (retval) {

> +		ioc_err(mrioc, "Failed to Issue IOC Init %d\n",

> +		    retval);

> +		goto out_failed;

> +	}

> +	mrioc->reply_free_queue_host_index = mrioc->num_reply_bufs;

> +	writel(mrioc->reply_free_queue_host_index,

> +	    &mrioc->sysif_regs->ReplyFreeHostIndex);

> +

> +	mrioc->sbq_host_index = mrioc->num_sense_bufs;

> +	writel(mrioc->sbq_host_index,

> +	    &mrioc->sysif_regs->SenseBufferFreeHostIndex);

> +

> +	retval = mpi3mr_setup_isr(mrioc, 0);

> +	if (retval) {

> +		ioc_err(mrioc, "Failed to re-setup ISR, error %d\n",

> +		    retval);

> +		goto out_failed;

> +	}

> +

> +	return retval;

> +

> +out_failed:

> +	mpi3mr_cleanup_ioc(mrioc);

> +out_nocleanup:

> +	return retval;

> +}

> +

> +

> +/**

> + * mpi3mr_free_mem - Free memory allocated for a controller

> + * @mrioc: Adapter instance reference

> + *

> + * Free all the memory allocated for a controller.

> + *

> + * Return: Nothing.

> + */

> +static void mpi3mr_free_mem(struct mpi3mr_ioc *mrioc)

> +{

> +	u16 i;

> +	struct mpi3mr_intr_info *intr_info;

> +

> +	if (mrioc->sense_buf_pool) {

> +		if (mrioc->sense_buf)

> +			dma_pool_free(mrioc->sense_buf_pool, mrioc->sense_buf,

> +			    mrioc->sense_buf_dma);

> +		dma_pool_destroy(mrioc->sense_buf_pool);

> +		mrioc->sense_buf = NULL;

> +		mrioc->sense_buf_pool = NULL;

> +	}

> +	if (mrioc->sense_buf_q_pool) {

> +		if (mrioc->sense_buf_q)

> +			dma_pool_free(mrioc->sense_buf_q_pool,

> +			    mrioc->sense_buf_q, mrioc->sense_buf_q_dma);

> +		dma_pool_destroy(mrioc->sense_buf_q_pool);

> +		mrioc->sense_buf_q = NULL;

> +		mrioc->sense_buf_q_pool = NULL;

> +	}

> +

> +	if (mrioc->reply_buf_pool) {

> +		if (mrioc->reply_buf)

> +			dma_pool_free(mrioc->reply_buf_pool, mrioc->reply_buf,

> +			    mrioc->reply_buf_dma);

> +		dma_pool_destroy(mrioc->reply_buf_pool);

> +		mrioc->reply_buf = NULL;

> +		mrioc->reply_buf_pool = NULL;

> +	}

> +	if (mrioc->reply_free_q_pool) {

> +		if (mrioc->reply_free_q)

> +			dma_pool_free(mrioc->reply_free_q_pool,

> +			    mrioc->reply_free_q, mrioc->reply_free_q_dma);

> +		dma_pool_destroy(mrioc->reply_free_q_pool);

> +		mrioc->reply_free_q = NULL;

> +		mrioc->reply_free_q_pool = NULL;

> +	}

> +

> +	for (i = 0; i < mrioc->intr_info_count; i++) {

> +		intr_info = mrioc->intr_info + i;

> +		if (intr_info)

> +			intr_info->op_reply_q = NULL;

> +	}

> +

> +	kfree(mrioc->req_qinfo);

> +	mrioc->req_qinfo = NULL;

> +	mrioc->num_op_req_q = 0;

> +

> +	kfree(mrioc->op_reply_qinfo);

> +	mrioc->op_reply_qinfo = NULL;

> +	mrioc->num_op_reply_q = 0;

> +

> +	kfree(mrioc->init_cmds.reply);

> +	mrioc->init_cmds.reply = NULL;

> +

> +	kfree(mrioc->chain_bitmap);

> +	mrioc->chain_bitmap = NULL;

> +

> +	if (mrioc->chain_buf_pool) {

> +		for (i = 0; i < mrioc->chain_buf_count; i++) {

> +			if (mrioc->chain_sgl_list[i].addr) {

> +				dma_pool_free(mrioc->chain_buf_pool,

> +				    mrioc->chain_sgl_list[i].addr,

> +				    mrioc->chain_sgl_list[i].dma_addr);

> +				mrioc->chain_sgl_list[i].addr = NULL;

> +			}

> +		}

> +		dma_pool_destroy(mrioc->chain_buf_pool);

> +		mrioc->chain_buf_pool = NULL;

> +	}

> +

> +	kfree(mrioc->chain_sgl_list);

> +	mrioc->chain_sgl_list = NULL;

> +

> +	if (mrioc->admin_reply_base) {

> +		dma_free_coherent(&mrioc->pdev->dev, mrioc->admin_reply_q_sz,

> +		    mrioc->admin_reply_base, mrioc->admin_reply_dma);

> +		mrioc->admin_reply_base = NULL;

> +	}

> +	if (mrioc->admin_req_base) {

> +		dma_free_coherent(&mrioc->pdev->dev, mrioc->admin_req_q_sz,

> +		    mrioc->admin_req_base, mrioc->admin_req_dma);

> +		mrioc->admin_req_base = NULL;

> +	}

> +

> +}

> +

> +/**

> + * mpi3mr_issue_ioc_shutdown - Shutdown controller

> + * @mrioc: Adapter instance reference

> + *

> + * Send shutdown notification to the controller and wait for the

> + * shutdown_timeout for it to be completed.

> + *

> + * Return: Nothing.

> + */

> +static void mpi3mr_issue_ioc_shutdown(struct mpi3mr_ioc *mrioc)

> +{

> +	u32 ioc_config, ioc_status;

> +	u8 retval = 1;

> +	u32 timeout = MPI3MR_DEFAULT_SHUTDOWN_TIME * 10;

> +

> +	ioc_info(mrioc, "Issuing Shutdown Notification\n");

> +	if (mrioc->unrecoverable) {

> +		ioc_warn(mrioc,

> +		    "IOC is unrecoverable Shutdown is not issued\n");

> +		return;

> +	}

> +	ioc_status = readl(&mrioc->sysif_regs->IOCStatus);

> +	if ((ioc_status & MPI3_SYSIF_IOC_STATUS_SHUTDOWN_MASK)

> +	    == MPI3_SYSIF_IOC_STATUS_SHUTDOWN_IN_PROGRESS) {

> +		ioc_info(mrioc, "Shutdown already in progress\n");

> +		return;

> +	}

> +

> +	ioc_config = readl(&mrioc->sysif_regs->IOCConfiguration);

> +	ioc_config |= MPI3_SYSIF_IOC_CONFIG_SHUTDOWN_NORMAL;

> +	ioc_config |= MPI3_SYSIF_IOC_CONFIG_DEVICE_SHUTDOWN;

> +

> +	writel(ioc_config, &mrioc->sysif_regs->IOCConfiguration);

> +

> +	if (mrioc->facts.shutdown_timeout)

> +		timeout = mrioc->facts.shutdown_timeout * 10;

> +

> +	do {

> +		ioc_status = readl(&mrioc->sysif_regs->IOCStatus);

> +		if ((ioc_status & MPI3_SYSIF_IOC_STATUS_SHUTDOWN_MASK)

> +		    == MPI3_SYSIF_IOC_STATUS_SHUTDOWN_COMPLETE) {

> +			retval = 0;

> +			break;

> +		}

> +		msleep(100);

> +	} while (--timeout);

> +

> +

> +	ioc_status = readl(&mrioc->sysif_regs->IOCStatus);

> +	ioc_config = readl(&mrioc->sysif_regs->IOCConfiguration);

> +

> +	if (retval) {

> +		if ((ioc_status & MPI3_SYSIF_IOC_STATUS_SHUTDOWN_MASK)

> +		    == MPI3_SYSIF_IOC_STATUS_SHUTDOWN_IN_PROGRESS)

> +			ioc_warn(mrioc,

> +			    "Shutdown still in progress after timeout\n");

> +	}

> +

> +	ioc_info(mrioc,

> +	    "Base IOC Sts/Config after %s shutdown is (0x%x)/(0x%x)\n",

> +	    (!retval)?"successful":"failed", ioc_status,

> +	    ioc_config);

> +}

> +

> +/**

> + * mpi3mr_cleanup_ioc - Cleanup controller

> + * @mrioc: Adapter instance reference

> + *

> + * Controller cleanup handler, Message unit reset or soft reset

> + * and shutdown notification is issued to the controller and the

> + * associated memory resources are freed.

> + *

> + * Return: Nothing.

> + */

> +void mpi3mr_cleanup_ioc(struct mpi3mr_ioc *mrioc)

> +{

> +	enum mpi3mr_iocstate ioc_state;

> +

> +	mpi3mr_ioc_disable_intr(mrioc);

> +

> +	ioc_state = mpi3mr_get_iocstate(mrioc);

> +

> +	if ((!mrioc->unrecoverable) && (!mrioc->reset_in_progress) &&

> +	     (ioc_state == MRIOC_STATE_READY)) {

> +		if (mpi3mr_issue_and_process_mur(mrioc,

> +		    MPI3MR_RESET_FROM_CTLR_CLEANUP))

> +			mpi3mr_issue_reset(mrioc,

> +			    MPI3_SYSIF_HOST_DIAG_RESET_ACTION_SOFT_RESET,

> +			    MPI3MR_RESET_FROM_MUR_FAILURE);

> +

> +		 mpi3mr_issue_ioc_shutdown(mrioc);

> +	}

> +

> +	mpi3mr_free_mem(mrioc);

> +	mpi3mr_cleanup_resources(mrioc);

> +}

> +

> +

> +/**

> + * mpi3mr_soft_reset_handler - Reset the controller

> + * @mrioc: Adapter instance reference

> + * @reset_reason: Reset reason code

> + * @snapdump: Flag to generate snapdump in firmware or not

> + *

> + * TBD

> + *

> + * Return: 0 on success, non-zero on failure.

> + */

> +int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc,

> +	u32 reset_reason, u8 snapdump)

> +{

> +	return 0;

> +}

> diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c

> new file mode 100644

> index 000000000000..c31ec9883152

> --- /dev/null

> +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c

> @@ -0,0 +1,368 @@

> +// SPDX-License-Identifier: GPL-2.0-or-later

> +/*

> + * Driver for Broadcom MPI3 Storage Controllers

> + *

> + * Copyright (C) 2017-2020 Broadcom Inc.

> + *  (mailto: mpi3mr-linuxdrv.pdl@broadcom.com)

> + *

> + */

> +

> +#include "mpi3mr.h"

> +

> +/* global driver scope variables */

> +LIST_HEAD(mrioc_list);

> +DEFINE_SPINLOCK(mrioc_list_lock);

> +static int mrioc_ids;

> +static int warn_non_secure_ctlr;

> +

> +MODULE_AUTHOR(MPI3MR_DRIVER_AUTHOR);

> +MODULE_DESCRIPTION(MPI3MR_DRIVER_DESC);

> +MODULE_LICENSE(MPI3MR_DRIVER_LICENSE);

> +MODULE_VERSION(MPI3MR_DRIVER_VERSION);

> +

> +/* Module parameters*/

> +int logging_level;

> +module_param(logging_level, int, 0);

> +MODULE_PARM_DESC(logging_level,

> +	" bits for enabling additional logging info (default=0)");

> +

> +

> +/**

> + * mpi3mr_map_queues - Map queues callback handler

> + * @shost: SCSI host reference

> + *

> + * Call blk_mq_pci_map_queues() with the offset of the first operational

> + * queue from which the mapping has to be done

> + *

> + * Return: return of blk_mq_pci_map_queues

> + */

> +static int mpi3mr_map_queues(struct Scsi_Host *shost)

> +{

> +	struct mpi3mr_ioc *mrioc = shost_priv(shost);

> +

> +	return blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],

> +	    mrioc->pdev, 0);

> +}

> +


What happened to polling?
You did some patches for megaraid_sas, so I would have expected them to
be here, too ...
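For reference, a rough sketch of what that could look like here, loosely
modelled on the megaraid_sas approach (the poll_queues count and the
mq_poll host template hook are assumptions, not something this patch
provides yet):

static int mpi3mr_map_queues(struct Scsi_Host *shost)
{
	struct mpi3mr_ioc *mrioc = shost_priv(shost);
	struct blk_mq_queue_map *map;

	/* interrupt driven queues follow the MSI-x affinity */
	map = &shost->tag_set.map[HCTX_TYPE_DEFAULT];
	map->nr_queues = mrioc->num_op_reply_q - poll_queues;
	map->queue_offset = 0;
	blk_mq_pci_map_queues(map, mrioc->pdev, 0);

	/* no dedicated READ queues */
	shost->tag_set.map[HCTX_TYPE_READ].nr_queues = 0;

	/* poll queues have no vector, map them over all CPUs */
	map = &shost->tag_set.map[HCTX_TYPE_POLL];
	map->nr_queues = poll_queues;
	if (map->nr_queues) {
		map->queue_offset = mrioc->num_op_reply_q - poll_queues;
		blk_mq_map_queues(map);
	}
	return 0;
}

together with shost->nr_maps = 3 and an .mq_poll handler that walks the
poll-mode reply queues without an interrupt.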

> +/**

> + * mpi3mr_slave_destroy - Slave destroy callback handler

> + * @sdev: SCSI device reference

> + *

> + * Cleanup and free per device(LUN) private data.

> + *

> + * Return: Nothing.

> + */

> +static void mpi3mr_slave_destroy(struct scsi_device *sdev)

> +{

> +}

> +

> +/**

> + * mpi3mr_target_destroy - Target destroy callback handler

> + * @starget: SCSI target reference

> + *

> + * Cleanup and free per target private data.

> + *

> + * Return: Nothing.

> + */

> +static void mpi3mr_target_destroy(struct scsi_target *starget)

> +{

> +}

> +

> +/**

> + * mpi3mr_slave_configure - Slave configure callback handler

> + * @sdev: SCSI device reference

> + *

> + * Configure queue depth, max hardware sectors and virt boundary

> + * as required

> + *

> + * Return: 0 always.

> + */

> +static int mpi3mr_slave_configure(struct scsi_device *sdev)

> +{

> +	int retval = 0;

> +	return retval;

> +}

> +

> +/**

> + * mpi3mr_slave_alloc -Slave alloc callback handler

> + * @sdev: SCSI device reference

> + *

> + * Allocate per device(LUN) private data and initialize it.

> + *

> + * Return: 0 on success -ENOMEM on memory allocation failure.

> + */

> +static int mpi3mr_slave_alloc(struct scsi_device *sdev)

> +{

> +	int retval = 0;

> +	return retval;

> +}

> +

> +/**

> + * mpi3mr_target_alloc - Target alloc callback handler

> + * @starget: SCSI target reference

> + *

> + * Allocate per target private data and initialize it.

> + *

> + * Return: 0 on success -ENOMEM on memory allocation failure.

> + */

> +static int mpi3mr_target_alloc(struct scsi_target *starget)

> +{

> +	int retval = -ENODEV;

> +	return retval;

> +}

> +

> +/**

> + * mpi3mr_qcmd - I/O request dispatcher

> + * @shost: SCSI Host reference

> + * @scmd: SCSI Command reference

> + *

> + * Issues the SCSI Command as an MPI3 request.

> + *

> + * Return: 0 on successful queueing of the request or if the

> + *         request is completed with failure.

> + *         SCSI_MLQUEUE_DEVICE_BUSY when the device is busy.

> + *         SCSI_MLQUEUE_HOST_BUSY when the host queue is full.

> + */

> +static int mpi3mr_qcmd(struct Scsi_Host *shost,

> +	struct scsi_cmnd *scmd)

> +{

> +	int retval = 0;

> +

> +	scmd->result = DID_NO_CONNECT << 16;

> +	scmd->scsi_done(scmd);

> +	return retval;

> +}

> +

> +static struct scsi_host_template mpi3mr_driver_template = {

> +	.module				= THIS_MODULE,

> +	.name				= "MPI3 Storage Controller",

> +	.proc_name			= MPI3MR_DRIVER_NAME,

> +	.queuecommand			= mpi3mr_qcmd,

> +	.target_alloc			= mpi3mr_target_alloc,

> +	.slave_alloc			= mpi3mr_slave_alloc,

> +	.slave_configure		= mpi3mr_slave_configure,

> +	.target_destroy			= mpi3mr_target_destroy,

> +	.slave_destroy			= mpi3mr_slave_destroy,

> +	.map_queues			= mpi3mr_map_queues,

> +	.no_write_same			= 1,

> +	.can_queue			= 1,

> +	.this_id			= -1,

> +	.sg_tablesize			= MPI3MR_SG_DEPTH,

> +	/* max xfer supported is 1M (2K in 512 byte sized sectors)

> +	 */

> +	.max_sectors			= 2048,

> +	.cmd_per_lun			= MPI3MR_MAX_CMDS_LUN,

> +	.track_queue_depth		= 1,

> +	.cmd_size			= sizeof(struct scmd_priv),

> +};

> +

> +

> +/**

> + * mpi3mr_init_drv_cmd - Initialize internal command tracker

> + * @cmdptr: Internal command tracker

> + * @host_tag: Host tag used for the specific command

> + *

> + * Initialize the internal command tracker structure with

> + * specified host tag.

> + *

> + * Return: Nothing.

> + */

> +static inline void mpi3mr_init_drv_cmd(struct mpi3mr_drv_cmd *cmdptr,

> +	u16 host_tag)

> +{

> +	mutex_init(&cmdptr->mutex);

> +	cmdptr->reply = NULL;

> +	cmdptr->state = MPI3MR_CMD_NOTUSED;

> +	cmdptr->dev_handle = MPI3MR_INVALID_DEV_HANDLE;

> +	cmdptr->host_tag = host_tag;

> +}

> +

> +/**

> + * mpi3mr_probe - PCI probe callback

> + * @pdev: PCI device instance

> + * @id: PCI device ID details

> + *

> + * Controller initialization routine. Checks the security status

> + * of the controller and if it is invalid or tampered return the

> + * probe without initializing the controller. Otherwise,

> + * allocate per adapter instance through shost_priv and

> + * initialize controller specific data structures, initialize

> + * the controller hardware, add shost to the SCSI subsystem.

> + *

> + * Return: 0 on success, non-zero on failure.

> + */

> +

> +static int

> +mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id)

> +{

> +	struct mpi3mr_ioc *mrioc = NULL;

> +	struct Scsi_Host *shost = NULL;

> +	int retval = 0;

> +

> +	shost = scsi_host_alloc(&mpi3mr_driver_template,

> +	    sizeof(struct mpi3mr_ioc));

> +	if (!shost) {

> +		retval = -ENODEV;

> +		goto shost_failed;

> +	}

> +

> +	mrioc = shost_priv(shost);

> +	mrioc->id = mrioc_ids++;

> +	sprintf(mrioc->driver_name, "%s", MPI3MR_DRIVER_NAME);

> +	sprintf(mrioc->name, "%s%d", mrioc->driver_name, mrioc->id);

> +	INIT_LIST_HEAD(&mrioc->list);

> +	spin_lock(&mrioc_list_lock);

> +	list_add_tail(&mrioc->list, &mrioc_list);

> +	spin_unlock(&mrioc_list_lock);

> +

> +	spin_lock_init(&mrioc->admin_req_lock);

> +	spin_lock_init(&mrioc->reply_free_queue_lock);

> +	spin_lock_init(&mrioc->sbq_lock);

> +

> +	mpi3mr_init_drv_cmd(&mrioc->init_cmds, MPI3MR_HOSTTAG_INITCMDS);

> +

> +	mrioc->logging_level = logging_level;

> +	mrioc->shost = shost;

> +	mrioc->pdev = pdev;

> +

> +	/* init shost parameters */

> +	shost->max_cmd_len = MPI3MR_MAX_CDB_LENGTH;

> +	shost->max_lun = -1;

> +	shost->unique_id = mrioc->id;

> +

> +	shost->max_channel = 1;

> +	shost->max_id = 0xFFFFFFFF;

> +

> +	mrioc->is_driver_loading = 1;

> +	if (mpi3mr_init_ioc(mrioc)) {

> +		ioc_err(mrioc, "failure at %s:%d/%s()!\n",

> +		    __FILE__, __LINE__, __func__);

> +		retval = -ENODEV;

> +		goto out_iocinit_failed;

> +	}

> +

> +	shost->nr_hw_queues = mrioc->num_op_reply_q;

> +	shost->can_queue = mrioc->max_host_ios;

> +	shost->sg_tablesize = MPI3MR_SG_DEPTH;

> +	shost->max_id = mrioc->facts.max_perids;

> +

> +	retval = scsi_add_host(shost, &pdev->dev);

> +	if (retval) {

> +		ioc_err(mrioc, "failure at %s:%d/%s()!\n",

> +		    __FILE__, __LINE__, __func__);

> +		goto addhost_failed;

> +	}

> +

> +	scsi_scan_host(shost);

> +	return retval;

> +

> +addhost_failed:

> +	mpi3mr_cleanup_ioc(mrioc);

> +out_iocinit_failed:

> +	spin_lock(&mrioc_list_lock);

> +	list_del(&mrioc->list);

> +	spin_unlock(&mrioc_list_lock);

> +	scsi_host_put(shost);

> +shost_failed:

> +	return retval;

> +}

> +

> +/**

> + * mpi3mr_remove - PCI remove callback

> + * @pdev: PCI device instance

> + *

> + * Free up all memory and resources associated with the

> + * controller and target devices, unregister the shost.

> + *

> + * Return: Nothing.

> + */

> +static void mpi3mr_remove(struct pci_dev *pdev)

> +{

> +	struct Scsi_Host *shost = pci_get_drvdata(pdev);

> +	struct mpi3mr_ioc *mrioc;

> +

> +	mrioc = shost_priv(shost);

> +	while (mrioc->reset_in_progress || mrioc->is_driver_loading)

> +		ssleep(1);

> +

> +

> +	scsi_remove_host(shost);

> +

> +	mpi3mr_cleanup_ioc(mrioc);

> +

> +	spin_lock(&mrioc_list_lock);

> +	list_del(&mrioc->list);

> +	spin_unlock(&mrioc_list_lock);

> +

> +	scsi_host_put(shost);

> +}

> +

> +/**

> + * mpi3mr_shutdown - PCI shutdown callback

> + * @pdev: PCI device instance

> + *

> + * Free up all memory and resources associated with the

> + * controller

> + *

> + * Return: Nothing.

> + */

> +static void mpi3mr_shutdown(struct pci_dev *pdev)

> +{

> +	struct Scsi_Host *shost = pci_get_drvdata(pdev);

> +	struct mpi3mr_ioc *mrioc;

> +

> +	if (!shost)

> +		return;

> +

> +	mrioc = shost_priv(shost);

> +	while (mrioc->reset_in_progress || mrioc->is_driver_loading)

> +		ssleep(1);

> +

> +	mpi3mr_cleanup_ioc(mrioc);

> +

> +}

> +

> +static const struct pci_device_id mpi3mr_pci_id_table[] = {

> +	{

> +		PCI_DEVICE_SUB(PCI_VENDOR_ID_LSI_LOGIC, 0x00A5,

> +		    PCI_ANY_ID, PCI_ANY_ID)

> +	},

> +	{ 0 }

> +};

> +MODULE_DEVICE_TABLE(pci, mpi3mr_pci_id_table);

> +

> +static struct pci_driver mpi3mr_pci_driver = {

> +	.name = MPI3MR_DRIVER_NAME,

> +	.id_table = mpi3mr_pci_id_table,

> +	.probe = mpi3mr_probe,

> +	.remove = mpi3mr_remove,

> +	.shutdown = mpi3mr_shutdown,

> +};

> +

> +static int __init mpi3mr_init(void)

> +{

> +	int ret_val;

> +

> +	pr_info("Loading %s version %s\n", MPI3MR_DRIVER_NAME,

> +	    MPI3MR_DRIVER_VERSION);

> +

> +	ret_val = pci_register_driver(&mpi3mr_pci_driver);

> +

> +	return ret_val;

> +}

> +

> +static void __exit mpi3mr_exit(void)

> +{

> +	if (warn_non_secure_ctlr)

> +		pr_warn(

> +		    "Unloading %s version %s while managing a non secure controller\n",

> +		    MPI3MR_DRIVER_NAME, MPI3MR_DRIVER_VERSION);

> +	else

> +		pr_info("Unloading %s version %s\n", MPI3MR_DRIVER_NAME,

> +		    MPI3MR_DRIVER_VERSION);

> +

> +	pci_unregister_driver(&mpi3mr_pci_driver);

> +}

> +

> +module_init(mpi3mr_init);

> +module_exit(mpi3mr_exit);

> 

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer
Kashyap Desai Feb. 25, 2021, 1:36 p.m. UTC | #6
> ...

> > + */

> > +void mpi3mr_cleanup_fwevt_list(struct mpi3mr_ioc *mrioc) {

> > +	struct mpi3mr_fwevt *fwevt = NULL;

> > +

> > +	if ((list_empty(&mrioc->fwevt_list) && !mrioc->current_event) ||

> > +	    !mrioc->fwevt_worker_thread || in_interrupt())

> The in_interrupt() macro is deprecated and should not be used in new code.

> Is it at all possible to call the mpi3mr_cleanup_fwevt_list from

> interrupt context?


I agree with you. The in_interrupt() check is safe to remove. I will take
care of it while sending V2.
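For reference, the guard would then presumably reduce to the same check
minus the in_interrupt() test, assuming the function is only ever
reached from process context:

	if ((list_empty(&mrioc->fwevt_list) && !mrioc->current_event) ||
	    !mrioc->fwevt_worker_thread)
		return;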

Kashyap
>

> > +		return;

> > +

> > +	while ((fwevt = mpi3mr_dequeue_fwevt(mrioc)) ||

> > +	    (fwevt = mrioc->current_event)) {

> > +		/*

> > +		 * Wait on the fwevt to complete. If this returns 1, then

> > +		 * the event was never executed, and we need a put for the

> > +		 * reference the work had on the fwevt.

> > +		 *

> > +		 * If it did execute, we wait for it to finish, and the

put will
> > +		 * happen from mpi3mr_process_fwevt()

> > +		 */

> > +		if (cancel_work_sync(&fwevt->work)) {

> > +			/*

> > +			 * Put fwevt reference count after

> > +			 * dequeuing it from worker queue

> > +			 */

> > +			mpi3mr_fwevt_put(fwevt);

> > +			/*

> > +			 * Put fwevt reference count to neutralize

> > +			 * kref_init increment

> > +			 */

> > +			mpi3mr_fwevt_put(fwevt);

> > +		}

> > +	}

> > +}
Hannes Reinecke Feb. 28, 2021, 1:04 p.m. UTC | #7
On 12/22/20 11:11 AM, Kashyap Desai wrote:
> Create operational request and reply queue pair.

> 

> The MPI3 transport interface consists of an Administrative Request Queue,

> an Administrative Reply Queue, and Operational Messaging Queues.

> The Operational Messaging Queues are the primary communication mechanism

> between the host and the I/O Controller (IOC).

> Request messages, allocated in host memory, identify I/O operations to be

> performed by the IOC. These operations are queued on an Operational

> Request Queue by the host driver.

> Reply descriptors track I/O operations as they complete.

> The IOC queues these completions in an Operational Reply Queue.

> 

> To fulfil the large contiguous memory requirement, the driver creates multiple

> segments and provides the list of segments. Each segment size should be 4K,

> which is a h/w requirement. An element array is contiguous or segmented.

> A contiguous element array is located in contiguous physical memory.

> A contiguous element array must be aligned on an element size boundary.

> An element's physical address within the array may be directly calculated

> from the base address, the Producer/Consumer index, and the element size.

> 

> Expected phased identifier bit is used to find out a valid entry on the reply queue.

> The driver sets the <ephase> bit and the IOC inverts the value of this bit on each pass.
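To make the addressing concrete, a sketch of how an element could be
located in a segmented queue from the producer index, using the
segment_qd/q_segments fields introduced below (the helper name itself
is made up):

/* segment_qd = MPI3MR_OP_REQ_Q_SEG_SIZE / mrioc->facts.op_req_sz */
static void *mpi3mr_req_entry(struct op_req_qinfo *op_req_q,
	u32 op_req_sz, u16 pi)
{
	struct segments *seg = &op_req_q->q_segments[pi / op_req_q->segment_qd];

	return (u8 *)seg->segment +
	    (pi % op_req_q->segment_qd) * op_req_sz;
}

On the reply side a descriptor is only consumed while its phase flag
matches op_reply_q->ephase, which the driver flips whenever the
consumer index wraps.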

> 

> Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>

> Cc: sathya.prakash@broadcom.com

> ---

>   drivers/scsi/mpi3mr/mpi3mr.h    |  56 +++

>   drivers/scsi/mpi3mr/mpi3mr_fw.c | 601 ++++++++++++++++++++++++++++++++

>   drivers/scsi/mpi3mr/mpi3mr_os.c |   4 +-

>   3 files changed, 660 insertions(+), 1 deletion(-)

> 

> diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h

> index dd79b12218e1..fe6094bb357a 100644

> --- a/drivers/scsi/mpi3mr/mpi3mr.h

> +++ b/drivers/scsi/mpi3mr/mpi3mr.h

> @@ -71,6 +71,12 @@ extern struct list_head mrioc_list;

>   #define MPI3MR_ADMIN_REQ_FRAME_SZ	128

>   #define MPI3MR_ADMIN_REPLY_FRAME_SZ	16

>   

> +/* Operational queue management definitions */

> +#define MPI3MR_OP_REQ_Q_QD		512

> +#define MPI3MR_OP_REP_Q_QD		4096

> +#define MPI3MR_OP_REQ_Q_SEG_SIZE	4096

> +#define MPI3MR_OP_REP_Q_SEG_SIZE	4096

> +#define MPI3MR_MAX_SEG_LIST_SIZE	4096

>   

Do I read this correctly?
The reply queue depth is larger than the request queue depth?
Why is that?

>   /* Reserved Host Tag definitions */

>   #define MPI3MR_HOSTTAG_INVALID		0xFFFF

> @@ -132,6 +138,9 @@ extern struct list_head mrioc_list;

>   	(MPI3_SGE_FLAGS_ELEMENT_TYPE_SIMPLE | MPI3_SGE_FLAGS_DLAS_SYSTEM | \

>   	MPI3_SGE_FLAGS_END_OF_LIST)

>   

> +/* MSI Index from Reply Queue Index */

> +#define REPLY_QUEUE_IDX_TO_MSIX_IDX(qidx, offset)	(qidx + offset)

> +

>   /* IOC State definitions */

>   enum mpi3mr_iocstate {

>   	MRIOC_STATE_READY = 1,

> @@ -222,15 +231,45 @@ struct mpi3mr_ioc_facts {

>   	u8 sge_mod_shift;

>   };

>   

> +/**

> + * struct segments - memory descriptor structure to store

> + * virtual and dma addresses for operational queue segments.

> + *

> + * @segment: virtual address

> + * @segment_dma: dma address

> + */

> +struct segments {

> +	void *segment;

> +	dma_addr_t segment_dma;

> +};

> +

>   /**

>    * struct op_req_qinfo -  Operational Request Queue Information

>    *

>    * @ci: consumer index

>    * @pi: producer index

> + * @num_request: Maximum number of entries in the queue

> + * @qid: Queue Id starting from 1

> + * @reply_qid: Associated reply queue Id

> + * @num_segments: Number of discontiguous memory segments

> + * @segment_qd: Depth of each segments

> + * @q_lock: Concurrent queue access lock

> + * @q_segments: Segment descriptor pointer

> + * @q_segment_list: Segment list base virtual address

> + * @q_segment_list_dma: Segment list base DMA address

>    */

>   struct op_req_qinfo {

>   	u16 ci;

>   	u16 pi;

> +	u16 num_requests;

> +	u16 qid;

> +	u16 reply_qid;

> +	u16 num_segments;

> +	u16 segment_qd;

> +	spinlock_t q_lock;

> +	struct segments *q_segments;

> +	void *q_segment_list;

> +	dma_addr_t q_segment_list_dma;

>   };

>   

>   /**

> @@ -238,10 +277,24 @@ struct op_req_qinfo {

>    *

>    * @ci: consumer index

>    * @qid: Queue Id starting from 1

> + * @num_replies: Maximum number of entries in the queue

> + * @num_segments: Number of discontiguous memory segments

> + * @segment_qd: Depth of each segments

> + * @q_segments: Segment descriptor pointer

> + * @q_segment_list: Segment list base virtual address

> + * @q_segment_list_dma: Segment list base DMA address

> + * @ephase: Expected phased identifier for the reply queue

>    */

>   struct op_reply_qinfo {

>   	u16 ci;

>   	u16 qid;

> +	u16 num_replies;

> +	u16 num_segments;

> +	u16 segment_qd;

> +	struct segments *q_segments;

> +	void *q_segment_list;

> +	dma_addr_t q_segment_list_dma;

> +	u8 ephase;

>   };

>   

>   /**

> @@ -402,6 +455,7 @@ struct scmd_priv {

>    * @current_event: Firmware event currently in process

>    * @driver_info: Driver, Kernel, OS information to firmware

>    * @change_count: Topology change count

> + * @op_reply_q_offset: Operational reply queue offset with MSIx

>    */

>   struct mpi3mr_ioc {

>   	struct list_head list;

> @@ -409,6 +463,7 @@ struct mpi3mr_ioc {

>   	struct Scsi_Host *shost;

>   	u8 id;

>   	int cpu_count;

> +	bool enable_segqueue;

>   

>   	char name[MPI3MR_NAME_LENGTH];

>   	char driver_name[MPI3MR_NAME_LENGTH];

> @@ -495,6 +550,7 @@ struct mpi3mr_ioc {

>   	struct mpi3mr_fwevt *current_event;

>   	Mpi3DriverInfoLayout_t driver_info;

>   	u16 change_count;

> +	u16 op_reply_q_offset;

>   };

>   

>   int mpi3mr_setup_resources(struct mpi3mr_ioc *mrioc);

> diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c

> index 97eb7e6ec5c6..6fb28983038e 100644

> --- a/drivers/scsi/mpi3mr/mpi3mr_fw.c

> +++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c

> @@ -408,6 +408,8 @@ static int mpi3mr_setup_isr(struct mpi3mr_ioc *mrioc, u8 setup_one)

>   

>   	irq_flags |= PCI_IRQ_AFFINITY | PCI_IRQ_ALL_TYPES;

>   

> +	mrioc->op_reply_q_offset = (max_vectors > 1) ? 1 : 0;

> +

>   	i = pci_alloc_irq_vectors_affinity(mrioc->pdev,

>   	    1, max_vectors, irq_flags, &desc);

>   	if (i <= 0) {

> @@ -418,6 +420,12 @@ static int mpi3mr_setup_isr(struct mpi3mr_ioc *mrioc, u8 setup_one)

>   		ioc_info(mrioc,

>   		    "allocated vectors (%d) are less than configured (%d)\n",

>   		    i, max_vectors);

> +		/*

> +		 * If only one MSI-x is allocated, then MSI-x 0 will be shared

> +		 * between Admin queue and operational queue

> +		 */

> +		if (i == 1)

> +			mrioc->op_reply_q_offset = 0;

>   

>   		max_vectors = i;

>   	}

> @@ -726,6 +734,586 @@ int mpi3mr_admin_request_post(struct mpi3mr_ioc *mrioc, void *admin_req,

>   	return retval;

>   }

>   

> +/**

> + * mpi3mr_free_op_req_q_segments - free request memory segments

> + * @mrioc: Adapter instance reference

> + * @q_idx: operational request queue index

> + *

> + * Free memory segments allocated for operational request queue

> + *

> + * Return: Nothing.

> + */

> +static void mpi3mr_free_op_req_q_segments(struct mpi3mr_ioc *mrioc, u16 q_idx)

> +{

> +	u16 j;

> +	int size;

> +	struct segments *segments;

> +

> +	segments = mrioc->req_qinfo[q_idx].q_segments;

> +	if (!segments)

> +		return;

> +

> +	if (mrioc->enable_segqueue) {

> +		size = MPI3MR_OP_REQ_Q_SEG_SIZE;

> +		if (mrioc->req_qinfo[q_idx].q_segment_list) {

> +			dma_free_coherent(&mrioc->pdev->dev,

> +			    MPI3MR_MAX_SEG_LIST_SIZE,

> +			    mrioc->req_qinfo[q_idx].q_segment_list,

> +			    mrioc->req_qinfo[q_idx].q_segment_list_dma);

> +			mrioc->req_qinfo[q_idx].q_segment_list = NULL;

> +		}

> +	} else

> +		size = mrioc->req_qinfo[q_idx].num_requests *

> +		    mrioc->facts.op_req_sz;

> +

> +	for (j = 0; j < mrioc->req_qinfo[q_idx].num_segments; j++) {

> +		if (!segments[j].segment)

> +			continue;

> +		dma_free_coherent(&mrioc->pdev->dev,

> +		    size, segments[j].segment, segments[j].segment_dma);

> +		segments[j].segment = NULL;

> +	}

> +	kfree(mrioc->req_qinfo[q_idx].q_segments);

> +	mrioc->req_qinfo[q_idx].q_segments = NULL;

> +	mrioc->req_qinfo[q_idx].qid = 0;

> +}

> +

> +/**

> + * mpi3mr_free_op_reply_q_segments - free reply memory segments

> + * @mrioc: Adapter instance reference

> + * @q_idx: operational reply queue index

> + *

> + * Free memory segments allocated for operational reply queue

> + *

> + * Return: Nothing.

> + */

> +static void mpi3mr_free_op_reply_q_segments(struct mpi3mr_ioc *mrioc, u16 q_idx)

> +{

> +	u16 j;

> +	int size;

> +	struct segments *segments;

> +

> +	segments = mrioc->op_reply_qinfo[q_idx].q_segments;

> +	if (!segments)

> +		return;

> +

> +	if (mrioc->enable_segqueue) {

> +		size = MPI3MR_OP_REP_Q_SEG_SIZE;

> +		if (mrioc->op_reply_qinfo[q_idx].q_segment_list) {

> +			dma_free_coherent(&mrioc->pdev->dev,

> +			    MPI3MR_MAX_SEG_LIST_SIZE,

> +			    mrioc->op_reply_qinfo[q_idx].q_segment_list,

> +			    mrioc->op_reply_qinfo[q_idx].q_segment_list_dma);

> +			mrioc->op_reply_qinfo[q_idx].q_segment_list = NULL;

> +		}

> +	} else

> +		size = mrioc->op_reply_qinfo[q_idx].segment_qd *

> +		    mrioc->op_reply_desc_sz;

> +

> +	for (j = 0; j < mrioc->op_reply_qinfo[q_idx].num_segments; j++) {

> +		if (!segments[j].segment)

> +			continue;

> +		dma_free_coherent(&mrioc->pdev->dev,

> +		    size, segments[j].segment, segments[j].segment_dma);

> +		segments[j].segment = NULL;

> +	}

> +

> +	kfree(mrioc->op_reply_qinfo[q_idx].q_segments);

> +	mrioc->op_reply_qinfo[q_idx].q_segments = NULL;

> +	mrioc->op_reply_qinfo[q_idx].qid = 0;

> +}

> +

> +/**

> + * mpi3mr_delete_op_reply_q - delete operational reply queue

> + * @mrioc: Adapter instance reference

> + * @qidx: operational reply queue index

> + *

> + * Delete operational reply queue by issuing MPI request

> + * through admin queue.

> + *

> + * Return:  0 on success, non-zero on failure.

> + */

> +static int mpi3mr_delete_op_reply_q(struct mpi3mr_ioc *mrioc, u16 qidx)

> +{

> +	Mpi3DeleteReplyQueueRequest_t delq_req;

> +	int retval = 0;

> +	u16 reply_qid = 0, midx;

> +

> +	reply_qid = mrioc->op_reply_qinfo[qidx].qid;

> +

> +	midx = REPLY_QUEUE_IDX_TO_MSIX_IDX(qidx, mrioc->op_reply_q_offset);

> +

> +	if (!reply_qid)	{

> +		retval = -1;

> +		ioc_err(mrioc, "Issue DelRepQ: called with invalid ReqQID\n");

> +		goto out;

> +	}

> +

> +	memset(&delq_req, 0, sizeof(delq_req));

> +	mutex_lock(&mrioc->init_cmds.mutex);

> +	if (mrioc->init_cmds.state & MPI3MR_CMD_PENDING) {

> +		retval = -1;

> +		ioc_err(mrioc, "Issue DelRepQ: Init command is in use\n");

> +		mutex_unlock(&mrioc->init_cmds.mutex);

> +		goto out;

> +	}

> +	mrioc->init_cmds.state = MPI3MR_CMD_PENDING;

> +	mrioc->init_cmds.is_waiting = 1;

> +	mrioc->init_cmds.callback = NULL;

> +	delq_req.HostTag = cpu_to_le16(MPI3MR_HOSTTAG_INITCMDS);

> +	delq_req.Function = MPI3_FUNCTION_DELETE_REPLY_QUEUE;

> +	delq_req.QueueID = cpu_to_le16(reply_qid);

> +

> +	init_completion(&mrioc->init_cmds.done);

> +	retval = mpi3mr_admin_request_post(mrioc, &delq_req, sizeof(delq_req),

> +	    1);

> +	if (retval) {

> +		ioc_err(mrioc, "Issue DelRepQ: Admin Post failed\n");

> +		goto out_unlock;

> +	}

> +	wait_for_completion_timeout(&mrioc->init_cmds.done,

> +	    (MPI3MR_INTADMCMD_TIMEOUT * HZ));

> +	if (!(mrioc->init_cmds.state & MPI3MR_CMD_COMPLETE)) {

> +		ioc_err(mrioc, "Issue DelRepQ: command timed out\n");

> +		mpi3mr_set_diagsave(mrioc);

> +		mpi3mr_issue_reset(mrioc,

> +		    MPI3_SYSIF_HOST_DIAG_RESET_ACTION_DIAG_FAULT,

> +		    MPI3MR_RESET_FROM_DELREPQ_TIMEOUT);

> +		mrioc->unrecoverable = 1;

> +

> +		retval = -1;

> +		goto out_unlock;

> +	}

> +	if ((mrioc->init_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK)

> +	    != MPI3_IOCSTATUS_SUCCESS) {

> +		ioc_err(mrioc,

> +		    "Issue DelRepQ: Failed IOCStatus(0x%04x) Loginfo(0x%08x)\n",

> +		    (mrioc->init_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK),

> +		    mrioc->init_cmds.ioc_loginfo);

> +		retval = -1;

> +		goto out_unlock;

> +	}

> +	mrioc->intr_info[midx].op_reply_q = NULL;

> +

> +	mpi3mr_free_op_reply_q_segments(mrioc, qidx);

> +out_unlock:

> +	mrioc->init_cmds.state = MPI3MR_CMD_NOTUSED;

> +	mutex_unlock(&mrioc->init_cmds.mutex);

> +out:

> +

> +	return retval;

> +}

> +

> +/**

> + * mpi3mr_alloc_op_reply_q_segments -Alloc segmented reply pool

> + * @mrioc: Adapter instance reference

> + * @qidx: request queue index

> + *

> + * Allocate segmented memory pools for operational reply

> + * queue.

> + *

> + * Return: 0 on success, non-zero on failure.

> + */

> +static int mpi3mr_alloc_op_reply_q_segments(struct mpi3mr_ioc *mrioc, u16 qidx)

> +{

> +	struct op_reply_qinfo *op_reply_q = mrioc->op_reply_qinfo + qidx;

> +	int i, size;

> +	u64 *q_segment_list_entry = NULL;

> +	struct segments *segments;

> +

> +	if (mrioc->enable_segqueue) {

> +		op_reply_q->segment_qd =

> +		    MPI3MR_OP_REP_Q_SEG_SIZE / mrioc->op_reply_desc_sz;

> +

> +		size = MPI3MR_OP_REP_Q_SEG_SIZE;

> +

> +		op_reply_q->q_segment_list = dma_alloc_coherent(&mrioc->pdev->dev,

> +		    MPI3MR_MAX_SEG_LIST_SIZE, &op_reply_q->q_segment_list_dma,

> +		    GFP_KERNEL);

> +		if (!op_reply_q->q_segment_list)

> +			return -ENOMEM;

> +		q_segment_list_entry = (u64 *)op_reply_q->q_segment_list;

> +	} else {

> +		op_reply_q->segment_qd = op_reply_q->num_replies;

> +		size = op_reply_q->num_replies * mrioc->op_reply_desc_sz;

> +	}

> +

> +	op_reply_q->num_segments = DIV_ROUND_UP(op_reply_q->num_replies,

> +	    op_reply_q->segment_qd);

> +

> +	op_reply_q->q_segments = kcalloc(op_reply_q->num_segments,

> +	    sizeof(struct segments), GFP_KERNEL);

> +	if (!op_reply_q->q_segments)

> +		return -ENOMEM;

> +

> +	segments = op_reply_q->q_segments;

> +	for (i = 0; i < op_reply_q->num_segments; i++) {

> +		segments[i].segment =

> +		    dma_alloc_coherent(&mrioc->pdev->dev,

> +		    size, &segments[i].segment_dma, GFP_KERNEL);

> +		if (!segments[i].segment)

> +			return -ENOMEM;

> +		if (mrioc->enable_segqueue)

> +			q_segment_list_entry[i] =

> +			    (unsigned long)segments[i].segment_dma;

> +	}

> +

> +	return 0;

> +}
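
To put rough numbers on the segmented layout above (the constants here are only
assumed for illustration; the real values are defined in mpi3mr.h):

	/* e.g. with a 4 KiB segment and a 16-byte reply descriptor:
	 *   segment_qd   = 4096 / 16 = 256 replies per segment
	 *   num_segments = DIV_ROUND_UP(num_replies = 1024, 256) = 4
	 * i.e. four reply segments plus one page holding their DMA addresses.
	 */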

> +

> +/**

> + * mpi3mr_alloc_op_req_q_segments - Alloc segmented req pool.

> + * @mrioc: Adapter instance reference

> + * @qidx: request queue index

> + *

> + * Allocate segmented memory pools for operational request

> + * queue.

> + *

> + * Return: 0 on success, non-zero on failure.

> + */

> +static int mpi3mr_alloc_op_req_q_segments(struct mpi3mr_ioc *mrioc, u16 qidx)

> +{

> +	struct op_req_qinfo *op_req_q = mrioc->req_qinfo + qidx;

> +	int i, size;

> +	u64 *q_segment_list_entry = NULL;

> +	struct segments *segments;

> +

> +	if (mrioc->enable_segqueue) {

> +		op_req_q->segment_qd =

> +		    MPI3MR_OP_REQ_Q_SEG_SIZE / mrioc->facts.op_req_sz;

> +

> +		size = MPI3MR_OP_REQ_Q_SEG_SIZE;

> +

> +		op_req_q->q_segment_list = dma_alloc_coherent(&mrioc->pdev->dev,

> +		    MPI3MR_MAX_SEG_LIST_SIZE, &op_req_q->q_segment_list_dma,

> +		    GFP_KERNEL);

> +		if (!op_req_q->q_segment_list)

> +			return -ENOMEM;

> +		q_segment_list_entry = (u64 *)op_req_q->q_segment_list;

> +

> +	} else {

> +		op_req_q->segment_qd = op_req_q->num_requests;

> +		size = op_req_q->num_requests * mrioc->facts.op_req_sz;

> +	}

> +

> +	op_req_q->num_segments = DIV_ROUND_UP(op_req_q->num_requests,

> +	    op_req_q->segment_qd);

> +

> +	op_req_q->q_segments = kcalloc(op_req_q->num_segments,

> +	    sizeof(struct segments), GFP_KERNEL);

> +	if (!op_req_q->q_segments)

> +		return -ENOMEM;

> +

> +	segments = op_req_q->q_segments;

> +	for (i = 0; i < op_req_q->num_segments; i++) {

> +		segments[i].segment =

> +		    dma_alloc_coherent(&mrioc->pdev->dev,

> +		    size, &segments[i].segment_dma, GFP_KERNEL);

> +		if (!segments[i].segment)

> +			return -ENOMEM;

> +		if (mrioc->enable_segqueue)

> +			q_segment_list_entry[i] =

> +			    (unsigned long)segments[i].segment_dma;

> +	}

> +

> +	return 0;

> +}

> +

> +/**

> + * mpi3mr_create_op_reply_q - create operational reply queue

> + * @mrioc: Adapter instance reference

> + * @qidx: operational reply queue index

> + *

> + * Create operational reply queue by issuing MPI request

> + * through admin queue.

> + *

> + * Return:  0 on success, non-zero on failure.

> + */

> +static int mpi3mr_create_op_reply_q(struct mpi3mr_ioc *mrioc, u16 qidx)

> +{

> +	Mpi3CreateReplyQueueRequest_t create_req;

> +	struct op_reply_qinfo *op_reply_q = mrioc->op_reply_qinfo + qidx;

> +	int retval = 0;

> +	u16 reply_qid = 0, midx;

> +

> +

> +	reply_qid = op_reply_q->qid;

> +

> +	midx = REPLY_QUEUE_IDX_TO_MSIX_IDX(qidx, mrioc->op_reply_q_offset);

> +

> +	if (reply_qid) {

> +		retval = -1;

> +		ioc_err(mrioc, "CreateRepQ: called for duplicate qid %d\n",

> +		    reply_qid);

> +

> +		return retval;

> +	}

> +

> +	reply_qid = qidx + 1;

> +	op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD;

> +	op_reply_q->ci = 0;

> +	op_reply_q->ephase = 1;

> +

> +	if (!op_reply_q->q_segments) {

> +		retval = mpi3mr_alloc_op_reply_q_segments(mrioc, qidx);

> +		if (retval) {

> +			mpi3mr_free_op_reply_q_segments(mrioc, qidx);

> +			goto out;

> +		}

> +	}

> +

> +	memset(&create_req, 0, sizeof(create_req));

> +	mutex_lock(&mrioc->init_cmds.mutex);

> +	if (mrioc->init_cmds.state & MPI3MR_CMD_PENDING) {

> +		retval = -1;

> +		ioc_err(mrioc, "CreateRepQ: Init command is in use\n");

> +		goto out;

> +	}

> +	mrioc->init_cmds.state = MPI3MR_CMD_PENDING;

> +	mrioc->init_cmds.is_waiting = 1;

> +	mrioc->init_cmds.callback = NULL;

> +	create_req.HostTag = cpu_to_le16(MPI3MR_HOSTTAG_INITCMDS);

> +	create_req.Function = MPI3_FUNCTION_CREATE_REPLY_QUEUE;

> +	create_req.QueueID = cpu_to_le16(reply_qid);

> +	create_req.Flags = MPI3_CREATE_REPLY_QUEUE_FLAGS_INT_ENABLE_ENABLE;

> +	create_req.MSIxIndex = cpu_to_le16(mrioc->intr_info[midx].msix_index);

> +	if (mrioc->enable_segqueue) {

> +		create_req.Flags |=

> +		    MPI3_CREATE_REQUEST_QUEUE_FLAGS_SEGMENTED_SEGMENTED;

> +		create_req.BaseAddress = cpu_to_le64(

> +		    op_reply_q->q_segment_list_dma);

> +	} else

> +		create_req.BaseAddress = cpu_to_le64(

> +		    op_reply_q->q_segments[0].segment_dma);

> +

> +	create_req.Size = cpu_to_le16(op_reply_q->num_replies);

> +

> +	init_completion(&mrioc->init_cmds.done);

> +	retval = mpi3mr_admin_request_post(mrioc, &create_req,

> +	    sizeof(create_req), 1);

> +	if (retval) {

> +		ioc_err(mrioc, "CreateRepQ: Admin Post failed\n");

> +		goto out_unlock;

> +	}

> +	wait_for_completion_timeout(&mrioc->init_cmds.done,

> +	    (MPI3MR_INTADMCMD_TIMEOUT * HZ));

> +	if (!(mrioc->init_cmds.state & MPI3MR_CMD_COMPLETE)) {

> +		ioc_err(mrioc, "CreateRepQ: command timed out\n");

> +		mpi3mr_set_diagsave(mrioc);

> +		mpi3mr_issue_reset(mrioc,

> +		    MPI3_SYSIF_HOST_DIAG_RESET_ACTION_DIAG_FAULT,

> +		    MPI3MR_RESET_FROM_CREATEREPQ_TIMEOUT);

> +		mrioc->unrecoverable = 1;

> +		retval = -1;

> +		goto out_unlock;

> +	}

> +	if ((mrioc->init_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK)

> +	    != MPI3_IOCSTATUS_SUCCESS) {

> +		ioc_err(mrioc,

> +		    "CreateRepQ: Failed IOCStatus(0x%04x) Loginfo(0x%08x)\n",

> +		    (mrioc->init_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK),

> +		    mrioc->init_cmds.ioc_loginfo);

> +		retval = -1;

> +		goto out_unlock;

> +	}

> +	op_reply_q->qid = reply_qid;

> +	mrioc->intr_info[midx].op_reply_q = op_reply_q;

> +

> +out_unlock:

> +	mrioc->init_cmds.state = MPI3MR_CMD_NOTUSED;

> +	mutex_unlock(&mrioc->init_cmds.mutex);

> +out:

> +

> +	return retval;

> +}

> +

> +/**

> + * mpi3mr_create_op_req_q - create operational request queue

> + * @mrioc: Adapter instance reference

> + * @idx: operational request queue index

> + * @reply_qid: Reply queue ID

> + *

> + * Create operational request queue by issuing MPI request

> + * through admin queue.

> + *

> + * Return:  0 on success, non-zero on failure.

> + */

> +static int mpi3mr_create_op_req_q(struct mpi3mr_ioc *mrioc, u16 idx,

> +	u16 reply_qid)

> +{

> +	Mpi3CreateRequestQueueRequest_t create_req;

> +	struct op_req_qinfo *op_req_q = mrioc->req_qinfo + idx;

> +	int retval = 0;

> +	u16 req_qid = 0;

> +

> +

> +	req_qid = op_req_q->qid;

> +

> +	if (req_qid) {

> +		retval = -1;

> +		ioc_err(mrioc, "CreateReqQ: called for duplicate qid %d\n",

> +		    req_qid);

> +

> +		return retval;

> +	}

> +	req_qid = idx + 1;

> +

> +	op_req_q->num_requests = MPI3MR_OP_REQ_Q_QD;

> +	op_req_q->ci = 0;

> +	op_req_q->pi = 0;

> +	op_req_q->reply_qid = reply_qid;

> +	spin_lock_init(&op_req_q->q_lock);

> +

> +	if (!op_req_q->q_segments) {

> +		retval = mpi3mr_alloc_op_req_q_segments(mrioc, idx);

> +		if (retval) {

> +			mpi3mr_free_op_req_q_segments(mrioc, idx);

> +			goto out;

> +		}

> +	}

> +

> +	memset(&create_req, 0, sizeof(create_req));

> +	mutex_lock(&mrioc->init_cmds.mutex);

> +	if (mrioc->init_cmds.state & MPI3MR_CMD_PENDING) {

> +		retval = -1;

> +		ioc_err(mrioc, "CreateReqQ: Init command is in use\n");

> +		goto out;

> +	}

> +	mrioc->init_cmds.state = MPI3MR_CMD_PENDING;

> +	mrioc->init_cmds.is_waiting = 1;

> +	mrioc->init_cmds.callback = NULL;

> +	create_req.HostTag = cpu_to_le16(MPI3MR_HOSTTAG_INITCMDS);

> +	create_req.Function = MPI3_FUNCTION_CREATE_REQUEST_QUEUE;

> +	create_req.QueueID = cpu_to_le16(req_qid);

> +	if (mrioc->enable_segqueue) {

> +		create_req.Flags =

> +		    MPI3_CREATE_REQUEST_QUEUE_FLAGS_SEGMENTED_SEGMENTED;

> +		create_req.BaseAddress = cpu_to_le64(

> +		    op_req_q->q_segment_list_dma);

> +	} else

> +		create_req.BaseAddress = cpu_to_le64(

> +		    op_req_q->q_segments[0].segment_dma);

> +	create_req.ReplyQueueID = cpu_to_le16(reply_qid);

> +	create_req.Size = cpu_to_le16(op_req_q->num_requests);

> +

> +	init_completion(&mrioc->init_cmds.done);

> +	retval = mpi3mr_admin_request_post(mrioc, &create_req,

> +	    sizeof(create_req), 1);

> +	if (retval) {

> +		ioc_err(mrioc, "CreateReqQ: Admin Post failed\n");

> +		goto out_unlock;

> +	}

> +	wait_for_completion_timeout(&mrioc->init_cmds.done,

> +	    (MPI3MR_INTADMCMD_TIMEOUT * HZ));

> +	if (!(mrioc->init_cmds.state & MPI3MR_CMD_COMPLETE)) {

> +		ioc_err(mrioc, "CreateReqQ: command timed out\n");

> +		mpi3mr_set_diagsave(mrioc);

> +		if (mpi3mr_issue_reset(mrioc,

> +		    MPI3_SYSIF_HOST_DIAG_RESET_ACTION_DIAG_FAULT,

> +		    MPI3MR_RESET_FROM_CREATEREQQ_TIMEOUT))

> +			mrioc->unrecoverable = 1;

> +		retval = -1;

> +		goto out_unlock;

> +	}

> +	if ((mrioc->init_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK)

> +	    != MPI3_IOCSTATUS_SUCCESS) {

> +		ioc_err(mrioc,

> +		    "CreateReqQ: Failed IOCStatus(0x%04x) Loginfo(0x%08x)\n",

> +		    (mrioc->init_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK),

> +		    mrioc->init_cmds.ioc_loginfo);

> +		retval = -1;

> +		goto out_unlock;

> +	}

> +	op_req_q->qid = req_qid;

> +

> +out_unlock:

> +	mrioc->init_cmds.state = MPI3MR_CMD_NOTUSED;

> +	mutex_unlock(&mrioc->init_cmds.mutex);

> +out:

> +

> +	return retval;

> +}

> +

> +/**

> + * mpi3mr_create_op_queues - create operational queue pairs

> + * @mrioc: Adapter instance reference

> + *

> + * Allocate memory for operational queue meta data and call

> + * create request and reply queue functions.

> + *

> + * Return: 0 on success, non-zero on failures.

> + */

> +static int mpi3mr_create_op_queues(struct mpi3mr_ioc *mrioc)

> +{

> +	int retval = 0;

> +	u16 num_queues = 0, i = 0, msix_count_op_q = 1;

> +

> +	num_queues = min_t(int, mrioc->facts.max_op_reply_q,

> +	    mrioc->facts.max_op_req_q);

> +

> +	msix_count_op_q =

> +	    mrioc->intr_info_count - mrioc->op_reply_q_offset;

> +	if (!mrioc->num_queues)

> +		mrioc->num_queues = min_t(int, num_queues, msix_count_op_q);

> +	num_queues = mrioc->num_queues;

> +	ioc_info(mrioc, "Trying to create %d Operational Q pairs\n",

> +	    num_queues);

> +

> +	if (!mrioc->req_qinfo) {

> +		mrioc->req_qinfo = kcalloc(num_queues,

> +		    sizeof(struct op_req_qinfo), GFP_KERNEL);

> +		if (!mrioc->req_qinfo) {

> +			retval = -1;

> +			goto out_failed;

> +		}

> +

> +		mrioc->op_reply_qinfo = kzalloc(sizeof(struct op_reply_qinfo) *

> +		    num_queues, GFP_KERNEL);

> +		if (!mrioc->op_reply_qinfo) {

> +			retval = -1;

> +			goto out_failed;

> +		}

> +	}

> +

> +	if (mrioc->enable_segqueue)

> +		ioc_info(mrioc,

> +		    "allocating operational queues through segmented queues\n");

> +

> +	for (i = 0; i < num_queues; i++) {

> +		if (mpi3mr_create_op_reply_q(mrioc, i)) {

> +			ioc_err(mrioc, "Cannot create OP RepQ %d\n", i);

> +			break;

> +		}

> +		if (mpi3mr_create_op_req_q(mrioc, i,

> +		    mrioc->op_reply_qinfo[i].qid)) {

> +			ioc_err(mrioc, "Cannot create OP ReqQ %d\n", i);

> +			mpi3mr_delete_op_reply_q(mrioc, i);

> +			break;

> +		}

> +	}

> +

> +	if (i == 0) {

> +		/* Not even one queue was created successfully */

> +		retval = -1;

> +		goto out_failed;

> +	}

> +	mrioc->num_op_reply_q = mrioc->num_op_req_q = i;

> +	ioc_info(mrioc, "Successfully created %d Operational Q pairs\n",

> +	    mrioc->num_op_reply_q);

> +

> +

> +	return retval;

> +out_failed:

> +	kfree(mrioc->req_qinfo);

> +	mrioc->req_qinfo = NULL;

> +

> +	kfree(mrioc->op_reply_qinfo);

> +	mrioc->op_reply_qinfo = NULL;

> +

> +

> +	return retval;

> +}

> +

>   

>   /**

>    * mpi3mr_setup_admin_qpair - Setup admin queue pair

> @@ -1599,6 +2187,13 @@ int mpi3mr_init_ioc(struct mpi3mr_ioc *mrioc)

>   		goto out_failed;

>   	}

>   

> +	retval = mpi3mr_create_op_queues(mrioc);

> +	if (retval) {

> +		ioc_err(mrioc, "Failed to create OpQueues error %d\n",

> +		    retval);

> +		goto out_failed;

> +	}

> +

>   	return retval;

>   

>   out_failed:

> @@ -1655,6 +2250,12 @@ static void mpi3mr_free_mem(struct mpi3mr_ioc *mrioc)

>   		mrioc->reply_free_q_pool = NULL;

>   	}

>   

> +	for (i = 0; i < mrioc->num_op_req_q; i++)

> +		mpi3mr_free_op_req_q_segments(mrioc, i);

> +

> +	for (i = 0; i < mrioc->num_op_reply_q; i++)

> +		mpi3mr_free_op_reply_q_segments(mrioc, i);

> +

>   	for (i = 0; i < mrioc->intr_info_count; i++) {

>   		intr_info = mrioc->intr_info + i;

>   		if (intr_info)

> diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c

> index c31ec9883152..3cf0be63842f 100644

> --- a/drivers/scsi/mpi3mr/mpi3mr_os.c

> +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c

> @@ -41,7 +41,7 @@ static int mpi3mr_map_queues(struct Scsi_Host *shost)

>   	struct mpi3mr_ioc *mrioc = shost_priv(shost);

>   

>   	return blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],

> -	    mrioc->pdev, 0);

> +	    mrioc->pdev, mrioc->op_reply_q_offset);

>   }
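
(For reference, the last argument of blk_mq_pci_map_queues() is a vector offset,
so with op_reply_q_offset == 1 the effective mapping becomes hctx 0 -> MSI-X
vector 1, hctx 1 -> vector 2, and so on, leaving vector 0 to the admin reply
queue.)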

>   

>   /**

> @@ -220,6 +220,8 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id)

>   	spin_lock_init(&mrioc->sbq_lock);

>   

>   	mpi3mr_init_drv_cmd(&mrioc->init_cmds, MPI3MR_HOSTTAG_INITCMDS);

> +	if (pdev->revision)

> +		mrioc->enable_segqueue = true;

>   

>   	mrioc->logging_level = logging_level;

>   	mrioc->shost = shost;

> 

Other than that:

Reviewed-by: Hannes Reinecke <hare@suse.de>


Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Hannes Reinecke Feb. 28, 2021, 1:24 p.m. UTC | #8
On 12/22/20 11:11 AM, Kashyap Desai wrote:
> The watchdog thread is the driver's internal thread which handles a few things

> like detecting FW faults and resetting the controller, timestamp sync, etc.

> 
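
The usual shape of such a thread is a self-rearming delayed work item; a minimal
sketch for orientation (the field and helper names below are made up, not the
driver's actual symbols):

	static void my_watchdog_work(struct work_struct *work)
	{
		struct mpi3mr_ioc *mrioc = container_of(work,
		    struct mpi3mr_ioc, watchdog_work.work);

		if (fw_fault_detected(mrioc))		/* hypothetical helper */
			schedule_controller_reset(mrioc); /* hypothetical helper */

		/* re-arm for the next interval */
		queue_delayed_work(mrioc->watchdog_wq, &mrioc->watchdog_work,
		    msecs_to_jiffies(1000));
	}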

> Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>

> Cc: sathya.prakash@broadcom.com

> ---

>   drivers/scsi/mpi3mr/mpi3mr.h    |  11 +++

>   drivers/scsi/mpi3mr/mpi3mr_fw.c | 125 ++++++++++++++++++++++++++++++++

>   drivers/scsi/mpi3mr/mpi3mr_os.c |   3 +

>   3 files changed, 139 insertions(+)

> 

Reviewed-by: Hannes Reinecke <hare@suse.de>


Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Hannes Reinecke March 1, 2021, 6:47 a.m. UTC | #9
On 12/22/20 11:11 AM, Kashyap Desai wrote:
> Firmware can report various MPI Events.

> Support for certain events (as listed below) is enabled in the driver,

> and their processing in the driver is covered in this patch.

> 

> MPI3_EVENT_DEVICE_ADDED

> MPI3_EVENT_DEVICE_INFO_CHANGED

> MPI3_EVENT_DEVICE_STATUS_CHANGE

> MPI3_EVENT_ENCL_DEVICE_STATUS_CHANGE

> MPI3_EVENT_SAS_TOPOLOGY_CHANGE_LIST

> MPI3_EVENT_SAS_DISCOVERY

> MPI3_EVENT_SAS_DEVICE_DISCOVERY_ERROR

> 

> Key support in this patch is device add/removal.

> 

> Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>

> Cc: sathya.prakash@broadcom.com

> ---

>   drivers/scsi/mpi3mr/mpi/mpi30_api.h  |    2 +

>   drivers/scsi/mpi3mr/mpi/mpi30_cnfg.h | 2721 ++++++++++++++++++++++++++

>   drivers/scsi/mpi3mr/mpi/mpi30_sas.h  |   46 +

>   drivers/scsi/mpi3mr/mpi3mr.h         |  195 ++

>   drivers/scsi/mpi3mr/mpi3mr_fw.c      |  177 +-

>   drivers/scsi/mpi3mr/mpi3mr_os.c      | 1452 ++++++++++++++

>   6 files changed, 4592 insertions(+), 1 deletion(-)

>   create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_cnfg.h

>   create mode 100644 drivers/scsi/mpi3mr/mpi/mpi30_sas.h

> 

> diff --git a/drivers/scsi/mpi3mr/mpi/mpi30_api.h b/drivers/scsi/mpi3mr/mpi/mpi30_api.h

> index ca07387536d3..7bdd5aeb23be 100644

> --- a/drivers/scsi/mpi3mr/mpi/mpi30_api.h

> +++ b/drivers/scsi/mpi3mr/mpi/mpi30_api.h

> @@ -14,8 +14,10 @@

>   

>   #include "mpi30_type.h"

>   #include "mpi30_transport.h"

> +#include "mpi30_cnfg.h"

>   #include "mpi30_image.h"

>   #include "mpi30_init.h"

>   #include "mpi30_ioc.h"

> +#include "mpi30_sas.h"

>   

>   #endif  /* MPI30_API_H */

> diff --git a/drivers/scsi/mpi3mr/mpi/mpi30_cnfg.h b/drivers/scsi/mpi3mr/mpi/mpi30_cnfg.h

> new file mode 100644

> index 000000000000..3badb1bb85b1

> --- /dev/null

> +++ b/drivers/scsi/mpi3mr/mpi/mpi30_cnfg.h

> @@ -0,0 +1,2721 @@

> +/*

> + *  Copyright 2017-2020 Broadcom Inc. All rights reserved.

> + *

> + *           Name: mpi30_cnfg.h

> + *    Description: Contains definitions for Configuration messages and pages

> + *  Creation Date: 03/15/2017

> + *        Version: 03.00.00

> + */

> +#ifndef MPI30_CNFG_H

> +#define MPI30_CNFG_H     1

> +

> +/*****************************************************************************

> + *              Configuration Page Types                                     *

> + ****************************************************************************/

> +#define MPI3_CONFIG_PAGETYPE_IO_UNIT                    (0x00)

> +#define MPI3_CONFIG_PAGETYPE_MANUFACTURING              (0x01)

> +#define MPI3_CONFIG_PAGETYPE_IOC                        (0x02)

> +#define MPI3_CONFIG_PAGETYPE_UEFI_BSD                   (0x03)

> +#define MPI3_CONFIG_PAGETYPE_SECURITY                   (0x04)

> +#define MPI3_CONFIG_PAGETYPE_ENCLOSURE                  (0x11)

> +#define MPI3_CONFIG_PAGETYPE_DEVICE                     (0x12)

> +#define MPI3_CONFIG_PAGETYPE_SAS_IO_UNIT                (0x20)

> +#define MPI3_CONFIG_PAGETYPE_SAS_EXPANDER               (0x21)

> +#define MPI3_CONFIG_PAGETYPE_SAS_PHY                    (0x23)

> +#define MPI3_CONFIG_PAGETYPE_SAS_PORT                   (0x24)

> +#define MPI3_CONFIG_PAGETYPE_PCIE_IO_UNIT               (0x30)

> +#define MPI3_CONFIG_PAGETYPE_PCIE_SWITCH                (0x31)

> +#define MPI3_CONFIG_PAGETYPE_PCIE_LINK                  (0x33)

> +

> +/*****************************************************************************

> + *              Configuration Page Attributes                                *

> + ****************************************************************************/

> +#define MPI3_CONFIG_PAGEATTR_MASK                       (0xF0)

> +#define MPI3_CONFIG_PAGEATTR_READ_ONLY                  (0x00)

> +#define MPI3_CONFIG_PAGEATTR_CHANGEABLE                 (0x10)

> +#define MPI3_CONFIG_PAGEATTR_PERSISTENT                 (0x20)

> +

> +/*****************************************************************************

> + *              Configuration Page Actions                                   *

> + ****************************************************************************/

> +#define MPI3_CONFIG_ACTION_PAGE_HEADER                  (0x00)

> +#define MPI3_CONFIG_ACTION_READ_DEFAULT                 (0x01)

> +#define MPI3_CONFIG_ACTION_READ_CURRENT                 (0x02)

> +#define MPI3_CONFIG_ACTION_WRITE_CURRENT                (0x03)

> +#define MPI3_CONFIG_ACTION_READ_PERSISTENT              (0x04)

> +#define MPI3_CONFIG_ACTION_WRITE_PERSISTENT             (0x05)

> +

> +/*****************************************************************************

> + *              Configuration Page Addressing                                *

> + ****************************************************************************/

> +

> +/**** Device PageAddress Format ****/

> +#define MPI3_DEVICE_PGAD_FORM_MASK                      (0xF0000000)

> +#define MPI3_DEVICE_PGAD_FORM_GET_NEXT_HANDLE           (0x00000000)

> +#define MPI3_DEVICE_PGAD_FORM_HANDLE                    (0x20000000)

> +#define MPI3_DEVICE_PGAD_HANDLE_MASK                    (0x0000FFFF)

> +
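
As a usage note, these FORM/HANDLE fields are OR'd together to build the
PageAddress of a configuration request for a specific device handle, roughly:

	u32 page_address = MPI3_DEVICE_PGAD_FORM_HANDLE |
	    (dev_handle & MPI3_DEVICE_PGAD_HANDLE_MASK);

(dev_handle here is just a stand-in for whatever handle the caller holds.)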

> +/**** SAS Expander PageAddress Format ****/

> +#define MPI3_SAS_EXPAND_PGAD_FORM_MASK                  (0xF0000000)

> +#define MPI3_SAS_EXPAND_PGAD_FORM_GET_NEXT_HANDLE       (0x00000000)

> +#define MPI3_SAS_EXPAND_PGAD_FORM_HANDLE_PHY_NUM        (0x10000000)

> +#define MPI3_SAS_EXPAND_PGAD_FORM_HANDLE                (0x20000000)

> +#define MPI3_SAS_EXPAND_PGAD_PHYNUM_MASK                (0x00FF0000)

> +#define MPI3_SAS_EXPAND_PGAD_PHYNUM_SHIFT               (16)

> +#define MPI3_SAS_EXPAND_PGAD_HANDLE_MASK                (0x0000FFFF)

> +

> +/**** SAS Phy PageAddress Format ****/

> +#define MPI3_SAS_PHY_PGAD_FORM_MASK                     (0xF0000000)

> +#define MPI3_SAS_PHY_PGAD_FORM_PHY_NUMBER               (0x00000000)

> +#define MPI3_SAS_PHY_PGAD_PHY_NUMBER_MASK               (0x000000FF)

> +

> +/**** SAS Port PageAddress Format ****/

> +#define MPI3_SASPORT_PGAD_FORM_MASK                     (0xF0000000)

> +#define MPI3_SASPORT_PGAD_FORM_GET_NEXT_PORT            (0x00000000)

> +#define MPI3_SASPORT_PGAD_FORM_PORT_NUM                 (0x10000000)

> +#define MPI3_SASPORT_PGAD_PORT_NUMBER_MASK              (0x000000FF)

> +

> +/**** Enclosure PageAddress Format ****/

> +#define MPI3_ENCLOS_PGAD_FORM_MASK                      (0xF0000000)

> +#define MPI3_ENCLOS_PGAD_FORM_GET_NEXT_HANDLE           (0x00000000)

> +#define MPI3_ENCLOS_PGAD_FORM_HANDLE                    (0x10000000)

> +#define MPI3_ENCLOS_PGAD_HANDLE_MASK                    (0x0000FFFF)

> +

> +/**** PCIe Switch PageAddress Format ****/

> +#define MPI3_PCIE_SWITCH_PGAD_FORM_MASK                 (0xF0000000)

> +#define MPI3_PCIE_SWITCH_PGAD_FORM_GET_NEXT_HANDLE      (0x00000000)

> +#define MPI3_PCIE_SWITCH_PGAD_FORM_HANDLE_PORT_NUM      (0x10000000)

> +#define MPI3_PCIE_SWITCH_PGAD_FORM_HANDLE               (0x20000000)

> +#define MPI3_PCIE_SWITCH_PGAD_PORTNUM_MASK              (0x00FF0000)

> +#define MPI3_PCIE_SWITCH_PGAD_PORTNUM_SHIFT             (16)

> +#define MPI3_PCIE_SWITCH_PGAD_HANDLE_MASK               (0x0000FFFF)

> +

> +/**** PCIe Link PageAddress Format ****/

> +#define MPI3_PCIE_LINK_PGAD_FORM_MASK                   (0xF0000000)

> +#define MPI3_PCIE_LINK_PGAD_FORM_GET_NEXT_LINK          (0x00000000)

> +#define MPI3_PCIE_LINK_PGAD_FORM_LINK_NUM               (0x10000000)

> +#define MPI3_PCIE_LINK_PGAD_LINKNUM_MASK                (0x000000FF)

> +

> +/**** Security PageAddress Format ****/

> +#define MPI3_SECURITY_PGAD_FORM_MASK                    (0xF0000000)

> +#define MPI3_SECURITY_PGAD_FORM_GET_NEXT_SLOT           (0x00000000)

> +#define MPI3_SECURITY_PGAD_FORM_SOT_NUM                 (0x10000000)

> +#define MPI3_SECURITY_PGAD_SLOT_GROUP_MASK              (0x0000FF00)

> +#define MPI3_SECURITY_PGAD_SLOT_MASK                    (0x000000FF)

> +

> +/*****************************************************************************

> + *              Configuration Request Message                                *

> + ****************************************************************************/

> +typedef struct _MPI3_CONFIG_REQUEST {

> +    U16             HostTag;                            /* 0x00 */

> +    U8              IOCUseOnly02;                       /* 0x02 */

> +    U8              Function;                           /* 0x03 */

> +    U16             IOCUseOnly04;                       /* 0x04 */

> +    U8              IOCUseOnly06;                       /* 0x06 */

> +    U8              MsgFlags;                           /* 0x07 */

> +    U16             ChangeCount;                        /* 0x08 */

> +    U16             Reserved0A;                         /* 0x0A */

> +    U8              PageVersion;                        /* 0x0C */

> +    U8              PageNumber;                         /* 0x0D */

> +    U8              PageType;                           /* 0x0E */

> +    U8              Action;                             /* 0x0F */

> +    U32             PageAddress;                        /* 0x10 */

> +    U16             PageLength;                         /* 0x14 */

> +    U16             Reserved16;                         /* 0x16 */

> +    U32             Reserved18[2];                      /* 0x18 */

> +    MPI3_SGE_UNION  SGL;                                /* 0x20 */

> +} MPI3_CONFIG_REQUEST, MPI3_POINTER PTR_MPI3_CONFIG_REQUEST,

> +  Mpi3ConfigRequest_t, MPI3_POINTER pMpi3ConfigRequest_t;

> +

Can you please restrict yourself to _one_ coding style?
I.e. please keep your typedefs to all caps, or mixed caps.
But having both is just silly.
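
To make that concrete (the struct spelling below is only an illustration, not a
name that exists in the patch):

	struct mpi3_config_request cfg_req;	/* plain struct - preferred kernel style */
	MPI3_CONFIG_REQUEST        cfg_req2;	/* all-caps typedef */
	Mpi3ConfigRequest_t        cfg_req3;	/* mixed-caps typedef */

Any one of these used consistently is readable; mixing the last two throughout
the headers is what the comment above objects to.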

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Hannes Reinecke March 1, 2021, 6:52 a.m. UTC | #10
On 12/22/20 11:11 AM, Kashyap Desai wrote:
> Firmware can report various MPI Events.

> Support for certain events (as listed below) is enabled in the driver,

> and their processing in the driver is covered in this patch.

> 

> MPI3_EVENT_PCIE_TOPOLOGY_CHANGE_LIST

> MPI3_EVENT_PCIE_ENUMERATION

> 

> Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>

> Cc: sathya.prakash@broadcom.com

> ---

>   drivers/scsi/mpi3mr/mpi3mr_fw.c |   2 +

>   drivers/scsi/mpi3mr/mpi3mr_os.c | 202 ++++++++++++++++++++++++++++++++

>   2 files changed, 204 insertions(+)

> 

Reviewed-by: Hannes Reinecke <hare@suse.de>


Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Hannes Reinecke March 1, 2021, 6:57 a.m. UTC | #11
On 12/22/20 11:11 AM, Kashyap Desai wrote:
> This operation requests that the IOC update the TimeStamp.

> 

> When the I/O Unit is powered on, it sets the TimeStamp field value to

> 0x0000_0000_0000_0000 and increments the current value every millisecond.

> A host driver sets the TimeStamp field to the current time by using an

> IOCInit request. The TimeStamp field is periodically updated by host driver.

> 

> Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>

> Cc: sathya.prakash@broadcom.com
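
A rough sketch of what the periodic update amounts to on the host side (only the
ktime helpers are real kernel APIs; the request variable and field name are
placeholders):

	u64 current_time_ms = ktime_get_real_ns();

	do_div(current_time_ms, NSEC_PER_MSEC);	/* wall clock in milliseconds */
	iou_ctrl_req.TimeStamp = cpu_to_le64(current_time_ms);	/* placeholder field */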

> ---

>   drivers/scsi/mpi3mr/mpi3mr.h    |  3 ++

>   drivers/scsi/mpi3mr/mpi3mr_fw.c | 74 +++++++++++++++++++++++++++++++++

>   2 files changed, 77 insertions(+)

> 

Reviewed-by: Hannes Reinecke <hare@suse.de>


Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Hannes Reinecke March 1, 2021, 7 a.m. UTC | #12
On 12/22/20 11:11 AM, Kashyap Desai wrote:
> Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>

> Cc: sathya.prakash@broadcom.com

> ---

>   drivers/scsi/mpi3mr/mpi3mr_os.c | 40 +++++++++++++++++++++++++++++++++

>   1 file changed, 40 insertions(+)

> 

> diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c

> index 4d94352a4d48..7e0eacf45d84 100644

> --- a/drivers/scsi/mpi3mr/mpi3mr_os.c

> +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c

> @@ -2078,6 +2078,45 @@ static int mpi3mr_build_sg_scmd(struct mpi3mr_ioc *mrioc,

>   	return ret;

>   }

>   

> +/**

> + * mpi3mr_bios_param - BIOS param callback

> + * @sdev: SCSI device reference

> + * @bdev: Block device reference

> + * @capacity: Capacity in logical sectors

> + * @params: Parameter array

> + *

> + * Just sets the parameters with heads/sectors/cylinders.

> + *

> + * Return: 0 always

> + */

> +static int mpi3mr_bios_param(struct scsi_device *sdev,

> +	struct block_device *bdev, sector_t capacity, int params[])

> +{

> +	int heads;

> +	int sectors;

> +	sector_t cylinders;

> +	ulong dummy;

> +

> +	heads = 64;

> +	sectors = 32;

> +

> +	dummy = heads * sectors;

> +	cylinders = capacity;

> +	sector_div(cylinders, dummy);

> +

> +	if ((ulong)capacity >= 0x200000) {

> +		heads = 255;

> +		sectors = 63;

> +		dummy = heads * sectors;

> +		cylinders = capacity;

> +		sector_div(cylinders, dummy);

> +	}

> +

> +	params[0] = heads;

> +	params[1] = sectors;

> +	params[2] = cylinders;

> +	return 0;

> +}
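
A quick worked example of the geometry this reports (the capacity is arbitrary):

	/* a 2 TiB disk has capacity = 4294967296 512-byte sectors, which is
	 * >= 0x200000, so heads = 255, sectors = 63 and
	 * cylinders = 4294967296 / (255 * 63) = 267349
	 */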

>   

>   /**

>    * mpi3mr_map_queues - Map queues callback handler

> @@ -2511,6 +2550,7 @@ static struct scsi_host_template mpi3mr_driver_template = {

>   	.slave_destroy			= mpi3mr_slave_destroy,

>   	.scan_finished			= mpi3mr_scan_finished,

>   	.scan_start			= mpi3mr_scan_start,

> +	.bios_param			= mpi3mr_bios_param,

>   	.map_queues			= mpi3mr_map_queues,

>   	.no_write_same			= 1,

>   	.can_queue			= 1,

> 

OMG. I had hoped we could kill this eventually.
Oh well.

Reviewed-by: Hannes Reinecke <hare@suse.de>


Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Hannes Reinecke March 1, 2021, 7:11 a.m. UTC | #13
On 12/22/20 11:11 AM, Kashyap Desai wrote:
> This patch allows SSU and SYNCHRONIZE CACHE commands to be sent to the controller

> during driver unload, instead of the driver returning DID_NO_CONNECT, so that

> any cached data is flushed from the drive.

> 

> Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>

> Cc: sathya.prakash@broadcom.com

> ---

>   drivers/scsi/mpi3mr/mpi3mr_os.c | 24 +++++++++++++++++++++++-

>   1 file changed, 23 insertions(+), 1 deletion(-)

> 

> diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c

> index 6f19e5392433..07a7b1efbc4f 100644

> --- a/drivers/scsi/mpi3mr/mpi3mr_os.c

> +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c

> @@ -2865,6 +2865,27 @@ static int mpi3mr_target_alloc(struct scsi_target *starget)

>   	return retval;

>   }

>   

> +

> +/**

> + * mpi3mr_allow_scmd_to_fw - Command is allowed during shutdown

> + * @scmd: SCSI Command reference

> + *

> + * Checks whether a CDB is allowed during shutdown or not.

> + *

> + * Return: TRUE for allowed commands, FALSE otherwise.

> + */

> +

> +inline bool mpi3mr_allow_scmd_to_fw(struct scsi_cmnd *scmd)

> +{

> +	switch (scmd->cmnd[0]) {

> +	case SYNCHRONIZE_CACHE:

> +	case START_STOP:

> +		return true;

> +	default:

> +		return false;

> +	}

> +}

> +

>   /**

>    * mpi3mr_qcmd - I/O request dispatcher

>    * @shost: SCSI Host reference

> @@ -2900,7 +2921,8 @@ static int mpi3mr_qcmd(struct Scsi_Host *shost,

>   		goto out;

>   	}

>   

> -	if (mrioc->stop_drv_processing) {

> +	if (mrioc->stop_drv_processing &&

> +	    !(mpi3mr_allow_scmd_to_fw(scmd))) {

>   		scmd->result = DID_NO_CONNECT << 16;

>   		scmd->scsi_done(scmd);

>   		goto out;

> 

Reviewed-by: Hannes Reinecke <hare@suse.com>


Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Hannes Reinecke March 1, 2021, 7:13 a.m. UTC | #14
On 12/22/20 11:11 AM, Kashyap Desai wrote:
> The controller hardware cannot handle certain UNMAP commands for NVMe

> drives; this patch adds support in the driver to check those commands and

> handle them as appropriate.

> 
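
For context, the checks below follow the SBC UNMAP parameter list layout; a
minimal sketch of the fields being read (buf stands for the copied-out parameter
data):

	u16 data_len = get_unaligned_be16(&buf[0]);	/* UNMAP data length */
	u16 desc_len = get_unaligned_be16(&buf[2]);	/* block descriptor data length */
	/* block descriptors start at byte 8, 16 bytes each:
	 * 8-byte LBA, 4-byte number of blocks, 4 bytes reserved
	 */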

> Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>

> Cc: sathya.prakash@broadcom.com

> ---

>   drivers/scsi/mpi3mr/mpi3mr_os.c | 99 +++++++++++++++++++++++++++++++++

>   1 file changed, 99 insertions(+)

> 

> diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c

> index 07a7b1efbc4f..742cf45d4878 100644

> --- a/drivers/scsi/mpi3mr/mpi3mr_os.c

> +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c

> @@ -2865,6 +2865,100 @@ static int mpi3mr_target_alloc(struct scsi_target *starget)

>   	return retval;

>   }

>   

> +/**

> + * mpi3mr_check_return_unmap - Whether an unmap is allowed

> + * @mrioc: Adapter instance reference

> + * @scmd: SCSI Command reference

> + *

> + * The controller hardware cannot handle certain unmap commands

> + * for NVMe drives, this routine checks those and returns true

> + * and completes the SCSI command with proper status and sense

> + * data.

> + *

> + * Return: TRUE for a disallowed unmap, FALSE otherwise.

> + */

> +static bool mpi3mr_check_return_unmap(struct mpi3mr_ioc *mrioc,

> +	struct scsi_cmnd *scmd)

> +{

> +	unsigned char *buf;

> +	u16 param_len, desc_len;

> +

> +	param_len = get_unaligned_be16(scmd->cmnd + 7);

> +

> +	if (!param_len) {

> +		ioc_warn(mrioc,

> +		    "%s: CDB received with zero parameter length\n",

> +		    __func__);

> +		scsi_print_command(scmd);

> +		scmd->result = DID_OK << 16;

> +		scmd->scsi_done(scmd);

> +		return true;

> +	}

> +

> +	if (param_len < 24) {

> +		ioc_warn(mrioc,

> +		    "%s: CDB received with invalid param_len: %d\n",

> +		    __func__, param_len);

> +		scsi_print_command(scmd);

> +		scmd->result = (DRIVER_SENSE << 24) |

> +		    SAM_STAT_CHECK_CONDITION;

> +		scsi_build_sense_buffer(0, scmd->sense_buffer, ILLEGAL_REQUEST,

> +		    0x1A, 0);

> +		scmd->scsi_done(scmd);

> +		return true;

> +	}

> +	if (param_len != scsi_bufflen(scmd)) {

> +		ioc_warn(mrioc,

> +		    "%s: CDB received with param_len: %d bufflen: %d\n",

> +		    __func__, param_len, scsi_bufflen(scmd));

> +		scsi_print_command(scmd);

> +		scmd->result = (DRIVER_SENSE << 24) |

> +		    SAM_STAT_CHECK_CONDITION;

> +		scsi_build_sense_buffer(0, scmd->sense_buffer, ILLEGAL_REQUEST,

> +		    0x1A, 0);

> +		scmd->scsi_done(scmd);

> +		return true;

> +	}

> +	buf = kzalloc(scsi_bufflen(scmd), GFP_ATOMIC);

> +	if (!buf) {

> +		scsi_print_command(scmd);

> +		scmd->result = (DRIVER_SENSE << 24) |

> +		    SAM_STAT_CHECK_CONDITION;

> +		scsi_build_sense_buffer(0, scmd->sense_buffer, ILLEGAL_REQUEST,

> +		    0x55, 0x03);

> +		scmd->scsi_done(scmd);

> +		return true;

> +	}

> +	scsi_sg_copy_to_buffer(scmd, buf, scsi_bufflen(scmd));

> +	desc_len = get_unaligned_be16(&buf[2]);

> +

> +	if (desc_len < 16) {

> +		ioc_warn(mrioc,

> +		    "%s: Invalid descriptor length in param list: %d\n",

> +		    __func__, desc_len);

> +		scsi_print_command(scmd);

> +		scmd->result = (DRIVER_SENSE << 24) |

> +		    SAM_STAT_CHECK_CONDITION;

> +		scsi_build_sense_buffer(0, scmd->sense_buffer, ILLEGAL_REQUEST,

> +		    0x26, 0);

> +		scmd->scsi_done(scmd);

> +		kfree(buf);

> +		return true;

> +	}

> +

> +	if (param_len > (desc_len + 8)) {

> +		scsi_print_command(scmd);

> +		ioc_warn(mrioc,

> +		    "%s: Truncating param_len(%d) to desc_len+8(%d)\n",

> +		    __func__, param_len, (desc_len + 8));

> +		param_len = desc_len + 8;

> +		put_unaligned_be16(param_len, scmd->cmnd+7);

> +		scsi_print_command(scmd);

> +	}

> +

> +	kfree(buf);

> +	return false;

> +}

>   

>   /**

>    * mpi3mr_allow_scmd_to_fw - Command is allowed during shutdown

> @@ -2957,6 +3051,11 @@ static int mpi3mr_qcmd(struct Scsi_Host *shost,

>   		goto out;

>   	}

>   

> +	if ((scmd->cmnd[0] == UNMAP) &&

> +	    (stgt_priv_data->dev_type == MPI3_DEVICE_DEVFORM_PCIE) &&

> +	    mpi3mr_check_return_unmap(mrioc, scmd))

> +		goto out;

> +

>   	host_tag = mpi3mr_host_tag_for_scmd(mrioc, scmd);

>   	if (host_tag == MPI3MR_HOSTTAG_INVALID) {

>   		scmd->result = DID_ERROR << 16;

> 

One _could_ have modified the firmware instead ... oh well.

Reviewed-by: Hannes Reinecke <hare@suse.de>


Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Hannes Reinecke March 1, 2021, 7:14 a.m. UTC | #15
On 12/22/20 11:11 AM, Kashyap Desai wrote:
> Register driver for threaded interrupt.

> 

> By default, the driver will attempt I/O completion from interrupt context

> (the primary handler). Since the driver tracks per-reply-queue outstanding I/Os,

> it will schedule the threaded ISR if any outstanding I/Os are expected

> on that particular reply queue. The threaded ISR (secondary handler) will loop

> for I/O completion as long as there are outstanding I/Os

> (a speculative method using the same per-reply-queue outstanding counter)

> or until it has completed a certain number of commands (something like a budget).

> 
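
The primary/secondary split described above is the standard
request_threaded_irq() pattern; a condensed sketch (the helper names are
invented, not the driver's):

	static irqreturn_t hard_isr(int irq, void *data)
	{
		int done = process_replies(data);	/* invented helper */

		if (!replies_still_pending(data))	/* invented helper */
			return done ? IRQ_HANDLED : IRQ_NONE;

		disable_irq_nosync(irq);
		return IRQ_WAKE_THREAD;			/* hand off to thread_fn */
	}

	static irqreturn_t threaded_isr(int irq, void *data)
	{
		while (replies_still_pending(data))
			process_replies(data);

		enable_irq(irq);
		return IRQ_HANDLED;
	}

	/* registered with:
	 * request_threaded_irq(irq, hard_isr, threaded_isr, IRQF_ONESHOT,
	 *     name, data);
	 */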

> Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>

> Cc: sathya.prakash@broadcom.com

> ---

>   drivers/scsi/mpi3mr/mpi3mr.h    | 12 ++++++

>   drivers/scsi/mpi3mr/mpi3mr_fw.c | 75 +++++++++++++++++++++++++++++++--

>   2 files changed, 84 insertions(+), 3 deletions(-)

> 

> diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h

> index 74b6b4b6e322..41a8689b46c9 100644

> --- a/drivers/scsi/mpi3mr/mpi3mr.h

> +++ b/drivers/scsi/mpi3mr/mpi3mr.h

> @@ -144,6 +144,10 @@ extern struct list_head mrioc_list;

>   /* Default target device queue depth */

>   #define MPI3MR_DEFAULT_SDEV_QD	32

>   

> +/* Definitions for Threaded IRQ poll */

> +#define MPI3MR_IRQ_POLL_SLEEP			2

> +#define MPI3MR_IRQ_POLL_TRIGGER_IOCOUNT		8

> +

>   /* SGE Flag definition */

>   #define MPI3MR_SGEFLAGS_SYSTEM_SIMPLE_END_OF_LIST \

>   	(MPI3_SGE_FLAGS_ELEMENT_TYPE_SIMPLE | MPI3_SGE_FLAGS_DLAS_SYSTEM | \

> @@ -295,6 +299,9 @@ struct op_req_qinfo {

>    * @q_segment_list: Segment list base virtual address

>    * @q_segment_list_dma: Segment list base DMA address

>    * @ephase: Expected phase identifier for the reply queue

> + * @pend_ios: Number of IOs pending in HW for this queue

> + * @enable_irq_poll: Flag to indicate polling is enabled

> + * @in_use: Queue is handled by poll/ISR

>    */

>   struct op_reply_qinfo {

>   	u16 ci;

> @@ -306,6 +313,9 @@ struct op_reply_qinfo {

>   	void *q_segment_list;

>   	dma_addr_t q_segment_list_dma;

>   	u8 ephase;

> +	atomic_t pend_ios;

> +	bool enable_irq_poll;

> +	atomic_t in_use;

>   };

>   

>   /**

> @@ -559,6 +569,7 @@ struct scmd_priv {

>    * @shost: Scsi_Host pointer

>    * @id: Controller ID

>    * @cpu_count: Number of online CPUs

> + * @irqpoll_sleep: usleep unit used in threaded isr irqpoll

>    * @name: Controller ASCII name

>    * @driver_name: Driver ASCII name

>    * @sysif_regs: System interface registers virtual address

> @@ -660,6 +671,7 @@ struct mpi3mr_ioc {

>   	u8 id;

>   	int cpu_count;

>   	bool enable_segqueue;

> +	u32 irqpoll_sleep;

>   

>   	char name[MPI3MR_NAME_LENGTH];

>   	char driver_name[MPI3MR_NAME_LENGTH];

> diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c

> index ba4bfcc17809..4c4e21fb4ef3 100644

> --- a/drivers/scsi/mpi3mr/mpi3mr_fw.c

> +++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c

> @@ -346,12 +346,16 @@ static int mpi3mr_process_op_reply_q(struct mpi3mr_ioc *mrioc,

>   

>   	reply_qidx = op_reply_q->qid - 1;

>   

> +	if (!atomic_add_unless(&op_reply_q->in_use, 1, 1))

> +		return 0;

> +

>   	exp_phase = op_reply_q->ephase;

>   	reply_ci = op_reply_q->ci;

>   

>   	reply_desc = mpi3mr_get_reply_desc(op_reply_q, reply_ci);

>   	if ((le16_to_cpu(reply_desc->ReplyFlags) &

>   	    MPI3_REPLY_DESCRIPT_FLAGS_PHASE_MASK) != exp_phase) {

> +		atomic_dec(&op_reply_q->in_use);

>   		return 0;

>   	}

>   

> @@ -364,6 +368,7 @@ static int mpi3mr_process_op_reply_q(struct mpi3mr_ioc *mrioc,

>   

>   		mpi3mr_process_op_reply_desc(mrioc, reply_desc, &reply_dma,

>   		    reply_qidx);

> +		atomic_dec(&op_reply_q->pend_ios);

>   		if (reply_dma)

>   			mpi3mr_repost_reply_buf(mrioc, reply_dma);

>   		num_op_reply++;

> @@ -378,6 +383,14 @@ static int mpi3mr_process_op_reply_q(struct mpi3mr_ioc *mrioc,

>   		if ((le16_to_cpu(reply_desc->ReplyFlags) &

>   		    MPI3_REPLY_DESCRIPT_FLAGS_PHASE_MASK) != exp_phase)

>   			break;

> +		/*

> +		 * Exit completion loop to avoid CPU lockup

> +		 * Ensure remaining completion happens from threaded ISR.

> +		 */

> +		if (num_op_reply > mrioc->max_host_ios) {

> +			intr_info->op_reply_q->enable_irq_poll = true;

> +			break;

> +		}

>   

>   	} while (1);

>   

> @@ -386,6 +399,7 @@ static int mpi3mr_process_op_reply_q(struct mpi3mr_ioc *mrioc,

>   	    &mrioc->sysif_regs->OperQueueIndexes[reply_qidx].ConsumerIndex);

>   	op_reply_q->ci = reply_ci;

>   	op_reply_q->ephase = exp_phase;

> +	atomic_dec(&op_reply_q->in_use);

>   

>   	return num_op_reply;

>   }

> @@ -395,7 +409,7 @@ static irqreturn_t mpi3mr_isr_primary(int irq, void *privdata)

>   	struct mpi3mr_intr_info *intr_info = privdata;

>   	struct mpi3mr_ioc *mrioc;

>   	u16 midx;

> -	u32 num_admin_replies = 0;

> +	u32 num_admin_replies = 0, num_op_reply = 0;

>   

>   	if (!intr_info)

>   		return IRQ_NONE;

> @@ -409,8 +423,10 @@ static irqreturn_t mpi3mr_isr_primary(int irq, void *privdata)

>   

>   	if (!midx)

>   		num_admin_replies = mpi3mr_process_admin_reply_q(mrioc);

> +	if (intr_info->op_reply_q)

> +		num_op_reply = mpi3mr_process_op_reply_q(mrioc, intr_info);

>   

> -	if (num_admin_replies)

> +	if (num_admin_replies || num_op_reply)

>   		return IRQ_HANDLED;

>   	else

>   		return IRQ_NONE;

> @@ -431,7 +447,20 @@ static irqreturn_t mpi3mr_isr(int irq, void *privdata)

>   	/* Call primary ISR routine */

>   	ret = mpi3mr_isr_primary(irq, privdata);

>   

> -	return ret;

> +	/*

> +	 * If more IOs are expected, schedule IRQ polling thread.

> +	 * Otherwise exit from ISR.

> +	 */

> +	if (!intr_info->op_reply_q)

> +		return ret;

> +

> +	if (!intr_info->op_reply_q->enable_irq_poll ||

> +	    !atomic_read(&intr_info->op_reply_q->pend_ios))

> +		return ret;

> +

> +	disable_irq_nosync(pci_irq_vector(mrioc->pdev, midx));

> +

> +	return IRQ_WAKE_THREAD;

>   }

>   

>   /**

> @@ -446,6 +475,36 @@ static irqreturn_t mpi3mr_isr(int irq, void *privdata)

>    */

>   static irqreturn_t mpi3mr_isr_poll(int irq, void *privdata)

>   {

> +	struct mpi3mr_intr_info *intr_info = privdata;

> +	struct mpi3mr_ioc *mrioc;

> +	u16 midx;

> +	u32 num_admin_replies = 0, num_op_reply = 0;

> +

> +	if (!intr_info || !intr_info->op_reply_q)

> +		return IRQ_NONE;

> +

> +	mrioc = intr_info->mrioc;

> +	midx = intr_info->msix_index;

> +

> +	/* Poll for pending IOs completions */

> +	do {

> +		if (!mrioc->intr_enabled)

> +			break;

> +

> +		if (!midx)

> +			num_admin_replies = mpi3mr_process_admin_reply_q(mrioc);

> +		if (intr_info->op_reply_q)

> +			num_op_reply +=

> +			    mpi3mr_process_op_reply_q(mrioc, intr_info);

> +

> +		usleep_range(mrioc->irqpoll_sleep, 10 * mrioc->irqpoll_sleep);

> +

> +	} while (atomic_read(&intr_info->op_reply_q->pend_ios) &&

> +	    (num_op_reply < mrioc->max_host_ios));

> +

> +	intr_info->op_reply_q->enable_irq_poll = false;

> +	enable_irq(pci_irq_vector(mrioc->pdev, midx));

> +

>   	return IRQ_HANDLED;

>   }

>   

> @@ -1161,6 +1220,9 @@ static int mpi3mr_create_op_reply_q(struct mpi3mr_ioc *mrioc, u16 qidx)

>   	op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD;

>   	op_reply_q->ci = 0;

>   	op_reply_q->ephase = 1;

> +	atomic_set(&op_reply_q->pend_ios, 0);

> +	atomic_set(&op_reply_q->in_use, 0);

> +	op_reply_q->enable_irq_poll = false;

>   

>   	if (!op_reply_q->q_segments) {

>   		retval = mpi3mr_alloc_op_reply_q_segments(mrioc, qidx);

> @@ -1482,6 +1544,10 @@ int mpi3mr_op_request_post(struct mpi3mr_ioc *mrioc,

>   		pi = 0;

>   	op_req_q->pi = pi;

>   

> +	if (atomic_inc_return(&mrioc->op_reply_qinfo[reply_qidx].pend_ios)

> +	    > MPI3MR_IRQ_POLL_TRIGGER_IOCOUNT)

> +		mrioc->op_reply_qinfo[reply_qidx].enable_irq_poll = true;

> +

>   	writel(op_req_q->pi,

>   	    &mrioc->sysif_regs->OperQueueIndexes[reply_qidx].ProducerIndex);

>   

> @@ -2783,6 +2849,7 @@ int mpi3mr_init_ioc(struct mpi3mr_ioc *mrioc, u8 re_init)

>   	u32 ioc_status, ioc_config, i;

>   	Mpi3IOCFactsData_t facts_data;

>   

> +	mrioc->irqpoll_sleep = MPI3MR_IRQ_POLL_SLEEP;

>   	mrioc->change_count = 0;

>   	if (!re_init) {

>   		mrioc->cpu_count = num_online_cpus();

> @@ -3068,6 +3135,8 @@ static void mpi3mr_memset_buffers(struct mpi3mr_ioc *mrioc)

>   		mrioc->op_reply_qinfo[i].ci = 0;

>   		mrioc->op_reply_qinfo[i].num_replies = 0;

>   		mrioc->op_reply_qinfo[i].ephase = 0;

> +		atomic_set(&mrioc->op_reply_qinfo[i].pend_ios, 0);

> +		atomic_set(&mrioc->op_reply_qinfo[i].in_use, 0);

>   		mpi3mr_memset_op_reply_q_buffers(mrioc, i);

>   

>   		mrioc->req_qinfo[i].ci = 0;

> 

Reviewed-by: Hannes Reinecke <hare@suse.de>


Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Hannes Reinecke March 1, 2021, 7:16 a.m. UTC | #16
On 12/22/20 11:11 AM, Kashyap Desai wrote:
> Unlock the host diagnostic registers and write the specific

> reset type to them, wait for the reset acknowledgment from the

> controller, and if the reset is not successful, retry a

> predefined number of times.

> 
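
The sequence reads roughly like the loop below (register name, helpers and the
retry count are placeholders; the real MPI3 definitions are in the patch):

	static int soft_reset_sketch(struct mpi3mr_ioc *mrioc, u32 reset_type)
	{
		int retry;

		for (retry = 0; retry < 3; retry++) {
			write_diag_unlock_sequence(mrioc);		  /* placeholder */
			writel(reset_type, &mrioc->sysif_regs->HostDiag); /* placeholder */
			if (!wait_for_reset_ack(mrioc))			  /* placeholder */
				return 0;
		}
		return -1;	/* retries exhausted */
	}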

> Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>

> Cc: sathya.prakash@broadcom.com

> ---

>   drivers/scsi/mpi3mr/mpi3mr.h    |   3 +

>   drivers/scsi/mpi3mr/mpi3mr_fw.c | 245 +++++++++++++++++++++++++++++++-

>   2 files changed, 246 insertions(+), 2 deletions(-)

> 

Reviewed-by: Hannes Reinecke <hare@suse.de>


Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Hannes Reinecke March 1, 2021, 7:16 a.m. UTC | #17
On 12/22/20 11:11 AM, Kashyap Desai wrote:
> Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>

> Cc: sathya.prakash@broadcom.com

> ---

>   drivers/scsi/mpi3mr/mpi3mr_os.c | 68 +++++++++++++++++++++++++++++++++

>   1 file changed, 68 insertions(+)

> 

> diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c

> index 742cf45d4878..8e665c70604d 100644

> --- a/drivers/scsi/mpi3mr/mpi3mr_os.c

> +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c

> @@ -334,6 +334,36 @@ void mpi3mr_invalidate_devhandles(struct mpi3mr_ioc *mrioc)

>   	}

>   }

>   

> +/**

> + * mpi3mr_print_scmd - print individual SCSI command

> + * @rq: Block request

> + * @data: Adapter instance reference

> + *

> + * Print the SCSI command details if it is in LLD scope.

> + *

> + * Return: true always.

> + */

> +static bool mpi3mr_print_scmd(struct request *rq,

> +	void *data, bool reserved)

> +{

> +	struct mpi3mr_ioc *mrioc = (struct mpi3mr_ioc *)data;

> +	struct scsi_cmnd *scmd = blk_mq_rq_to_pdu(rq);

> +	struct scmd_priv *priv = NULL;

> +

> +	if (scmd) {

> +		priv = scsi_cmd_priv(scmd);

> +		if (!priv->in_lld_scope)

> +			goto out;

> +

> +		ioc_info(mrioc, "%s :Host Tag = %d, qid = %d\n",

> +		    __func__, priv->host_tag, priv->req_q_idx + 1);

> +		scsi_print_command(scmd);

> +	}

> +

> +out:

> +	return(true);

> +}

> +

>   

>   /**

>    * mpi3mr_flush_scmd - Flush individual SCSI command

> @@ -2370,6 +2400,43 @@ static int mpi3mr_map_queues(struct Scsi_Host *shost)

>   	    mrioc->pdev, mrioc->op_reply_q_offset);

>   }

>   

> +/**

> + * mpi3mr_get_fw_pending_ios - Calculate pending I/O count

> + * @mrioc: Adapter instance reference

> + *

> + * Calculate the pending I/Os for the controller and return.

> + *

> + * Return: Number of pending I/Os

> + */

> +static inline int mpi3mr_get_fw_pending_ios(struct mpi3mr_ioc *mrioc)

> +{

> +	u16 i;

> +	uint pend_ios = 0;

> +

> +	for (i = 0; i < mrioc->num_op_reply_q; i++)

> +		pend_ios += atomic_read(&mrioc->op_reply_qinfo[i].pend_ios);

> +	return pend_ios;

> +}

> +

> +/**

> + * mpi3mr_print_pending_host_io - print pending I/Os

> + * @mrioc: Adapter instance reference

> + *

> + * Print number of pending I/Os and each I/O details prior to

> + * reset for debug purpose.

> + *

> + * Return: Nothing

> + */

> +static void mpi3mr_print_pending_host_io(struct mpi3mr_ioc *mrioc)

> +{

> +	struct Scsi_Host *shost = mrioc->shost;

> +

> +	ioc_info(mrioc, "%s :Pending commands prior to reset: %d\n",

> +	    __func__, mpi3mr_get_fw_pending_ios(mrioc));

> +	blk_mq_tagset_busy_iter(&shost->tag_set,

> +	    mpi3mr_print_scmd, (void *)mrioc);

> +}

> +

>   /**

>    * mpi3mr_eh_host_reset - Host reset error handling callback

>    * @scmd: SCSI command reference

> @@ -2395,6 +2462,7 @@ static int mpi3mr_eh_host_reset(struct scsi_cmnd *scmd)

>   		dev_type = stgt_priv_data->dev_type;

>   	}

>   

> +	mpi3mr_print_pending_host_io(mrioc);

>   	ret = mpi3mr_soft_reset_handler(mrioc,

>   	    MPI3MR_RESET_FROM_EH_HOS, 1);

>   	if (ret)

> 

Reviewed-by: Hannes Reinecke <hare@suse.de>


Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Hannes Reinecke March 1, 2021, 7:19 a.m. UTC | #18
On 12/22/20 11:11 AM, Kashyap Desai wrote:
> Read PCI_EXT_CAP_ID_DSN to know security status.

> 

> The driver will log a warning message when a non-secure type controller

> is detected. The purpose of this interface is to avoid interacting with

> any firmware that is not secured/signed by Broadcom.

> Any tampering with the firmware component will be detected by the hardware

> and communicated to the driver so that it avoids any further

> interaction with that component.

> 
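
For reference, fetching the Device Serial Number capability itself only needs
the generic PCI helpers; a minimal sketch:

	u32 dsn_lo = 0, dsn_hi = 0;
	int pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DSN);

	if (pos) {
		pci_read_config_dword(pdev, pos + 4, &dsn_lo);	/* serial [31:0]  */
		pci_read_config_dword(pdev, pos + 8, &dsn_hi);	/* serial [63:32] */
	}

How the serial number maps to the secure/non-secure status is
controller-specific and handled in the patch below.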

> Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>

> Cc: sathya.prakash@broadcom.com

> ---

>   drivers/scsi/mpi3mr/mpi3mr.h    |  9 ++++

>   drivers/scsi/mpi3mr/mpi3mr_os.c | 80 +++++++++++++++++++++++++++++++++

>   2 files changed, 89 insertions(+)

> 

Reviewed-by: Hannes Reinecke <hare@suse.de>


Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Kashyap Desai March 2, 2021, 6:36 p.m. UTC | #19
> > +struct mpi3mr_ioc {
> > +	struct list_head list;
> > +	struct pci_dev *pdev;
> > +	struct Scsi_Host *shost;
> > +	u8 id;
> > +	int cpu_count;
> > +
> > +	char name[MPI3MR_NAME_LENGTH];
> > +	char driver_name[MPI3MR_NAME_LENGTH];
> > +
> > +	Mpi3SysIfRegs_t __iomem *sysif_regs;
> > +	resource_size_t sysif_regs_phys;
> > +	int bars;
> > +	u64 dma_mask;
> > +
> > +	u16 msix_count;
> > +	u8 intr_enabled;
> > +
> > +	u16 num_admin_req;
> > +	u32 admin_req_q_sz;
> > +	u16 admin_req_pi;
> > +	u16 admin_req_ci;
> > +	void *admin_req_base;
> > +	dma_addr_t admin_req_dma;
> > +	spinlock_t admin_req_lock;
> > +
> > +	u16 num_admin_replies;
> > +	u32 admin_reply_q_sz;
> > +	u16 admin_reply_ci;
> > +	u8 admin_reply_ephase;
> > +	void *admin_reply_base;
> > +	dma_addr_t admin_reply_dma;
> > +
> > +	u32 ready_timeout;
> > +
> > +	struct mpi3mr_intr_info *intr_info;
>
> Please, be consistent.
> If you must introduce typedefs for your internal structures, okay.
> But then introduce typedefs for _all_ internal structures.
> Or leave the typedefs and just use 'struct XXX'; which actually is the
> recommended way for linux.

Are you referring to "typedef struct mpi3mr_drv_"? That one exists because of
an interoperability issue across different kernel versions. I will remove this
typedef in my V2.
In general our goal is not to have typedefs in the driver outside of the mpi3.0
header files; I will scan for such instances and update all the places.
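
To make the style point concrete, this is all the change amounts to for one of
the flagged fields (the plain struct tag below is hypothetical):

	Mpi3DriverInfoLayout_t driver_info;		/* before: typedef'd, as posted */
	struct mpi3_driver_info_layout driver_info;	/* after: plain struct tag, kernel style */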

>
> > +	u16 intr_info_count;
> > +
> > +	u16 num_queues;
> > +	u16 num_op_req_q;
> > +	struct op_req_qinfo *req_qinfo;
> > +
> > +	u16 num_op_reply_q;
> > +	struct op_reply_qinfo *op_reply_qinfo;
> > +
> > +	struct mpi3mr_drv_cmd init_cmds;
> > +	struct mpi3mr_ioc_facts facts;
> > +	u16 op_reply_desc_sz;
> > +
> > +	u32 num_reply_bufs;
> > +	struct dma_pool *reply_buf_pool;
> > +	u8 *reply_buf;
> > +	dma_addr_t reply_buf_dma;
> > +	dma_addr_t reply_buf_dma_max_address;
> > +
> > +	u16 reply_free_qsz;
> > +	struct dma_pool *reply_free_q_pool;
> > +	U64 *reply_free_q;
> > +	dma_addr_t reply_free_q_dma;
> > +	spinlock_t reply_free_queue_lock;
> > +	u32 reply_free_queue_host_index;
> > +
> > +	u32 num_sense_bufs;
> > +	struct dma_pool *sense_buf_pool;
> > +	u8 *sense_buf;
> > +	dma_addr_t sense_buf_dma;
> > +
> > +	u16 sense_buf_q_sz;
> > +	struct dma_pool *sense_buf_q_pool;
> > +	U64 *sense_buf_q;
> > +	dma_addr_t sense_buf_q_dma;
> > +	spinlock_t sbq_lock;
> > +	u32 sbq_host_index;
> > +
> > +	u8 is_driver_loading;
> > +
> > +	u16 max_host_ios;
> > +
> > +	u32 chain_buf_count;
> > +	struct dma_pool *chain_buf_pool;
> > +	struct chain_element *chain_sgl_list;
> > +	u16  chain_bitmap_sz;
> > +	void *chain_bitmap;
> > +
> > +	u8 reset_in_progress;
> > +	u8 unrecoverable;
> > +
> > +	int logging_level;
> > +
> > +	struct mpi3mr_fwevt *current_event;
> > +	Mpi3DriverInfoLayout_t driver_info;
>
> See my comment about struct typedefs above.

I will remove this typedef and similar instances.

> > +static inline int mpi3mr_request_irq(struct mpi3mr_ioc *mrioc, u16 index)
> > +{
> > +	struct pci_dev *pdev = mrioc->pdev;
> > +	struct mpi3mr_intr_info *intr_info = mrioc->intr_info + index;
> > +	int retval = 0;
> > +
> > +	intr_info->mrioc = mrioc;
> > +	intr_info->msix_index = index;
> > +	intr_info->op_reply_q = NULL;
> > +
> > +	snprintf(intr_info->name, MPI3MR_NAME_LENGTH, "%s%d-msix%d",
> > +	    mrioc->driver_name, mrioc->id, index);
> > +
> > +	retval = request_threaded_irq(pci_irq_vector(pdev, index), mpi3mr_isr,
> > +	    mpi3mr_isr_poll, IRQF_ONESHOT, intr_info->name, intr_info);
> > +	if (retval) {
> > +		ioc_err(mrioc, "%s: Unable to allocate interrupt %d!\n",
> > +		    intr_info->name, pci_irq_vector(pdev, index));
> > +		return retval;
> > +	}
> > +
>
> The point of having 'mpi3mr_isr_poll()' here is what exactly?

This is a placeholder; the actual use is added in "[17/24] mpi3mr: add
support of threaded isr". I kept that in a separate patch to make review
easier.
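
For readers who have not seen the threaded-irq pattern before, the split that
patch 17 fills in looks roughly like this; the helper names are made up, only
the request_threaded_irq() wiring above is from the patch:

static irqreturn_t mpi3mr_isr(int irq, void *privdata)
{
	struct mpi3mr_intr_info *intr_info = privdata;

	/* hard-irq context: only decide whether this vector has work */
	if (!mpi3mr_replyq_has_work(intr_info))		/* hypothetical check */
		return IRQ_NONE;

	return IRQ_WAKE_THREAD;		/* defer reply processing to the thread */
}

static irqreturn_t mpi3mr_isr_poll(int irq, void *privdata)
{
	struct mpi3mr_intr_info *intr_info = privdata;

	/*
	 * thread context: drain the reply queue; IRQF_ONESHOT keeps the
	 * vector masked until this handler returns
	 */
	while (mpi3mr_drain_op_reply_q(intr_info))	/* hypothetical helper */
		;

	return IRQ_HANDLED;
}
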
> > +	areq_entry = (u8 *)mrioc->admin_req_base +
> > +	    (areq_pi * MPI3MR_ADMIN_REQ_FRAME_SZ);
> > +	memset(areq_entry, 0, MPI3MR_ADMIN_REQ_FRAME_SZ);
> > +	memcpy(areq_entry, (u8 *)admin_req, admin_req_sz);
> > +
> > +	if (++areq_pi == max_entries)
> > +		areq_pi = 0;
> > +	mrioc->admin_req_pi = areq_pi;
> > +
> > +	writel(mrioc->admin_req_pi, &mrioc->sysif_regs->AdminRequestQueuePI);
> > +
> > +out:
> > +	spin_unlock_irqrestore(&mrioc->admin_req_lock, flags);
> > +
> > +	return retval;
> > +}
> > +
>
> It might be an idea to have an 'admin' queue structure; keeping the
> values all within the main IOC structure might cause cache misses and a
> degraded performance.

Noted. We can do this in a future update; I think it makes sense for
code readability as well.
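
As a rough picture of what is being suggested, the admin-queue fields that
today sit directly in struct mpi3mr_ioc could be grouped into their own
structure, e.g. (sketch only; names lifted from the posted struct):

struct mpi3mr_admin_queue {
	/* request side */
	u16		num_req;
	u32		req_q_sz;
	u16		req_pi;
	u16		req_ci;
	void		*req_base;
	dma_addr_t	req_dma;
	spinlock_t	req_lock;

	/* reply side */
	u16		num_replies;
	u32		reply_q_sz;
	u16		reply_ci;
	u8		reply_ephase;
	void		*reply_base;
	dma_addr_t	reply_dma;
};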

> > +int mpi3mr_init_ioc(struct mpi3mr_ioc *mrioc)
> > +{
> > +	int retval = 0;
> > +	enum mpi3mr_iocstate ioc_state;
> > +	u64 base_info;
> > +	u32 timeout;
> > +	u32 ioc_status, ioc_config;
> > +	Mpi3IOCFactsData_t facts_data;
> > +
> > +	mrioc->change_count = 0;
> > +	mrioc->cpu_count = num_online_cpus();
>
> What about CPU hotplug?


We need to use num_available_cpus() to benefit from CPU hotplug; that
change will be in the next update.
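
Whichever helper the next update settles on, a common hotplug-friendly pattern
is to size long-lived per-CPU bookkeeping against the possible-CPU count and
use the online count only for run-time spreading; a rough sketch under that
assumption:

	/* sketch only: possible CPUs bounds what can ever come online */
	mrioc->cpu_count = num_possible_cpus();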

> > +
> > +/* global driver scop variables */
> > +LIST_HEAD(mrioc_list);
> > +DEFINE_SPINLOCK(mrioc_list_lock);
> > +static int mrioc_ids;
> > +static int warn_non_secure_ctlr;
> > +
> > +MODULE_AUTHOR(MPI3MR_DRIVER_AUTHOR);
> > +MODULE_DESCRIPTION(MPI3MR_DRIVER_DESC);
> > +MODULE_LICENSE(MPI3MR_DRIVER_LICENSE);
> > +MODULE_VERSION(MPI3MR_DRIVER_VERSION);
> > +
> > +/* Module parameters*/
> > +int logging_level;
> > +module_param(logging_level, int, 0);
> > +MODULE_PARM_DESC(logging_level,
> > +	" bits for enabling additional logging info (default=0)");
> > +
> > +
> > +/**
> > + * mpi3mr_map_queues - Map queues callback handler
> > + * @shost: SCSI host reference
> > + *
> > + * Call the blk_mq_pci_map_queues with from which operational
> > + * queue the mapping has to be done
> > + *
> > + * Return: return of blk_mq_pci_map_queues
> > + */
> > +static int mpi3mr_map_queues(struct Scsi_Host *shost)
> > +{
> > +	struct mpi3mr_ioc *mrioc = shost_priv(shost);
> > +
> > +	return blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],
> > +	    mrioc->pdev, 0);
> > +}
> > +
>
> What happened to polling?
> You did some patches for megaraid_sas, so I would have expected them to
> be here, too ...

Internally, io_uring iopoll support is complete for this driver as well, but
it is still under testing and may be available in the next update.
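
Once iopoll lands, map_queues presumably grows a second map for the poll
queues; a rough sketch of that shape (a field such as num_poll_queues is
hypothetical, not from the posted driver):

static int mpi3mr_map_queues(struct Scsi_Host *shost)
{
	struct mpi3mr_ioc *mrioc = shost_priv(shost);
	struct blk_mq_queue_map *map;
	int poll_queues = mrioc->num_poll_queues;	/* hypothetical field */

	/* interrupt-driven queues follow the PCI/MSI-x affinity layout */
	map = &shost->tag_set.map[HCTX_TYPE_DEFAULT];
	map->nr_queues = mrioc->num_op_reply_q - poll_queues;
	map->queue_offset = 0;
	blk_mq_pci_map_queues(map, mrioc->pdev, mrioc->op_reply_q_offset);

	/* poll queues have no vector, so a plain CPU spread is enough */
	map = &shost->tag_set.map[HCTX_TYPE_POLL];
	map->nr_queues = poll_queues;
	if (map->nr_queues) {
		map->queue_offset = mrioc->num_op_reply_q - poll_queues;
		blk_mq_map_queues(map);
	}

	return 0;
}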

> > +module_init(mpi3mr_init);
> > +module_exit(mpi3mr_exit);
> >
> Cheers,

Hannes -

Thanks for the feedback. I am working on all the comments and will post V2
soon.

Kashyap
>
> Hannes
> --
> Dr. Hannes Reinecke		           Kernel Storage Architect
> hare@suse.de			                  +49 911 74053 688
> SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
> HRB 36809 (AG Nürnberg), GF: Felix Imendörffer
Kashyap Desai March 2, 2021, 7:05 p.m. UTC | #20
> > diff --git a/drivers/scsi/mpi3mr/mpi3mr.h
> > b/drivers/scsi/mpi3mr/mpi3mr.h index dd79b12218e1..fe6094bb357a
> 100644
> > --- a/drivers/scsi/mpi3mr/mpi3mr.h
> > +++ b/drivers/scsi/mpi3mr/mpi3mr.h
> > @@ -71,6 +71,12 @@ extern struct list_head mrioc_list;
> >   #define MPI3MR_ADMIN_REQ_FRAME_SZ	128
> >   #define MPI3MR_ADMIN_REPLY_FRAME_SZ	16
> >
> > +/* Operational queue management definitions */
> > +#define MPI3MR_OP_REQ_Q_QD		512
> > +#define MPI3MR_OP_REP_Q_QD		4096
> > +#define MPI3MR_OP_REQ_Q_SEG_SIZE	4096
> > +#define MPI3MR_OP_REP_Q_SEG_SIZE	4096
> > +#define MPI3MR_MAX_SEG_LIST_SIZE	4096
> >
> Do I read this correctly?
> The reply queue depth is larger than the request queue depth?
> Why is that?

Hannes, you are correct. The request queue descriptor unit size is 128 bytes
and the reply queue descriptor unit size is 16 bytes.
With the current queue depths we end up creating a 64K request pool and a
64K reply pool.
To avoid memory allocation failures we picked realistic queue depths that
meet the allocation requirement in most cases without hurting performance.
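
For the record, the arithmetic behind those two 64K pools:

	 512 request entries x 128 bytes/entry = 65536 bytes (64 KiB)
	4096 reply entries   x  16 bytes/entry = 65536 bytes (64 KiB)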

BTW, we also have an improvement in this area. You can see the segmented
queue "enable_segqueue" field in the same patch.
We plan to improve this further based on the "enable_segqueue" test results.
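
Assuming the segment size really is the 4K from MPI3MR_OP_REQ_Q_SEG_SIZE /
MPI3MR_OP_REP_Q_SEG_SIZE (my reading of the defines, not something stated in
the thread), the segmented layout works out to:

	4096-byte segment / 128-byte request descriptor = 32 requests per segment
	512 request QD / 32 requests per segment        = 16 segments per request queue
	4096-byte segment /  16-byte reply descriptor   = 256 replies per segment
	4096 reply QD / 256 replies per segment         = 16 segments per reply queue

so each queue becomes sixteen independent 4K allocations plus one small
segment list instead of a single contiguous 64K buffer.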

> >   /**
> > @@ -220,6 +220,8 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> >   	spin_lock_init(&mrioc->sbq_lock);
> >
> >   	mpi3mr_init_drv_cmd(&mrioc->init_cmds,
> MPI3MR_HOSTTAG_INITCMDS);
> > +	if (pdev->revision)
> > +		mrioc->enable_segqueue = true;
> >
> >   	mrioc->logging_level = logging_level;
> >   	mrioc->shost = shost;
> >
> Other than that:
>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
>
> Cheers,
>
> Hannes
> --
> Dr. Hannes Reinecke                Kernel Storage Architect
> hare@suse.de                              +49 911 74053 688
> SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg HRB 36809
> (AG Nürnberg), Geschäftsführer: Felix Imendörffer