From patchwork Tue Dec 5 19:16:27 2023
X-Patchwork-Submitter: Chandrakanth Patil
X-Patchwork-Id: 750642
From: Chandrakanth patil
To: linux-scsi@vger.kernel.org, sumit.saxena@broadcom.com, sathya.prakash@broadcom.com, ranjan.kumar@broadcom.com, prayas.patel@broadcom.com
Cc: Chandrakanth patil
Subject: [PATCH 1/4] mpi3mr: Support for preallocation of SGL BSG data buffers part-1
Date: Wed, 6 Dec 2023 00:46:27 +0530
Message-Id: <20231205191630.12201-2-chandrakanth.patil@broadcom.com>
X-Mailer: git-send-email 2.39.3
In-Reply-To: <20231205191630.12201-1-chandrakanth.patil@broadcom.com>
References: <20231205191630.12201-1-chandrakanth.patil@broadcom.com>

The driver now supports SGLs for BSG data transfer. Upon loading, the
driver pre-allocates 256 SGL buffers of 8 KB each (256 * 8 KB = 2 MB in
total). These pre-allocated SGLs are reserved for handling BSG commands
and are freed during driver unload.
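For readers unfamiliar with the kernel DMA pool API, the pre-allocation in
this patch follows the usual dma_pool pattern: create a pool of fixed-size,
suitably aligned buffers at load time, carve out one buffer per SGE, and
return everything to the pool on unload. The snippet below is only a minimal
sketch of that pattern, not the driver code itself; the names BUF_COUNT,
BUF_SIZE, struct buf_desc, demo_alloc_bufs() and demo_free_bufs() are made up
for illustration.

/* Minimal sketch of the dma_pool pre-allocation pattern (illustrative only). */
#include <linux/device.h>
#include <linux/dmapool.h>
#include <linux/dma-mapping.h>

#define BUF_COUNT	256		/* mirrors MPI3MR_NUM_IOCTL_SGE */
#define BUF_SIZE	(8 * 1024)	/* mirrors MPI3MR_IOCTL_SGE_SIZE */

struct buf_desc {
	void *addr;
	dma_addr_t dma_addr;
	size_t size;
};

static struct dma_pool *demo_pool;
static struct buf_desc demo_bufs[BUF_COUNT];

static void demo_free_bufs(void)
{
	int i;

	if (!demo_pool)
		return;
	for (i = 0; i < BUF_COUNT; i++) {
		if (demo_bufs[i].addr) {
			dma_pool_free(demo_pool, demo_bufs[i].addr,
				      demo_bufs[i].dma_addr);
			demo_bufs[i].addr = NULL;
		}
	}
	dma_pool_destroy(demo_pool);
	demo_pool = NULL;
}

/* Called once at load time; on failure everything is rolled back. */
static int demo_alloc_bufs(struct device *dev)
{
	int i;

	/* One pool, each element BUF_SIZE bytes, 4K aligned. */
	demo_pool = dma_pool_create("demo pool", dev, BUF_SIZE, 4096, 0);
	if (!demo_pool)
		return -ENOMEM;

	for (i = 0; i < BUF_COUNT; i++) {
		demo_bufs[i].size = BUF_SIZE;
		demo_bufs[i].addr = dma_pool_zalloc(demo_pool, GFP_KERNEL,
						    &demo_bufs[i].dma_addr);
		if (!demo_bufs[i].addr) {
			demo_free_bufs();
			return -ENOMEM;
		}
	}
	return 0;
}

Allocating once at load time trades a fixed 2 MB footprint per adapter for
the guarantee that BSG requests never fail or stall on per-request DMA
allocations.
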
Signed-off-by: Sathya Prakash
Signed-off-by: Chandrakanth patil
---
 drivers/scsi/mpi3mr/mpi3mr.h    |  15 +++++
 drivers/scsi/mpi3mr/mpi3mr_fw.c | 112 ++++++++++++++++++++++++++++++++
 2 files changed, 127 insertions(+)

diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h
index 4f49f8396309..8fce57894f8f 100644
--- a/drivers/scsi/mpi3mr/mpi3mr.h
+++ b/drivers/scsi/mpi3mr/mpi3mr.h
@@ -477,6 +477,10 @@ struct mpi3mr_throttle_group_info {
 /* HBA port flags */
 #define MPI3MR_HBA_PORT_FLAG_DIRTY	0x01
 
+/* IOCTL data transfer sge*/
+#define MPI3MR_NUM_IOCTL_SGE		256
+#define MPI3MR_IOCTL_SGE_SIZE		(8 * 1024)
+
 /**
  * struct mpi3mr_hba_port - HBA's port information
  * @port_id: Port number
@@ -1042,6 +1046,11 @@ struct scmd_priv {
  * @sas_node_lock: Lock to protect SAS node list
  * @hba_port_table_list: List of HBA Ports
  * @enclosure_list: List of Enclosure objects
+ * @ioctl_dma_pool: DMA pool for IOCTL data buffers
+ * @ioctl_sge: DMA buffer descriptors for IOCTL data
+ * @ioctl_chain_sge: DMA buffer descriptor for IOCTL chain
+ * @ioctl_resp_sge: DMA buffer descriptor for Mgmt cmd response
+ * @ioctl_sges_allocated: Flag for IOCTL SGEs allocated or not
  */
 struct mpi3mr_ioc {
 	struct list_head list;
@@ -1227,6 +1236,12 @@ struct mpi3mr_ioc {
 	spinlock_t sas_node_lock;
 	struct list_head hba_port_table_list;
 	struct list_head enclosure_list;
+
+	struct dma_pool *ioctl_dma_pool;
+	struct dma_memory_desc ioctl_sge[MPI3MR_NUM_IOCTL_SGE];
+	struct dma_memory_desc ioctl_chain_sge;
+	struct dma_memory_desc ioctl_resp_sge;
+	bool ioctl_sges_allocated;
 };
 
 /**
diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c
index 1ad2f88e0528..d8c57a0a518f 100644
--- a/drivers/scsi/mpi3mr/mpi3mr_fw.c
+++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c
@@ -1058,6 +1058,114 @@ enum mpi3mr_iocstate mpi3mr_get_iocstate(struct mpi3mr_ioc *mrioc)
 	return MRIOC_STATE_RESET_REQUESTED;
 }
 
+/**
+ * mpi3mr_free_ioctl_dma_memory - free memory for ioctl dma
+ * @mrioc: Adapter instance reference
+ *
+ * Free the DMA memory allocated for IOCTL handling purpose.
+ *
+ * Return: None
+ */
+static void mpi3mr_free_ioctl_dma_memory(struct mpi3mr_ioc *mrioc)
+{
+	struct dma_memory_desc *mem_desc;
+	u16 i;
+
+	if (!mrioc->ioctl_dma_pool)
+		return;
+
+	for (i = 0; i < MPI3MR_NUM_IOCTL_SGE; i++) {
+		mem_desc = &mrioc->ioctl_sge[i];
+		if (mem_desc->addr) {
+			dma_pool_free(mrioc->ioctl_dma_pool,
+				      mem_desc->addr,
+				      mem_desc->dma_addr);
+			mem_desc->addr = NULL;
+		}
+	}
+	dma_pool_destroy(mrioc->ioctl_dma_pool);
+	mrioc->ioctl_dma_pool = NULL;
+	mem_desc = &mrioc->ioctl_chain_sge;
+
+	if (mem_desc->addr) {
+		dma_free_coherent(&mrioc->pdev->dev, mem_desc->size,
+				  mem_desc->addr, mem_desc->dma_addr);
+		mem_desc->addr = NULL;
+	}
+	mem_desc = &mrioc->ioctl_resp_sge;
+	if (mem_desc->addr) {
+		dma_free_coherent(&mrioc->pdev->dev, mem_desc->size,
+				  mem_desc->addr, mem_desc->dma_addr);
+		mem_desc->addr = NULL;
+	}
+
+	mrioc->ioctl_sges_allocated = false;
+}
+
+/**
+ * mpi3mr_alloc_ioctl_dma_memory - Alloc memory for ioctl dma
+ * @mrioc: Adapter instance reference
+ *
+ * This function allocates dmaable memory required to handle the
+ * application issued MPI3 IOCTL requests.
+ *
+ * Return: None
+ */
+static void mpi3mr_alloc_ioctl_dma_memory(struct mpi3mr_ioc *mrioc)
+{
+	struct dma_memory_desc *mem_desc;
+	u16 i;
+
+	mrioc->ioctl_dma_pool = dma_pool_create("ioctl dma pool",
+						&mrioc->pdev->dev,
+						MPI3MR_IOCTL_SGE_SIZE,
+						MPI3MR_PAGE_SIZE_4K, 0);
+
+	if (!mrioc->ioctl_dma_pool) {
+		ioc_err(mrioc, "ioctl_dma_pool: dma_pool_create failed\n");
+		goto out_failed;
+	}
+
+	for (i = 0; i < MPI3MR_NUM_IOCTL_SGE; i++) {
+		mem_desc = &mrioc->ioctl_sge[i];
+		mem_desc->size = MPI3MR_IOCTL_SGE_SIZE;
+		mem_desc->addr = dma_pool_zalloc(mrioc->ioctl_dma_pool,
+						 GFP_KERNEL,
+						 &mem_desc->dma_addr);
+		if (!mem_desc->addr)
+			goto out_failed;
+	}
+
+	mem_desc = &mrioc->ioctl_chain_sge;
+	mem_desc->size = MPI3MR_PAGE_SIZE_4K;
+	mem_desc->addr = dma_alloc_coherent(&mrioc->pdev->dev,
+					    mem_desc->size,
+					    &mem_desc->dma_addr,
+					    GFP_KERNEL);
+	if (!mem_desc->addr)
+		goto out_failed;
+
+	mem_desc = &mrioc->ioctl_resp_sge;
+	mem_desc->size = MPI3MR_PAGE_SIZE_4K;
+	mem_desc->addr = dma_alloc_coherent(&mrioc->pdev->dev,
+					    mem_desc->size,
+					    &mem_desc->dma_addr,
+					    GFP_KERNEL);
+	if (!mem_desc->addr)
+		goto out_failed;
+
+	mrioc->ioctl_sges_allocated = true;
+
+	return;
+out_failed:
+	ioc_warn(mrioc, "cannot allocate DMA memory for the mpt commands\n"
+		 "from the applications, application interface for MPT command is disabled\n");
+	mpi3mr_free_ioctl_dma_memory(mrioc);
+}
+
 /**
  * mpi3mr_clear_reset_history - clear reset history
  * @mrioc: Adapter instance reference
@@ -3874,6 +3982,9 @@ int mpi3mr_init_ioc(struct mpi3mr_ioc *mrioc)
 		}
 	}
 
+	dprint_init(mrioc, "allocating ioctl dma buffers\n");
+	mpi3mr_alloc_ioctl_dma_memory(mrioc);
+
 	if (!mrioc->init_cmds.reply) {
 		retval = mpi3mr_alloc_reply_sense_bufs(mrioc);
 		if (retval) {
@@ -4293,6 +4404,7 @@ void mpi3mr_free_mem(struct mpi3mr_ioc *mrioc)
 	struct mpi3mr_intr_info *intr_info;
 
 	mpi3mr_free_enclosure_list(mrioc);
+	mpi3mr_free_ioctl_dma_memory(mrioc);
 
 	if (mrioc->sense_buf_pool) {
 		if (mrioc->sense_buf)

From patchwork Tue Dec 5 19:16:29 2023
X-Patchwork-Submitter: Chandrakanth Patil
X-Patchwork-Id: 750641
From: Chandrakanth patil
To: linux-scsi@vger.kernel.org, sumit.saxena@broadcom.com, sathya.prakash@broadcom.com, ranjan.kumar@broadcom.com, prayas.patel@broadcom.com
Cc: Chandrakanth patil
Subject: [PATCH 3/4] mpi3mr: Support for preallocation of SGL BSG data buffers part-3
Date: Wed, 6 Dec 2023 00:46:29 +0530
Message-Id: <20231205191630.12201-4-chandrakanth.patil@broadcom.com>
X-Mailer: git-send-email 2.39.3
In-Reply-To: <20231205191630.12201-1-chandrakanth.patil@broadcom.com>
References: <20231205191630.12201-1-chandrakanth.patil@broadcom.com>

The driver acquires the required NVMe SGLs from the pre-allocated pool.

Signed-off-by: Sathya Prakash
Signed-off-by: Chandrakanth patil
---
 drivers/scsi/mpi3mr/mpi3mr.h        |  10 ++-
 drivers/scsi/mpi3mr/mpi3mr_app.c    | 124 ++++++++++++++++++++++------
 include/uapi/scsi/scsi_bsg_mpi3mr.h |   2 +
 3 files changed, 109 insertions(+), 27 deletions(-)

diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h
index dfff3df3a666..795e86626101 100644
--- a/drivers/scsi/mpi3mr/mpi3mr.h
+++ b/drivers/scsi/mpi3mr/mpi3mr.h
@@ -218,14 +218,16 @@ extern atomic64_t event_counter;
  * @length: SGE length
  * @rsvd: Reserved
  * @rsvd1: Reserved
- * @sgl_type: sgl type
+ * @sub_type: sgl sub type
+ * @type: sgl type
  */
 struct mpi3mr_nvme_pt_sge {
-	u64 base_addr;
-	u32 length;
+	__le64 base_addr;
+	__le32 length;
 	u16 rsvd;
 	u8 rsvd1;
-	u8 sgl_type;
+	u8 sub_type:4;
+	u8 type:4;
 };
 
 /**
diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c
index e8b0b121a6e3..4b93b7440da6 100644
--- a/drivers/scsi/mpi3mr/mpi3mr_app.c
+++ b/drivers/scsi/mpi3mr/mpi3mr_app.c
@@ -783,14 +783,20 @@ static int mpi3mr_build_nvme_sgl(struct mpi3mr_ioc *mrioc,
 	struct mpi3mr_buf_map *drv_bufs, u8 bufcnt)
 {
 	struct mpi3mr_nvme_pt_sge *nvme_sgl;
-	u64 sgl_ptr;
+	__le64 sgl_dma;
 	u8 count;
 	size_t length = 0;
+	u16 available_sges = 0, i;
+	u32 sge_element_size = sizeof(struct mpi3mr_nvme_pt_sge);
 	struct mpi3mr_buf_map *drv_buf_iter = drv_bufs;
 	u64 sgemod_mask = ((u64)((mrioc->facts.sge_mod_mask) <<
 			    mrioc->facts.sge_mod_shift) << 32);
 	u64 sgemod_val = ((u64)(mrioc->facts.sge_mod_value) <<
 			  mrioc->facts.sge_mod_shift) << 32;
+	u32 size;
+
+	nvme_sgl = (struct mpi3mr_nvme_pt_sge *)
+	    ((u8 *)(nvme_encap_request->command) + MPI3MR_NVME_CMD_SGL_OFFSET);
 
 	/*
 	 * Not all commands require a data transfer. If no data, just return
@@ -799,27 +805,59 @@ static int mpi3mr_build_nvme_sgl(struct mpi3mr_ioc *mrioc,
 	for (count = 0; count < bufcnt; count++, drv_buf_iter++) {
 		if (drv_buf_iter->data_dir == DMA_NONE)
 			continue;
-		sgl_ptr = (u64)drv_buf_iter->kern_buf_dma;
 		length = drv_buf_iter->kern_buf_len;
 		break;
 	}
-	if (!length)
+	if (!length || !drv_buf_iter->num_dma_desc)
 		return 0;
 
-	if (sgl_ptr & sgemod_mask) {
+	if (drv_buf_iter->num_dma_desc == 1) {
+		available_sges = 1;
+		goto build_sges;
+	}
+
+	sgl_dma = cpu_to_le64(mrioc->ioctl_chain_sge.dma_addr);
+	if (sgl_dma & sgemod_mask) {
 		dprint_bsg_err(mrioc,
-		    "%s: SGL address collides with SGE modifier\n",
+		    "%s: SGL chain address collides with SGE modifier\n",
 		    __func__);
 		return -1;
 	}
 
-	sgl_ptr &= ~sgemod_mask;
-	sgl_ptr |= sgemod_val;
-	nvme_sgl = (struct mpi3mr_nvme_pt_sge *)
-	    ((u8 *)(nvme_encap_request->command) + MPI3MR_NVME_CMD_SGL_OFFSET);
+	sgl_dma &= ~sgemod_mask;
+	sgl_dma |= sgemod_val;
+
+	memset(mrioc->ioctl_chain_sge.addr, 0, mrioc->ioctl_chain_sge.size);
+	available_sges = mrioc->ioctl_chain_sge.size / sge_element_size;
+	if (available_sges < drv_buf_iter->num_dma_desc)
+		return -1;
 	memset(nvme_sgl, 0, sizeof(struct mpi3mr_nvme_pt_sge));
-	nvme_sgl->base_addr = sgl_ptr;
-	nvme_sgl->length = length;
+	nvme_sgl->base_addr = sgl_dma;
+	size = drv_buf_iter->num_dma_desc * sizeof(struct mpi3mr_nvme_pt_sge);
+	nvme_sgl->length = cpu_to_le32(size);
+	nvme_sgl->type = MPI3MR_NVMESGL_LAST_SEGMENT;
+	nvme_sgl = (struct mpi3mr_nvme_pt_sge *)mrioc->ioctl_chain_sge.addr;
+
+build_sges:
+	for (i = 0; i < drv_buf_iter->num_dma_desc; i++) {
+		sgl_dma = cpu_to_le64(drv_buf_iter->dma_desc[i].dma_addr);
+		if (sgl_dma & sgemod_mask) {
+			dprint_bsg_err(mrioc,
+			    "%s: SGL address collides with SGE modifier\n",
+			    __func__);
+			return -1;
+		}
+
+		sgl_dma &= ~sgemod_mask;
+		sgl_dma |= sgemod_val;
+
+		nvme_sgl->base_addr = sgl_dma;
+		nvme_sgl->length = cpu_to_le32(drv_buf_iter->dma_desc[i].size);
+		nvme_sgl->type = MPI3MR_NVMESGL_DATA_SEGMENT;
+		nvme_sgl++;
+		available_sges--;
+	}
+
 	return 0;
 }
 
@@ -847,7 +885,7 @@ static int mpi3mr_build_nvme_prp(struct mpi3mr_ioc *mrioc,
 	dma_addr_t prp_entry_dma, prp_page_dma, dma_addr;
 	u32 offset, entry_len, dev_pgsz;
 	u32 page_mask_result, page_mask;
-	size_t length = 0;
+	size_t length = 0, desc_len;
 	u8 count;
 	struct mpi3mr_buf_map *drv_buf_iter = drv_bufs;
 	u64 sgemod_mask = ((u64)((mrioc->facts.sge_mod_mask) <<
@@ -856,6 +894,7 @@ static int mpi3mr_build_nvme_prp(struct mpi3mr_ioc *mrioc,
 	    mrioc->facts.sge_mod_shift) << 32;
 	u16 dev_handle = nvme_encap_request->dev_handle;
 	struct mpi3mr_tgt_dev *tgtdev;
+	u16 desc_count = 0;
 
 	tgtdev = mpi3mr_get_tgtdev_by_handle(mrioc, dev_handle);
 	if (!tgtdev) {
@@ -874,6 +913,21 @@ static int mpi3mr_build_nvme_prp(struct mpi3mr_ioc *mrioc,
 
 	dev_pgsz = 1 << (tgtdev->dev_spec.pcie_inf.pgsz);
 	mpi3mr_tgtdev_put(tgtdev);
+	page_mask = dev_pgsz - 1;
+
+	if (dev_pgsz > MPI3MR_IOCTL_SGE_SIZE) {
+		dprint_bsg_err(mrioc,
+		    "%s: NVMe device page size(%d) is greater than ioctl data sge size(%d) for handle 0x%04x\n",
+		    __func__, dev_pgsz, MPI3MR_IOCTL_SGE_SIZE, dev_handle);
+		return -1;
+	}
+
+	if (MPI3MR_IOCTL_SGE_SIZE % dev_pgsz) {
+		dprint_bsg_err(mrioc,
+		    "%s: ioctl data sge size(%d) is not a multiple of NVMe device page size(%d) for handle 0x%04x\n",
+		    __func__, MPI3MR_IOCTL_SGE_SIZE, dev_pgsz, dev_handle);
+		return -1;
+	}
 
 	/*
 	 * Not all commands require a data transfer. If no data, just return
@@ -882,14 +936,26 @@ static int mpi3mr_build_nvme_prp(struct mpi3mr_ioc *mrioc,
 	for (count = 0; count < bufcnt; count++, drv_buf_iter++) {
 		if (drv_buf_iter->data_dir == DMA_NONE)
 			continue;
-		dma_addr = drv_buf_iter->kern_buf_dma;
 		length = drv_buf_iter->kern_buf_len;
 		break;
 	}
-	if (!length)
+	if (!length || !drv_buf_iter->num_dma_desc)
 		return 0;
 
+	for (count = 0; count < drv_buf_iter->num_dma_desc; count++) {
+		dma_addr = drv_buf_iter->dma_desc[count].dma_addr;
+		if (dma_addr & page_mask) {
+			dprint_bsg_err(mrioc,
+			    "%s:dma_addr 0x%llx is not aligned with page size 0x%x\n",
+			    __func__, dma_addr, dev_pgsz);
+			return -1;
+		}
+	}
+
+	dma_addr = drv_buf_iter->dma_desc[0].dma_addr;
+	desc_len = drv_buf_iter->dma_desc[0].size;
+
 	mrioc->prp_sz = 0;
 	mrioc->prp_list_virt = dma_alloc_coherent(&mrioc->pdev->dev,
 	    dev_pgsz, &mrioc->prp_list_dma, GFP_KERNEL);
@@ -919,7 +985,6 @@ static int mpi3mr_build_nvme_prp(struct mpi3mr_ioc *mrioc,
 	 * Check if we are within 1 entry of a page boundary we don't
 	 * want our first entry to be a PRP List entry.
 	 */
-	page_mask = dev_pgsz - 1;
 	page_mask_result = (uintptr_t)((u8 *)prp_page + prp_size) & page_mask;
 	if (!page_mask_result) {
 		dprint_bsg_err(mrioc, "%s: PRP page is not page aligned\n",
@@ -1033,18 +1098,31 @@ static int mpi3mr_build_nvme_prp(struct mpi3mr_ioc *mrioc,
 			prp_entry_dma += prp_size;
 		}
 
-		/*
-		 * Bump the phys address of the command's data buffer by the
-		 * entry_len.
-		 */
-		dma_addr += entry_len;
-
 		/* decrement length accounting for last partial page. */
-		if (entry_len > length)
+		if (entry_len >= length) {
 			length = 0;
-		else
+		} else {
+			if (entry_len <= desc_len) {
+				dma_addr += entry_len;
+				desc_len -= entry_len;
+			}
+			if (!desc_len) {
+				if ((++desc_count) >=
+				    drv_buf_iter->num_dma_desc) {
+					dprint_bsg_err(mrioc,
+					    "%s: Invalid len %ld while building PRP\n",
+					    __func__, length);
+					goto err_out;
+				}
+				dma_addr =
+				    drv_buf_iter->dma_desc[desc_count].dma_addr;
+				desc_len =
+				    drv_buf_iter->dma_desc[desc_count].size;
+			}
 			length -= entry_len;
+		}
 	}
+
 	return 0;
 err_out:
 	if (mrioc->prp_list_virt) {
diff --git a/include/uapi/scsi/scsi_bsg_mpi3mr.h b/include/uapi/scsi/scsi_bsg_mpi3mr.h
index 907d345f04f9..c72ce387286a 100644
--- a/include/uapi/scsi/scsi_bsg_mpi3mr.h
+++ b/include/uapi/scsi/scsi_bsg_mpi3mr.h
@@ -491,6 +491,8 @@ struct mpi3_nvme_encapsulated_error_reply {
 #define MPI3MR_NVME_DATA_FORMAT_PRP	0
 #define MPI3MR_NVME_DATA_FORMAT_SGL1	1
 #define MPI3MR_NVME_DATA_FORMAT_SGL2	2
+#define MPI3MR_NVMESGL_DATA_SEGMENT	0x00
+#define MPI3MR_NVMESGL_LAST_SEGMENT	0x03
 
 /* MPI3: task management related definitions */
 struct mpi3_scsi_task_mgmt_request {
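
To visualize what part-3 builds: when a BSG data buffer is covered by more
than one pre-allocated DMA chunk, the single SGE embedded in the NVMe
encapsulated command becomes a "last segment" pointer to the pre-allocated
chain page, and the chain page receives one "data segment" SGE per chunk.
The user-space sketch below only illustrates that layout under the stated
assumptions: struct nvme_pt_sge mirrors the patch's struct
mpi3mr_nvme_pt_sge on a little-endian host, and build_chained_sgl(), the
chunk addresses, and the sizes are invented for the example.

/* Standalone illustration of the chained SGL layout (not driver code). */
#include <stdint.h>
#include <stdio.h>

#define SGL_DATA_SEGMENT 0x00	/* cf. MPI3MR_NVMESGL_DATA_SEGMENT */
#define SGL_LAST_SEGMENT 0x03	/* cf. MPI3MR_NVMESGL_LAST_SEGMENT */

struct nvme_pt_sge {		/* mirrors struct mpi3mr_nvme_pt_sge */
	uint64_t base_addr;
	uint32_t length;
	uint16_t rsvd;
	uint8_t rsvd1;
	uint8_t sub_type:4;
	uint8_t type:4;
};

struct dma_chunk {		/* one pre-allocated 8 KB buffer */
	uint64_t dma_addr;
	uint32_t size;
};

/*
 * Point the command's embedded SGE at the chain buffer, then emit one
 * data-segment SGE per DMA chunk into that chain buffer.
 */
static void build_chained_sgl(struct nvme_pt_sge *cmd_sge, uint64_t chain_dma,
			      struct nvme_pt_sge *chain,
			      const struct dma_chunk *chunks,
			      unsigned int nchunks)
{
	unsigned int i;

	cmd_sge->base_addr = chain_dma;
	cmd_sge->length = nchunks * (uint32_t)sizeof(*chain);
	cmd_sge->type = SGL_LAST_SEGMENT;

	for (i = 0; i < nchunks; i++) {
		chain[i].base_addr = chunks[i].dma_addr;
		chain[i].length = chunks[i].size;
		chain[i].type = SGL_DATA_SEGMENT;
	}
}

int main(void)
{
	/* Two made-up 8 KB chunks, as if taken from the pre-allocated pool. */
	struct dma_chunk chunks[2] = {
		{ 0x100000000ULL, 8192 },
		{ 0x100002000ULL, 8192 },
	};
	struct nvme_pt_sge cmd_sge = { 0 }, chain[2] = { { 0 } };

	build_chained_sgl(&cmd_sge, 0x200000000ULL, chain, chunks, 2);
	printf("chain SGE: addr=0x%llx len=%u type=0x%x\n",
	       (unsigned long long)cmd_sge.base_addr, cmd_sge.length,
	       cmd_sge.type);
	return 0;
}

Because the chain page (mrioc->ioctl_chain_sge, one 4 KB page) is allocated
once at load time, the driver can build this chain for any multi-chunk BSG
transfer without a per-request allocation; the single-chunk case skips the
chain entirely and writes one data-segment SGE straight into the command.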