From patchwork Mon Jun 29 19:50:01 2020
X-Patchwork-Id: 279099
From: Klaus Jensen
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Keith Busch, Max Reitz, Maxim Levitsky, Klaus Jensen,
    qemu-devel@nongnu.org
Subject: [PATCH 01/17] hw/block/nvme: memset preallocated requests structures
Date: Mon, 29 Jun 2020 21:50:01 +0200
Message-Id: <20200629195017.1217056-2-its@irrelevant.dk>
In-Reply-To: <20200629195017.1217056-1-its@irrelevant.dk>
References: <20200629195017.1217056-1-its@irrelevant.dk>

This is preparatory to subsequent patches that change how QSGs/IOVs are
handled. It is important that the qsg and iov members of the NvmeRequest
are initially zeroed.

Signed-off-by: Klaus Jensen
Reviewed-by: Maxim Levitsky
---
 hw/block/nvme.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index fbe9b2d50895..3dbce536456c 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -618,7 +618,7 @@ static void nvme_init_sq(NvmeSQueue *sq, NvmeCtrl *n, uint64_t dma_addr,
     sq->size = size;
     sq->cqid = cqid;
     sq->head = sq->tail = 0;
-    sq->io_req = g_new(NvmeRequest, sq->size);
+    sq->io_req = g_new0(NvmeRequest, sq->size);
     QTAILQ_INIT(&sq->req_list);
     QTAILQ_INIT(&sq->out_req_list);
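A brief illustration of why g_new0 matters here: later patches in the series
key cleanup decisions off fields such as qsg.nalloc and iov.nalloc, which is
only sound if a freshly allocated NvmeRequest reads back as all zeroes. A
minimal stand-alone sketch of the g_new/g_new0 distinction using plain GLib
(FakeSgList is a hypothetical stand-in, not the real NvmeRequest):

#include <glib.h>
#include <stdio.h>

typedef struct {
    int nalloc; /* stand-in for req->qsg.nalloc / req->iov.nalloc */
} FakeSgList;

int main(void)
{
    /* g_new leaves the memory uninitialized; reading nalloc before an
     * explicit init would be undefined behaviour. */
    FakeSgList *a = g_new(FakeSgList, 4);

    /* g_new0 returns zero-filled memory, so nalloc == 0 reliably means
     * "never initialized, nothing to destroy". */
    FakeSgList *b = g_new0(FakeSgList, 4);

    printf("b[0].nalloc = %d\n", b[0].nalloc); /* always prints 0 */

    g_free(a);
    g_free(b);
    return 0;
}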
From patchwork Mon Jun 29 19:50:03 2020
X-Patchwork-Id: 279097
From: Klaus Jensen
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Keith Busch, Max Reitz, Maxim Levitsky, Klaus Jensen,
    qemu-devel@nongnu.org
Subject: [PATCH 03/17] hw/block/nvme: replace dma_acct with blk_acct equivalent
Date: Mon, 29 Jun 2020 21:50:03 +0200
Message-Id: <20200629195017.1217056-4-its@irrelevant.dk>
In-Reply-To: <20200629195017.1217056-1-its@irrelevant.dk>
References: <20200629195017.1217056-1-its@irrelevant.dk>

The QSG isn't always initialized, so accounting could be wrong. Issue a
call to block_acct_start instead, with the size taken from the QSG or IOV
depending on the kind of I/O.

Signed-off-by: Klaus Jensen
Reviewed-by: Maxim Levitsky
---
 hw/block/nvme.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index aaf4651eab4c..54f31e7429c6 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -585,9 +585,10 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
         return NVME_INVALID_FIELD | NVME_DNR;
     }
 
-    dma_acct_start(n->conf.blk, &req->acct, &req->qsg, acct);
     if (req->qsg.nsg > 0) {
         req->has_sg = true;
+        block_acct_start(blk_get_stats(n->conf.blk), &req->acct, req->qsg.size,
+                         acct);
         req->aiocb = is_write ?
             dma_blk_write(n->conf.blk, &req->qsg, data_offset, BDRV_SECTOR_SIZE,
                           nvme_rw_cb, req) :
@@ -595,6 +596,8 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
                          nvme_rw_cb, req);
     } else {
         req->has_sg = false;
+        block_acct_start(blk_get_stats(n->conf.blk), &req->acct, req->iov.size,
+                         acct);
         req->aiocb = is_write ?
             blk_aio_pwritev(n->conf.blk, data_offset, &req->iov, 0,
                             nvme_rw_cb, req) :
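The key point is that the accounted byte count now comes from whichever
mapping the request actually uses: the scatter/gather list on the DMA path,
or the I/O vector otherwise. A minimal stand-alone sketch of that selection
(stand-in types; acct_start() stands in for block_acct_start()):

#include <stddef.h>
#include <stdio.h>

struct qsg { int nsg; size_t size; };   /* stand-in for QEMUSGList */
struct iov { size_t size; };            /* stand-in for QEMUIOVector */

static void acct_start(size_t bytes)
{
    printf("account %zu bytes\n", bytes);
}

static void start_io(const struct qsg *qsg, const struct iov *iov)
{
    if (qsg->nsg > 0) {
        acct_start(qsg->size);  /* DMA path: size of the scatter/gather list */
    } else {
        acct_start(iov->size);  /* CMB path: size of the I/O vector */
    }
}

int main(void)
{
    struct qsg q = { .nsg = 2, .size = 8192 };
    struct iov v = { .size = 0 };
    start_io(&q, &v);
    return 0;
}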
From patchwork Mon Jun 29 19:50:04 2020
X-Patchwork-Id: 279098
From: Klaus Jensen
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Keith Busch, Max Reitz, Maxim Levitsky, Klaus Jensen,
    qemu-devel@nongnu.org
Subject: [PATCH 04/17] hw/block/nvme: remove redundant has_sg member
Date: Mon, 29 Jun 2020 21:50:04 +0200
Message-Id: <20200629195017.1217056-5-its@irrelevant.dk>
In-Reply-To: <20200629195017.1217056-1-its@irrelevant.dk>
References: <20200629195017.1217056-1-its@irrelevant.dk>

Remove the has_sg member from NvmeRequest since it's redundant. Also,
make sure the request iov is destroyed at completion time.

Signed-off-by: Klaus Jensen
Reviewed-by: Maxim Levitsky
---
 hw/block/nvme.c | 11 ++++++-----
 hw/block/nvme.h |  1 -
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 54f31e7429c6..ded78a2301a6 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -513,16 +513,20 @@ static void nvme_rw_cb(void *opaque, int ret)
         block_acct_failed(blk_get_stats(n->conf.blk), &req->acct);
         req->status = NVME_INTERNAL_DEV_ERROR;
     }
-    if (req->has_sg) {
+
+    if (req->qsg.nalloc) {
         qemu_sglist_destroy(&req->qsg);
     }
+    if (req->iov.nalloc) {
+        qemu_iovec_destroy(&req->iov);
+    }
+
     nvme_enqueue_req_completion(cq, req);
 }
 
 static uint16_t nvme_flush(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
     NvmeRequest *req)
 {
-    req->has_sg = false;
     block_acct_start(blk_get_stats(n->conf.blk), &req->acct, 0,
                      BLOCK_ACCT_FLUSH);
     req->aiocb = blk_aio_flush(n->conf.blk, nvme_rw_cb, req);
@@ -548,7 +552,6 @@ static uint16_t nvme_write_zeros(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
         return NVME_LBA_RANGE | NVME_DNR;
     }
 
-    req->has_sg = false;
     block_acct_start(blk_get_stats(n->conf.blk), &req->acct, 0,
                      BLOCK_ACCT_WRITE);
     req->aiocb = blk_aio_pwrite_zeroes(n->conf.blk, offset, count,
@@ -586,7 +589,6 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
     }
 
     if (req->qsg.nsg > 0) {
-        req->has_sg = true;
         block_acct_start(blk_get_stats(n->conf.blk), &req->acct, req->qsg.size,
                          acct);
         req->aiocb = is_write ?
@@ -595,7 +597,6 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
             dma_blk_read(n->conf.blk, &req->qsg, data_offset, BDRV_SECTOR_SIZE,
                          nvme_rw_cb, req);
     } else {
-        req->has_sg = false;
         block_acct_start(blk_get_stats(n->conf.blk), &req->acct, req->iov.size,
                          acct);
         req->aiocb = is_write ?
diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index 54ec54f491bf..0169e1736f0c 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -22,7 +22,6 @@ typedef struct NvmeRequest {
     struct NvmeSQueue       *sq;
     BlockAIOCB              *aiocb;
     uint16_t                status;
-    bool                    has_sg;
     NvmeCqe                 cqe;
     BlockAcctCookie         acct;
     QEMUSGList              qsg;
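With requests zero-initialized (patch 01), the mapping structures can carry
their own lifecycle state: a non-zero nalloc means "initialized, must be
destroyed", which makes the separate has_sg flag redundant. A stand-alone
sketch of that rule with stand-in fields (qsg_nalloc/iov_nalloc are
illustrative, not the QEMU layout):

#include <stdio.h>

struct fake_req {
    int qsg_nalloc; /* stand-in for req->qsg.nalloc */
    int iov_nalloc; /* stand-in for req->iov.nalloc */
};

static void complete(struct fake_req *req)
{
    /* the state lives in the structures themselves; no extra flag needed */
    if (req->qsg_nalloc) {
        printf("qemu_sglist_destroy(&req->qsg)\n");
    }
    if (req->iov_nalloc) {
        printf("qemu_iovec_destroy(&req->iov)\n");
    }
}

int main(void)
{
    struct fake_req req = { .qsg_nalloc = 1, .iov_nalloc = 0 };
    complete(&req);
    return 0;
}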
From patchwork Mon Jun 29 19:50:07 2020
X-Patchwork-Id: 279096
From: Klaus Jensen
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Keith Busch, Max Reitz, Maxim Levitsky, Klaus Jensen,
    qemu-devel@nongnu.org
Subject: [PATCH 07/17] hw/block/nvme: add request mapping helper
Date: Mon, 29 Jun 2020 21:50:07 +0200
Message-Id: <20200629195017.1217056-8-its@irrelevant.dk>
In-Reply-To: <20200629195017.1217056-1-its@irrelevant.dk>
References: <20200629195017.1217056-1-its@irrelevant.dk>

Introduce the nvme_map helper to remove some noise in the main nvme_rw
function.

Signed-off-by: Klaus Jensen
Reviewed-by: Maxim Levitsky
---
 hw/block/nvme.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index e7b7a1900b0b..d236a3cdee54 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -378,6 +378,15 @@ static uint16_t nvme_dma_prp(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
     return status;
 }
 
+static uint16_t nvme_map(NvmeCtrl *n, NvmeCmd *cmd, size_t len,
+                         NvmeRequest *req)
+{
+    uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1);
+    uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2);
+
+    return nvme_map_prp(n, &req->qsg, &req->iov, prp1, prp2, len, req);
+}
+
 static void nvme_post_cqes(void *opaque)
 {
     NvmeCQueue *cq = opaque;
@@ -565,8 +574,6 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
     NvmeRwCmd *rw = (NvmeRwCmd *)cmd;
     uint32_t nlb  = le32_to_cpu(rw->nlb) + 1;
     uint64_t slba = le64_to_cpu(rw->slba);
-    uint64_t prp1 = le64_to_cpu(rw->dptr.prp1);
-    uint64_t prp2 = le64_to_cpu(rw->dptr.prp2);
 
     uint8_t lba_index = NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas);
     uint8_t data_shift = ns->id_ns.lbaf[lba_index].ds;
@@ -583,7 +590,7 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
         return NVME_LBA_RANGE | NVME_DNR;
     }
 
-    if (nvme_map_prp(n, &req->qsg, &req->iov, prp1, prp2, data_size, req)) {
+    if (nvme_map(n, cmd, data_size, req)) {
         block_acct_invalid(blk_get_stats(n->conf.blk), acct);
         return NVME_INVALID_FIELD | NVME_DNR;
     }
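The helper is a pure convenience wrapper: it performs the little-endian
conversion of the PRP pair exactly once and delegates to the existing
mapping routine. A stand-alone sketch of that pattern, with le64toh() from
<endian.h> standing in for QEMU's le64_to_cpu() and all other names
illustrative:

#include <endian.h>
#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* stand-in for the PRP pair in NvmeCmd.dptr (little-endian on the wire) */
struct cmd { uint64_t prp1, prp2; };

/* stand-in for nvme_map_prp() */
static uint16_t map_prp(uint64_t prp1, uint64_t prp2, size_t len)
{
    printf("map prp1=0x%" PRIx64 " prp2=0x%" PRIx64 " len=%zu\n",
           prp1, prp2, len);
    return 0;
}

/* stand-in for nvme_map(): convert byte order once, then delegate */
static uint16_t map(struct cmd *cmd, size_t len)
{
    return map_prp(le64toh(cmd->prp1), le64toh(cmd->prp2), len);
}

int main(void)
{
    struct cmd c = { .prp1 = htole64(0x1000), .prp2 = htole64(0x2000) };
    return map(&c, 8192);
}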
From patchwork Mon Jun 29 19:50:11 2020
X-Patchwork-Id: 279095
From: Klaus Jensen
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Keith Busch, Max Reitz, Maxim Levitsky, Klaus Jensen,
    qemu-devel@nongnu.org
Subject: [PATCH 11/17] hw/block/nvme: be consistent about zeros vs zeroes
Date: Mon, 29 Jun 2020 21:50:11 +0200
Message-Id: <20200629195017.1217056-12-its@irrelevant.dk>
In-Reply-To: <20200629195017.1217056-1-its@irrelevant.dk>
References: <20200629195017.1217056-1-its@irrelevant.dk>

The NVM Express specification generally uses 'zeroes' and not 'zeros'.
It might very well be wrong, but let us align with it.

Signed-off-by: Klaus Jensen
---
 block/nvme.c         | 4 ++--
 hw/block/nvme.c      | 8 ++++----
 include/block/nvme.h | 4 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/block/nvme.c b/block/nvme.c
index 29e90557c428..bee0878dec71 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -465,7 +465,7 @@ static void nvme_identify(BlockDriverState *bs, int namespace, Error **errp)
                           s->page_size / sizeof(uint64_t) * s->page_size);
 
     oncs = le16_to_cpu(idctrl->oncs);
-    s->supports_write_zeroes = !!(oncs & NVME_ONCS_WRITE_ZEROS);
+    s->supports_write_zeroes = !!(oncs & NVME_ONCS_WRITE_ZEROES);
     s->supports_discard = !!(oncs & NVME_ONCS_DSM);
 
     memset(resp, 0, 4096);
@@ -1117,7 +1117,7 @@ static coroutine_fn int nvme_co_pwrite_zeroes(BlockDriverState *bs,
     }
 
     NvmeCmd cmd = {
-        .opcode = NVME_CMD_WRITE_ZEROS,
+        .opcode = NVME_CMD_WRITE_ZEROES,
         .nsid = cpu_to_le32(s->nsid),
         .cdw10 = cpu_to_le32((offset >> s->blkshift) & 0xFFFFFFFF),
         .cdw11 = cpu_to_le32(((offset >> s->blkshift) >> 32) & 0xFFFFFFFF),
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index d5dff6869b69..12f1b6331c43 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -579,7 +579,7 @@ static uint16_t nvme_flush(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
     return NVME_NO_COMPLETE;
 }
 
-static uint16_t nvme_write_zeros(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
+static uint16_t nvme_write_zeroes(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
     NvmeRequest *req)
 {
     NvmeRwCmd *rw = (NvmeRwCmd *)cmd;
@@ -679,8 +679,8 @@ static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
     switch (cmd->opcode) {
     case NVME_CMD_FLUSH:
         return nvme_flush(n, ns, cmd, req);
-    case NVME_CMD_WRITE_ZEROS:
-        return nvme_write_zeros(n, ns, cmd, req);
+    case NVME_CMD_WRITE_ZEROES:
+        return nvme_write_zeroes(n, ns, cmd, req);
     case NVME_CMD_WRITE:
     case NVME_CMD_READ:
         return nvme_rw(n, ns, cmd, req);
@@ -2280,7 +2280,7 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev)
     id->sqes = (0x6 << 4) | 0x6;
     id->cqes = (0x4 << 4) | 0x4;
     id->nn = cpu_to_le32(n->num_namespaces);
-    id->oncs = cpu_to_le16(NVME_ONCS_WRITE_ZEROS | NVME_ONCS_TIMESTAMP |
+    id->oncs = cpu_to_le16(NVME_ONCS_WRITE_ZEROES | NVME_ONCS_TIMESTAMP |
                            NVME_ONCS_FEATURES);
     pstrcpy((char *) id->subnqn, sizeof(id->subnqn), "nqn.2019-08.org.qemu:");
diff --git a/include/block/nvme.h b/include/block/nvme.h
index 60833039a6c5..91456255ffa7 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -459,7 +459,7 @@ enum NvmeIoCommands {
     NVME_CMD_READ         = 0x02,
     NVME_CMD_WRITE_UNCOR  = 0x04,
     NVME_CMD_COMPARE      = 0x05,
-    NVME_CMD_WRITE_ZEROS  = 0x08,
+    NVME_CMD_WRITE_ZEROES = 0x08,
     NVME_CMD_DSM          = 0x09,
 };
 
@@ -837,7 +837,7 @@ enum NvmeIdCtrlOncs {
     NVME_ONCS_COMPARE       = 1 << 0,
     NVME_ONCS_WRITE_UNCORR  = 1 << 1,
     NVME_ONCS_DSM           = 1 << 2,
-    NVME_ONCS_WRITE_ZEROS   = 1 << 3,
+    NVME_ONCS_WRITE_ZEROES  = 1 << 3,
     NVME_ONCS_FEATURES      = 1 << 4,
     NVME_ONCS_RESRVATIONS   = 1 << 5,
     NVME_ONCS_TIMESTAMP     = 1 << 6,
From patchwork Mon Jun 29 19:50:13 2020
X-Patchwork-Id: 279094
From: Klaus Jensen
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Keith Busch, Max Reitz, Maxim Levitsky, Klaus Jensen,
    qemu-devel@nongnu.org
Subject: [PATCH 13/17] hw/block/nvme: consolidate qsg/iov clearing
Date: Mon, 29 Jun 2020 21:50:13 +0200
Message-Id: <20200629195017.1217056-14-its@irrelevant.dk>
In-Reply-To: <20200629195017.1217056-1-its@irrelevant.dk>
References: <20200629195017.1217056-1-its@irrelevant.dk>

Always destroy the request qsg/iov at the end of request use.

Signed-off-by: Klaus Jensen
---
 hw/block/nvme.c | 48 +++++++++++++++++-------------------------------
 1 file changed, 17 insertions(+), 31 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 3d38f61b61e5..c6c2c4670f7d 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -178,6 +178,14 @@ static void nvme_req_clear(NvmeRequest *req)
 {
     req->ns = NULL;
     memset(&req->cqe, 0x0, sizeof(req->cqe));
+
+    if (req->qsg.sg) {
+        qemu_sglist_destroy(&req->qsg);
+    }
+
+    if (req->iov.iov) {
+        qemu_iovec_destroy(&req->iov);
+    }
 }
 
 static uint16_t nvme_map_addr_cmb(NvmeCtrl *n, QEMUIOVector *iov, hwaddr addr,
@@ -262,15 +270,14 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov,
 
     status = nvme_map_addr(n, qsg, iov, prp1, trans_len);
     if (status) {
-        goto unmap;
+        return status;
     }
 
     len -= trans_len;
     if (len) {
         if (unlikely(!prp2)) {
             trace_pci_nvme_err_invalid_prp2_missing();
-            status = NVME_INVALID_FIELD | NVME_DNR;
-            goto unmap;
+            return NVME_INVALID_FIELD | NVME_DNR;
         }
 
         if (len > n->page_size) {
@@ -291,13 +298,11 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov,
                 if (i == n->max_prp_ents - 1 && len > n->page_size) {
                     if (unlikely(!prp_ent || prp_ent & (n->page_size - 1))) {
                         trace_pci_nvme_err_invalid_prplist_ent(prp_ent);
-                        status = NVME_INVALID_FIELD | NVME_DNR;
-                        goto unmap;
+                        return NVME_INVALID_FIELD | NVME_DNR;
                     }
 
                     if (prp_list_in_cmb != nvme_addr_is_cmb(n, prp_ent)) {
-                        status = NVME_INVALID_USE_OF_CMB | NVME_DNR;
-                        goto unmap;
+                        return NVME_INVALID_USE_OF_CMB | NVME_DNR;
                     }
 
                     i = 0;
@@ -310,14 +315,13 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov,
 
                 if (unlikely(!prp_ent || prp_ent & (n->page_size - 1))) {
                     trace_pci_nvme_err_invalid_prplist_ent(prp_ent);
-                    status = NVME_INVALID_FIELD | NVME_DNR;
-                    goto unmap;
+                    return NVME_INVALID_FIELD | NVME_DNR;
                 }
 
                 trans_len = MIN(len, n->page_size);
                 status = nvme_map_addr(n, qsg, iov, prp_ent, trans_len);
                 if (status) {
-                    goto unmap;
+                    return status;
                 }
 
                 len -= trans_len;
@@ -326,27 +330,16 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov,
         } else {
             if (unlikely(prp2 & (n->page_size - 1))) {
                 trace_pci_nvme_err_invalid_prp2_align(prp2);
-                status = NVME_INVALID_FIELD | NVME_DNR;
-                goto unmap;
+                return NVME_INVALID_FIELD | NVME_DNR;
             }
 
             status = nvme_map_addr(n, qsg, iov, prp2, len);
             if (status) {
-                goto unmap;
+                return status;
             }
         }
     }
+
     return NVME_SUCCESS;
-
-unmap:
-    if (iov && iov->iov) {
-        qemu_iovec_destroy(iov);
-    }
-
-    if (qsg && qsg->sg) {
-        qemu_sglist_destroy(qsg);
-    }
-
-    return status;
 }
 
 static uint16_t nvme_dma_prp(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
@@ -566,13 +559,6 @@ static void nvme_rw_cb(void *opaque, int ret)
         req->status = NVME_INTERNAL_DEV_ERROR;
     }
 
-    if (req->qsg.nalloc) {
-        qemu_sglist_destroy(&req->qsg);
-    }
-    if (req->iov.nalloc) {
-        qemu_iovec_destroy(&req->iov);
-    }
-
     nvme_enqueue_req_completion(cq, req);
 }
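Centralizing the teardown in nvme_req_clear() is what lets nvme_map_prp()
drop its unmap label: error paths may now return immediately, since any
partially constructed qsg/iov is destroyed when the request is recycled. A
stand-alone sketch of the idea (stand-in types and allocation, not the QEMU
API):

#include <stdio.h>
#include <stdlib.h>

/* stand-ins for the qsg/iov members of NvmeRequest */
struct fake_req {
    int *qsg_sg;    /* non-NULL once the scatter/gather list is initialized */
    int *iov_iov;   /* non-NULL once the I/O vector is initialized */
};

/* stand-in for nvme_req_clear(): runs whenever a request is recycled */
static void req_clear(struct fake_req *req)
{
    if (req->qsg_sg) {
        free(req->qsg_sg);
        req->qsg_sg = NULL;
    }
    if (req->iov_iov) {
        free(req->iov_iov);
        req->iov_iov = NULL;
    }
}

/* mapping code can now fail fast; no unwind label needed */
static int map(struct fake_req *req, int fail_early)
{
    req->qsg_sg = malloc(sizeof(int));
    if (fail_early) {
        return -1;              /* partial state is cleaned up later */
    }
    req->iov_iov = malloc(sizeof(int));
    return 0;
}

int main(void)
{
    struct fake_req req = { 0 };
    if (map(&req, 1) < 0) {
        printf("map failed; clearing request\n");
    }
    req_clear(&req);            /* tears down whatever was built */
    return 0;
}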
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 279093 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.0 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 99E32C433E1 for ; Mon, 29 Jun 2020 19:58:01 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 5FD9020674 for ; Mon, 29 Jun 2020 19:58:01 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5FD9020674 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=irrelevant.dk Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Received: from localhost ([::1]:33614 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1jpzuh-0003kW-LD for qemu-devel@archiver.kernel.org; Mon, 29 Jun 2020 15:57:59 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:57634) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1jpznr-0007Tv-LE; Mon, 29 Jun 2020 15:50:55 -0400 Received: from charlie.dont.surf ([128.199.63.193]:46208) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1jpzno-0005z6-8n; Mon, 29 Jun 2020 15:50:55 -0400 Received: from apples.local (80-167-98-190-cable.dk.customer.tdc.net [80.167.98.190]) by charlie.dont.surf (Postfix) with ESMTPSA id 238DCBF816; Mon, 29 Jun 2020 19:50:29 +0000 (UTC) From: Klaus Jensen To: qemu-block@nongnu.org Subject: [PATCH 14/17] hw/block/nvme: remove NvmeCmd parameter Date: Mon, 29 Jun 2020 21:50:14 +0200 Message-Id: <20200629195017.1217056-15-its@irrelevant.dk> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20200629195017.1217056-1-its@irrelevant.dk> References: <20200629195017.1217056-1-its@irrelevant.dk> MIME-Version: 1.0 Received-SPF: pass client-ip=128.199.63.193; envelope-from=its@irrelevant.dk; helo=charlie.dont.surf X-detected-operating-system: by eggs.gnu.org: First seen = 2020/06/29 14:26:53 X-ACL-Warn: Detected OS = Linux 3.11 and newer [fuzzy] X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=_AUTOLEARN X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Klaus Jensen , qemu-devel@nongnu.org, Max Reitz , Klaus Jensen , Keith Busch , Maxim Levitsky Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" From: Klaus Jensen Keep a copy of the raw nvme command in the NvmeRequest and remove the now redundant NvmeCmd parameter. 
Signed-off-by: Klaus Jensen --- hw/block/nvme.c | 177 +++++++++++++++++++++++++----------------------- hw/block/nvme.h | 1 + 2 files changed, 93 insertions(+), 85 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index c6c2c4670f7d..3309f8e0eac1 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -390,9 +390,9 @@ static uint16_t nvme_dma_prp(NvmeCtrl *n, uint8_t *ptr, uint32_t len, return status; } -static uint16_t nvme_map(NvmeCtrl *n, NvmeCmd *cmd, size_t len, - NvmeRequest *req) +static uint16_t nvme_map(NvmeCtrl *n, size_t len, NvmeRequest *req) { + NvmeCmd *cmd = &req->cmd; uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1); uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2); @@ -562,7 +562,7 @@ static void nvme_rw_cb(void *opaque, int ret) nvme_enqueue_req_completion(cq, req); } -static uint16_t nvme_flush(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) +static uint16_t nvme_flush(NvmeCtrl *n, NvmeRequest *req) { block_acct_start(blk_get_stats(n->conf.blk), &req->acct, 0, BLOCK_ACCT_FLUSH); @@ -571,9 +571,9 @@ static uint16_t nvme_flush(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) return NVME_NO_COMPLETE; } -static uint16_t nvme_write_zeroes(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) +static uint16_t nvme_write_zeroes(NvmeCtrl *n, NvmeRequest *req) { - NvmeRwCmd *rw = (NvmeRwCmd *)cmd; + NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; NvmeNamespace *ns = req->ns; const uint8_t lba_index = NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas); const uint8_t data_shift = ns->id_ns.lbaf[lba_index].ds; @@ -598,9 +598,9 @@ static uint16_t nvme_write_zeroes(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) return NVME_NO_COMPLETE; } -static uint16_t nvme_rw(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) +static uint16_t nvme_rw(NvmeCtrl *n, NvmeRequest *req) { - NvmeRwCmd *rw = (NvmeRwCmd *)cmd; + NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; NvmeNamespace *ns = req->ns; uint32_t nlb = le32_to_cpu(rw->nlb) + 1; uint64_t slba = le64_to_cpu(rw->slba); @@ -629,7 +629,7 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) return status; } - if (nvme_map(n, cmd, data_size, req)) { + if (nvme_map(n, data_size, req)) { block_acct_invalid(blk_get_stats(n->conf.blk), acct); return NVME_INVALID_FIELD | NVME_DNR; } @@ -655,11 +655,12 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) return NVME_NO_COMPLETE; } -static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) +static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeRequest *req) { - uint32_t nsid = le32_to_cpu(cmd->nsid); + uint32_t nsid = le32_to_cpu(req->cmd.nsid); - trace_pci_nvme_io_cmd(nvme_cid(req), nsid, nvme_sqid(req), cmd->opcode); + trace_pci_nvme_io_cmd(nvme_cid(req), nsid, nvme_sqid(req), + req->cmd.opcode); if (unlikely(nsid == 0 || nsid > n->num_namespaces)) { trace_pci_nvme_err_invalid_ns(nsid, n->num_namespaces); @@ -667,16 +668,16 @@ static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) } req->ns = &n->namespaces[nsid - 1]; - switch (cmd->opcode) { + switch (req->cmd.opcode) { case NVME_CMD_FLUSH: - return nvme_flush(n, cmd, req); + return nvme_flush(n, req); case NVME_CMD_WRITE_ZEROES: - return nvme_write_zeroes(n, cmd, req); + return nvme_write_zeroes(n, req); case NVME_CMD_WRITE: case NVME_CMD_READ: - return nvme_rw(n, cmd, req); + return nvme_rw(n, req); default: - trace_pci_nvme_err_invalid_opc(cmd->opcode); + trace_pci_nvme_err_invalid_opc(req->cmd.opcode); return NVME_INVALID_OPCODE | NVME_DNR; } } @@ -692,10 +693,10 @@ static void nvme_free_sq(NvmeSQueue *sq, NvmeCtrl *n) } } -static uint16_t 
nvme_del_sq(NvmeCtrl *n, NvmeCmd *cmd) +static uint16_t nvme_del_sq(NvmeCtrl *n, NvmeRequest *req) { - NvmeDeleteQ *c = (NvmeDeleteQ *)cmd; - NvmeRequest *req, *next; + NvmeDeleteQ *c = (NvmeDeleteQ *)&req->cmd; + NvmeRequest *r, *next; NvmeSQueue *sq; NvmeCQueue *cq; uint16_t qid = le16_to_cpu(c->qid); @@ -709,19 +710,19 @@ static uint16_t nvme_del_sq(NvmeCtrl *n, NvmeCmd *cmd) sq = n->sq[qid]; while (!QTAILQ_EMPTY(&sq->out_req_list)) { - req = QTAILQ_FIRST(&sq->out_req_list); - assert(req->aiocb); - blk_aio_cancel(req->aiocb); + r = QTAILQ_FIRST(&sq->out_req_list); + assert(r->aiocb); + blk_aio_cancel(r->aiocb); } if (!nvme_check_cqid(n, sq->cqid)) { cq = n->cq[sq->cqid]; QTAILQ_REMOVE(&cq->sq_list, sq, entry); nvme_post_cqes(cq); - QTAILQ_FOREACH_SAFE(req, &cq->req_list, entry, next) { - if (req->sq == sq) { - QTAILQ_REMOVE(&cq->req_list, req, entry); - QTAILQ_INSERT_TAIL(&sq->req_list, req, entry); + QTAILQ_FOREACH_SAFE(r, &cq->req_list, entry, next) { + if (r->sq == sq) { + QTAILQ_REMOVE(&cq->req_list, r, entry); + QTAILQ_INSERT_TAIL(&sq->req_list, r, entry); } } } @@ -758,10 +759,10 @@ static void nvme_init_sq(NvmeSQueue *sq, NvmeCtrl *n, uint64_t dma_addr, n->sq[sqid] = sq; } -static uint16_t nvme_create_sq(NvmeCtrl *n, NvmeCmd *cmd) +static uint16_t nvme_create_sq(NvmeCtrl *n, NvmeRequest *req) { NvmeSQueue *sq; - NvmeCreateSq *c = (NvmeCreateSq *)cmd; + NvmeCreateSq *c = (NvmeCreateSq *)&req->cmd; uint16_t cqid = le16_to_cpu(c->cqid); uint16_t sqid = le16_to_cpu(c->sqid); @@ -796,10 +797,10 @@ static uint16_t nvme_create_sq(NvmeCtrl *n, NvmeCmd *cmd) return NVME_SUCCESS; } -static uint16_t nvme_smart_info(NvmeCtrl *n, NvmeCmd *cmd, uint8_t rae, - uint32_t buf_len, uint64_t off, - NvmeRequest *req) +static uint16_t nvme_smart_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len, + uint64_t off, NvmeRequest *req) { + NvmeCmd *cmd = &req->cmd; uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1); uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2); uint32_t nsid = le32_to_cpu(cmd->nsid); @@ -855,10 +856,11 @@ static uint16_t nvme_smart_info(NvmeCtrl *n, NvmeCmd *cmd, uint8_t rae, DMA_DIRECTION_FROM_DEVICE, req); } -static uint16_t nvme_fw_log_info(NvmeCtrl *n, NvmeCmd *cmd, uint32_t buf_len, - uint64_t off, NvmeRequest *req) +static uint16_t nvme_fw_log_info(NvmeCtrl *n, uint32_t buf_len, uint64_t off, + NvmeRequest *req) { uint32_t trans_len; + NvmeCmd *cmd = &req->cmd; uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1); uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2); NvmeFwSlotInfoLog fw_log = { @@ -877,11 +879,11 @@ static uint16_t nvme_fw_log_info(NvmeCtrl *n, NvmeCmd *cmd, uint32_t buf_len, DMA_DIRECTION_FROM_DEVICE, req); } -static uint16_t nvme_error_info(NvmeCtrl *n, NvmeCmd *cmd, uint8_t rae, - uint32_t buf_len, uint64_t off, - NvmeRequest *req) +static uint16_t nvme_error_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len, + uint64_t off, NvmeRequest *req) { uint32_t trans_len; + NvmeCmd *cmd = &req->cmd; uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1); uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2); NvmeErrorLog errlog; @@ -902,8 +904,10 @@ static uint16_t nvme_error_info(NvmeCtrl *n, NvmeCmd *cmd, uint8_t rae, DMA_DIRECTION_FROM_DEVICE, req); } -static uint16_t nvme_get_log(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) +static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest *req) { + NvmeCmd *cmd = &req->cmd; + uint32_t dw10 = le32_to_cpu(cmd->cdw10); uint32_t dw11 = le32_to_cpu(cmd->cdw11); uint32_t dw12 = le32_to_cpu(cmd->cdw12); @@ -938,11 +942,11 @@ static uint16_t nvme_get_log(NvmeCtrl *n, NvmeCmd *cmd, 
NvmeRequest *req) switch (lid) { case NVME_LOG_ERROR_INFO: - return nvme_error_info(n, cmd, rae, len, off, req); + return nvme_error_info(n, rae, len, off, req); case NVME_LOG_SMART_INFO: - return nvme_smart_info(n, cmd, rae, len, off, req); + return nvme_smart_info(n, rae, len, off, req); case NVME_LOG_FW_SLOT_INFO: - return nvme_fw_log_info(n, cmd, len, off, req); + return nvme_fw_log_info(n, len, off, req); default: trace_pci_nvme_err_invalid_log_page(nvme_cid(req), lid); return NVME_INVALID_FIELD | NVME_DNR; @@ -960,9 +964,9 @@ static void nvme_free_cq(NvmeCQueue *cq, NvmeCtrl *n) } } -static uint16_t nvme_del_cq(NvmeCtrl *n, NvmeCmd *cmd) +static uint16_t nvme_del_cq(NvmeCtrl *n, NvmeRequest *req) { - NvmeDeleteQ *c = (NvmeDeleteQ *)cmd; + NvmeDeleteQ *c = (NvmeDeleteQ *)&req->cmd; NvmeCQueue *cq; uint16_t qid = le16_to_cpu(c->qid); @@ -1003,10 +1007,10 @@ static void nvme_init_cq(NvmeCQueue *cq, NvmeCtrl *n, uint64_t dma_addr, cq->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, nvme_post_cqes, cq); } -static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeCmd *cmd) +static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req) { NvmeCQueue *cq; - NvmeCreateCq *c = (NvmeCreateCq *)cmd; + NvmeCreateCq *c = (NvmeCreateCq *)&req->cmd; uint16_t cqid = le16_to_cpu(c->cqid); uint16_t vector = le16_to_cpu(c->irq_vector); uint16_t qsize = le16_to_cpu(c->qsize); @@ -1054,9 +1058,9 @@ static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeCmd *cmd) return NVME_SUCCESS; } -static uint16_t nvme_identify_ctrl(NvmeCtrl *n, NvmeIdentify *c, - NvmeRequest *req) +static uint16_t nvme_identify_ctrl(NvmeCtrl *n, NvmeRequest *req) { + NvmeIdentify *c = (NvmeIdentify *)&req->cmd; uint64_t prp1 = le64_to_cpu(c->prp1); uint64_t prp2 = le64_to_cpu(c->prp2); @@ -1066,10 +1070,10 @@ static uint16_t nvme_identify_ctrl(NvmeCtrl *n, NvmeIdentify *c, prp2, DMA_DIRECTION_FROM_DEVICE, req); } -static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeIdentify *c, - NvmeRequest *req) +static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req) { NvmeNamespace *ns; + NvmeIdentify *c = (NvmeIdentify *)&req->cmd; uint32_t nsid = le32_to_cpu(c->nsid); uint64_t prp1 = le64_to_cpu(c->prp1); uint64_t prp2 = le64_to_cpu(c->prp2); @@ -1087,9 +1091,9 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeIdentify *c, prp2, DMA_DIRECTION_FROM_DEVICE, req); } -static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeIdentify *c, - NvmeRequest *req) +static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req) { + NvmeIdentify *c = (NvmeIdentify *)&req->cmd; static const int data_len = NVME_IDENTIFY_DATA_SIZE; uint32_t min_nsid = le32_to_cpu(c->nsid); uint64_t prp1 = le64_to_cpu(c->prp1); @@ -1116,9 +1120,9 @@ static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeIdentify *c, return ret; } -static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeIdentify *c, - NvmeRequest *req) +static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req) { + NvmeIdentify *c = (NvmeIdentify *)&req->cmd; uint32_t nsid = le32_to_cpu(c->nsid); uint64_t prp1 = le64_to_cpu(c->prp1); uint64_t prp2 = le64_to_cpu(c->prp2); @@ -1157,28 +1161,28 @@ static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeIdentify *c, DMA_DIRECTION_FROM_DEVICE, req); } -static uint16_t nvme_identify(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) +static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequest *req) { - NvmeIdentify *c = (NvmeIdentify *)cmd; + NvmeIdentify *c = (NvmeIdentify *)&req->cmd; switch (le32_to_cpu(c->cns)) { case NVME_ID_CNS_NS: - return nvme_identify_ns(n, 
c, req); + return nvme_identify_ns(n, req); case NVME_ID_CNS_CTRL: - return nvme_identify_ctrl(n, c, req); + return nvme_identify_ctrl(n, req); case NVME_ID_CNS_NS_ACTIVE_LIST: - return nvme_identify_nslist(n, c, req); + return nvme_identify_nslist(n, req); case NVME_ID_CNS_NS_DESCR_LIST: - return nvme_identify_ns_descr_list(n, c, req); + return nvme_identify_ns_descr_list(n, req); default: trace_pci_nvme_err_invalid_identify_cns(le32_to_cpu(c->cns)); return NVME_INVALID_FIELD | NVME_DNR; } } -static uint16_t nvme_abort(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) +static uint16_t nvme_abort(NvmeCtrl *n, NvmeRequest *req) { - uint16_t sqid = le32_to_cpu(cmd->cdw10) & 0xffff; + uint16_t sqid = le32_to_cpu(req->cmd.cdw10) & 0xffff; req->cqe.result = 1; if (nvme_check_sqid(n, sqid)) { @@ -1228,9 +1232,9 @@ static inline uint64_t nvme_get_timestamp(const NvmeCtrl *n) return cpu_to_le64(ts.all); } -static uint16_t nvme_get_feature_timestamp(NvmeCtrl *n, NvmeCmd *cmd, - NvmeRequest *req) +static uint16_t nvme_get_feature_timestamp(NvmeCtrl *n, NvmeRequest *req) { + NvmeCmd *cmd = &req->cmd; uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1); uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2); @@ -1240,8 +1244,9 @@ static uint16_t nvme_get_feature_timestamp(NvmeCtrl *n, NvmeCmd *cmd, prp2, DMA_DIRECTION_FROM_DEVICE, req); } -static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) +static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req) { + NvmeCmd *cmd = &req->cmd; uint32_t dw10 = le32_to_cpu(cmd->cdw10); uint32_t dw11 = le32_to_cpu(cmd->cdw11); uint32_t nsid = le32_to_cpu(cmd->nsid); @@ -1311,7 +1316,7 @@ static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) result = cpu_to_le32(n->features.async_config); break; case NVME_TIMESTAMP: - return nvme_get_feature_timestamp(n, cmd, req); + return nvme_get_feature_timestamp(n, req); default: break; } @@ -1358,11 +1363,11 @@ out: return NVME_SUCCESS; } -static uint16_t nvme_set_feature_timestamp(NvmeCtrl *n, NvmeCmd *cmd, - NvmeRequest *req) +static uint16_t nvme_set_feature_timestamp(NvmeCtrl *n, NvmeRequest *req) { uint16_t ret; uint64_t timestamp; + NvmeCmd *cmd = &req->cmd; uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1); uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2); @@ -1377,8 +1382,9 @@ static uint16_t nvme_set_feature_timestamp(NvmeCtrl *n, NvmeCmd *cmd, return NVME_SUCCESS; } -static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) +static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req) { + NvmeCmd *cmd = &req->cmd; uint32_t dw10 = le32_to_cpu(cmd->cdw10); uint32_t dw11 = le32_to_cpu(cmd->cdw11); uint32_t nsid = le32_to_cpu(cmd->nsid); @@ -1469,14 +1475,14 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) n->features.async_config = dw11; break; case NVME_TIMESTAMP: - return nvme_set_feature_timestamp(n, cmd, req); + return nvme_set_feature_timestamp(n, req); default: return NVME_FEAT_NOT_CHANGABLE | NVME_DNR; } return NVME_SUCCESS; } -static uint16_t nvme_aer(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) +static uint16_t nvme_aer(NvmeCtrl *n, NvmeRequest *req) { trace_pci_nvme_aer(nvme_cid(req)); @@ -1495,33 +1501,33 @@ static uint16_t nvme_aer(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) return NVME_NO_COMPLETE; } -static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) +static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req) { - trace_pci_nvme_admin_cmd(nvme_cid(req), nvme_sqid(req), cmd->opcode); + 
trace_pci_nvme_admin_cmd(nvme_cid(req), nvme_sqid(req), req->cmd.opcode); - switch (cmd->opcode) { + switch (req->cmd.opcode) { case NVME_ADM_CMD_DELETE_SQ: - return nvme_del_sq(n, cmd); + return nvme_del_sq(n, req); case NVME_ADM_CMD_CREATE_SQ: - return nvme_create_sq(n, cmd); + return nvme_create_sq(n, req); case NVME_ADM_CMD_GET_LOG_PAGE: - return nvme_get_log(n, cmd, req); + return nvme_get_log(n, req); case NVME_ADM_CMD_DELETE_CQ: - return nvme_del_cq(n, cmd); + return nvme_del_cq(n, req); case NVME_ADM_CMD_CREATE_CQ: - return nvme_create_cq(n, cmd); + return nvme_create_cq(n, req); case NVME_ADM_CMD_IDENTIFY: - return nvme_identify(n, cmd, req); + return nvme_identify(n, req); case NVME_ADM_CMD_ABORT: - return nvme_abort(n, cmd, req); + return nvme_abort(n, req); case NVME_ADM_CMD_SET_FEATURES: - return nvme_set_feature(n, cmd, req); + return nvme_set_feature(n, req); case NVME_ADM_CMD_GET_FEATURES: - return nvme_get_feature(n, cmd, req); + return nvme_get_feature(n, req); case NVME_ADM_CMD_ASYNC_EV_REQ: - return nvme_aer(n, cmd, req); + return nvme_aer(n, req); default: - trace_pci_nvme_err_invalid_admin_opc(cmd->opcode); + trace_pci_nvme_err_invalid_admin_opc(req->cmd.opcode); return NVME_INVALID_OPCODE | NVME_DNR; } } @@ -1546,9 +1552,10 @@ static void nvme_process_sq(void *opaque) QTAILQ_REMOVE(&sq->req_list, req, entry); QTAILQ_INSERT_TAIL(&sq->out_req_list, req, entry); req->cqe.cid = cmd.cid; + memcpy(&req->cmd, &cmd, sizeof(NvmeCmd)); - status = sq->sqid ? nvme_io_cmd(n, &cmd, req) : - nvme_admin_cmd(n, &cmd, req); + status = sq->sqid ? nvme_io_cmd(n, req) : + nvme_admin_cmd(n, req); if (status != NVME_NO_COMPLETE) { req->status = status; nvme_enqueue_req_completion(cq, req); diff --git a/hw/block/nvme.h b/hw/block/nvme.h index 6eaafd2e35f5..454b15945bee 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -25,6 +25,7 @@ typedef struct NvmeRequest { BlockAIOCB *aiocb; uint16_t status; NvmeCqe cqe; + NvmeCmd cmd; BlockAcctCookie acct; QEMUSGList qsg; QEMUIOVector iov; From patchwork Mon Jun 29 19:50:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 279091 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.0 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 10468C433DF for ; Mon, 29 Jun 2020 20:00:48 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id D10AF206A1 for ; Mon, 29 Jun 2020 20:00:47 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D10AF206A1 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=irrelevant.dk Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Received: from localhost ([::1]:42890 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1jpzxP-0007bA-3b for qemu-devel@archiver.kernel.org; Mon, 29 Jun 2020 16:00:47 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:57658) 
by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1jpznt-0007Z9-AV; Mon, 29 Jun 2020 15:50:57 -0400 Received: from charlie.dont.surf ([128.199.63.193]:46220) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1jpznp-0005zV-I8; Mon, 29 Jun 2020 15:50:56 -0400 Received: from apples.local (80-167-98-190-cable.dk.customer.tdc.net [80.167.98.190]) by charlie.dont.surf (Postfix) with ESMTPSA id 7E392BF724; Mon, 29 Jun 2020 19:50:29 +0000 (UTC) From: Klaus Jensen To: qemu-block@nongnu.org Subject: [PATCH 15/17] hw/block/nvme: allow multiple aios per command Date: Mon, 29 Jun 2020 21:50:15 +0200 Message-Id: <20200629195017.1217056-16-its@irrelevant.dk> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20200629195017.1217056-1-its@irrelevant.dk> References: <20200629195017.1217056-1-its@irrelevant.dk> MIME-Version: 1.0 Received-SPF: pass client-ip=128.199.63.193; envelope-from=its@irrelevant.dk; helo=charlie.dont.surf X-detected-operating-system: by eggs.gnu.org: First seen = 2020/06/29 14:26:53 X-ACL-Warn: Detected OS = Linux 3.11 and newer [fuzzy] X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, URIBL_BLOCKED=0.001 autolearn=_AUTOLEARN X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Klaus Jensen , qemu-devel@nongnu.org, Max Reitz , Klaus Jensen , Keith Busch , Maxim Levitsky Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" From: Klaus Jensen This refactors how the device issues asynchronous block backend requests. The NvmeRequest now holds a queue of NvmeAIOs that are associated with the command. This allows multiple aios to be issued for a command. Only when all requests have been completed will the device post a completion queue entry. Because the device is currently guaranteed to only issue a single aio request per command, the benefit is not immediately obvious. But this functionality is required to support metadata, the dataset management command as well as zoned namespaces and other features that require additional persistent state. Signed-off-by: Klaus Jensen Signed-off-by: Klaus Jensen Acked-by: Keith Busch --- hw/block/nvme.c | 330 +++++++++++++++++++++++++++++++++--------- hw/block/nvme.h | 104 +++++++++++-- hw/block/trace-events | 3 + 3 files changed, 360 insertions(+), 77 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index 3309f8e0eac1..d836319f068a 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -74,6 +74,7 @@ } while (0) static void nvme_process_sq(void *opaque); +static void nvme_aio_cb(void *opaque, int ret); static inline void *nvme_addr_to_cmb(NvmeCtrl *n, hwaddr addr) { @@ -178,6 +179,9 @@ static void nvme_req_clear(NvmeRequest *req) { req->ns = NULL; memset(&req->cqe, 0x0, sizeof(req->cqe)); + req->status = NVME_SUCCESS; + req->slba = req->nlb = 0x0; + req->cb = req->cb_arg = NULL; if (req->qsg.sg) { qemu_sglist_destroy(&req->qsg); @@ -399,6 +403,91 @@ static uint16_t nvme_map(NvmeCtrl *n, size_t len, NvmeRequest *req) return nvme_map_prp(n, &req->qsg, &req->iov, prp1, prp2, len, req); } +static void nvme_aio_destroy(NvmeAIO *aio) +{ + g_free(aio); +} + +/* + * Submit an asynchronous I/O operation as described by the given NvmeAIO. 
This + * function takes care of accounting and special handling of reads and writes + * going to the Controller Memory Buffer. + */ +static void nvme_submit_aio(NvmeAIO *aio) +{ + BlockBackend *blk = aio->blk; + BlockAcctCookie *acct = &aio->acct; + BlockAcctStats *stats = blk_get_stats(blk); + + bool is_write; + + switch (aio->opc) { + case NVME_AIO_OPC_NONE: + break; + + case NVME_AIO_OPC_FLUSH: + block_acct_start(stats, acct, 0, BLOCK_ACCT_FLUSH); + aio->aiocb = blk_aio_flush(blk, nvme_aio_cb, aio); + break; + + case NVME_AIO_OPC_WRITE_ZEROES: + block_acct_start(stats, acct, aio->len, BLOCK_ACCT_WRITE); + aio->aiocb = blk_aio_pwrite_zeroes(blk, aio->offset, aio->len, + BDRV_REQ_MAY_UNMAP, nvme_aio_cb, + aio); + break; + + case NVME_AIO_OPC_READ: + case NVME_AIO_OPC_WRITE: + is_write = (aio->opc == NVME_AIO_OPC_WRITE); + + block_acct_start(stats, acct, aio->len, + is_write ? BLOCK_ACCT_WRITE : BLOCK_ACCT_READ); + + if (aio->flags & NVME_AIO_DMA) { + QEMUSGList *qsg = (QEMUSGList *)aio->payload; + + if (is_write) { + aio->aiocb = dma_blk_write(blk, qsg, aio->offset, + BDRV_SECTOR_SIZE, nvme_aio_cb, aio); + } else { + aio->aiocb = dma_blk_read(blk, qsg, aio->offset, + BDRV_SECTOR_SIZE, nvme_aio_cb, aio); + } + } else { + QEMUIOVector *iov = (QEMUIOVector *)aio->payload; + + if (is_write) { + aio->aiocb = blk_aio_pwritev(blk, aio->offset, iov, 0, + nvme_aio_cb, aio); + } else { + aio->aiocb = blk_aio_preadv(blk, aio->offset, iov, 0, + nvme_aio_cb, aio); + } + } + + break; + } +} + +/* + * Register an asynchronous I/O operation with the NvmeRequest. The NvmeRequest + * will not complete until all registered AIO's have completed and the + * aio_tailq goes empty. + */ +static inline void nvme_req_add_aio(NvmeRequest *req, NvmeAIO *aio) +{ + assert(req); + + trace_pci_nvme_req_add_aio(nvme_cid(req), aio, blk_name(aio->blk), + aio->offset, aio->len, + nvme_aio_opc_str(aio), req); + + QTAILQ_INSERT_TAIL(&req->aio_tailq, aio, tailq_entry); + + nvme_submit_aio(aio); +} + static void nvme_post_cqes(void *opaque) { NvmeCQueue *cq = opaque; @@ -435,6 +524,7 @@ static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req) assert(cq->cqid == req->sq->cqid); trace_pci_nvme_enqueue_req_completion(nvme_cid(req), cq->cqid, req->status); + QTAILQ_REMOVE(&req->sq->out_req_list, req, entry); QTAILQ_INSERT_TAIL(&cq->req_list, req, entry); timer_mod(cq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500); @@ -542,31 +632,128 @@ static inline uint16_t nvme_check_bounds(NvmeCtrl *n, NvmeNamespace *ns, return NVME_SUCCESS; } -static void nvme_rw_cb(void *opaque, int ret) +static void nvme_rw_cb(NvmeRequest *req, void *opaque) { - NvmeRequest *req = opaque; NvmeSQueue *sq = req->sq; NvmeCtrl *n = sq->ctrl; NvmeCQueue *cq = n->cq[sq->cqid]; trace_pci_nvme_rw_cb(nvme_cid(req)); - if (!ret) { - block_acct_done(blk_get_stats(n->conf.blk), &req->acct); - req->status = NVME_SUCCESS; - } else { - block_acct_failed(blk_get_stats(n->conf.blk), &req->acct); - req->status = NVME_INTERNAL_DEV_ERROR; + nvme_enqueue_req_completion(cq, req); +} + +static void nvme_aio_cb(void *opaque, int ret) +{ + NvmeAIO *aio = opaque; + NvmeRequest *req = aio->req; + + BlockBackend *blk = aio->blk; + BlockAcctCookie *acct = &aio->acct; + BlockAcctStats *stats = blk_get_stats(blk); + + Error *local_err = NULL; + + trace_pci_nvme_aio_cb(nvme_cid(req), aio, blk_name(blk), aio->offset, + nvme_aio_opc_str(aio), req); + + if (req) { + QTAILQ_REMOVE(&req->aio_tailq, aio, tailq_entry); } - nvme_enqueue_req_completion(cq, req); + if (!ret) { 
 static void nvme_post_cqes(void *opaque)
 {
     NvmeCQueue *cq = opaque;
@@ -435,6 +524,7 @@ static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req)
     assert(cq->cqid == req->sq->cqid);
     trace_pci_nvme_enqueue_req_completion(nvme_cid(req), cq->cqid,
                                           req->status);
+
     QTAILQ_REMOVE(&req->sq->out_req_list, req, entry);
     QTAILQ_INSERT_TAIL(&cq->req_list, req, entry);
     timer_mod(cq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
@@ -542,31 +632,128 @@ static inline uint16_t nvme_check_bounds(NvmeCtrl *n, NvmeNamespace *ns,
     return NVME_SUCCESS;
 }
 
-static void nvme_rw_cb(void *opaque, int ret)
+static void nvme_rw_cb(NvmeRequest *req, void *opaque)
 {
-    NvmeRequest *req = opaque;
     NvmeSQueue *sq = req->sq;
     NvmeCtrl *n = sq->ctrl;
     NvmeCQueue *cq = n->cq[sq->cqid];
 
     trace_pci_nvme_rw_cb(nvme_cid(req));
 
-    if (!ret) {
-        block_acct_done(blk_get_stats(n->conf.blk), &req->acct);
-        req->status = NVME_SUCCESS;
-    } else {
-        block_acct_failed(blk_get_stats(n->conf.blk), &req->acct);
-        req->status = NVME_INTERNAL_DEV_ERROR;
+    nvme_enqueue_req_completion(cq, req);
+}
+
+static void nvme_aio_cb(void *opaque, int ret)
+{
+    NvmeAIO *aio = opaque;
+    NvmeRequest *req = aio->req;
+
+    BlockBackend *blk = aio->blk;
+    BlockAcctCookie *acct = &aio->acct;
+    BlockAcctStats *stats = blk_get_stats(blk);
+
+    Error *local_err = NULL;
+
+    trace_pci_nvme_aio_cb(nvme_cid(req), aio, blk_name(blk), aio->offset,
+                          nvme_aio_opc_str(aio), req);
+
+    if (req) {
+        QTAILQ_REMOVE(&req->aio_tailq, aio, tailq_entry);
     }
 
-    nvme_enqueue_req_completion(cq, req);
+    if (!ret) {
+        block_acct_done(stats, acct);
+    } else {
+        block_acct_failed(stats, acct);
+
+        if (req) {
+            uint16_t status;
+
+            switch (aio->opc) {
+            case NVME_AIO_OPC_READ:
+                status = NVME_UNRECOVERED_READ;
+                break;
+            case NVME_AIO_OPC_WRITE:
+            case NVME_AIO_OPC_WRITE_ZEROES:
+                status = NVME_WRITE_FAULT;
+                break;
+            default:
+                status = NVME_INTERNAL_DEV_ERROR;
+                break;
+            }
+
+            trace_pci_nvme_err_aio(nvme_cid(req), aio, blk_name(blk),
+                                   aio->offset, nvme_aio_opc_str(aio), req,
+                                   status);
+
+            error_setg_errno(&local_err, -ret, "aio failed");
+            error_report_err(local_err);
+
+            /*
+             * An Internal Error trumps all other errors. For other errors,
+             * only set the first error encountered. Any additional errors will
+             * be recorded in the error information log page.
+             */
+            if (!req->status ||
+                (status & 0xfff) == NVME_INTERNAL_DEV_ERROR) {
+                req->status = status;
+            }
+        }
+    }
+
+    if (aio->cb) {
+        aio->cb(aio, aio->cb_arg, ret);
+    }
+
+    if (req && QTAILQ_EMPTY(&req->aio_tailq)) {
+        if (req->cb) {
+            req->cb(req, req->cb_arg);
+        } else {
+            NvmeSQueue *sq = req->sq;
+            NvmeCtrl *n = sq->ctrl;
+            NvmeCQueue *cq = n->cq[sq->cqid];
+
+            nvme_enqueue_req_completion(cq, req);
+        }
+    }
+
+    nvme_aio_destroy(aio);
+}
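The completion rule implemented here -- an NvmeRequest only finishes once its aio_tailq has drained -- can be modeled outside QEMU with the plain BSD tailq macros (QEMU's QTAILQ is the same idea). A minimal, self-contained sketch; all names are hypothetical:

    #include <stdio.h>
    #include <sys/queue.h>

    struct aio {
        TAILQ_ENTRY(aio) entry;
    };

    struct request {
        TAILQ_HEAD(, aio) aio_tailq;
    };

    /* Mirrors the tail of nvme_aio_cb(): drop the finished AIO and complete
     * the request only once nothing is left outstanding. */
    static void aio_done(struct request *req, struct aio *aio)
    {
        TAILQ_REMOVE(&req->aio_tailq, aio, entry);

        if (TAILQ_EMPTY(&req->aio_tailq)) {
            printf("request complete\n");
        }
    }

    int main(void)
    {
        struct request req;
        struct aio a, b;

        TAILQ_INIT(&req.aio_tailq);
        TAILQ_INSERT_TAIL(&req.aio_tailq, &a, entry);
        TAILQ_INSERT_TAIL(&req.aio_tailq, &b, entry);

        aio_done(&req, &a); /* still one AIO outstanding; nothing printed */
        aio_done(&req, &b); /* tailq now empty; "request complete" */

        return 0;
    }
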
+
+static void nvme_aio_rw(NvmeNamespace *ns, BlockBackend *blk, NvmeAIOOp opc,
+                        NvmeRequest *req)
+{
+    NvmeAIO *aio = g_new(NvmeAIO, 1);
+
+    *aio = (NvmeAIO) {
+        .opc = opc,
+        .blk = blk,
+        .offset = req->slba << nvme_ns_lbads(ns),
+        .req = req,
+    };
+
+    if (req->qsg.sg) {
+        aio->payload = &req->qsg;
+        aio->len = req->qsg.size;
+        aio->flags |= NVME_AIO_DMA;
+    } else {
+        aio->payload = &req->iov;
+        aio->len = req->iov.size;
+    }
+
+    nvme_req_add_aio(req, aio);
 }
 
 static uint16_t nvme_flush(NvmeCtrl *n, NvmeRequest *req)
 {
-    block_acct_start(blk_get_stats(n->conf.blk), &req->acct, 0,
-                     BLOCK_ACCT_FLUSH);
-    req->aiocb = blk_aio_flush(n->conf.blk, nvme_rw_cb, req);
+    NvmeAIO *aio = g_new0(NvmeAIO, 1);
+
+    *aio = (NvmeAIO) {
+        .opc = NVME_AIO_OPC_FLUSH,
+        .blk = n->conf.blk,
+        .req = req,
+    };
+
+    nvme_req_add_aio(req, aio);
 
     return NVME_NO_COMPLETE;
 }
 
@@ -575,26 +762,39 @@ static uint16_t nvme_write_zeroes(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
     NvmeNamespace *ns = req->ns;
-    const uint8_t lba_index = NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas);
-    const uint8_t data_shift = ns->id_ns.lbaf[lba_index].ds;
-    uint64_t slba = le64_to_cpu(rw->slba);
-    uint32_t nlb = le16_to_cpu(rw->nlb) + 1;
-    uint64_t offset = slba << data_shift;
-    uint32_t count = nlb << data_shift;
+    NvmeAIO *aio;
+
+    int64_t offset;
+    size_t count;
     uint16_t status;
 
-    trace_pci_nvme_write_zeroes(nvme_cid(req), slba, nlb);
+    req->slba = le64_to_cpu(rw->slba);
+    req->nlb = le16_to_cpu(rw->nlb) + 1;
 
-    status = nvme_check_bounds(n, ns, slba, nlb);
+    trace_pci_nvme_write_zeroes(nvme_cid(req), req->slba, req->nlb);
+
+    status = nvme_check_bounds(n, ns, req->slba, req->nlb);
     if (status) {
-        trace_pci_nvme_err_invalid_lba_range(slba, nlb, ns->id_ns.nsze);
+        trace_pci_nvme_err_invalid_lba_range(req->slba, req->nlb,
+                                             ns->id_ns.nsze);
         return status;
     }
 
-    block_acct_start(blk_get_stats(n->conf.blk), &req->acct, 0,
-                     BLOCK_ACCT_WRITE);
-    req->aiocb = blk_aio_pwrite_zeroes(n->conf.blk, offset, count,
-                                       BDRV_REQ_MAY_UNMAP, nvme_rw_cb, req);
+    offset = req->slba << nvme_ns_lbads(ns);
+    count = req->nlb << nvme_ns_lbads(ns);
+
+    aio = g_new0(NvmeAIO, 1);
+
+    *aio = (NvmeAIO) {
+        .opc = NVME_AIO_OPC_WRITE_ZEROES,
+        .blk = n->conf.blk,
+        .offset = offset,
+        .len = count,
+        .req = req,
+    };
+
+    nvme_req_add_aio(req, aio);
 
     return NVME_NO_COMPLETE;
 }
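Both nvme_aio_rw() and nvme_write_zeroes() above convert LBAs to byte offsets by shifting with nvme_ns_lbads(), the power-of-two data size of the namespace's current LBA format. A self-contained example of the arithmetic (the values are illustrative):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t lbads = 9;                       /* 2^9 = 512-byte LBAs */
        uint64_t slba = 8;                       /* starting LBA from the command */
        uint32_t nlb = 16;                       /* number of logical blocks */

        uint64_t offset = slba << lbads;         /* 8 * 512 = 4096 bytes */
        uint64_t count = (uint64_t)nlb << lbads; /* 16 * 512 = 8192 bytes */

        assert(offset == 4096);
        assert(count == 8192);

        return 0;
    }
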
@@ -602,57 +802,52 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
     NvmeNamespace *ns = req->ns;
-    uint32_t nlb = le32_to_cpu(rw->nlb) + 1;
-    uint64_t slba = le64_to_cpu(rw->slba);
-    uint8_t lba_index = NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas);
-    uint8_t data_shift = ns->id_ns.lbaf[lba_index].ds;
-    uint64_t data_size = (uint64_t)nlb << data_shift;
-    uint64_t data_offset = slba << data_shift;
-    int is_write = rw->opcode == NVME_CMD_WRITE ? 1 : 0;
-    enum BlockAcctType acct = is_write ? BLOCK_ACCT_WRITE : BLOCK_ACCT_READ;
-    uint16_t status;
+    uint32_t len;
+    int status;
 
-    trace_pci_nvme_rw(is_write ? "write" : "read", nlb, data_size, slba);
+    enum BlockAcctType acct = BLOCK_ACCT_READ;
+    NvmeAIOOp opc = NVME_AIO_OPC_READ;
 
-    status = nvme_check_mdts(n, data_size);
+    if (nvme_req_is_write(req)) {
+        acct = BLOCK_ACCT_WRITE;
+        opc = NVME_AIO_OPC_WRITE;
+    }
+
+    req->nlb = le16_to_cpu(rw->nlb) + 1;
+    req->slba = le64_to_cpu(rw->slba);
+
+    len = req->nlb << nvme_ns_lbads(ns);
+
+    trace_pci_nvme_rw(nvme_req_is_write(req) ? "write" : "read", req->nlb,
+                      len, req->slba);
+
+    status = nvme_check_mdts(n, len);
     if (status) {
-        trace_pci_nvme_err_mdts(nvme_cid(req), data_size);
-        block_acct_invalid(blk_get_stats(n->conf.blk), acct);
-        return status;
+        trace_pci_nvme_err_mdts(nvme_cid(req), len);
+        goto invalid;
     }
 
-    status = nvme_check_bounds(n, ns, slba, nlb);
+    status = nvme_check_bounds(n, ns, req->slba, req->nlb);
     if (status) {
-        trace_pci_nvme_err_invalid_lba_range(slba, nlb, ns->id_ns.nsze);
-        block_acct_invalid(blk_get_stats(n->conf.blk), acct);
-        return status;
+        trace_pci_nvme_err_invalid_lba_range(req->slba, req->nlb,
+                                             ns->id_ns.nsze);
+        goto invalid;
     }
 
-    if (nvme_map(n, data_size, req)) {
-        block_acct_invalid(blk_get_stats(n->conf.blk), acct);
-        return NVME_INVALID_FIELD | NVME_DNR;
+    status = nvme_map(n, len, req);
+    if (status) {
+        goto invalid;
     }
 
-    if (req->qsg.nsg > 0) {
-        block_acct_start(blk_get_stats(n->conf.blk), &req->acct, req->qsg.size,
-                         acct);
-        req->aiocb = is_write ?
-            dma_blk_write(n->conf.blk, &req->qsg, data_offset, BDRV_SECTOR_SIZE,
-                          nvme_rw_cb, req) :
-            dma_blk_read(n->conf.blk, &req->qsg, data_offset, BDRV_SECTOR_SIZE,
-                         nvme_rw_cb, req);
-    } else {
-        block_acct_start(blk_get_stats(n->conf.blk), &req->acct, req->iov.size,
-                         acct);
-        req->aiocb = is_write ?
-            blk_aio_pwritev(n->conf.blk, data_offset, &req->iov, 0, nvme_rw_cb,
-                            req) :
-            blk_aio_preadv(n->conf.blk, data_offset, &req->iov, 0, nvme_rw_cb,
-                           req);
-    }
+    nvme_aio_rw(ns, n->conf.blk, opc, req);
+    nvme_req_set_cb(req, nvme_rw_cb, NULL);
 
     return NVME_NO_COMPLETE;
+
+invalid:
+    block_acct_invalid(blk_get_stats(n->conf.blk), acct);
+    return status;
 }
 
 static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeRequest *req)
@@ -699,6 +894,7 @@ static uint16_t nvme_del_sq(NvmeCtrl *n, NvmeRequest *req)
     NvmeRequest *r, *next;
     NvmeSQueue *sq;
     NvmeCQueue *cq;
+    NvmeAIO *aio;
     uint16_t qid = le16_to_cpu(c->qid);
 
     if (unlikely(!qid || nvme_check_sqid(n, qid))) {
@@ -711,8 +907,11 @@ static uint16_t nvme_del_sq(NvmeCtrl *n, NvmeRequest *req)
     sq = n->sq[qid];
     while (!QTAILQ_EMPTY(&sq->out_req_list)) {
         r = QTAILQ_FIRST(&sq->out_req_list);
-        assert(r->aiocb);
-        blk_aio_cancel(r->aiocb);
+        while (!QTAILQ_EMPTY(&r->aio_tailq)) {
+            aio = QTAILQ_FIRST(&r->aio_tailq);
+            assert(aio->aiocb);
+            blk_aio_cancel(aio->aiocb);
+        }
     }
     if (!nvme_check_cqid(n, sq->cqid)) {
         cq = n->cq[sq->cqid];
@@ -749,6 +948,7 @@ static void nvme_init_sq(NvmeSQueue *sq, NvmeCtrl *n, uint64_t dma_addr,
     QTAILQ_INIT(&sq->out_req_list);
     for (i = 0; i < sq->size; i++) {
         sq->io_req[i].sq = sq;
+        QTAILQ_INIT(&(sq->io_req[i].aio_tailq));
         QTAILQ_INSERT_TAIL(&(sq->req_list), &sq->io_req[i], entry);
     }
     sq->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, nvme_process_sq, sq);
diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index 454b15945bee..c75b13a77efd 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -19,18 +19,36 @@ typedef struct NvmeAsyncEvent {
     NvmeAerResult result;
 } NvmeAsyncEvent;
 
-typedef struct NvmeRequest {
-    struct NvmeSQueue       *sq;
-    struct NvmeNamespace    *ns;
-    BlockAIOCB              *aiocb;
-    uint16_t                status;
-    NvmeCqe                 cqe;
-    NvmeCmd                 cmd;
-    BlockAcctCookie         acct;
-    QEMUSGList              qsg;
-    QEMUIOVector            iov;
-    QTAILQ_ENTRY(NvmeRequest)entry;
-} NvmeRequest;
+typedef struct NvmeRequest NvmeRequest;
+typedef void NvmeRequestCompletionFunc(NvmeRequest *req, void *opaque);
+
+struct NvmeRequest {
+    struct NvmeSQueue    *sq;
+    struct NvmeNamespace *ns;
+
+    NvmeCqe  cqe;
+    NvmeCmd  cmd;
+    uint16_t status;
+
+    uint64_t slba;
+    uint32_t nlb;
+
+    QEMUSGList   qsg;
+    QEMUIOVector iov;
+
+    NvmeRequestCompletionFunc *cb;
+    void                      *cb_arg;
+
+    QTAILQ_HEAD(, NvmeAIO)    aio_tailq;
+    QTAILQ_ENTRY(NvmeRequest) entry;
+};
+
+static inline void nvme_req_set_cb(NvmeRequest *req,
+                                   NvmeRequestCompletionFunc *cb, void *cb_arg)
+{
+    req->cb = cb;
+    req->cb_arg = cb_arg;
+}
 
 typedef struct NvmeSQueue {
     struct NvmeCtrl *ctrl;
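nvme_req_set_cb() is how a command handler opts out of the default completion in nvme_aio_cb(): if req->cb is set it runs instead of nvme_enqueue_req_completion(), so the callback must finish the request itself, as nvme_rw_cb() does. A hypothetical sketch (QEMU context assumed; the function name is made up):

    /* Runs once, after the last AIO registered on the request has called
     * back; responsible for posting the CQE. */
    static void nvme_example_cb(NvmeRequest *req, void *opaque)
    {
        NvmeSQueue *sq = req->sq;
        NvmeCtrl *n = sq->ctrl;

        /* post-process here, then complete the request */
        nvme_enqueue_req_completion(n->cq[sq->cqid], req);
    }

    /* in a command handler, before returning NVME_NO_COMPLETE: */
    /*     nvme_req_set_cb(req, nvme_example_cb, NULL); */
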
@@ -77,6 +95,68 @@ static inline uint8_t nvme_ns_lbads(NvmeNamespace *ns)
     return nvme_ns_lbaf(ns)->ds;
 }
 
+typedef enum NvmeAIOOp {
+    NVME_AIO_OPC_NONE         = 0x0,
+    NVME_AIO_OPC_FLUSH        = 0x1,
+    NVME_AIO_OPC_READ         = 0x2,
+    NVME_AIO_OPC_WRITE        = 0x3,
+    NVME_AIO_OPC_WRITE_ZEROES = 0x4,
+} NvmeAIOOp;
+
+typedef enum NvmeAIOFlags {
+    NVME_AIO_DMA = 1 << 0,
+} NvmeAIOFlags;
+
+typedef struct NvmeAIO NvmeAIO;
+typedef void NvmeAIOCompletionFunc(NvmeAIO *aio, void *opaque, int ret);
+
+struct NvmeAIO {
+    NvmeRequest *req;
+
+    NvmeAIOOp       opc;
+    int64_t         offset;
+    size_t          len;
+    BlockBackend    *blk;
+    BlockAIOCB      *aiocb;
+    BlockAcctCookie acct;
+
+    NvmeAIOCompletionFunc *cb;
+    void                  *cb_arg;
+
+    int flags;
+    void *payload;
+
+    QTAILQ_ENTRY(NvmeAIO) tailq_entry;
+};
+
+static inline const char *nvme_aio_opc_str(NvmeAIO *aio)
+{
+    switch (aio->opc) {
+    case NVME_AIO_OPC_NONE:         return "NVME_AIO_OP_NONE";
+    case NVME_AIO_OPC_FLUSH:        return "NVME_AIO_OP_FLUSH";
+    case NVME_AIO_OPC_READ:         return "NVME_AIO_OP_READ";
+    case NVME_AIO_OPC_WRITE:        return "NVME_AIO_OP_WRITE";
+    case NVME_AIO_OPC_WRITE_ZEROES: return "NVME_AIO_OP_WRITE_ZEROES";
+    default:                        return "NVME_AIO_OP_UNKNOWN";
+    }
+}
+
+static inline bool nvme_req_is_write(NvmeRequest *req)
+{
+    switch (req->cmd.opcode) {
+    case NVME_CMD_WRITE:
+    case NVME_CMD_WRITE_ZEROES:
+        return true;
+    default:
+        return false;
+    }
+}
+
+static inline bool nvme_req_is_dma(NvmeRequest *req)
+{
+    return req->qsg.sg != NULL;
+}
+
 #define TYPE_NVME "nvme"
 #define NVME(obj) \
         OBJECT_CHECK(NvmeCtrl, (obj), TYPE_NVME)
diff --git a/hw/block/trace-events b/hw/block/trace-events
index 5d7d4679650b..1b2c431e569e 100644
--- a/hw/block/trace-events
+++ b/hw/block/trace-events
@@ -36,6 +36,8 @@ pci_nvme_dma_read(uint64_t prp1, uint64_t prp2) "DMA read, prp1=0x%"PRIx64" prp2
 pci_nvme_map_addr(uint64_t addr, uint64_t len) "addr 0x%"PRIx64" len %"PRIu64""
 pci_nvme_map_addr_cmb(uint64_t addr, uint64_t len) "addr 0x%"PRIx64" len %"PRIu64""
 pci_nvme_map_prp(uint16_t cid, uint64_t trans_len, uint32_t len, uint64_t prp1, uint64_t prp2, int num_prps) "cid %"PRIu16" trans_len %"PRIu64" len %"PRIu32" prp1 0x%"PRIx64" prp2 0x%"PRIx64" num_prps %d"
+pci_nvme_req_add_aio(uint16_t cid, void *aio, const char *blkname, uint64_t offset, uint64_t count, const char *opc, void *req) "cid %"PRIu16" aio %p blk \"%s\" offset %"PRIu64" count %"PRIu64" opc \"%s\" req %p"
+pci_nvme_aio_cb(uint16_t cid, void *aio, const char *blkname, uint64_t offset, const char *opc, void *req) "cid %"PRIu16" aio %p blk \"%s\" offset %"PRIu64" opc \"%s\" req %p"
 pci_nvme_io_cmd(uint16_t cid, uint32_t nsid, uint16_t sqid, uint8_t opcode) "cid %"PRIu16" nsid %"PRIu32" sqid %"PRIu16" opc 0x%"PRIx8""
 pci_nvme_admin_cmd(uint16_t cid, uint16_t sqid, uint8_t opcode) "cid %"PRIu16" sqid %"PRIu16" opc 0x%"PRIx8""
 pci_nvme_rw(const char *verb, uint32_t blk_count, uint64_t byte_count, uint64_t lba) "%s %"PRIu32" blocks (%"PRIu64" bytes) from LBA %"PRIu64""
@@ -86,6 +88,7 @@ pci_nvme_mmio_shutdown_cleared(void) "shutdown bit cleared"
 
 # nvme traces for error conditions
 pci_nvme_err_mdts(uint16_t cid, size_t len) "cid %"PRIu16" len %"PRIu64""
+pci_nvme_err_aio(uint16_t cid, void *aio, const char *blkname, uint64_t offset, const char *opc, void *req, uint16_t status) "cid %"PRIu16" aio %p blk \"%s\" offset %"PRIu64" opc \"%s\" req %p status 0x%"PRIx16""
 pci_nvme_err_invalid_dma(void) "PRP/SGL is too small for transfer size"
 pci_nvme_err_invalid_prplist_ent(uint64_t prplist) "PRP list entry is null or not page aligned: 0x%"PRIx64""
 pci_nvme_err_invalid_prp2_align(uint64_t prp2) "PRP2 is not page aligned: 0x%"PRIx64""
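The nvme_req_is_write() helper added above derives the transfer direction from the command opcode instead of carrying a separate flag. The logic can be checked in isolation; the opcode values below are the ones assigned by the NVMe specification:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Standalone model of nvme_req_is_write(); NVMe I/O command opcodes:
     * 0x01 Write, 0x02 Read, 0x08 Write Zeroes. */
    enum {
        NVME_CMD_WRITE        = 0x01,
        NVME_CMD_READ         = 0x02,
        NVME_CMD_WRITE_ZEROES = 0x08,
    };

    static bool is_write(uint8_t opcode)
    {
        switch (opcode) {
        case NVME_CMD_WRITE:
        case NVME_CMD_WRITE_ZEROES:
            return true;
        default:
            return false;
        }
    }

    int main(void)
    {
        assert(is_write(NVME_CMD_WRITE));
        assert(is_write(NVME_CMD_WRITE_ZEROES));
        assert(!is_write(NVME_CMD_READ));
        return 0;
    }
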
From patchwork Mon Jun 29 19:50:16 2020
From: Klaus Jensen
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Klaus Jensen, qemu-devel@nongnu.org, Max Reitz, Keith Busch, Maxim Levitsky
Subject: [PATCH 16/17] hw/block/nvme: add nvme_check_rw helper
Date: Mon, 29 Jun 2020 21:50:16 +0200
Message-Id: <20200629195017.1217056-17-its@irrelevant.dk>
In-Reply-To: <20200629195017.1217056-1-its@irrelevant.dk>
References: <20200629195017.1217056-1-its@irrelevant.dk>

From: Klaus Jensen

Move various request checks to a separate function.

Signed-off-by: Klaus Jensen
---
 hw/block/nvme.c | 32 +++++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 9 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index d836319f068a..ec08841f74b6 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -632,6 +632,28 @@ static inline uint16_t nvme_check_bounds(NvmeCtrl *n, NvmeNamespace *ns,
     return NVME_SUCCESS;
 }
 
+static uint16_t nvme_check_rw(NvmeCtrl *n, NvmeRequest *req)
+{
+    NvmeNamespace *ns = req->ns;
+    size_t len = req->nlb << nvme_ns_lbads(ns);
+    uint16_t status;
+
+    status = nvme_check_mdts(n, len);
+    if (status) {
+        trace_pci_nvme_err_mdts(nvme_cid(req), len);
+        return status;
+    }
+
+    status = nvme_check_bounds(n, ns, req->slba, req->nlb);
+    if (status) {
+        trace_pci_nvme_err_invalid_lba_range(req->slba, req->nlb,
+                                             ns->id_ns.nsze);
+        return status;
+    }
+
+    return NVME_SUCCESS;
+}
+
 static void nvme_rw_cb(NvmeRequest *req, void *opaque)
 {
     NvmeSQueue *sq = req->sq;
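The helper keeps the check order -- MDTS first, then bounds -- so callers see the same status the two individual checks would have produced. A self-contained model of that pattern (the status values and limits are stand-ins, not the driver's real NVMe status codes):

    #include <assert.h>
    #include <stdint.h>

    enum { OK = 0, E_MDTS = 1, E_BOUNDS = 2 };

    static uint16_t check_mdts(uint64_t len, uint64_t max_len)
    {
        return len > max_len ? E_MDTS : OK;
    }

    static uint16_t check_bounds(uint64_t slba, uint32_t nlb, uint64_t nsze)
    {
        return slba + nlb > nsze ? E_BOUNDS : OK;
    }

    /* Run each validation in order; the first non-zero status wins. */
    static uint16_t check_rw(uint64_t slba, uint32_t nlb, uint8_t lbads,
                             uint64_t max_len, uint64_t nsze)
    {
        uint16_t status = check_mdts((uint64_t)nlb << lbads, max_len);
        if (status) {
            return status;
        }

        return check_bounds(slba, nlb, nsze);
    }

    int main(void)
    {
        assert(check_rw(0, 8, 9, 1 << 20, 1024) == OK);
        assert(check_rw(0, 4096, 9, 1 << 20, 1 << 30) == E_MDTS);  /* 2 MiB > 1 MiB */
        assert(check_rw(1020, 8, 9, 1 << 20, 1024) == E_BOUNDS);   /* 1028 > 1024 */
        return 0;
    }
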
"write" : "read", req->nlb, len, req->slba); - status = nvme_check_mdts(n, len); + status = nvme_check_rw(n, req); if (status) { - trace_pci_nvme_err_mdts(nvme_cid(req), len); - goto invalid; - } - - status = nvme_check_bounds(n, ns, req->slba, req->nlb); - if (status) { - trace_pci_nvme_err_invalid_lba_range(req->slba, req->nlb, - ns->id_ns.nsze); goto invalid; }