From patchwork Tue Apr 28 18:23:11 2020
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 226773
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Tony Asleson,
    Chaitanya Kulkarni, Sagi Grimberg, Keith Busch,
    Christoph Hellwig, Sasha Levin
Subject: [PATCH 5.4 017/168] nvme-tcp: fix possible crash in write_zeroes processing
Date: Tue, 28 Apr 2020 20:23:11 +0200
Message-Id: <20200428182233.867842327@linuxfoundation.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200428182231.704304409@linuxfoundation.org>
References: <20200428182231.704304409@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Sagi Grimberg

[ Upstream commit 25e5cb780e62bde432b401f312bb847edc78b432 ]

We cannot look at blk_rq_payload_bytes() without first checking that the
request has mappable physical segments (i.e. blk_rq_nr_phys_segments(rq)
!= 0), and only then take the request payload bytes. Not doing so caused
us to send a wrong SGL to the target, or even to dereference a
non-existent buffer if we actually got to the data send sequence (when
the data was in-capsule).
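
As a minimal sketch of the pattern this fix enforces (the helper name
below is hypothetical, for illustration only, not part of the driver):
commands such as Write Zeroes carry a length in the command itself but
have no data to map, so blk_rq_nr_phys_segments() must gate any use of
blk_rq_payload_bytes():

    #include <linux/blkdev.h>

    /*
     * Illustration only: return the request's payload length, but only
     * when the request actually has mappable physical segments.  A
     * Write Zeroes request reports a non-zero blk_rq_payload_bytes()
     * while blk_rq_nr_phys_segments() is 0, i.e. the length has no
     * data buffer behind it.
     */
    static inline unsigned int nvme_tcp_mappable_bytes(struct request *rq)
    {
    	if (!blk_rq_nr_phys_segments(rq))
    		return 0;	/* no segments -> nothing to send */
    	return blk_rq_payload_bytes(rq);
    }

This is the same check the patch below applies when computing
req->data_len in nvme_tcp_setup_cmd_pdu(), and why nvme_tcp_map_data()
now sets a null SGL when the request has no segments.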
Reported-by: Tony Asleson
Suggested-by: Chaitanya Kulkarni
Signed-off-by: Sagi Grimberg
Signed-off-by: Keith Busch
Signed-off-by: Christoph Hellwig
Signed-off-by: Sasha Levin
---
 drivers/nvme/host/tcp.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 244984420b41b..11e84ed4de361 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -164,16 +164,14 @@ static inline bool nvme_tcp_async_req(struct nvme_tcp_request *req)
 static inline bool nvme_tcp_has_inline_data(struct nvme_tcp_request *req)
 {
 	struct request *rq;
-	unsigned int bytes;
 
 	if (unlikely(nvme_tcp_async_req(req)))
 		return false; /* async events don't have a request */
 
 	rq = blk_mq_rq_from_pdu(req);
-	bytes = blk_rq_payload_bytes(rq);
 
-	return rq_data_dir(rq) == WRITE && bytes &&
-		bytes <= nvme_tcp_inline_data_size(req->queue);
+	return rq_data_dir(rq) == WRITE && req->data_len &&
+		req->data_len <= nvme_tcp_inline_data_size(req->queue);
 }
 
 static inline struct page *nvme_tcp_req_cur_page(struct nvme_tcp_request *req)
@@ -2090,7 +2088,9 @@ static blk_status_t nvme_tcp_map_data(struct nvme_tcp_queue *queue,
 
 	c->common.flags |= NVME_CMD_SGL_METABUF;
 
-	if (rq_data_dir(rq) == WRITE && req->data_len &&
+	if (!blk_rq_nr_phys_segments(rq))
+		nvme_tcp_set_sg_null(c);
+	else if (rq_data_dir(rq) == WRITE &&
 	    req->data_len <= nvme_tcp_inline_data_size(queue))
 		nvme_tcp_set_sg_inline(queue, c, req->data_len);
 	else
@@ -2117,7 +2117,8 @@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns,
 	req->data_sent = 0;
 	req->pdu_len = 0;
 	req->pdu_sent = 0;
-	req->data_len = blk_rq_payload_bytes(rq);
+	req->data_len = blk_rq_nr_phys_segments(rq) ?
+				blk_rq_payload_bytes(rq) : 0;
 	req->curr_bio = rq->bio;
 
 	if (rq_data_dir(rq) == WRITE &&