From patchwork Mon Feb 21 08:47:57 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 544852
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Chris Leech, Sagi Grimberg, Sasha Levin
Subject: [PATCH 5.15 045/196] nvme-tcp: fix possible use-after-free in transport error_recovery work
Date: Mon, 21 Feb 2022 09:47:57 +0100
Message-Id: <20220221084932.436230442@linuxfoundation.org>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220221084930.872957717@linuxfoundation.org>
References: <20220221084930.872957717@linuxfoundation.org>
User-Agent: quilt/0.66
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Sagi Grimberg

[ Upstream commit ff9fc7ebf5c06de1ef72a69f9b1ab40af8b07f9e ]

nvme_tcp_submit_async_event_work checks the ctrl and queue state before
preparing the AER command and scheduling io_work, but that check alone is
not reliable. To fully close the race, the error recovery work must flush
async_event_work after setting the ctrl state to RESETTING and before
continuing to destroy the admin queue, so that there is no race between
.submit_async_event and the error recovery handler itself changing the
ctrl state.

Tested-by: Chris Leech
Signed-off-by: Sagi Grimberg
Signed-off-by: Sasha Levin
---
 drivers/nvme/host/tcp.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index efa9037da53c9..ef65d24639c44 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2105,6 +2105,7 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
 	struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;
 
 	nvme_stop_keep_alive(ctrl);
+	flush_work(&ctrl->async_event_work);
 	nvme_tcp_teardown_io_queues(ctrl, false);
 	/* unquiesce to fail fast pending requests */
 	nvme_start_queues(ctrl);
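
[ Editorial note: the following is a minimal illustrative sketch of the two
paths whose ordering the added flush_work() enforces. It is not the verbatim
drivers/nvme/host/tcp.c code; the NVME_CTRL_LIVE state check and the two
helper functions are assumptions based on the commit message, and only the
error-recovery ordering is taken from the diff above. ]

/* Async-event side: checks controller state before preparing the AER.
 * The check on its own is racy -- the state may change right after it is
 * read -- which is why error recovery must flush this work. */
static void submit_async_event_side(struct nvme_ctrl *ctrl)	/* hypothetical helper */
{
	if (ctrl->state != NVME_CTRL_LIVE)
		return;
	/* ... prepare the AER command and schedule io_work ... */
}

/* Error-recovery side with the fix applied; the ctrl state has already
 * been set to RESETTING by the time this runs. */
static void error_recovery_side(struct nvme_ctrl *ctrl)		/* hypothetical wrapper */
{
	nvme_stop_keep_alive(ctrl);
	/* Wait for any in-flight async_event_work.  Once it completes, a
	 * stale pre-RESETTING state can no longer be acted on, so the
	 * queues it might touch can be torn down without a use-after-free. */
	flush_work(&ctrl->async_event_work);
	nvme_tcp_teardown_io_queues(ctrl, false);
	/* unquiesce to fail fast pending requests */
	nvme_start_queues(ctrl);
	/* ... admin queue teardown follows ... */
}

The flush acts as a barrier: any async_event_work instance that observed a
pre-RESETTING state finishes before the queues it could submit to are
destroyed.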