From patchwork Wed Nov 1 07:38:23 2023
X-Patchwork-Submitter: Adrian Hunter
X-Patchwork-Id: 740284
From: Adrian Hunter
To: Ulf Hansson, Kornel Dulęba, Radoslaw Biernacki, Gwendal Grignou, Ritesh Harjani, Asutosh Das
Cc: Chaotian Jing, Aswath Govindraju, Bhavya Kapoor, Kamal Dasu, Al Cooper, Haibo Chen, Bhupesh Sharma, Shaik Sajida Bhanu, Sai Krishna Potthuri, Swati Agarwal, Victor Shih, Ben Chuang, Thierry Reding, Aniruddha Tvs Rao, Chun-Hung Wu, linux-mmc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/5] mmc: block: Do not lose cache flush during CQE error recovery
Date: Wed, 1 Nov 2023 09:38:23 +0200
Message-Id: <20231101073827.4772-2-adrian.hunter@intel.com>
In-Reply-To: <20231101073827.4772-1-adrian.hunter@intel.com>
References: <20231101073827.4772-1-adrian.hunter@intel.com>
X-Mailing-List: linux-mmc@vger.kernel.org

During CQE error recovery, error-free data commands get requeued if there is any data left to transfer, but non-data commands are completed even though they have not been processed. Requeue them instead.
Note that the only non-data command is cache flush, which means a cache flush would have been lost if one was queued at the time of CQE recovery.

Fixes: 1e8e55b67030 ("mmc: block: Add CQE support")
Cc: stable@vger.kernel.org
Signed-off-by: Adrian Hunter
---
 drivers/mmc/core/block.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 3a8f27c3e310..4a32b756b7d8 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1482,6 +1482,8 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req)
 			blk_mq_requeue_request(req, true);
 		else
 			__blk_mq_end_request(req, BLK_STS_OK);
+	} else if (mq->in_recovery) {
+		blk_mq_requeue_request(req, true);
 	} else {
 		blk_mq_end_request(req, BLK_STS_OK);
 	}
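
For readers outside the MMC core, a minimal standalone sketch of the completion-path decision this hunk changes follows. It models only the branch logic; cqe_req, has_data, bytes_left, and must_requeue are hypothetical stand-ins for the kernel's mrq->data, blk_update_request(), and mq->in_recovery bookkeeping, not real kernel APIs, and error handling is omitted.

#include <stdbool.h>
#include <stdio.h>

/* Simplified model of a completed CQE request (hypothetical type). */
struct cqe_req {
	bool has_data;    /* models mrq->data != NULL */
	bool in_recovery; /* models mq->in_recovery */
	int bytes_left;   /* models blk_update_request() reporting leftover data */
};

/* Returns true if the request must be requeued rather than completed. */
static bool must_requeue(const struct cqe_req *req)
{
	if (req->has_data)
		return req->bytes_left > 0; /* partial data transfer: requeue */
	/*
	 * The fix: a non-data command (in practice, cache flush) seen
	 * during recovery has not actually been processed, so requeue
	 * it too instead of completing it as if it had succeeded.
	 */
	return req->in_recovery;
}

int main(void)
{
	struct cqe_req flush_in_recovery = { .has_data = false, .in_recovery = true };
	struct cqe_req finished_read = { .has_data = true, .bytes_left = 0 };

	printf("flush during recovery: %s\n",
	       must_requeue(&flush_in_recovery) ? "requeue" : "complete");
	printf("fully transferred read: %s\n",
	       must_requeue(&finished_read) ? "requeue" : "complete");
	return 0;
}

Before the patch, the model's non-data path would have been an unconditional "complete", which is exactly how a flush queued at the moment recovery kicked in could be dropped.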