From patchwork Wed Sep 27 03:35:57 2023
X-Patchwork-Submitter: Peter Wang (王信友)
X-Patchwork-Id: 727019
X-Mailing-List: linux-scsi@vger.kernel.org
From: <peter.wang@mediatek.com>
Subject: [PATCH v5] ufs: core: wlun send SSU timeout recovery
Date: Wed, 27 Sep 2023 11:35:57 +0800
Message-ID: <20230927033557.13801-1-peter.wang@mediatek.com>
MIME-Version: 1.0
From: Peter Wang <peter.wang@mediatek.com>

When an SSU (START STOP UNIT) command sent during runtime PM resume
times out, the SCSI core invokes eh_host_reset_handler, whose UFS hook
ufshcd_eh_host_reset_handler schedules eh_work and then blocks in
flush_work(&hba->eh_work). However, ufshcd_err_handler itself is blocked
waiting for runtime PM resume to complete, so the two contexts deadlock.
Do link recovery only in this case.

Below is the IO hang stack dump from kernel 6.1:

kworker/4:0     D
 __switch_to+0x180/0x344
 __schedule+0x5ec/0xa14
 schedule+0x78/0xe0
 schedule_timeout+0xb0/0x15c
 io_schedule_timeout+0x48/0x70
 do_wait_for_common+0x108/0x19c
 wait_for_completion_io_timeout+0x50/0x78
 blk_execute_rq+0x1b8/0x218
 scsi_execute_cmd+0x148/0x238
 ufshcd_set_dev_pwr_mode+0xe8/0x244
 __ufshcd_wl_resume+0x1e0/0x45c
 ufshcd_wl_runtime_resume+0x3c/0x174
 scsi_runtime_resume+0x7c/0xc8
 __rpm_callback+0xa0/0x410
 rpm_resume+0x43c/0x67c
 __rpm_callback+0x1f0/0x410
 rpm_resume+0x460/0x67c
 pm_runtime_work+0xa4/0xac
 process_one_work+0x208/0x598
 worker_thread+0x228/0x438
 kthread+0x104/0x1d4
 ret_from_fork+0x10/0x20

scsi_eh_0       D
 __switch_to+0x180/0x344
 __schedule+0x5ec/0xa14
 schedule+0x78/0xe0
 schedule_timeout+0x44/0x15c
 do_wait_for_common+0x108/0x19c
 wait_for_completion+0x48/0x64
 __flush_work+0x260/0x2d0
 flush_work+0x10/0x20
 ufshcd_eh_host_reset_handler+0x88/0xcc
 scsi_try_host_reset+0x48/0xe0
 scsi_eh_ready_devs+0x934/0xa40
 scsi_error_handler+0x168/0x374
 kthread+0x104/0x1d4
 ret_from_fork+0x10/0x20

kworker/u16:5   D
 __switch_to+0x180/0x344
 __schedule+0x5ec/0xa14
 schedule+0x78/0xe0
 rpm_resume+0x114/0x67c
 __pm_runtime_resume+0x70/0xb4
 ufshcd_err_handler+0x1a0/0xe68
 process_one_work+0x208/0x598
 worker_thread+0x228/0x438
 kthread+0x104/0x1d4
 ret_from_fork+0x10/0x20

Signed-off-by: Peter Wang <peter.wang@mediatek.com>
Reviewed-by: Bart Van Assche
Reviewed-by: Stanley Chu
---
 drivers/ufs/core/ufshcd.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index c2df07545f96..0619cefa092e 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -7716,6 +7716,20 @@ static int ufshcd_eh_host_reset_handler(struct scsi_cmnd *cmd)
 
 	hba = shost_priv(cmd->device->host);
 
+	/*
+	 * If a runtime PM SSU request timed out, scsi_error_handler is
+	 * stuck in this function waiting for flush_work(&hba->eh_work),
+	 * while ufshcd_err_handler (eh_work) is waiting for runtime PM
+	 * to become active again. Running ufshcd_link_recovery() here
+	 * instead of scheduling eh_work avoids that deadlock.
+	 */
+	if (hba->pm_op_in_progress) {
+		if (ufshcd_link_recovery(hba))
+			err = FAILED;
+
+		return err;
+	}
+
 	spin_lock_irqsave(hba->host->host_lock, flags);
 	hba->force_reset = true;
 	ufshcd_schedule_eh_work(hba);
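
To make the deadlock shape concrete, below is a minimal standalone C
model of the control-flow change above. It is an illustrative sketch,
not kernel code: struct ufs_hba_model, link_recovery(), and
schedule_and_flush_eh_work() are hypothetical stand-ins for
hba->pm_op_in_progress, ufshcd_link_recovery(), and the
ufshcd_schedule_eh_work()/flush_work(&hba->eh_work) pair.

#include <stdbool.h>
#include <stdio.h>

#define SUCCESS 0
#define FAILED  1

/* Stand-in for the parts of struct ufs_hba the handler reads. */
struct ufs_hba_model {
	bool pm_op_in_progress;	/* runtime PM suspend/resume is running */
	bool force_reset;
};

/* Stand-in for ufshcd_link_recovery(): synchronous, returns 0 on
 * success. It is safe in this context because it never waits on
 * eh_work. */
static int link_recovery(struct ufs_hba_model *hba)
{
	(void)hba;
	printf("recover the link directly; runtime PM context untouched\n");
	return 0;
}

/* Stand-in for ufshcd_schedule_eh_work() + flush_work(&hba->eh_work).
 * In the kernel, eh_work itself waits for runtime PM resume, so
 * flushing it while pm_op_in_progress is set waits forever: A waits
 * on B while B waits on A. */
static void schedule_and_flush_eh_work(struct ufs_hba_model *hba)
{
	(void)hba;
	printf("schedule eh_work and wait for it to finish\n");
}

static int eh_host_reset_handler(struct ufs_hba_model *hba)
{
	int err = SUCCESS;

	if (hba->pm_op_in_progress) {	/* SSU timed out inside resume */
		if (link_recovery(hba))
			err = FAILED;
		return err;		/* never touch eh_work here */
	}

	hba->force_reset = true;
	schedule_and_flush_eh_work(hba);
	return err;
}

int main(void)
{
	struct ufs_hba_model hba = { .pm_op_in_progress = true };

	return eh_host_reset_handler(&hba) == SUCCESS ? 0 : 1;
}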