From patchwork Mon Aug  3 09:04:44 2020
X-Patchwork-Submitter: Can Guo
X-Patchwork-Id: 258029
From: Can Guo
To: asutoshd@codeaurora.org, nguyenb@codeaurora.org, hongwus@codeaurora.org,
    rnayak@codeaurora.org, linux-scsi@vger.kernel.org, kernel-team@android.com,
    saravanak@google.com, salyzyn@google.com, cang@codeaurora.org
Cc: Stanley Chu, Alim Akhtar, Avri Altman, "James E.J. Bottomley",
    "Martin K. Petersen", Matthias Brugger, Bean Huo, Bart Van Assche,
    linux-kernel@vger.kernel.org (open list),
    linux-arm-kernel@lists.infradead.org (moderated list:ARM/Mediatek SoC support),
    linux-mediatek@lists.infradead.org (moderated list:ARM/Mediatek SoC support)
Subject: [PATCH v9 9/9] scsi: ufs: Properly release resources if a task is aborted successfully
Date: Mon, 3 Aug 2020 02:04:44 -0700
Message-Id: <1596445485-19834-10-git-send-email-cang@codeaurora.org>
In-Reply-To: <1596445485-19834-1-git-send-email-cang@codeaurora.org>
References: <1596445485-19834-1-git-send-email-cang@codeaurora.org>
X-Mailing-List: linux-scsi@vger.kernel.org

In the current UFS task abort hook, namely ufshcd_abort(), if a task is
aborted successfully, the clock scaling busy time statistics are not updated
and, most importantly, clk_gating.active_reqs is not decreased. This leaves
clk_gating.active_reqs above zero forever, so clock gating never happens.
To fix it, instead of releasing the resources "manually", use the existing
function __ufshcd_transfer_req_compl().
This also eliminates the race of scsi_dma_unmap() here against the real
completion in the IRQ handler path.

Signed-off-by: Can Guo
CC: Stanley Chu
---
 drivers/scsi/ufs/ufshcd.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index d7d2758..9a48389 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -6635,11 +6635,8 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
 		goto out;
 	}
 
-	scsi_dma_unmap(cmd);
-
 	spin_lock_irqsave(host->host_lock, flags);
-	ufshcd_outstanding_req_clear(hba, tag);
-	hba->lrb[tag].cmd = NULL;
+	__ufshcd_transfer_req_compl(hba, (1UL << tag));
 	spin_unlock_irqrestore(host->host_lock, flags);
 
 out:
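
For context, below is a rough sketch of the per-tag cleanup that
__ufshcd_transfer_req_compl() takes care of and that the abort path above now
reuses instead of open-coding only a subset of it. This is an illustration,
not the upstream function body: the wrapper name is made up for the example,
result/tracing handling is omitted, and only the called helpers are the
existing ones in drivers/scsi/ufs/ufshcd.c.

/*
 * Illustrative sketch only (hypothetical wrapper, existing helpers):
 * the resource release that the old abort path performed only partially.
 */
static void example_release_aborted_req(struct ufs_hba *hba, int tag)
{
	struct ufshcd_lrb *lrbp = &hba->lrb[tag];
	struct scsi_cmnd *cmd = lrbp->cmd;

	if (cmd) {
		scsi_dma_unmap(cmd);		/* the old abort path did this ...         */
		lrbp->cmd = NULL;		/* ... and this ...                        */
		cmd->scsi_done(cmd);		/* return the command to the SCSI midlayer */
		__ufshcd_release(hba);		/* decrease clk_gating.active_reqs         */
	}
	ufshcd_outstanding_req_clear(hba, tag);	/* ... and this                            */
	ufshcd_clk_scaling_update_busy(hba);	/* update clock scaling busy statistics    */
}

The last two comments in the if-block mark the steps the old ufshcd_abort()
skipped, which is why clk_gating.active_reqs could never drop back to zero
after a successful abort.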