From patchwork Wed Mar 2 17:27:31 2022
X-Patchwork-Submitter: Akhil P Oommen
X-Patchwork-Id: 547649
From: Akhil P Oommen <quic_akhilpo@quicinc.com>
To: freedreno, dri-devel@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Rob Clark, Dmitry Baryshkov, Bjorn Andersson
Cc: Abhinav Kumar, Daniel Vetter, David Airlie, Sean Paul,
 linux-kernel@vger.kernel.org
Subject: [PATCH v1 05/10] drm/msm: Do recovery on hw_init failure
Date: Wed, 2 Mar 2022 22:57:31 +0530
Message-Id: <20220302225551.v1.5.Ib02d5c2b453e8d4ef3f20e48fef7d9e0be09178e@changeid>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1646242056-2456-1-git-send-email-quic_akhilpo@quicinc.com>
References: <1646242056-2456-1-git-send-email-quic_akhilpo@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

Schedule the recover worker when there is a hw init failure in
msm_gpu_submit(). The recover worker will take care of capturing a
coredump, recovering the gpu and resubmitting the pending IBs.
Signed-off-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
---
 drivers/gpu/drm/msm/msm_gpu.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index e8a442a..4d24fa1 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -757,12 +757,15 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 	struct msm_drm_private *priv = dev->dev_private;
 	struct msm_ringbuffer *ring = submit->ring;
 	unsigned long flags;
+	int ret;
 
 	WARN_ON(!mutex_is_locked(&gpu->lock));
 
 	pm_runtime_get_sync(&gpu->pdev->dev);
 
-	msm_gpu_hw_init(gpu);
+	ret = msm_gpu_hw_init(gpu);
+	if (ret)
+		kthread_queue_work(gpu->worker, &gpu->recover_work);
 
 	submit->seqno = ++ring->seqno;
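
For readers less familiar with the kthread_worker mechanism the patch queues
onto, below is a minimal, self-contained sketch of that pattern. The demo_*
struct and functions are hypothetical placeholders for illustration only;
they are not the msm driver's actual recover_worker(), which (as the commit
message says) additionally captures a coredump, recovers the gpu and
resubmits the pending IBs.

	/*
	 * Illustrative sketch only -- not the msm driver code.
	 * Shows the kthread_worker pattern: a work item is initialized
	 * with a handler and later queued from the failure path.
	 */
	#include <linux/err.h>
	#include <linux/kernel.h>
	#include <linux/kthread.h>
	#include <linux/printk.h>

	struct demo_gpu {
		struct kthread_worker *worker;
		struct kthread_work recover_work;
	};

	static void demo_recover_worker(struct kthread_work *work)
	{
		struct demo_gpu *gpu =
			container_of(work, struct demo_gpu, recover_work);

		/* Real driver: capture coredump, recover hw, resubmit IBs. */
		pr_info("demo_gpu %p: running recovery\n", gpu);
	}

	static int demo_gpu_init(struct demo_gpu *gpu)
	{
		/* Dedicated worker thread that runs queued work items. */
		gpu->worker = kthread_create_worker(0, "demo-gpu-worker");
		if (IS_ERR(gpu->worker))
			return PTR_ERR(gpu->worker);

		kthread_init_work(&gpu->recover_work, demo_recover_worker);
		return 0;
	}

	static void demo_gpu_submit(struct demo_gpu *gpu, int hw_init_ret)
	{
		/* Mirrors the patch: on hw_init failure, defer to the worker. */
		if (hw_init_ret)
			kthread_queue_work(gpu->worker, &gpu->recover_work);
	}

Deferring to the worker keeps the heavyweight recovery steps out of the
submit path, which holds gpu->lock and runs in the submitter's context.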