From patchwork Wed Mar 29 22:24:59 2023
X-Patchwork-Submitter: Dmitry Baryshkov
X-Patchwork-Id: 669269
From: Dmitry Baryshkov
To: Andy Gross, Bjorn Andersson, Konrad Dybcio, Rob Clark, Sean Paul,
 Abhinav Kumar, Rob Herring, Krzysztof Kozlowski
Cc: Stephen Boyd, David Airlie, Daniel Vetter, linux-arm-msm@vger.kernel.org,
 devicetree@vger.kernel.org, dri-devel@lists.freedesktop.org,
 freedreno@lists.freedesktop.org
Subject: [RFC PATCH 2/3] drm/msm/a5xx: scale MX domain following the frequency changes
Date: Thu, 30 Mar 2023 01:24:59 +0300
Message-Id: <20230329222500.1131836-3-dmitry.baryshkov@linaro.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230329222500.1131836-1-dmitry.baryshkov@linaro.org>
References: <20230329222500.1131836-1-dmitry.baryshkov@linaro.org>

For some a5xx Adrenos we have to specify both GX and MX power domains:
GX powers up the GPU clocks and logic, while MX is used to scale the
voltage of the memory cells. When the DT specifies several power domains
(GX and MX), none of them is bound by the core, so we have to manage GX
manually. Also make sure that the MX domain is resumed and scaled to the
proper performance state to follow the desired GPU frequency.

Signed-off-by: Dmitry Baryshkov
---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 52 +++++++++++++++++++++++++++
 drivers/gpu/drm/msm/adreno/a5xx_gpu.h |  3 ++
 2 files changed, 55 insertions(+)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 0372f8908202..36b3d11dd5b0 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 #include
 #include "msm_gem.h"
 #include "msm_mmu.h"
@@ -1053,6 +1054,13 @@ static void a5xx_destroy(struct msm_gpu *gpu)
 	}
 
 	adreno_gpu_cleanup(adreno_gpu);
+
+	if (a5xx_gpu->mx_link)
+		device_link_del(a5xx_gpu->mx_link);
+
+	if (a5xx_gpu->gxpd)
+		dev_pm_domain_detach(a5xx_gpu->gxpd, true);
+
 	kfree(a5xx_gpu);
 }
 
@@ -1339,8 +1347,15 @@ static void a5xx_dump(struct msm_gpu *gpu)
 static int a5xx_pm_resume(struct msm_gpu *gpu)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+	struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
 	int ret;
 
+	if (a5xx_gpu->gxpd) {
+		ret = pm_runtime_resume_and_get(a5xx_gpu->gxpd);
+		if (ret < 0)
+			return ret;
+	}
+
 	/* Turn on the core power */
 	ret = msm_gpu_pm_resume(gpu);
 	if (ret)
@@ -1414,6 +1429,9 @@ static int a5xx_pm_suspend(struct msm_gpu *gpu)
 	if (ret)
 		return ret;
 
+	if (a5xx_gpu->gxpd)
+		pm_runtime_put(a5xx_gpu->gxpd);
+
 	if (a5xx_gpu->has_whereami)
 		for (i = 0; i < gpu->nr_rings; i++)
 			a5xx_gpu->shadow[i] = 0;
@@ -1762,6 +1780,40 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
 
 	a5xx_gpu->lm_leakage = 0x4E001A;
 
+	/*
+	 * If the device has several power domains (gx and mx), none are attached by the core.
+	 */
+	if (!pdev->dev.pm_domain) {
+		struct device **opp_virt_dev;
+		struct device *pd;
+
+		/* FIXME: add cpr once it is supported */
+		static const char *genpd_names[] = { "mx", NULL };
+
+		pd = dev_pm_domain_attach_by_name(&pdev->dev, "gx");
+		if (IS_ERR(pd))
+			return ERR_CAST(pd);
+
+		/* GX is required for GPU to function */
+		if (pd == NULL)
+			return ERR_PTR(-EINVAL);
+
+		a5xx_gpu->gxpd = pd;
+
+		ret = devm_pm_opp_attach_genpd(&pdev->dev, genpd_names, &opp_virt_dev);
+		if (ret) {
+			dev_pm_domain_detach(a5xx_gpu->gxpd, true);
+			return ERR_PTR(ret);
+		}
+
+		a5xx_gpu->mx_link = device_link_add(&pdev->dev, opp_virt_dev[0],
+						    DL_FLAG_RPM_ACTIVE |
+						    DL_FLAG_PM_RUNTIME |
+						    DL_FLAG_STATELESS);
+		if (!a5xx_gpu->mx_link)
+			return ERR_PTR(-ENODEV);
+	}
+
 	check_speed_bin(&pdev->dev);
 
 	nr_rings = 4;
 
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.h b/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
index c7187bcc5e90..36e910397c14 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
@@ -44,6 +44,9 @@ struct a5xx_gpu {
 
 	/* True if the microcode supports the WHERE_AM_I opcode */
 	bool has_whereami;
+
+	struct device *gxpd;
+	struct device_link *mx_link;
 };
 
 #define to_a5xx_gpu(x) container_of(x, struct a5xx_gpu, base)
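
For readers less familiar with the multi-domain case, here is a stand-alone
sketch of the same pattern outside of the a5xx code. It is illustrative only
and not part of the patch: the my_gpu structure and the my_gpu_attach_pds() /
my_gpu_resume() / my_gpu_suspend() helpers are made up for the example, while
the kernel APIs (dev_pm_domain_attach_by_name(), devm_pm_opp_attach_genpd(),
device_link_add(), pm_runtime_resume_and_get()) are the ones the patch itself
uses.

/*
 * Illustrative sketch only, not part of the patch: attach the GX domain by
 * name, let the OPP core handle the MX domain through a virtual device, and
 * power GX up/down around the GPU resume/suspend paths. The my_gpu naming is
 * hypothetical.
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/pm_domain.h>
#include <linux/pm_opp.h>
#include <linux/pm_runtime.h>

struct my_gpu {
	struct device *gxpd;		/* GX domain, managed by hand */
	struct device_link *mx_link;	/* keeps MX active with the GPU */
};

static int my_gpu_attach_pds(struct platform_device *pdev, struct my_gpu *gpu)
{
	static const char *genpd_names[] = { "mx", NULL };
	struct device **virt_dev;
	struct device *pd;
	int ret;

	/* With a single power domain the core attaches it for us. */
	if (pdev->dev.pm_domain)
		return 0;

	pd = dev_pm_domain_attach_by_name(&pdev->dev, "gx");
	if (IS_ERR(pd))
		return PTR_ERR(pd);
	if (!pd)		/* GX is mandatory for the GPU to work */
		return -EINVAL;
	gpu->gxpd = pd;

	/* The OPP core creates a virtual device for the MX domain ... */
	ret = devm_pm_opp_attach_genpd(&pdev->dev, genpd_names, &virt_dev);
	if (ret)
		goto detach_gx;

	/* ... which we link so that MX is powered whenever the GPU is. */
	gpu->mx_link = device_link_add(&pdev->dev, virt_dev[0],
				       DL_FLAG_RPM_ACTIVE |
				       DL_FLAG_PM_RUNTIME |
				       DL_FLAG_STATELESS);
	if (!gpu->mx_link) {
		ret = -ENODEV;
		goto detach_gx;
	}

	return 0;

detach_gx:
	dev_pm_domain_detach(gpu->gxpd, true);
	return ret;
}

static int my_gpu_resume(struct my_gpu *gpu)
{
	/* GX is not bound by the core, so it has to be powered up by hand. */
	if (gpu->gxpd)
		return pm_runtime_resume_and_get(gpu->gxpd);
	return 0;
}

static void my_gpu_suspend(struct my_gpu *gpu)
{
	if (gpu->gxpd)
		pm_runtime_put(gpu->gxpd);
}

The DL_FLAG_RPM_ACTIVE | DL_FLAG_PM_RUNTIME link is what keeps the MX domain
resumed whenever the GPU device is active; the OPP core is then expected to
scale the mx virtual device to the performance state required by the selected
GPU OPP, matching what the commit message describes.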