From patchwork Thu Jan 31 00:39:31 2019
X-Patchwork-Submitter: Bjorn Andersson
X-Patchwork-Id: 157098
Delivered-To: patch@linaro.org
From: Bjorn Andersson
To: Ohad Ben-Cohen, Bjorn Andersson
Cc: Andy Gross, David Brown, Rob Herring, Mark Rutland,
    Arun Kumar Neelakantam, Sibi Sankar,
    linux-arm-msm@vger.kernel.org, devicetree@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-remoteproc@vger.kernel.org
Subject: [PATCH v5 08/10] remoteproc: q6v5-mss: Active powerdomain for SDM845
Date: Wed, 30 Jan 2019 16:39:31 -0800
Message-Id: <20190131003933.11436-9-bjorn.andersson@linaro.org>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20190131003933.11436-1-bjorn.andersson@linaro.org>
References: <20190131003933.11436-1-bjorn.andersson@linaro.org>
X-Mailing-List: devicetree@vger.kernel.org

The SDM845 MSS needs the load_state power domain voted for during the
entire time the MSS is powered on, to let the AOSS know that it may not
perform certain power-save measures. So vote for it.
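For context, the hunks below rely on the q6v5_pds_attach()/q6v5_pds_enable()/
q6v5_pds_disable() helpers introduced earlier in this series and not shown in
this patch. Below is a minimal sketch of how such helpers can be built on the
genpd and runtime-PM APIs; the example_* names are illustrative only and the
in-tree implementations may differ in detail:

/*
 * Illustrative sketch only: attaching a NULL-terminated list of power
 * domain names (e.g. { "load_state", NULL }) and voting them on/off.
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/pm_domain.h>
#include <linux/pm_runtime.h>

static int example_pds_attach(struct device *dev, struct device **pds,
			      char **pd_names)
{
	size_t num_pds = 0;
	int ret;
	int i;

	if (!pd_names)
		return 0;

	/* The name list is NULL terminated */
	while (pd_names[num_pds])
		num_pds++;

	for (i = 0; i < num_pds; i++) {
		/* Each named domain becomes a virtual device we can vote on */
		pds[i] = dev_pm_domain_attach_by_name(dev, pd_names[i]);
		if (IS_ERR(pds[i])) {
			ret = PTR_ERR(pds[i]);
			goto unroll_attach;
		}
	}

	return num_pds;

unroll_attach:
	for (i--; i >= 0; i--)
		dev_pm_domain_detach(pds[i], false);

	return ret;
}

static int example_pds_enable(struct device **pds, size_t pd_count)
{
	int ret;
	int i;

	for (i = 0; i < pd_count; i++) {
		/* Request the highest performance state and hold a usage vote */
		dev_pm_genpd_set_performance_state(pds[i], INT_MAX);
		ret = pm_runtime_get_sync(pds[i]);
		if (ret < 0)
			goto unroll_votes;
	}

	return 0;

unroll_votes:
	for (i--; i >= 0; i--) {
		dev_pm_genpd_set_performance_state(pds[i], 0);
		pm_runtime_put(pds[i]);
	}

	return ret;
}

static void example_pds_disable(struct device **pds, size_t pd_count)
{
	int i;

	/* Drop the performance-state request and the usage-count vote */
	for (i = 0; i < pd_count; i++) {
		dev_pm_genpd_set_performance_state(pds[i], 0);
		pm_runtime_put(pds[i]);
	}
}

Holding the vote for the entire time the remoteproc is up, and dropping it
again in the reclaim and error paths, is what the hunks below implement for
the new load_state domain.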
Tested-by: Sibi Sankar
Reviewed-by: Sibi Sankar
Signed-off-by: Bjorn Andersson
---

Changes since v4:
- None

Changes since v3:
- None

 drivers/remoteproc/qcom_q6v5_mss.c | 31 ++++++++++++++++++++++++++++--
 1 file changed, 29 insertions(+), 2 deletions(-)

-- 
2.18.0

diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
index c32c63e351a0..e30f5486fd20 100644
--- a/drivers/remoteproc/qcom_q6v5_mss.c
+++ b/drivers/remoteproc/qcom_q6v5_mss.c
@@ -133,6 +133,7 @@ struct rproc_hexagon_res {
 	char **proxy_clk_names;
 	char **reset_clk_names;
 	char **active_clk_names;
+	char **active_pd_names;
 	char **proxy_pd_names;
 	int version;
 	bool need_mem_protection;
@@ -159,10 +160,12 @@ struct q6v5 {
 	struct clk *active_clks[8];
 	struct clk *reset_clks[4];
 	struct clk *proxy_clks[4];
+	struct device *active_pds[1];
 	struct device *proxy_pds[3];
 	int active_clk_count;
 	int reset_clk_count;
 	int proxy_clk_count;
+	int active_pd_count;
 	int proxy_pd_count;
 
 	struct reg_info active_regs[1];
@@ -730,10 +733,16 @@ static int q6v5_mba_load(struct q6v5 *qproc)
 
 	qcom_q6v5_prepare(&qproc->q6v5);
 
+	ret = q6v5_pds_enable(qproc, qproc->active_pds, qproc->active_pd_count);
+	if (ret < 0) {
+		dev_err(qproc->dev, "failed to enable active power domains\n");
+		goto disable_irqs;
+	}
+
 	ret = q6v5_pds_enable(qproc, qproc->proxy_pds, qproc->proxy_pd_count);
 	if (ret < 0) {
 		dev_err(qproc->dev, "failed to enable proxy power domains\n");
-		goto disable_irqs;
+		goto disable_active_pds;
 	}
 
 	ret = q6v5_regulator_enable(qproc, qproc->proxy_regs,
@@ -839,6 +848,8 @@ static int q6v5_mba_load(struct q6v5 *qproc)
 			       qproc->proxy_reg_count);
 disable_proxy_pds:
 	q6v5_pds_disable(qproc, qproc->proxy_pds, qproc->proxy_pd_count);
+disable_active_pds:
+	q6v5_pds_disable(qproc, qproc->active_pds, qproc->active_pd_count);
 disable_irqs:
 	qcom_q6v5_unprepare(&qproc->q6v5);
 
@@ -878,6 +889,7 @@ static void q6v5_mba_reclaim(struct q6v5 *qproc)
 			 qproc->active_clk_count);
 	q6v5_regulator_disable(qproc, qproc->active_regs,
 			       qproc->active_reg_count);
+	q6v5_pds_disable(qproc, qproc->active_pds, qproc->active_pd_count);
 
 	/* In case of failure or coredump scenario where reclaiming MBA memory
 	 * could not happen reclaim it here.
@@ -1412,11 +1424,19 @@ static int q6v5_probe(struct platform_device *pdev)
 	}
 	qproc->active_reg_count = ret;
 
+	ret = q6v5_pds_attach(&pdev->dev, qproc->active_pds,
+			      desc->active_pd_names);
+	if (ret < 0) {
+		dev_err(&pdev->dev, "Failed to attach active power domains\n");
+		goto free_rproc;
+	}
+	qproc->active_pd_count = ret;
+
 	ret = q6v5_pds_attach(&pdev->dev, qproc->proxy_pds,
 			      desc->proxy_pd_names);
 	if (ret < 0) {
 		dev_err(&pdev->dev, "Failed to init power domains\n");
-		goto free_rproc;
+		goto detach_active_pds;
 	}
 	qproc->proxy_pd_count = ret;
 
@@ -1452,6 +1472,8 @@ static int q6v5_probe(struct platform_device *pdev)
 
 detach_proxy_pds:
 	q6v5_pds_detach(qproc, qproc->proxy_pds, qproc->proxy_pd_count);
+detach_active_pds:
+	q6v5_pds_detach(qproc, qproc->active_pds, qproc->active_pd_count);
 free_rproc:
 	rproc_free(rproc);
 
@@ -1469,6 +1491,7 @@ static int q6v5_remove(struct platform_device *pdev)
 	qcom_remove_smd_subdev(qproc->rproc, &qproc->smd_subdev);
 	qcom_remove_ssr_subdev(qproc->rproc, &qproc->ssr_subdev);
 
+	q6v5_pds_detach(qproc, qproc->active_pds, qproc->active_pd_count);
 	q6v5_pds_detach(qproc, qproc->proxy_pds, qproc->proxy_pd_count);
 
 	rproc_free(qproc->rproc);
@@ -1495,6 +1518,10 @@ static const struct rproc_hexagon_res sdm845_mss = {
 			"mnoc_axi",
 			NULL
 	},
+	.active_pd_names = (char*[]){
+			"load_state",
+			NULL
+	},
 	.proxy_pd_names = (char*[]){
 			"cx",
 			"mx",