From: Konrad Dybcio
Date: Tue, 08 Aug 2023 23:02:39 +0200
Subject: [PATCH v2 01/14] dt-bindings: display/msm/gmu: Add Adreno 7[34]0 GMU
Message-Id: <20230628-topic-a7xx_drmmsm-v2-1-1439e1b2343f@linaro.org>

The GMU on the A7xx series is pretty much the same as on the A6xx parts.
It's now "smarter", needs fewer register writes and controls more things
(like inter-frame power collapse) mostly internally, instead of us having
to write to G[PM]U_[CG]X registers from APPS.

The only difference worth mentioning is the now-required DEMET clock,
which is strictly needed for things like asserting reset lines; leaving it
off results in a GMU that is not fully functional (all OOB requests fail
and HFI hangs after the first submitted OOB).

Describe the A730 and A740 GMU.
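[Editor's note] For illustration only, here is a minimal sketch (not part of the
patch) of how a GMU driver could confirm that the strictly-required DEMET clock
was actually provided before attempting GMU boot; the helper name and error
message are assumptions, and the real driver may handle this differently.

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/string.h>

/*
 * Illustrative sketch: scan the clk_bulk array (as returned by
 * devm_clk_bulk_get_all()) for the "demet" entry required by the new
 * binding. Without this clock the GMU cannot assert reset lines, OOB
 * requests fail and HFI hangs, so refusing to continue is safer.
 */
static int example_gmu_check_demet_clk(struct device *dev,
					const struct clk_bulk_data *clks,
					int nr_clks)
{
	int i;

	for (i = 0; i < nr_clks; i++)
		if (!strcmp(clks[i].id, "demet"))
			return 0;

	dev_err(dev, "required DEMET clock is missing\n");
	return -ENOENT;
}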
Reviewed-by: Krzysztof Kozlowski
Tested-by: Neil Armstrong # on SM8550-QRD
Tested-by: Dmitry Baryshkov # sm8450
Signed-off-by: Konrad Dybcio
---
 .../devicetree/bindings/display/msm/gmu.yaml | 40 +++++++++++++++++++++-
 1 file changed, 39 insertions(+), 1 deletion(-)

diff --git a/Documentation/devicetree/bindings/display/msm/gmu.yaml b/Documentation/devicetree/bindings/display/msm/gmu.yaml
index 5fc4106110ad..20ddb89a4500 100644
--- a/Documentation/devicetree/bindings/display/msm/gmu.yaml
+++ b/Documentation/devicetree/bindings/display/msm/gmu.yaml
@@ -21,7 +21,7 @@ properties:
   compatible:
     oneOf:
       - items:
-          - pattern: '^qcom,adreno-gmu-6[0-9][0-9]\.[0-9]$'
+          - pattern: '^qcom,adreno-gmu-[67][0-9][0-9]\.[0-9]$'
           - const: qcom,adreno-gmu
       - const: qcom,adreno-gmu-wrapper
 
@@ -213,6 +213,44 @@ allOf:
             - const: axi
             - const: memnoc
 
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - qcom,adreno-gmu-730.1
+              - qcom,adreno-gmu-740.1
+    then:
+      properties:
+        reg:
+          items:
+            - description: Core GMU registers
+            - description: Resource controller registers
+            - description: GMU PDC registers
+        reg-names:
+          items:
+            - const: gmu
+            - const: rscc
+            - const: gmu_pdc
+        clocks:
+          items:
+            - description: GPU AHB clock
+            - description: GMU clock
+            - description: GPU CX clock
+            - description: GPU AXI clock
+            - description: GPU MEMNOC clock
+            - description: GMU HUB clock
+            - description: GPUSS DEMET clock
+        clock-names:
+          items:
+            - const: ahb
+            - const: gmu
+            - const: cxo
+            - const: axi
+            - const: memnoc
+            - const: hub
+            - const: demet
+
   - if:
       properties:
         compatible:

From: Konrad Dybcio
Date: Tue, 08 Aug 2023 23:02:40 +0200
Subject: [PATCH v2 02/14] dt-bindings: display/msm/gmu: Allow passing QMP handle
Message-Id: <20230628-topic-a7xx_drmmsm-v2-2-1439e1b2343f@linaro.org>

When booting the GMU, the QMP mailbox should be pinged about some
tunables (e.g. adaptive clock distribution state). To achieve that, a
reference to it is necessary. Allow it, and require it for A730.
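[Editor's note] As a rough consumer-side sketch (not part of this patch; the
lookup the driver actually performs may differ), the new property would be
resolved from the GMU node roughly like this:

#include <linux/of.h>

/*
 * Illustrative sketch: resolve the qcom,qmp phandle from the GMU node so
 * the driver can later message the AOSS side channel about tunables such
 * as the adaptive clock distribution state. Returns a node with an
 * elevated refcount, or NULL if the property is absent.
 */
static struct device_node *example_gmu_get_qmp_node(struct device_node *gmu_node)
{
	return of_parse_phandle(gmu_node, "qcom,qmp", 0);
}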
Tested-by: Neil Armstrong # on SM8550-QRD
Tested-by: Dmitry Baryshkov # sm8450
Signed-off-by: Konrad Dybcio
Acked-by: Krzysztof Kozlowski
---
 Documentation/devicetree/bindings/display/msm/gmu.yaml | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/Documentation/devicetree/bindings/display/msm/gmu.yaml b/Documentation/devicetree/bindings/display/msm/gmu.yaml
index 20ddb89a4500..e132dbff3c4a 100644
--- a/Documentation/devicetree/bindings/display/msm/gmu.yaml
+++ b/Documentation/devicetree/bindings/display/msm/gmu.yaml
@@ -64,6 +64,10 @@ properties:
   iommus:
     maxItems: 1
 
+  qcom,qmp:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description: Reference to the AOSS side-channel message RAM
+
   operating-points-v2: true
 
   opp-table:
@@ -251,6 +255,9 @@ allOf:
             - const: hub
             - const: demet
 
+      required:
+        - qcom,qmp
+
   - if:
       properties:
         compatible:

From: Konrad Dybcio
Date: Tue, 08 Aug 2023 23:02:41 +0200
Subject: [PATCH v2 03/14] dt-bindings: display/msm/gpu: Allow A7xx SKUs
Message-Id: <20230628-topic-a7xx_drmmsm-v2-3-1439e1b2343f@linaro.org>

Allow A7xx SKUs, such as the A730 GPU found on SM8450 and friends. They
use the GMU for all things DVFS, just like most A6xx GPUs.

Reviewed-by: Krzysztof Kozlowski
Tested-by: Neil Armstrong # on SM8550-QRD
Tested-by: Dmitry Baryshkov # sm8450
Signed-off-by: Konrad Dybcio
---
 Documentation/devicetree/bindings/display/msm/gpu.yaml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/devicetree/bindings/display/msm/gpu.yaml b/Documentation/devicetree/bindings/display/msm/gpu.yaml
index 56b9b247e8c2..b019db954793 100644
--- a/Documentation/devicetree/bindings/display/msm/gpu.yaml
+++ b/Documentation/devicetree/bindings/display/msm/gpu.yaml
@@ -23,7 +23,7 @@ properties:
           The driver is parsing the compat string for Adreno to
           figure out the gpu-id and patch level.
         items:
-          - pattern: '^qcom,adreno-[3-6][0-9][0-9]\.[0-9]$'
+          - pattern: '^qcom,adreno-[3-7][0-9][0-9]\.[0-9]$'
           - const: qcom,adreno
       - description: |
           The driver is parsing the compat string for Imageon to
@@ -203,7 +203,7 @@ allOf:
       properties:
         compatible:
           contains:
-            pattern: '^qcom,adreno-6[0-9][0-9]\.[0-9]$'
+            pattern: '^qcom,adreno-[67][0-9][0-9]\.[0-9]$'
     then: # Starting with A6xx, the clocks are usually defined in the GMU node
       properties:

From: Konrad Dybcio
Date: Tue, 08 Aug 2023 23:02:42 +0200
Subject: [PATCH v2 04/14] drm/msm/a6xx: Add missing regs for A7XX
Message-Id: <20230628-topic-a7xx_drmmsm-v2-4-1439e1b2343f@linaro.org>

Add some missing definitions required for A7xx support. This may be
substituted with a mesa header sync.

Tested-by: Neil Armstrong # on SM8550-QRD
Tested-by: Dmitry Baryshkov # sm8450
Signed-off-by: Konrad Dybcio
---
 drivers/gpu/drm/msm/adreno/a6xx.xml.h     | 9 +++++++++
 drivers/gpu/drm/msm/adreno/a6xx_gmu.xml.h | 8 ++++++++
 2 files changed, 17 insertions(+)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx.xml.h b/drivers/gpu/drm/msm/adreno/a6xx.xml.h
index 1c051535fd4a..863b5e3b0e67 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx.xml.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx.xml.h
@@ -1114,6 +1114,12 @@ enum a6xx_tex_type {
 #define REG_A6XX_CP_MISC_CNTL 0x00000840
 
 #define REG_A6XX_CP_APRIV_CNTL 0x00000844
+#define A6XX_CP_APRIV_CNTL_CDWRITE 0x00000040
+#define A6XX_CP_APRIV_CNTL_CDREAD 0x00000020
+#define A6XX_CP_APRIV_CNTL_RBRPWB 0x00000008
+#define A6XX_CP_APRIV_CNTL_RBPRIVLEVEL 0x00000004
+#define A6XX_CP_APRIV_CNTL_RBFETCH 0x00000002
+#define A6XX_CP_APRIV_CNTL_ICACHE 0x00000001
 
 #define REG_A6XX_CP_PREEMPT_THRESHOLD 0x000008c0
 
@@ -1939,6 +1945,8 @@ static inline uint32_t REG_A6XX_RBBM_PERFCTR_RBBM_SEL(uint32_t i0) { return 0x00
 
 #define REG_A6XX_RBBM_CLOCK_HYST_TEX_FCHE 0x00000122
 
+#define REG_A7XX_RBBM_CLOCK_HYST2_VFD 0x0000012f
+
 #define REG_A6XX_RBBM_LPAC_GBIF_CLIENT_QOS_CNTL 0x000005ff
 
 #define REG_A6XX_DBGC_CFG_DBGBUS_SEL_A 0x00000600
@@ -8252,5 +8260,6 @@ static inline uint32_t A6XX_CX_DBGC_CFG_DBGBUS_BYTEL_1_BYTEL15(uint32_t val)
 
 #define REG_A6XX_CX_MISC_SYSTEM_CACHE_CNTL_1 0x00000002
 
+#define REG_A7XX_CX_MISC_TCM_RET_CNTL 0x00000039
 
 #endif /* A6XX_XML */
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.xml.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.xml.h
index fcd9eb53baf8..5b66efafc901 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.xml.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.xml.h
@@ -360,6 +360,12 @@ static inline uint32_t A6XX_GMU_GPU_NAP_CTRL_SID(uint32_t val)
 
 #define REG_A6XX_GMU_GENERAL_7 0x000051cc
 
+#define REG_A6XX_GMU_GENERAL_8 0x000051cd
+
+#define REG_A6XX_GMU_GENERAL_9 0x000051ce
+
+#define REG_A6XX_GMU_GENERAL_10 0x000051cf
+
 #define REG_A6XX_GMU_ISENSE_CTRL 0x0000515d
 
 #define REG_A6XX_GPU_CS_ENABLE_REG 0x00008920
 
@@ -471,6 +477,8 @@ static inline uint32_t A6XX_GMU_GPU_NAP_CTRL_SID(uint32_t val)
 
 #define REG_A6XX_RSCC_SEQ_BUSY_DRV0 0x00000101
 
+#define REG_A7XX_RSCC_SEQ_MEM_0_DRV0_A740 0x00000154
+
 #define REG_A6XX_RSCC_SEQ_MEM_0_DRV0 0x00000180
 
 #define REG_A6XX_RSCC_TCS0_DRV0_STATUS 0x00000346

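[Editor's note] As a hedged illustration of how the new CP_APRIV_CNTL bit
definitions are meant to be combined (the value the driver actually programs
on A7xx is not shown in this excerpt and may differ):

#include <linux/types.h>

/*
 * Illustrative sketch: build a CP_APRIV_CNTL value from the bits added
 * above, granting the CP privileged ringbuffer fetch/write-back and
 * instruction-cache access. Which bits A7xx really sets is up to the
 * driver, not this example.
 */
static u32 example_apriv_mask(void)
{
	return A6XX_CP_APRIV_CNTL_ICACHE |
	       A6XX_CP_APRIV_CNTL_RBFETCH |
	       A6XX_CP_APRIV_CNTL_RBPRIVLEVEL |
	       A6XX_CP_APRIV_CNTL_RBRPWB;
}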
From: Konrad Dybcio
Date: Tue, 08 Aug 2023 23:02:43 +0200
Subject: [PATCH v2 05/14] drm/msm/a6xx: Introduce a6xx_llc_read
Message-Id: <20230628-topic-a7xx_drmmsm-v2-5-1439e1b2343f@linaro.org>

Add a helper that does exactly what it says on the can; it'll be
required for A7xx.

Tested-by: Neil Armstrong # on SM8550-QRD
Tested-by: Dmitry Baryshkov # sm8450
Signed-off-by: Konrad Dybcio
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 1ed202c4e497..0fef92f71c4e 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -1740,6 +1740,11 @@ static void a6xx_llc_rmw(struct a6xx_gpu *a6xx_gpu, u32 reg, u32 mask, u32 or)
 	return msm_rmw(a6xx_gpu->llc_mmio + (reg << 2), mask, or);
 }
 
+static u32 a6xx_llc_read(struct a6xx_gpu *a6xx_gpu, u32 reg)
+{
+	return msm_readl(a6xx_gpu->llc_mmio + (reg << 2));
+}
+
 static void a6xx_llc_write(struct a6xx_gpu *a6xx_gpu, u32 reg, u32 value)
 {
 	msm_writel(value, a6xx_gpu->llc_mmio + (reg << 2));

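[Editor's note] A condensed sketch of the kind of use this helper sees later in
the series (patch 08 reads the CX_MISC TCM retention state through it to pick
warm vs. cold GMU boot); the wrapper below is illustrative, assumes the usual
a6xx driver headers, and is not the driver's actual code layout:

/*
 * Illustrative sketch: decide between warm and cold GMU boot on A7xx by
 * checking whether TCM retention was left enabled, using the new
 * a6xx_llc_read() helper (register definition added in patch 04).
 */
static bool example_a7xx_gmu_was_warm(struct a6xx_gpu *a6xx_gpu)
{
	return a6xx_llc_read(a6xx_gpu, REG_A7XX_CX_MISC_TCM_RET_CNTL) == 1;
}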
From: Konrad Dybcio
Date: Tue, 08 Aug 2023 23:02:44 +0200
Subject: [PATCH v2 06/14] drm/msm/a6xx: Move LLC accessors to the common header
Message-Id: <20230628-topic-a7xx_drmmsm-v2-6-1439e1b2343f@linaro.org>

Move these wrappers in preparation for use in a6xx_gmu.c.

Tested-by: Neil Armstrong # on SM8550-QRD
Tested-by: Dmitry Baryshkov # sm8450
Signed-off-by: Konrad Dybcio
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 15 ---------------
 drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 15 +++++++++++++++
 2 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 0fef92f71c4e..6dd6d72bcd86 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -1735,21 +1735,6 @@ static irqreturn_t a6xx_irq(struct msm_gpu *gpu)
 	return IRQ_HANDLED;
 }
 
-static void a6xx_llc_rmw(struct a6xx_gpu *a6xx_gpu, u32 reg, u32 mask, u32 or)
-{
-	return msm_rmw(a6xx_gpu->llc_mmio + (reg << 2), mask, or);
-}
-
-static u32 a6xx_llc_read(struct a6xx_gpu *a6xx_gpu, u32 reg)
-{
-	return msm_readl(a6xx_gpu->llc_mmio + (reg << 2));
-}
-
-static void a6xx_llc_write(struct a6xx_gpu *a6xx_gpu, u32 reg, u32 value)
-{
-	msm_writel(value, a6xx_gpu->llc_mmio + (reg << 2));
-}
-
 static void a6xx_llc_deactivate(struct a6xx_gpu *a6xx_gpu)
 {
 	llcc_slice_deactivate(a6xx_gpu->llc_slice);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
index ab66d281828c..34822b080759 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
@@ -62,6 +62,21 @@ static inline bool a6xx_has_gbif(struct adreno_gpu *gpu)
 	return true;
 }
 
+static inline void a6xx_llc_rmw(struct a6xx_gpu *a6xx_gpu, u32 reg, u32 mask, u32 or)
+{
+	return msm_rmw(a6xx_gpu->llc_mmio + (reg << 2), mask, or);
+}
+
+static inline u32 a6xx_llc_read(struct a6xx_gpu *a6xx_gpu, u32 reg)
+{
+	return msm_readl(a6xx_gpu->llc_mmio + (reg << 2));
+}
+
+static inline void a6xx_llc_write(struct a6xx_gpu *a6xx_gpu, u32 reg, u32 value)
+{
+	msm_writel(value, a6xx_gpu->llc_mmio + (reg << 2));
+}
+
 #define shadowptr(_a6xx_gpu, _ring) ((_a6xx_gpu)->shadow_iova + \
 	((_ring)->id * sizeof(uint32_t)))

From: Konrad Dybcio
Date: Tue, 08 Aug 2023 23:02:45 +0200
Subject: [PATCH v2 07/14] drm/msm/a6xx: Bail out early if setting GPU OOB fails
Message-Id: <20230628-topic-a7xx_drmmsm-v2-7-1439e1b2343f@linaro.org>

If the GMU can't guarantee the required resources are up, trying to
bring up the GPU is a lost cause. Return early if setting GPU OOB fails.
Tested-by: Neil Armstrong # on SM8550-QRD
Tested-by: Dmitry Baryshkov # sm8450
Signed-off-by: Konrad Dybcio
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 6dd6d72bcd86..d4e85e24002f 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -1201,7 +1201,9 @@ static int hw_init(struct msm_gpu *gpu)
 
 	if (!adreno_has_gmu_wrapper(adreno_gpu)) {
 		/* Make sure the GMU keeps the GPU on while we set it up */
-		a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET);
+		ret = a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET);
+		if (ret)
+			return ret;
 	}
 
 	/* Clear GBIF halt in case GX domain was not collapsed */

From: Konrad Dybcio
Date: Tue, 08 Aug 2023 23:02:46 +0200
Subject: [PATCH v2 08/14] drm/msm/a6xx: Add skeleton A7xx support
Message-Id: <20230628-topic-a7xx_drmmsm-v2-8-1439e1b2343f@linaro.org>

A7xx GPUs are - from the kernel's POV anyway - basically another
generation of A6xx. They build upon the A650/A660_family advancements,
skipping some writes (presumably because more values are preset correctly
on reset), adding some new ones and changing others.

One notable difference is the introduction of a second shadow, called BV.
To handle this with the current code, allocate it right after the current
RPTR shadow.

BV handling and .submit are mostly based on Jonathan Marek's work.

All A7xx GPUs are assumed to have a GMU. A702 is not an A7xx-class GPU,
it's a weird forked A610.
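[Editor's note] To illustrate the "allocate it right after the current RPTR
shadow" idea, a minimal sketch follows; the macro name and exact layout are
assumptions for illustration only, not the series' actual implementation:

/*
 * Illustrative sketch: the per-ring RPTR shadow words sit at the start of
 * the existing shadow BO (see the shadowptr() macro in a6xx_gpu.h), so a
 * second (BV) shadow slot per ring is assumed to be carved out immediately
 * after them, avoiding a separate allocation.
 */
#define example_bv_shadowptr(_a6xx_gpu, _ring) \
	((_a6xx_gpu)->shadow_iova + \
	 ((_a6xx_gpu)->base.base.nr_rings + (_ring)->id) * sizeof(uint32_t))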
Tested-by: Neil Armstrong # on SM8550-QRD Tested-by: Dmitry Baryshkov # sm8450 Signed-off-by: Konrad Dybcio --- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 95 +++++-- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 451 ++++++++++++++++++++++++++++---- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 1 + drivers/gpu/drm/msm/adreno/adreno_gpu.h | 12 + drivers/gpu/drm/msm/msm_ringbuffer.h | 2 + 5 files changed, 481 insertions(+), 80 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index 03fa89bf3e4b..75984260898e 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -200,9 +200,10 @@ int a6xx_gmu_wait_for_idle(struct a6xx_gmu *gmu) static int a6xx_gmu_start(struct a6xx_gmu *gmu) { + struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu); + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; + u32 mask, reset_val, val; int ret; - u32 val; - u32 mask, reset_val; val = gmu_read(gmu, REG_A6XX_GMU_CM3_DTCM_START + 0xff8); if (val <= 0x20010004) { @@ -218,7 +219,11 @@ static int a6xx_gmu_start(struct a6xx_gmu *gmu) /* Set the log wptr index * note: downstream saves the value in poweroff and restores it here */ - gmu_write(gmu, REG_A6XX_GPU_GMU_CX_GMU_PWR_COL_CP_RESP, 0); + if (adreno_is_a7xx(adreno_gpu)) + gmu_write(gmu, REG_A6XX_GMU_GENERAL_9, 0); + else + gmu_write(gmu, REG_A6XX_GPU_GMU_CX_GMU_PWR_COL_CP_RESP, 0); + gmu_write(gmu, REG_A6XX_GMU_CM3_SYSRESET, 0); @@ -518,7 +523,9 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu) if (IS_ERR(pdcptr)) goto err; - if (adreno_is_a650(adreno_gpu) || adreno_is_a660_family(adreno_gpu)) + if (adreno_is_a650(adreno_gpu) || + adreno_is_a660_family(adreno_gpu) || + adreno_is_a7xx(adreno_gpu)) pdc_in_aop = true; else if (adreno_is_a618(adreno_gpu) || adreno_is_a640_family(adreno_gpu)) pdc_address_offset = 0x30090; @@ -550,7 +557,8 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu) gmu_write_rscc(gmu, REG_A6XX_RSCC_PDC_MATCH_VALUE_HI, 0x4514); /* Load RSC sequencer uCode for sleep and wakeup */ - if (adreno_is_a650_family(adreno_gpu)) { + if (adreno_is_a650_family(adreno_gpu) || + adreno_is_a7xx(adreno_gpu)) { gmu_write_rscc(gmu, REG_A6XX_RSCC_SEQ_MEM_0_DRV0, 0xeaaae5a0); gmu_write_rscc(gmu, REG_A6XX_RSCC_SEQ_MEM_0_DRV0 + 1, 0xe1a1ebab); gmu_write_rscc(gmu, REG_A6XX_RSCC_SEQ_MEM_0_DRV0 + 2, 0xa2e0a581); @@ -635,11 +643,18 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu) /* Set up the idle state for the GMU */ static void a6xx_gmu_power_config(struct a6xx_gmu *gmu) { + struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu); + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; + /* Disable GMU WB/RB buffer */ gmu_write(gmu, REG_A6XX_GMU_SYS_BUS_CONFIG, 0x1); gmu_write(gmu, REG_A6XX_GMU_ICACHE_CONFIG, 0x1); gmu_write(gmu, REG_A6XX_GMU_DCACHE_CONFIG, 0x1); + /* A7xx knows better by default! 
*/ + if (adreno_is_a7xx(adreno_gpu)) + return; + gmu_write(gmu, REG_A6XX_GMU_PWR_COL_INTER_FRAME_CTRL, 0x9c40400); switch (gmu->idle_level) { @@ -702,7 +717,7 @@ static int a6xx_gmu_fw_load(struct a6xx_gmu *gmu) u32 itcm_base = 0x00000000; u32 dtcm_base = 0x00040000; - if (adreno_is_a650_family(adreno_gpu)) + if (adreno_is_a650_family(adreno_gpu) || adreno_is_a7xx(adreno_gpu)) dtcm_base = 0x10004000; if (gmu->legacy) { @@ -751,14 +766,22 @@ static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state) { struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu); struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; + u32 fence_range_lower, fence_range_upper; int ret; u32 chipid; - if (adreno_is_a650_family(adreno_gpu)) { + /* Vote veto for FAL10 */ + if (adreno_is_a650_family(adreno_gpu) || adreno_is_a7xx(adreno_gpu)) { gmu_write(gmu, REG_A6XX_GPU_GMU_CX_GMU_CX_FALNEXT_INTF, 1); gmu_write(gmu, REG_A6XX_GPU_GMU_CX_GMU_CX_FAL_INTF, 1); } + /* Turn on TCM (Tightly Coupled Memory) retention */ + if (adreno_is_a7xx(adreno_gpu)) + a6xx_llc_write(a6xx_gpu, REG_A7XX_CX_MISC_TCM_RET_CNTL, 1); + else + gmu_write(gmu, REG_A6XX_GMU_GENERAL_7, 1); + if (state == GMU_WARM_BOOT) { ret = a6xx_rpmh_start(gmu); if (ret) @@ -768,9 +791,6 @@ static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state) "GMU firmware is not loaded\n")) return -ENOENT; - /* Turn on register retention */ - gmu_write(gmu, REG_A6XX_GMU_GENERAL_7, 1); - ret = a6xx_rpmh_start(gmu); if (ret) return ret; @@ -780,6 +800,7 @@ static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state) return ret; } + /* Clear init result to make sure we are getting a fresh value */ gmu_write(gmu, REG_A6XX_GMU_CM3_FW_INIT_RESULT, 0); gmu_write(gmu, REG_A6XX_GMU_CM3_BOOT_CONFIG, 0x02); @@ -787,8 +808,18 @@ static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state) gmu_write(gmu, REG_A6XX_GMU_HFI_QTBL_ADDR, gmu->hfi.iova); gmu_write(gmu, REG_A6XX_GMU_HFI_QTBL_INFO, 1); + if (adreno_is_a7xx(adreno_gpu)) { + fence_range_upper = 0x32; + fence_range_lower = 0x8a0; + } else { + fence_range_upper = 0xa; + fence_range_lower = 0xa0; + } + gmu_write(gmu, REG_A6XX_GMU_AHB_FENCE_RANGE_0, - (1 << 31) | (0xa << 18) | (0xa0)); + BIT(31) | + FIELD_PREP(GENMASK(30, 18), fence_range_upper) | + FIELD_PREP(GENMASK(17, 0), fence_range_lower)); /* * Snapshots toggle the NMI bit which will result in a jump to the NMI @@ -807,10 +838,17 @@ static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state) chipid |= (adreno_gpu->chip_id << 4) & 0xf000; /* minor */ chipid |= (adreno_gpu->chip_id << 8) & 0x0f00; /* patchid */ - gmu_write(gmu, REG_A6XX_GMU_HFI_SFR_ADDR, chipid); + if (adreno_is_a7xx(adreno_gpu)) { + gmu_write(gmu, REG_A6XX_GMU_GENERAL_10, chipid); + gmu_write(gmu, REG_A6XX_GMU_GENERAL_8, + (gmu->log.iova & GENMASK(31, 12)) | + ((gmu->log.size / SZ_4K - 1) & GENMASK(7, 0))); + } else { + gmu_write(gmu, REG_A6XX_GMU_HFI_SFR_ADDR, chipid); - gmu_write(gmu, REG_A6XX_GPU_GMU_CX_GMU_PWR_COL_CP_MSG, - gmu->log.iova | (gmu->log.size / SZ_4K - 1)); + gmu_write(gmu, REG_A6XX_GPU_GMU_CX_GMU_PWR_COL_CP_MSG, + gmu->log.iova | (gmu->log.size / SZ_4K - 1)); + } /* Set up the lowest idle level on the GMU */ a6xx_gmu_power_config(gmu); @@ -984,15 +1022,19 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu) enable_irq(gmu->gmu_irq); /* Check to see if we are doing a cold or warm boot */ - status = gmu_read(gmu, REG_A6XX_GMU_GENERAL_7) == 1 ? 
- GMU_WARM_BOOT : GMU_COLD_BOOT; - - /* - * Warm boot path does not work on newer GPUs - * Presumably this is because icache/dcache regions must be restored - */ - if (!gmu->legacy) + if (adreno_is_a7xx(adreno_gpu)) { + status = a6xx_llc_read(a6xx_gpu, REG_A7XX_CX_MISC_TCM_RET_CNTL) == 1 ? + GMU_WARM_BOOT : GMU_COLD_BOOT; + } else if (gmu->legacy) { + status = gmu_read(gmu, REG_A6XX_GMU_GENERAL_7) == 1 ? + GMU_WARM_BOOT : GMU_COLD_BOOT; + } else { + /* + * Warm boot path does not work on newer A6xx GPUs + * Presumably this is because icache/dcache regions must be restored + */ status = GMU_COLD_BOOT; + } ret = a6xx_gmu_fw_start(gmu, status); if (ret) @@ -1604,7 +1646,8 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node) * are otherwise unused by a660. */ gmu->dummy.size = SZ_4K; - if (adreno_is_a660_family(adreno_gpu)) { + if (adreno_is_a660_family(adreno_gpu) || + adreno_is_a7xx(adreno_gpu)) { ret = a6xx_gmu_memory_alloc(gmu, &gmu->debug, SZ_4K * 7, 0x60400000, "debug"); if (ret) @@ -1620,7 +1663,8 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node) goto err_memory; /* Note that a650 family also includes a660 family: */ - if (adreno_is_a650_family(adreno_gpu)) { + if (adreno_is_a650_family(adreno_gpu) || + adreno_is_a7xx(adreno_gpu)) { ret = a6xx_gmu_memory_alloc(gmu, &gmu->icache, SZ_16M - SZ_16K, 0x04000, "icache"); if (ret) @@ -1668,7 +1712,8 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node) goto err_memory; } - if (adreno_is_a650_family(adreno_gpu)) { + if (adreno_is_a650_family(adreno_gpu) || + adreno_is_a7xx(adreno_gpu)) { gmu->rscc = a6xx_gmu_get_mmio(pdev, "rscc"); if (IS_ERR(gmu->rscc)) { ret = -ENODEV; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index d4e85e24002f..61ce8d053355 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -103,6 +103,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, struct msm_ringbuffer *ring, struct msm_file_private *ctx) { bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1; + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; phys_addr_t ttbr; u32 asid; u64 memptr = rbmemptr(ring, ttbr0); @@ -114,9 +115,11 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, return; if (!sysprof) { - /* Turn off protected mode to write to special registers */ - OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1); - OUT_RING(ring, 0); + if (!adreno_is_a7xx(adreno_gpu)) { + /* Turn off protected mode to write to special registers */ + OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1); + OUT_RING(ring, 0); + } OUT_PKT4(ring, REG_A6XX_RBBM_PERFCTR_SRAM_INIT_CMD, 1); OUT_RING(ring, 1); @@ -141,6 +144,16 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, OUT_RING(ring, lower_32_bits(ttbr)); OUT_RING(ring, (asid << 16) | upper_32_bits(ttbr)); + /* + * Sync both threads after switching pagetables and enable BR only + * to make sure BV doesn't race ahead while BR is still switching + * pagetables. 
+ */ + if (adreno_is_a7xx(&a6xx_gpu->base)) { + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, CP_THREAD_CONTROL_0_SYNC_THREADS | CP_SET_THREAD_BR); + } + /* * And finally, trigger a uche flush to be sure there isn't anything * lingering in that part of the GPU @@ -163,9 +176,11 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, OUT_RING(ring, CP_WAIT_REG_MEM_4_MASK(0x1)); OUT_RING(ring, CP_WAIT_REG_MEM_5_DELAY_LOOP_CYCLES(0)); - /* Re-enable protected mode: */ - OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1); - OUT_RING(ring, 1); + if (!adreno_is_a7xx(adreno_gpu)) { + /* Re-enable protected mode: */ + OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1); + OUT_RING(ring, 1); + } } } @@ -252,6 +267,133 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) a6xx_flush(gpu, ring); } +static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) +{ + unsigned int index = submit->seqno % MSM_GPU_SUBMIT_STATS_COUNT; + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); + struct msm_ringbuffer *ring = submit->ring; + unsigned int i, ibs = 0; + + /* + * Toggle concurrent binning for pagetable switch and set the thread to + * BR since only it can execute the pagetable switch packets. + */ + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, CP_THREAD_CONTROL_0_SYNC_THREADS | CP_SET_THREAD_BR); + + a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx); + + get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP(0), + rbmemptr_stats(ring, index, cpcycles_start)); + get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER, + rbmemptr_stats(ring, index, alwayson_start)); + + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, CP_SET_THREAD_BOTH); + + OUT_PKT7(ring, CP_SET_MARKER, 1); + OUT_RING(ring, 0x101); /* IFPC disable */ + + OUT_PKT7(ring, CP_SET_MARKER, 1); + OUT_RING(ring, 0x00d); /* IB1LIST start */ + + /* Submit the commands */ + for (i = 0; i < submit->nr_cmds; i++) { + switch (submit->cmd[i].type) { + case MSM_SUBMIT_CMD_IB_TARGET_BUF: + break; + case MSM_SUBMIT_CMD_CTX_RESTORE_BUF: + if (gpu->cur_ctx_seqno == submit->queue->ctx->seqno) + break; + fallthrough; + case MSM_SUBMIT_CMD_BUF: + OUT_PKT7(ring, CP_INDIRECT_BUFFER_PFE, 3); + OUT_RING(ring, lower_32_bits(submit->cmd[i].iova)); + OUT_RING(ring, upper_32_bits(submit->cmd[i].iova)); + OUT_RING(ring, submit->cmd[i].size); + ibs++; + break; + } + + /* + * Periodically update shadow-wptr if needed, so that we + * can see partial progress of submits with large # of + * cmds.. 
otherwise we could needlessly stall waiting for + * ringbuffer state, simply due to looking at a shadow + * rptr value that has not been updated + */ + if ((ibs % 32) == 0) + update_shadow_rptr(gpu, ring); + } + + OUT_PKT7(ring, CP_SET_MARKER, 1); + OUT_RING(ring, 0x00e); /* IB1LIST end */ + + get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP(0), + rbmemptr_stats(ring, index, cpcycles_end)); + get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER, + rbmemptr_stats(ring, index, alwayson_end)); + + /* Write the fence to the scratch register */ + OUT_PKT4(ring, REG_A6XX_CP_SCRATCH_REG(2), 1); + OUT_RING(ring, submit->seqno); + + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, CP_SET_THREAD_BR); + + OUT_PKT7(ring, CP_EVENT_WRITE, 1); + OUT_RING(ring, CCU_INVALIDATE_DEPTH); + + OUT_PKT7(ring, CP_EVENT_WRITE, 1); + OUT_RING(ring, CCU_INVALIDATE_COLOR); + + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, CP_SET_THREAD_BV); + + /* + * Make sure the timestamp is committed once BV pipe is + * completely done with this submission. + */ + OUT_PKT7(ring, CP_EVENT_WRITE, 4); + OUT_RING(ring, CACHE_CLEAN | BIT(27)); + OUT_RING(ring, lower_32_bits(rbmemptr(ring, bv_fence))); + OUT_RING(ring, upper_32_bits(rbmemptr(ring, bv_fence))); + OUT_RING(ring, submit->seqno); + + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, CP_SET_THREAD_BR); + + /* + * This makes sure that BR doesn't race ahead and commit + * timestamp to memstore while BV is still processing + * this submission. + */ + OUT_PKT7(ring, CP_WAIT_TIMESTAMP, 4); + OUT_RING(ring, 0); + OUT_RING(ring, lower_32_bits(rbmemptr(ring, bv_fence))); + OUT_RING(ring, upper_32_bits(rbmemptr(ring, bv_fence))); + OUT_RING(ring, submit->seqno); + + /* write the ringbuffer timestamp */ + OUT_PKT7(ring, CP_EVENT_WRITE, 4); + OUT_RING(ring, CACHE_CLEAN | CP_EVENT_WRITE_0_IRQ | BIT(27)); + OUT_RING(ring, lower_32_bits(rbmemptr(ring, fence))); + OUT_RING(ring, upper_32_bits(rbmemptr(ring, fence))); + OUT_RING(ring, submit->seqno); + + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, CP_SET_THREAD_BOTH); + + OUT_PKT7(ring, CP_SET_MARKER, 1); + OUT_RING(ring, 0x100); /* IFPC enable */ + + trace_msm_gpu_submit_flush(submit, + gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER)); + + a6xx_flush(gpu, ring); +} + const struct adreno_reglist a612_hwcg[] = { {REG_A6XX_RBBM_CLOCK_CNTL_SP0, 0x22222222}, {REG_A6XX_RBBM_CLOCK_CNTL2_SP0, 0x02222220}, @@ -714,6 +856,15 @@ static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state) else clock_cntl_on = 0x8aa8aa82; + if (adreno_is_a7xx(adreno_gpu)) { + gmu_write(&a6xx_gpu->gmu, REG_A6XX_GPU_GMU_AO_GMU_CGC_MODE_CNTL, + state ? 0x20000 : 0); + gmu_write(&a6xx_gpu->gmu, REG_A6XX_GPU_GMU_AO_GMU_CGC_DELAY_CNTL, + state ? 0x10111 : 0); + gmu_write(&a6xx_gpu->gmu, REG_A6XX_GPU_GMU_AO_GMU_CGC_HYST_CNTL, + state ? 0x5555 : 0); + } + val = gpu_read(gpu, REG_A6XX_RBBM_CLOCK_CNTL); /* Don't re-program the registers if they are already correct */ @@ -721,14 +872,14 @@ static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state) return; /* Disable SP clock before programming HWCG registers */ - if (!adreno_is_a610(adreno_gpu)) + if (!adreno_is_a610(adreno_gpu) && !adreno_is_a7xx(adreno_gpu)) gmu_rmw(gmu, REG_A6XX_GPU_GMU_GX_SPTPRAC_CLOCK_CONTROL, 1, 0); for (i = 0; (reg = &adreno_gpu->info->hwcg[i], reg->offset); i++) gpu_write(gpu, reg->offset, state ? 
reg->value : 0); /* Enable SP clock */ - if (!adreno_is_a610(adreno_gpu)) + if (!adreno_is_a610(adreno_gpu) && !adreno_is_a7xx(adreno_gpu)) gmu_rmw(gmu, REG_A6XX_GPU_GMU_GX_SPTPRAC_CLOCK_CONTROL, 0, 1); gpu_write(gpu, REG_A6XX_RBBM_CLOCK_CNTL, state ? clock_cntl_on : 0); @@ -1017,6 +1168,10 @@ static void a6xx_set_ubwc_config(struct msm_gpu *gpu) uavflagprd_inv << 4 | min_acc_len << 3 | hbb_lo << 1 | ubwc_mode); + if (adreno_is_a7xx(adreno_gpu)) + gpu_write(gpu, REG_A7XX_GRAS_NC_MODE_CNTL, + FIELD_PREP(GENMASK(8, 5), hbb_lo)); + gpu_write(gpu, REG_A6XX_UCHE_MODE_CNTL, min_acc_len << 23 | hbb_lo << 21); } @@ -1049,6 +1204,55 @@ static int a6xx_cp_init(struct msm_gpu *gpu) return a6xx_idle(gpu, ring) ? 0 : -EINVAL; } +static int a7xx_cp_init(struct msm_gpu *gpu) +{ + struct msm_ringbuffer *ring = gpu->rb[0]; + u32 mask; + + /* Disable concurrent binning before sending CP init */ + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, BIT(27)); + + OUT_PKT7(ring, CP_ME_INIT, 7); + + /* Use multiple HW contexts */ + mask = BIT(0); + + /* Enable error detection */ + mask |= BIT(1); + + /* Set default reset state */ + mask |= BIT(3); + + /* Disable save/restore of performance counters across preemption */ + mask |= BIT(6); + + /* Enable the register init list with the spinlock */ + mask |= BIT(8); + + OUT_RING(ring, mask); + + /* Enable multiple hardware contexts */ + OUT_RING(ring, 0x00000003); + + /* Enable error detection */ + OUT_RING(ring, 0x20000000); + + /* Operation mode mask */ + OUT_RING(ring, 0x00000002); + + /* *Don't* send a power up reg list for concurrent binning (TODO) */ + /* Lo address */ + OUT_RING(ring, 0x00000000); + /* Hi address */ + OUT_RING(ring, 0x00000000); + /* BIT(31) set => read the regs from the list */ + OUT_RING(ring, 0x00000000); + + a6xx_flush(gpu, ring); + return a6xx_idle(gpu, ring) ? 0 : -EINVAL; +} + /* * Check that the microcode version is new enough to include several key * security fixes. Return true if the ucode is safe. @@ -1065,6 +1269,10 @@ static bool a6xx_ucode_check_version(struct a6xx_gpu *a6xx_gpu, if (IS_ERR(buf)) return false; + /* A7xx is safe! 
*/ + if (adreno_is_a7xx(adreno_gpu)) + return true; + /* * Targets up to a640 (a618, a630 and a640) need to check for a * microcode version that is patched to support the whereami opcode or @@ -1181,16 +1389,39 @@ static int a6xx_zap_shader_init(struct msm_gpu *gpu) } #define A6XX_INT_MASK (A6XX_RBBM_INT_0_MASK_CP_AHB_ERROR | \ - A6XX_RBBM_INT_0_MASK_RBBM_ATB_ASYNCFIFO_OVERFLOW | \ - A6XX_RBBM_INT_0_MASK_CP_HW_ERROR | \ - A6XX_RBBM_INT_0_MASK_CP_IB2 | \ - A6XX_RBBM_INT_0_MASK_CP_IB1 | \ - A6XX_RBBM_INT_0_MASK_CP_RB | \ - A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS | \ - A6XX_RBBM_INT_0_MASK_RBBM_ATB_BUS_OVERFLOW | \ - A6XX_RBBM_INT_0_MASK_RBBM_HANG_DETECT | \ - A6XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS | \ - A6XX_RBBM_INT_0_MASK_UCHE_TRAP_INTR) + A6XX_RBBM_INT_0_MASK_RBBM_ATB_ASYNCFIFO_OVERFLOW | \ + A6XX_RBBM_INT_0_MASK_CP_HW_ERROR | \ + A6XX_RBBM_INT_0_MASK_CP_IB2 | \ + A6XX_RBBM_INT_0_MASK_CP_IB1 | \ + A6XX_RBBM_INT_0_MASK_CP_RB | \ + A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS | \ + A6XX_RBBM_INT_0_MASK_RBBM_ATB_BUS_OVERFLOW | \ + A6XX_RBBM_INT_0_MASK_RBBM_HANG_DETECT | \ + A6XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS | \ + A6XX_RBBM_INT_0_MASK_UCHE_TRAP_INTR) + +#define A7XX_INT_MASK (A6XX_RBBM_INT_0_MASK_CP_AHB_ERROR | \ + A6XX_RBBM_INT_0_MASK_RBBM_ATB_ASYNCFIFO_OVERFLOW | \ + A6XX_RBBM_INT_0_MASK_RBBM_GPC_ERROR | \ + A6XX_RBBM_INT_0_MASK_CP_SW | \ + A6XX_RBBM_INT_0_MASK_CP_HW_ERROR | \ + A6XX_RBBM_INT_0_MASK_PM4CPINTERRUPT | \ + A6XX_RBBM_INT_0_MASK_CP_RB_DONE_TS | \ + A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS | \ + A6XX_RBBM_INT_0_MASK_RBBM_ATB_BUS_OVERFLOW | \ + A6XX_RBBM_INT_0_MASK_RBBM_HANG_DETECT | \ + A6XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS | \ + A6XX_RBBM_INT_0_MASK_UCHE_TRAP_INTR | \ + A6XX_RBBM_INT_0_MASK_TSBWRITEERROR) + +#define A7XX_APRIV_MASK (A6XX_CP_APRIV_CNTL_ICACHE | \ + A6XX_CP_APRIV_CNTL_RBFETCH | \ + A6XX_CP_APRIV_CNTL_RBPRIVLEVEL | \ + A6XX_CP_APRIV_CNTL_RBRPWB) + +#define A7XX_BR_APRIVMASK (A7XX_APRIV_MASK | \ + A6XX_CP_APRIV_CNTL_CDREAD | \ + A6XX_CP_APRIV_CNTL_CDWRITE) static int hw_init(struct msm_gpu *gpu) { @@ -1232,19 +1463,21 @@ static int hw_init(struct msm_gpu *gpu) gpu_write64(gpu, REG_A6XX_RBBM_SECVID_TSB_TRUSTED_BASE, 0x00000000); gpu_write(gpu, REG_A6XX_RBBM_SECVID_TSB_TRUSTED_SIZE, 0x00000000); - /* Turn on 64 bit addressing for all blocks */ - gpu_write(gpu, REG_A6XX_CP_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_VSC_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_GRAS_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_RB_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_PC_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_HLSQ_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_VFD_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_VPC_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_UCHE_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_SP_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_TPL1_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_RBBM_SECVID_TSB_ADDR_MODE_CNTL, 0x1); + if (!adreno_is_a7xx(adreno_gpu)) { + /* Turn on 64 bit addressing for all blocks */ + gpu_write(gpu, REG_A6XX_CP_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_VSC_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_GRAS_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_RB_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_PC_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_HLSQ_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_VFD_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_VPC_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_UCHE_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_SP_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, 
REG_A6XX_TPL1_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_RBBM_SECVID_TSB_ADDR_MODE_CNTL, 0x1); + } /* enable hardware clockgating */ a6xx_set_hwcg(gpu, true); @@ -1252,12 +1485,14 @@ static int hw_init(struct msm_gpu *gpu) /* VBIF/GBIF start*/ if (adreno_is_a610(adreno_gpu) || adreno_is_a640_family(adreno_gpu) || - adreno_is_a650_family(adreno_gpu)) { + adreno_is_a650_family(adreno_gpu) || + adreno_is_a7xx(adreno_gpu)) { gpu_write(gpu, REG_A6XX_GBIF_QSB_SIDE0, 0x00071620); gpu_write(gpu, REG_A6XX_GBIF_QSB_SIDE1, 0x00071620); gpu_write(gpu, REG_A6XX_GBIF_QSB_SIDE2, 0x00071620); gpu_write(gpu, REG_A6XX_GBIF_QSB_SIDE3, 0x00071620); - gpu_write(gpu, REG_A6XX_RBBM_GBIF_CLIENT_QOS_CNTL, 0x3); + gpu_write(gpu, REG_A6XX_RBBM_GBIF_CLIENT_QOS_CNTL, + adreno_is_a7xx(adreno_gpu) ? 0x2120212 : 0x3); } else { gpu_write(gpu, REG_A6XX_RBBM_VBIF_CLIENT_QOS_CNTL, 0x3); } @@ -1265,13 +1500,21 @@ static int hw_init(struct msm_gpu *gpu) if (adreno_is_a630(adreno_gpu)) gpu_write(gpu, REG_A6XX_VBIF_GATE_OFF_WRREQ_EN, 0x00000009); + if (adreno_is_a7xx(adreno_gpu)) + gpu_write(gpu, REG_A6XX_UCHE_GBIF_GX_CONFIG, 0x10240e0); + /* Make all blocks contribute to the GPU BUSY perf counter */ gpu_write(gpu, REG_A6XX_RBBM_PERFCTR_GPU_BUSY_MASKED, 0xffffffff); /* Disable L2 bypass in the UCHE */ - gpu_write64(gpu, REG_A6XX_UCHE_WRITE_RANGE_MAX, 0x0001ffffffffffc0llu); - gpu_write64(gpu, REG_A6XX_UCHE_TRAP_BASE, 0x0001fffffffff000llu); - gpu_write64(gpu, REG_A6XX_UCHE_WRITE_THRU_BASE, 0x0001fffffffff000llu); + if (adreno_is_a7xx(adreno_gpu)) { + gpu_write64(gpu, REG_A6XX_UCHE_TRAP_BASE, 0x0001fffffffff000llu); + gpu_write64(gpu, REG_A6XX_UCHE_WRITE_THRU_BASE, 0x0001fffffffff000llu); + } else { + gpu_write64(gpu, REG_A6XX_UCHE_WRITE_RANGE_MAX, 0x0001ffffffffffc0llu); + gpu_write64(gpu, REG_A6XX_UCHE_TRAP_BASE, 0x0001fffffffff000llu); + gpu_write64(gpu, REG_A6XX_UCHE_WRITE_THRU_BASE, 0x0001fffffffff000llu); + } if (!adreno_is_a650_family(adreno_gpu)) { /* Set the GMEM VA range [0x100000:0x100000 + gpu->gmem - 1] */ @@ -1281,8 +1524,12 @@ static int hw_init(struct msm_gpu *gpu) 0x00100000 + adreno_gpu->info->gmem - 1); } - gpu_write(gpu, REG_A6XX_UCHE_FILTER_CNTL, 0x804); - gpu_write(gpu, REG_A6XX_UCHE_CACHE_WAYS, 0x4); + if (adreno_is_a7xx(adreno_gpu)) + gpu_write(gpu, REG_A6XX_UCHE_CACHE_WAYS, BIT(23)); + else { + gpu_write(gpu, REG_A6XX_UCHE_FILTER_CNTL, 0x804); + gpu_write(gpu, REG_A6XX_UCHE_CACHE_WAYS, 0x4); + } if (adreno_is_a640_family(adreno_gpu) || adreno_is_a650_family(adreno_gpu)) { gpu_write(gpu, REG_A6XX_CP_ROQ_THRESHOLDS_2, 0x02000140); @@ -1290,7 +1537,7 @@ static int hw_init(struct msm_gpu *gpu) } else if (adreno_is_a610(adreno_gpu)) { gpu_write(gpu, REG_A6XX_CP_ROQ_THRESHOLDS_2, 0x00800060); gpu_write(gpu, REG_A6XX_CP_ROQ_THRESHOLDS_1, 0x40201b16); - } else { + } else if (!adreno_is_a7xx(adreno_gpu)) { gpu_write(gpu, REG_A6XX_CP_ROQ_THRESHOLDS_2, 0x010000c0); gpu_write(gpu, REG_A6XX_CP_ROQ_THRESHOLDS_1, 0x8040362c); } @@ -1302,7 +1549,7 @@ static int hw_init(struct msm_gpu *gpu) if (adreno_is_a610(adreno_gpu)) { gpu_write(gpu, REG_A6XX_CP_MEM_POOL_SIZE, 48); gpu_write(gpu, REG_A6XX_CP_MEM_POOL_DBG_ADDR, 47); - } else + } else if (!adreno_is_a7xx(adreno_gpu)) gpu_write(gpu, REG_A6XX_CP_MEM_POOL_SIZE, 128); /* Setting the primFifo thresholds default values, @@ -1318,7 +1565,7 @@ static int hw_init(struct msm_gpu *gpu) gpu_write(gpu, REG_A6XX_PC_DBG_ECO_CNTL, 0x00018000); else if (adreno_is_a610(adreno_gpu)) gpu_write(gpu, REG_A6XX_PC_DBG_ECO_CNTL, 0x00080000); - else + else if (!adreno_is_a7xx(adreno_gpu)) 
gpu_write(gpu, REG_A6XX_PC_DBG_ECO_CNTL, 0x00180000); /* Set the AHB default slave response to "ERROR" */ @@ -1327,6 +1574,12 @@ static int hw_init(struct msm_gpu *gpu) /* Turn on performance counters */ gpu_write(gpu, REG_A6XX_RBBM_PERFCTR_CNTL, 0x1); + if (adreno_is_a7xx(adreno_gpu)) { + /* Turn on the IFPC counter (countable 4 on XOCLK4) */ + gmu_write(&a6xx_gpu->gmu, REG_A6XX_GMU_CX_GMU_POWER_COUNTER_SELECT_1, + FIELD_PREP(GENMASK(7, 0), 0x4)); + } + /* Select CP0 to always count cycles */ gpu_write(gpu, REG_A6XX_CP_PERFCTR_CP_SEL(0), PERF_CP_ALWAYS_COUNT); @@ -1373,15 +1626,31 @@ static int hw_init(struct msm_gpu *gpu) /* Set dualQ + disable afull for A660 GPU */ if (adreno_is_a660(adreno_gpu)) gpu_write(gpu, REG_A6XX_UCHE_CMDQ_CONFIG, 0x66906); + else if (adreno_is_a7xx(adreno_gpu)) + gpu_write(gpu, REG_A6XX_UCHE_CMDQ_CONFIG, + FIELD_PREP(GENMASK(19, 16), 6) | + FIELD_PREP(GENMASK(15, 12), 6) | + FIELD_PREP(GENMASK(11, 8), 9) | + BIT(3) | BIT(2) | + FIELD_PREP(GENMASK(1, 0), 2)); /* Enable expanded apriv for targets that support it */ if (gpu->hw_apriv) { - gpu_write(gpu, REG_A6XX_CP_APRIV_CNTL, - (1 << 6) | (1 << 5) | (1 << 3) | (1 << 2) | (1 << 1)); + if (adreno_is_a7xx(adreno_gpu)) { + gpu_write(gpu, REG_A6XX_CP_APRIV_CNTL, + A7XX_BR_APRIVMASK); + gpu_write(gpu, REG_A7XX_CP_BV_APRIV_CNTL, + A7XX_APRIV_MASK); + gpu_write(gpu, REG_A7XX_CP_LPAC_APRIV_CNTL, + A7XX_APRIV_MASK); + } else + gpu_write(gpu, REG_A6XX_CP_APRIV_CNTL, + BIT(6) | BIT(5) | BIT(3) | BIT(2) | BIT(1)); } /* Enable interrupts */ - gpu_write(gpu, REG_A6XX_RBBM_INT_0_MASK, A6XX_INT_MASK); + gpu_write(gpu, REG_A6XX_RBBM_INT_0_MASK, + adreno_is_a7xx(adreno_gpu) ? A7XX_INT_MASK : A6XX_INT_MASK); ret = adreno_hw_init(gpu); if (ret) @@ -1408,6 +1677,12 @@ static int hw_init(struct msm_gpu *gpu) shadowptr(a6xx_gpu, gpu->rb[0])); } + /* ..which means "always" on A7xx, also for BV shadow */ + if (adreno_is_a7xx(adreno_gpu)) { + gpu_write64(gpu, REG_A7XX_CP_BV_RB_RPTR_ADDR, + rbmemptr(gpu->rb[0], bv_fence)); + } + /* Always come up on rb 0 */ a6xx_gpu->cur_ring = gpu->rb[0]; @@ -1416,7 +1691,7 @@ static int hw_init(struct msm_gpu *gpu) /* Enable the SQE_to start the CP engine */ gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 1); - ret = a6xx_cp_init(gpu); + ret = adreno_is_a7xx(adreno_gpu) ? 
a7xx_cp_init(gpu) : a6xx_cp_init(gpu); if (ret) goto out; @@ -1653,7 +1928,7 @@ static void a6xx_cp_hw_err_irq(struct msm_gpu *gpu) (val & 0x3ffff), val); } - if (status & A6XX_CP_INT_CP_AHB_ERROR) + if (status & A6XX_CP_INT_CP_AHB_ERROR && !adreno_is_a7xx(to_adreno_gpu(gpu))) dev_err_ratelimited(&gpu->pdev->dev, "CP AHB error interrupt\n"); if (status & A6XX_CP_INT_CP_VSD_PARITY_ERROR) @@ -1803,6 +2078,35 @@ static void a6xx_llc_activate(struct a6xx_gpu *a6xx_gpu) gpu_rmw(gpu, REG_A6XX_GBIF_SCACHE_CNTL1, GENMASK(24, 0), cntl1_regval); } +static void a7xx_llc_activate(struct a6xx_gpu *a6xx_gpu) +{ + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; + struct msm_gpu *gpu = &adreno_gpu->base; + + if (IS_ERR(a6xx_gpu->llc_mmio)) + return; + + if (!llcc_slice_activate(a6xx_gpu->llc_slice)) { + u32 gpu_scid = llcc_get_slice_id(a6xx_gpu->llc_slice); + + gpu_scid &= GENMASK(4, 0); + + gpu_write(gpu, REG_A6XX_GBIF_SCACHE_CNTL1, + FIELD_PREP(GENMASK(29, 25), gpu_scid) | + FIELD_PREP(GENMASK(24, 20), gpu_scid) | + FIELD_PREP(GENMASK(19, 15), gpu_scid) | + FIELD_PREP(GENMASK(14, 10), gpu_scid) | + FIELD_PREP(GENMASK(9, 5), gpu_scid) | + FIELD_PREP(GENMASK(4, 0), gpu_scid)); + + gpu_write(gpu, REG_A6XX_GBIF_SCACHE_CNTL0, + FIELD_PREP(GENMASK(14, 10), gpu_scid) | + BIT(8)); + } + + llcc_slice_activate(a6xx_gpu->htw_llc_slice); +} + static void a6xx_llc_slices_destroy(struct a6xx_gpu *a6xx_gpu) { /* No LLCC on non-RPMh (and by extension, non-GMU) SoCs */ @@ -1814,7 +2118,7 @@ static void a6xx_llc_slices_destroy(struct a6xx_gpu *a6xx_gpu) } static void a6xx_llc_slices_init(struct platform_device *pdev, - struct a6xx_gpu *a6xx_gpu) + struct a6xx_gpu *a6xx_gpu, bool is_a7xx) { struct device_node *phandle; @@ -1823,18 +2127,18 @@ static void a6xx_llc_slices_init(struct platform_device *pdev, return; /* - * There is a different programming path for targets with an mmu500 - * attached, so detect if that is the case + * There is a different programming path for A6xx targets with an + * mmu500 attached, so detect if that is the case */ phandle = of_parse_phandle(pdev->dev.of_node, "iommus", 0); a6xx_gpu->have_mmu500 = (phandle && of_device_is_compatible(phandle, "arm,mmu-500")); of_node_put(phandle); - if (a6xx_gpu->have_mmu500) - a6xx_gpu->llc_mmio = NULL; - else + if (is_a7xx || !a6xx_gpu->have_mmu500) a6xx_gpu->llc_mmio = msm_ioremap(pdev, "cx_mem"); + else + a6xx_gpu->llc_mmio = NULL; a6xx_gpu->llc_slice = llcc_slice_getd(LLCC_GPU); a6xx_gpu->htw_llc_slice = llcc_slice_getd(LLCC_GPUHTW); @@ -1920,7 +2224,7 @@ static int a6xx_gmu_pm_resume(struct msm_gpu *gpu) msm_devfreq_resume(gpu); - a6xx_llc_activate(a6xx_gpu); + adreno_is_a7xx(adreno_gpu) ? 
a7xx_llc_activate : a6xx_llc_activate(a6xx_gpu); return ret; } @@ -2307,6 +2611,37 @@ static const struct adreno_gpu_funcs funcs_gmuwrapper = { .get_timestamp = a6xx_get_timestamp, }; +static const struct adreno_gpu_funcs funcs_a7xx = { + .base = { + .get_param = adreno_get_param, + .set_param = adreno_set_param, + .hw_init = a6xx_hw_init, + .ucode_load = a6xx_ucode_load, + .pm_suspend = a6xx_gmu_pm_suspend, + .pm_resume = a6xx_gmu_pm_resume, + .recover = a6xx_recover, + .submit = a7xx_submit, + .active_ring = a6xx_active_ring, + .irq = a6xx_irq, + .destroy = a6xx_destroy, +#if defined(CONFIG_DRM_MSM_GPU_STATE) + .show = a6xx_show, +#endif + .gpu_busy = a6xx_gpu_busy, + .gpu_get_freq = a6xx_gmu_get_freq, + .gpu_set_freq = a6xx_gpu_set_freq, +#if defined(CONFIG_DRM_MSM_GPU_STATE) + .gpu_state_get = a6xx_gpu_state_get, + .gpu_state_put = a6xx_gpu_state_put, +#endif + .create_address_space = a6xx_create_address_space, + .create_private_address_space = a6xx_create_private_address_space, + .get_rptr = a6xx_get_rptr, + .progress = a6xx_progress, + }, + .get_timestamp = a6xx_gmu_get_timestamp, +}; + struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) { struct msm_drm_private *priv = dev->dev_private; @@ -2316,6 +2651,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) struct a6xx_gpu *a6xx_gpu; struct adreno_gpu *adreno_gpu; struct msm_gpu *gpu; + bool is_a7xx; int ret; a6xx_gpu = kzalloc(sizeof(*a6xx_gpu), GFP_KERNEL); @@ -2339,7 +2675,10 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) adreno_gpu->base.hw_apriv = !!(config->info->quirks & ADRENO_QUIRK_HAS_HW_APRIV); - a6xx_llc_slices_init(pdev, a6xx_gpu); + /* gpu->info only gets assigned in adreno_gpu_init() */ + is_a7xx = config->info->family == ADRENO_7XX_GEN1; + + a6xx_llc_slices_init(pdev, a6xx_gpu, is_a7xx); ret = a6xx_set_supported_hw(&pdev->dev, config->info); if (ret) { @@ -2347,7 +2686,9 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) return ERR_PTR(ret); } - if (adreno_has_gmu_wrapper(adreno_gpu)) + if (is_a7xx) + ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs_a7xx, 1); + else if (adreno_has_gmu_wrapper(adreno_gpu)) ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs_gmuwrapper, 1); else ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 1); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index 695cce82d914..1c31a8bd19da 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -575,6 +575,7 @@ int adreno_hw_init(struct msm_gpu *gpu) ring->cur = ring->start; ring->next = ring->start; ring->memptrs->rptr = 0; + ring->memptrs->bv_fence = ring->fctx->completed_fence; /* Detect and clean up an impossible fence, ie. 
if GPU managed * to scribble something invalid, we don't want that to confuse diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index 49f38edf9854..3e69ef9dde3f 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -46,6 +46,7 @@ enum adreno_family { ADRENO_6XX_GEN2, /* a640 family */ ADRENO_6XX_GEN3, /* a650 family */ ADRENO_6XX_GEN4, /* a660 family */ + ADRENO_7XX_GEN1, /* a730 family */ }; #define ADRENO_QUIRK_TWO_PASS_USE_WFI BIT(0) @@ -401,6 +402,17 @@ static inline int adreno_is_a640_family(const struct adreno_gpu *gpu) return gpu->info->family == ADRENO_6XX_GEN2; } +static inline int adreno_is_a730(struct adreno_gpu *gpu) +{ + return gpu->info->chip_ids[0] == 0x07030001; +} + +static inline int adreno_is_a7xx(struct adreno_gpu *gpu) +{ + /* Update with non-fake (i.e. non-A702) Gen 7 GPUs */ + return adreno_is_a730(gpu); +} + u64 adreno_private_address_space_size(struct msm_gpu *gpu); int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx, uint32_t param, uint64_t *value, uint32_t *len); diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h index 698b333abccd..0d6beb8cd39a 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.h +++ b/drivers/gpu/drm/msm/msm_ringbuffer.h @@ -30,6 +30,8 @@ struct msm_gpu_submit_stats { struct msm_rbmemptrs { volatile uint32_t rptr; volatile uint32_t fence; + /* Introduced on A7xx */ + volatile uint32_t bv_fence; volatile struct msm_gpu_submit_stats stats[MSM_GPU_SUBMIT_STATS_COUNT]; volatile u64 ttbr0; From patchwork Tue Aug 8 21:02:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Konrad Dybcio X-Patchwork-Id: 711640 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9C5A3C04FDF for ; Tue, 8 Aug 2023 21:04:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235282AbjHHVD7 (ORCPT ); Tue, 8 Aug 2023 17:03:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44528 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235687AbjHHVDk (ORCPT ); Tue, 8 Aug 2023 17:03:40 -0400 Received: from mail-lj1-x22c.google.com (mail-lj1-x22c.google.com [IPv6:2a00:1450:4864:20::22c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 26D3D5276 for ; Tue, 8 Aug 2023 14:03:07 -0700 (PDT) Received: by mail-lj1-x22c.google.com with SMTP id 38308e7fff4ca-2b9b9f0387dso95724511fa.0 for ; Tue, 08 Aug 2023 14:03:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1691528585; x=1692133385; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=oWSqXH+9RfbwUTEnzUS/yIoSzpfO5MDzRDb4ETGPnXo=; b=HDjwem1ulOLEnOWwiJkSQF5CYp47hiV60zxYdQN0lOvnSjDIYHmdTvaPoFHM51st+1 hYJFv+X9JYZ6CXSMmV+3JcekbOuuv9Z0NbMJZWoDG1UCpkR9990ABDwO+Jtb0G5dOM09 0wwadN6rqKQ3qDofigBIoMTKhaav/nyoYL9khx6W3eGYgIwsIHuUDeF2YU4A3PiyPaUV 9NM2vLTCzWKVOJ5axWTUTjKab3NNyLujAN9QKx7Y/Z+Y7aW7FTXta/LEwVR/PNfa2XrA rjW+2bwO30ji2U8WPdreGKkVULpqPYVVp/JaY9zzS0dOr7HNuZTIpRGc9/xGyacj/FFf 7i0g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691528585; x=1692133385; 
h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=oWSqXH+9RfbwUTEnzUS/yIoSzpfO5MDzRDb4ETGPnXo=; b=E6cp61MmqQjk4Ff29Qeop6OC22nRuNBw6EVBE1r336PRMM+vyKCzfrscXOpsS6MsZW eM3P9ADKqQ8gPoh2YQbBFWXnOty35a3krEAInoyoC+93Ta6fSgNpMCrPj6OAGT2oLUMe IYdJP0hp5xF55XI/hfjcjSVT/hhMow/3QOci2AdeAbUQcnqi4OuPZCeL6emPBgc9hblo 91JroXO1RlIMoSA38c8xiuOY8pn2nBEWtixyyTs78CbUXkxC1JCOB/hYQhjl7VOF7GGO XtEfgK5O8mTJnKiNsoKPb0aPRf65+xvB+lnRoRaV3nQHFjlT6u6ap2vKeTQt2u/VxE5P xAYA== X-Gm-Message-State: AOJu0YyBM/PE8DUzLodRRJ3KYGBF0ZQ1LMCHGW4SK7XKCNUCxOzEJecQ NB6C20WWTp7UUOcFK9CXoHqEVA== X-Google-Smtp-Source: AGHT+IH30kcdgrakV3yA9VIz6jtlC0ExfeVhv/nhNZepLIccW64T5ouuUZPiVHp6SNXZ5fohgyAt6A== X-Received: by 2002:a2e:b009:0:b0:2b6:df23:2117 with SMTP id y9-20020a2eb009000000b002b6df232117mr484344ljk.43.1691528585080; Tue, 08 Aug 2023 14:03:05 -0700 (PDT) Received: from [192.168.1.101] (abxi185.neoplus.adsl.tpnet.pl. [83.9.2.185]) by smtp.gmail.com with ESMTPSA id h11-20020a2eb0eb000000b002b6cc17add3sm2431483ljl.25.2023.08.08.14.03.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 08 Aug 2023 14:03:04 -0700 (PDT) From: Konrad Dybcio Date: Tue, 08 Aug 2023 23:02:47 +0200 Subject: [PATCH v2 09/14] drm/msm/a6xx: Send ACD state to QMP at GMU resume MIME-Version: 1.0 Message-Id: <20230628-topic-a7xx_drmmsm-v2-9-1439e1b2343f@linaro.org> References: <20230628-topic-a7xx_drmmsm-v2-0-1439e1b2343f@linaro.org> In-Reply-To: <20230628-topic-a7xx_drmmsm-v2-0-1439e1b2343f@linaro.org> To: Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , Daniel Vetter , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Bjorn Andersson Cc: Marijn Suijten , linux-arm-msm@vger.kernel.org, dri-devel@lists.freedesktop.org, freedreno@lists.freedesktop.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, Konrad Dybcio , Neil Armstrong X-Mailer: b4 0.12.2 X-Developer-Signature: v=1; a=ed25519-sha256; t=1691528566; l=3163; i=konrad.dybcio@linaro.org; s=20230215; h=from:subject:message-id; bh=JkmXZM9EFVG7rbP7M14vpehHX3UdHrePo85jPFtHIho=; b=ViUB144QP1ZseV6WexhJNzar78IsyKH78bT0Xt0p7SqasyBwYAzOh2k7+FDqwIYL1eHLPJWBA tu3tvQBD+r6DlevjhGDLQ/8G40/Oonz8jbwPfKu+zeA0HCwI0qi0QTQ X-Developer-Key: i=konrad.dybcio@linaro.org; a=ed25519; pk=iclgkYvtl2w05SSXO5EjjSYlhFKsJ+5OSZBjOkQuEms= Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org The QMP mailbox expects to be notified of the ACD (Adaptive Clock Distribution) state. Get a handle to the mailbox at probe time and poke it at GMU resume. Since we don't fully support ACD yet, hardcode the message to "val: 0" (state = disabled). 
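For reference, the notification is just a short key/value string handed to the AOSS QMP mailbox. A minimal sketch of how this could be wrapped into a helper once real ACD levels are wired up (the helper name and its "enabled" parameter are hypothetical and not part of this patch; the message format, buffer size and qmp_send() call mirror the hunk below):

static int a6xx_gmu_notify_acd_state(struct a6xx_gmu *gmu, bool enabled)
{
	char buf[GMU_ACD_STATE_MSG_LEN];
	int ret;

	/* Nothing to do on targets that have no QMP mailbox handle */
	if (IS_ERR_OR_NULL(gmu->qmp))
		return 0;

	/* AOSS expects e.g. "{class: gpu, res: acd, val: 1}" */
	ret = snprintf(buf, sizeof(buf), "{class: gpu, res: acd, val: %d}",
		       enabled ? 1 : 0);
	if (WARN_ON(ret >= sizeof(buf)))
		return -EINVAL;

	return qmp_send(gmu->qmp, buf, sizeof(buf));
}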
Tested-by: Neil Armstrong # on SM8550-QRD Tested-by: Dmitry Baryshkov # sm8450 Signed-off-by: Konrad Dybcio --- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 21 +++++++++++++++++++++ drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 3 +++ 2 files changed, 24 insertions(+) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index 75984260898e..17e1e72f5d7d 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -980,11 +980,13 @@ static void a6xx_gmu_set_initial_bw(struct msm_gpu *gpu, struct a6xx_gmu *gmu) dev_pm_opp_put(gpu_opp); } +#define GMU_ACD_STATE_MSG_LEN 36 int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu) { struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; struct msm_gpu *gpu = &adreno_gpu->base; struct a6xx_gmu *gmu = &a6xx_gpu->gmu; + char buf[GMU_ACD_STATE_MSG_LEN]; int status, ret; if (WARN(!gmu->initialized, "The GMU is not set up yet\n")) @@ -992,6 +994,18 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu) gmu->hung = false; + /* Notify AOSS about the ACD state (unimplemented for now => disable it) */ + if (!IS_ERR(gmu->qmp)) { + ret = snprintf(buf, sizeof(buf), + "{class: gpu, res: acd, val: %d}", + 0 /* Hardcode ACD to be disabled for now */); + WARN_ON(ret >= GMU_ACD_STATE_MSG_LEN); + + ret = qmp_send(gmu->qmp, buf, sizeof(buf)); + if (ret) + dev_err(gmu->dev, "failed to send GPU ACD state\n"); + } + /* Turn on the resources */ pm_runtime_get_sync(gmu->dev); @@ -1744,6 +1758,10 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node) goto detach_cxpd; } + gmu->qmp = qmp_get(gmu->dev); + if (IS_ERR(gmu->qmp) && adreno_is_a7xx(adreno_gpu)) + return PTR_ERR(gmu->qmp); + init_completion(&gmu->pd_gate); complete_all(&gmu->pd_gate); gmu->pd_nb.notifier_call = cxpd_notifier_cb; @@ -1767,6 +1785,9 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node) return 0; + if (!IS_ERR_OR_NULL(gmu->qmp)) + qmp_put(gmu->qmp); + detach_cxpd: dev_pm_domain_detach(gmu->cxpd, false); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h index 236f81a43caa..592b296aab22 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h @@ -8,6 +8,7 @@ #include #include #include +#include #include "msm_drv.h" #include "a6xx_hfi.h" @@ -96,6 +97,8 @@ struct a6xx_gmu { /* For power domain callback */ struct notifier_block pd_nb; struct completion pd_gate; + + struct qmp *qmp; }; static inline u32 gmu_read(struct a6xx_gmu *gmu, u32 offset) From patchwork Tue Aug 8 21:02:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Konrad Dybcio X-Patchwork-Id: 712825 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0CE51C04FE1 for ; Tue, 8 Aug 2023 21:04:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234963AbjHHVEA (ORCPT ); Tue, 8 Aug 2023 17:04:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57052 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235766AbjHHVDl (ORCPT ); Tue, 8 Aug 2023 17:03:41 -0400 Received: from mail-lj1-x22b.google.com (mail-lj1-x22b.google.com [IPv6:2a00:1450:4864:20::22b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 660CD59B9 for ; Tue, 8 Aug 2023 14:03:08 -0700 (PDT) 
Received: by mail-lj1-x22b.google.com with SMTP id 38308e7fff4ca-2b9c55e0fbeso94816571fa.2 for ; Tue, 08 Aug 2023 14:03:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1691528586; x=1692133386; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=zThXIzbWU/BroFBAHAP99vCVEFjH5bD8LBiNoj0YJvE=; b=zwv8HLFYxye0L1MB/rYFcEksw2FK5jzLSQmog0Kx7s96skYs7vRd3DqNKnJcnGV0rh XxMr+TX7UgG6Ah8TKl3xCFyeymolRcQSb1zfU/50qs1EQ5giXVcnVyHVmMIrkqBeYHjm X/tuF5zGu+1fFDoNaAgkEJ79mh1ELxWyFWYx16YtgM29jvYq1zsfcrzX2MudrooxY+Mn zwx2X1ZAEA6X0FFc+dC1iv9Xn1D2mFrTIdTtEifvKC7EPfTLxwLEoiBaqBiaNb7pJ8+O dqVFt4HhoyoBxLY/GubLDxxz5Ry0H6/mivbQwcc00MuyQ3GzyanPaGh7uWPfTg8+5YL3 lTWA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691528586; x=1692133386; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=zThXIzbWU/BroFBAHAP99vCVEFjH5bD8LBiNoj0YJvE=; b=SwCb9QSBRK2Xbepv8N7NMhURz8FLT34Xezgdh5XNh3P33Vv0Kh+e8KTNvwzI5h1jEO TyGUAvXD4KI9RXeae59pRRtBwD5cPKLTTgLoHYnVZn+3/HVizCA5ndRsx4/+xmxAb+A5 qjaIfIMmikQ73PpqP0XOqZnsWb295V4val8GbO8VVW5BcuRphgkNvWLoaowL1XxU871/ BDDPU8vYOKSrhJKTf2bGcjD3bl4BSVij7nydnTq8I0qOZF15MUdoZKkrFxA4VNJRNXNT 2nkm/JRXQvILXm3SJJGJxu87C08GyxiDy6KrkBYiIaHWvzHlLLr4jiHrvyFK48ym3p4M wKIQ== X-Gm-Message-State: AOJu0YxOD19TrMbQHna38PHFtDhdzaEiy4jxBpMRmzLUDjH2+BJ2LuLj iBnCe8n0UxskH0GkQnHSO8xTcA== X-Google-Smtp-Source: AGHT+IHZNe4lgeDJxTN+qXQRHIvX6WX0+lgI8/FtX6nSBcE3VsHAkx6uY3SGy1JXj2gxI/JI9TJmOw== X-Received: by 2002:a2e:9848:0:b0:2b6:bb08:91c4 with SMTP id e8-20020a2e9848000000b002b6bb0891c4mr508151ljj.42.1691528586694; Tue, 08 Aug 2023 14:03:06 -0700 (PDT) Received: from [192.168.1.101] (abxi185.neoplus.adsl.tpnet.pl. [83.9.2.185]) by smtp.gmail.com with ESMTPSA id h11-20020a2eb0eb000000b002b6cc17add3sm2431483ljl.25.2023.08.08.14.03.05 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 08 Aug 2023 14:03:06 -0700 (PDT) From: Konrad Dybcio Date: Tue, 08 Aug 2023 23:02:48 +0200 Subject: [PATCH v2 10/14] drm/msm/a6xx: Mostly implement A7xx gpu_state MIME-Version: 1.0 Message-Id: <20230628-topic-a7xx_drmmsm-v2-10-1439e1b2343f@linaro.org> References: <20230628-topic-a7xx_drmmsm-v2-0-1439e1b2343f@linaro.org> In-Reply-To: <20230628-topic-a7xx_drmmsm-v2-0-1439e1b2343f@linaro.org> To: Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , Daniel Vetter , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Bjorn Andersson Cc: Marijn Suijten , linux-arm-msm@vger.kernel.org, dri-devel@lists.freedesktop.org, freedreno@lists.freedesktop.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, Konrad Dybcio , Neil Armstrong X-Mailer: b4 0.12.2 X-Developer-Signature: v=1; a=ed25519-sha256; t=1691528566; l=7067; i=konrad.dybcio@linaro.org; s=20230215; h=from:subject:message-id; bh=loqI1PthvvbTeOGvwIoPnH2y/NMnLW1i8nFc76YUrQk=; b=ph8zeQdC2aYwjNDf+2zW+AfbXLeF/T6m5Agopbfo13YUHQ22S8+BdrAtQGBdypqyFzbVZmVld +L23vLbBN71BMTYEEWZb5iv3LnyhUKTOG2USBJpvMoro9UYlqwZ8Hy5 X-Developer-Key: i=konrad.dybcio@linaro.org; a=ed25519; pk=iclgkYvtl2w05SSXO5EjjSYlhFKsJ+5OSZBjOkQuEms= Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Provide the necessary alternations to mostly support state dumping on A7xx. Newer GPUs will probably require more changes here. Crashdumper and debugbus remain untested. 
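The new A7xx tables reuse the driver's existing "indexed register" scheme: each entry names an ADDR/DATA register pair, the snapshot code programs the ADDR register once and then reads the DATA register count times, with the hardware auto-incrementing the internal pointer on every read. Roughly, following the existing a6xx_get_indexed_regs() (local names abbreviated for illustration):

	/* All the indexed banks start at internal address 0 */
	gpu_write(gpu, indexed->addr, 0);

	/* Each DATA read returns one dword and advances the pointer */
	for (i = 0; i < indexed->count; i++)
		obj->data[i] = gpu_read(gpu, indexed->data);

The A7xx-specific twist is that some counts (such as the CP ROQ size) are no longer directly readable from APSS and are instead fetched through the SQE debug interface, as done in a7xx_get_cp_roq_size() below.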
Tested-by: Neil Armstrong # on SM8550-QRD Tested-by: Dmitry Baryshkov # sm8450 Signed-off-by: Konrad Dybcio --- drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c | 52 +++++++++++++++++++++++- drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h | 61 ++++++++++++++++++++++++++++- 2 files changed, 110 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c index 4e5d650578c6..18be2d3bde09 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c @@ -948,6 +948,18 @@ static u32 a6xx_get_cp_roq_size(struct msm_gpu *gpu) return gpu_read(gpu, REG_A6XX_CP_ROQ_THRESHOLDS_2) >> 14; } +static u32 a7xx_get_cp_roq_size(struct msm_gpu *gpu) +{ + /* + * The value at CP_ROQ_THRESHOLDS_2[20:31] is in 4dword units. + * That register however is not directly accessible from APSS on A7xx. + * Program the SQE_UCODE_DBG_ADDR with offset=0x70d3 and read the value. + */ + gpu_write(gpu, REG_A6XX_CP_SQE_UCODE_DBG_ADDR, 0x70d3); + + return 4 * (gpu_read(gpu, REG_A6XX_CP_SQE_UCODE_DBG_DATA) >> 20); +} + /* Read a block of data from an indexed register pair */ static void a6xx_get_indexed_regs(struct msm_gpu *gpu, struct a6xx_gpu_state *a6xx_state, @@ -1019,8 +1031,40 @@ static void a6xx_get_indexed_registers(struct msm_gpu *gpu, /* Restore the size in the hardware */ gpu_write(gpu, REG_A6XX_CP_MEM_POOL_SIZE, mempool_size); +} + +static void a7xx_get_indexed_registers(struct msm_gpu *gpu, + struct a6xx_gpu_state *a6xx_state) +{ + int i, indexed_count, mempool_count; + + indexed_count = ARRAY_SIZE(a7xx_indexed_reglist); + mempool_count = ARRAY_SIZE(a7xx_cp_bv_mempool_indexed); - a6xx_state->nr_indexed_regs = count; + a6xx_state->indexed_regs = state_kcalloc(a6xx_state, + indexed_count + mempool_count, + sizeof(*a6xx_state->indexed_regs)); + if (!a6xx_state->indexed_regs) + return; + + a6xx_state->nr_indexed_regs = indexed_count + mempool_count; + + /* First read the common regs */ + for (i = 0; i < indexed_count; i++) + a6xx_get_indexed_regs(gpu, a6xx_state, &a7xx_indexed_reglist[i], + &a6xx_state->indexed_regs[i]); + + gpu_rmw(gpu, REG_A6XX_CP_CHICKEN_DBG, 0, BIT(2)); + gpu_rmw(gpu, REG_A7XX_CP_BV_CHICKEN_DBG, 0, BIT(2)); + + /* Get the contents of the CP_BV mempool */ + for (i = 0; i < mempool_count; i++) + a6xx_get_indexed_regs(gpu, a6xx_state, a7xx_cp_bv_mempool_indexed, + &a6xx_state->indexed_regs[indexed_count - 1 + i]); + + gpu_rmw(gpu, REG_A6XX_CP_CHICKEN_DBG, BIT(2), 0); + gpu_rmw(gpu, REG_A7XX_CP_BV_CHICKEN_DBG, BIT(2), 0); + return; } struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu) @@ -1056,6 +1100,12 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu) return &a6xx_state->base; /* Get the banks of indexed registers */ + if (adreno_is_a7xx(adreno_gpu)) { + a7xx_get_indexed_registers(gpu, a6xx_state); + /* Further codeflow is untested on A7xx. 
*/ + return &a6xx_state->base; + } + a6xx_get_indexed_registers(gpu, a6xx_state); /* diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h index e788ed72eb0d..8d7e6f26480a 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h @@ -338,6 +338,28 @@ static const struct a6xx_registers a6xx_vbif_reglist = static const struct a6xx_registers a6xx_gbif_reglist = REGS(a6xx_gbif_registers, 0, 0); +static const u32 a7xx_ahb_registers[] = { + /* RBBM_STATUS */ + 0x210, 0x210, + /* RBBM_STATUS2-3 */ + 0x212, 0x213, +}; + +static const u32 a7xx_gbif_registers[] = { + 0x3c00, 0x3c0b, + 0x3c40, 0x3c42, + 0x3c45, 0x3c47, + 0x3c49, 0x3c4a, + 0x3cc0, 0x3cd1, +}; + +static const struct a6xx_registers a7xx_ahb_reglist[] = { + REGS(a7xx_ahb_registers, 0, 0), +}; + +static const struct a6xx_registers a7xx_gbif_reglist = + REGS(a7xx_gbif_registers, 0, 0); + static const u32 a6xx_gmu_gx_registers[] = { /* GMU GX */ 0x0000, 0x0000, 0x0010, 0x0013, 0x0016, 0x0016, 0x0018, 0x001b, @@ -384,14 +406,17 @@ static const struct a6xx_registers a6xx_gmu_reglist[] = { }; static u32 a6xx_get_cp_roq_size(struct msm_gpu *gpu); +static u32 a7xx_get_cp_roq_size(struct msm_gpu *gpu); -static struct a6xx_indexed_registers { +struct a6xx_indexed_registers { const char *name; u32 addr; u32 data; u32 count; u32 (*count_fn)(struct msm_gpu *gpu); -} a6xx_indexed_reglist[] = { +}; + +static struct a6xx_indexed_registers a6xx_indexed_reglist[] = { { "CP_SQE_STAT", REG_A6XX_CP_SQE_STAT_ADDR, REG_A6XX_CP_SQE_STAT_DATA, 0x33, NULL }, { "CP_DRAW_STATE", REG_A6XX_CP_DRAW_STATE_ADDR, @@ -402,11 +427,43 @@ static struct a6xx_indexed_registers { REG_A6XX_CP_ROQ_DBG_DATA, 0, a6xx_get_cp_roq_size}, }; +static struct a6xx_indexed_registers a7xx_indexed_reglist[] = { + { "CP_SQE_STAT", REG_A6XX_CP_SQE_STAT_ADDR, + REG_A6XX_CP_SQE_STAT_DATA, 0x33, NULL }, + { "CP_DRAW_STATE", REG_A6XX_CP_DRAW_STATE_ADDR, + REG_A6XX_CP_DRAW_STATE_DATA, 0x100, NULL }, + { "CP_UCODE_DBG_DATA", REG_A6XX_CP_SQE_UCODE_DBG_ADDR, + REG_A6XX_CP_SQE_UCODE_DBG_DATA, 0x8000, NULL }, + { "CP_BV_SQE_STAT_ADDR", REG_A7XX_CP_BV_SQE_STAT_ADDR, + REG_A7XX_CP_BV_SQE_STAT_DATA, 0x33, NULL }, + { "CP_BV_DRAW_STATE_ADDR", REG_A7XX_CP_BV_DRAW_STATE_ADDR, + REG_A7XX_CP_BV_DRAW_STATE_DATA, 0x100, NULL }, + { "CP_BV_SQE_UCODE_DBG_ADDR", REG_A7XX_CP_BV_SQE_UCODE_DBG_ADDR, + REG_A7XX_CP_BV_SQE_UCODE_DBG_DATA, 0x8000, NULL }, + { "CP_SQE_AC_STAT_ADDR", REG_A7XX_CP_SQE_AC_STAT_ADDR, + REG_A7XX_CP_SQE_AC_STAT_DATA, 0x33, NULL }, + { "CP_LPAC_DRAW_STATE_ADDR", REG_A7XX_CP_LPAC_DRAW_STATE_ADDR, + REG_A7XX_CP_LPAC_DRAW_STATE_DATA, 0x100, NULL }, + { "CP_SQE_AC_UCODE_DBG_ADDR", REG_A7XX_CP_SQE_AC_UCODE_DBG_ADDR, + REG_A7XX_CP_SQE_AC_UCODE_DBG_DATA, 0x8000, NULL }, + { "CP_LPAC_FIFO_DBG_ADDR", REG_A7XX_CP_LPAC_FIFO_DBG_ADDR, + REG_A7XX_CP_LPAC_FIFO_DBG_DATA, 0x40, NULL }, + { "CP_ROQ", REG_A6XX_CP_ROQ_DBG_ADDR, + REG_A6XX_CP_ROQ_DBG_DATA, 0, a7xx_get_cp_roq_size }, +}; + static struct a6xx_indexed_registers a6xx_cp_mempool_indexed = { "CP_MEMPOOL", REG_A6XX_CP_MEM_POOL_DBG_ADDR, REG_A6XX_CP_MEM_POOL_DBG_DATA, 0x2060, NULL, }; +static struct a6xx_indexed_registers a7xx_cp_bv_mempool_indexed[] = { + { "CP_MEMPOOL", REG_A6XX_CP_MEM_POOL_DBG_ADDR, + REG_A6XX_CP_MEM_POOL_DBG_DATA, 0x2100, NULL }, + { "CP_BV_MEMPOOL", REG_A7XX_CP_BV_MEM_POOL_DBG_ADDR, + REG_A7XX_CP_BV_MEM_POOL_DBG_DATA, 0x2100, NULL }, +}; + #define DEBUGBUS(_id, _count) { .id = _id, .name = #_id, .count = _count } static const struct 
a6xx_debugbus_block { From patchwork Tue Aug 8 21:02:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Konrad Dybcio X-Patchwork-Id: 711639 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 20D5FC41513 for ; Tue, 8 Aug 2023 21:04:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235405AbjHHVEP (ORCPT ); Tue, 8 Aug 2023 17:04:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44324 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235779AbjHHVDp (ORCPT ); Tue, 8 Aug 2023 17:03:45 -0400 Received: from mail-lj1-x22e.google.com (mail-lj1-x22e.google.com [IPv6:2a00:1450:4864:20::22e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F06575FDB for ; Tue, 8 Aug 2023 14:03:10 -0700 (PDT) Received: by mail-lj1-x22e.google.com with SMTP id 38308e7fff4ca-2b9b5ee9c5aso98356421fa.1 for ; Tue, 08 Aug 2023 14:03:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1691528589; x=1692133389; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=DnEyuSuzHyIiWPCbKBx0MITv8DGMKqibysift6AXEZA=; b=PwcBYr/cMxLSORI1gXgizUUR7eQVyvfLzSjXXhWKslPTqAuP4Nu4zOzW84kpQcqr5j HE1+uFEF1kIqQxpoWHjaK6jiMxo2N273zQX/f7wzcmFf91RrprMhTgKKWI82oISOHQTp i1kHynPj4qJdeFK5Y3bkW0bi3QP5DOBMKCln94piOl1SCK8SYNVN628oFphJxHAnB3gb ZeQv989ugjNFGolGW+W2YZOA7qV6I2jipAsOAXmBS8TZ21u3wgIgn219/C8NdwwNpqL3 TyECWzlLAeJRnLrjIkCIkfG0e4rtuZTciuVrg+QJ9crxuYmXsJYDbPMLhVGS3KyuBrVl SMGg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691528589; x=1692133389; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=DnEyuSuzHyIiWPCbKBx0MITv8DGMKqibysift6AXEZA=; b=f4svZ1oKCnl900xtwMBF996JzGFsZNcJj56gXZlo2Jy0b+XTkTch+wTLiPgzE9RrjV t2gNa1MkZp9nZWpHIsoohmOchue2HjF4vnzdcerwEAYIV8Xl27bwT2jG7Up9gwTlIBRx fCXtj86pI2ok/+oQdsMR6HqZiPlROEVLM5n0uSJ7sLIuGaT/r+uv4p17NCW0uc84dF94 rlYpfAGZhJkkRm5yiLq39sMUcDBTpGDUB807BrkqowTXHBv71TUuP4x3oi8mk77vS7rB mL6UAtwFuCU3qm1hvkahVzap1B070lQUqH1XQ1i9kjw+4h9DK1PoappCUR8lOJiWrGAW hM0g== X-Gm-Message-State: AOJu0YyUOA/zyTpk+h9RSlTLFhVXsq1IbGgJwGv6oz7bICEfC/2fhmgU udBFtaOKTvPUd0CwsbmLXS6LNw== X-Google-Smtp-Source: AGHT+IGI+UC90Pcq5objf07qqH4zSfFFPp898D/tDfR+VxTMmhCZ682bgcRanS8FhLoTzQoI0GbzRg== X-Received: by 2002:a2e:8216:0:b0:2b9:ea6b:64f with SMTP id w22-20020a2e8216000000b002b9ea6b064fmr477646ljg.37.1691528588961; Tue, 08 Aug 2023 14:03:08 -0700 (PDT) Received: from [192.168.1.101] (abxi185.neoplus.adsl.tpnet.pl. 
[83.9.2.185]) by smtp.gmail.com with ESMTPSA id h11-20020a2eb0eb000000b002b6cc17add3sm2431483ljl.25.2023.08.08.14.03.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 08 Aug 2023 14:03:08 -0700 (PDT) From: Konrad Dybcio Date: Tue, 08 Aug 2023 23:02:49 +0200 Subject: [PATCH v2 11/14] drm/msm/a6xx: Add A730 support MIME-Version: 1.0 Message-Id: <20230628-topic-a7xx_drmmsm-v2-11-1439e1b2343f@linaro.org> References: <20230628-topic-a7xx_drmmsm-v2-0-1439e1b2343f@linaro.org> In-Reply-To: <20230628-topic-a7xx_drmmsm-v2-0-1439e1b2343f@linaro.org> To: Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , Daniel Vetter , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Bjorn Andersson Cc: Marijn Suijten , linux-arm-msm@vger.kernel.org, dri-devel@lists.freedesktop.org, freedreno@lists.freedesktop.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, Konrad Dybcio , Neil Armstrong X-Mailer: b4 0.12.2 X-Developer-Signature: v=1; a=ed25519-sha256; t=1691528566; l=12269; i=konrad.dybcio@linaro.org; s=20230215; h=from:subject:message-id; bh=6rle9TmGhZHu4G4nBc5UtAUfD8o1Q7myQIeoSGr7ntI=; b=HvubsaisOEejZAjtuFwBqH0/ppHul+7RBN7Tc+jt8HB67GVeuQ8JLW3zi3bM7tMXibgvOanNz A2jcYTt7jLBDofzqvTs805sx8PIvnwGjY/L8iL5qjgjJ6HFsC4tasgh X-Developer-Key: i=konrad.dybcio@linaro.org; a=ed25519; pk=iclgkYvtl2w05SSXO5EjjSYlhFKsJ+5OSZBjOkQuEms= Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add support for Adreno 730, also known as GEN7_0_x, found on SM8450. Tested-by: Neil Armstrong # on SM8550-QRD Tested-by: Dmitry Baryshkov # sm8450 Signed-off-by: Konrad Dybcio --- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 126 ++++++++++++++++++++++++++++- drivers/gpu/drm/msm/adreno/a6xx_hfi.c | 61 ++++++++++++++ drivers/gpu/drm/msm/adreno/adreno_device.c | 13 +++ drivers/gpu/drm/msm/adreno/adreno_gpu.h | 2 +- 4 files changed, 198 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 61ce8d053355..522043883290 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -837,6 +837,63 @@ const struct adreno_reglist a690_hwcg[] = { {} }; +const struct adreno_reglist a730_hwcg[] = { + { REG_A6XX_RBBM_CLOCK_CNTL_SP0, 0x02222222 }, + { REG_A6XX_RBBM_CLOCK_CNTL2_SP0, 0x02022222 }, + { REG_A6XX_RBBM_CLOCK_HYST_SP0, 0x0000f3cf }, + { REG_A6XX_RBBM_CLOCK_DELAY_SP0, 0x00000080 }, + { REG_A6XX_RBBM_CLOCK_CNTL_TP0, 0x22222220 }, + { REG_A6XX_RBBM_CLOCK_CNTL2_TP0, 0x22222222 }, + { REG_A6XX_RBBM_CLOCK_CNTL3_TP0, 0x22222222 }, + { REG_A6XX_RBBM_CLOCK_CNTL4_TP0, 0x00222222 }, + { REG_A6XX_RBBM_CLOCK_HYST_TP0, 0x77777777 }, + { REG_A6XX_RBBM_CLOCK_HYST2_TP0, 0x77777777 }, + { REG_A6XX_RBBM_CLOCK_HYST3_TP0, 0x77777777 }, + { REG_A6XX_RBBM_CLOCK_HYST4_TP0, 0x00077777 }, + { REG_A6XX_RBBM_CLOCK_DELAY_TP0, 0x11111111 }, + { REG_A6XX_RBBM_CLOCK_DELAY2_TP0, 0x11111111 }, + { REG_A6XX_RBBM_CLOCK_DELAY3_TP0, 0x11111111 }, + { REG_A6XX_RBBM_CLOCK_DELAY4_TP0, 0x00011111 }, + { REG_A6XX_RBBM_CLOCK_CNTL_UCHE, 0x22222222 }, + { REG_A6XX_RBBM_CLOCK_HYST_UCHE, 0x00000004 }, + { REG_A6XX_RBBM_CLOCK_DELAY_UCHE, 0x00000002 }, + { REG_A6XX_RBBM_CLOCK_CNTL_RB0, 0x22222222 }, + { REG_A6XX_RBBM_CLOCK_CNTL2_RB0, 0x01002222 }, + { REG_A6XX_RBBM_CLOCK_CNTL_CCU0, 0x00002220 }, + { REG_A6XX_RBBM_CLOCK_HYST_RB_CCU0, 0x44000f00 }, + { REG_A6XX_RBBM_CLOCK_CNTL_RAC, 0x25222022 }, + { REG_A6XX_RBBM_CLOCK_CNTL2_RAC, 0x00555555 }, + { REG_A6XX_RBBM_CLOCK_DELAY_RAC, 0x00000011 }, + { REG_A6XX_RBBM_CLOCK_HYST_RAC, 
0x00440044 }, + { REG_A6XX_RBBM_CLOCK_CNTL_TSE_RAS_RBBM, 0x04222222 }, + { REG_A7XX_RBBM_CLOCK_MODE2_GRAS, 0x00000222 }, + { REG_A7XX_RBBM_CLOCK_MODE_BV_GRAS, 0x00222222 }, + { REG_A6XX_RBBM_CLOCK_MODE_GPC, 0x02222223 }, + { REG_A6XX_RBBM_CLOCK_MODE_VFD, 0x00002222 }, + { REG_A7XX_RBBM_CLOCK_MODE_BV_GPC, 0x00222222 }, + { REG_A7XX_RBBM_CLOCK_MODE_BV_VFD, 0x00002222 }, + { REG_A6XX_RBBM_CLOCK_HYST_TSE_RAS_RBBM, 0x00000000 }, + { REG_A6XX_RBBM_CLOCK_HYST_GPC, 0x04104004 }, + { REG_A6XX_RBBM_CLOCK_HYST_VFD, 0x00000000 }, + { REG_A6XX_RBBM_CLOCK_DELAY_TSE_RAS_RBBM, 0x00004000 }, + { REG_A6XX_RBBM_CLOCK_DELAY_GPC, 0x00000200 }, + { REG_A6XX_RBBM_CLOCK_DELAY_VFD, 0x00002222 }, + { REG_A6XX_RBBM_CLOCK_MODE_HLSQ, 0x00002222 }, + { REG_A6XX_RBBM_CLOCK_DELAY_HLSQ, 0x00000000 }, + { REG_A6XX_RBBM_CLOCK_HYST_HLSQ, 0x00000000 }, + { REG_A6XX_RBBM_CLOCK_DELAY_HLSQ_2, 0x00000002 }, + { REG_A7XX_RBBM_CLOCK_MODE_BV_LRZ, 0x55555552 }, + { REG_A7XX_RBBM_CLOCK_MODE_CP, 0x00000223 }, + { REG_A6XX_RBBM_CLOCK_CNTL, 0x8aa8aa82 }, + { REG_A6XX_RBBM_ISDB_CNT, 0x00000182 }, + { REG_A6XX_RBBM_RAC_THRESHOLD_CNT, 0x00000000 }, + { REG_A6XX_RBBM_SP_HYST_CNT, 0x00000000 }, + { REG_A6XX_RBBM_CLOCK_CNTL_GMU_GX, 0x00000222 }, + { REG_A6XX_RBBM_CLOCK_DELAY_GMU_GX, 0x00000111 }, + { REG_A6XX_RBBM_CLOCK_HYST_GMU_GX, 0x00000555 }, + {}, +}; + static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); @@ -1048,6 +1105,59 @@ static const u32 a690_protect[] = { A6XX_PROTECT_NORDWR(0x11c00, 0x00000), /*note: infiite range */ }; +static const u32 a730_protect[] = { + A6XX_PROTECT_RDONLY(0x00000, 0x04ff), + A6XX_PROTECT_RDONLY(0x0050b, 0x0058), + A6XX_PROTECT_NORDWR(0x0050e, 0x0000), + A6XX_PROTECT_NORDWR(0x00510, 0x0000), + A6XX_PROTECT_NORDWR(0x00534, 0x0000), + A6XX_PROTECT_RDONLY(0x005fb, 0x009d), + A6XX_PROTECT_NORDWR(0x00699, 0x01e9), + A6XX_PROTECT_NORDWR(0x008a0, 0x0008), + A6XX_PROTECT_NORDWR(0x008ab, 0x0024), + /* 0x008d0-0x008dd are unprotected on purpose for tools like perfetto */ + A6XX_PROTECT_RDONLY(0x008de, 0x0154), + A6XX_PROTECT_NORDWR(0x00900, 0x004d), + A6XX_PROTECT_NORDWR(0x0098d, 0x00b2), + A6XX_PROTECT_NORDWR(0x00a41, 0x01be), + A6XX_PROTECT_NORDWR(0x00df0, 0x0001), + A6XX_PROTECT_NORDWR(0x00e01, 0x0000), + A6XX_PROTECT_NORDWR(0x00e07, 0x0008), + A6XX_PROTECT_NORDWR(0x03c00, 0x00c3), + A6XX_PROTECT_RDONLY(0x03cc4, 0x1fff), + A6XX_PROTECT_NORDWR(0x08630, 0x01cf), + A6XX_PROTECT_NORDWR(0x08e00, 0x0000), + A6XX_PROTECT_NORDWR(0x08e08, 0x0000), + A6XX_PROTECT_NORDWR(0x08e50, 0x001f), + A6XX_PROTECT_NORDWR(0x08e80, 0x0280), + A6XX_PROTECT_NORDWR(0x09624, 0x01db), + A6XX_PROTECT_NORDWR(0x09e40, 0x0000), + A6XX_PROTECT_NORDWR(0x09e64, 0x000d), + A6XX_PROTECT_NORDWR(0x09e78, 0x0187), + A6XX_PROTECT_NORDWR(0x0a630, 0x01cf), + A6XX_PROTECT_NORDWR(0x0ae02, 0x0000), + A6XX_PROTECT_NORDWR(0x0ae50, 0x000f), + A6XX_PROTECT_NORDWR(0x0ae66, 0x0003), + A6XX_PROTECT_NORDWR(0x0ae6f, 0x0003), + A6XX_PROTECT_NORDWR(0x0b604, 0x0003), + A6XX_PROTECT_NORDWR(0x0ec00, 0x0fff), + A6XX_PROTECT_RDONLY(0x0fc00, 0x1fff), + A6XX_PROTECT_NORDWR(0x18400, 0x0053), + A6XX_PROTECT_RDONLY(0x18454, 0x0004), + A6XX_PROTECT_NORDWR(0x18459, 0x1fff), + A6XX_PROTECT_NORDWR(0x1a459, 0x1fff), + A6XX_PROTECT_NORDWR(0x1c459, 0x1fff), + A6XX_PROTECT_NORDWR(0x1f400, 0x0443), + A6XX_PROTECT_RDONLY(0x1f844, 0x007b), + A6XX_PROTECT_NORDWR(0x1f860, 0x0000), + A6XX_PROTECT_NORDWR(0x1f878, 0x002a), + /* CP_PROTECT_REG[44, 46] are left untouched! 
*/ + 0, + 0, + 0, + A6XX_PROTECT_NORDWR(0x1f8c0, 0x00000), +}; + static void a6xx_set_cp_protect(struct msm_gpu *gpu) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); @@ -1069,6 +1179,11 @@ static void a6xx_set_cp_protect(struct msm_gpu *gpu) count = ARRAY_SIZE(a660_protect); count_max = 48; BUILD_BUG_ON(ARRAY_SIZE(a660_protect) > 48); + } else if (adreno_is_a730(adreno_gpu)) { + regs = a730_protect; + count = ARRAY_SIZE(a730_protect); + count_max = 48; + BUILD_BUG_ON(ARRAY_SIZE(a730_protect) > 48); } else { regs = a6xx_protect; count = ARRAY_SIZE(a6xx_protect); @@ -1135,7 +1250,9 @@ static void a6xx_set_ubwc_config(struct msm_gpu *gpu) if (adreno_is_a640_family(adreno_gpu)) amsbc = 1; - if (adreno_is_a650(adreno_gpu) || adreno_is_a660(adreno_gpu)) { + if (adreno_is_a650(adreno_gpu) || + adreno_is_a660(adreno_gpu) || + adreno_is_a730(adreno_gpu)) { /* TODO: get ddr type from bootloader and use 2 for LPDDR4 */ hbb_lo = 3; amsbc = 1; @@ -1516,7 +1633,8 @@ static int hw_init(struct msm_gpu *gpu) gpu_write64(gpu, REG_A6XX_UCHE_WRITE_THRU_BASE, 0x0001fffffffff000llu); } - if (!adreno_is_a650_family(adreno_gpu)) { + if (!(adreno_is_a650_family(adreno_gpu) || + adreno_is_a730(adreno_gpu))) { /* Set the GMEM VA range [0x100000:0x100000 + gpu->gmem - 1] */ gpu_write64(gpu, REG_A6XX_UCHE_GMEM_RANGE_MIN, 0x00100000); @@ -1586,7 +1704,9 @@ static int hw_init(struct msm_gpu *gpu) a6xx_set_ubwc_config(gpu); /* Enable fault detection */ - if (adreno_is_a619(adreno_gpu)) + if (adreno_is_a730(adreno_gpu)) + gpu_write(gpu, REG_A6XX_RBBM_INTERFACE_HANG_INT_CNTL, (1 << 30) | 0xcfffff); + else if (adreno_is_a619(adreno_gpu)) gpu_write(gpu, REG_A6XX_RBBM_INTERFACE_HANG_INT_CNTL, (1 << 30) | 0x3fffff); else if (adreno_is_a610(adreno_gpu)) gpu_write(gpu, REG_A6XX_RBBM_INTERFACE_HANG_INT_CNTL, (1 << 30) | 0x3ffff); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_hfi.c b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c index 25b235b49ebc..3865cd44523c 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_hfi.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c @@ -5,6 +5,8 @@ #include #include +#include + #include "a6xx_gmu.h" #include "a6xx_gmu.xml.h" #include "a6xx_gpu.h" @@ -506,6 +508,63 @@ static void adreno_7c3_build_bw_table(struct a6xx_hfi_msg_bw_table *msg) msg->cnoc_cmds_data[0][0] = 0x40000000; msg->cnoc_cmds_data[1][0] = 0x60000001; } + +static void a730_build_bw_table(struct a6xx_hfi_msg_bw_table *msg) +{ + msg->bw_level_num = 12; + + msg->ddr_cmds_num = 3; + msg->ddr_wait_bitmask = 0x7; + + msg->ddr_cmds_addrs[0] = cmd_db_read_addr("SH0"); + msg->ddr_cmds_addrs[1] = cmd_db_read_addr("MC0"); + msg->ddr_cmds_addrs[2] = cmd_db_read_addr("ACV"); + + msg->ddr_cmds_data[0][0] = 0x40000000; + msg->ddr_cmds_data[0][1] = 0x40000000; + msg->ddr_cmds_data[0][2] = 0x40000000; + msg->ddr_cmds_data[1][0] = 0x600002e8; + msg->ddr_cmds_data[1][1] = 0x600003d0; + msg->ddr_cmds_data[1][2] = 0x60000008; + msg->ddr_cmds_data[2][0] = 0x6000068d; + msg->ddr_cmds_data[2][1] = 0x6000089a; + msg->ddr_cmds_data[2][2] = 0x60000008; + msg->ddr_cmds_data[3][0] = 0x600007f2; + msg->ddr_cmds_data[3][1] = 0x60000a6e; + msg->ddr_cmds_data[3][2] = 0x60000008; + msg->ddr_cmds_data[4][0] = 0x600009e5; + msg->ddr_cmds_data[4][1] = 0x60000cfd; + msg->ddr_cmds_data[4][2] = 0x60000008; + msg->ddr_cmds_data[5][0] = 0x60000b29; + msg->ddr_cmds_data[5][1] = 0x60000ea6; + msg->ddr_cmds_data[5][2] = 0x60000008; + msg->ddr_cmds_data[6][0] = 0x60001698; + msg->ddr_cmds_data[6][1] = 0x60001da8; + msg->ddr_cmds_data[6][2] = 0x60000008; + msg->ddr_cmds_data[7][0] = 
0x600018d2; + msg->ddr_cmds_data[7][1] = 0x60002093; + msg->ddr_cmds_data[7][2] = 0x60000008; + msg->ddr_cmds_data[8][0] = 0x60001e66; + msg->ddr_cmds_data[8][1] = 0x600027e6; + msg->ddr_cmds_data[8][2] = 0x60000008; + msg->ddr_cmds_data[9][0] = 0x600027c2; + msg->ddr_cmds_data[9][1] = 0x6000342f; + msg->ddr_cmds_data[9][2] = 0x60000008; + msg->ddr_cmds_data[10][0] = 0x60002e71; + msg->ddr_cmds_data[10][1] = 0x60003cf5; + msg->ddr_cmds_data[10][2] = 0x60000008; + msg->ddr_cmds_data[11][0] = 0x600030ae; + msg->ddr_cmds_data[11][1] = 0x60003fe5; + msg->ddr_cmds_data[11][2] = 0x60000008; + + msg->cnoc_cmds_num = 1; + msg->cnoc_wait_bitmask = 0x1; + + msg->cnoc_cmds_addrs[0] = cmd_db_read_addr("CN0"); + msg->cnoc_cmds_data[0][0] = 0x40000000; + msg->cnoc_cmds_data[1][0] = 0x60000001; +} + static void a6xx_build_bw_table(struct a6xx_hfi_msg_bw_table *msg) { /* Send a single "off" entry since the 630 GMU doesn't do bus scaling */ @@ -564,6 +623,8 @@ static int a6xx_hfi_send_bw_table(struct a6xx_gmu *gmu) a660_build_bw_table(&msg); else if (adreno_is_a690(adreno_gpu)) a690_build_bw_table(&msg); + else if (adreno_is_a730(adreno_gpu)) + a730_build_bw_table(&msg); else a6xx_build_bw_table(&msg); diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c index 9cda403ebc7b..1373b62dce01 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_device.c +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c @@ -490,6 +490,19 @@ static const struct adreno_info gpulist[] = { .zapfw = "a690_zap.mdt", .hwcg = a690_hwcg, .address_space_size = SZ_16G, + }, { + .chip_ids = ADRENO_CHIP_IDS(0x07030001), + .family = ADRENO_7XX_GEN1, + .fw = { + [ADRENO_FW_SQE] = "a730_sqe.fw", + [ADRENO_FW_GMU] = "gmu_gen70000.bin", + }, + .gmem = SZ_2M, + .inactive_period = DRM_MSM_INACTIVE_PERIOD, + .init = a6xx_gpu_init, + .zapfw = "a730_zap.mdt", + .hwcg = a730_hwcg, + .address_space_size = SZ_16G, }, }; diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index 3e69ef9dde3f..21e6940d2982 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -76,7 +76,7 @@ struct adreno_reglist { }; extern const struct adreno_reglist a612_hwcg[], a615_hwcg[], a630_hwcg[], a640_hwcg[], a650_hwcg[]; -extern const struct adreno_reglist a660_hwcg[], a690_hwcg[]; +extern const struct adreno_reglist a660_hwcg[], a690_hwcg[], a730_hwcg[]; struct adreno_speedbin { uint16_t fuse; From patchwork Tue Aug 8 21:02:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Konrad Dybcio X-Patchwork-Id: 712824 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5C78DC001DB for ; Tue, 8 Aug 2023 21:04:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235616AbjHHVEW (ORCPT ); Tue, 8 Aug 2023 17:04:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44536 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235820AbjHHVD6 (ORCPT ); Tue, 8 Aug 2023 17:03:58 -0400 Received: from mail-lj1-x236.google.com (mail-lj1-x236.google.com [IPv6:2a00:1450:4864:20::236]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CF5B12705 for ; Tue, 8 Aug 2023 14:03:14 -0700 (PDT) Received: by mail-lj1-x236.google.com with SMTP id 
38308e7fff4ca-2b974031aeaso95167961fa.0 for ; Tue, 08 Aug 2023 14:03:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1691528592; x=1692133392; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=VXf2NOc9PBLrg2bwANp/ewuz6OUR1psi1HjpQK7sdCs=; b=FP/jmTE8au85NQxAYotBmx88UqDsH/U5haTMfno7uMhLyonkt8Ffz3HLam/rFIyiFV 8VWditb/BJeDqzgFgVfel3PgBL4xpUJP5yOSHXfx9Q22IX628wXl+GZ/OwGgw+Bb3ipZ trhZJ+fdfedI5s28DCTJ+zQJQ4NbYuxg56xUbvfQP1zHZA6epbFBDebG0EcaPjruEqFV K98sPq1u4Ex6EYw+1oko1W7HZGn2s8iGBjqXv18ysiZaXvfxuICuetsvCYwdHV60sL3p ygFWQdPSTqfcotb+rY8xVk7HO+5Kvnv219XyrfCujPTosGbqPVU+rMiAb9E4PD+SyHjy ZMbg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691528592; x=1692133392; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=VXf2NOc9PBLrg2bwANp/ewuz6OUR1psi1HjpQK7sdCs=; b=AbcXFFJgtetIfiVlsG9QlYEhxmEZYY+0Q8XFtqptciUHoezmLwcsdd/rYR0lGSBJbX APDV3agRYGyu/9YJ0QubeKL6ym8XBBthyfJKxMmZh47VON//WUVKT8Zdw6FlfyDOwyCi MeLe1mW1NMD+stZfj/OS7g0nhAWXUhgZfecpYv4BIgaLPSHXKcNS1xW7pE5lzXX8MDM2 S0S0ioBJTNnJ666vKKOVFFnKuqvMCz9AzxqbzPVQRxfUkm4uY/aV5YNv+/9WeABoV5Y1 2iOTQXZjhb1uFg0SITC9H/2fTgPwA8465Mu4XEzD0iOZ1QL3j3AlnuIqwlfssHcDT97a RbPA== X-Gm-Message-State: AOJu0Yxk0wEeOkdQSEb/WHqypDMgqoMqjStF07yb+mv96678Fz1Szav7 QlieyDazyIpLrnBTgCh+nTlo6A== X-Google-Smtp-Source: AGHT+IG6MKW4Zic9VPPNB/90dxp6LGBrubQG9a4Umei9IgmtIUCGcJTF+2nQlVtW8bI5o6uWKWZfgQ== X-Received: by 2002:a2e:9b13:0:b0:2b6:e0d3:45b7 with SMTP id u19-20020a2e9b13000000b002b6e0d345b7mr529570lji.14.1691528592677; Tue, 08 Aug 2023 14:03:12 -0700 (PDT) Received: from [192.168.1.101] (abxi185.neoplus.adsl.tpnet.pl. [83.9.2.185]) by smtp.gmail.com with ESMTPSA id h11-20020a2eb0eb000000b002b6cc17add3sm2431483ljl.25.2023.08.08.14.03.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 08 Aug 2023 14:03:11 -0700 (PDT) From: Konrad Dybcio Date: Tue, 08 Aug 2023 23:02:50 +0200 Subject: [PATCH v2 12/14] drm/msm/a6xx: Add A740 support MIME-Version: 1.0 Message-Id: <20230628-topic-a7xx_drmmsm-v2-12-1439e1b2343f@linaro.org> References: <20230628-topic-a7xx_drmmsm-v2-0-1439e1b2343f@linaro.org> In-Reply-To: <20230628-topic-a7xx_drmmsm-v2-0-1439e1b2343f@linaro.org> To: Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , Daniel Vetter , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Bjorn Andersson Cc: Marijn Suijten , linux-arm-msm@vger.kernel.org, dri-devel@lists.freedesktop.org, freedreno@lists.freedesktop.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, Konrad Dybcio , Neil Armstrong X-Mailer: b4 0.12.2 X-Developer-Signature: v=1; a=ed25519-sha256; t=1691528566; l=19035; i=konrad.dybcio@linaro.org; s=20230215; h=from:subject:message-id; bh=DJ092aEWMxIENC3E4DKaMEYFDDbkKwVmrSZAarwXGno=; b=LSzSiY9ZNZ+zQTuLQN/MdF9xYUes7R7lpkYOyaQBdlgm/jbOXX2CP1lF2uZEtR4vphPF/TalR +cWfZ3cKQGWCZl3uQccO/RWbwNRniiZUX7NI1QTkiMqg5wJKyuPxmYO X-Developer-Key: i=konrad.dybcio@linaro.org; a=ed25519; pk=iclgkYvtl2w05SSXO5EjjSYlhFKsJ+5OSZBjOkQuEms= Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org A740 builds upon the A730 IP, shuffling some values and registers around. More differences will appear when things like BCL are implemented. 
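One concrete example of that value shuffling is the GMU-facing chip id: on A7xx the GMU expects a "GENX_Y_Z"-style encoding (e.g. GEN7_2_1 for production A740) rather than the raw chip_id. A minimal standalone sketch of the packing done in the a6xx_gmu.c hunk further down; the helper name is illustrative and not part of the patch:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

/*
 * Illustrative sketch (not the driver's actual helper): pack the chip id
 * the way the A7xx GMU expects it. Major is always 7, minor identifies the
 * SKU (2 for A740), patchlevel comes from the device tree.
 */
static u32 example_a7xx_gmu_chipid(u32 minor, u32 patchid)
{
	u32 chipid;

	chipid  = FIELD_PREP(GENMASK(31, 24), 0x7);    /* constant major = 7 */
	chipid |= FIELD_PREP(GENMASK(23, 16), minor);  /* 2 for the A740 SKU */
	chipid |= FIELD_PREP(GENMASK(15, 8), patchid); /* board-specific patchlevel */

	return chipid;	/* GEN7_2_1 -> 0x07020100 */
}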
adreno_is_a740_family() is added in preparation for more A7xx GPUs; the existing logic checks will remain valid for them, resulting in smaller diffs. Tested-by: Neil Armstrong # on SM8550-QRD Tested-by: Dmitry Baryshkov # sm8450 Signed-off-by: Konrad Dybcio --- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 88 +++++++++++++++++++++--------- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 82 +++++++++++++++++++++++++--- drivers/gpu/drm/msm/adreno/a6xx_hfi.c | 27 +++++++++ drivers/gpu/drm/msm/adreno/adreno_device.c | 17 ++++++ drivers/gpu/drm/msm/adreno/adreno_gpu.c | 6 +- drivers/gpu/drm/msm/adreno/adreno_gpu.h | 18 +++++- 6 files changed, 200 insertions(+), 38 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index 17e1e72f5d7d..14ba407e7fe0 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -516,6 +516,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu) struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; struct platform_device *pdev = to_platform_device(gmu->dev); void __iomem *pdcptr = a6xx_gmu_get_mmio(pdev, "gmu_pdc"); + u32 seqmem0_drv0_reg = REG_A6XX_RSCC_SEQ_MEM_0_DRV0; void __iomem *seqptr = NULL; uint32_t pdc_address_offset; bool pdc_in_aop = false; @@ -549,21 +550,26 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu) gmu_write_rscc(gmu, REG_A6XX_RSCC_HIDDEN_TCS_CMD0_ADDR, 0); gmu_write_rscc(gmu, REG_A6XX_RSCC_HIDDEN_TCS_CMD0_DATA + 2, 0); gmu_write_rscc(gmu, REG_A6XX_RSCC_HIDDEN_TCS_CMD0_ADDR + 2, 0); - gmu_write_rscc(gmu, REG_A6XX_RSCC_HIDDEN_TCS_CMD0_DATA + 4, 0x80000000); + gmu_write_rscc(gmu, REG_A6XX_RSCC_HIDDEN_TCS_CMD0_DATA + 4, + adreno_is_a740_family(adreno_gpu) ? 0x80000021 : 0x80000000); gmu_write_rscc(gmu, REG_A6XX_RSCC_HIDDEN_TCS_CMD0_ADDR + 4, 0); gmu_write_rscc(gmu, REG_A6XX_RSCC_OVERRIDE_START_ADDR, 0); gmu_write_rscc(gmu, REG_A6XX_RSCC_PDC_SEQ_START_ADDR, 0x4520); gmu_write_rscc(gmu, REG_A6XX_RSCC_PDC_MATCH_VALUE_LO, 0x4510); gmu_write_rscc(gmu, REG_A6XX_RSCC_PDC_MATCH_VALUE_HI, 0x4514); + /* The second spin of A7xx GPUs messed with some register offsets..
*/ + if (adreno_is_a740_family(adreno_gpu)) + seqmem0_drv0_reg = REG_A7XX_RSCC_SEQ_MEM_0_DRV0_A740; + /* Load RSC sequencer uCode for sleep and wakeup */ if (adreno_is_a650_family(adreno_gpu) || adreno_is_a7xx(adreno_gpu)) { - gmu_write_rscc(gmu, REG_A6XX_RSCC_SEQ_MEM_0_DRV0, 0xeaaae5a0); - gmu_write_rscc(gmu, REG_A6XX_RSCC_SEQ_MEM_0_DRV0 + 1, 0xe1a1ebab); - gmu_write_rscc(gmu, REG_A6XX_RSCC_SEQ_MEM_0_DRV0 + 2, 0xa2e0a581); - gmu_write_rscc(gmu, REG_A6XX_RSCC_SEQ_MEM_0_DRV0 + 3, 0xecac82e2); - gmu_write_rscc(gmu, REG_A6XX_RSCC_SEQ_MEM_0_DRV0 + 4, 0x0020edad); + gmu_write_rscc(gmu, seqmem0_drv0_reg, 0xeaaae5a0); + gmu_write_rscc(gmu, seqmem0_drv0_reg + 1, 0xe1a1ebab); + gmu_write_rscc(gmu, seqmem0_drv0_reg + 2, 0xa2e0a581); + gmu_write_rscc(gmu, seqmem0_drv0_reg + 3, 0xecac82e2); + gmu_write_rscc(gmu, seqmem0_drv0_reg + 4, 0x0020edad); } else { gmu_write_rscc(gmu, REG_A6XX_RSCC_SEQ_MEM_0_DRV0, 0xa7a506a0); gmu_write_rscc(gmu, REG_A6XX_RSCC_SEQ_MEM_0_DRV0 + 1, 0xa1e6a6e7); @@ -767,8 +773,8 @@ static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state) struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu); struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; u32 fence_range_lower, fence_range_upper; + u32 chipid, chipid_min = 0; int ret; - u32 chipid; /* Vote veto for FAL10 */ if (adreno_is_a650_family(adreno_gpu) || adreno_is_a7xx(adreno_gpu)) { @@ -827,16 +833,37 @@ static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state) */ gmu_write(gmu, REG_A6XX_GMU_CM3_CFG, 0x4052); - /* - * Note that the GMU has a slightly different layout for - * chip_id, for whatever reason, so a bit of massaging - * is needed. The upper 16b are the same, but minor and - * patchid are packed in four bits each with the lower - * 8b unused: - */ - chipid = adreno_gpu->chip_id & 0xffff0000; - chipid |= (adreno_gpu->chip_id << 4) & 0xf000; /* minor */ - chipid |= (adreno_gpu->chip_id << 8) & 0x0f00; /* patchid */ + /* NOTE: A730 may also fall in this if-condition with a future GMU fw update. */ + if (adreno_is_a7xx(adreno_gpu) && !adreno_is_a730(adreno_gpu)) { + /* A7xx GPUs have obfuscated chip IDs. Use constant maj = 7 */ + chipid = FIELD_PREP(GENMASK(31, 24), 0x7); + + /* + * The min part has a 1-1 mapping for each GPU SKU. + * This chipid that the GMU expects corresponds to the "GENX_Y_Z" naming, + * where X = major, Y = minor, Z = patchlevel, e.g. GEN7_2_1 for prod A740. + */ + if (adreno_is_a740(adreno_gpu)) + chipid_min = 2; + else + return -EINVAL; + + chipid |= FIELD_PREP(GENMASK(23, 16), chipid_min); + + /* Get the patchid (which may vary) from the device tree */ + chipid |= FIELD_PREP(GENMASK(15, 8), adreno_patchid(adreno_gpu)); + } else { + /* + * Note that the GMU has a slightly different layout for + * chip_id, for whatever reason, so a bit of massaging + * is needed. 
The upper 16b are the same, but minor and + * patchid are packed in four bits each with the lower + * 8b unused: + */ + chipid = adreno_gpu->chip_id & 0xffff0000; + chipid |= (adreno_gpu->chip_id << 4) & 0xf000; /* minor */ + chipid |= (adreno_gpu->chip_id << 8) & 0x0f00; /* patchid */ + } if (adreno_is_a7xx(adreno_gpu)) { gmu_write(gmu, REG_A6XX_GMU_GENERAL_10, chipid); @@ -899,17 +926,23 @@ static void a6xx_gmu_irq_disable(struct a6xx_gmu *gmu) static void a6xx_gmu_rpmh_off(struct a6xx_gmu *gmu) { - u32 val; + struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu); + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; + u32 val, seqmem_off = 0; + + /* The second spin of A7xx GPUs messed with some register offsets.. */ + if (adreno_is_a740_family(adreno_gpu)) + seqmem_off = 4; /* Make sure there are no outstanding RPMh votes */ - gmu_poll_timeout_rscc(gmu, REG_A6XX_RSCC_TCS0_DRV0_STATUS, val, - (val & 1), 100, 10000); - gmu_poll_timeout_rscc(gmu, REG_A6XX_RSCC_TCS1_DRV0_STATUS, val, - (val & 1), 100, 10000); - gmu_poll_timeout_rscc(gmu, REG_A6XX_RSCC_TCS2_DRV0_STATUS, val, - (val & 1), 100, 10000); - gmu_poll_timeout_rscc(gmu, REG_A6XX_RSCC_TCS3_DRV0_STATUS, val, - (val & 1), 100, 1000); + gmu_poll_timeout_rscc(gmu, REG_A6XX_RSCC_TCS0_DRV0_STATUS + seqmem_off, + val, (val & 1), 100, 10000); + gmu_poll_timeout_rscc(gmu, REG_A6XX_RSCC_TCS1_DRV0_STATUS + seqmem_off, + val, (val & 1), 100, 10000); + gmu_poll_timeout_rscc(gmu, REG_A6XX_RSCC_TCS2_DRV0_STATUS + seqmem_off, + val, (val & 1), 100, 10000); + gmu_poll_timeout_rscc(gmu, REG_A6XX_RSCC_TCS3_DRV0_STATUS + seqmem_off, + val, (val & 1), 100, 1000); } /* Force the GMU off in case it isn't responsive */ @@ -1019,7 +1052,8 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu) /* Use a known rate to bring up the GMU */ clk_set_rate(gmu->core_clk, 200000000); - clk_set_rate(gmu->hub_clk, 150000000); + clk_set_rate(gmu->hub_clk, adreno_is_a740_family(adreno_gpu) ? 
+ 200000000 : 150000000); ret = clk_bulk_prepare_enable(gmu->nr_clocks, gmu->clocks); if (ret) { pm_runtime_put(gmu->gxpd); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 522043883290..2313620084b6 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -894,6 +894,64 @@ const struct adreno_reglist a730_hwcg[] = { {}, }; +const struct adreno_reglist a740_hwcg[] = { + { REG_A6XX_RBBM_CLOCK_CNTL_SP0, 0x02222222 }, + { REG_A6XX_RBBM_CLOCK_CNTL2_SP0, 0x22022222 }, + { REG_A6XX_RBBM_CLOCK_HYST_SP0, 0x003cf3cf }, + { REG_A6XX_RBBM_CLOCK_DELAY_SP0, 0x00000080 }, + { REG_A6XX_RBBM_CLOCK_CNTL_TP0, 0x22222220 }, + { REG_A6XX_RBBM_CLOCK_CNTL2_TP0, 0x22222222 }, + { REG_A6XX_RBBM_CLOCK_CNTL3_TP0, 0x22222222 }, + { REG_A6XX_RBBM_CLOCK_CNTL4_TP0, 0x00222222 }, + { REG_A6XX_RBBM_CLOCK_HYST_TP0, 0x77777777 }, + { REG_A6XX_RBBM_CLOCK_HYST2_TP0, 0x77777777 }, + { REG_A6XX_RBBM_CLOCK_HYST3_TP0, 0x77777777 }, + { REG_A6XX_RBBM_CLOCK_HYST4_TP0, 0x00077777 }, + { REG_A6XX_RBBM_CLOCK_DELAY_TP0, 0x11111111 }, + { REG_A6XX_RBBM_CLOCK_DELAY2_TP0, 0x11111111 }, + { REG_A6XX_RBBM_CLOCK_DELAY3_TP0, 0x11111111 }, + { REG_A6XX_RBBM_CLOCK_DELAY4_TP0, 0x00011111 }, + { REG_A6XX_RBBM_CLOCK_CNTL_UCHE, 0x22222222 }, + { REG_A6XX_RBBM_CLOCK_CNTL2_UCHE, 0x00222222 }, + { REG_A6XX_RBBM_CLOCK_HYST_UCHE, 0x00000444 }, + { REG_A6XX_RBBM_CLOCK_DELAY_UCHE, 0x00000222 }, + { REG_A6XX_RBBM_CLOCK_CNTL_RB0, 0x22222222 }, + { REG_A6XX_RBBM_CLOCK_CNTL2_RB0, 0x01002222 }, + { REG_A6XX_RBBM_CLOCK_CNTL_CCU0, 0x00002220 }, + { REG_A6XX_RBBM_CLOCK_HYST_RB_CCU0, 0x44000f00 }, + { REG_A6XX_RBBM_CLOCK_CNTL_RAC, 0x25222022 }, + { REG_A6XX_RBBM_CLOCK_CNTL2_RAC, 0x00555555 }, + { REG_A6XX_RBBM_CLOCK_DELAY_RAC, 0x00000011 }, + { REG_A6XX_RBBM_CLOCK_HYST_RAC, 0x00440044 }, + { REG_A6XX_RBBM_CLOCK_CNTL_TSE_RAS_RBBM, 0x04222222 }, + { REG_A7XX_RBBM_CLOCK_MODE2_GRAS, 0x00000222 }, + { REG_A7XX_RBBM_CLOCK_MODE_BV_GRAS, 0x00222222 }, + { REG_A6XX_RBBM_CLOCK_MODE_GPC, 0x02222223 }, + { REG_A6XX_RBBM_CLOCK_MODE_VFD, 0x00222222 }, + { REG_A7XX_RBBM_CLOCK_MODE_BV_GPC, 0x00222222 }, + { REG_A7XX_RBBM_CLOCK_MODE_BV_VFD, 0x00002222 }, + { REG_A6XX_RBBM_CLOCK_HYST_TSE_RAS_RBBM, 0x00000000 }, + { REG_A6XX_RBBM_CLOCK_HYST_GPC, 0x04104004 }, + { REG_A6XX_RBBM_CLOCK_HYST_VFD, 0x00000000 }, + { REG_A6XX_RBBM_CLOCK_DELAY_TSE_RAS_RBBM, 0x00000000 }, + { REG_A6XX_RBBM_CLOCK_DELAY_GPC, 0x00000200 }, + { REG_A6XX_RBBM_CLOCK_DELAY_VFD, 0x00000000 }, + { REG_A6XX_RBBM_CLOCK_MODE_HLSQ, 0x00002222 }, + { REG_A6XX_RBBM_CLOCK_DELAY_HLSQ, 0x00000000 }, + { REG_A6XX_RBBM_CLOCK_HYST_HLSQ, 0x00000000 }, + { REG_A7XX_RBBM_CLOCK_MODE_BV_LRZ, 0x55555552 }, + { REG_A7XX_RBBM_CLOCK_HYST2_VFD, 0x00000000 }, + { REG_A7XX_RBBM_CLOCK_MODE_CP, 0x00000222 }, + { REG_A6XX_RBBM_CLOCK_CNTL, 0x8aa8aa82 }, + { REG_A6XX_RBBM_ISDB_CNT, 0x00000182 }, + { REG_A6XX_RBBM_RAC_THRESHOLD_CNT, 0x00000000 }, + { REG_A6XX_RBBM_SP_HYST_CNT, 0x00000000 }, + { REG_A6XX_RBBM_CLOCK_CNTL_GMU_GX, 0x00000222 }, + { REG_A6XX_RBBM_CLOCK_DELAY_GMU_GX, 0x00000111 }, + { REG_A6XX_RBBM_CLOCK_HYST_GMU_GX, 0x00000555 }, + {}, +}; + static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); @@ -901,7 +959,7 @@ static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state) struct a6xx_gmu *gmu = &a6xx_gpu->gmu; const struct adreno_reglist *reg; unsigned int i; - u32 val, clock_cntl_on; + u32 val, clock_cntl_on, cgc_mode; if (!adreno_gpu->info->hwcg) return; @@ -914,8 +972,10 @@ static void 
a6xx_set_hwcg(struct msm_gpu *gpu, bool state) clock_cntl_on = 0x8aa8aa82; if (adreno_is_a7xx(adreno_gpu)) { + cgc_mode = adreno_is_a740_family(adreno_gpu) ? 0x20222 : 0x20000; + gmu_write(&a6xx_gpu->gmu, REG_A6XX_GPU_GMU_AO_GMU_CGC_MODE_CNTL, - state ? 0x20000 : 0); + state ? cgc_mode : 0); gmu_write(&a6xx_gpu->gmu, REG_A6XX_GPU_GMU_AO_GMU_CGC_DELAY_CNTL, state ? 0x10111 : 0); gmu_write(&a6xx_gpu->gmu, REG_A6XX_GPU_GMU_AO_GMU_CGC_HYST_CNTL, @@ -1179,7 +1239,7 @@ static void a6xx_set_cp_protect(struct msm_gpu *gpu) count = ARRAY_SIZE(a660_protect); count_max = 48; BUILD_BUG_ON(ARRAY_SIZE(a660_protect) > 48); - } else if (adreno_is_a730(adreno_gpu)) { + } else if (adreno_is_a730(adreno_gpu) || adreno_is_a740(adreno_gpu)) { regs = a730_protect; count = ARRAY_SIZE(a730_protect); count_max = 48; @@ -1252,7 +1312,8 @@ static void a6xx_set_ubwc_config(struct msm_gpu *gpu) if (adreno_is_a650(adreno_gpu) || adreno_is_a660(adreno_gpu) || - adreno_is_a730(adreno_gpu)) { + adreno_is_a730(adreno_gpu) || + adreno_is_a740_family(adreno_gpu)) { /* TODO: get ddr type from bootloader and use 2 for LPDDR4 */ hbb_lo = 3; amsbc = 1; @@ -1545,6 +1606,7 @@ static int hw_init(struct msm_gpu *gpu) struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); struct a6xx_gmu *gmu = &a6xx_gpu->gmu; + u64 gmem_range_min; int ret; if (!adreno_has_gmu_wrapper(adreno_gpu)) { @@ -1635,11 +1697,13 @@ static int hw_init(struct msm_gpu *gpu) if (!(adreno_is_a650_family(adreno_gpu) || adreno_is_a730(adreno_gpu))) { + gmem_range_min = adreno_is_a740_family(adreno_gpu) ? SZ_16M : SZ_1M; + /* Set the GMEM VA range [0x100000:0x100000 + gpu->gmem - 1] */ - gpu_write64(gpu, REG_A6XX_UCHE_GMEM_RANGE_MIN, 0x00100000); + gpu_write64(gpu, REG_A6XX_UCHE_GMEM_RANGE_MIN, gmem_range_min); gpu_write64(gpu, REG_A6XX_UCHE_GMEM_RANGE_MAX, - 0x00100000 + adreno_gpu->info->gmem - 1); + gmem_range_min + adreno_gpu->info->gmem - 1); } if (adreno_is_a7xx(adreno_gpu)) @@ -1704,7 +1768,8 @@ static int hw_init(struct msm_gpu *gpu) a6xx_set_ubwc_config(gpu); /* Enable fault detection */ - if (adreno_is_a730(adreno_gpu)) + if (adreno_is_a730(adreno_gpu) || + adreno_is_a740_family(adreno_gpu)) gpu_write(gpu, REG_A6XX_RBBM_INTERFACE_HANG_INT_CNTL, (1 << 30) | 0xcfffff); else if (adreno_is_a619(adreno_gpu)) gpu_write(gpu, REG_A6XX_RBBM_INTERFACE_HANG_INT_CNTL, (1 << 30) | 0x3fffff); @@ -2796,7 +2861,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) !!(config->info->quirks & ADRENO_QUIRK_HAS_HW_APRIV); /* gpu->info only gets assigned in adreno_gpu_init() */ - is_a7xx = config->info->family == ADRENO_7XX_GEN1; + is_a7xx = config->info->family == ADRENO_7XX_GEN1 || + config->info->family == ADRENO_7XX_GEN2; a6xx_llc_slices_init(pdev, a6xx_gpu, is_a7xx); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_hfi.c b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c index 3865cd44523c..cdb3f6e74d3e 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_hfi.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c @@ -565,6 +565,31 @@ static void a730_build_bw_table(struct a6xx_hfi_msg_bw_table *msg) msg->cnoc_cmds_data[1][0] = 0x60000001; } +static void a740_build_bw_table(struct a6xx_hfi_msg_bw_table *msg) +{ + msg->bw_level_num = 1; + + msg->ddr_cmds_num = 3; + msg->ddr_wait_bitmask = 0x7; + + msg->ddr_cmds_addrs[0] = cmd_db_read_addr("SH0"); + msg->ddr_cmds_addrs[1] = cmd_db_read_addr("MC0"); + msg->ddr_cmds_addrs[2] = cmd_db_read_addr("ACV"); + + msg->ddr_cmds_data[0][0] = 0x40000000; + msg->ddr_cmds_data[0][1] = 0x40000000; + 
msg->ddr_cmds_data[0][2] = 0x40000000; + + /* TODO: add a proper dvfs table */ + + msg->cnoc_cmds_num = 1; + msg->cnoc_wait_bitmask = 0x1; + + msg->cnoc_cmds_addrs[0] = cmd_db_read_addr("CN0"); + msg->cnoc_cmds_data[0][0] = 0x40000000; + msg->cnoc_cmds_data[1][0] = 0x60000001; +} + static void a6xx_build_bw_table(struct a6xx_hfi_msg_bw_table *msg) { /* Send a single "off" entry since the 630 GMU doesn't do bus scaling */ @@ -625,6 +650,8 @@ static int a6xx_hfi_send_bw_table(struct a6xx_gmu *gmu) a690_build_bw_table(&msg); else if (adreno_is_a730(adreno_gpu)) a730_build_bw_table(&msg); + else if (adreno_is_a740_family(adreno_gpu)) + a740_build_bw_table(&msg); else a6xx_build_bw_table(&msg); diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c index 1373b62dce01..86d0b5793061 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_device.c +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c @@ -499,10 +499,27 @@ static const struct adreno_info gpulist[] = { }, .gmem = SZ_2M, .inactive_period = DRM_MSM_INACTIVE_PERIOD, + .quirks = ADRENO_QUIRK_HAS_CACHED_COHERENT | + ADRENO_QUIRK_HAS_HW_APRIV, .init = a6xx_gpu_init, .zapfw = "a730_zap.mdt", .hwcg = a730_hwcg, .address_space_size = SZ_16G, + }, { + .chip_ids = ADRENO_CHIP_IDS(0x43050a01), /* "C510v2" */ + .family = ADRENO_7XX_GEN2, + .fw = { + [ADRENO_FW_SQE] = "a740_sqe.fw", + [ADRENO_FW_GMU] = "gmu_gen70200.bin", + }, + .gmem = 3 * SZ_1M, + .inactive_period = DRM_MSM_INACTIVE_PERIOD, + .quirks = ADRENO_QUIRK_HAS_CACHED_COHERENT | + ADRENO_QUIRK_HAS_HW_APRIV, + .init = a6xx_gpu_init, + .zapfw = "a740_zap.mdt", + .hwcg = a740_hwcg, + .address_space_size = SZ_16G, }, }; diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index 1c31a8bd19da..874c30032ced 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -323,7 +323,11 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx, *value = adreno_gpu->info->gmem; return 0; case MSM_PARAM_GMEM_BASE: - *value = !adreno_is_a650_family(adreno_gpu) ? 
0x100000 : 0; + if (adreno_is_a650_family(adreno_gpu) || + adreno_is_a740_family(adreno_gpu)) + *value = 0; + else + *value = 0x100000; return 0; case MSM_PARAM_CHIP_ID: *value = adreno_gpu->chip_id; diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index 21e6940d2982..24271b2720fd 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -47,6 +47,7 @@ enum adreno_family { ADRENO_6XX_GEN3, /* a650 family */ ADRENO_6XX_GEN4, /* a660 family */ ADRENO_7XX_GEN1, /* a730 family */ + ADRENO_7XX_GEN2, /* a740 family */ }; #define ADRENO_QUIRK_TWO_PASS_USE_WFI BIT(0) @@ -76,7 +77,7 @@ struct adreno_reglist { }; extern const struct adreno_reglist a612_hwcg[], a615_hwcg[], a630_hwcg[], a640_hwcg[], a650_hwcg[]; -extern const struct adreno_reglist a660_hwcg[], a690_hwcg[], a730_hwcg[]; +extern const struct adreno_reglist a660_hwcg[], a690_hwcg[], a730_hwcg[], a740_hwcg[]; struct adreno_speedbin { uint16_t fuse; @@ -407,10 +408,23 @@ static inline int adreno_is_a730(struct adreno_gpu *gpu) return gpu->info->chip_ids[0] == 0x07030001; } +static inline int adreno_is_a740(struct adreno_gpu *gpu) +{ + return gpu->info->chip_ids[0] == 0x43050a01; +} + +/* Placeholder to make future diffs smaller */ +static inline int adreno_is_a740_family(struct adreno_gpu *gpu) +{ + if (WARN_ON_ONCE(!gpu->info)) + return false; + return gpu->info->family == ADRENO_7XX_GEN2; +} + static inline int adreno_is_a7xx(struct adreno_gpu *gpu) { /* Update with non-fake (i.e. non-A702) Gen 7 GPUs */ - return adreno_is_a730(gpu); + return adreno_is_a730(gpu) || adreno_is_a740_family(gpu); } u64 adreno_private_address_space_size(struct msm_gpu *gpu); From patchwork Tue Aug 8 21:02:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Konrad Dybcio X-Patchwork-Id: 711638 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E85C7C001E0 for ; Tue, 8 Aug 2023 21:04:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235696AbjHHVEY (ORCPT ); Tue, 8 Aug 2023 17:04:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53370 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235705AbjHHVEA (ORCPT ); Tue, 8 Aug 2023 17:04:00 -0400 Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com [IPv6:2a00:1450:4864:20::233]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 64D7167C7 for ; Tue, 8 Aug 2023 14:03:17 -0700 (PDT) Received: by mail-lj1-x233.google.com with SMTP id 38308e7fff4ca-2b9cdbf682eso94508601fa.2 for ; Tue, 08 Aug 2023 14:03:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1691528595; x=1692133395; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=F8zHT0Zsqj0DMiFej1WNo9eVzZZclVqVsx7269nV/OQ=; b=pqR6K3CgsFQcjWft6t18Ysccvsoskmce1p14E9WnFNA41dxkcLK8Io49JTfkjHk7o5 TbUBqMP3ZperyK3G2YGX+khvtOhISg59nw1U4Yaoj3JLLr9hF0fMdOrGsWd2XK4go4Ap /QhTJEhJ1CvPINBUPFIj+rySRnthLLEN0UeggyvwBZBg7YrA9p7ovInlg75GaT9D4Drt 5XEQAHC5UKF7O4WDTCBDjfXbZ7zcoAJBJXGkMMIBtRbzpoxF8JGFL8IUp7hOHDlNaZTw OJEigzLKz0wqn76RtuzvibmWYKvRsRqaqePsOOjs0VSWKUH2zSVp83dVZd6TyxYOAl5o bW3A== 
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691528595; x=1692133395; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=F8zHT0Zsqj0DMiFej1WNo9eVzZZclVqVsx7269nV/OQ=; b=iKjVdT0AS2OMKdTLN5xkNfRJQl+BngdaOBPRJ9R3ysjW5M164Ij7MrzOxg6czEEF3H AlgCXZJtkVgOZYnS3VkPKn5AHkZy51gPwBpDOosJIMOIjtIUGiBPXSjF1GCA3trhADms kpop/iT/z2KjmHnP6LPpG6qx2hwl4mWXER7ZTjMFqiX+eaSyaY5fhmTqvLDLTcuEQAyU YsQf/KbS8HxIQt/qcE4z6jRnr2u1DJ/VnUn+td0wOQQOr1B8aDdxGNL/otADXm5jqGlW uZvG5nVbSu3WksTuCN5yr75pu3oR92Uvi2xkyS6bDEuPfmXEcVMzHPNRruXjlHM9LhCQ XjxA== X-Gm-Message-State: AOJu0YyszzgURhVw4UkiRG7uZ1GO/ocfGpui0UhpmRgL5pjQuKNxFxmG EYQs0Am//x0sRxjOyVxFs7X/NA== X-Google-Smtp-Source: AGHT+IHEk5xv0mOBDNw2eWsuftaz3iiVhy7A62TD88UdcHsAnsRdnmxXvi3j2UDwm7XRbKulViS/Ig== X-Received: by 2002:a2e:3c10:0:b0:2b6:b6c4:6e79 with SMTP id j16-20020a2e3c10000000b002b6b6c46e79mr466475lja.1.1691528595593; Tue, 08 Aug 2023 14:03:15 -0700 (PDT) Received: from [192.168.1.101] (abxi185.neoplus.adsl.tpnet.pl. [83.9.2.185]) by smtp.gmail.com with ESMTPSA id h11-20020a2eb0eb000000b002b6cc17add3sm2431483ljl.25.2023.08.08.14.03.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 08 Aug 2023 14:03:14 -0700 (PDT) From: Konrad Dybcio Date: Tue, 08 Aug 2023 23:02:51 +0200 Subject: [PATCH v2 13/14] drm/msm/a6xx: Vastly increase HFI timeout MIME-Version: 1.0 Message-Id: <20230628-topic-a7xx_drmmsm-v2-13-1439e1b2343f@linaro.org> References: <20230628-topic-a7xx_drmmsm-v2-0-1439e1b2343f@linaro.org> In-Reply-To: <20230628-topic-a7xx_drmmsm-v2-0-1439e1b2343f@linaro.org> To: Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , Daniel Vetter , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Bjorn Andersson Cc: Marijn Suijten , linux-arm-msm@vger.kernel.org, dri-devel@lists.freedesktop.org, freedreno@lists.freedesktop.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, Konrad Dybcio , Neil Armstrong X-Mailer: b4 0.12.2 X-Developer-Signature: v=1; a=ed25519-sha256; t=1691528566; l=1002; i=konrad.dybcio@linaro.org; s=20230215; h=from:subject:message-id; bh=QgEQhQstPekw5tfzihDoKDYHTujiIQxnO5xLgEm4zT8=; b=+RtC6tfWJbx8pi3yWGD9Tybj7ws14q5BakBfIVJOPco0LbYKpv9r+tu8rXNfWunG2iFFW9Vqa imn604AJMwSC6s98nG1DE/rv1xVICybZjdZwSf5TWaxdTiXXA06tHZL X-Developer-Key: i=konrad.dybcio@linaro.org; a=ed25519; pk=iclgkYvtl2w05SSXO5EjjSYlhFKsJ+5OSZBjOkQuEms= Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org A7xx GMUs can be slow as molasses at times. Increase the timeout to 1 second to match the vendor driver. 
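For reference, gmu_poll_timeout() takes its last two arguments in microseconds, so the hunk below stretches the HFI ack wait from 5 ms to a full second while still sampling every 100 us. A minimal standalone sketch of the same polling pattern, assuming readl_poll_timeout()-style semantics; the function name and the BIT(0) mask are illustrative, not the driver's actual definitions:

#include <linux/bits.h>
#include <linux/iopoll.h>
#include <linux/types.h>

/*
 * Illustrative sketch of the poll the patch relaxes: wait up to 1 s
 * (1000000 us), checking every 100 us, for the GMU to raise the HFI
 * message-queue ack bit. BIT(0) stands in for the real MSGQ mask.
 */
static int example_wait_for_hfi_ack(void __iomem *gmu2host_intr_info)
{
	u32 val;

	return readl_poll_timeout(gmu2host_intr_info, val,
				  val & BIT(0), 100, 1000000);
}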
Tested-by: Neil Armstrong # on SM8550-QRD Tested-by: Dmitry Baryshkov # sm8450 Signed-off-by: Konrad Dybcio --- drivers/gpu/drm/msm/adreno/a6xx_hfi.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_hfi.c b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c index cdb3f6e74d3e..e25ddb82a087 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_hfi.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c @@ -108,7 +108,7 @@ static int a6xx_hfi_wait_for_ack(struct a6xx_gmu *gmu, u32 id, u32 seqnum, /* Wait for a response */ ret = gmu_poll_timeout(gmu, REG_A6XX_GMU_GMU2HOST_INTR_INFO, val, - val & A6XX_GMU_GMU2HOST_INTR_INFO_MSGQ, 100, 5000); + val & A6XX_GMU_GMU2HOST_INTR_INFO_MSGQ, 100, 1000000); if (ret) { DRM_DEV_ERROR(gmu->dev, From patchwork Tue Aug 8 21:02:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Konrad Dybcio X-Patchwork-Id: 712823 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 83507C04A94 for ; Tue, 8 Aug 2023 21:04:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235919AbjHHVEf (ORCPT ); Tue, 8 Aug 2023 17:04:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53426 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234470AbjHHVEN (ORCPT ); Tue, 8 Aug 2023 17:04:13 -0400 Received: from mail-lj1-x22b.google.com (mail-lj1-x22b.google.com [IPv6:2a00:1450:4864:20::22b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 13C2A6A6B for ; Tue, 8 Aug 2023 14:03:21 -0700 (PDT) Received: by mail-lj1-x22b.google.com with SMTP id 38308e7fff4ca-2b9db1de50cso94218041fa.3 for ; Tue, 08 Aug 2023 14:03:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1691528597; x=1692133397; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=QHqFXZ6x0OPm4ZMw7WQeV7WZNp4cmRIh/Av5DgwSNhQ=; b=GQUbyw4eRYjKwQkU5ZMU/yeNnQVDHGDQdMvp/qWJGIY57WAbwVGFi3beDujQ6DoFkD deiVVsvTtEZZT6uz3lACeVQkh5886v4aLsOAyuUfgqCdxir+SLFu3rEcv/obMnTgAxV6 yeT6l1GhG/sudeo/uTTdKZ6jai9Xn4aKc26RgdSxgq++VhsSyRGJvplJDSb6NoUkD/GT n7gasiqw7oDvjknFzGddWUp60RVM3W+FruMhz6u1/NVVghH+5FQtkH0QLVLDIku2uxVB KXS6GSjqoyok0rp9k5ki1PYoDqyKfqxCOF6v9cL4guXjzVK7F5scYE3AT77rYIwvzera R0dw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691528597; x=1692133397; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=QHqFXZ6x0OPm4ZMw7WQeV7WZNp4cmRIh/Av5DgwSNhQ=; b=DVZGmb4fdiE+m24sux+fIfk1IotXiODPPVK+e8i4voZJ6RYXO+x8VPkXaAq52HZ85H oHSpcn/7Q6K9z0MhqxC74hBprG77yLH0J5qwZ0G+sp0NNVuqfpunJikkZbGAlvTaDufM m7KbCQ0hG3JCAaNW9oNCL3mD4birR9g06Nd9XHWeheARZen2R4CPo7jpWBUUf1NYXwCr Glf09Qs2jAtc972aMdoEt4RUvF93R6+Es6ihhRlQPEx8X+lJcpkEIdZfvkmjMwzyOzbB 7o67rVb9huzQilf3in0zYd2tyKYg66RGW6h970+NcThjAr5HxvaE7VEUhjdT+d1s5J7e xEOQ== X-Gm-Message-State: AOJu0YzuRen87Z3l/bTmCxsQ+Jfhe6zo59Mv6zbyyty/57Vg/QIz7xy7 B59Sv3s5SPDkg5Uy30jll87/Nw== X-Google-Smtp-Source: AGHT+IFIE0Szj/KgGDm5MrS0BWlSGQSLooWZrPv7OJdSkMIOa28C2Xb44HY47TOrwSvtshviQefZoA== X-Received: by 2002:a2e:9d54:0:b0:2b9:dfd1:3808 with SMTP id 
y20-20020a2e9d54000000b002b9dfd13808mr460068ljj.25.1691528597563; Tue, 08 Aug 2023 14:03:17 -0700 (PDT) Received: from [192.168.1.101] (abxi185.neoplus.adsl.tpnet.pl. [83.9.2.185]) by smtp.gmail.com with ESMTPSA id h11-20020a2eb0eb000000b002b6cc17add3sm2431483ljl.25.2023.08.08.14.03.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 08 Aug 2023 14:03:17 -0700 (PDT) From: Konrad Dybcio Date: Tue, 08 Aug 2023 23:02:52 +0200 Subject: [PATCH v2 14/14] drm/msm/a6xx: Poll for GBIF unhalt status in hw_init MIME-Version: 1.0 Message-Id: <20230628-topic-a7xx_drmmsm-v2-14-1439e1b2343f@linaro.org> References: <20230628-topic-a7xx_drmmsm-v2-0-1439e1b2343f@linaro.org> In-Reply-To: <20230628-topic-a7xx_drmmsm-v2-0-1439e1b2343f@linaro.org> To: Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , Daniel Vetter , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Bjorn Andersson Cc: Marijn Suijten , linux-arm-msm@vger.kernel.org, dri-devel@lists.freedesktop.org, freedreno@lists.freedesktop.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, Konrad Dybcio , Neil Armstrong X-Mailer: b4 0.12.2 X-Developer-Signature: v=1; a=ed25519-sha256; t=1691528566; l=1394; i=konrad.dybcio@linaro.org; s=20230215; h=from:subject:message-id; bh=adxhdr38vQ+H8w1ud+VW4JLIkAtv3z4AL0XqkuOgPHs=; b=XQDGBxuHYfbbYepOx78RUkDMexkbi9oIwe5G+gOBDPTNfeODBiBmltLstJo8wG7FOZvlbmixK dJY8My1JyIxAjexcNUki8A3SAwf+Ej9E7lDVtfvwQXhZtaZ6C1DidhH X-Developer-Key: i=konrad.dybcio@linaro.org; a=ed25519; pk=iclgkYvtl2w05SSXO5EjjSYlhFKsJ+5OSZBjOkQuEms= Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Some GPUs - particularly A7xx ones - are really really stubborn and sometimes take a longer-than-expected time to finish unhalting GBIF. Note that this is not caused by the request a few lines above. Poll for the unhalt ack to make sure we're not trying to write bits to an essentially dead GPU that can't receive data on its end of the bus. Failing to do this will result in inexplicable GMU timeouts or worse. This is a rather ugly hack which introduces a whole lot of latency. Tested-by: Neil Armstrong # on SM8550-QRD Tested-by: Dmitry Baryshkov # sm8450 Signed-off-by: Konrad Dybcio --- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 2313620084b6..11cb410e0ac7 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -1629,6 +1629,10 @@ static int hw_init(struct msm_gpu *gpu) mb(); } + /* Some GPUs are stubborn and take their sweet time to unhalt GBIF! */ + if (adreno_is_a7xx(adreno_gpu) && a6xx_has_gbif(adreno_gpu)) + spin_until(!gpu_read(gpu, REG_A6XX_GBIF_HALT_ACK)); + gpu_write(gpu, REG_A6XX_RBBM_SECVID_TSB_CNTL, 0); if (adreno_is_a619_holi(adreno_gpu))