Message ID | 1620807083-5451-1-git-send-email-sibis@codeaurora.org
---|---
Series | DDR/L3 Scaling support on SC7280 SoCs
On Wed, May 12, 2021 at 01:41:23PM +0530, Sibi Sankar wrote:
> Add OPP tables required to scale DDR/L3 per freq-domain on SC7280 SoCs.
>
> Reviewed-by: Douglas Anderson <dianders@chromium.org>
> Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
> ---
>
> V3:
> * Rename cpu opp table nodes [Matthias]
> * Rename opp phandles [Doug]
>
> Depends on the following patch series:
> L3 Provider Support: https://lore.kernel.org/lkml/1618556290-28303-1-git-send-email-okukatla@codeaurora.org/
> CPUfreq Support: https://lore.kernel.org/lkml/1618020280-5470-2-git-send-email-tdas@codeaurora.org/
> RPMH Provider Support: https://lore.kernel.org/lkml/1619517059-12109-1-git-send-email-okukatla@codeaurora.org/
>
> It also depends on L3 and cpufreq dt nodes from the ^^ series to not have
> overlapping memory regions.
>
>  arch/arm64/boot/dts/qcom/sc7280.dtsi | 215 +++++++++++++++++++++++++++++++++++
>  1 file changed, 215 insertions(+)
>
> diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
> index 0bb835aeae33..89ec11eb7fc0 100644
> --- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
> +++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
> @@ -7,6 +7,7 @@
>
>  #include <dt-bindings/clock/qcom,gcc-sc7280.h>
>  #include <dt-bindings/clock/qcom,rpmh.h>
> +#include <dt-bindings/interconnect/qcom,osm-l3.h>
>  #include <dt-bindings/interconnect/qcom,sc7280.h>
>  #include <dt-bindings/interrupt-controller/arm-gic.h>
>  #include <dt-bindings/mailbox/qcom-ipcc.h>
> @@ -71,6 +72,9 @@
>  					   &LITTLE_CPU_SLEEP_1
>  					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_0>;
> +			operating-points-v2 = <&cpu0_opp_table>;
> +			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
> +					<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
>  			qcom,freq-domain = <&cpufreq_hw 0>;
>  			L2_0: l2-cache {
>  				compatible = "cache";
> @@ -90,6 +94,9 @@
>  					   &LITTLE_CPU_SLEEP_1
>  					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_100>;
> +			operating-points-v2 = <&cpu0_opp_table>;
> +			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
> +					<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
>  			qcom,freq-domain = <&cpufreq_hw 0>;
>  			L2_100: l2-cache {
>  				compatible = "cache";
> @@ -106,6 +113,9 @@
>  					   &LITTLE_CPU_SLEEP_1
>  					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_200>;
> +			operating-points-v2 = <&cpu0_opp_table>;
> +			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
> +					<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
>  			qcom,freq-domain = <&cpufreq_hw 0>;
>  			L2_200: l2-cache {
>  				compatible = "cache";
> @@ -122,6 +132,9 @@
>  					   &LITTLE_CPU_SLEEP_1
>  					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_300>;
> +			operating-points-v2 = <&cpu0_opp_table>;
> +			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
> +					<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
>  			qcom,freq-domain = <&cpufreq_hw 0>;
>  			L2_300: l2-cache {
>  				compatible = "cache";
> @@ -138,6 +151,9 @@
>  					   &BIG_CPU_SLEEP_1
>  					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_400>;
> +			operating-points-v2 = <&cpu4_opp_table>;
> +			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
> +					<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
>  			qcom,freq-domain = <&cpufreq_hw 1>;
>  			L2_400: l2-cache {
>  				compatible = "cache";
> @@ -154,6 +170,9 @@
>  					   &BIG_CPU_SLEEP_1
>  					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_500>;
> +			operating-points-v2 = <&cpu4_opp_table>;
> +			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
> +					<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
>  			qcom,freq-domain = <&cpufreq_hw 1>;
>  			L2_500: l2-cache {
>  				compatible = "cache";
> @@ -170,6 +189,9 @@
>  					   &BIG_CPU_SLEEP_1
>  					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_600>;
> +			operating-points-v2 = <&cpu4_opp_table>;
> +			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
> +					<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
>  			qcom,freq-domain = <&cpufreq_hw 1>;
>  			L2_600: l2-cache {
>  				compatible = "cache";
> @@ -186,6 +208,9 @@
>  					   &BIG_CPU_SLEEP_1
>  					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_700>;
> +			operating-points-v2 = <&cpu7_opp_table>;
> +			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
> +					<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
>  			qcom,freq-domain = <&cpufreq_hw 2>;
>  			L2_700: l2-cache {
>  				compatible = "cache";
> @@ -248,6 +273,196 @@
>  		};
>  	};
>
> +	cpu0_opp_table: cpu0-opp-table {
> +		compatible = "operating-points-v2";
> +		opp-shared;
> +
> +		cpu0_opp_300mhz: opp-300000000 {
> +			opp-hz = /bits/ 64 <300000000>;
> +			opp-peak-kBps = <800000 9600000>;
> +		};
> +
> +		cpu0_opp_691mhz: opp-691200000 {
> +			opp-hz = /bits/ 64 <691200000>;
> +			opp-peak-kBps = <800000 17817600>;
> +		};
> +
> +		cpu0_opp_806mhz: opp-806400000 {
> +			opp-hz = /bits/ 64 <806400000>;
> +			opp-peak-kBps = <800000 20889600>;
> +		};
> +
> +		cpu0_opp_940mhz: opp-940800000 {

nit: one could argue that, rounded, it's 941 MHz. Same for some other
OPPs. Not super-important though, so:

Reviewed-by: Matthias Kaehlcke <mka@chromium.org>
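
To make the rounding nit concrete, here is a minimal Python sketch (illustrative
only, not part of the patch) that takes the opp-hz values quoted above and
compares the truncated MHz value used in the phandle names with the rounded one:

# Illustrative sketch only: compare truncated vs. rounded MHz for the
# opp-hz values quoted in the hunk above.
opp_hz = [300_000_000, 691_200_000, 806_400_000, 940_800_000]

for hz in opp_hz:
    mhz = hz / 1_000_000        # exact value in MHz, e.g. 940.8
    truncated = int(mhz)        # style used by the phandle names, e.g. cpu0_opp_940mhz
    rounded = round(mhz)        # style the nit suggests, e.g. 941 MHz
    print(f"opp-{hz}: {mhz:.1f} MHz -> name says {truncated}, rounded is {rounded}")

With the values shown in this hunk, only opp-940800000 differs: truncation gives
940 (as in cpu0_opp_940mhz), while rounding gives 941.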