
[v1,0/6] NVIDIA Tegra memory improvements

Message ID 20210329194602.17049-1-digetx@gmail.com

Message

Dmitry Osipenko March 29, 2021, 7:45 p.m. UTC
Hi,

This series replaces the raw voltage regulator with a power domain that
will manage the SoC core voltage. The core power domain patches are still
under review, but at this point it's clear that this is how we will
implement the DVFS support.

The remaining Tegra20 memory bindings are converted to schema. I also
made a small improvement to the memory drivers.

Dmitry Osipenko (6):
  dt-bindings: memory: tegra20: emc: Replace core regulator with power
    domain
  dt-bindings: memory: tegra30: emc: Replace core regulator with power
    domain
  dt-bindings: memory: tegra124: emc: Replace core regulator with power
    domain
  dt-bindings: memory: tegra20: mc: Convert to schema
  dt-bindings: memory: tegra20: emc: Convert to schema
  memory: tegra: Print out info-level once per driver probe

 .../nvidia,tegra124-emc.yaml                  |   7 +-
 .../memory-controllers/nvidia,tegra20-emc.txt | 130 --------
 .../nvidia,tegra20-emc.yaml                   | 294 ++++++++++++++++++
 .../memory-controllers/nvidia,tegra20-mc.txt  |  40 ---
 .../memory-controllers/nvidia,tegra20-mc.yaml |  78 +++++
 .../nvidia,tegra30-emc.yaml                   |   7 +-
 drivers/memory/tegra/tegra124-emc.c           |  12 +-
 drivers/memory/tegra/tegra20-emc.c            |  20 +-
 drivers/memory/tegra/tegra30-emc.c            |  18 +-
 9 files changed, 405 insertions(+), 201 deletions(-)
 delete mode 100644 Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.txt
 create mode 100644 Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.yaml
 delete mode 100644 Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-mc.txt
 create mode 100644 Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-mc.yaml

Comments

Krzysztof Kozlowski March 30, 2021, 8:48 a.m. UTC | #1
On 29/03/2021 21:46, Dmitry Osipenko wrote:
> Convert Tegra20 External Memory Controller binding to schema.
> 
> Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
> ---
>  .../memory-controllers/nvidia,tegra20-emc.txt | 130 --------
>  .../nvidia,tegra20-emc.yaml                   | 294 ++++++++++++++++++
>  2 files changed, 294 insertions(+), 130 deletions(-)
>  delete mode 100644 Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.txt
>  create mode 100644 Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.yaml
> 
> diff --git a/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.txt b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.txt
> deleted file mode 100644
> index d2250498c36d..000000000000
> --- a/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.txt
> +++ /dev/null
> @@ -1,130 +0,0 @@
> -Embedded Memory Controller
> -
> -Properties:
> -- name : Should be emc
> -- #address-cells : Should be 1
> -- #size-cells : Should be 0
> -- compatible : Should contain "nvidia,tegra20-emc".
> -- reg : Offset and length of the register set for the device
> -- nvidia,use-ram-code : If present, the sub-nodes will be addressed
> -  and chosen using the ramcode board selector. If omitted, only one
> -  set of tables can be present and said tables will be used
> -  irrespective of ram-code configuration.
> -- interrupts : Should contain EMC General interrupt.
> -- clocks : Should contain EMC clock.
> -- nvidia,memory-controller : Phandle of the Memory Controller node.
> -- #interconnect-cells : Should be 0.
> -- operating-points-v2: See ../bindings/opp/opp.txt for details.
> -
> -For each opp entry in 'operating-points-v2' table:
> -- opp-supported-hw: One bitfield indicating SoC process ID mask
> -
> -	A bitwise AND is performed against this value and if any bit
> -	matches, the OPP gets enabled.
> -
> -Optional properties:
> -- power-domains: Phandle of the SoC "core" power domain.
> -
> -Child device nodes describe the memory settings for different configurations and clock rates.
> -
> -Example:
> -
> -	opp_table: opp-table {
> -		compatible = "operating-points-v2";
> -
> -		opp@36000000 {
> -			opp-microvolt = <950000 950000 1300000>;
> -			opp-hz = /bits/ 64 <36000000>;
> -		};
> -		...
> -	};
> -
> -	memory-controller@7000f400 {
> -		#address-cells = < 1 >;
> -		#size-cells = < 0 >;
> -		#interconnect-cells = <0>;
> -		compatible = "nvidia,tegra20-emc";
> -		reg = <0x7000f400 0x400>;
> -		interrupts = <0 78 0x04>;
> -		clocks = <&tegra_car TEGRA20_CLK_EMC>;
> -		nvidia,memory-controller = <&mc>;
> -		power-domains = <&domain>;
> -		operating-points-v2 = <&opp_table>;
> -	}
> -
> -
> -Embedded Memory Controller ram-code table
> -
> -If the emc node has the nvidia,use-ram-code property present, then the
> -next level of nodes below the emc table are used to specify which settings
> -apply for which ram-code settings.
> -
> -If the emc node lacks the nvidia,use-ram-code property, this level is omitted
> -and the tables are stored directly under the emc node (see below).
> -
> -Properties:
> -
> -- name : Should be emc-tables
> -- nvidia,ram-code : the binary representation of the ram-code board strappings
> -  for which this node (and children) are valid.
> -
> -
> -
> -Embedded Memory Controller configuration table
> -
> -This is a table containing the EMC register settings for the various
> -operating speeds of the memory controller. They are always located as
> -subnodes of the emc controller node.
> -
> -There are two ways of specifying which tables to use:
> -
> -* The simplest is if there is just one set of tables in the device tree,
> -  and they will always be used (based on which frequency is used).
> -  This is the preferred method, especially when firmware can fill in
> -  this information based on the specific system information and just
> -  pass it on to the kernel.
> -
> -* The slightly more complex one is when more than one memory configuration
> -  might exist on the system.  The Tegra20 platform handles this during
> -  early boot by selecting one out of possible 4 memory settings based
> -  on a 2-pin "ram code" bootstrap setting on the board. The values of
> -  these strappings can be read through a register in the SoC, and thus
> -  used to select which tables to use.
> -
> -Properties:
> -- name : Should be emc-table
> -- compatible : Should contain "nvidia,tegra20-emc-table".
> -- reg : either an opaque enumerator to tell different tables apart, or
> -  the valid frequency for which the table should be used (in kHz).
> -- clock-frequency : the clock frequency for the EMC at which this
> -  table should be used (in kHz).
> -- nvidia,emc-registers : a 46 word array of EMC registers to be programmed
> -  for operation at the 'clock-frequency' setting.
> -  The order and contents of the registers are:
> -    RC, RFC, RAS, RP, R2W, W2R, R2P, W2P, RD_RCD, WR_RCD, RRD, REXT,
> -    WDV, QUSE, QRST, QSAFE, RDV, REFRESH, BURST_REFRESH_NUM, PDEX2WR,
> -    PDEX2RD, PCHG2PDEN, ACT2PDEN, AR2PDEN, RW2PDEN, TXSR, TCKE, TFAW,
> -    TRPAB, TCLKSTABLE, TCLKSTOP, TREFBW, QUSE_EXTRA, FBIO_CFG6, ODT_WRITE,
> -    ODT_READ, FBIO_CFG5, CFG_DIG_DLL, DLL_XFORM_DQS, DLL_XFORM_QUSE,
> -    ZCAL_REF_CNT, ZCAL_WAIT_CNT, AUTO_CAL_INTERVAL, CFG_CLKTRIM_0,
> -    CFG_CLKTRIM_1, CFG_CLKTRIM_2
> -
> -		emc-table@166000 {
> -			reg = <166000>;
> -			compatible = "nvidia,tegra20-emc-table";
> -			clock-frequency = < 166000 >;
> -			nvidia,emc-registers = < 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> -						 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> -						 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> -						 0 0 0 0 >;
> -		};
> -
> -		emc-table@333000 {
> -			reg = <333000>;
> -			compatible = "nvidia,tegra20-emc-table";
> -			clock-frequency = < 333000 >;
> -			nvidia,emc-registers = < 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> -						 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> -						 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> -						 0 0 0 0 >;
> -		};
> diff --git a/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.yaml b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.yaml
> new file mode 100644
> index 000000000000..9665fdd80b22
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.yaml
> @@ -0,0 +1,294 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/memory-controllers/nvidia,tegra20-emc.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: NVIDIA Tegra20 SoC External Memory Controller
> +
> +maintainers:
> +  - Dmitry Osipenko <digetx@gmail.com>
> +  - Jon Hunter <jonathanh@nvidia.com>
> +  - Thierry Reding <thierry.reding@gmail.com>
> +
> +description: |
> +  The External Memory Controller (EMC) interfaces with the off-chip SDRAM to
> +  service the request stream sent from Memory Controller. The EMC also has
> +  various performance-affecting settings beyond the obvious SDRAM configuration
> +  parameters and initialization settings. Tegra20 EMC supports multiple JEDEC
> +  standard protocols: DDR1, LPDDR2 and DDR2.
> +
> +properties:
> +  compatible:
> +    const: nvidia,tegra20-emc
> +
> +  reg:
> +    maxItems: 1
> +
> +  clocks:
> +    maxItems: 1

An idea for a follow-up patch (not needed here) - add the clock-names
and get the clock by name in the driver.
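
As a rough sketch of that follow-up (not part of this series; the "emc" clock name is an assumption), the binding addition could look like:

```yaml
# Hypothetical follow-up: give the EMC clock a name so the driver can
# request it with devm_clk_get(dev, "emc") instead of by index.
clocks:
  maxItems: 1

clock-names:
  items:
    - const: emc
```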

> +
> +  interrupts:
> +    maxItems: 1
> +
> +  "#address-cells":
> +    const: 1
> +
> +  "#size-cells":
> +    const: 0
> +
> +  "#interconnect-cells":
> +    const: 0
> +
> +  nvidia,memory-controller:
> +    $ref: /schemas/types.yaml#/definitions/phandle
> +    description:
> +      Phandle of the Memory Controller node.
> +
> +  power-domains:
> +    $ref: /schemas/types.yaml#/definitions/phandle
> +    description:
> +      Phandle of the SoC "core" power domain.

I think the core checks the type, so you only need to limit max items.

> +
> +  operating-points-v2:
> +    description:
> +      Should contain freqs and voltages and opp-supported-hw property, which
> +      is a bitfield indicating SoC process ID mask.
> +
> +  nvidia,use-ram-code:
> +    type: boolean
> +    description:
> +      If present, the emc-tables@ sub-nodes will be addressed.
> +
> +patternProperties:
> +  "^emc-table@[0-9]+$":

This might not be easy but you should add constraints when emc-table and
emc-tables are expected. The schema should check if proper node is used
depending on "nvidia,use-ram-code".
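
For reference, the kind of constraint being asked about is sometimes sketched with a json-schema if/then block keyed on the property's presence — purely illustrative here, and the rest of the thread discusses whether such a check is actually doable for these pattern nodes:

```yaml
# Sketch (assumption): select which table-node pattern is allowed
# based on whether nvidia,use-ram-code is present.
if:
  required:
    - nvidia,use-ram-code
then:
  patternProperties:
    "^emc-tables@[0-9]+$":
      type: object
else:
  patternProperties:
    "^emc-table@[0-9]+$":
      type: object
```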

> +    type: object
> +    properties:
> +      compatible:
> +        const: nvidia,tegra20-emc-table
> +
> +      clock-frequency:
> +        description:
> +          Memory clock rate in kHz.
> +        minimum: 1000
> +        maximum: 900000
> +
> +      reg:
> +        maxItems: 1
> +        description:
> +          Either an opaque enumerator to tell different tables apart, or
> +          the valid frequency for which the table should be used (in kHz).
> +
> +      nvidia,emc-registers:
> +        description:
> +          EMC timing characterization data. These are the registers
> +          (see section "15.4.1 EMC Registers" in the TRM) whose values
> +          need to be specified, according to the board documentation.
> +        $ref: /schemas/types.yaml#/definitions/uint32-array
> +        items:
> +          - description: EMC_RC
> +          - description: EMC_RFC
> +          - description: EMC_RAS
> +          - description: EMC_RP
> +          - description: EMC_R2W
> +          - description: EMC_W2R
> +          - description: EMC_R2P
> +          - description: EMC_W2P
> +          - description: EMC_RD_RCD
> +          - description: EMC_WR_RCD
> +          - description: EMC_RRD
> +          - description: EMC_REXT
> +          - description: EMC_WDV
> +          - description: EMC_QUSE
> +          - description: EMC_QRST
> +          - description: EMC_QSAFE
> +          - description: EMC_RDV
> +          - description: EMC_REFRESH
> +          - description: EMC_BURST_REFRESH_NUM
> +          - description: EMC_PDEX2WR
> +          - description: EMC_PDEX2RD
> +          - description: EMC_PCHG2PDEN
> +          - description: EMC_ACT2PDEN
> +          - description: EMC_AR2PDEN
> +          - description: EMC_RW2PDEN
> +          - description: EMC_TXSR
> +          - description: EMC_TCKE
> +          - description: EMC_TFAW
> +          - description: EMC_TRPAB
> +          - description: EMC_TCLKSTABLE
> +          - description: EMC_TCLKSTOP
> +          - description: EMC_TREFBW
> +          - description: EMC_QUSE_EXTRA
> +          - description: EMC_FBIO_CFG6
> +          - description: EMC_ODT_WRITE
> +          - description: EMC_ODT_READ
> +          - description: EMC_FBIO_CFG5
> +          - description: EMC_CFG_DIG_DLL
> +          - description: EMC_DLL_XFORM_DQS
> +          - description: EMC_DLL_XFORM_QUSE
> +          - description: EMC_ZCAL_REF_CNT
> +          - description: EMC_ZCAL_WAIT_CNT
> +          - description: EMC_AUTO_CAL_INTERVAL
> +          - description: EMC_CFG_CLKTRIM_0
> +          - description: EMC_CFG_CLKTRIM_1
> +          - description: EMC_CFG_CLKTRIM_2
> +
> +    required:
> +      - clock-frequency
> +      - compatible
> +      - reg
> +      - nvidia,emc-registers
> +
> +    additionalProperties: false
> +
> +  "^emc-tables@[a-z0-9\\-]+$":

Why \ and - in the pattern?

Best regards,
Krzysztof
Dmitry Osipenko March 30, 2021, 3:29 p.m. UTC | #2
30.03.2021 11:48, Krzysztof Kozlowski writes:
>> +  power-domains:
>> +    $ref: /schemas/types.yaml#/definitions/phandle
>> +    description:
>> +      Phandle of the SoC "core" power domain.
> I think the core checks the type, so you only need to limit max items.
> 

It's a bit confusing that both variants work and it's not apparent which
variant is better.

I actually used the max items limit initially and then changed it to
$ref phandle because it appeared to me that it's a better choice. I'll
switch back to the limit in v2, thanks.
Dmitry Osipenko March 30, 2021, 3:32 p.m. UTC | #3
30.03.2021 11:48, Krzysztof Kozlowski writes:
>> +  nvidia,use-ram-code:
>> +    type: boolean
>> +    description:
>> +      If present, the emc-tables@ sub-nodes will be addressed.
>> +
>> +patternProperties:
>> +  "^emc-table@[0-9]+$":
> This might not be easy but you should add constraints when emc-table and
> emc-tables are expected. The schema should check if proper node is used
> depending on "nvidia,use-ram-code".
> 

I'm afraid this is not doable. If you have an example of how to do this,
please share it with me.

I see that there is a "dependencies:", but it doesn't work with the
patterns, IIUC.
Dmitry Osipenko March 30, 2021, 3:34 p.m. UTC | #4
30.03.2021 11:48, Krzysztof Kozlowski writes:
>> +  "^emc-tables@[a-z0-9\\-]+$":
> Why \ and - in the pattern?

Good catch, I thought that '-' needed to be escaped, but then forgot to
remove the unnecessary backslashes.
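
A cleaned-up version of the pattern (whatever unit-address form v2 actually settles on is an assumption here) could simply be:

```yaml
# Sketch: inside a character class, '-' needs no escaping when placed
# last (or first), so the backslashes can be dropped.
patternProperties:
  "^emc-tables@[0-9a-z-]+$":
    type: object
```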
Dmitry Osipenko March 30, 2021, 3:56 p.m. UTC | #5
30.03.2021 18:29, Dmitry Osipenko writes:
> 30.03.2021 11:48, Krzysztof Kozlowski writes:
>>> +  power-domains:
>>> +    $ref: /schemas/types.yaml#/definitions/phandle
>>> +    description:
>>> +      Phandle of the SoC "core" power domain.
>> I think the core checks the type, so you only need to limit max items.
>>
> 
> It's a bit confusing that both variants work and it's not apparent which
> variant is better.
> 
> I actually used the max items limit initially and then changed it to
> $ref phandle because it appeared to me that it's a better choice. I'll
> switch back to the limit in v2, thanks.
> 

Although, I'm still not sure which is the best variant. Could you please
clarify why maxItems is better?

Seems the $ref phandle already limits domain items to 1, so I don't
understand what the difference is.
Rob Herring (Arm) March 30, 2021, 10:23 p.m. UTC | #6
On Mon, Mar 29, 2021 at 10:45:58PM +0300, Dmitry Osipenko wrote:
> Power domain fits much better than a voltage regulator in regards to
> a proper hardware description and from a software perspective as well.
> Hence replace the core regulator with the power domain. Note that this
> doesn't affect any existing DTBs because we haven't started to use the
> regulator yet, and thus, it's okay to change it.
> 
> Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
> ---
>  .../bindings/memory-controllers/nvidia,tegra30-emc.yaml    | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra30-emc.yaml b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra30-emc.yaml
> index 0a2e2c0d0fdd..4a2edb9b8bdc 100644
> --- a/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra30-emc.yaml
> +++ b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra30-emc.yaml
> @@ -39,9 +39,10 @@ properties:
>      description:
>        Phandle of the Memory Controller node.
>  
> -  core-supply:
> +  power-domains:
> +    $ref: /schemas/types.yaml#/definitions/phandle

'power-domains' already has a type. We need to know how many (maxItems).

>      description:
> -      Phandle of voltage regulator of the SoC "core" power domain.
> +      Phandle of the SoC "core" power domain.
>  
>    operating-points-v2:
>      description:
> @@ -241,7 +242,7 @@ examples:
>  
>          nvidia,memory-controller = <&mc>;
>          operating-points-v2 = <&dvfs_opp_table>;
> -        core-supply = <&vdd_core>;
> +        power-domains = <&domain>;
>  
>          #interconnect-cells = <0>;
>  
> -- 
> 2.30.2
>
Dmitry Osipenko March 30, 2021, 10:31 p.m. UTC | #7
31.03.2021 01:23, Rob Herring writes:
> On Mon, Mar 29, 2021 at 10:45:58PM +0300, Dmitry Osipenko wrote:
>> Power domain fits much better than a voltage regulator in regards to
>> a proper hardware description and from a software perspective as well.
>> Hence replace the core regulator with the power domain. Note that this
>> doesn't affect any existing DTBs because we haven't started to use the
>> regulator yet, and thus, it's okay to change it.
>>
>> Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
>> ---
>>  .../bindings/memory-controllers/nvidia,tegra30-emc.yaml    | 7 ++++---
>>  1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra30-emc.yaml b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra30-emc.yaml
>> index 0a2e2c0d0fdd..4a2edb9b8bdc 100644
>> --- a/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra30-emc.yaml
>> +++ b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra30-emc.yaml
>> @@ -39,9 +39,10 @@ properties:
>>      description:
>>        Phandle of the Memory Controller node.
>>  
>> -  core-supply:
>> +  power-domains:
>> +    $ref: /schemas/types.yaml#/definitions/phandle
> 
> 'power-domains' already has a type. We need to know how many (maxItems).

Alright, I see now what makes the difference. Thank you and Krzysztof
for the suggestion.
Rob Herring (Arm) March 30, 2021, 10:33 p.m. UTC | #8
On Tue, Mar 30, 2021 at 06:56:44PM +0300, Dmitry Osipenko wrote:
> 30.03.2021 18:29, Dmitry Osipenko writes:
> > 30.03.2021 11:48, Krzysztof Kozlowski writes:
> >>> +  power-domains:
> >>> +    $ref: /schemas/types.yaml#/definitions/phandle
> >>> +    description:
> >>> +      Phandle of the SoC "core" power domain.
> >> I think the core checks the type, so you only need to limit max items.
> >>
> > 
> > It's a bit confusing that both variants work and it's not apparent which
> > variant is better.

Soon '$ref' won't work. I have a pending meta-schema change to catch
this. It takes some time because I have to fix all the existing cases in
tree and wait a cycle so I'm not breaking everyone.

> > 
> > I actually used the max items limit initially and then changed it to
> > $ref phandle because it appeared to me that it's a better choice. I'll
> > switch back to the limit in v2, thanks.
> > 
> 
> Although, I'm still not sure which is the best variant. Could you please
> clarify why maxItems is better?
> 
> Seems the $ref phandle already limits domain items to 1, so I don't
> understand what the difference is.

It would not work with '<&domain 1>', as 'phandle' doesn't accept any
arg cells. While you may know you don't have any cells, technically
that's provider-dependent and outside the scope of this binding.

Rob
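
Rob's point can be illustrated with a short sketch (the &domain phandle and the one-cell specifier below are illustrative only):

```yaml
# Preferred form: dt-schema's core schema already types power-domains
# as a phandle-array, so the binding only constrains the entry count.
power-domains:
  maxItems: 1

# A consumer like the following is valid against maxItems, but a bare
# 'phandle' $ref would reject the extra argument cell:
#
#   power-domains = <&domain 1>;
```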
Rob Herring (Arm) March 30, 2021, 10:35 p.m. UTC | #9
On Tue, Mar 30, 2021 at 06:32:20PM +0300, Dmitry Osipenko wrote:
> 30.03.2021 11:48, Krzysztof Kozlowski writes:
> >> +  nvidia,use-ram-code:
> >> +    type: boolean
> >> +    description:
> >> +      If present, the emc-tables@ sub-nodes will be addressed.
> >> +
> >> +patternProperties:
> >> +  "^emc-table@[0-9]+$":
> > This might not be easy but you should add constraints when emc-table and
> > emc-tables are expected. The schema should check if proper node is used
> > depending on "nvidia,use-ram-code".
> > 
> 
> I'm afraid this is not doable. If you have an example of how to do this,
> please share it with me.
> 
> I see that there is a "dependencies:", but it doesn't work with the
> patterns, IIUC.

That's correct.

Rob