
[0/6] pseries NUMA distance calculation

Message ID 20200923193458.203186-1-danielhb413@gmail.com

Message

Daniel Henrique Barboza Sept. 23, 2020, 7:34 p.m. UTC
Hi,

This series is a follow-up of the reworked pSeries NUMA
code that is already merged upstream. It contains some of
the patches that were presented in the first version of this
work [1], some of them revised based on the reviews received
back then.

With this series, we're able to take user input into consideration
when setting up the NUMA topology of the guest. It is still an
approximation, but at least user input is not completely ignored.

The changes will only be effective with pseries-5.2 and newer
machines, and if more than one NUMA node is declared by the user.
The idea is that we don't want to tamper with legacy guest behavior.
Patch 6 has examples of how we are approximating NUMA distance
via user input.

The series was rebased using David's ppc-for-5.2 at
4cca31df828.


Changes carried over from [1]:
- patch 1 (former 4): same patch, added David's r-b
- patch 2 (former 2): the check for asymmetrical NUMA was moved
to spapr code as requested in the review
- patch 4 is a merge of former patches 5 and 6
- patch 5 (former 9): reworked
- patch 6 (former 10): same patch

Patch 3 is new in the series.



[1] https://lists.gnu.org/archive/html/qemu-devel/2020-08/msg03169.html



Daniel Henrique Barboza (6):
  spapr: add spapr_machine_using_legacy_numa() helper
  spapr_numa: forbid asymmetrical NUMA setups
  spapr_numa: translate regular NUMA distance to PAPR distance
  spapr_numa: change reference-points and maxdomain settings
  spapr_numa: consider user input when defining associativity
  specs/ppc-spapr-numa: update with new NUMA support

 docs/specs/ppc-spapr-numa.rst | 213 ++++++++++++++++++++++++++++++++++
 hw/ppc/spapr.c                |  12 ++
 hw/ppc/spapr_numa.c           | 184 +++++++++++++++++++++++++++--
 include/hw/ppc/spapr.h        |   2 +
 4 files changed, 402 insertions(+), 9 deletions(-)

Comments

Daniel Henrique Barboza Sept. 24, 2020, 11:18 a.m. UTC | #1
On 9/24/20 5:16 AM, Greg Kurz wrote:
> On Wed, 23 Sep 2020 16:34:55 -0300
> Daniel Henrique Barboza <danielhb413@gmail.com> wrote:
> 
>> QEMU allows the user to set NUMA distances in the command line.
>> For ACPI architectures like x86, this means that user input is
>> used to populate the SLIT table, and the guest perceives the
>> distances as the user chooses to.
>>
>> PPC64 does not work that way. In the PAPR concept of NUMA,
>> associativity relations between the NUMA nodes are provided by
>> the device tree, and the guest kernel is free to calculate the
>> distances as it sees fit. Given how ACPI architectures work,
>> this puts the pSeries machine in a strange spot - users expect
>> to define NUMA distances like in the ACPI case, but QEMU does
>> not have control over it. To give pSeries users a similar
>> experience, we'll need to bring kernel specifics to QEMU
>> to approximate the NUMA distances.
>>
>> The pSeries kernel works with the NUMA distance range 10,
>> 20, 40, 80 and 160. The code starts at 10 (local distance) and
>> searches for a match in the first NUMA level between the
>> resources. If there is no match, the distance is doubled and
>> then it proceeds to try to match in the next NUMA level. Rinse
>> and repeat for MAX_DISTANCE_REF_POINTS levels.
>>
>> This patch introduces a spapr_numa_PAPRify_distances() helper
> 
> Funky naming but meaningful and funny, for me at least :)
> 
>> that translates the user distances into kernel distances, which
>> we're going to use to determine the associativity domains for
>> the NUMA nodes.
>>
>> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
>> ---
>>   hw/ppc/spapr_numa.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 44 insertions(+)
>>
>> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
>> index 36aaa273ee..180800b2f3 100644
>> --- a/hw/ppc/spapr_numa.c
>> +++ b/hw/ppc/spapr_numa.c
>> @@ -37,6 +37,49 @@ static bool spapr_numa_is_symmetrical(MachineState *ms)
>>       return true;
>>   }
>>   
>> +/*
>> + * This function will translate the user distances into
>> + * what the kernel understand as possible values: 10
>> + * (local distance), 20, 40, 80 and 160. Current heuristic
>> + * is:
>> + *
>> + *  - distances between 11 and 30 -> rounded to 20
>> + *  - distances between 31 and 60 -> rounded to 40
>> + *  - distances between 61 and 120 -> rounded to 80
>> + *  - everything above 120 -> 160
> 
> It isn't clear what happens when the distances are exactly
> 30, 60 or 120...

30 is rounded to 20, 60 is rounded to 40 and 120 is rounded to 80.
Perhaps I should change this to mention "between 11 and 30
inclusive" and so on.

> 
>> + *
>> + * This step can also be done at the same time as the NUMA
>> + * associativity domains calculation, at the cost of extra
>> + * complexity. We chose to keep it simpler.
>> + *
>> + * Note: this will overwrite the distance values in
>> + * ms->numa_state->nodes.
>> + */
>> +static void spapr_numa_PAPRify_distances(MachineState *ms)
>> +{
>> +    int src, dst;
>> +    int nb_numa_nodes = ms->numa_state->num_nodes;
>> +    NodeInfo *numa_info = ms->numa_state->nodes;
>> +
>> +    for (src = 0; src < nb_numa_nodes; src++) {
>> +        for (dst = src; dst < nb_numa_nodes; dst++) {
>> +            uint8_t distance = numa_info[src].distance[dst];
>> +            uint8_t rounded_distance = 160;
>> +
>> +            if (distance > 11 && distance < 30) {
>> +                rounded_distance = 20;
>> +            } else if (distance > 31 && distance < 60) {
>> +                rounded_distance = 40;
>> +            } else if (distance > 61 && distance < 120) {
>> +                rounded_distance = 80;
>> +            }
> 
> ... and this code doesn't convert them to PAPR-friendly values
> actually. I guess < should be turned into <= .


Good catch. Yep, this needs to be <=.


Thanks,


DHB
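
For reference, here is a minimal sketch of the rounding helper with
the inclusive bounds agreed above (the function name is made up and
the code assumes distance >= 10; this is illustrative only, not the
merged QEMU code):

static uint8_t papr_round_distance(uint8_t distance)
{
    if (distance == 10) {       /* local distance, left untouched */
        return 10;
    }
    if (distance <= 30) {       /* 11..30 */
        return 20;
    }
    if (distance <= 60) {       /* 31..60 */
        return 40;
    }
    if (distance <= 120) {      /* 61..120 */
        return 80;
    }
    return 160;                 /* everything above 120 */
}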

> 
>> +
>> +            numa_info[src].distance[dst] = rounded_distance;
>> +            numa_info[dst].distance[src] = rounded_distance;
>> +        }
>> +    }
>> +}
>> +
>>   void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>                                      MachineState *machine)
>>   {
>> @@ -95,6 +138,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>           exit(1);
>>       }
>>   
>> +    spapr_numa_PAPRify_distances(machine);
>>   }
>>   
>>   void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
>
Daniel Henrique Barboza Sept. 24, 2020, 11:21 a.m. UTC | #2
On 9/24/20 7:22 AM, Greg Kurz wrote:
> On Wed, 23 Sep 2020 16:34:57 -0300
> Daniel Henrique Barboza <danielhb413@gmail.com> wrote:
> 
>> This patch puts all the pieces together to finally allow user
>> input when defining the NUMA topology of the spapr guest.
>>
>> We have one more kernel restriction to handle in this patch:
>> the associativity array of node 0 must be filled with zeroes
>> [1]. The strategy below ensures that this will happen.
>>
>> spapr_numa_define_associativity_domains() will read the distance
>> (already PAPRified) between the nodes from numa_state and determine
>> the appropriate NUMA level. The NUMA domains, processed in ascending
>> order, are going to be matched via NUMA levels, and the lowest
>> associativity domain value is assigned to that specific level for
>> both.
>>
>> This creates a heuristic where the associativities of the first
>> nodes have higher priority and are re-used in new matches, instead of
>> being overwritten by a new associativity match. This is necessary
>> because neither QEMU nor the pSeries kernel supports multiple
>> associativity domains for each resource, meaning that we have to
>> decide which associativity relation is relevant.
>>
>> Ultimately, all of this results in a best-effort approximation of
>> the actual NUMA distances the user provided on the command line. Given
>> the nature of how PAPR itself interprets NUMA distances versus the
>> expectations raised by how ACPI SLIT works, there might be better
>> algorithms but, in the end, they would also just be another way to
>> approximate what the user really wanted.
>>
>> To avoid making this commit message any longer than it already is, the
>> next patch will update the existing documentation in ppc-spapr-numa.rst
>> with more in-depth details and design considerations/drawbacks.
>>
>> [1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/
>>
>> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
>> ---
>>   hw/ppc/spapr_numa.c | 81 ++++++++++++++++++++++++++++++++++++++++++++-
>>   1 file changed, 80 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
>> index 688391278e..c84f77cda7 100644
>> --- a/hw/ppc/spapr_numa.c
>> +++ b/hw/ppc/spapr_numa.c
>> @@ -80,12 +80,79 @@ static void spapr_numa_PAPRify_distances(MachineState *ms)
>>       }
>>   }
>>   
>> +static uint8_t spapr_numa_get_NUMA_level(uint8_t distance)
> 
> The funky naming doesn't improve clarity IMHO. I'd rather make
> it lowercase only.
> 
>> +{
>> +    uint8_t numa_level;
>> +
>> +    switch (distance) {
>> +    case 20:
>> +        numa_level = 0x3;
>> +        break;
>> +    case 40:
>> +        numa_level = 0x2;
>> +        break;
>> +    case 80:
>> +        numa_level = 0x1;
>> +        break;
>> +    default:
>> +        numa_level = 0;
> 
> Hmm... same level for distances 10 and 160? Is this correct?


This will never be called with distance = 10 because we don't
evaluate the distance from a node to itself. But I'll put a
'case 10:' clause there that does nothing, to make it clearer.



DHB
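
To make the cascade effect concrete, here is a small self-contained
toy model of the pairwise pass described in the quoted comment. All
names, the 3-node distance matrix and the LEVELS value are made up
for illustration; this is not QEMU code:

#include <stdio.h>

#define LEVELS 4    /* stand-in for MAX_DISTANCE_REF_POINTS */

/* mirrors the distance -> NUMA level mapping from the patch */
static int numa_level(int distance)
{
    switch (distance) {
    case 20:
        return 3;
    case 40:
        return 2;
    case 80:
        return 1;
    default:
        return 0;   /* 160 */
    }
}

int main(void)
{
    enum { N = 3 };
    /* already-PAPRified distances for a hypothetical 3-node guest */
    int dist[N][N] = {
        { 10, 40, 80 },
        { 40, 10, 40 },
        { 80, 40, 10 },
    };
    int assoc[N][LEVELS + 1];
    int i, j, src, dst;

    /* first cell is the size; every domain starts as the node id */
    for (i = 0; i < N; i++) {
        assoc[i][0] = LEVELS;
        for (j = 1; j <= LEVELS; j++) {
            assoc[i][j] = i;
        }
    }

    /* pairwise pass in ascending node order: lowest domain wins */
    for (src = 0; src < N; src++) {
        for (dst = src + 1; dst < N; dst++) {
            int level = numa_level(dist[src][dst]);
            int lo = assoc[src][level] < assoc[dst][level] ?
                     assoc[src][level] : assoc[dst][level];

            assoc[src][level] = lo;
            assoc[dst][level] = lo;
        }
    }

    for (i = 0; i < N; i++) {
        printf("node %d:", i);
        for (j = 1; j <= LEVELS; j++) {
            printf(" %d", assoc[i][j]);
        }
        printf("\n");
    }
    return 0;
}

Running it prints, per node, the domains at levels 1..4 (the last
column being the node id):

node 0: 0 0 0 0
node 1: 1 0 1 1
node 2: 0 0 2 2

Node 0 stays all zeroes, which is the kernel requirement mentioned
above, and nodes 1 and 2 pick up node 0's domain at the levels where
they matched it.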

> 
>> +    }
>> +
>> +    return numa_level;
>> +}
>> +
>> +static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr,
>> +                                                    MachineState *ms)
> 
> Passing ms seems to indicate that it could have a different value than spapr,
> which is certainly not true.
> 
> I'd rather make it a local variable:
> 
>      MachineState *ms = MACHINE(spapr);
> 
> This is a slow path: we don't really mind doing the dynamic type checking
> multiple times.
> 
>> +{
>> +    int src, dst;
>> +    int nb_numa_nodes = ms->numa_state->num_nodes;
>> +    NodeInfo *numa_info = ms->numa_state->nodes;
>> +
>> +    for (src = 0; src < nb_numa_nodes; src++) {
>> +        for (dst = src; dst < nb_numa_nodes; dst++) {
>> +            /*
>> +             * This is how the associativity domain between A and B
>> +             * is calculated:
>> +             *
>> +             * - get the distance between them
>> +             * - get the corresponding NUMA level for this distance
>> +             * - the arrays were initialized with their own numa_ids,
>> +             * and we're calculating the distance in node_id ascending order,
>> +             * starting from node 0. This will have a cascade effect in the
>> +             * algorithm because the associativity domains that node 0 defines
>> +             * will be carried over to the other nodes, and node 1
>> +             * associativities will be carried over unless there's already a
>> +             * node 0 associativity assigned, and so on. This happens because
>> +             * we'll assign the lowest value of assoc_src and assoc_dst to be
>> +             * the associativity domain of both, for the given NUMA level.
>> +             *
>> +             * The PPC kernel expects the associativity domains of node 0 to
>> +             * always be 0, and this algorithm ensures that by default.
>> +             */
>> +            uint8_t distance = numa_info[src].distance[dst];
>> +            uint8_t n_level = spapr_numa_get_NUMA_level(distance);
>> +            uint32_t assoc_src, assoc_dst;
>> +
>> +            assoc_src = be32_to_cpu(spapr->numa_assoc_array[src][n_level]);
>> +            assoc_dst = be32_to_cpu(spapr->numa_assoc_array[dst][n_level]);
>> +
>> +            if (assoc_src < assoc_dst) {
>> +                spapr->numa_assoc_array[dst][n_level] = cpu_to_be32(assoc_src);
>> +            } else {
>> +                spapr->numa_assoc_array[src][n_level] = cpu_to_be32(assoc_dst);
>> +            }
>> +        }
>> +    }
>> +
>> +}
>> +
>>   void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>                                      MachineState *machine)
>>   {
>>       SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
>>       int nb_numa_nodes = machine->numa_state->num_nodes;
>>       int i, j, max_nodes_with_gpus;
>> +    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
>>   
>>       /*
>>        * For all associativity arrays: first position is the size,
>> @@ -99,6 +166,17 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>       for (i = 0; i < nb_numa_nodes; i++) {
>>           spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
>>           spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
>> +
>> +        /*
>> +         * Fill all associativity domains of the node with node_id.
>> +         * This is required because the kernel makes valid associativity
> 
> It would be appreciated to have a URL to the corresponding code in the
> changelog.
> 
>> +         * matches with the zeroes if we leave the matrix uninitialized.
>> +         */
>> +        if (!using_legacy_numa) {
>> +            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
>> +                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
>> +            }
>> +        }
>>       }
>>   
>>       /*
>> @@ -128,7 +206,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>        * 1 NUMA node) will not benefit from anything we're going to do
>>        * after this point.
>>        */
>> -    if (spapr_machine_using_legacy_numa(spapr)) {
>> +    if (using_legacy_numa) {
>>           return;
>>       }
>>   
>> @@ -139,6 +217,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>       }
>>   
>>       spapr_numa_PAPRify_distances(machine);
>> +    spapr_numa_define_associativity_domains(spapr, machine);
>>   }
>>   
>>   void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
>
Greg Kurz Sept. 24, 2020, 11:32 a.m. UTC | #3
On Thu, 24 Sep 2020 08:21:47 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> 
> 
> On 9/24/20 7:22 AM, Greg Kurz wrote:
> > On Wed, 23 Sep 2020 16:34:57 -0300
> > Daniel Henrique Barboza <danielhb413@gmail.com> wrote:
> > 
> >> [...]
> >> +static uint8_t spapr_numa_get_NUMA_level(uint8_t distance)
> > 
> > The funky naming doesn't improve clarity IMHO. I'd rather make
> > it lowercase only.
> > 
> >> +{
> >> +    uint8_t numa_level;
> >> +
> >> +    switch (distance) {
> >> +    case 20:
> >> +        numa_level = 0x3;
> >> +        break;
> >> +    case 40:
> >> +        numa_level = 0x2;
> >> +        break;
> >> +    case 80:
> >> +        numa_level = 0x1;
> >> +        break;
> >> +    default:
> >> +        numa_level = 0;
> > 
> > Hmm... same level for distances 10 and 160? Is this correct?
> 
> 
> This will never be called with distance = 10 because we don't
> evaluate the distance from a node to itself. But I'll put a
> 'case 10:' clause there that does nothing, to make it clearer.
> 

You should make it g_assert_not_reached() in this case.
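
Putting both review comments together (the lowercase name suggested
earlier and an explicit clause for the local distance, using glib's
g_assert_not_reached() as suggested above), the helper would end up
looking something like this sketch, not necessarily the merged code:

static uint8_t spapr_numa_get_numa_level(uint8_t distance)
{
    switch (distance) {
    case 10:
        /* local distance: callers never pass it in */
        g_assert_not_reached();
    case 20:
        return 0x3;
    case 40:
        return 0x2;
    case 80:
        return 0x1;
    default:
        return 0;   /* 160 */
    }
}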
