[v2,0/6] pseries NUMA distance calculation

Message ID 20200924195058.362984-1-danielhb413@gmail.com
Series pseries NUMA distance calculation

Message

Daniel Henrique Barboza Sept. 24, 2020, 7:50 p.m. UTC
This second version contains fixes based on the v1 review made
by Greg.

changes from v1:
- patch 1:
    * added Greg's R-b
- patch 2:
    * fixed 'order' typo
    * changed exit(1) to exit(EXIT_FAILURE)
    * added Greg's R-b
- patch 3:
    * clarified in the function comment the rounding of distance
values 30, 60 and 120
    * fixed a bug where distances 30, 60 and 120 weren't being
PAPRified (use <= instead of just <)
- patch 4:
    * made the changes multi-line and more explicit, making it
easier to see the differences between the current and legacy arrays
- patch 5:
    * renamed spapr_numa_get_NUMA_level() to lowercase
    * added a switch clause for distance = 10
    * removed MachineState parameter of
spapr_numa_define_associativity_domains()
    * clarified in the comment why the associativity domains need
to be initialized with 'node_id'
    * we're now handling distances > 120 appropriately (no
NUMA match case)


v1 link: https://lists.gnu.org/archive/html/qemu-devel/2020-09/msg08549.html


Daniel Henrique Barboza (6):
  spapr: add spapr_machine_using_legacy_numa() helper
  spapr_numa: forbid asymmetrical NUMA setups
  spapr_numa: translate regular NUMA distance to PAPR distance
  spapr_numa: change reference-points and maxdomain settings
  spapr_numa: consider user input when defining associativity
  specs/ppc-spapr-numa: update with new NUMA support

 docs/specs/ppc-spapr-numa.rst | 213 +++++++++++++++++++++++++++++++++
 hw/ppc/spapr.c                |  12 ++
 hw/ppc/spapr_numa.c           | 217 ++++++++++++++++++++++++++++++++--
 include/hw/ppc/spapr.h        |   2 +
 4 files changed, 436 insertions(+), 8 deletions(-)

Comments

David Gibson Sept. 25, 2020, 2:35 a.m. UTC | #1
On Thu, Sep 24, 2020 at 04:50:55PM -0300, Daniel Henrique Barboza wrote:
> QEMU allows the user to set NUMA distances in the command line.
> For ACPI architectures like x86, this means that user input is
> used to populate the SLIT table, and the guest perceives the
> distances as the user chooses to.
> 
> PPC64 does not work that way. In the PAPR concept of NUMA,
> associativity relations between the NUMA nodes are provided by
> the device tree, and the guest kernel is free to calculate the
> distances as it sees fit. Given how ACPI architectures work,
> this puts the pSeries machine in a strange spot - users expect
> to define NUMA distances like in the ACPI case, but QEMU does
> not have control over it. To give pSeries users a similar
> experience, we'll need to bring kernel specifics to QEMU
> to approximate the NUMA distances.
> 
> The pSeries kernel works with the NUMA distance range 10,
> 20, 40, 80 and 160. The code starts at 10 (local distance) and
> searches for a match in the first NUMA level between the
> resources. If there is no match, the distance is doubled and
> then it proceeds to try to match in the next NUMA level. Rinse
> and repeat for MAX_DISTANCE_REF_POINTS levels.
> 
> This patch introduces a spapr_numa_PAPRify_distances() helper
> that translates the user distances to kernel distance, which
> we're going to use to determine the associativity domains for
> the NUMA nodes.
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

The idea of rounding the distances like this seems pretty good to me.
Since each level is a multiple of the distance from the previous one it
might be more theoretically correct to place the thresholds at the
geometric mean between each level, rather than the arithmetic mean.  I
very much doubt it makes much difference in practice though, and this
is simpler.
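
(For illustration only, not part of the series: a sketch of what geometric-mean
cut-offs would look like, i.e. sqrt(20*40) ~= 28.3, sqrt(40*80) ~= 56.6 and
sqrt(80*160) ~= 113.1 instead of the arithmetic 30/60/120. The helper name is
hypothetical and it assumes the same uint8_t distances used in spapr_numa.c.)

    /* Hypothetical variant: thresholds at the geometric mean between
     * consecutive PAPR distances rather than the arithmetic mean. */
    static uint8_t papr_round_distance_geometric(uint8_t distance)
    {
        if (distance <= 10) {
            return 10;          /* local distance, never rounded */
        } else if (distance <= 28) {
            return 20;
        } else if (distance <= 56) {
            return 40;
        } else if (distance <= 113) {
            return 80;
        }
        return 160;
    }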

There is one nit I'm less happy with, though..

> ---
>  hw/ppc/spapr_numa.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 44 insertions(+)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index fe395e80a3..990a5fce08 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -37,6 +37,49 @@ static bool spapr_numa_is_symmetrical(MachineState *ms)
>      return true;
>  }
>  
> +/*
> + * This function will translate the user distances into
> + * what the kernel understands as possible values: 10
> + * (local distance), 20, 40, 80 and 160. Current heuristic
> + * is:
> + *
> + *  - distances between 11 and 30 inclusive -> rounded to 20
> + *  - distances between 31 and 60 inclusive -> rounded to 40
> + *  - distances between 61 and 120 inclusive -> rounded to 80
> + *  - everything above 120 -> 160
> + *
> + * This step can also be done at the same time as the NUMA
> + * associativity domains calculation, at the cost of extra
> + * complexity. We chose to keep it simpler.
> + *
> + * Note: this will overwrite the distance values in
> + * ms->numa_state->nodes.
> + */
> +static void spapr_numa_PAPRify_distances(MachineState *ms)
> +{
> +    int src, dst;
> +    int nb_numa_nodes = ms->numa_state->num_nodes;
> +    NodeInfo *numa_info = ms->numa_state->nodes;
> +
> +    for (src = 0; src < nb_numa_nodes; src++) {
> +        for (dst = src; dst < nb_numa_nodes; dst++) {
> +            uint8_t distance = numa_info[src].distance[dst];
> +            uint8_t rounded_distance = 160;
> +
> +            if (distance >= 11 && distance <= 30) {
> +                rounded_distance = 20;
> +            } else if (distance >= 31 && distance <= 60) {
> +                rounded_distance = 40;
> +            } else if (distance >= 61 && distance <= 120) {
> +                rounded_distance = 80;
> +            }
> +
> +            numa_info[src].distance[dst] = rounded_distance;
> +            numa_info[dst].distance[src] = rounded_distance;

..I don't love the fact that we alter the distance table in place.
Even though it was never exposed to the guest, I'd prefer not to
destroy the information the user passed in.  It could lead to
surprising results with QMP introspection, and it may make future
extensions more difficult.

So I'd prefer to either (a) just leave the table as is and round
on-demand with a paprify_distance(NN) -> {20,40,80,..} type function,
or (b) create a parallel, spapr local, table with the rounded
distances
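
(A minimal sketch of option (a), reusing the cut-offs from the patch; the name
spapr_numa_paprify_distance() and its placement are hypothetical:)

    /* Sketch: keep ms->numa_state->nodes untouched and round on demand. */
    static uint8_t spapr_numa_paprify_distance(uint8_t distance)
    {
        if (distance <= 10) {
            return 10;                  /* local distance */
        } else if (distance <= 30) {
            return 20;
        } else if (distance <= 60) {
            return 40;
        } else if (distance <= 120) {
            return 80;
        }
        return 160;
    }

Callers would then pass numa_info[src].distance[dst] through this helper at the
point of use instead of reading a pre-rounded table.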

> +        }
> +    }
> +}
> +
>  void spapr_numa_associativity_init(SpaprMachineState *spapr,
>                                     MachineState *machine)
>  {
> @@ -95,6 +138,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>          exit(EXIT_FAILURE);
>      }
>  
> +    spapr_numa_PAPRify_distances(machine);
>  }
>  
>  void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
David Gibson Sept. 25, 2020, 3:39 a.m. UTC | #2
On Thu, Sep 24, 2020 at 04:50:57PM -0300, Daniel Henrique Barboza wrote:
> This patch puts all the pieces together to finally allow user
> input when defining the NUMA topology of the spapr guest.
> 
> We have one more kernel restriction to handle in this patch:
> the associativity array of node 0 must be filled with zeroes
> [1]. The strategy below ensures that this will happen.

Can you clarify exactly what "node 0" means?  Qemu and the guest
kernel each have node numberings, which I don't think are necessarily
the same.  With PAPR's scheme, it's not totally clear what "node 0"
even means.

> spapr_numa_define_associativity_domains() will read the distance
> (already PAPRified) between the nodes from numa_state and determine
> the appropriate NUMA level. The NUMA domains, processed in ascending
> order, are going to be matched via NUMA levels, and the lowest
> associativity domain value is assigned to that specific level for
> both.
> 
> This will create a heuristic where the associativities of the first
> nodes have higher priority and are re-used in new matches, instead of
> overwriting them with a new associativity match. This is necessary
> because neither QEMU, nor the pSeries kernel, supports multiple
> associativity domains for each resource, meaning that we have to
> decide which associativity relation is relevant.

Hmm.  I find that a bit hard to follow.  So IIUC, what's going on is
> you start out by treating every node as distant as possible from
every other: they have a unique value at every level of the
associativity array.  Then you collapse together nodes that are
supposed to be closer by making some of their associativity entries
match.  Is that right?

> Ultimately, all of this results in a best effort approximation for
> the actual NUMA distances the user input in the command line. Given
> the nature of how PAPR itself interprets NUMA distances versus the
> expectations raised by how ACPI SLIT works, there might be better
> algorithms but, in the end, it'll also result in another way to
> approximate what the user really wanted.
> 
> To keep this commit message no longer than it already is, the next
> patch will update the existing documentation in ppc-spapr-numa.rst
> with more in depth details and design considerations/drawbacks.
> 
> [1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
>  hw/ppc/spapr_numa.c | 101 +++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 100 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index ea33439a15..21cae3f799 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -80,12 +80,99 @@ static void spapr_numa_PAPRify_distances(MachineState *ms)
>      }
>  }
>  
> +static uint8_t spapr_numa_get_numa_level(uint8_t distance)
> +{
> +    uint8_t numa_level;

This function reinforces my feeling that pre-PAPRizing the distances
might not be the best idea.  It could go directly from the user
distance to level just as easily.
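
(For instance, a sketch of what spapr_numa_get_numa_level() could look like if
it took the raw user distance and folded the rounding into the lookup; the
cut-offs mirror the ones from patch 3, illustration only:)

    /* Sketch: map the raw user distance straight to a NUMA level. */
    static uint8_t spapr_numa_get_numa_level(uint8_t distance)
    {
        if (distance == 10) {
            return 0x4;
        } else if (distance <= 30) {    /* would have rounded to 20 */
            return 0x3;
        } else if (distance <= 60) {    /* would have rounded to 40 */
            return 0x2;
        } else if (distance <= 120) {   /* would have rounded to 80 */
            return 0x1;
        }
        return 0;                       /* beyond 120: no NUMA level match */
    }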

> +
> +    switch (distance) {
> +    case 10:
> +        numa_level = 0x4;
> +        break;
> +    case 20:
> +        numa_level = 0x3;
> +        break;
> +    case 40:
> +        numa_level = 0x2;
> +        break;
> +    case 80:
> +        numa_level = 0x1;
> +        break;
> +    default:
> +        numa_level = 0;
> +    }
> +
> +    return numa_level;
> +}
> +
> +static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr)
> +{
> +    MachineState *ms = MACHINE(spapr);
> +    NodeInfo *numa_info = ms->numa_state->nodes;
> +    int nb_numa_nodes = ms->numa_state->num_nodes;
> +    int src, dst;
> +
> +    for (src = 0; src < nb_numa_nodes; src++) {
> +        for (dst = src; dst < nb_numa_nodes; dst++) {
> +            /*
> +             * This is how the associativity domain between A and B
> +             * is calculated:
> +             *
> +             * - get the distance between them
> +             * - get the correspondent NUMA level for this distance
> +             * - the arrays were initialized with their own numa_ids,
> +             * and we're calculating the distance in node_id ascending order,
> +             * starting from node 0. This will have a cascade effect in the
> +             * algorithm because the associativity domains that node 0 defines
> +             * will be carried over to the other nodes, and node 1
> +             * associativities will be carried over unless there's already a
> +             * node 0 associativity assigned, and so on. This happens because
> +             * we'll assign the lowest value of assoc_src and assoc_dst to be
> +             * the associativity domain of both, for the given NUMA level.

Ok, I follow this description better than the one in the commit message.

> +             * The PPC kernel expects the associativity domains of node 0 to
> +             * be always 0, and this algorithm will grant that by default.
> +             */
> +            uint8_t distance = numa_info[src].distance[dst];
> +            uint8_t n_level = spapr_numa_get_numa_level(distance);
> +            uint32_t assoc_src, assoc_dst;
> +
> +            /*
> +             * n_level = 0 means that the distance is greater than our last
> +             * rounded value (120). In this case there is no NUMA level match
> +             * between src and dst and we can skip the rest of this loop iteration.
> +             *
> +             * The Linux kernel will assume that the distance between src and
> +             * dst, in this case of no match, is 10 (local distance) doubled
> +             * for each NUMA level it didn't match. We have MAX_DISTANCE_REF_POINTS
> +             * levels (4), so this gives us 10*2*2*2*2 = 160.
> +             *
> +             * This logic can be seen in the Linux kernel source code, as of
> +             * v5.9, in arch/powerpc/mm/numa.c, function __node_distance().
> +             */
> +            if (n_level == 0) {
> +                continue;
> +            }
> +
> +            assoc_src = be32_to_cpu(spapr->numa_assoc_array[src][n_level]);
> +            assoc_dst = be32_to_cpu(spapr->numa_assoc_array[dst][n_level]);
> +
> +            if (assoc_src < assoc_dst) {

So, I don't think you need this logic.  It doesn't really matter which
associativity value you let win, as long as you're consistent.  So you
> can just base it on src < dst, rather than assoc_src < assoc_dst.
But src <= dst always, because of the way you've constructed the loops.
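
(I.e., since src <= dst holds for every iteration, something like the following
would be enough; sketch only:)

    /* src <= dst by construction, so let the src entry always win: */
    spapr->numa_assoc_array[dst][n_level] = spapr->numa_assoc_array[src][n_level];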

> +                spapr->numa_assoc_array[dst][n_level] = cpu_to_be32(assoc_src);
> +            } else {
> +                spapr->numa_assoc_array[src][n_level] = cpu_to_be32(assoc_dst);
> +            }

> Don't you need to match the values at all levels >= n_level, rather
than just n_level?  AFAICT it doesn't make sense for two components to
be in the same associativity domain at one level, but different
domains at "bigger" levels (i.e. something like two cores sharing an
L2 cache, but not an L3 cache).

> +        }
> +    }
> +
> +}
> +
>  void spapr_numa_associativity_init(SpaprMachineState *spapr,
>                                     MachineState *machine)
>  {
>      SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
>      int nb_numa_nodes = machine->numa_state->num_nodes;
>      int i, j, max_nodes_with_gpus;
> +    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
>  
>      /*
>       * For all associativity arrays: first position is the size,
> @@ -99,6 +186,17 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>      for (i = 0; i < nb_numa_nodes; i++) {
>          spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
>          spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
> +
> +        /*
> +         * Fill all associativity domains of non-zero NUMA nodes with
> +         * node_id. This is required because the default value (0) is
> +         * considered a match with associativity domains of node 0.
> +         */
> +        if (!using_legacy_numa && i != 0) {
> +            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
> +                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
> +            }
> +        }
>      }
>  
>      /*
> @@ -128,7 +226,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>       * 1 NUMA node) will not benefit from anything we're going to do
>       * after this point.
>       */
> -    if (spapr_machine_using_legacy_numa(spapr)) {
> +    if (using_legacy_numa) {
>          return;
>      }
>  
> @@ -139,6 +237,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>      }
>  
>      spapr_numa_PAPRify_distances(machine);
> +    spapr_numa_define_associativity_domains(spapr);
>  }
>  
>  void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
David Gibson Sept. 25, 2020, 3:43 a.m. UTC | #3
On Thu, Sep 24, 2020 at 04:50:58PM -0300, Daniel Henrique Barboza wrote:
> This update provides more in depth information about the
> choices and drawbacks of the new NUMA support for the
> spapr machine.
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
>  docs/specs/ppc-spapr-numa.rst | 213 ++++++++++++++++++++++++++++++++++
>  1 file changed, 213 insertions(+)
> 
> diff --git a/docs/specs/ppc-spapr-numa.rst b/docs/specs/ppc-spapr-numa.rst
> index e762038022..994bfb996f 100644
> --- a/docs/specs/ppc-spapr-numa.rst
> +++ b/docs/specs/ppc-spapr-numa.rst
> @@ -189,3 +189,216 @@ QEMU up to 5.1, as follows:
>  
>  This also means that user input in QEMU command line does not change the
>  NUMA distancing inside the guest for the pseries machine.
> +
> +New NUMA mechanics for pseries in QEMU 5.2
> +==========================================

I'd suggest putting the new mechanics - which we intend to become the
normal mechanics - at the top of the document, with the old version later.

> +
> +Starting in QEMU 5.2, the pseries machine now considers user input when
> +setting the NUMA topology of the guest. The following changes were made:
> +
> +* ibm,associativity-reference-points was changed to {0x4, 0x3, 0x2, 0x1}, allowing
> +  for 4 distinct NUMA distance values based on the NUMA levels
> +
> +* ibm,max-associativity-domains was changed to support multiple associativity
> +  domains in all NUMA levels. This is needed to ensure user flexibility
> +
> +* ibm,associativity for all resources now varies with user input
> +
> +These changes are only effective for pseries-5.2 and newer machines that are
> +created with more than one NUMA node (disregarding NUMA nodes created by
> +the machine itself, e.g. NVLink 2 GPUs). The now legacy support has been
> +around for such a long time, with users seeing NUMA distances 10 and 40
> +(and 80 if using NVLink2 GPUs), and there is no need to disrupt the
> +existing experience of those guests.
> +
> +To give pseries users the experience x86 users have when tuning up NUMA, we had
> +to operate under the current pseries Linux kernel logic described in
> +`How the pseries Linux guest calculates NUMA distances`_. The result
> +is that we needed to translate NUMA distance user input to pseries
> +Linux kernel input.
> +
> +Translating user distance to kernel distance
> +--------------------------------------------
> +
> +User input for NUMA distance can vary from 10 to 254. We need to translate
> +that to the values that the Linux kernel operates on (10, 20, 40, 80, 160).
> +This is how it is being done:
> +
> +* user distance 11 to 30 will be interpreted as 20
> +* user distance 31 to 60 will be interpreted as 40
> +* user distance 61 to 120 will be interpreted as 80
> +* user distance 121 and beyond will be interpreted as 160
> +* user distance 10 stays 10
> +
> +The reasoning behind this approximation is to avoid any rounding up to the local
> +distance (10), keeping it exclusive to the 4th NUMA level (which is still
> +exclusive to the node_id). All other ranges were chosen at the developer's
> +discretion, aiming for what would be (somewhat) sensible considering the user input.
> +Any other strategy can be used here, but in the end the reality is that we'll
> +have to accept that a large array of values will be translated to the same
> +NUMA topology in the guest, e.g. this user input:
> +
> +::
> +
> +      0   1   2
> +  0  10  31 120
> +  1  31  10  30
> +  2 120  30  10
> +
> +And this other user input:
> +
> +::
> +
> +      0   1   2
> +  0  10  60  61
> +  1  60  10  11
> +  2  61  11  10
> +
> +Will both be translated to the same values internally:
> +
> +::
> +
> +      0   1   2
> +  0  10  40  80
> +  1  40  10  20
> +  2  80  20  10
> +
> +Users are encouraged to use only the kernel values in the NUMA definition to
> +avoid being taken by surprise with what the guest is actually seeing in the
> +topology. There are enough potential surprises that are inherent to the
> +associativity domain assignment process, discussed below.
> +
> +
> +How associativity domains are assigned
> +--------------------------------------
> +
> +LOPAPR allows more than one associativity array (or 'string') per allocated
> +resource. This would be used to represent that the resource has multiple
> +connections with the board, and then the operating system, when deciding
> +NUMA distancing, should consider the associativity information that provides
> +the shortest distance.
> +
> +The spapr implementation does not support multiple associativity arrays per
> +resource, nor does the pseries Linux kernel. We'll have to represent the
> +NUMA topology using one associativity per resource, which means that choices
> +and compromises are going to be made.
> +
> +Consider the following NUMA topology entered by user input:
> +
> +::
> +
> +      0   1   2   3
> +  0  10  20  20  40
> +  1  20  10  80  40
> +  2  20  80  10  20
> +  3  40  40  20  10
> +
> +Honoring just the relative distances of node 0 to every other node, one possible
> +set of values for the associativity arrays would be:
> +
> +* node 0: 0 B A 0
> +* node 1: 0 0 A 1
> +* node 2: 0 0 A 2
> +* node 3: 0 B 0 3
> +
> +With the reference points {0x4, 0x3, 0x2, 0x1}, for node 0:
> +
> +* distance from 0 to 1 is 20 (no match at 0x4, will match at 0x3)
> +* distance from 0 to 2 is 20 (no match at 0x4, will match at 0x3)
> +* distance from 0 to 3 is 40 (no match at 0x4 and 0x3, will match
> +  at 0x2)
> +
> +The distances related to node 0 are well represented. Doing the same for node 1, and keeping
> +in mind that we don't need to revisit node 0 again, the distance from node 1 to
> +2 is 80, matching at 0x4:
> +
> +* node 1: C 0 A 1
> +* node 2: C 0 A 2
> +
> +Over here we already have the first conflict. Even if we assign a new associativity
> +domain at 0x4 for 1 and 2, and we do that in the code, the kernel will define
> +the distance between 1 and 2 as 20, not 80, because both 1 and 2 have the "A"
> +associativity domain from the previous step. If we decide to discard the
> +associativity with "A" then the node 0 distances are compromised.
> +
> +Following up with the distance from 1 to 3 being 40 (a match in 0x2) we have another
> +decision to make. These are the current associativity domains of each:
> +
> +* node 1: C 0 A 1
> +* node 3: 0 B 0 3
> +
> +There is already an associativity domain at 0x2 in node 3, "B", which was assigned
> +by the node 0 distances. If we define a new associativity domain at this level
> +for 1 and 3 we will overwrite the existing associativity between 0 and 3. What
> +the code does in this case is to assign the existing domain to the
> +current associativity: "B" is now assigned to the 0x2 level of node 1,
> +resulting in the following associativity arrays:
> +
> +* node 0: 0 B A 0
> +* node 1: C 0 A 1
> +* node 2: C B A 2
> +* node 3: 0 B 0 3
> +
> +In the last step we will analyze just nodes 2 and 3. The desired distance between
> +2 and 3 is 20, i.e. a match in 0x3. Node 2 already has a domain assigned in 0x3,
> +A, so we do the same as we did in the previous case and assign it to node 3
> +at 0x3. This is the end result for the associativity arrays:
> +
> +* node 0: 0 B A 0
> +* node 1: C 0 A 1
> +* node 2: C B A 2
> +* node 3: 0 B A 3
> +
> +The kernel will read these arrays and will calculate the following NUMA topology for
> +the guest:
> +
> +::
> +
> +      0   1   2   3
> +  0  10  20  20  20
> +  1  20  10  20  20
> +  2  20  20  10  20
> +  3  20  20  20  10
> +
> +This is not what the user wanted, but it is what the current logic and implementation
> +constraints of the kernel and QEMU can provide within the LOPAPR specification.
> +
> +Changing a single value, especially a low distance value, makes for drastic changes
> +in the result. For example, with the same user input from above, but changing the
> +node distance from 0 to 1 to 40:
> +
> +::
> +
> +      0   1   2   3
> +  0  10  40  20  40
> +  1  40  10  80  40
> +  2  20  80  10  20
> +  3  40  40  20  10
> +
> +This is the result inside the guest, applying the same heuristics:
> +
> +::
> +
> +  $ numactl -H
> +  available: 4 nodes (0-3)
> +  (...)
> +  node distances:
> +  node   0   1   2   3
> +    0:  10  40  20  20
> +    1:  40  10  80  40
> +    2:  20  80  10  20
> +    3:  20  40  20  10
> +
> +This result is much closer to the user input and only a single distance was changed
> +from the original.
> +
> +The kernel will always match with the shortest associativity domain possible, and we're
> +attempting to retain the previously established relations between the nodes. This means
> +that a distance equal to 20 between nodes A and B and the same distance 20 between nodes
> +A and F will cause the distance between B and F to also be 20. The same will happen to
> +other distances, but shorter distances take precedence in the distance calculation.
> +
> +Users are welcome to use this knowledge and experiment with the input to get the
> +NUMA topology they want, or as close to it as they can get. The important thing is to keep
> +expectations in line with what we are capable of providing at this moment: an
> +approximation.
Daniel Henrique Barboza Sept. 25, 2020, 12:44 p.m. UTC | #4
On 9/24/20 11:35 PM, David Gibson wrote:
> On Thu, Sep 24, 2020 at 04:50:55PM -0300, Daniel Henrique Barboza wrote:
>> QEMU allows the user to set NUMA distances in the command line.
>> For ACPI architectures like x86, this means that user input is
>> used to populate the SLIT table, and the guest perceives the
>> distances as the user chooses to.
>>
>> PPC64 does not work that way. In the PAPR concept of NUMA,
>> associativity relations between the NUMA nodes are provided by
>> the device tree, and the guest kernel is free to calculate the
>> distances as it sees fit. Given how ACPI architectures work,
>> this puts the pSeries machine in a strange spot - users expect
>> to define NUMA distances like in the ACPI case, but QEMU does
>> not have control over it. To give pSeries users a similar
>> experience, we'll need to bring kernel specifics to QEMU
>> to approximate the NUMA distances.
>>
>> The pSeries kernel works with the NUMA distance range 10,
>> 20, 40, 80 and 160. The code starts at 10 (local distance) and
>> searches for a match in the first NUMA level between the
>> resources. If there is no match, the distance is doubled and
>> then it proceeds to try to match in the next NUMA level. Rinse
>> and repeat for MAX_DISTANCE_REF_POINTS levels.
>>
>> This patch introduces a spapr_numa_PAPRify_distances() helper
>> that translates the user distances to kernel distance, which
>> we're going to use to determine the associativity domains for
>> the NUMA nodes.
>>
>> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> 
> The idea of rounding the distances like this seems pretty good to me.
> Since each level is a multiple of the distance from the previous one it
> might be more theoretically correct to place the thresholds at the
> geometric mean between each level, rather than the arithmetic mean.  I
> very much doubt it makes much difference in practice though, and this
> is simpler.
> 
> There is one nit I'm less happy with, though..
> 
>> ---
>>   hw/ppc/spapr_numa.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 44 insertions(+)
>>
>> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
>> index fe395e80a3..990a5fce08 100644
>> --- a/hw/ppc/spapr_numa.c
>> +++ b/hw/ppc/spapr_numa.c
>> @@ -37,6 +37,49 @@ static bool spapr_numa_is_symmetrical(MachineState *ms)
>>       return true;
>>   }
>>   
>> +/*
>> + * This function will translate the user distances into
>> + * what the kernel understands as possible values: 10
>> + * (local distance), 20, 40, 80 and 160. Current heuristic
>> + * is:
>> + *
>> + *  - distances between 11 and 30 inclusive -> rounded to 20
>> + *  - distances between 31 and 60 inclusive -> rounded to 40
>> + *  - distances between 61 and 120 inclusive -> rounded to 80
>> + *  - everything above 120 -> 160
>> + *
>> + * This step can also be done at the same time as the NUMA
>> + * associativity domains calculation, at the cost of extra
>> + * complexity. We chose to keep it simpler.
>> + *
>> + * Note: this will overwrite the distance values in
>> + * ms->numa_state->nodes.
>> + */
>> +static void spapr_numa_PAPRify_distances(MachineState *ms)
>> +{
>> +    int src, dst;
>> +    int nb_numa_nodes = ms->numa_state->num_nodes;
>> +    NodeInfo *numa_info = ms->numa_state->nodes;
>> +
>> +    for (src = 0; src < nb_numa_nodes; src++) {
>> +        for (dst = src; dst < nb_numa_nodes; dst++) {
>> +            uint8_t distance = numa_info[src].distance[dst];
>> +            uint8_t rounded_distance = 160;
>> +
>> +            if (distance >= 11 && distance <= 30) {
>> +                rounded_distance = 20;
>> +            } else if (distance >= 31 && distance <= 60) {
>> +                rounded_distance = 40;
>> +            } else if (distance >= 61 && distance <= 120) {
>> +                rounded_distance = 80;
>> +            }
>> +
>> +            numa_info[src].distance[dst] = rounded_distance;
>> +            numa_info[dst].distance[src] = rounded_distance;
> 
> ..I don't love the fact that we alter the distance table in place.
> Even though it was never exposed to the guest, I'd prefer not to
> destroy the information the user passed in.  It could lead to
> surprising results with QMP introspection, and it may make future
> extensions more difficult.
> 
> So I'd prefer to either (a) just leave the table as is and round
> on-demand with a paprify_distance(NN) -> {20,40,80,..} type function,
> or (b) create a parallel, spapr local, table with the rounded
> distances


I did something similar with (a) in the very first version of this series.
I'll fall back to on-demand translation logic to avoid changing numa_info.



Thanks,


DHB

> 
>> +        }
>> +    }
>> +}
>> +
>>   void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>                                      MachineState *machine)
>>   {
>> @@ -95,6 +138,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>           exit(EXIT_FAILURE);
>>       }
>>   
>> +    spapr_numa_PAPRify_distances(machine);
>>   }
>>   
>>   void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
>
Daniel Henrique Barboza Sept. 25, 2020, 2:42 p.m. UTC | #5
On 9/25/20 12:39 AM, David Gibson wrote:
> On Thu, Sep 24, 2020 at 04:50:57PM -0300, Daniel Henrique Barboza wrote:
>> This patch puts all the pieces together to finally allow user
>> input when defining the NUMA topology of the spapr guest.
>>
>> We have one more kernel restriction to handle in this patch:
>> the associativity array of node 0 must be filled with zeroes
>> [1]. The strategy below ensures that this will happen.
> 
> Can you clarify exactly what "node 0" means?  Qemu and the guest
> kernel each have node numberings, which I don't think are necessarily
> the same.  With PAPR's scheme, it's not totally clear what "node 0"
> even means.
> 
>> spapr_numa_define_associativity_domains() will read the distance
>> (already PAPRified) between the nodes from numa_state and determine
>> the appropriate NUMA level. The NUMA domains, processed in ascending
>> order, are going to be matched via NUMA levels, and the lowest
>> associativity domain value is assigned to that specific level for
>> both.
>>
>> This will create a heuristic where the associativities of the first
>> nodes have higher priority and are re-used in new matches, instead of
>> overwriting them with a new associativity match. This is necessary
>> because neither QEMU, nor the pSeries kernel, supports multiple
>> associativity domains for each resource, meaning that we have to
>> decide which associativity relation is relevant.
> 
> Hmm.  I find that a bit hard to follow.  So IIUC, what's going on is
> you start out by treating every node as distant as possible from
> every other: they have a unique value at every level of the
> associativity array.  Then you collapse together nodes that are
> supposed to be closer by making some of their associativity entries
> match.  Is that right?
> 
>> Ultimately, all of this results in a best effort approximation for
>> the actual NUMA distances the user input in the command line. Given
>> the nature of how PAPR itself interprets NUMA distances versus the
>> expectations raised by how ACPI SLIT works, there might be better
>> algorithms but, in the end, it'll also result in another way to
>> approximate what the user really wanted.
>>
>> To keep this commit message no longer than it already is, the next
>> patch will update the existing documentation in ppc-spapr-numa.rst
>> with more in depth details and design considerations/drawbacks.
>>
>> [1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/
>>
>> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
>> ---
>>   hw/ppc/spapr_numa.c | 101 +++++++++++++++++++++++++++++++++++++++++++-
>>   1 file changed, 100 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
>> index ea33439a15..21cae3f799 100644
>> --- a/hw/ppc/spapr_numa.c
>> +++ b/hw/ppc/spapr_numa.c
>> @@ -80,12 +80,99 @@ static void spapr_numa_PAPRify_distances(MachineState *ms)
>>       }
>>   }
>>   
>> +static uint8_t spapr_numa_get_numa_level(uint8_t distance)
>> +{
>> +    uint8_t numa_level;
> 
> This function reinforces my feeling that pre-PAPRizing the distances
> might not be the best idea.  It could go directly from the user
> distance to level just as easily.

Yeah, the logic from patch 3 will end up being folded into this
function.

> 
>> +
>> +    switch (distance) {
>> +    case 10:
>> +        numa_level = 0x4;
>> +        break;
>> +    case 20:
>> +        numa_level = 0x3;
>> +        break;
>> +    case 40:
>> +        numa_level = 0x2;
>> +        break;
>> +    case 80:
>> +        numa_level = 0x1;
>> +        break;
>> +    default:
>> +        numa_level = 0;
>> +    }
>> +
>> +    return numa_level;
>> +}
>> +
>> +static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr)
>> +{
>> +    MachineState *ms = MACHINE(spapr);
>> +    NodeInfo *numa_info = ms->numa_state->nodes;
>> +    int nb_numa_nodes = ms->numa_state->num_nodes;
>> +    int src, dst;
>> +
>> +    for (src = 0; src < nb_numa_nodes; src++) {
>> +        for (dst = src; dst < nb_numa_nodes; dst++) {
>> +            /*
>> +             * This is how the associativity domain between A and B
>> +             * is calculated:
>> +             *
>> +             * - get the distance between them
>> +             * - get the correspondent NUMA level for this distance
>> +             * - the arrays were initialized with their own numa_ids,
>> +             * and we're calculating the distance in node_id ascending order,
>> +             * starting from node 0. This will have a cascade effect in the
>> +             * algorithm because the associativity domains that node 0 defines
>> +             * will be carried over to the other nodes, and node 1
>> +             * associativities will be carried over unless there's already a
>> +             * node 0 associativity assigned, and so on. This happens because
>> +             * we'll assign the lowest value of assoc_src and assoc_dst to be
>> +             * the associativity domain of both, for the given NUMA level.
> 
> Ok, I follow this description better than the one in the commit message.
> 
>> +             * The PPC kernel expects the associativity domains of node 0 to
>> +             * be always 0, and this algorithm will grant that by default.
>> +             */
>> +            uint8_t distance = numa_info[src].distance[dst];
>> +            uint8_t n_level = spapr_numa_get_numa_level(distance);
>> +            uint32_t assoc_src, assoc_dst;
>> +
>> +            /*
>> +             * n_level = 0 means that the distance is greater than our last
>> +             * rounded value (120). In this case there is no NUMA level match
>> +             * between src and dst and we can skip the rest of this loop iteration.
>> +             *
>> +             * The Linux kernel will assume that the distance between src and
>> +             * dst, in this case of no match, is 10 (local distance) doubled
>> +             * for each NUMA level it didn't match. We have MAX_DISTANCE_REF_POINTS
>> +             * levels (4), so this gives us 10*2*2*2*2 = 160.
>> +             *
>> +             * This logic can be seen in the Linux kernel source code, as of
>> +             * v5.9, in arch/powerpc/mm/numa.c, function __node_distance().
>> +             */
>> +            if (n_level == 0) {
>> +                continue;
>> +            }
>> +
>> +            assoc_src = be32_to_cpu(spapr->numa_assoc_array[src][n_level]);
>> +            assoc_dst = be32_to_cpu(spapr->numa_assoc_array[dst][n_level]);
>> +
>> +            if (assoc_src < assoc_dst) {
> 
> So, I don't think you need this logic.  It doesn't really matter which
> associativity value you let win, as long as you're consistent.  So you
> can just base it on src < dst, rather than assoc_src < assoc_dst.
> But src <= dst always, because of the way you've constructed the loops.
> 
>> +                spapr->numa_assoc_array[dst][n_level] = cpu_to_be32(assoc_src);
>> +            } else {
>> +                spapr->numa_assoc_array[src][n_level] = cpu_to_be32(assoc_dst);
>> +            }
> 
> Don't you need to match the values at all levels >= n_level, rather
> than just n_level?  AFAICT it doesn't make sense for two components to
> be in the same associativity domain at one level, but different
> domains at "bigger" levels (i.e. something like two cores sharing an
> L2 cache, but not an L3 cache).


I'm not sure what you meant, so let's go through an example. Let's say we
have something like:

      0    1     2
0    10   20   80
1    20   10   40
2    80   40   10

Starting arrays are:

    0x1 0x2 0x3 0x4
0:  0   0   0   0
1:  1   1   1   1
2:  2   2   2   2

What the code will do is:

a) distance 0 to 1 is 20 (0x3 match), 0 to 2 is 80 (0x1 match):

    0x1 0x2 0x3 0x4
0:  0   0   0   0
1:  1   1   0   1
2:  0   2   2   2


b) distance from 1 to 2 is 40 (0x2 match):

    0x1 0x2 0x3 0x4
0:  0   0   0   0
1:  1   1   0   1
2:  0   1   2   2

What the kernel will do (via __node_distance()) is to stop at the first match.
So:

- distance 0 to 1: first match at 0x3, distance = 20:

    0x1 0x2 0x3 0x4
0:  0   0   0*   0
1:  1   1   0*   1

- distance 0 to 2: first match at 0x1, distance = 80:

    0x1 0x2 0x3 0x4
0:  0*   0   0   0
2:  0*   1   2   2

- distance 1 to 2: first match at 0x2, distance = 40:

    0x1 0x2 0x3 0x4
1:  1   *1   0   1
2:  0   *1   2   2


Note that, for this example, we have an exact match between user input and
what the kernel calculates.
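
(A rough model of that first-match walk, patterned after __node_distance() in
arch/powerpc/mm/numa.c as of v5.9; the name and types below are illustrative,
not actual QEMU or kernel code:)

    /*
     * Walk the two nodes' lookup entries in reference-points order
     * (0x4 first, then 0x3, 0x2, 0x1 -- the rightmost column first in
     * the tables above) and stop at the first match, doubling the
     * distance for every level that did not match.
     */
    static int model_node_distance(const uint8_t *assoc_a,
                                   const uint8_t *assoc_b, int nr_levels)
    {
        int distance = 10;              /* local distance */
        int i;

        for (i = 0; i < nr_levels; i++) {
            if (assoc_a[i] == assoc_b[i]) {
                break;
            }
            distance *= 2;              /* 10 -> 20 -> 40 -> 80 -> 160 */
        }

        return distance;
    }

Feeding it the rows from (b), read right to left (0x4 first), gives 20 for
0 and 1, 80 for 0 and 2, and 40 for 1 and 2 -- the same numbers as above.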

Back in (a), the reason why I'm not doing matches in all n_levels above the
first one is that it doesn't matter for the kernel (it'll stop at the
first match) and that it would lead to unintended matches between the earlier
and newer nodes due to how I'm cascading the associativity domains.

Let's say we go back to (a) and do this:

a2) distance 0 to 1 is 20 (0x3 match), 0 to 2 is 80 (0x1 match):

    0x1 0x2 0x3 0x4
0:  0   0   0   0
1:  0   0   0   1
2:  0   2   2   2

We forced the match on all n_levels above the first match in the associativity
array of 1. Then in (b), where we evaluate 1 to 2:

b2) distance 1 to 2 is 40 (0x2 match), cascading the already existing 0
associativity:

    0x1 0x2 0x3 0x4
0:  0   0   0   0
1:  0   0   0   1
2:  0   0   2   2

The kernel will read this as:

0 to 1: 20
0 to 2: 40 <--- this was 80 via user input
1 to 2: 40


In fact, reading what you said again, that sounds more like a vision in line
with the "Form 0" interpretation of the reference-points, where it would
represent something closer to the physical SMP topology itself.
With the "Form 1" format we're using, we're talking about 'near' and 'far'
resources and most significant boundaries, regardless of any correlation
with the underlying topology. So instead of thinking

"(i.e. something like two cores sharing an L2 cache, but not an L3 cache)"

We would think "node 0 and 1 most significant performance boundary is on
the 0x2 position of the reference-points".


Hope that clarified a bit.


DHB


> 
>> +        }
>> +    }
>> +
>> +}
>> +
>>   void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>                                      MachineState *machine)
>>   {
>>       SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
>>       int nb_numa_nodes = machine->numa_state->num_nodes;
>>       int i, j, max_nodes_with_gpus;
>> +    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
>>   
>>       /*
>>        * For all associativity arrays: first position is the size,
>> @@ -99,6 +186,17 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>       for (i = 0; i < nb_numa_nodes; i++) {
>>           spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
>>           spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
>> +
>> +        /*
>> +         * Fill all associativity domains of non-zero NUMA nodes with
>> +         * node_id. This is required because the default value (0) is
>> +         * considered a match with associativity domains of node 0.
>> +         */
>> +        if (!using_legacy_numa && i != 0) {
>> +            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
>> +                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
>> +            }
>> +        }
>>       }
>>   
>>       /*
>> @@ -128,7 +226,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>        * 1 NUMA node) will not benefit from anything we're going to do
>>        * after this point.
>>        */
>> -    if (spapr_machine_using_legacy_numa(spapr)) {
>> +    if (using_legacy_numa) {
>>           return;
>>       }
>>   
>> @@ -139,6 +237,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>       }
>>   
>>       spapr_numa_PAPRify_distances(machine);
>> +    spapr_numa_define_associativity_domains(spapr);
>>   }
>>   
>>   void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
>