Message ID: 1397568941-4298-1-git-send-email-will.newton@linaro.org
State: Superseded
On Tue, 2014-04-15 at 14:35 +0100, Will Newton wrote:
> Add a microbenchmark for measuring malloc and free performance. The
> benchmark allocates and frees buffers of random sizes in a random
> order and measures the overall execution time and RSS. Variants of the
> benchmark are run with 8, 32 and 64 threads to measure the effect of
> concurrency on allocator performance.
>
> The random block sizes used follow an inverse square distribution
> which is intended to mimic the behaviour of real applications which
> tend to allocate many more small blocks than large ones.

This test is more likely to measure the locking overhead of random than it is to measure malloc performance.

Any attempt at defining a new (another) micro-benchmark should profile to verify that the overhead of setup and measurement is small (< 1%) compared to what you are trying to measure. And verify this on multiple platforms.
On 15 April 2014 16:36, Steven Munroe <munroesj@linux.vnet.ibm.com> wrote:
> On Tue, 2014-04-15 at 14:35 +0100, Will Newton wrote:
>> [...]
>
> This test is more likely to measure the locking overhead of random than
> it is to measure malloc performance.

It uses rand_r so I don't think this is the case.

> Any attempt at defining a new (another) micro-benchmark should profile
> to verify that the overhead of setup and measurement is small (< 1%)
> compared to what you are trying to measure.

Well, there are currently no microbenchmarks for malloc in glibc and very few in the wild, even fewer with sane licenses.

The benchmark code spends roughly 80% of its time within malloc/free and friends, which is good, but does leave some room for improvement. Around 10% of the time is spent dealing with random number generation, so maybe a simple inline random number generator would improve things.

> And verify this on multiple platforms.

I work with arm, aarch64 and x86_64 and try to verify as much as possible across these three architectures; help with others is always appreciated.
On Tue, Apr 15, 2014 at 04:42:25PM +0100, Will Newton wrote:
> On 15 April 2014 16:36, Steven Munroe <munroesj@linux.vnet.ibm.com> wrote:
> > This test is more likely to measure the locking overhead of random than
> > it is to measure malloc performance.
>
> It uses rand_r so I don't think this is the case.

If you're using rand_r, you need to be careful how you use the output, as glibc's rand_r implementation has very poor statistical properties. See:

http://sourceware.org/bugzilla/show_bug.cgi?id=15615

> > Any attempt at defining a new (another) micro-benchmark should profile
> > to verify that the overhead of setup and measurement is small (< 1%)
> > compared to what you are trying to measure.
>
> Well there are currently no microbenchmarks for malloc in glibc and
> very few in the wild, even fewer with sane licenses.

BTW have you looked at the one from locklessinc.com? It makes glibc look really bad and their allocator look very good, but I'm not convinced that they didn't artificially tweak it to get these results. If nothing else, however, it's revealing a serious bottleneck in how glibc does growth of non-main arenas using mprotect.

> The benchmark code spends roughly 80% of its time within malloc/free
> and friends, which is good, but does leave some room for improvement.
> Around 10% of the time is spent in dealing with random number
> generation so maybe a simple inline random number generator would
> improve things.

What about just pregenerating a large array of random numbers and accessing sequential slots of the array? This potentially has cache issues, but it might be possible to simply use a small array and wrap back to the beginning, perhaps performing a trivial operation like adding the last output of the previous run onto the value in the array.

Rich
On Tue, 2014-04-15 at 12:27 -0400, Rich Felker wrote:
> On Tue, Apr 15, 2014 at 04:42:25PM +0100, Will Newton wrote:
> > It uses rand_r so I don't think this is the case.
>
> If you're using rand_r, you need to be careful how you use the output,
> as glibc's rand_r implementation has very poor statistical properties.
> See:
>
> http://sourceware.org/bugzilla/show_bug.cgi?id=15615
>
> [snip]
>
> > The benchmark code spends roughly 80% of its time within malloc/free
> > and friends, which is good, but does leave some room for improvement.
> > Around 10% of the time is spent in dealing with random number
> > generation so maybe a simple inline random number generator would
> > improve things.

I personally strive for 95-99% of time in the software-under-test (SUT). This is much harder than it looks, but it can and should be done.

The other issue to look out for is gettimeofday/clock_gettime overhead. You need to run the SUT long enough that the clock reading and conversion is not a factor in the measurement.

> What about just pregenerating a large array of random numbers and
> accessing sequential slots of the array? This potentially has cache
> issues but it might be possible to simply use a small array and wrap
> back to the beginning, perhaps performing a trivial operation like
> adding the last output of the previous run onto the value in the
> array.

This is generally a better design for a micro-benchmark.
On 15 April 2014 17:27, Rich Felker <dalias@aerifal.cx> wrote:
> If you're using rand_r, you need to be careful how you use the output,
> as glibc's rand_r implementation has very poor statistical properties.
> See:
>
> http://sourceware.org/bugzilla/show_bug.cgi?id=15615

Thanks for the pointer. I have switched to using rand(3), although I suspect the quality of the random numbers is probably not a very big worry in this case.

> BTW have you looked at the one from locklessinc.com? It makes glibc
> look really bad and their allocator look very good, but I'm not
> convinced that they didn't artificially tweak it to get these results.
> If nothing else, however, it's revealing a serious bottleneck in how
> glibc does growth of non-main arenas using mprotect.

I'll have a look at that, thanks for the link.

> What about just pregenerating a large array of random numbers and
> accessing sequential slots of the array? This potentially has cache
> issues but it might be possible to simply use a small array and wrap
> back to the beginning, perhaps performing a trivial operation like
> adding the last output of the previous run onto the value in the
> array.

I'll use an approach like that in my next version. I now get between 90-95% of time spent within the allocator, with the rest of the time in the benchmark loop seemingly due to cache misses when accessing the random numbers and pointer array.
diff --git a/benchtests/Makefile b/benchtests/Makefile
index a0954cd..f38380d 100644
--- a/benchtests/Makefile
+++ b/benchtests/Makefile
@@ -37,9 +37,11 @@ string-bench := bcopy bzero memccpy memchr memcmp memcpy memmem memmove \
 		strspn strstr strcpy_chk stpcpy_chk memrchr strsep strtok
 string-bench-all := $(string-bench)
 
+malloc-bench := malloc malloc-threads-8 malloc-threads-32 malloc-threads-64
+
 stdlib-bench := strtod
 
-benchset := $(string-bench-all) $(stdlib-bench)
+benchset := $(string-bench-all) $(stdlib-bench) $(malloc-bench)
 
 CFLAGS-bench-ffs.c += -fno-builtin
 CFLAGS-bench-ffsll.c += -fno-builtin
@@ -47,6 +49,9 @@ CFLAGS-bench-ffsll.c += -fno-builtin
 $(addprefix $(objpfx)bench-,$(bench-math)): $(common-objpfx)math/libm.so
 $(addprefix $(objpfx)bench-,$(bench-pthread)): \
 	$(common-objpfx)nptl/libpthread.so
+$(objpfx)bench-malloc-threads-8: $(common-objpfx)nptl/libpthread.so
+$(objpfx)bench-malloc-threads-32: $(common-objpfx)nptl/libpthread.so
+$(objpfx)bench-malloc-threads-64: $(common-objpfx)nptl/libpthread.so
diff --git a/benchtests/bench-malloc-threads-32.c b/benchtests/bench-malloc-threads-32.c
new file mode 100644
index 0000000..463ceb7
--- /dev/null
+++ b/benchtests/bench-malloc-threads-32.c
@@ -0,0 +1,20 @@
+/* Measure malloc and free functions with threads.
+   Copyright (C) 2014 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#define NUM_THREADS 32
+#include "bench-malloc.c"
diff --git a/benchtests/bench-malloc-threads-64.c b/benchtests/bench-malloc-threads-64.c
new file mode 100644
index 0000000..61d8c10
--- /dev/null
+++ b/benchtests/bench-malloc-threads-64.c
@@ -0,0 +1,20 @@
+/* Measure malloc and free functions with threads.
+   Copyright (C) 2014 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#define NUM_THREADS 64
+#include "bench-malloc.c"
diff --git a/benchtests/bench-malloc-threads-8.c b/benchtests/bench-malloc-threads-8.c
new file mode 100644
index 0000000..ac4ff79
--- /dev/null
+++ b/benchtests/bench-malloc-threads-8.c
@@ -0,0 +1,20 @@
+/* Measure malloc and free functions with threads.
+   Copyright (C) 2014 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#define NUM_THREADS 8
+#include "bench-malloc.c"
diff --git a/benchtests/bench-malloc.c b/benchtests/bench-malloc.c
new file mode 100644
index 0000000..6809bba
--- /dev/null
+++ b/benchtests/bench-malloc.c
@@ -0,0 +1,183 @@
+/* Benchmark malloc and free functions.
+   Copyright (C) 2013-2014 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <math.h>
+#include <pthread.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/time.h>
+#include <sys/resource.h>
+
+#include "bench-timing.h"
+#include "json-lib.h"
+
+#define BENCHMARK_ITERATIONS 10000000
+#define RAND_SEED 88
+
+#ifndef NUM_THREADS
+#define NUM_THREADS 1
+#endif
+
+/* Maximum memory that can be allocated at any one time is:
+
+   NUM_THREADS * WORKING_SET_SIZE * MAX_ALLOCATION_SIZE
+
+   However due to the distribution of the random block sizes
+   the typical amount allocated will be much smaller.  */
+#define WORKING_SET_SIZE 1024
+
+#define MIN_ALLOCATION_SIZE 4
+#define MAX_ALLOCATION_SIZE 32768
+
+/* Get a random block size with an inverse square distribution.  */
+static size_t
+get_block_size (unsigned int rand_data)
+{
+  /* Inverse square.  */
+  float exponent = -2;
+  /* Minimum value of distribution.  */
+  float dist_min = MIN_ALLOCATION_SIZE;
+  /* Maximum value of distribution.  */
+  float dist_max = MAX_ALLOCATION_SIZE;
+
+  float min_pow = powf (dist_min, exponent + 1);
+  float max_pow = powf (dist_max, exponent + 1);
+
+  float r = (float) rand_data / RAND_MAX;
+
+  return (size_t) powf ((max_pow - min_pow) * r + min_pow, 1 / (exponent + 1));
+}
+
+/* Allocate and free blocks in a random order.  */
+static void
+malloc_benchmark_loop (size_t iters, void **ptr_arr, unsigned int *rand_data)
+{
+  size_t i;
+
+  for (i = 0; i < iters; i++)
+    {
+      unsigned int r = rand_r (rand_data);
+      size_t next_idx = r % WORKING_SET_SIZE;
+      size_t next_block = get_block_size (r);
+
+      if (ptr_arr[next_idx])
+	free (ptr_arr[next_idx]);
+
+      ptr_arr[next_idx] = malloc (next_block);
+    }
+}
+
+static void *working_set[NUM_THREADS][WORKING_SET_SIZE];
+
+#if NUM_THREADS > 1
+static pthread_t threads[NUM_THREADS];
+
+struct thread_args
+{
+  size_t iters;
+  void **working_set;
+};
+
+static void *
+benchmark_thread (void *arg)
+{
+  struct thread_args *args = (struct thread_args *) arg;
+  size_t iters = args->iters;
+  void *thread_set = args->working_set;
+  unsigned int rand_data = RAND_SEED;
+
+  malloc_benchmark_loop (iters, thread_set, &rand_data);
+
+  return NULL;
+}
+#endif
+
+int
+main (int argc, char **argv)
+{
+  timing_t start, stop, cur;
+  size_t iters = BENCHMARK_ITERATIONS;
+  unsigned long res;
+  json_ctx_t json_ctx;
+  double d_total_s, d_total_i;
+
+  json_init (&json_ctx, 0, stdout);
+
+  json_document_begin (&json_ctx);
+
+  json_attr_string (&json_ctx, "timing_type", TIMING_TYPE);
+
+  json_attr_object_begin (&json_ctx, "functions");
+
+  json_attr_object_begin (&json_ctx, "malloc");
+
+  json_attr_object_begin (&json_ctx, "");
+
+  TIMING_INIT (res);
+
+  (void) res;
+
+  TIMING_NOW (start);
+#if NUM_THREADS == 1
+  unsigned int rand_data = RAND_SEED;
+  malloc_benchmark_loop (iters, working_set[0], &rand_data);
+#else
+  struct thread_args args[NUM_THREADS];
+
+  size_t i;
+
+  for (i = 0; i < NUM_THREADS; i++)
+    {
+      args[i].iters = iters;
+      args[i].working_set = working_set[i];
+      pthread_create (&threads[i], NULL, benchmark_thread, &args[i]);
+    }
+
+  for (i = 0; i < NUM_THREADS; i++)
+    pthread_join (threads[i], NULL);
+#endif
+  TIMING_NOW (stop);
+
+  struct rusage usage;
+  getrusage (RUSAGE_SELF, &usage);
+
+  TIMING_DIFF (cur, start, stop);
+
+  d_total_s = cur;
+  d_total_i = iters * NUM_THREADS;
+
+  json_attr_double (&json_ctx, "duration", d_total_s);
+  json_attr_double (&json_ctx, "iterations", d_total_i);
+  json_attr_double (&json_ctx, "mean", d_total_s / d_total_i);
+  json_attr_double (&json_ctx, "max_rss", usage.ru_maxrss);
+
+  json_attr_double (&json_ctx, "threads", NUM_THREADS);
+  json_attr_double (&json_ctx, "min_size", MIN_ALLOCATION_SIZE);
+  json_attr_double (&json_ctx, "max_size", MAX_ALLOCATION_SIZE);
+  json_attr_double (&json_ctx, "random_seed", RAND_SEED);
+
+  json_attr_object_end (&json_ctx);
+
+  json_attr_object_end (&json_ctx);
+
+  json_attr_object_end (&json_ctx);
+
+  json_document_end (&json_ctx);
+
+  return 0;
+}