From patchwork Wed Aug 8 13:00:06 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Github ODP bot
X-Patchwork-Id: 143621
Delivered-To: patch@linaro.org
From: Github ODP bot <odpbot@yandex.ru>
To: lng-odp@lists.linaro.org
Date: Wed, 8 Aug 2018 13:00:06 +0000
Message-Id: <1533733206-28811-2-git-send-email-odpbot@yandex.ru>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1533733206-28811-1-git-send-email-odpbot@yandex.ru>
References: <1533733206-28811-1-git-send-email-odpbot@yandex.ru>
Github-pr-num: 666
Subject: [lng-odp] [PATCH v1 1/1] test: sched_perf: add num queues option

From: Petri Savolainen

Added an option to set the number of queues per worker thread. The number
of active queues usually affects scheduler performance.
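For illustration, a run that exercises the new option could look like the
following; the CPU and queue counts here are example values, not taken from
the patch:

    odp_sched_perf -c 4 -q 8 -e 100 -t 0

With these values the test would create 4 * 8 = 32 scheduled queues and
32 * 100 = 3200 events in total, following the tot_queue = num_queue * num_cpu
and tot_event = tot_queue * num_event calculations added to parse_options()
below.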
Signed-off-by: Petri Savolainen
---
/** Email created from pull request 666 (psavol:master-sched-perf-numqueue)
 ** https://github.com/Linaro/odp/pull/666
 ** Patch: https://github.com/Linaro/odp/pull/666.patch
 ** Base sha: 7c87b66edc84e8c713fefc68d46464660adaf71e
 ** Merge commit sha: d8a76e7a44b96d574b4e8cc1741af827a1717475
 **/
 test/performance/odp_sched_perf.c | 60 ++++++++++++++++++++++---------
 1 file changed, 43 insertions(+), 17 deletions(-)

diff --git a/test/performance/odp_sched_perf.c b/test/performance/odp_sched_perf.c
index e76725cc0..eb27a3139 100644
--- a/test/performance/odp_sched_perf.c
+++ b/test/performance/odp_sched_perf.c
@@ -14,12 +14,18 @@
 #include <odp_api.h>
 #include <odp/helper/odph_api.h>
 
+#define MAX_QUEUES_PER_CPU 1024
+#define MAX_QUEUES         (ODP_THREAD_COUNT_MAX * MAX_QUEUES_PER_CPU)
+
 typedef struct test_options_t {
 	uint32_t num_cpu;
+	uint32_t num_queue;
 	uint32_t num_event;
 	uint32_t num_round;
 	uint32_t max_burst;
 	int      queue_type;
+	uint32_t tot_queue;
+	uint32_t tot_event;
 
 } test_options_t;
 
@@ -38,7 +44,7 @@ typedef struct test_global_t {
 	odp_barrier_t barrier;
 	odp_pool_t pool;
 	odp_cpumask_t cpumask;
-	odp_queue_t queue[ODP_THREAD_COUNT_MAX];
+	odp_queue_t queue[MAX_QUEUES];
 	odph_odpthread_t thread_tbl[ODP_THREAD_COUNT_MAX];
 	test_stat_t stat[ODP_THREAD_COUNT_MAX];
 
@@ -53,11 +59,12 @@ static void print_usage(void)
 	       "\n"
 	       "Usage: odp_sched_perf [options]\n"
 	       "\n"
-	       "  -c, --num_cpu    Number of CPUs (worker threads). 0: all available CPUs. Default 1.\n"
+	       "  -c, --num_cpu    Number of CPUs (worker threads). 0: all available CPUs. Default: 1.\n"
+	       "  -q, --num_queue  Number of queues per CPU. Default: 1.\n"
 	       "  -e, --num_event  Number of events per queue\n"
 	       "  -r, --num_round  Number of rounds\n"
 	       "  -b, --burst      Maximum number of events per operation\n"
-	       "  -t, --type       Queue type. 0: parallel, 1: atomic, 2: ordered. Default 0.\n"
+	       "  -t, --type       Queue type. 0: parallel, 1: atomic, 2: ordered. Default: 0.\n"
 	       "  -h, --help       This help\n"
 	       "\n");
 }
@@ -70,6 +77,7 @@ static int parse_options(int argc, char *argv[], test_options_t *test_options)
 
 	static const struct option longopts[] = {
 		{"num_cpu",   required_argument, NULL, 'c'},
+		{"num_queue", required_argument, NULL, 'q'},
 		{"num_event", required_argument, NULL, 'e'},
 		{"num_round", required_argument, NULL, 'r'},
 		{"burst",     required_argument, NULL, 'b'},
@@ -78,9 +86,10 @@ static int parse_options(int argc, char *argv[], test_options_t *test_options)
 		{NULL, 0, NULL, 0}
 	};
 
-	static const char *shortopts = "+c:e:r:b:t:h";
+	static const char *shortopts = "+c:q:e:r:b:t:h";
 
 	test_options->num_cpu   = 1;
+	test_options->num_queue = 1;
 	test_options->num_event = 100;
 	test_options->num_round = 100000;
 	test_options->max_burst = 100;
@@ -96,6 +105,9 @@ static int parse_options(int argc, char *argv[], test_options_t *test_options)
 		case 'c':
 			test_options->num_cpu = atoi(optarg);
 			break;
+		case 'q':
+			test_options->num_queue = atoi(optarg);
+			break;
 		case 'e':
 			test_options->num_event = atoi(optarg);
 			break;
@@ -117,6 +129,17 @@ static int parse_options(int argc, char *argv[], test_options_t *test_options)
 		}
 	}
 
+	if (test_options->num_queue > MAX_QUEUES_PER_CPU) {
+		printf("Error: Too many queues per worker. Max supported %i\n.",
+		       MAX_QUEUES_PER_CPU);
+		ret = -1;
+	}
+
+	test_options->tot_queue = test_options->num_queue *
+				  test_options->num_cpu;
+	test_options->tot_event = test_options->tot_queue *
+				  test_options->num_event;
+
 	return ret;
 }
 
@@ -157,18 +180,22 @@ static int create_pool(test_global_t *global)
 	odp_pool_param_t pool_param;
 	odp_pool_t pool;
 	test_options_t *test_options = &global->test_options;
+	uint32_t num_cpu   = test_options->num_cpu;
+	uint32_t num_queue = test_options->num_queue;
 	uint32_t num_event = test_options->num_event;
 	uint32_t num_round = test_options->num_round;
 	uint32_t max_burst = test_options->max_burst;
-	int num_cpu = test_options->num_cpu;
-	uint32_t tot_event = num_event * num_cpu;
+	uint32_t tot_queue = test_options->tot_queue;
+	uint32_t tot_event = test_options->tot_event;
 
 	printf("\nScheduler performance test\n");
-	printf("  num cpu          %i\n", num_cpu);
-	printf("  num rounds       %u\n", num_round);
-	printf("  num events       %u\n", tot_event);
+	printf("  num cpu          %u\n", num_cpu);
+	printf("  queues per cpu   %u\n", num_queue);
 	printf("  events per queue %u\n", num_event);
-	printf("  max burst        %u\n", max_burst);
+	printf("  max burst size   %u\n", max_burst);
+	printf("  num queues       %u\n", tot_queue);
+	printf("  num events       %u\n", tot_event);
+	printf("  num rounds       %u\n", num_round);
 
 	if (odp_pool_capability(&pool_capa)) {
 		printf("Error: Pool capa failed.\n");
@@ -207,7 +234,7 @@ static int create_queues(test_global_t *global)
 	uint32_t i, j;
 	test_options_t *test_options = &global->test_options;
 	uint32_t num_event = test_options->num_event;
-	uint32_t num_queue = test_options->num_cpu;
+	uint32_t tot_queue = test_options->tot_queue;
 	int type = test_options->queue_type;
 	odp_pool_t pool = global->pool;
 
@@ -222,7 +249,6 @@ static int create_queues(test_global_t *global)
 		sync = ODP_SCHED_SYNC_ORDERED;
 	}
 
-	printf("  num queues       %u\n", num_queue);
 	printf("  queue type       %s\n\n", type_str);
 
 	if (odp_queue_capability(&queue_capa)) {
@@ -230,7 +256,7 @@ static int create_queues(test_global_t *global)
 		return -1;
 	}
 
-	if (num_queue > queue_capa.sched.max_num) {
+	if (tot_queue > queue_capa.sched.max_num) {
 		printf("Max queues supported %u\n", queue_capa.sched.max_num);
 		return -1;
 	}
@@ -241,7 +267,7 @@ static int create_queues(test_global_t *global)
 		return -1;
 	}
 
-	for (i = 0; i < ODP_THREAD_COUNT_MAX; i++)
+	for (i = 0; i < MAX_QUEUES; i++)
 		global->queue[i] = ODP_QUEUE_INVALID;
 
 	odp_queue_param_init(&queue_param);
@@ -251,7 +277,7 @@ static int create_queues(test_global_t *global)
 	queue_param.sched.group = ODP_SCHED_GROUP_ALL;
 	queue_param.size = num_event;
 
-	for (i = 0; i < num_queue; i++) {
+	for (i = 0; i < tot_queue; i++) {
 		queue = odp_queue_create(NULL, &queue_param);
 
 		if (queue == ODP_QUEUE_INVALID) {
@@ -262,7 +288,7 @@ static int create_queues(test_global_t *global)
 		global->queue[i] = queue;
 	}
 
-	for (i = 0; i < num_queue; i++) {
+	for (i = 0; i < tot_queue; i++) {
 		queue = global->queue[i];
 
 		for (j = 0; j < num_event; j++) {
@@ -294,7 +320,7 @@ static int destroy_queues(test_global_t *global)
 	while ((ev = odp_schedule(NULL, wait)) != ODP_EVENT_INVALID)
 		odp_event_free(ev);
 
-	for (i = 0; i < ODP_THREAD_COUNT_MAX; i++) {
+	for (i = 0; i < MAX_QUEUES; i++) {
 		if (global->queue[i] != ODP_QUEUE_INVALID) {
 			if (odp_queue_destroy(global->queue[i])) {
 				printf("Error: Queue destroy failed %u\n", i);