From patchwork Fri Dec 23 02:32:09 2016
X-Patchwork-Submitter: Yi He
X-Patchwork-Id: 88905
From: Yi He <yi.he@linaro.org>
To: petri.savolainen@nokia-bell-labs.com, lng-odp@lists.linaro.org
Date: Fri, 23 Dec 2016 02:32:09 +0000
Message-Id: <1482460329-1254-1-git-send-email-yi.he@linaro.org>
Subject: [lng-odp] [API-NEXT PATCHv2] linux-gen: sched: fix SP scheduler hang in process mode

The SP scheduler hangs in the process mode performance test because its
global data structures were not created in a shared memory region.

Signed-off-by: Yi He <yi.he@linaro.org>
---
since v1: rebased upon Petri's patch "linux-gen: schedule_sp: use ring as priority queue"

 platform/linux-generic/odp_schedule_sp.c | 100 ++++++++++++++++++-------------
 1 file changed, 60 insertions(+), 40 deletions(-)

--
2.7.4

diff --git a/platform/linux-generic/odp_schedule_sp.c b/platform/linux-generic/odp_schedule_sp.c
index 5150d28..bb7416a 100644
--- a/platform/linux-generic/odp_schedule_sp.c
+++ b/platform/linux-generic/odp_schedule_sp.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -108,6 +109,7 @@ typedef struct {
 	sched_cmd_t   pktio_cmd[NUM_PKTIO];
 	prio_queue_t  prio_queue[NUM_GROUP][NUM_PRIO];
 	sched_group_t sched_group;
+	odp_shm_t     shm;
 } sched_global_t;
 
 typedef struct {
@@ -119,7 +121,7 @@ typedef struct {
 	int group[NUM_GROUP];
 } sched_local_t;
 
-static sched_global_t sched_global;
+static sched_global_t *sched_global;
 static __thread sched_local_t sched_local;
 
 static inline uint32_t index_to_ring_idx(int pktio, uint32_t index)
@@ -145,30 +147,44 @@ static inline uint32_t index_from_ring_idx(uint32_t *index, uint32_t ring_idx)
 static int init_global(void)
 {
 	int i, j;
-	sched_group_t *sched_group = &sched_global.sched_group;
+	odp_shm_t shm;
+	sched_group_t *sched_group = NULL;
 
 	ODP_DBG("Using SP scheduler\n");
 
-	memset(&sched_global, 0, sizeof(sched_global_t));
+	shm = odp_shm_reserve("sp_scheduler",
+			      sizeof(sched_global_t),
+			      ODP_CACHE_LINE_SIZE, 0);
+
+	sched_global = odp_shm_addr(shm);
+
+	if (sched_global == NULL) {
+		ODP_ERR("Schedule init: Shm reserve failed.\n");
+		return -1;
+	}
+
+	memset(sched_global, 0, sizeof(sched_global_t));
+	sched_global->shm = shm;
 
 	for (i = 0; i < NUM_QUEUE; i++) {
-		sched_global.queue_cmd[i].s.type = CMD_QUEUE;
-		sched_global.queue_cmd[i].s.index = i;
-		sched_global.queue_cmd[i].s.ring_idx = index_to_ring_idx(0, i);
+		sched_global->queue_cmd[i].s.type = CMD_QUEUE;
+		sched_global->queue_cmd[i].s.index = i;
+		sched_global->queue_cmd[i].s.ring_idx = index_to_ring_idx(0, i);
 	}
 
 	for (i = 0; i < NUM_PKTIO; i++) {
-		sched_global.pktio_cmd[i].s.type = CMD_PKTIO;
-		sched_global.pktio_cmd[i].s.index = i;
-		sched_global.pktio_cmd[i].s.ring_idx = index_to_ring_idx(1, i);
-		sched_global.pktio_cmd[i].s.prio = PKTIN_PRIO;
-		sched_global.pktio_cmd[i].s.group = GROUP_PKTIN;
+		sched_global->pktio_cmd[i].s.type = CMD_PKTIO;
+		sched_global->pktio_cmd[i].s.index = i;
+		sched_global->pktio_cmd[i].s.ring_idx = index_to_ring_idx(1, i);
+		sched_global->pktio_cmd[i].s.prio = PKTIN_PRIO;
+		sched_global->pktio_cmd[i].s.group = GROUP_PKTIN;
 	}
 
 	for (i = 0; i < NUM_GROUP; i++)
		for (j = 0; j < NUM_PRIO; j++)
-			ring_init(&sched_global.prio_queue[i][j].ring);
+			ring_init(&sched_global->prio_queue[i][j].ring);
 
+	sched_group = &sched_global->sched_group;
 	odp_ticketlock_init(&sched_group->s.lock);
 
 	for (i = 0; i < NUM_THREAD; i++)
@@ -202,16 +218,22 @@ static int init_local(void)
 
 static int term_global(void)
 {
-	int qi;
+	int qi, ret = 0;
 
 	for (qi = 0; qi < NUM_QUEUE; qi++) {
-		if (sched_global.queue_cmd[qi].s.init) {
+		if (sched_global->queue_cmd[qi].s.init) {
 			/* todo: dequeue until empty ? */
 			sched_cb_queue_destroy_finalize(qi);
 		}
 	}
 
-	return 0;
+	ret = odp_shm_free(sched_global->shm);
+	if (ret < 0) {
+		ODP_ERR("Shm free failed for sp_scheduler");
+		ret = -1;
+	}
+
+	return ret;
 }
 
 static int term_local(void)
@@ -267,7 +289,7 @@ static void remove_group(sched_group_t *sched_group, int thr, int group)
 
 static int thr_add(odp_schedule_group_t group, int thr)
 {
-	sched_group_t *sched_group = &sched_global.sched_group;
+	sched_group_t *sched_group = &sched_global->sched_group;
 
 	if (group < 0 || group >= NUM_GROUP)
 		return -1;
@@ -292,7 +314,7 @@ static int thr_add(odp_schedule_group_t group, int thr)
 
 static int thr_rem(odp_schedule_group_t group, int thr)
 {
-	sched_group_t *sched_group = &sched_global.sched_group;
+	sched_group_t *sched_group = &sched_global->sched_group;
 
 	if (group < 0 || group >= NUM_GROUP)
 		return -1;
@@ -320,7 +342,7 @@ static int num_grps(void)
 
 static int init_queue(uint32_t qi, const odp_schedule_param_t *sched_param)
 {
-	sched_group_t *sched_group = &sched_global.sched_group;
+	sched_group_t *sched_group = &sched_global->sched_group;
 	odp_schedule_group_t group = sched_param->group;
 	int prio = 0;
 
@@ -333,18 +355,18 @@ static int init_queue(uint32_t qi, const odp_schedule_param_t *sched_param)
 	if (sched_param->prio > 0)
 		prio = LOWEST_QUEUE_PRIO;
 
-	sched_global.queue_cmd[qi].s.prio = prio;
-	sched_global.queue_cmd[qi].s.group = group;
-	sched_global.queue_cmd[qi].s.init = 1;
+	sched_global->queue_cmd[qi].s.prio = prio;
+	sched_global->queue_cmd[qi].s.group = group;
+	sched_global->queue_cmd[qi].s.init = 1;
 
 	return 0;
 }
 
 static void destroy_queue(uint32_t qi)
 {
-	sched_global.queue_cmd[qi].s.prio = 0;
-	sched_global.queue_cmd[qi].s.group = 0;
-	sched_global.queue_cmd[qi].s.init = 0;
+	sched_global->queue_cmd[qi].s.prio = 0;
+	sched_global->queue_cmd[qi].s.group = 0;
+	sched_global->queue_cmd[qi].s.init = 0;
 }
 
 static inline void add_tail(sched_cmd_t *cmd)
@@ -354,8 +376,7 @@ static inline void add_tail(sched_cmd_t *cmd)
 	int prio = cmd->s.prio;
 	uint32_t idx = cmd->s.ring_idx;
 
-	prio_queue = &sched_global.prio_queue[group][prio];
-
+	prio_queue = &sched_global->prio_queue[group][prio];
 	ring_enq(&prio_queue->ring, RING_MASK, idx);
 }
 
@@ -365,8 +386,7 @@ static inline sched_cmd_t *rem_head(int group, int prio)
 	uint32_t ring_idx, index;
 	int pktio;
 
-	prio_queue = &sched_global.prio_queue[group][prio];
-
+	prio_queue = &sched_global->prio_queue[group][prio];
 	ring_idx = ring_deq(&prio_queue->ring, RING_MASK);
 
 	if (ring_idx == RING_EMPTY)
@@ -375,16 +395,16 @@ static inline sched_cmd_t *rem_head(int group, int prio)
 	pktio = index_from_ring_idx(&index, ring_idx);
 
 	if (pktio)
-		return &sched_global.pktio_cmd[index];
+		return &sched_global->pktio_cmd[index];
 
-	return &sched_global.queue_cmd[index];
+	return &sched_global->queue_cmd[index];
 }
 
 static int sched_queue(uint32_t qi)
 {
 	sched_cmd_t *cmd;
 
-	cmd = &sched_global.queue_cmd[qi];
+	cmd = &sched_global->queue_cmd[qi];
 	add_tail(cmd);
 
 	return 0;
@@ -410,7 +430,7 @@ static void pktio_start(int pktio_index, int num, int pktin_idx[])
 	ODP_DBG("pktio index: %i, %i pktin queues %i\n", pktio_index, num,
 		pktin_idx[0]);
 
-	cmd = &sched_global.pktio_cmd[pktio_index];
+	cmd = &sched_global->pktio_cmd[pktio_index];
 
 	if (num > NUM_PKTIN)
 		ODP_ABORT("Supports only %i pktin queues per interface\n",
@@ -428,7 +448,7 @@ static inline sched_cmd_t *sched_cmd(void)
 {
 	int prio, i;
 	int thr = sched_local.thr_id;
-	sched_group_t *sched_group = &sched_global.sched_group;
+	sched_group_t *sched_group = &sched_global->sched_group;
 	thr_group_t *thr_group = &sched_group->s.thr[thr];
 	uint32_t gen_cnt;
 
@@ -602,7 +622,7 @@ static odp_schedule_group_t schedule_group_create(const char *name,
 						  const odp_thrmask_t *thrmask)
 {
 	odp_schedule_group_t group = ODP_SCHED_GROUP_INVALID;
-	sched_group_t *sched_group = &sched_global.sched_group;
+	sched_group_t *sched_group = &sched_global->sched_group;
 	int i;
 
 	odp_ticketlock_lock(&sched_group->s.lock);
@@ -633,7 +653,7 @@ static odp_schedule_group_t schedule_group_create(const char *name,
 
 static int schedule_group_destroy(odp_schedule_group_t group)
 {
-	sched_group_t *sched_group = &sched_global.sched_group;
+	sched_group_t *sched_group = &sched_global->sched_group;
 
 	if (group < NUM_STATIC_GROUP || group >= NUM_GROUP)
 		return -1;
@@ -656,7 +676,7 @@ static int schedule_group_destroy(odp_schedule_group_t group)
 static odp_schedule_group_t schedule_group_lookup(const char *name)
 {
 	odp_schedule_group_t group = ODP_SCHED_GROUP_INVALID;
-	sched_group_t *sched_group = &sched_global.sched_group;
+	sched_group_t *sched_group = &sched_global->sched_group;
 	int i;
 
 	odp_ticketlock_lock(&sched_group->s.lock);
@@ -677,7 +697,7 @@ static int schedule_group_join(odp_schedule_group_t group,
 			       const odp_thrmask_t *thrmask)
 {
 	int thr;
-	sched_group_t *sched_group = &sched_global.sched_group;
+	sched_group_t *sched_group = &sched_global->sched_group;
 
 	if (group < 0 || group >= NUM_GROUP)
 		return -1;
@@ -709,7 +729,7 @@ static int schedule_group_leave(odp_schedule_group_t group,
 				const odp_thrmask_t *thrmask)
 {
 	int thr;
-	sched_group_t *sched_group = &sched_global.sched_group;
+	sched_group_t *sched_group = &sched_global->sched_group;
 	odp_thrmask_t *all = &sched_group->s.group[GROUP_ALL].mask;
 	odp_thrmask_t not;
 
@@ -743,7 +763,7 @@ static int schedule_group_leave(odp_schedule_group_t group,
 static int schedule_group_thrmask(odp_schedule_group_t group,
 				  odp_thrmask_t *thrmask)
 {
-	sched_group_t *sched_group = &sched_global.sched_group;
+	sched_group_t *sched_group = &sched_global->sched_group;
 
 	if (group < 0 || group >= NUM_GROUP)
 		return -1;
@@ -765,7 +785,7 @@ static int schedule_group_thrmask(odp_schedule_group_t group,
 static int schedule_group_info(odp_schedule_group_t group,
 			       odp_schedule_group_info_t *info)
 {
-	sched_group_t *sched_group = &sched_global.sched_group;
+	sched_group_t *sched_group = &sched_global->sched_group;
 
 	if (group < 0 || group >= NUM_GROUP)
 		return -1;
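
[Editor's note] For readers unfamiliar with the ODP shared memory API, the pattern applied by this patch is sketched below. This is an illustrative sketch only, not part of the patch: the type my_global_t, the region name "my_global_data" and the my_* function names are made up; only odp_shm_reserve(), odp_shm_addr() and odp_shm_free() are the real ODP calls. The point is that a plain static variable becomes a per-process copy once workers are forked in process mode, so writes are not shared, whereas a region reserved from ODP shared memory stays shared across the workers.

/*
 * Illustrative sketch (assumed names), keeping writable global state in an
 * ODP shared memory region so that process mode workers all see one copy.
 */
#include <string.h>
#include <odp_api.h>

typedef struct {
	odp_shm_t shm;     /* handle kept inside the region for term time */
	int       counter; /* example of shared state */
} my_global_t;

static my_global_t *my_global; /* per-process pointer into the shm region */

static int my_init_global(void)
{
	odp_shm_t shm;

	/* Reserve a named, cache-line-aligned region before forking workers */
	shm = odp_shm_reserve("my_global_data", sizeof(my_global_t),
			      ODP_CACHE_LINE_SIZE, 0);

	my_global = odp_shm_addr(shm);
	if (my_global == NULL)
		return -1;

	memset(my_global, 0, sizeof(my_global_t));
	my_global->shm = shm;

	return 0;
}

static int my_term_global(void)
{
	/* Free the region using the handle stored in the region itself */
	return odp_shm_free(my_global->shm) < 0 ? -1 : 0;
}

Storing the shm handle inside the region itself, as init_global()/term_global() do in the patch, lets the terminate hook free the region without an extra lookup.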