From patchwork Fri Jul 31 02:41:44 2015
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 51733
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Date: Thu, 30 Jul 2015 19:41:44 -0700
Message-Id: <1438310507-10750-4-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1438310507-10750-1-git-send-email-bill.fischofer@linaro.org>
References: <1438310507-10750-1-git-send-email-bill.fischofer@linaro.org>
Subject: [lng-odp] [API-NEXT PATCHv3 3/6] linux-generic: schedule: implement
 scheduler groups

Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
 include/odp/api/config.h                          |   5 +
 .../include/odp/plat/schedule_types.h             |   4 +
 platform/linux-generic/odp_schedule.c             | 158 +++++++++++++++++++++
 platform/linux-generic/odp_thread.c               |  25 +++-
 4 files changed, 186 insertions(+), 6 deletions(-)

diff --git a/include/odp/api/config.h b/include/odp/api/config.h
index b5c8fdd..302eaf5 100644
--- a/include/odp/api/config.h
+++ b/include/odp/api/config.h
@@ -44,6 +44,11 @@ extern "C" {
 #define ODP_CONFIG_SCHED_PRIOS  8

 /**
+ * Number of scheduling groups
+ */
+#define ODP_CONFIG_SCHED_GRPS  16
+
+/**
  * Maximum number of packet IO resources
  */
 #define ODP_CONFIG_PKTIO_ENTRIES 64

diff --git a/platform/linux-generic/include/odp/plat/schedule_types.h b/platform/linux-generic/include/odp/plat/schedule_types.h
index 91e62e7..f13bfab 100644
--- a/platform/linux-generic/include/odp/plat/schedule_types.h
+++ b/platform/linux-generic/include/odp/plat/schedule_types.h
@@ -43,8 +43,12 @@ typedef int odp_schedule_sync_t;

 typedef int odp_schedule_group_t;

+/* These must be kept in sync with thread_globals_t in odp_thread.c */
+#define ODP_SCHED_GROUP_INVALID -1
 #define ODP_SCHED_GROUP_ALL     0
 #define ODP_SCHED_GROUP_WORKER  1
+#define ODP_SCHED_GROUP_CONTROL 2
+#define ODP_SCHED_GROUP_NAMED   3

 #define ODP_SCHED_GROUP_NAME_LEN 32

diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c
index 5d32c81..20dd850 100644
--- a/platform/linux-generic/odp_schedule.c
+++ b/platform/linux-generic/odp_schedule.c
@@ -23,6 +23,8 @@
 #include
 #include

+odp_thrmask_t sched_mask_all;
+
 /* Number of schedule commands.
  * One per scheduled queue and packet interface */
 #define NUM_SCHED_CMD (ODP_CONFIG_QUEUES + ODP_CONFIG_PKTIO_ENTRIES)
@@ -48,6 +50,11 @@ typedef struct {
 	odp_pool_t     pool;
 	odp_shm_t      shm;
 	uint32_t       pri_count[ODP_CONFIG_SCHED_PRIOS][QUEUES_PER_PRIO];
+	odp_spinlock_t grp_lock;
+	struct {
+		char           name[ODP_SCHED_GROUP_NAME_LEN];
+		odp_thrmask_t *mask;
+	} sched_grp[ODP_CONFIG_SCHED_GRPS];
 } sched_t;

 /* Schedule command */
@@ -87,6 +94,9 @@ static sched_t *sched;
 /* Thread local scheduler context */
 static __thread sched_local_t sched_local;

+/* Internal routine to get scheduler thread mask addrs */
+odp_thrmask_t *thread_sched_grp_mask(int index);
+
 static void sched_local_init(void)
 {
 	int i;
@@ -163,6 +173,15 @@ int odp_schedule_init_global(void)
 		}
 	}

+	odp_spinlock_init(&sched->grp_lock);
+
+	for (i = 0; i < ODP_CONFIG_SCHED_GRPS; i++) {
+		memset(&sched->sched_grp[i].name, 0, ODP_SCHED_GROUP_NAME_LEN);
+		sched->sched_grp[i].mask = thread_sched_grp_mask(i);
+	}
+
+	odp_thrmask_setall(&sched_mask_all);
+
 	ODP_DBG("done\n");

 	return 0;
@@ -466,6 +485,18 @@ static int schedule(odp_queue_t *out_queue, odp_event_t out_ev[],
 			}

 			qe = sched_cmd->qe;
+			if (qe->s.param.sched.group > ODP_SCHED_GROUP_ALL &&
+			    !odp_thrmask_isset(sched->sched_grp
+					       [qe->s.param.sched.group].mask,
+					       thr)) {
+				/* This thread is not eligible for work from
+				 * this queue, so continue scheduling it.
+				 */
+				if (odp_queue_enq(pri_q, ev))
+					ODP_ABORT("schedule failed\n");
+				continue;
+			}
+
 			num = queue_deq_multi(qe, sched_local.buf_hdr, max_deq);

 			if (num < 0) {
@@ -587,3 +618,130 @@ int odp_schedule_num_prio(void)
 {
 	return ODP_CONFIG_SCHED_PRIOS;
 }
+
+odp_schedule_group_t odp_schedule_group_create(const char *name,
+					       const odp_thrmask_t *mask)
+{
+	odp_schedule_group_t group = ODP_SCHED_GROUP_INVALID;
+	int i;
+
+	odp_spinlock_lock(&sched->grp_lock);
+
+	for (i = ODP_SCHED_GROUP_NAMED; i < ODP_CONFIG_SCHED_GRPS; i++) {
+		if (sched->sched_grp[i].name[0] == 0) {
+			strncpy(sched->sched_grp[i].name, name,
+				ODP_SCHED_GROUP_NAME_LEN - 1);
+			sched->sched_grp[i].name[ODP_SCHED_GROUP_NAME_LEN - 1]
+				= 0;
+			odp_thrmask_copy(sched->sched_grp[i].mask, mask);
+			group = (odp_schedule_group_t)i;
+			break;
+		}
+	}
+
+	odp_spinlock_unlock(&sched->grp_lock);
+	return group;
+}
+
+int odp_schedule_group_destroy(odp_schedule_group_t group)
+{
+	int ret;
+
+	odp_spinlock_lock(&sched->grp_lock);
+
+	if (group < ODP_CONFIG_SCHED_GRPS &&
+	    group >= ODP_SCHED_GROUP_NAMED &&
+	    sched->sched_grp[group].name[0] != 0) {
+		odp_thrmask_zero(sched->sched_grp[group].mask);
+		memset(&sched->sched_grp[group].name, 0,
+		       ODP_SCHED_GROUP_NAME_LEN);
+		ret = 0;
+	} else {
+		ret = -1;
+	}
+
+	odp_spinlock_unlock(&sched->grp_lock);
+	return ret;
+}
+
+odp_schedule_group_t odp_schedule_group_lookup(const char *name)
+{
+	odp_schedule_group_t group = ODP_SCHED_GROUP_INVALID;
+	int i;
+
+	odp_spinlock_lock(&sched->grp_lock);
+
+	for (i = ODP_SCHED_GROUP_NAMED; i < ODP_CONFIG_SCHED_GRPS; i++) {
+		if (strcmp(name, sched->sched_grp[i].name) == 0) {
+			group = (odp_schedule_group_t)i;
+			break;
+		}
+	}
+
+	odp_spinlock_unlock(&sched->grp_lock);
+	return group;
+}
+
+int odp_schedule_group_join(odp_schedule_group_t group,
+			    const odp_thrmask_t *mask)
+{
+	int ret;
+
+	odp_spinlock_lock(&sched->grp_lock);
+
+	if (group < ODP_CONFIG_SCHED_GRPS &&
+	    group >= ODP_SCHED_GROUP_NAMED &&
+	    sched->sched_grp[group].name[0] != 0) {
+		odp_thrmask_or(sched->sched_grp[group].mask,
+			       sched->sched_grp[group].mask,
+			       mask);
+		ret = 0;
+	} else {
+		ret = -1;
+	}
+
+	odp_spinlock_unlock(&sched->grp_lock);
+	return ret;
+}
+
+int odp_schedule_group_leave(odp_schedule_group_t group,
+			     const odp_thrmask_t *mask)
+{
+	int ret;
+
+	odp_spinlock_lock(&sched->grp_lock);
+
+	if (group < ODP_CONFIG_SCHED_GRPS &&
+	    group >= ODP_SCHED_GROUP_NAMED &&
+	    sched->sched_grp[group].name[0] != 0) {
+		odp_thrmask_t leavemask;
+
+		odp_thrmask_xor(&leavemask, mask, &sched_mask_all);
+		odp_thrmask_and(sched->sched_grp[group].mask,
+				sched->sched_grp[group].mask,
+				&leavemask);
+		ret = 0;
+	} else {
+		ret = -1;
+	}
+
+	odp_spinlock_unlock(&sched->grp_lock);
+	return ret;
+}
+
+int odp_schedule_group_count(odp_schedule_group_t group)
+{
+	int ret;
+
+	odp_spinlock_lock(&sched->grp_lock);
+
+	if (group < ODP_CONFIG_SCHED_GRPS &&
+	    group >= ODP_SCHED_GROUP_NAMED &&
+	    sched->sched_grp[group].name[0] != 0)
+		ret = odp_thrmask_count(sched->sched_grp[group].mask);
+	else
+		ret = -1;
+
+	odp_spinlock_unlock(&sched->grp_lock);
+	return ret;
+}
diff --git a/platform/linux-generic/odp_thread.c b/platform/linux-generic/odp_thread.c
index 9905c78..770c64e 100644
--- a/platform/linux-generic/odp_thread.c
+++ b/platform/linux-generic/odp_thread.c
@@ -32,9 +32,15 @@ typedef struct {

 typedef struct {
 	thread_state_t thr[ODP_CONFIG_MAX_THREADS];
-	odp_thrmask_t  all;
-	odp_thrmask_t  worker;
-	odp_thrmask_t  control;
+	union {
+		/* struct order must be kept in sync with schedule_types.h */
+		struct {
+			odp_thrmask_t  all;
+			odp_thrmask_t  worker;
+			odp_thrmask_t  control;
+		};
+		odp_thrmask_t sched_grp_mask[ODP_CONFIG_SCHED_GRPS];
+	};
 	uint32_t       num;
 	uint32_t       num_worker;
 	uint32_t       num_control;
@@ -53,6 +59,7 @@ static __thread thread_state_t *this_thread;
 int odp_thread_init_global(void)
 {
 	odp_shm_t shm;
+	int i;

 	shm = odp_shm_reserve("odp_thread_globals",
 			      sizeof(thread_globals_t),
@@ -65,13 +72,19 @@ int odp_thread_init_global(void)

 	memset(thread_globals, 0, sizeof(thread_globals_t));
 	odp_spinlock_init(&thread_globals->lock);
-	odp_thrmask_zero(&thread_globals->all);
-	odp_thrmask_zero(&thread_globals->worker);
-	odp_thrmask_zero(&thread_globals->control);
+
+	for (i = 0; i < ODP_CONFIG_SCHED_GRPS; i++)
+		odp_thrmask_zero(&thread_globals->sched_grp_mask[i]);

 	return 0;
 }

+odp_thrmask_t *thread_sched_grp_mask(int index);
+odp_thrmask_t *thread_sched_grp_mask(int index)
+{
+	return &thread_globals->sched_grp_mask[index];
+}
+
 int odp_thread_term_global(void)
 {
 	int ret;
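
As a usage sketch (not part of the patch itself): an application could drive
the new group API roughly as below. The names "rx-workers" and "grp_queue"
are illustrative, error handling is minimal, and the thrmask/queue calls are
assumed from the ODP API of this era (three-argument odp_queue_create).

#include <string.h>
#include <odp.h>

/* Sketch: create a named scheduler group holding only the calling
 * thread, then create a scheduled queue whose events are dispatched
 * only to members of that group. */
static odp_queue_t make_group_queue(odp_schedule_group_t *grp_out)
{
	odp_thrmask_t mask;
	odp_schedule_group_t grp;
	odp_queue_param_t qparam;

	/* Group initially contains just this thread */
	odp_thrmask_zero(&mask);
	odp_thrmask_set(&mask, odp_thread_id());

	/* Slots from ODP_SCHED_GROUP_NAMED upward are allocated
	 * first-free under grp_lock */
	grp = odp_schedule_group_create("rx-workers", &mask);
	if (grp == ODP_SCHED_GROUP_INVALID)
		return ODP_QUEUE_INVALID;

	/* schedule() requeues and skips any event whose queue's group
	 * mask does not include the current thread */
	memset(&qparam, 0, sizeof(qparam));
	qparam.sched.prio  = ODP_SCHED_PRIO_DEFAULT;
	qparam.sched.sync  = ODP_SCHED_SYNC_ATOMIC;
	qparam.sched.group = grp;

	*grp_out = grp;
	return odp_queue_create("grp_queue", ODP_QUEUE_TYPE_SCHED, &qparam);
}

/* A worker thread can opt in to the group at runtime */
static int worker_join(odp_schedule_group_t grp)
{
	odp_thrmask_t me;

	odp_thrmask_zero(&me);
	odp_thrmask_set(&me, odp_thread_id());
	return odp_schedule_group_join(grp, &me);
}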
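
Leaving works as mask subtraction: since sched_mask_all has every bit set,
XORing the supplied mask against it yields its complement, which is then
ANDed into the group mask, so threads not named in the mask are untouched.
A teardown sketch under the same assumptions as above:

/* Sketch: remove the calling thread from a group and destroy the
 * group once no members remain. */
static void worker_leave_and_cleanup(odp_schedule_group_t grp)
{
	odp_thrmask_t me;

	odp_thrmask_zero(&me);
	odp_thrmask_set(&me, odp_thread_id());

	/* Equivalent to: group_mask &= ~me */
	odp_schedule_group_leave(grp, &me);

	if (odp_schedule_group_count(grp) == 0)
		odp_schedule_group_destroy(grp);
}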