From patchwork Thu Mar 26 08:22:14 2015
X-Patchwork-Submitter: "Savolainen, Petri (Nokia - FI/Espoo)"
X-Patchwork-Id: 46343
From: "Savolainen, Petri (Nokia - FI/Espoo)" <petri.savolainen@nokia.com>
To: ext Mike Holmes
Cc: lng-odp
Date: Thu, 26 Mar 2015 08:22:14 +0000
Subject: Re: [lng-odp] [PATCH 1/4] linux-generic: scheduler: restructured queue and pktio integration
References: <1427121729-25786-1-git-send-email-petri.savolainen@nokia.com> <1427121729-25786-2-git-send-email-petri.savolainen@nokia.com>

From: ext Mike Holmes [mailto:mike.holmes@linaro.org]
Sent: Thursday, March 26, 2015 12:00 AM
To: Savolainen, Petri (Nokia - FI/Espoo)
Cc: lng-odp
Subject: Re: [lng-odp] [PATCH 1/4] linux-generic: scheduler: restructured queue and pktio integration

On 23 March 2015 at 10:42, Petri Savolainen <petri.savolainen@nokia.com> wrote:

Scheduler runs by polling scheduler priority queues for schedule commands.
There are two types of scheduler commands: queue dequeue and packet input
poll. Packet input is polled directly when a poll command is received.
From the scheduler's point of view, the default packet input queue is like
any other queue.

Signed-off-by: Petri Savolainen <petri.savolainen@nokia.com>
---
 .../linux-generic/include/odp_packet_io_internal.h |  17 +-
 .../linux-generic/include/odp_queue_internal.h     |  34 +--
 .../linux-generic/include/odp_schedule_internal.h  |  14 +-
 platform/linux-generic/odp_packet_io.c             |  78 ++++--
 platform/linux-generic/odp_queue.c                 | 193 ++++++--------
 platform/linux-generic/odp_schedule.c              | 277 ++++++++++++++-------
 6 files changed, 369 insertions(+), 244 deletions(-)

-Petri

diff --git a/platform/linux-generic/include/odp_packet_io_internal.h b/platform/linux-generic/include/odp_packet_io_internal.h
index 47b8992..161be16 100644
--- a/platform/linux-generic/include/odp_packet_io_internal.h
+++ b/platform/linux-generic/include/odp_packet_io_internal.h
@@ -40,6 +40,8 @@ typedef enum {
 struct pktio_entry {
     odp_spinlock_t lock;        /**< entry spinlock */
     int taken;                  /**< is entry taken(1) or free(0) */
+    int cls_ena;                /**< is classifier enabled */

cls_ena is not very descriptive; clsfy_enable is better but still ugly.

struct pktio_entry {
    odp_spinlock_t lock;            /**< entry spinlock */
    int taken;                      /**< is entry taken(1) or free(0) */
    odp_queue_t inq_default;        /**< default input queue, if set */
    odp_queue_t outq_default;       /**< default out queue */
    odp_queue_t loopq;              /**< loopback queue for "loop" device */
    odp_pktio_type_t type;          /**< pktio type */
    pkt_sock_t pkt_sock;            /**< using socket API for IO */
    pkt_sock_mmap_t pkt_sock_mmap;  /**< using socket mmap API for IO */
    classifier_t cls;               /**< classifier linked with this pktio*/
    char name[IFNAMSIZ];            /**< name of pktio provided to pktio_open() */
    odp_bool_t promisc;             /**< promiscuous mode state */
};

It does not show in the diff, but the classifier is referenced as "cls" in the struct already.

+    odp_pktio_t handle;         /**< pktio handle */
     odp_queue_t inq_default;    /**< default input queue, if set */
     odp_queue_t outq_default;   /**< default out queue */
     odp_queue_t loopq;          /**< loopback queue for "loop" device */
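To make the command-driven design in the commit message concrete, here is a minimal stand-alone sketch; the names and types below are illustrative only and are not the actual odp_schedule.c internals:

#include <stdio.h>

/* Illustrative sketch only: one schedule command per scheduled queue and
 * one per packet interface, dispatched by command type, as described in
 * the commit message. */
typedef enum {
    SCHED_CMD_QUEUE,  /* dequeue events from a scheduled queue */
    SCHED_CMD_PKTIN   /* poll a packet interface for input */
} sched_cmd_type_t;

typedef struct {
    sched_cmd_type_t type;
    int index;        /* queue index or pktio index, depending on type */
} sched_cmd_t;

static void dispatch_cmd(const sched_cmd_t *cmd)
{
    if (cmd->type == SCHED_CMD_QUEUE)
        printf("dequeue events from queue %d\n", cmd->index);
    else
        printf("poll packet input %d\n", cmd->index);
}

int main(void)
{
    sched_cmd_t cmds[2] = { {SCHED_CMD_QUEUE, 3}, {SCHED_CMD_PKTIN, 0} };
    int i;

    for (i = 0; i < 2; i++)
        dispatch_cmd(&cmds[i]);

    return 0;
}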
@@ -64,15 +66,22 @@ typedef struct {

 extern void *pktio_entry_ptr[];

+static inline int pktio_to_id(odp_pktio_t pktio)
+{
+    return _odp_typeval(pktio) - 1;
+}

-static inline pktio_entry_t *get_pktio_entry(odp_pktio_t id)
+static inline pktio_entry_t *get_pktio_entry(odp_pktio_t pktio)
 {
-    if (odp_unlikely(id == ODP_PKTIO_INVALID ||
-                     _odp_typeval(id) > ODP_CONFIG_PKTIO_ENTRIES))
+    if (odp_unlikely(pktio == ODP_PKTIO_INVALID ||
+                     _odp_typeval(pktio) > ODP_CONFIG_PKTIO_ENTRIES))
         return NULL;

-    return pktio_entry_ptr[_odp_typeval(id) - 1];
+    return pktio_entry_ptr[pktio_to_id(pktio)];
 }

+
+int pktin_poll(pktio_entry_t *entry);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/platform/linux-generic/include/odp_queue_internal.h b/platform/linux-generic/include/odp_queue_internal.h
index 65aae14..61d0c43 100644
--- a/platform/linux-generic/include/odp_queue_internal.h
+++ b/platform/linux-generic/include/odp_queue_internal.h
@@ -36,10 +36,11 @@ extern "C" {
 #define QUEUE_MULTI_MAX 8

 #define QUEUE_STATUS_FREE         0
-#define QUEUE_STATUS_READY        1
-#define QUEUE_STATUS_NOTSCHED     2
-#define QUEUE_STATUS_SCHED        3
-#define QUEUE_STATUS_DESTROYED    4
+#define QUEUE_STATUS_DESTROYED    1
+#define QUEUE_STATUS_READY        2
+#define QUEUE_STATUS_NOTSCHED     3
+#define QUEUE_STATUS_SCHED        4
+

 /* forward declaration */
 union queue_entry_u;
@@ -69,7 +70,8 @@ struct queue_entry_s {
     deq_multi_func_t dequeue_multi;

     odp_queue_t       handle;
-    odp_buffer_t      sched_buf;
+    odp_queue_t       pri_queue;
+    odp_event_t       cmd_ev;
     odp_queue_type_t  type;
     odp_queue_param_t param;
     odp_pktio_t       pktin;
@@ -100,7 +102,6 @@ int queue_deq_multi_destroy(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[],
 void queue_lock(queue_entry_t *queue);
 void queue_unlock(queue_entry_t *queue);

-odp_buffer_t queue_sched_buf(odp_queue_t queue);
 int queue_sched_atomic(odp_queue_t handle);

 static inline uint32_t queue_to_id(odp_queue_t handle)
@@ -121,24 +122,23 @@ static inline queue_entry_t *queue_to_qentry(odp_queue_t handle)
     return get_qentry(queue_id);
 }

-static inline int queue_is_free(odp_queue_t handle)
+static inline int queue_is_atomic(queue_entry_t *qe)
 {
-    queue_entry_t *queue;
-
-    queue = queue_to_qentry(handle);
+    return qe->s.param.sched.sync == ODP_SCHED_SYNC_ATOMIC;
+}

-    return queue->s.status == QUEUE_STATUS_FREE;
+static inline odp_queue_t queue_handle(queue_entry_t *qe)
+{
+    return qe->s.handle;
 }

-static inline int queue_is_sched(odp_queue_t handle)
+static inline int queue_prio(queue_entry_t *qe)
 {
-    queue_entry_t *queue;
+    return qe->s.param.sched.prio;
+}

-    queue = queue_to_qentry(handle);
+void queue_destroy_finalize(queue_entry_t *qe);

-    return ((queue->s.status == QUEUE_STATUS_SCHED) &&
-            (queue->s.pktin != ODP_PKTIO_INVALID));
-}
 #ifdef __cplusplus
 }
 #endif
diff --git a/platform/linux-generic/include/odp_schedule_internal.h b/platform/linux-generic/include/odp_schedule_internal.h
index acda2e4..904bfbd 100644
--- a/platform/linux-generic/include/odp_schedule_internal.h
+++ b/platform/linux-generic/include/odp_schedule_internal.h
@@ -16,12 +16,20 @@ extern "C" {

 #include
 #include
+#include
+#include

-void odp_schedule_mask_set(odp_queue_t queue, int prio);

-odp_buffer_t odp_schedule_buffer_alloc(odp_queue_t queue);
+int schedule_queue_init(queue_entry_t *qe);
+void schedule_queue_destroy(queue_entry_t *qe);

-void odp_schedule_queue(odp_queue_t queue, int prio);
+static inline void schedule_queue(const queue_entry_t *qe)
+{
+    odp_queue_enq(qe->s.pri_queue, qe->s.cmd_ev);
+}
+
+
+int schedule_pktio_start(odp_pktio_t pktio, int prio);

 #ifdef __cplusplus
diff --git a/platform/linux-generic/odp_packet_io.c b/platform/linux-generic/odp_packet_io.c
index 21f0c17..4ab45c0 100644
--- a/platform/linux-generic/odp_packet_io.c
+++ b/platform/linux-generic/odp_packet_io.c
@@ -142,6 +142,7 @@ static void unlock_entry_classifier(pktio_entry_t *entry)
 static void init_pktio_entry(pktio_entry_t *entry)
 {
     set_taken(entry);
+    entry->s.cls_ena = 1; /* TODO: disable cls by default */

This needs a bug link; a TODO that makes it into the repo is a known deficiency in the code for that published revision of the code.

This TODO highlights a point of new development. It's a feature, not a bug: the classifier is now enabled by default, and we should enable it only when actually used. Both ways work, but the latter is cleaner. I'll modify the comment.

     entry->s.inq_default = ODP_QUEUE_INVALID;
     memset(&entry->s.pkt_sock, 0, sizeof(entry->s.pkt_sock));
     memset(&entry->s.pkt_sock_mmap, 0, sizeof(entry->s.pkt_sock_mmap));
@@ -273,6 +274,8 @@ static odp_pktio_t setup_pktio_entry(const char *dev, odp_pool_t pool)
         unlock_entry_classifier(pktio_entry);
     }

+    pktio_entry->s.handle = id;
+
     return id;
 }

@@ -475,19 +478,27 @@ int odp_pktio_inq_setdef(odp_pktio_t id, odp_queue_t queue)

     qentry = queue_to_qentry(queue);

-    if (qentry->s.type != ODP_QUEUE_TYPE_PKTIN)
-        return -1;
-
     lock_entry(pktio_entry);
     pktio_entry->s.inq_default = queue;
     unlock_entry(pktio_entry);

-    queue_lock(qentry);
-    qentry->s.pktin = id;
-    qentry->s.status = QUEUE_STATUS_SCHED;
-    queue_unlock(qentry);
-
-    odp_schedule_queue(queue, qentry->s.param.sched.prio);
+    switch (qentry->s.type) {
+    case ODP_QUEUE_TYPE_PKTIN:
+        /* User polls the input queue */
+        queue_lock(qentry);
+        qentry->s.pktin = id;
+        queue_unlock(qentry);
+        /*break; TODO: Uncomment and change _TYPE_PKTIN to _POLL*/

This needs a bug link; a TODO that makes it into the repo is a known deficiency in the code.

It's not a bug. The new development documented here needs an API change: remove the PKTIN type and use the POLL/SCHED types instead. The API change will follow in another patch; this code structure already prepares for that change. Anyway, I'll modify the comment.

+    case ODP_QUEUE_TYPE_SCHED:
+        /* Packet input through the scheduler */
+        if (schedule_pktio_start(id, ODP_SCHED_PRIO_LOWEST)) {
+            ODP_ERR("Schedule pktio start failed\n");
+            return -1;
+        }
+        break;
+    default:
+        ODP_ABORT("Bad queue type\n");

If it is permissible for an API to abort, I would say that is important enough to be described in the API docs as part of the expected and permissible behavior. You would otherwise expect an error return code.

True. I'll put the type check back on top.

+    }

     return 0;
 }
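For reference, putting the type check back on top of odp_pktio_inq_setdef() could look roughly like this (a sketch using the identifiers from this patch, not the final code):

    qentry = queue_to_qentry(queue);

    /* Sketch: reject unsupported queue types before taking any locks,
     * so a bad type returns an error instead of reaching the
     * default: ODP_ABORT() branch of the switch below. */
    if (qentry->s.type != ODP_QUEUE_TYPE_PKTIN &&
        qentry->s.type != ODP_QUEUE_TYPE_SCHED)
        return -1;

    lock_entry(pktio_entry);
    pktio_entry->s.inq_default = queue;
    unlock_entry(pktio_entry);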
@@ -506,15 +517,6 @@ int odp_pktio_inq_remdef(odp_pktio_t id)
     qentry = queue_to_qentry(queue);

     queue_lock(qentry);
-    if (qentry->s.status == QUEUE_STATUS_FREE) {
-        queue_unlock(qentry);
-        unlock_entry(pktio_entry);
-        return -1;
-    }
-
-    qentry->s.enqueue = queue_enq_dummy;
-    qentry->s.enqueue_multi = queue_enq_multi_dummy;
-    qentry->s.status = QUEUE_STATUS_NOTSCHED;
     qentry->s.pktin = ODP_PKTIO_INVALID;
     queue_unlock(qentry);

@@ -665,6 +667,46 @@ int pktin_deq_multi(queue_entry_t *qentry, odp_buffer_hdr_t *buf_hdr[], int num)
     return nbr;
 }

+int pktin_poll(pktio_entry_t *entry)
+{
+    odp_packet_t pkt_tbl[QUEUE_MULTI_MAX];
+    odp_buffer_hdr_t *hdr_tbl[QUEUE_MULTI_MAX];
+    int num, num_enq, i;
+
+    if (odp_unlikely(is_free(entry)))
+        return -1;
+
+    num = odp_pktio_recv(entry->s.handle, pkt_tbl, QUEUE_MULTI_MAX);
+
+    if (num < 0) {
+        ODP_ERR("Packet recv error\n");
+        return -1;
+    }
+
+    for (i = 0, num_enq = 0; i < num; ++i) {
+        odp_buffer_t buf;
+        odp_buffer_hdr_t *hdr;
+
+        buf = _odp_packet_to_buffer(pkt_tbl[i]);
+        hdr = odp_buf_to_hdr(buf);
+
+        if (entry->s.cls_ena) {
+            if (packet_classifier(entry->s.handle, pkt_tbl[i]) < 0)
+                hdr_tbl[num_enq++] = hdr;
+        } else {
+            hdr_tbl[num_enq++] = hdr;
+        }
+    }
+
+    if (num_enq) {
+        queue_entry_t *qentry;
+        qentry = queue_to_qentry(entry->s.inq_default);
+        queue_enq_multi(qentry, hdr_tbl, num_enq);
+    }
+
+    return 0;
+}
+
 /** function should be called with locked entry */
 static int sockfd_from_pktio_entry(pktio_entry_t *entry)
 {
diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c
index 4bb8b9b..4a0465b 100644
--- a/platform/linux-generic/odp_queue.c
+++ b/platform/linux-generic/odp_queue.c
@@ -88,7 +88,9 @@ static void queue_init(queue_entry_t *queue, const char *name,

     queue->s.head = NULL;
     queue->s.tail = NULL;
-    queue->s.sched_buf = ODP_BUFFER_INVALID;
+
+    queue->s.pri_queue = ODP_QUEUE_INVALID;
+    queue->s.cmd_ev    = ODP_EVENT_INVALID;
 }

@@ -222,22 +224,26 @@ odp_queue_t odp_queue_create(const char *name, odp_queue_type_t type,

     if (handle != ODP_QUEUE_INVALID &&
         (type == ODP_QUEUE_TYPE_SCHED || type == ODP_QUEUE_TYPE_PKTIN)) {
-        odp_buffer_t buf;
-
-        buf = odp_schedule_buffer_alloc(handle);
-        if (buf == ODP_BUFFER_INVALID) {
-            queue->s.status = QUEUE_STATUS_FREE;
-            ODP_ERR("queue_init: sched buf alloc failed\n");
+        if (schedule_queue_init(queue)) {
+            ODP_ERR("schedule queue init failed\n");
             return ODP_QUEUE_INVALID;
         }
-
-        queue->s.sched_buf = buf;
-        odp_schedule_mask_set(handle, queue->s.param.sched.prio);
     }

     return handle;
 }

+void queue_destroy_finalize(queue_entry_t *queue)
+{
+    LOCK(&queue->s.lock);
+
+    if (queue->s.status == QUEUE_STATUS_DESTROYED) {
+        queue->s.status = QUEUE_STATUS_FREE;
+        schedule_queue_destroy(queue);
+    }
+    UNLOCK(&queue->s.lock);
+}
+
 int odp_queue_destroy(odp_queue_t handle)
 {
     queue_entry_t *queue;
@@ -246,41 +252,31 @@ int odp_queue_destroy(odp_queue_t handle)
     LOCK(&queue->s.lock);
     if (queue->s.status == QUEUE_STATUS_FREE) {
         UNLOCK(&queue->s.lock);
-        ODP_ERR("queue_destroy: queue \"%s\" already free\n",
-            queue->s.name);
+        ODP_ERR("queue \"%s\" already free\n", queue->s.name);
+        return -1;
+    }
+    if (queue->s.status == QUEUE_STATUS_DESTROYED) {
+        UNLOCK(&queue->s.lock);
+        ODP_ERR("queue \"%s\" already destroyed\n", queue->s.name);
         return -1;
     }
     if (queue->s.head != NULL) {
         UNLOCK(&queue->s.lock);
-        ODP_ERR("queue_destroy: queue \"%s\" not empty\n",
-            queue->s.name);
+        ODP_ERR("queue \"%s\" not empty\n", queue->s.name);
         return -1;
     }

-    queue->s.enqueue = queue_enq_dummy;
-    queue->s.enqueue_multi = queue_enq_multi_dummy;
-
     switch (queue->s.status) {
     case QUEUE_STATUS_READY:
         queue->s.status = QUEUE_STATUS_FREE;
-        queue->s.head = NULL;
-        queue->s.tail = NULL;
+        break;
+    case QUEUE_STATUS_NOTSCHED:
+        queue->s.status = QUEUE_STATUS_FREE;
+        schedule_queue_destroy(queue);
         break;
     case QUEUE_STATUS_SCHED:
-        /*
-         * Override dequeue_multi to destroy queue when it will
-         * be scheduled next time.
-         */
+        /* Queue is still in scheduling */
         queue->s.status = QUEUE_STATUS_DESTROYED;
-        queue->s.dequeue_multi = queue_deq_multi_destroy;
-        break;
-    case QUEUE_STATUS_NOTSCHED:
-        /* Queue won't be scheduled anymore */
-        odp_buffer_free(queue->s.sched_buf);
-        queue->s.sched_buf = ODP_BUFFER_INVALID;
-        queue->s.status = QUEUE_STATUS_FREE;
-        queue->s.head = NULL;
-        queue->s.tail = NULL;
         break;
     default:
         ODP_ABORT("Unexpected queue status\n");

If it is permissible for an API to abort, I would say that is important enough to be described in the API docs as part of the expected and permissible behavior. You would otherwise expect an error return code.

This line was not changed and it's doing the right thing. Unknown queue status == internal error => crash.
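In other words, destroying a queue that the scheduler still knows about is deferred. Roughly, the scheduler side of that handshake looks like this (a sketch of the intended flow, not the literal odp_schedule.c code):

    /* Inside the scheduler's dispatch of a queue command:
     * odp_queue_destroy() has only marked the queue
     * QUEUE_STATUS_DESTROYED, so the dequeue fails here and the
     * scheduler finishes the destroy; queue_destroy_finalize() frees
     * the queue slot and calls schedule_queue_destroy(). */
    num = queue_deq_multi(qe, hdr_tbl, max_deq);

    if (num < 0) {
        /* Destroyed queue: drop its command and finalize the destroy */
        queue_destroy_finalize(qe);
        continue;
    }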
@@ -290,23 +286,6 @@ int odp_queue_destroy(odp_queue_t handle)
     return 0;
 }

-odp_buffer_t queue_sched_buf(odp_queue_t handle)
-{
-    queue_entry_t *queue;
-    queue = queue_to_qentry(handle);
-
-    return queue->s.sched_buf;
-}
-
-
-int queue_sched_atomic(odp_queue_t handle)
-{
-    queue_entry_t *queue;
-    queue = queue_to_qentry(handle);
-
-    return queue->s.param.sched.sync == ODP_SCHED_SYNC_ATOMIC;
-}
-
 int odp_queue_set_context(odp_queue_t handle, void *context)
 {
     queue_entry_t *queue;
@@ -352,6 +331,12 @@ int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr)
     int sched = 0;

     LOCK(&queue->s.lock);
+    if (odp_unlikely(queue->s.status < QUEUE_STATUS_READY)) {
+        UNLOCK(&queue->s.lock);
+        ODP_ERR("Bad queue status\n");
+        return -1;
+    }
+
     if (queue->s.head == NULL) {
         /* Empty queue */
         queue->s.head = buf_hdr;
@@ -370,8 +355,8 @@ int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr)
     UNLOCK(&queue->s.lock);

     /* Add queue to scheduling */
-    if (sched == 1)
-        odp_schedule_queue(queue->s.handle, queue->s.param.sched.prio);
+    if (sched)
+        schedule_queue(queue);

     return 0;
 }
@@ -389,6 +374,12 @@ int queue_enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num)
     buf_hdr[num-1]->next = NULL;

     LOCK(&queue->s.lock);
+    if (odp_unlikely(queue->s.status < QUEUE_STATUS_READY)) {
+        UNLOCK(&queue->s.lock);
+        ODP_ERR("Bad queue status\n");
+        return -1;
+    }
+
     /* Empty queue */
     if (queue->s.head == NULL)
         queue->s.head = buf_hdr[0];
@@ -404,25 +395,12 @@ int queue_enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num)
     UNLOCK(&queue->s.lock);

     /* Add queue to scheduling */
-    if (sched == 1)
-        odp_schedule_queue(queue->s.handle, queue->s.param.sched.prio);
+    if (sched)
+        schedule_queue(queue);

     return num; /* All events enqueued */
 }

-int queue_enq_dummy(queue_entry_t *queue ODP_UNUSED,
-                    odp_buffer_hdr_t *buf_hdr ODP_UNUSED)
-{
-    return -1;
-}
-
-int queue_enq_multi_dummy(queue_entry_t *queue ODP_UNUSED,
-                          odp_buffer_hdr_t *buf_hdr[] ODP_UNUSED,
-                          int num ODP_UNUSED)
-{
-    return -1;
-}
-
 int odp_queue_enq_multi(odp_queue_t handle, const odp_event_t ev[], int num)
 {
     odp_buffer_hdr_t *buf_hdr[QUEUE_MULTI_MAX];
@@ -455,24 +433,26 @@ int odp_queue_enq(odp_queue_t handle, odp_event_t ev)

 odp_buffer_hdr_t *queue_deq(queue_entry_t *queue)
 {
-    odp_buffer_hdr_t *buf_hdr = NULL;
+    odp_buffer_hdr_t *buf_hdr;

     LOCK(&queue->s.lock);

     if (queue->s.head == NULL) {
         /* Already empty queue */
-        if (queue->s.status == QUEUE_STATUS_SCHED &&
-            queue->s.type != ODP_QUEUE_TYPE_PKTIN)
+        if (queue->s.status == QUEUE_STATUS_SCHED)
             queue->s.status = QUEUE_STATUS_NOTSCHED;
-    } else {
-        buf_hdr = queue->s.head;
-        queue->s.head = buf_hdr->next;
-        buf_hdr->next = NULL;
-        if (queue->s.head == NULL) {
-            /* Queue is now empty */
-            queue->s.tail = NULL;
-        }
+
+        UNLOCK(&queue->s.lock);
+        return NULL;
+    }
+
+    buf_hdr = queue->s.head;
+    queue->s.head = buf_hdr->next;
+    buf_hdr->next = NULL;
+
+    if (queue->s.head == NULL) {
+        /* Queue is now empty */
+        queue->s.tail = NULL;
     }

     UNLOCK(&queue->s.lock);
@@ -483,31 +463,39 @@ odp_buffer_hdr_t *queue_deq(queue_entry_t *queue)

 int queue_deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num)
 {
-    int i = 0;
+    odp_buffer_hdr_t *hdr;
+    int i;

     LOCK(&queue->s.lock);
+    if (odp_unlikely(queue->s.status < QUEUE_STATUS_READY)) {
+        /* Bad queue, or queue has been destroyed.
+         * Scheduler finalizes queue destroy after this. */
+        UNLOCK(&queue->s.lock);
+        return -1;
+    }

-    if (queue->s.head == NULL) {
+    hdr = queue->s.head;
+
+    if (hdr == NULL) {
         /* Already empty queue */
-        if (queue->s.status == QUEUE_STATUS_SCHED &&
-            queue->s.type != ODP_QUEUE_TYPE_PKTIN)
+        if (queue->s.status == QUEUE_STATUS_SCHED)
             queue->s.status = QUEUE_STATUS_NOTSCHED;
-    } else {
-        odp_buffer_hdr_t *hdr = queue->s.head;

-        for (; i < num && hdr; i++) {
-            buf_hdr[i] = hdr;
-            /* odp_prefetch(hdr->addr); */
-            hdr = hdr->next;
-            buf_hdr[i]->next = NULL;
-        }
+        UNLOCK(&queue->s.lock);
+        return 0;
+    }

-        queue->s.head = hdr;
+    for (i = 0; i < num && hdr; i++) {
+        buf_hdr[i] = hdr;
+        hdr = hdr->next;
+        buf_hdr[i]->next = NULL;
+    }

-        if (hdr == NULL) {
-            /* Queue is now empty */
-            queue->s.tail = NULL;
-        }
+    queue->s.head = hdr;
+
+    if (hdr == NULL) {
+        /* Queue is now empty */
+        queue->s.tail = NULL;
     }

     UNLOCK(&queue->s.lock);
@@ -515,23 +503,6 @@ int queue_deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num)
     return i;
 }

-int queue_deq_multi_destroy(queue_entry_t *queue,
-                            odp_buffer_hdr_t *buf_hdr[] ODP_UNUSED,
-                            int num ODP_UNUSED)
-{
-    LOCK(&queue->s.lock);
-
-    odp_buffer_free(queue->s.sched_buf);
-    queue->s.sched_buf = ODP_BUFFER_INVALID;
-    queue->s.status = QUEUE_STATUS_FREE;
-    queue->s.head = NULL;
-    queue->s.tail = NULL;
-
-    UNLOCK(&queue->s.lock);
-
-    return 0;
-}
-
 int odp_queue_deq_multi(odp_queue_t handle, odp_event_t events[], int num)
 {
     queue_entry_t *queue;
diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c
index dd65168..59e40c7 100644
--- a/platform/linux-generic/odp_schedule.c
+++ b/platform/linux-generic/odp_schedule.c
@@ -21,17 +21,15 @@
 #include
 #include
+#include
-
-/* Limits to number of scheduled queues */
-#define SCHED_POOL_SIZE (256*1024)
+/* Number of schedule commands.
+ * One per scheduled queue and packet interface */
+#define NUM_SCHED_CMD (ODP_CONFIG_QUEUES + ODP_CONFIG_PKTIO_ENTRIES)

 /* Scheduler sub queues */
 #define QUEUES_PER_PRIO 4

-/* TODO: random or queue based selection */

This needs a bug link; a TODO that makes it into the repo is a known deficiency in the code.

This line was removed.
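The quoted diff ends before the rest of the odp_schedule.c changes, but the packet input path implied by the functions above is roughly the following (an illustrative sketch; the command field names are hypothetical):

    /* schedule_pktio_start(pktio, prio), called from
     * odp_pktio_inq_setdef(), allocates one command event for the
     * interface and enqueues it on a scheduler priority queue; this is
     * why NUM_SCHED_CMD covers both queues and pktio entries.
     *
     * When the scheduler later receives such a command: */
    if (cmd->type == SCHED_CMD_POLL_PKTIN) {        /* hypothetical type */
        if (pktin_poll(cmd->pktio_entry) == 0)      /* hypothetical field */
            odp_queue_enq(pri_queue, cmd_ev);       /* keep polling */
        /* on -1 the pktio entry was freed and the command is dropped */
    }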