From patchwork Tue Aug 21 16:00:10 2018
X-Patchwork-Submitter: Github ODP bot
X-Patchwork-Id: 144741
Delivered-To: lng-odp@lists.linaro.org
From: Github ODP bot
To: lng-odp@lists.linaro.org
Date: Tue, 21 Aug 2018 16:00:10 +0000
Message-Id: <1534867210-18272-2-git-send-email-odpbot@yandex.ru>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1534867210-18272-1-git-send-email-odpbot@yandex.ru>
References: <1534867210-18272-1-git-send-email-odpbot@yandex.ru>
Github-pr-num: 665
Subject: [lng-odp] [PATCH API-NEXT v2 1/1] api: schedule: add scheduler flow aware mode
List-Id: "The OpenDataPlane (ODP) List"
Errors-To: lng-odp-bounces@lists.linaro.org
Sender: "lng-odp"

From: Balasubramanian Manoharan

ODP scheduler configuration to support flow aware mode

Signed-off-by: Balasubramanian Manoharan
---
/** Email created from pull request 665 (bala-manoharan:sched_flow_aware)
 ** https://github.com/Linaro/odp/pull/665
 ** Patch: https://github.com/Linaro/odp/pull/665.patch
 ** Base sha: d42ef2c5a01d29a9877e0b487694f66e40c43624
 ** Merge commit sha: bd567b03bbf050c03e9d0899bed4fcc23d1e6157
 **/
 include/odp/api/spec/event.h          |  39 ++++++++
 include/odp/api/spec/schedule.h       | 126 ++++++++++++++++++++++++++
 include/odp/api/spec/schedule_types.h |   6 ++
 3 files changed, 171 insertions(+)

diff --git a/include/odp/api/spec/event.h b/include/odp/api/spec/event.h
index d9f7ab73d..3254bd9a4 100644
--- a/include/odp/api/spec/event.h
+++ b/include/odp/api/spec/event.h
@@ -209,6 +209,45 @@ void odp_event_free_multi(const odp_event_t event[], int num);
  */
 void odp_event_free_sp(const odp_event_t event[], int num);
 
+/**
+ * Event flow hash value
+ *
+ * Returns the flow hash value set in the event.
+ * The flow hash value set in the event is used by the scheduler to
+ * distribute the event across different flows.
+ *
+ * @param event Event handle
+ *
+ * @return Hash value
+ *
+ * @note The hash algorithm and the header fields defining the flow
+ * (and therefore used for hashing) are platform dependent. The initial
+ * event flow hash generated by the HW is the same as the flow hash
+ * generated for packets.
+ *
+ * @note The returned hash is either the platform generated value (if any),
+ * or the value set with odp_event_flow_hash_set() if it has been called.
+ */
+uint32_t odp_event_flow_hash(odp_event_t event);
+
+/**
+ * Set event flow hash value
+ *
+ * Stores the event flow hash for the event and sets the flow hash flag.
+ * When the scheduler is configured as flow aware, schedule queue
+ * synchronization will be based on flows within each queue.
+ * When the scheduler is configured as flow unaware, the event flow hash is
+ * ignored by the implementation.
+ * The flow hash value must not be greater than the maximum flow count
+ * supported by the implementation.
+ *
+ * @param event     Event handle
+ * @param flow_hash Hash value to set
+ *
+ * @note When the scheduler is configured as flow unaware, overwriting the
+ * platform provided value does not change how the platform handles the
+ * packet afterwards.
+ */
+void odp_event_flow_hash_set(odp_event_t event, uint32_t flow_hash);
+
 /**
  * @}
  */
diff --git a/include/odp/api/spec/schedule.h b/include/odp/api/spec/schedule.h
index bbc749836..af55250b2 100644
--- a/include/odp/api/spec/schedule.h
+++ b/include/odp/api/spec/schedule.h
@@ -24,6 +24,7 @@ extern "C" {
 #include
 #include
 #include
+#include
 
 /** @defgroup odp_scheduler ODP SCHEDULER
  * Operations on the scheduler.
@@ -45,6 +46,95 @@ extern "C" {
  * Maximum schedule group name length in chars including null char
  */
 
+/**
+ * Schedule configuration options
+ */
+typedef struct odp_schedule_config_t {
+
+	/** Number of flows per queue to be supported. This value is valid
+	 * only when the scheduler is configured in flow aware mode.
+	 *
+	 * Flows are lightweight entities and packets can be assigned to
+	 * specific flows by the application using odp_event_flow_hash_set()
+	 * before enqueuing the packet into the scheduler.
+	 * Depending on the implementation, this number might be rounded off
+	 * to the nearest supported value (e.g. a power of 2).
+	 * This number must not exceed the maximum flow count supported by
+	 * the implementation.
+	 * @see odp_schedule_capability_t
+	 */
+	uint32_t flow_count;
+
+	/** Maximum number of schedule queues to be supported
+	 * The application configures the maximum number of schedule queues
+	 * to be supported by the implementation.
+	 * @see odp_queue_capability_t
+	 */
+	uint32_t queue_count;
+
+	/** Maximum number of events required to be stored simultaneously in
+	 * a schedule queue. This number must not exceed the
+	 * 'max_queue_size' supported by the implementation.
+	 */
+	uint32_t queue_size;
+} odp_schedule_config_t;
+
+/**
+ * Schedule capabilities
+ */
+typedef struct odp_schedule_capability_t {
+
+	/** Maximum supported flows per queue
+	 * Specifies the maximum number of flows per queue supported by the
+	 * implementation. A value of 0 indicates that flow aware mode is
+	 * not supported.
+	 */
+	uint32_t max_flow_count;
+
+	/** Maximum supported queues
+	 * Specifies the maximum number of queues supported by the
+	 * implementation.
+	 */
+	uint32_t max_queue_count;
+
+	/** Maximum number of events a schedule queue can store
+	 * simultaneously.
+	 * A value of 0 indicates that the implementation does not restrict
+	 * the queue size.
+	 */
+	uint32_t max_queue_size;
+} odp_schedule_capability_t;
+
+/**
+ * Start scheduler operation
+ *
+ * Activates the scheduler module to schedule packets across different
+ * schedule queues. The scheduler module must be started before creating
+ * any ODP queues. The scheduler module can be stopped using
+ * odp_schedule_stop().
+ *
+ * The initialization sequence should be:
+ * odp_schedule_capability()
+ * odp_schedule_config_init()
+ * odp_schedule_config()
+ * odp_schedule_start()
+ * odp_schedule()
+ *
+ * @retval 0 on success
+ * @retval <0 on failure
+ *
+ * @see odp_schedule_stop()
+ */
+int odp_schedule_start(void);
+
+/**
+ * Stop scheduler operations
+ *
+ * Stops the scheduler module. The application must make sure that there
+ * are no further events in the scheduler before calling
+ * odp_schedule_stop().
+ *
+ * @retval 0 on success
+ * @retval <0 on failure
+ */
+int odp_schedule_stop(void);
+
 /**
  * Schedule wait time
  *
@@ -187,6 +277,42 @@ void odp_schedule_prefetch(int num);
  */
 int odp_schedule_num_prio(void);
 
+/**
+ * Initialize schedule configuration options
+ *
+ * Initialize an odp_schedule_config_t to its default values.
+ *
+ * @param[out] config  Pointer to schedule configuration structure
+ */
+void odp_schedule_config_init(odp_schedule_config_t *config);
+
+/**
+ * Global schedule configuration
+ *
+ * Initialize and configure the scheduler with global configuration options.
+ *
+ * @param config  Pointer to scheduler configuration structure
+ *
+ * @retval 0 on success
+ * @retval <0 on failure
+ *
+ * @see odp_schedule_capability(), odp_schedule_config_init()
+ */
+int odp_schedule_config(const odp_schedule_config_t *config);
+
+/**
+ * Query scheduler capabilities
+ *
+ * Outputs schedule capabilities on success.
+ *
+ * @param[out] capa  Pointer to capability structure for output
+ *
+ * @retval 0 on success
+ * @retval <0 on failure
+ */
+int odp_schedule_capability(odp_schedule_capability_t *capa);
+
 /**
  * Schedule group create
  *
diff --git a/include/odp/api/spec/schedule_types.h b/include/odp/api/spec/schedule_types.h
index 44eb663a2..7f2179e9e 100644
--- a/include/odp/api/spec/schedule_types.h
+++ b/include/odp/api/spec/schedule_types.h
@@ -79,6 +79,9 @@ extern "C" {
  * requests another event from the scheduler, which implicitly releases the
  * context. User may allow the scheduler to release the context earlier than
  * that by calling odp_schedule_release_atomic().
+ * When the scheduler is enabled as flow aware, the event flow hash value
+ * affects scheduling of the event, and synchronization is maintained per
+ * flow within each queue.
  */
 
 /**
@@ -105,6 +108,9 @@ extern "C" {
  * (e.g. freed or stored) within the context are considered missing from
  * reordering and are skipped at this time (but can be ordered again within
  * another context).
+ * When the scheduler is enabled as flow aware, the event flow hash value
+ * affects scheduling of the event, and synchronization is maintained per
+ * flow within each queue.
  */
 
 /**