This is an automated email from the git hooks/post-receive script. It was generated because a ref change was pushed to the repository containing the project "".
The branch, api-next has been updated
  discards  a110685b8357276cb4a63ebc6ff421f42f461d94 (commit)
  discards  bc86441b2d02dd518e710f1d9e6936525530c1bb (commit)
  discards  cb620003928195980a9c2fcacc7a7d4a04a154e6 (commit)
  discards  61e6fb7b3db84648732fa5b9f828507afc63bf0c (commit)
  discards  6860ee802fbae5e333f0e3ebdab706d9fb5c26b3 (commit)
  discards  ea6a715b872fb6f0b7b19b7931d5cd407af59b56 (commit)
  discards  01d234075533e2b6597761f68a47e874be4ff054 (commit)
  discards  265dbea40a50d83c4d582da2d45d866e11790aed (commit)
  discards  8e2b5974f33e9359021cd3209ffd8d297f66c9f5 (commit)
  discards  c3e88f49b0ccf03d779af79acefb8e5bc4cce5e0 (commit)
  discards  3b1247abef8b5b999b826168ce5b3e2b8e8eb215 (commit)
  discards  d151d349843dd6fad429927c52cc45d955d9fa0a (commit)
  discards  17efd13220d2f3e042e1a712e959545c56bbb56a (commit)
  discards  8a9dd8144f7358786cfecee09a1fc4716ff6315f (commit)
  discards  9b1647c37cb9b8686d0e794becd94374464678a2 (commit)
  discards  8616eb0a8106fb132a970e7927dfaf4cce09e0de (commit)
  discards  d3ff5e3a9142244503fb780ecbb3b9c122d14ff6 (commit)
  discards  da730188aa6fb9f58145a697ad002ecfd102f898 (commit)
       via  92c8ebd0b9ec85a71e3993c4864d48bcbb30a012 (commit)
       via  3e2e07762422a75298cab27fbab64fb4a6f9383c (commit)
       via  5235ac6934f2a39e91d30971003739c6e3620224 (commit)
       via  886995cfbf5faa9ad269fdfccae42a889035c001 (commit)
       via  809a6e3eaba3a510337f9a5ce4dc146074e39bd8 (commit)
       via  a43d82864faeed378972b95087f470c4a5f076c2 (commit)
       via  a73130efd9e89dae4e67baab64b78b4b0f261668 (commit)
       via  dbd20ce8581fe5a36d61eb48071b47a89f808fdb (commit)
       via  06851a74104465905cff0d71f852f46d91af224c (commit)
       via  fe8658c85b721c17a1d998cc0df9106d4e9a4ce7 (commit)
       via  9bc0a0598323f5f655eeb65544ecdc74ab8150c2 (commit)
       via  9f8d08163075eab9408de99d7da2165753f802e9 (commit)
       via  602df05c79ea8126e679513ca9523222c7946a19 (commit)
       via  1d8b95b6d776a7f8681ef400a062a67d4d37de56 (commit)
       via  09d8048fc8bff31797f9359db9f43da75fd15c3f (commit)
       via  c7d5d4005f333f3f125e0582aac7cf2423112ac4 (commit)
       via  9b945554c0a522030de185fe5e2e0724427c8223 (commit)
       via  ec5066a3430e31a87727ac4aea5793253e5ee843 (commit)
       via  487e6bd608a78527809ac7b88f0d3d3ec94cd707 (commit)
       via  fd5939c3ae2a2a38c0a1f87428a787ee7ae00789 (commit)
       via  94e47dc62e340818b91c471788c29af3ba167d96 (commit)
       via  92e59d9e816a99db318ba24dcb12cb55f2e7392d (commit)
       via  b1812f17ae652f11ce21f26fd24c8fd27818339b (commit)
       via  23102db002f522cc90d1b616e2725d21e525b1fc (commit)
       via  049d80427d0145a3c1738d28ba595717ae43d5c2 (commit)
       via  46d507adef3902a26b7e311506437211e7417a10 (commit)
       via  e7ad8003e34195a3900e1dd3d3a93235896d7628 (commit)
       via  294856cc30d48d57e12485076bae49da36d346ed (commit)
       via  6b79ac4b1640e8050b076ba0ecb590cc297320b0 (commit)
       via  3cb35813da911a94eef6e07ae71ce0f5f325ebd8 (commit)
       via  3aad0e2ce0e5901fd49e50e26ac7d762c2b9a6aa (commit)
       via  98eb7327113fbd33a8e5448406e8f47d8d0ad5fb (commit)
       via  d64232f45abae8d4f1222313ce44532cc26e2336 (commit)
       via  536cce998e84a559e125b4741d00f2a760a0d575 (commit)
       via  013cdab099659623af0d75ff5fd0b606a9c2ce6a (commit)
       via  33f6c963c4c43b6ed32ac2f9282b560f6016b682 (commit)
       via  b498032d6f1388cf87f415367780a2dc54342d85 (commit)
       via  ee833c56e09b95d8c11217e8a3f614470833f2d5 (commit)
This update added new revisions after undoing existing revisions. That is to say, the old revision is not a strict subset of the new revision. This situation occurs when you --force push a change and generate a repository containing something like this:
 * -- * -- B -- O -- O -- O (a110685b8357276cb4a63ebc6ff421f42f461d94)
            \
             N -- N -- N (92c8ebd0b9ec85a71e3993c4864d48bcbb30a012)
When this happens we assume that you've already had alert emails for all of the O revisions, and so we here report only the revisions in the N branch from the common base, B.
Those revisions listed above that are new to this repository have not appeared on any other notification email; so we list those revisions in full, below.
- Log -----------------------------------------------------------------
commit 92c8ebd0b9ec85a71e3993c4864d48bcbb30a012
Author: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Date:   Fri Nov 23 03:24:39 2018 +0300
linux-gen: event: support flow-awareness API
Signed-off-by: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Reviewed-by: Bill Fischofer <bill.fischofer@linaro.org>
Reviewed-by: Petri Savolainen <petri.savolainen@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/platform/linux-generic/odp_event.c b/platform/linux-generic/odp_event.c
index bb378528..bdde93e1 100644
--- a/platform/linux-generic/odp_event.c
+++ b/platform/linux-generic/odp_event.c
@@ -59,6 +59,19 @@ int odp_event_type_multi(const odp_event_t event[], int num,
 	return i;
 }
 
+/* For now ODP generic does not support flow awareness,
+ * so all flow ids are zero. */
+uint32_t odp_event_flow_id(odp_event_t event ODP_UNUSED)
+{
+	return 0;
+}
+
+void odp_event_flow_id_set(odp_event_t event ODP_UNUSED,
+			   uint32_t flow_id ODP_UNUSED)
+{
+	/* Do nothing */
+}
+
 void odp_event_free(odp_event_t event)
 {
 	switch (odp_event_type(event)) {
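The two new calls are exercised from application code roughly as follows. This is a minimal sketch, assuming a scheduled queue and an event obtained elsewhere; the helper names are illustrative only, and on linux-generic the getter currently always returns 0.

#include <odp_api.h>

/* Hypothetical helper: tag an event with a flow id before enqueuing it to a
 * scheduled queue, freeing it if the enqueue fails. */
static int enqueue_tagged(odp_queue_t queue, odp_event_t ev, uint32_t flow_id)
{
        /* flow_id must stay below the flow count configured in the scheduler */
        odp_event_flow_id_set(ev, flow_id);

        if (odp_queue_enq(queue, ev)) {
                odp_event_free(ev);
                return -1;
        }

        return 0;
}

/* Consumer side: read the flow id back from a scheduled event. */
static uint32_t flow_of(odp_event_t ev)
{
        return odp_event_flow_id(ev);
}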
commit 3e2e07762422a75298cab27fbab64fb4a6f9383c
Author: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Date:   Thu Nov 15 20:34:27 2018 +0300
validation: scheduler use schedule_config instead of capabilities
Since the ODP test suite uses the default scheduler configuration, all comparisons should be made against it rather than against the maximum values returned by the capability call.
Signed-off-by: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Signed-off-by: Balasubramanian Manoharan <bala.manoharan@linaro.org>
Reviewed-by: Bill Fischofer <bill.fischofer@linaro.org>
Reviewed-by: Petri Savolainen <petri.savolainen@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/test/validation/api/scheduler/scheduler.c b/test/validation/api/scheduler/scheduler.c index 4fdfc243..27377580 100644 --- a/test/validation/api/scheduler/scheduler.c +++ b/test/validation/api/scheduler/scheduler.c @@ -418,7 +418,7 @@ static void scheduler_test_wait(void) static void scheduler_test_queue_size(void) { odp_queue_capability_t queue_capa; - odp_scheduler_config_t default_config; + odp_schedule_config_t default_config; odp_pool_t pool; odp_pool_param_t pool_param; odp_queue_param_t queue_param; @@ -432,8 +432,8 @@ static void scheduler_test_queue_size(void) ODP_SCHED_SYNC_ORDERED};
CU_ASSERT_FATAL(odp_queue_capability(&queue_capa) == 0); - odp_scheduler_config_init(&default_config); queue_size = TEST_QUEUE_SIZE_NUM_EV; + odp_schedule_config_init(&default_config); if (default_config.queue_size && queue_size > default_config.queue_size) queue_size = default_config.queue_size; @@ -1662,6 +1662,7 @@ static int create_queues(test_globals_t *globals) int i, j, prios, rc; odp_queue_capability_t capa; odp_schedule_capability_t sched_capa; + odp_schedule_config_t default_config; odp_pool_t queue_ctx_pool; odp_pool_param_t params; odp_buffer_t queue_ctx_buf; @@ -1691,10 +1692,11 @@ static int create_queues(test_globals_t *globals) }
globals->max_sched_queue_size = BUFS_PER_QUEUE_EXCL; - if (sched_capa.max_queue_size && sched_capa.max_queue_size < - BUFS_PER_QUEUE_EXCL) { - printf("Max sched queue size %u\n", sched_capa.max_queue_size); - globals->max_sched_queue_size = sched_capa.max_queue_size; + odp_schedule_config_init(&default_config); + if (default_config.queue_size && + globals->max_sched_queue_size > default_config.queue_size) { + printf("Max sched queue size %u\n", default_config.queue_size); + globals->max_sched_queue_size = default_config.queue_size; }
prios = odp_schedule_num_prio(); @@ -1704,7 +1706,7 @@ static int create_queues(test_globals_t *globals) queues_per_prio = QUEUES_PER_PRIO; num_sched = (prios * queues_per_prio * sched_types) + CHAOS_NUM_QUEUES; num_plain = (prios * queues_per_prio); - while ((num_sched > sched_capa.max_queues || + while ((num_sched > default_config.num_queues || num_plain > capa.plain.max_num || num_sched + num_plain > capa.max_queues) && queues_per_prio) { queues_per_prio--;
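The pattern the test now follows, as a standalone sketch: the queue size is clamped against the default schedule configuration rather than the capability maximum. The constant and function name below are illustrative.

#include <odp_api.h>

#define WANTED_QUEUE_SIZE 50 /* illustrative */

static uint32_t pick_sched_queue_size(void)
{
        odp_schedule_config_t default_config;
        uint32_t queue_size = WANTED_QUEUE_SIZE;

        /* Fill in the implementation defaults without applying them */
        odp_schedule_config_init(&default_config);

        if (default_config.queue_size &&
            queue_size > default_config.queue_size)
                queue_size = default_config.queue_size;

        return queue_size;
}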
commit 5235ac6934f2a39e91d30971003739c6e3620224
Author: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Date:   Wed Nov 7 17:41:04 2018 +0300
examples: add calls to odp_schedule_config()
Signed-off-by: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Signed-off-by: Balasubramanian Manoharan <bala.manoharan@linaro.org>
Reviewed-by: Bill Fischofer <bill.fischofer@linaro.org>
Reviewed-by: Petri Savolainen <petri.savolainen@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/example/classifier/odp_classifier.c b/example/classifier/odp_classifier.c index 43d40c53..274ffaf4 100644 --- a/example/classifier/odp_classifier.c +++ b/example/classifier/odp_classifier.c @@ -556,6 +556,9 @@ int main(int argc, char *argv[]) exit(EXIT_FAILURE); }
+ /* Configure scheduler */ + odp_schedule_config(NULL); + /* odp_pool_print(pool); */ odp_atomic_init_u64(&args->total_packets, 0);
diff --git a/example/generator/odp_generator.c b/example/generator/odp_generator.c index 1093454c..bd6af795 100644 --- a/example/generator/odp_generator.c +++ b/example/generator/odp_generator.c @@ -1199,6 +1199,9 @@ int main(int argc, char *argv[]) args->rx_burst_size = args->appl.rx_burst; }
+ /* Configure scheduler */ + odp_schedule_config(NULL); + /* Create packet pool */ odp_pool_param_init(¶ms); params.pkt.seg_len = POOL_PKT_LEN; diff --git a/example/ipsec/odp_ipsec.c b/example/ipsec/odp_ipsec.c index 52ccce22..1bbf7a00 100644 --- a/example/ipsec/odp_ipsec.c +++ b/example/ipsec/odp_ipsec.c @@ -1297,6 +1297,9 @@ main(int argc, char *argv[]) exit(EXIT_FAILURE); }
+ /* Configure scheduler */ + odp_schedule_config(NULL); + /* Populate our IPsec cache */ printf("Using %s mode for crypto API\n\n", (CRYPTO_API_SYNC == global->appl.mode) ? "SYNC" : diff --git a/example/ipsec_api/odp_ipsec.c b/example/ipsec_api/odp_ipsec.c index fb0d9049..ab0fa3c5 100644 --- a/example/ipsec_api/odp_ipsec.c +++ b/example/ipsec_api/odp_ipsec.c @@ -996,6 +996,9 @@ main(int argc, char *argv[]) exit(EXIT_FAILURE); }
+ /* Configure scheduler */ + odp_schedule_config(NULL); + /* Populate our IPsec cache */ printf("Using %s mode for IPsec API\n\n", (ODP_IPSEC_OP_MODE_SYNC == global->appl.mode) ? "SYNC" : diff --git a/example/ipsec_offload/odp_ipsec_offload.c b/example/ipsec_offload/odp_ipsec_offload.c index 90b3f640..4d95b2e5 100644 --- a/example/ipsec_offload/odp_ipsec_offload.c +++ b/example/ipsec_offload/odp_ipsec_offload.c @@ -606,6 +606,9 @@ main(int argc, char *argv[])
ipsec_init_post();
+ /* Configure scheduler */ + odp_schedule_config(NULL); + /* Initialize interfaces (which resolves FWD DB entries */ for (i = 0; i < global->appl.if_count; i++) initialize_intf(global->appl.if_names[i], diff --git a/example/packet/odp_packet_dump.c b/example/packet/odp_packet_dump.c index 5dcb7893..4e3aec8f 100644 --- a/example/packet/odp_packet_dump.c +++ b/example/packet/odp_packet_dump.c @@ -640,6 +640,8 @@ int main(int argc, char *argv[])
global->pool = ODP_POOL_INVALID;
+ odp_schedule_config(NULL); + odp_sys_info_print();
if (open_pktios(global)) { diff --git a/example/packet/odp_pktio.c b/example/packet/odp_pktio.c index e73e903c..b1c4a79c 100644 --- a/example/packet/odp_pktio.c +++ b/example/packet/odp_pktio.c @@ -424,6 +424,9 @@ int main(int argc, char *argv[]) } odp_pool_print(pool);
+ /* Config and start scheduler */ + odp_schedule_config(NULL); + /* Create a pktio instance for each interface */ for (i = 0; i < args->appl.if_count; ++i) create_pktio(args->appl.if_names[i], pool, args->appl.mode); diff --git a/example/timer/odp_timer_accuracy.c b/example/timer/odp_timer_accuracy.c index 3b0d7e38..9409e340 100644 --- a/example/timer/odp_timer_accuracy.c +++ b/example/timer/odp_timer_accuracy.c @@ -426,6 +426,9 @@ int main(int argc, char *argv[])
odp_sys_info_print();
+ /* Configure scheduler */ + odp_schedule_config(NULL); + num = test_global.opt.num;
test_global.timer = calloc(num, sizeof(odp_timer_t)); diff --git a/example/timer/odp_timer_simple.c b/example/timer/odp_timer_simple.c index 116f8ba6..ddefb0d2 100644 --- a/example/timer/odp_timer_simple.c +++ b/example/timer/odp_timer_simple.c @@ -81,6 +81,9 @@ int main(int argc ODP_UNUSED, char *argv[] ODP_UNUSED) goto err; }
+ /* Configure scheduler */ + odp_schedule_config(NULL); + /* * Create a queue for timer test */ diff --git a/example/timer/odp_timer_test.c b/example/timer/odp_timer_test.c index 192a61d3..ca3e8ddf 100644 --- a/example/timer/odp_timer_test.c +++ b/example/timer/odp_timer_test.c @@ -418,6 +418,9 @@ int main(int argc, char *argv[]) printf("period: %i usec\n", gbls->args.period_us); printf("timeouts: %i\n", gbls->args.tmo_count);
+ /* Configure scheduler */ + odp_schedule_config(NULL); + /* * Create pool for timeouts */
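All of the examples above add the same call in their startup path; a minimal sketch of where it sits relative to ODP initialization is shown below (error handling and the actual workload are trimmed).

#include <odp_api.h>

int main(void)
{
        odp_instance_t instance;

        if (odp_init_global(&instance, NULL, NULL) ||
            odp_init_local(instance, ODP_THREAD_CONTROL))
                return -1;

        /* Accept the default scheduler configuration before any pktio or
         * scheduled queue is created. */
        if (odp_schedule_config(NULL))
                return -1;

        /* ... create pools, queues, pktios and run the example ... */

        odp_term_local();
        odp_term_global(instance);
        return 0;
}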
commit 886995cfbf5faa9ad269fdfccae42a889035c001
Author: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Date:   Wed Nov 7 17:41:04 2018 +0300
performance: add calls to odp_schedule_config()
Signed-off-by: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Signed-off-by: Balasubramanian Manoharan <bala.manoharan@linaro.org>
Reviewed-by: Bill Fischofer <bill.fischofer@linaro.org>
Reviewed-by: Petri Savolainen <petri.savolainen@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/test/performance/odp_cpu_bench.c b/test/performance/odp_cpu_bench.c index b41bc43f..852ed308 100644 --- a/test/performance/odp_cpu_bench.c +++ b/test/performance/odp_cpu_bench.c @@ -526,7 +526,7 @@ int main(int argc, char *argv[]) odp_cpumask_t cpumask; odp_pool_capability_t pool_capa; odp_pool_t pool; - odp_schedule_capability_t schedule_capa; + odp_schedule_config_t schedule_config; odp_shm_t shm; odp_shm_t lookup_tbl_shm; odp_pool_param_t params; @@ -614,26 +614,24 @@ int main(int argc, char *argv[]) printf("first CPU: %i\n", odp_cpumask_first(&cpumask)); printf("cpu mask: %s\n", cpumaskstr);
- if (odp_schedule_capability(&schedule_capa)) { - printf("Error: Schedule capa failed.\n"); - return -1; - } + odp_schedule_config_init(&schedule_config); + odp_schedule_config(&schedule_config);
/* Make sure a single queue can store all the packets in a group */ pkts_per_group = QUEUES_PER_GROUP * PKTS_PER_QUEUE; - if (schedule_capa.max_queue_size && - schedule_capa.max_queue_size < pkts_per_group) - pkts_per_group = schedule_capa.max_queue_size; + if (schedule_config.queue_size && + schedule_config.queue_size < pkts_per_group) + pkts_per_group = schedule_config.queue_size;
/* Divide queues evenly into groups */ - if (schedule_capa.max_queues < QUEUES_PER_GROUP) { + if (schedule_config.num_queues < QUEUES_PER_GROUP) { LOG_ERR("Error: min %d queues required\n", QUEUES_PER_GROUP); return -1; } - num_queues = num_workers > schedule_capa.max_queues ? - schedule_capa.max_queues : num_workers; + num_queues = num_workers > schedule_config.num_queues ? + schedule_config.num_queues : num_workers; num_groups = (num_queues + QUEUES_PER_GROUP - 1) / QUEUES_PER_GROUP; - if (num_groups * QUEUES_PER_GROUP > schedule_capa.max_queues) + if (num_groups * QUEUES_PER_GROUP > schedule_config.num_queues) num_groups--; num_queues = num_groups * QUEUES_PER_GROUP;
diff --git a/test/performance/odp_crypto.c b/test/performance/odp_crypto.c index d175bb7e..665268be 100644 --- a/test/performance/odp_crypto.c +++ b/test/performance/odp_crypto.c @@ -1101,6 +1101,7 @@ int main(int argc, char *argv[])
odp_queue_param_init(&qparam); if (cargs.schedule) { + odp_schedule_config(NULL); qparam.type = ODP_QUEUE_TYPE_SCHED; qparam.sched.prio = ODP_SCHED_PRIO_DEFAULT; qparam.sched.sync = ODP_SCHED_SYNC_PARALLEL; diff --git a/test/performance/odp_ipsec.c b/test/performance/odp_ipsec.c index 5a5824e8..e388916c 100644 --- a/test/performance/odp_ipsec.c +++ b/test/performance/odp_ipsec.c @@ -1088,6 +1088,7 @@ int main(int argc, char *argv[])
odp_queue_param_init(&qparam); if (cargs.schedule) { + odp_schedule_config(NULL); qparam.type = ODP_QUEUE_TYPE_SCHED; qparam.sched.prio = ODP_SCHED_PRIO_DEFAULT; qparam.sched.sync = ODP_SCHED_SYNC_PARALLEL; diff --git a/test/performance/odp_l2fwd.c b/test/performance/odp_l2fwd.c index c9243184..78e3920f 100644 --- a/test/performance/odp_l2fwd.c +++ b/test/performance/odp_l2fwd.c @@ -1568,6 +1568,8 @@ int main(int argc, char *argv[])
bind_workers();
+ odp_schedule_config(NULL); + /* Default */ if (num_groups == 0) { group[0] = ODP_SCHED_GROUP_ALL; diff --git a/test/performance/odp_pktio_ordered.c b/test/performance/odp_pktio_ordered.c index da37407a..15229aeb 100644 --- a/test/performance/odp_pktio_ordered.c +++ b/test/performance/odp_pktio_ordered.c @@ -1062,6 +1062,7 @@ int main(int argc, char *argv[]) odp_pool_param_t params; odp_shm_t shm; odp_schedule_capability_t schedule_capa; + odp_schedule_config_t schedule_config; odp_pool_capability_t pool_capa; odph_ethaddr_t new_addr; odph_helper_options_t helper_options; @@ -1129,6 +1130,8 @@ int main(int argc, char *argv[]) /* Parse and store the application arguments */ parse_args(argc, argv, &gbl_args->appl);
+ odp_schedule_config(NULL); + if (gbl_args->appl.in_mode == SCHED_ORDERED) { /* At least one ordered lock required */ if (schedule_capa.max_ordered_locks < 1) { @@ -1158,9 +1161,9 @@ int main(int argc, char *argv[]) pool_size = pool_capa.pkt.max_num;
queue_size = MAX_NUM_PKT; - if (schedule_capa.max_queue_size && - schedule_capa.max_queue_size < MAX_NUM_PKT) - queue_size = schedule_capa.max_queue_size; + if (schedule_config.queue_size && + schedule_config.queue_size < MAX_NUM_PKT) + queue_size = schedule_config.queue_size;
/* Pool should not be larger than queue, otherwise queue enqueues at * packet input may fail. */ diff --git a/test/performance/odp_pktio_perf.c b/test/performance/odp_pktio_perf.c index b86d437e..2ed2c352 100644 --- a/test/performance/odp_pktio_perf.c +++ b/test/performance/odp_pktio_perf.c @@ -759,6 +759,9 @@ static int test_init(void) iface = gbl_args->args.ifaces[0]; schedule = gbl_args->args.schedule;
+ if (schedule) + odp_schedule_config(NULL); + /* create pktios and associate input/output queues */ gbl_args->pktio_tx = create_pktio(iface, schedule); if (gbl_args->args.num_ifaces > 1) { diff --git a/test/performance/odp_sched_latency.c b/test/performance/odp_sched_latency.c index b6299141..b5be1a16 100644 --- a/test/performance/odp_sched_latency.c +++ b/test/performance/odp_sched_latency.c @@ -714,6 +714,8 @@ int main(int argc, char *argv[]) memset(globals, 0, sizeof(test_globals_t)); memcpy(&globals->args, &args, sizeof(test_args_t));
+ odp_schedule_config(NULL); + /* * Create event pool */ diff --git a/test/performance/odp_sched_perf.c b/test/performance/odp_sched_perf.c index b25c3e19..c301263e 100644 --- a/test/performance/odp_sched_perf.c +++ b/test/performance/odp_sched_perf.c @@ -43,6 +43,7 @@ typedef struct test_stat_t { typedef struct test_global_t { test_options_t test_options;
+ odp_schedule_config_t schedule_config; odp_barrier_t barrier; odp_pool_t pool; odp_cpumask_t cpumask; @@ -251,7 +252,6 @@ static int create_pool(test_global_t *global)
static int create_queues(test_global_t *global) { - odp_schedule_capability_t schedule_capa; odp_queue_param_t queue_param; odp_queue_t queue; odp_buffer_t buf; @@ -279,20 +279,16 @@ static int create_queues(test_global_t *global)
printf(" queue type %s\n\n", type_str);
- if (odp_schedule_capability(&schedule_capa)) { - printf("Error: Schedule capa failed.\n"); - return -1; - } - - if (tot_queue > schedule_capa.max_queues) { + if (tot_queue > global->schedule_config.num_queues) { printf("Max queues supported %u\n", - schedule_capa.max_queues); + global->schedule_config.num_queues); return -1; }
- if (schedule_capa.max_queue_size && - queue_size > schedule_capa.max_queue_size) { - printf("Max queue size %u\n", schedule_capa.max_queue_size); + if (global->schedule_config.queue_size && + queue_size > global->schedule_config.queue_size) { + printf("Max queue size %u\n", + global->schedule_config.queue_size); return -1; }
@@ -603,6 +599,9 @@ int main(int argc, char **argv) return -1; }
+ odp_schedule_config_init(&global->schedule_config); + odp_schedule_config(&global->schedule_config); + if (set_num_cpu(global)) return -1;
diff --git a/test/performance/odp_sched_pktio.c b/test/performance/odp_sched_pktio.c index 1faa9b1d..393ea352 100644 --- a/test/performance/odp_sched_pktio.c +++ b/test/performance/odp_sched_pktio.c @@ -127,6 +127,8 @@ typedef struct { uint64_t rx_pkt_sum; uint64_t tx_pkt_sum;
+ odp_schedule_config_t schedule_config; + } test_global_t;
static test_global_t *test_global; @@ -723,6 +725,9 @@ static int config_setup(test_global_t *test_global) cpu = odp_cpumask_next(cpumask, cpu); }
+ odp_schedule_config_init(&test_global->schedule_config); + odp_schedule_config(&test_global->schedule_config); + if (odp_pool_capability(&pool_capa)) { printf("Error: Pool capability failed.\n"); return -1; @@ -1109,15 +1114,9 @@ static int create_pipeline_queues(test_global_t *test_global) int i, j, k, num_pktio, stages, queues, ctx_size; pipe_queue_context_t *ctx; odp_queue_param_t queue_param; - odp_schedule_capability_t schedule_capa; odp_schedule_sync_t sched_sync; int ret = 0;
- if (odp_schedule_capability(&schedule_capa)) { - printf("Error: Schedule capa failed.\n"); - return -1; - } - num_pktio = test_global->opt.num_pktio; stages = test_global->opt.pipe_stages; queues = test_global->opt.pipe_queues; @@ -1130,10 +1129,10 @@ static int create_pipeline_queues(test_global_t *test_global) queue_param.sched.group = ODP_SCHED_GROUP_ALL;
queue_param.size = test_global->opt.pipe_queue_size; - if (schedule_capa.max_queue_size && - queue_param.size > schedule_capa.max_queue_size) { + if (test_global->schedule_config.queue_size && + queue_param.size > test_global->schedule_config.queue_size) { printf("Error: Pipeline queue max size is %u\n", - schedule_capa.max_queue_size); + test_global->schedule_config.queue_size); return -1; }
diff --git a/test/performance/odp_scheduling.c b/test/performance/odp_scheduling.c index 655a619e..afe5b73b 100644 --- a/test/performance/odp_scheduling.c +++ b/test/performance/odp_scheduling.c @@ -813,7 +813,7 @@ int main(int argc, char *argv[]) odph_odpthread_params_t thr_params; odp_queue_capability_t capa; odp_pool_capability_t pool_capa; - odp_schedule_capability_t schedule_capa; + odp_schedule_config_t schedule_config; uint32_t num_queues, num_buf;
printf("\nODP example starts\n\n"); @@ -909,15 +909,14 @@ int main(int argc, char *argv[]) return -1; }
- if (odp_schedule_capability(&schedule_capa)) { - printf("Error: Schedule capa failed.\n"); - return -1; - } + odp_schedule_config_init(&schedule_config); + odp_schedule_config(&schedule_config);
globals->queues_per_prio = QUEUES_PER_PRIO; num_queues = globals->queues_per_prio * NUM_PRIOS; - if (num_queues > schedule_capa.max_queues) - globals->queues_per_prio = schedule_capa.max_queues / + if (schedule_config.num_queues && + num_queues > schedule_config.num_queues) + globals->queues_per_prio = schedule_config.num_queues / NUM_PRIOS;
/* One plain queue is also used */
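The performance tests that need the configured limits keep the configuration structure around instead of passing NULL; a sketch of that variant follows, with the storage location illustrative only.

#include <odp_api.h>

static odp_schedule_config_t schedule_config; /* illustrative storage */

static int configure_scheduler(void)
{
        odp_schedule_config_init(&schedule_config);

        if (odp_schedule_config(&schedule_config))
                return -1;

        /* Later sizing decisions can reuse schedule_config.num_queues and
         * schedule_config.queue_size instead of the capability maxima. */
        return 0;
}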
commit 809a6e3eaba3a510337f9a5ce4dc146074e39bd8
Author: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Date:   Wed Nov 7 17:41:04 2018 +0300
validation: add calls to odp_schedule_config()
Signed-off-by: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Signed-off-by: Balasubramanian Manoharan <bala.manoharan@linaro.org>
Reviewed-by: Bill Fischofer <bill.fischofer@linaro.org>
Reviewed-by: Petri Savolainen <petri.savolainen@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/test/common/odp_cunit_common.c b/test/common/odp_cunit_common.c
index 4db6c32d..7f345fba 100644
--- a/test/common/odp_cunit_common.c
+++ b/test/common/odp_cunit_common.c
@@ -98,6 +98,10 @@ static int tests_global_init(odp_instance_t *inst)
 		fprintf(stderr, "error: odp_init_local() failed.\n");
 		return -1;
 	}
+	if (0 != odp_schedule_config(NULL)) {
+		fprintf(stderr, "error: odp_schedule_config(NULL) failed.\n");
+		return -1;
+	}
 
 	return 0;
 }
diff --git a/test/validation/api/timer/timer.c b/test/validation/api/timer/timer.c
index 72294c5c..aaffd92d 100644
--- a/test/validation/api/timer/timer.c
+++ b/test/validation/api/timer/timer.c
@@ -92,6 +92,9 @@ static int timer_global_init(odp_instance_t *inst)
 	global_mem = odp_shm_addr(global_shm);
 	memset(global_mem, 0, sizeof(global_shared_mem_t));
 
+	/* Configure scheduler */
+	odp_schedule_config(NULL);
+
 	return 0;
 }
commit a43d82864faeed378972b95087f470c4a5f076c2
Author: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Date:   Wed Nov 7 17:39:48 2018 +0300
linux-gen: implement odp_schedule_config() API call
Add an odp_schedule_config() stub, which does nothing at this point. Use it to check (in debug mode) that the application calls it in the proper place.
Signed-off-by: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Signed-off-by: Balasubramanian Manoharan <bala.manoharan@linaro.org>
Reviewed-by: Bill Fischofer <bill.fischofer@linaro.org>
Reviewed-by: Petri Savolainen <petri.savolainen@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/platform/linux-generic/include/odp_schedule_if.h b/platform/linux-generic/include/odp_schedule_if.h index abc64d0d..15c91590 100644 --- a/platform/linux-generic/include/odp_schedule_if.h +++ b/platform/linux-generic/include/odp_schedule_if.h @@ -87,10 +87,17 @@ int sched_cb_pktin_poll(int pktio_index, int pktin_index, int sched_cb_pktin_poll_one(int pktio_index, int rx_queue, odp_event_t evts[]); void sched_cb_pktio_stop_finalize(int pktio_index);
+/* For debugging */ +#ifdef ODP_DEBUG +extern int _odp_schedule_configured; +#endif + /* API functions */ typedef struct { uint64_t (*schedule_wait_time)(uint64_t ns); int (*schedule_capability)(odp_schedule_capability_t *capa); + void (*schedule_config_init)(odp_schedule_config_t *config); + int (*schedule_config)(const odp_schedule_config_t *config); odp_event_t (*schedule)(odp_queue_t *from, uint64_t wait); int (*schedule_multi)(odp_queue_t *from, uint64_t wait, odp_event_t events[], int num); diff --git a/platform/linux-generic/odp_schedule_basic.c b/platform/linux-generic/odp_schedule_basic.c index f057f468..48f232e6 100644 --- a/platform/linux-generic/odp_schedule_basic.c +++ b/platform/linux-generic/odp_schedule_basic.c @@ -597,6 +597,8 @@ static int schedule_init_queue(uint32_t queue_index, int i; int prio = prio_level_from_api(sched_param->prio);
+ ODP_ASSERT(_odp_schedule_configured); + pri_set_queue(queue_index, prio); sched->queue[queue_index].grp = sched_param->group; sched->queue[queue_index].prio = prio; @@ -797,6 +799,19 @@ static int schedule_term_local(void) return 0; }
+static void schedule_config_init(odp_schedule_config_t *config) +{ + config->num_queues = ODP_CONFIG_QUEUES - NUM_INTERNAL_QUEUES; + config->queue_size = queue_glb->config.max_queue_size; +} + +static int schedule_config(const odp_schedule_config_t *config) +{ + (void)config; + + return 0; +} + static inline int copy_from_stash(odp_event_t out_ev[], unsigned int max) { int i = 0; @@ -1589,6 +1604,8 @@ const schedule_fn_t schedule_basic_fn = { const schedule_api_t schedule_basic_api = { .schedule_wait_time = schedule_wait_time, .schedule_capability = schedule_capability, + .schedule_config_init = schedule_config_init, + .schedule_config = schedule_config, .schedule = schedule, .schedule_multi = schedule_multi, .schedule_multi_wait = schedule_multi_wait, diff --git a/platform/linux-generic/odp_schedule_if.c b/platform/linux-generic/odp_schedule_if.c index 92e0a62f..cb52f155 100644 --- a/platform/linux-generic/odp_schedule_if.c +++ b/platform/linux-generic/odp_schedule_if.c @@ -25,6 +25,10 @@ extern const schedule_api_t schedule_scalable_api; const schedule_fn_t *sched_fn; const schedule_api_t *sched_api;
+#ifdef ODP_DEBUG +int _odp_schedule_configured; +#endif + uint64_t odp_schedule_wait_time(uint64_t ns) { return sched_api->schedule_wait_time(ns); @@ -35,14 +39,46 @@ int odp_schedule_capability(odp_schedule_capability_t *capa) return sched_api->schedule_capability(capa); }
+void odp_schedule_config_init(odp_schedule_config_t *config)
+{
+	memset(config, 0, sizeof(*config));
+
+	sched_api->schedule_config_init(config);
+}
+
+int odp_schedule_config(const odp_schedule_config_t *config)
+{
+	int ret;
+	odp_schedule_config_t defconfig;
+
+	ODP_ASSERT(!_odp_schedule_configured);
+
+	if (!config) {
+		odp_schedule_config_init(&defconfig);
+		config = &defconfig;
+	}
+
+	ret = sched_api->schedule_config(config);
+#ifdef ODP_DEBUG
+	if (ret >= 0)
+		_odp_schedule_configured = 1;
+#endif
+
+	return ret;
+}
+
 odp_event_t odp_schedule(odp_queue_t *from, uint64_t wait)
 {
+	ODP_ASSERT(_odp_schedule_configured);
+
 	return sched_api->schedule(from, wait);
 }
int odp_schedule_multi(odp_queue_t *from, uint64_t wait, odp_event_t events[], int num) { + ODP_ASSERT(_odp_schedule_configured); + return sched_api->schedule_multi(from, wait, events, num); }
diff --git a/platform/linux-generic/odp_schedule_scalable.c b/platform/linux-generic/odp_schedule_scalable.c index 091e5ff9..4e9dd771 100644 --- a/platform/linux-generic/odp_schedule_scalable.c +++ b/platform/linux-generic/odp_schedule_scalable.c @@ -1994,6 +1994,19 @@ static int schedule_term_local(void) return rc; }
+static void schedule_config_init(odp_schedule_config_t *config) +{ + config->num_queues = ODP_CONFIG_QUEUES - NUM_INTERNAL_QUEUES; + config->queue_size = 0; /* FIXME ? */ +} + +static int schedule_config(const odp_schedule_config_t *config) +{ + (void)config; + + return 0; +} + static int num_grps(void) { return MAX_SCHED_GROUP; @@ -2141,6 +2154,8 @@ const schedule_fn_t schedule_scalable_fn = { const schedule_api_t schedule_scalable_api = { .schedule_wait_time = schedule_wait_time, .schedule_capability = schedule_capability, + .schedule_config_init = schedule_config_init, + .schedule_config = schedule_config, .schedule = schedule, .schedule_multi = schedule_multi, .schedule_multi_wait = schedule_multi_wait, diff --git a/platform/linux-generic/odp_schedule_sp.c b/platform/linux-generic/odp_schedule_sp.c index 6cc8f376..eec88a60 100644 --- a/platform/linux-generic/odp_schedule_sp.c +++ b/platform/linux-generic/odp_schedule_sp.c @@ -257,6 +257,19 @@ static int term_local(void) return 0; }
+static void schedule_config_init(odp_schedule_config_t *config) +{ + config->num_queues = ODP_CONFIG_QUEUES - NUM_INTERNAL_QUEUES; + config->queue_size = queue_glb->config.max_queue_size; +} + +static int schedule_config(const odp_schedule_config_t *config) +{ + (void)config; + + return 0; +} + static uint32_t max_ordered_locks(void) { return NUM_ORDERED_LOCKS; @@ -362,6 +375,11 @@ static int init_queue(uint32_t qi, const odp_schedule_param_t *sched_param) odp_schedule_group_t group = sched_param->group; int prio = 0;
+#ifdef ODP_DEBUG + if (!_odp_schedule_configured) + ODP_ABORT("Scheduler not configured!\n"); +#endif + if (group < 0 || group >= NUM_GROUP) return -1;
@@ -961,6 +979,8 @@ const schedule_fn_t schedule_sp_fn = { const schedule_api_t schedule_sp_api = { .schedule_wait_time = schedule_wait_time, .schedule_capability = schedule_capability, + .schedule_config_init = schedule_config_init, + .schedule_config = schedule_config, .schedule = schedule, .schedule_multi = schedule_multi, .schedule_multi_wait = schedule_multi_wait,
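The guard added by this patch boils down to a one-shot flag set by the configuration call and checked by the entry points in debug builds. Below is a generic, platform-independent sketch of the idea (using the standard assert() rather than ODP_ASSERT, and hypothetical function names).

#include <assert.h>
#include <stdbool.h>

static bool schedule_configured; /* set once by the config call */

static int my_schedule_config(void)
{
        assert(!schedule_configured); /* configuring twice is an error */
        schedule_configured = true;
        return 0;
}

static void my_schedule(void)
{
        assert(schedule_configured); /* scheduling before config is an error */
        /* ... actual scheduling work ... */
}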
commit a73130efd9e89dae4e67baab64b78b4b0f261668
Author: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Date:   Wed Nov 7 16:46:50 2018 +0300
linux-gen: schedule: rename config to get_config
Rename config function to get_config to avoid collisions.
Signed-off-by: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Signed-off-by: Balasubramanian Manoharan <bala.manoharan@linaro.org>
Reviewed-by: Bill Fischofer <bill.fischofer@linaro.org>
Reviewed-by: Petri Savolainen <petri.savolainen@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/platform/linux-generic/include/odp_schedule_if.h b/platform/linux-generic/include/odp_schedule_if.h index 88961269..abc64d0d 100644 --- a/platform/linux-generic/include/odp_schedule_if.h +++ b/platform/linux-generic/include/odp_schedule_if.h @@ -53,7 +53,7 @@ typedef void (*schedule_order_unlock_lock_fn_t)(void); typedef void (*schedule_order_lock_start_fn_t)(void); typedef void (*schedule_order_lock_wait_fn_t)(void); typedef uint32_t (*schedule_max_ordered_locks_fn_t)(void); -typedef void (*schedule_config_fn_t)(schedule_config_t *config); +typedef void (*schedule_get_config_fn_t)(schedule_config_t *config);
typedef struct schedule_fn_t { schedule_pktio_start_fn_t pktio_start; @@ -74,7 +74,7 @@ typedef struct schedule_fn_t { schedule_order_lock_wait_fn_t wait_order_lock; schedule_order_unlock_lock_fn_t order_unlock_lock; schedule_max_ordered_locks_fn_t max_ordered_locks; - schedule_config_fn_t config; + schedule_get_config_fn_t get_config;
} schedule_fn_t;
diff --git a/platform/linux-generic/odp_schedule_basic.c b/platform/linux-generic/odp_schedule_basic.c index b93e5c41..f057f468 100644 --- a/platform/linux-generic/odp_schedule_basic.c +++ b/platform/linux-generic/odp_schedule_basic.c @@ -1547,7 +1547,7 @@ static int schedule_num_grps(void) return NUM_SCHED_GRPS; }
-static void schedule_config(schedule_config_t *config) +static void schedule_get_config(schedule_config_t *config) { *config = *(&sched->config_if); }; @@ -1582,7 +1582,7 @@ const schedule_fn_t schedule_basic_fn = { .order_lock = order_lock, .order_unlock = order_unlock, .max_ordered_locks = schedule_max_ordered_locks, - .config = schedule_config + .get_config = schedule_get_config };
/* Fill in scheduler API calls */ diff --git a/platform/linux-generic/odp_thread.c b/platform/linux-generic/odp_thread.c index 7728929b..b30174dd 100644 --- a/platform/linux-generic/odp_thread.c +++ b/platform/linux-generic/odp_thread.c @@ -142,10 +142,10 @@ int odp_thread_init_local(odp_thread_type_t type) group_worker = 1; group_control = 1;
- if (sched_fn->config) { + if (sched_fn->get_config) { schedule_config_t schedule_config;
- sched_fn->config(&schedule_config); + sched_fn->get_config(&schedule_config); group_all = schedule_config.group_enable.all; group_worker = schedule_config.group_enable.worker; group_control = schedule_config.group_enable.control; @@ -196,10 +196,10 @@ int odp_thread_term_local(void) group_worker = 1; group_control = 1;
- if (sched_fn->config) { + if (sched_fn->get_config) { schedule_config_t schedule_config;
- sched_fn->config(&schedule_config); + sched_fn->get_config(&schedule_config); group_all = schedule_config.group_enable.all; group_worker = schedule_config.group_enable.worker; group_control = schedule_config.group_enable.control;
commit dbd20ce8581fe5a36d61eb48071b47a89f808fdb
Author: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Date:   Wed Oct 31 13:45:09 2018 +0300
api: schedule: add scheduler flow aware mode
Add ODP scheduler configuration to support flow aware mode.
Signed-off-by: Balasubramanian Manoharan <bala.manoharan@linaro.org>
Signed-off-by: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Signed-off-by: Balasubramanian Manoharan <bala.manoharan@linaro.org>
Reviewed-by: Bill Fischofer <bill.fischofer@linaro.org>
Reviewed-by: Petri Savolainen <petri.savolainen@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/include/odp/api/spec/event.h b/include/odp/api/spec/event.h index d9f7ab73..affdc7b0 100644 --- a/include/odp/api/spec/event.h +++ b/include/odp/api/spec/event.h @@ -209,6 +209,44 @@ void odp_event_free_multi(const odp_event_t event[], int num); */ void odp_event_free_sp(const odp_event_t event[], int num);
+/** + * Event flow id value + * + * Returns the flow id value set in the event. + * Usage of flow id enables scheduler to maintain multiple synchronization + * contexts per single queue. For example, when multiple flows are assigned to + * an atomic queue, events of a single flow (events from the same queue with + * the same flow id value) are guaranteed to be processed by only single thread + * at a time. For packets received through packet input initial + * event flow id will be same as flow hash generated for packets. The hash + * algorithm and therefore the resulting flow id value is implementation + * specific. Use pktio API configuration options to select the fields used for + * initial flow id calculation. For all other events initial flow id is zero + * An application can change event flow id using odp_event_flow_id_set(). + * + * @param event Event handle + * + * @return Flow id of the event + * + */ +uint32_t odp_event_flow_id(odp_event_t event); + +/** + * Set event flow id value + * + * Store the event flow id for the event and sets the flow id flag. + * When scheduler is configured as flow aware, scheduled queue synchronization + * will be based on this id within each queue. + * When scheduler is configured as flow unaware, event flow id is ignored by + * the implementation. + * The value of flow id must be less than the number of flows configured in the + * scheduler. + * + * @param event Event handle + * @param flow_id Flow event id to be set. + */ +void odp_event_flow_id_set(odp_event_t event, uint32_t flow_id); + /** * @} */ diff --git a/include/odp/api/spec/schedule_types.h b/include/odp/api/spec/schedule_types.h index 0b75d17d..3648c64e 100644 --- a/include/odp/api/spec/schedule_types.h +++ b/include/odp/api/spec/schedule_types.h @@ -78,6 +78,9 @@ extern "C" { * requests another event from the scheduler, which implicitly releases the * context. User may allow the scheduler to release the context earlier than * that by calling odp_schedule_release_atomic(). + * When scheduler is enabled as flow-aware, the event flow id value affects + * scheduling of the event and synchronization is maintained per flow within + * each queue. */
/** @@ -104,6 +107,9 @@ extern "C" { * (e.g. freed or stored) within the context are considered missing from * reordering and are skipped at this time (but can be ordered again within * another context). + * When scheduler is enabled as flow-aware, the event flow id value affects + * scheduling of the event and synchronization is maintained per flow within + * each queue. */
/** @@ -190,6 +196,13 @@ typedef struct odp_schedule_capability_t { * events. */ uint32_t max_queue_size;
+ /** Maximum supported flows per queue. + * Specifies the maximum number of flows per queue supported by the + * implementation. A value of 0 indicates flow aware mode is not + * supported. + */ + uint32_t max_flows; + /** Lock-free (ODP_NONBLOCKING_LF) queues support. * The specification is the same as for the blocking implementation. */ odp_support_t lockfree_queues; @@ -217,6 +230,22 @@ typedef struct odp_schedule_config_t { */ uint32_t queue_size;
+ /** Number of flows per queue to be supported. Scheduler enables flow + * aware mode when flow count is configured greater than 1 (up to + * 'max_flows' capability). + * + * Flows are lightweight entities and events can be assigned to + * specific flows by the application using odp_event_flow_id_set() + * before enqueuing the event into the scheduler. This value is ignored + * unless scheduler supports flow aware mode. + * + * This number should be less than maximum flow supported by the + * implementation. The default value is zero. + * + * @see odp_schedule_capability_t + */ + uint32_t num_flows; + } odp_schedule_config_t;
/**
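Putting the new capability field, configuration field, and event calls together, enabling flow-aware mode might look like the sketch below. The flow count of 16 is arbitrary and is clamped to the reported maximum; the function names are illustrative.

#include <odp_api.h>

static int enable_flow_aware(void)
{
        odp_schedule_capability_t capa;
        odp_schedule_config_t config;

        if (odp_schedule_capability(&capa))
                return -1;

        if (capa.max_flows == 0)
                return -1; /* flow aware mode not supported */

        odp_schedule_config_init(&config);
        config.num_flows = capa.max_flows < 16 ? capa.max_flows : 16;

        return odp_schedule_config(&config);
}

/* Producer: pick a flow (here from a hypothetical hash) before enqueue */
static int enqueue_flow(odp_queue_t queue, odp_event_t ev,
                        uint32_t hash, uint32_t num_flows)
{
        odp_event_flow_id_set(ev, hash % num_flows);
        return odp_queue_enq(queue, ev);
}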
commit 06851a74104465905cff0d71f852f46d91af224c
Author: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Date:   Wed Oct 24 17:55:18 2018 +0300
api: schedule: add scheduler config and start API
Add API calls to configure and start the scheduler subsystem.
Signed-off-by: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Signed-off-by: Balasubramanian Manoharan <bala.manoharan@linaro.org>
Reviewed-by: Bill Fischofer <bill.fischofer@linaro.org>
Reviewed-by: Petri Savolainen <petri.savolainen@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/include/odp/api/spec/schedule.h b/include/odp/api/spec/schedule.h index 6538c509..43292124 100644 --- a/include/odp/api/spec/schedule.h +++ b/include/odp/api/spec/schedule.h @@ -257,6 +257,44 @@ int odp_schedule_default_prio(void); */ int odp_schedule_num_prio(void);
+/** + * Initialize schedule configuration options + * + * Initialize an odp_schedule_config_t to its default values. + * + * @param[out] config Pointer to schedule configuration structure + */ +void odp_schedule_config_init(odp_schedule_config_t *config); + +/** + * Global schedule configuration + * + * Initialize and configure scheduler with global configuration options + * to schedule events across different scheduled queues. + * This function must be called before scheduler is used (any other scheduler + * function is called except odp_schedule_capability() and + * odp_schedule_config_init()) or any queues are created (by application itself + * or by other ODP modules). + * An application can pass NULL value to use default configuration. It will + * have the same result as filling the structure with + * odp_schedule_config_init() and then passing it to odp_schedule_config(). + * + * The initialization sequeunce should be, + * odp_schedule_capability() + * odp_schedule_config_init() + * odp_schedule_config() + * odp_schedule() + * + * @param config Pointer to scheduler configuration structure or NULL for the + * default configuration + * + * @retval 0 on success + * @retval <0 on failure + * + * @see odp_schedule_capability(), odp_schedule_config_init() + */ +int odp_schedule_config(const odp_schedule_config_t *config); + /** * Query scheduler capabilities * diff --git a/include/odp/api/spec/schedule_types.h b/include/odp/api/spec/schedule_types.h index e7cc0479..0b75d17d 100644 --- a/include/odp/api/spec/schedule_types.h +++ b/include/odp/api/spec/schedule_types.h @@ -200,6 +200,25 @@ typedef struct odp_schedule_capability_t {
} odp_schedule_capability_t;
+/** + * Schedule configuration + */ +typedef struct odp_schedule_config_t { + /** Maximum number of scheduled queues to be supported. + * + * @see odp_schedule_capability_t + */ + uint32_t num_queues; + + /** Maximum number of events required to be stored simultaneously in + * scheduled queue. This number must not exceed 'max_queue_size' + * capability. A value of 0 configures default queue size supported by + * the implementation. + */ + uint32_t queue_size; + +} odp_schedule_config_t; + /** * @} */
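The call order documented in the new odp_schedule_config() description, written out as a compilable sketch; error handling is reduced to early returns, and the queue-size clamp is only an example of adjusting the defaults.

#include <odp_api.h>

static int start_scheduler(void)
{
        odp_schedule_capability_t capa;
        odp_schedule_config_t config;

        if (odp_schedule_capability(&capa))
                return -1;

        odp_schedule_config_init(&config);

        /* Adjust the defaults if desired, staying within the capability */
        if (capa.max_queue_size && config.queue_size > capa.max_queue_size)
                config.queue_size = capa.max_queue_size;

        if (odp_schedule_config(&config))
                return -1;

        /* Scheduled queues may be created and odp_schedule() used from
         * this point on. */
        return 0;
}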
commit fe8658c85b721c17a1d998cc0df9106d4e9a4ce7
Author: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Date:   Wed Oct 31 13:47:52 2018 +0300
example, tests: move scheduled queue capabilities to sched
Move scheduled queue capabilities to the odp_schedule_capability_t structure, as they logically belong to the ODP scheduler module rather than the queue module.
Signed-off-by: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Signed-off-by: Balasubramanian Manoharan <bala.manoharan@linaro.org>
Reviewed-by: Bill Fischofer <bill.fischofer@linaro.org>
Reviewed-by: Petri Savolainen <petri.savolainen@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/example/sysinfo/odp_sysinfo.c b/example/sysinfo/odp_sysinfo.c index cd0c6bfd..709f25d9 100644 --- a/example/sysinfo/odp_sysinfo.c +++ b/example/sysinfo/odp_sysinfo.c @@ -401,19 +401,15 @@ int main(void) printf(" max ordered locks: %" PRIu32 "\n", schedule_capa.max_ordered_locks); printf(" max groups: %u\n", schedule_capa.max_groups); - printf(" priorities: %u\n", schedule_capa.prios); - printf(" sched.max_num: %" PRIu32 "\n", - queue_capa.sched.max_num); - printf(" sched.max_size: %" PRIu32 "\n", - queue_capa.sched.max_size); - printf(" sched.lf.max_num: %" PRIu32 "\n", - queue_capa.sched.lockfree.max_num); - printf(" sched.lf.max_size: %" PRIu32 "\n", - queue_capa.sched.lockfree.max_size); - printf(" sched.wf.max_num: %" PRIu32 "\n", - queue_capa.sched.waitfree.max_num); - printf(" sched.wf.max_size: %" PRIu32 "\n", - queue_capa.sched.waitfree.max_size); + printf(" priorities: %u\n", schedule_capa.max_prios); + printf(" sched.max_queues: %" PRIu32 "\n", + schedule_capa.max_queues); + printf(" sched.max_queue_size: %" PRIu32 "\n", + schedule_capa.max_queue_size); + printf(" sched.lf_queues: %ssupported\n", + schedule_capa.lockfree_queues ? "" : "not "); + printf(" sched.wf_queues: %ssupported\n", + schedule_capa.waitfree_queues ? "" : "not ");
printf("\n"); printf(" TIMER\n"); diff --git a/test/performance/odp_cpu_bench.c b/test/performance/odp_cpu_bench.c index 402ab4a1..b41bc43f 100644 --- a/test/performance/odp_cpu_bench.c +++ b/test/performance/odp_cpu_bench.c @@ -526,7 +526,7 @@ int main(int argc, char *argv[]) odp_cpumask_t cpumask; odp_pool_capability_t pool_capa; odp_pool_t pool; - odp_queue_capability_t queue_capa; + odp_schedule_capability_t schedule_capa; odp_shm_t shm; odp_shm_t lookup_tbl_shm; odp_pool_param_t params; @@ -614,27 +614,26 @@ int main(int argc, char *argv[]) printf("first CPU: %i\n", odp_cpumask_first(&cpumask)); printf("cpu mask: %s\n", cpumaskstr);
- /* Create application queues */ - if (odp_queue_capability(&queue_capa)) { - LOG_ERR("Error: odp_queue_capability() failed\n"); - exit(EXIT_FAILURE); + if (odp_schedule_capability(&schedule_capa)) { + printf("Error: Schedule capa failed.\n"); + return -1; }
/* Make sure a single queue can store all the packets in a group */ pkts_per_group = QUEUES_PER_GROUP * PKTS_PER_QUEUE; - if (queue_capa.sched.max_size && - queue_capa.sched.max_size < pkts_per_group) - pkts_per_group = queue_capa.sched.max_size; + if (schedule_capa.max_queue_size && + schedule_capa.max_queue_size < pkts_per_group) + pkts_per_group = schedule_capa.max_queue_size;
/* Divide queues evenly into groups */ - if (queue_capa.sched.max_num < QUEUES_PER_GROUP) { + if (schedule_capa.max_queues < QUEUES_PER_GROUP) { LOG_ERR("Error: min %d queues required\n", QUEUES_PER_GROUP); return -1; } - num_queues = num_workers > queue_capa.sched.max_num ? - queue_capa.sched.max_num : num_workers; + num_queues = num_workers > schedule_capa.max_queues ? + schedule_capa.max_queues : num_workers; num_groups = (num_queues + QUEUES_PER_GROUP - 1) / QUEUES_PER_GROUP; - if (num_groups * QUEUES_PER_GROUP > queue_capa.sched.max_num) + if (num_groups * QUEUES_PER_GROUP > schedule_capa.max_queues) num_groups--; num_queues = num_groups * QUEUES_PER_GROUP;
diff --git a/test/performance/odp_pktio_ordered.c b/test/performance/odp_pktio_ordered.c index 1b4b756a..da37407a 100644 --- a/test/performance/odp_pktio_ordered.c +++ b/test/performance/odp_pktio_ordered.c @@ -1061,7 +1061,6 @@ int main(int argc, char *argv[]) odp_pool_t pool; odp_pool_param_t params; odp_shm_t shm; - odp_queue_capability_t queue_capa; odp_schedule_capability_t schedule_capa; odp_pool_capability_t pool_capa; odph_ethaddr_t new_addr; @@ -1099,11 +1098,6 @@ int main(int argc, char *argv[]) exit(EXIT_FAILURE); }
- if (odp_queue_capability(&queue_capa)) { - LOG_ERR("Error: Queue capa failed\n"); - exit(EXIT_FAILURE); - } - if (odp_schedule_capability(&schedule_capa)) { printf("Error: Schedule capa failed.\n"); return -1; @@ -1164,9 +1158,9 @@ int main(int argc, char *argv[]) pool_size = pool_capa.pkt.max_num;
queue_size = MAX_NUM_PKT; - if (queue_capa.sched.max_size && - queue_capa.sched.max_size < MAX_NUM_PKT) - queue_size = queue_capa.sched.max_size; + if (schedule_capa.max_queue_size && + schedule_capa.max_queue_size < MAX_NUM_PKT) + queue_size = schedule_capa.max_queue_size;
/* Pool should not be larger than queue, otherwise queue enqueues at * packet input may fail. */ diff --git a/test/performance/odp_sched_perf.c b/test/performance/odp_sched_perf.c index bbd76c86..b25c3e19 100644 --- a/test/performance/odp_sched_perf.c +++ b/test/performance/odp_sched_perf.c @@ -251,7 +251,7 @@ static int create_pool(test_global_t *global)
static int create_queues(test_global_t *global) { - odp_queue_capability_t queue_capa; + odp_schedule_capability_t schedule_capa; odp_queue_param_t queue_param; odp_queue_t queue; odp_buffer_t buf; @@ -279,19 +279,20 @@ static int create_queues(test_global_t *global)
printf(" queue type %s\n\n", type_str);
- if (odp_queue_capability(&queue_capa)) { - printf("Error: Queue capa failed.\n"); + if (odp_schedule_capability(&schedule_capa)) { + printf("Error: Schedule capa failed.\n"); return -1; }
- if (tot_queue > queue_capa.sched.max_num) { - printf("Max queues supported %u\n", queue_capa.sched.max_num); + if (tot_queue > schedule_capa.max_queues) { + printf("Max queues supported %u\n", + schedule_capa.max_queues); return -1; }
- if (queue_capa.sched.max_size && - queue_size > queue_capa.sched.max_size) { - printf("Max queue size %u\n", queue_capa.sched.max_size); + if (schedule_capa.max_queue_size && + queue_size > schedule_capa.max_queue_size) { + printf("Max queue size %u\n", schedule_capa.max_queue_size); return -1; }
diff --git a/test/performance/odp_sched_pktio.c b/test/performance/odp_sched_pktio.c index 878dcad0..1faa9b1d 100644 --- a/test/performance/odp_sched_pktio.c +++ b/test/performance/odp_sched_pktio.c @@ -1109,12 +1109,12 @@ static int create_pipeline_queues(test_global_t *test_global) int i, j, k, num_pktio, stages, queues, ctx_size; pipe_queue_context_t *ctx; odp_queue_param_t queue_param; - odp_queue_capability_t queue_capa; + odp_schedule_capability_t schedule_capa; odp_schedule_sync_t sched_sync; int ret = 0;
- if (odp_queue_capability(&queue_capa)) { - printf("Error: Queue capability failed\n"); + if (odp_schedule_capability(&schedule_capa)) { + printf("Error: Schedule capa failed.\n"); return -1; }
@@ -1130,10 +1130,10 @@ static int create_pipeline_queues(test_global_t *test_global) queue_param.sched.group = ODP_SCHED_GROUP_ALL;
queue_param.size = test_global->opt.pipe_queue_size; - if (queue_capa.sched.max_size && - queue_param.size > queue_capa.sched.max_size) { + if (schedule_capa.max_queue_size && + queue_param.size > schedule_capa.max_queue_size) { printf("Error: Pipeline queue max size is %u\n", - queue_capa.sched.max_size); + schedule_capa.max_queue_size); return -1; }
diff --git a/test/performance/odp_scheduling.c b/test/performance/odp_scheduling.c index acc401e0..655a619e 100644 --- a/test/performance/odp_scheduling.c +++ b/test/performance/odp_scheduling.c @@ -813,6 +813,7 @@ int main(int argc, char *argv[]) odph_odpthread_params_t thr_params; odp_queue_capability_t capa; odp_pool_capability_t pool_capa; + odp_schedule_capability_t schedule_capa; uint32_t num_queues, num_buf;
printf("\nODP example starts\n\n"); @@ -908,10 +909,16 @@ int main(int argc, char *argv[]) return -1; }
+ if (odp_schedule_capability(&schedule_capa)) { + printf("Error: Schedule capa failed.\n"); + return -1; + } + globals->queues_per_prio = QUEUES_PER_PRIO; num_queues = globals->queues_per_prio * NUM_PRIOS; - if (num_queues > capa.sched.max_num) - globals->queues_per_prio = capa.sched.max_num / NUM_PRIOS; + if (num_queues > schedule_capa.max_queues) + globals->queues_per_prio = schedule_capa.max_queues / + NUM_PRIOS;
/* One plain queue is also used */ num_queues = (globals->queues_per_prio * NUM_PRIOS) + 1; diff --git a/test/validation/api/queue/queue.c b/test/validation/api/queue/queue.c index 99acc4bf..aab95bab 100644 --- a/test/validation/api/queue/queue.c +++ b/test/validation/api/queue/queue.c @@ -127,18 +127,15 @@ static void queue_test_capa(void) odp_queue_param_t qparams; char name[ODP_QUEUE_NAME_LEN]; odp_queue_t queue[MAX_QUEUES]; - uint32_t num_queues, min, i, j; + uint32_t num_queues, min, i;
memset(&capa, 0, sizeof(odp_queue_capability_t)); CU_ASSERT(odp_queue_capability(&capa) == 0);
CU_ASSERT(capa.max_queues != 0); CU_ASSERT(capa.plain.max_num != 0); - CU_ASSERT(capa.sched.max_num != 0);
min = capa.plain.max_num; - if (min > capa.sched.max_num) - min = capa.sched.max_num;
CU_ASSERT(capa.max_queues >= min);
@@ -150,33 +147,26 @@ static void queue_test_capa(void) odp_queue_param_init(&qparams); CU_ASSERT(qparams.nonblocking == ODP_BLOCKING);
- for (j = 0; j < 2; j++) { - if (j == 0) { - num_queues = capa.plain.max_num; - } else { - num_queues = capa.sched.max_num; - qparams.type = ODP_QUEUE_TYPE_SCHED; - } + num_queues = capa.plain.max_num;
- if (num_queues > MAX_QUEUES) - num_queues = MAX_QUEUES; + if (num_queues > MAX_QUEUES) + num_queues = MAX_QUEUES;
- for (i = 0; i < num_queues; i++) { - generate_name(name, i); - queue[i] = odp_queue_create(name, &qparams); + for (i = 0; i < num_queues; i++) { + generate_name(name, i); + queue[i] = odp_queue_create(name, &qparams);
- if (queue[i] == ODP_QUEUE_INVALID) { - CU_FAIL("Queue create failed"); - num_queues = i; - break; - } - - CU_ASSERT(odp_queue_lookup(name) != ODP_QUEUE_INVALID); + if (queue[i] == ODP_QUEUE_INVALID) { + CU_FAIL("Queue create failed"); + num_queues = i; + break; }
- for (i = 0; i < num_queues; i++) - CU_ASSERT(odp_queue_destroy(queue[i]) == 0); + CU_ASSERT(odp_queue_lookup(name) != ODP_QUEUE_INVALID); } + + for (i = 0; i < num_queues; i++) + CU_ASSERT(odp_queue_destroy(queue[i]) == 0); }
static void queue_test_mode(void) diff --git a/test/validation/api/scheduler/scheduler.c b/test/validation/api/scheduler/scheduler.c index 35c38751..4fdfc243 100644 --- a/test/validation/api/scheduler/scheduler.c +++ b/test/validation/api/scheduler/scheduler.c @@ -20,6 +20,7 @@ #define NUM_BUFS_PAUSE 1000 #define NUM_BUFS_BEFORE_PAUSE 10 #define NUM_GROUPS 2 +#define MAX_QUEUES (64 * 1024)
#define TEST_QUEUE_SIZE_NUM_EV 50
@@ -144,12 +145,16 @@ static void release_context(odp_schedule_sync_t sync) static void scheduler_test_capa(void) { odp_schedule_capability_t capa; + odp_queue_capability_t queue_capa;
memset(&capa, 0, sizeof(odp_schedule_capability_t)); CU_ASSERT_FATAL(odp_schedule_capability(&capa) == 0); + CU_ASSERT_FATAL(odp_queue_capability(&queue_capa) == 0);
CU_ASSERT(capa.max_groups != 0); CU_ASSERT(capa.max_prios != 0); + CU_ASSERT(capa.max_queues != 0); + CU_ASSERT(queue_capa.max_queues >= capa.max_queues); }
static void scheduler_test_wait_time(void) @@ -413,6 +418,7 @@ static void scheduler_test_wait(void) static void scheduler_test_queue_size(void) { odp_queue_capability_t queue_capa; + odp_scheduler_config_t default_config; odp_pool_t pool; odp_pool_param_t pool_param; odp_queue_param_t queue_param; @@ -426,10 +432,11 @@ static void scheduler_test_queue_size(void) ODP_SCHED_SYNC_ORDERED};
CU_ASSERT_FATAL(odp_queue_capability(&queue_capa) == 0); + odp_scheduler_config_init(&default_config); queue_size = TEST_QUEUE_SIZE_NUM_EV; - if (queue_capa.sched.max_size && - queue_size > queue_capa.sched.max_size) - queue_size = queue_capa.sched.max_size; + if (default_config.queue_size && + queue_size > default_config.queue_size) + queue_size = default_config.queue_size;
odp_pool_param_init(&pool_param); pool_param.buf.size = 100; @@ -1684,9 +1691,10 @@ static int create_queues(test_globals_t *globals) }
globals->max_sched_queue_size = BUFS_PER_QUEUE_EXCL; - if (capa.sched.max_size && capa.sched.max_size < BUFS_PER_QUEUE_EXCL) { - printf("Max sched queue size %u\n", capa.sched.max_size); - globals->max_sched_queue_size = capa.sched.max_size; + if (sched_capa.max_queue_size && sched_capa.max_queue_size < + BUFS_PER_QUEUE_EXCL) { + printf("Max sched queue size %u\n", sched_capa.max_queue_size); + globals->max_sched_queue_size = sched_capa.max_queue_size; }
prios = odp_schedule_num_prio(); @@ -1696,7 +1704,7 @@ static int create_queues(test_globals_t *globals) queues_per_prio = QUEUES_PER_PRIO; num_sched = (prios * queues_per_prio * sched_types) + CHAOS_NUM_QUEUES; num_plain = (prios * queues_per_prio); - while ((num_sched > capa.sched.max_num || + while ((num_sched > sched_capa.max_queues || num_plain > capa.plain.max_num || num_sched + num_plain > capa.max_queues) && queues_per_prio) { queues_per_prio--;
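After this change the scheduled-queue limits are read from odp_schedule_capability_t; below is a short sketch of the query, with the print formatting mirroring the odp_sysinfo example and the function name illustrative.

#include <inttypes.h>
#include <stdio.h>

#include <odp_api.h>

static int print_sched_limits(void)
{
        odp_schedule_capability_t capa;

        if (odp_schedule_capability(&capa))
                return -1;

        printf("max sched queues:     %" PRIu32 "\n", capa.max_queues);
        printf("max sched queue size: %" PRIu32 "\n", capa.max_queue_size);

        return 0;
}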
commit 9bc0a0598323f5f655eeb65544ecdc74ab8150c2
Author: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Date:   Wed Oct 31 13:47:52 2018 +0300
linux-gen: queue, schedule: move scheduled queue capabilities to sched
Move scheduled queue capabilities to the odp_schedule_capability_t structure, as they logically belong to the ODP scheduler module rather than the queue module.
Signed-off-by: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Signed-off-by: Balasubramanian Manoharan <bala.manoharan@linaro.org>
Reviewed-by: Bill Fischofer <bill.fischofer@linaro.org>
Reviewed-by: Petri Savolainen <petri.savolainen@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/platform/linux-generic/odp_queue_basic.c b/platform/linux-generic/odp_queue_basic.c index 1d66ccc7..37a2fad1 100644 --- a/platform/linux-generic/odp_queue_basic.c +++ b/platform/linux-generic/odp_queue_basic.c @@ -57,10 +57,10 @@ static int queue_capa(odp_queue_capability_t *capa, int sched ODP_UNUSED) capa->plain.max_size = queue_glb->config.max_queue_size; capa->plain.lockfree.max_num = queue_glb->queue_lf_num; capa->plain.lockfree.max_size = queue_glb->queue_lf_size; +#if ODP_DEPRECATED_API capa->sched.max_num = capa->max_queues; capa->sched.max_size = queue_glb->config.max_queue_size;
-#if ODP_DEPRECATED_API if (sched) { capa->max_ordered_locks = sched_fn->max_ordered_locks(); capa->max_sched_groups = sched_fn->num_grps(); diff --git a/platform/linux-generic/odp_queue_scalable.c b/platform/linux-generic/odp_queue_scalable.c index 4d5598a8..88abe8c7 100644 --- a/platform/linux-generic/odp_queue_scalable.c +++ b/platform/linux-generic/odp_queue_scalable.c @@ -317,11 +317,11 @@ static int queue_capability(odp_queue_capability_t *capa) capa->max_ordered_locks = sched_fn->max_ordered_locks(); capa->max_sched_groups = sched_fn->num_grps(); capa->sched_prios = odp_schedule_num_prio(); + capa->sched.max_num = ODP_CONFIG_QUEUES - NUM_INTERNAL_QUEUES; + capa->sched.max_size = 0; #endif capa->plain.max_num = ODP_CONFIG_QUEUES - NUM_INTERNAL_QUEUES; capa->plain.max_size = 0; - capa->sched.max_num = ODP_CONFIG_QUEUES - NUM_INTERNAL_QUEUES; - capa->sched.max_size = 0;
return 0; } diff --git a/platform/linux-generic/odp_schedule_basic.c b/platform/linux-generic/odp_schedule_basic.c index 85b4142c..b93e5c41 100644 --- a/platform/linux-generic/odp_schedule_basic.c +++ b/platform/linux-generic/odp_schedule_basic.c @@ -1559,6 +1559,8 @@ static int schedule_capability(odp_schedule_capability_t *capa) capa->max_ordered_locks = schedule_max_ordered_locks(); capa->max_groups = schedule_num_grps(); capa->max_prios = schedule_num_prio(); + capa->max_queues = ODP_CONFIG_QUEUES - NUM_INTERNAL_QUEUES; + capa->max_queue_size = queue_glb->config.max_queue_size;
return 0; } diff --git a/platform/linux-generic/odp_schedule_scalable.c b/platform/linux-generic/odp_schedule_scalable.c index 2c681647..091e5ff9 100644 --- a/platform/linux-generic/odp_schedule_scalable.c +++ b/platform/linux-generic/odp_schedule_scalable.c @@ -2114,6 +2114,8 @@ static int schedule_capability(odp_schedule_capability_t *capa) capa->max_ordered_locks = schedule_max_ordered_locks(); capa->max_groups = num_grps(); capa->max_prios = schedule_num_prio(); + capa->max_queues = ODP_CONFIG_QUEUES - NUM_INTERNAL_QUEUES; + capa->max_queue_size = 0;
return 0; } diff --git a/platform/linux-generic/odp_schedule_sp.c b/platform/linux-generic/odp_schedule_sp.c index 9c477d71..6cc8f376 100644 --- a/platform/linux-generic/odp_schedule_sp.c +++ b/platform/linux-generic/odp_schedule_sp.c @@ -932,6 +932,8 @@ static int schedule_capability(odp_schedule_capability_t *capa) capa->max_ordered_locks = max_ordered_locks(); capa->max_groups = num_grps(); capa->max_prios = schedule_num_prio(); + capa->max_queues = ODP_CONFIG_QUEUES - NUM_INTERNAL_QUEUES; + capa->max_queue_size = queue_glb->config.max_queue_size;
return 0; }
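Pieced together from the hunks above and the schedule_capability() definition introduced elsewhere in this batch, the basic scheduler's capability function now reads roughly as follows. This is a reconstruction for readability, not additional code.

static int schedule_capability(odp_schedule_capability_t *capa)
{
	memset(capa, 0, sizeof(odp_schedule_capability_t));

	capa->max_ordered_locks = schedule_max_ordered_locks();
	capa->max_groups = schedule_num_grps();
	capa->max_prios = schedule_num_prio();
	capa->max_queues = ODP_CONFIG_QUEUES - NUM_INTERNAL_QUEUES;
	capa->max_queue_size = queue_glb->config.max_queue_size;

	return 0;
}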
commit 9f8d08163075eab9408de99d7da2165753f802e9 Author: Dmitry Eremin-Solenikov dmitry.ereminsolenikov@linaro.org Date: Wed Oct 24 17:50:54 2018 +0300
api: queue, schedule: move scheduled queue capabilities to sched
Move the scheduled queue capabilities to the odp_schedule_capability_t structure, as they logically belong to the ODP scheduler module rather than to the queue module.
Signed-off-by: Dmitry Eremin-Solenikov dmitry.ereminsolenikov@linaro.org Signed-off-by: Balasubramanian Manoharan bala.manoharan@linaro.org Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/queue_types.h b/include/odp/api/spec/queue_types.h index 9c7c7de9..c8f31046 100644 --- a/include/odp/api/spec/queue_types.h +++ b/include/odp/api/spec/queue_types.h @@ -139,7 +139,8 @@ typedef struct odp_queue_capability_t { * instead */ unsigned int ODP_DEPRECATE(max_sched_groups);
- /** @deprecated Use prios field of odp_schedule_capability_t instead */ + /** @deprecated Use max_prios field of odp_schedule_capability_t + * instead */ unsigned int ODP_DEPRECATE(sched_prios);
/** Plain queue capabilities */ @@ -182,7 +183,8 @@ typedef struct odp_queue_capability_t {
} plain;
- /** Scheduled queue capabilities */ + /** @deprecated Use queue capabilities in odp_schedule_capability_t + * instead */ struct { /** Maximum number of scheduled (ODP_BLOCKING) queues of the * default size. */ @@ -220,7 +222,7 @@ typedef struct odp_queue_capability_t {
} waitfree;
- } sched; + } ODP_DEPRECATE(sched);
} odp_queue_capability_t;
diff --git a/include/odp/api/spec/schedule_types.h b/include/odp/api/spec/schedule_types.h index f55e53f3..e7cc0479 100644 --- a/include/odp/api/spec/schedule_types.h +++ b/include/odp/api/spec/schedule_types.h @@ -14,6 +14,8 @@ #define ODP_API_SPEC_SCHEDULE_TYPES_H_ #include <odp/visibility_begin.h>
+#include <odp/api/support.h> + #ifdef __cplusplus extern "C" { #endif @@ -178,6 +180,24 @@ typedef struct odp_schedule_capability_t { /** Number of scheduling priorities */ uint32_t max_prios;
+ /** Maximum number of scheduled (ODP_BLOCKING) queues of the default + * size. */ + uint32_t max_queues; + + /** Maximum number of events a scheduled (ODP_BLOCKING) queue can store + * simultaneously. The value of zero means that scheduled queues do not + * have a size limit, but a single queue can store all available + * events. */ + uint32_t max_queue_size; + + /** Lock-free (ODP_NONBLOCKING_LF) queues support. + * The specification is the same as for the blocking implementation. */ + odp_support_t lockfree_queues; + + /** Wait-free (ODP_NONBLOCKING_WF) queues support. + * The specification is the same as for the blocking implementation. */ + odp_support_t waitfree_queues; + } odp_schedule_capability_t;
/**
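With max_queues, max_queue_size and the non-blocking support flags now living in the scheduler capability, an application can size its scheduled queues from this one structure. A minimal sketch, where TARGET_SIZE is a hypothetical application requirement and the support flags are compared against the ODP_SUPPORT_YES / ODP_SUPPORT_PREFERRED enumerators from <odp/api/support.h>:

#include <odp_api.h>

#define TARGET_SIZE 4096 /* hypothetical application requirement */

static int sched_queue_limits(uint32_t *queue_size, int *use_lockfree)
{
	odp_schedule_capability_t capa;

	if (odp_schedule_capability(&capa) || capa.max_queues == 0)
		return -1;

	/* Zero max_queue_size means no per-queue size limit */
	*queue_size = TARGET_SIZE;
	if (capa.max_queue_size && capa.max_queue_size < TARGET_SIZE)
		*queue_size = capa.max_queue_size;

	/* Lock-free scheduled queues are optional */
	*use_lockfree = (capa.lockfree_queues == ODP_SUPPORT_YES ||
			 capa.lockfree_queues == ODP_SUPPORT_PREFERRED);

	return 0;
}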
commit 602df05c79ea8126e679513ca9523222c7946a19 Author: Dmitry Eremin-Solenikov dmitry.ereminsolenikov@linaro.org Date: Wed Oct 24 17:49:29 2018 +0300
example, tests: move scheduler capabilities to scheduler
Add an odp_schedule_capability() call to query scheduler capabilities. Move the basic scheduler capabilities to the new odp_schedule_capability_t structure.
Signed-off-by: Dmitry Eremin-Solenikov dmitry.ereminsolenikov@linaro.org Signed-off-by: Balasubramanian Manoharan bala.manoharan@linaro.org Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/example/sysinfo/odp_sysinfo.c b/example/sysinfo/odp_sysinfo.c index ff79d893..cd0c6bfd 100644 --- a/example/sysinfo/odp_sysinfo.c +++ b/example/sysinfo/odp_sysinfo.c @@ -238,6 +238,7 @@ int main(void) odp_queue_capability_t queue_capa; odp_timer_capability_t timer_capa; odp_crypto_capability_t crypto_capa; + odp_schedule_capability_t schedule_capa; uint64_t huge_page[MAX_HUGE_PAGES]; char ava_mask_str[ODP_CPUMASK_STR_SIZE]; char work_mask_str[ODP_CPUMASK_STR_SIZE]; @@ -293,6 +294,11 @@ int main(void) return -1; }
+ if (odp_schedule_capability(&schedule_capa)) { + printf("schedule capability failed\n"); + return -1; + } + if (odp_timer_capability(ODP_CLOCK_CPU, &timer_capa)) { printf("timer capability failed\n"); return -1; @@ -393,9 +399,9 @@ int main(void) printf("\n"); printf(" SCHEDULER\n"); printf(" max ordered locks: %" PRIu32 "\n", - queue_capa.max_ordered_locks); - printf(" max groups: %u\n", queue_capa.max_sched_groups); - printf(" priorities: %u\n", queue_capa.sched_prios); + schedule_capa.max_ordered_locks); + printf(" max groups: %u\n", schedule_capa.max_groups); + printf(" priorities: %u\n", schedule_capa.prios); printf(" sched.max_num: %" PRIu32 "\n", queue_capa.sched.max_num); printf(" sched.max_size: %" PRIu32 "\n", diff --git a/test/performance/odp_pktio_ordered.c b/test/performance/odp_pktio_ordered.c index 2e0ff578..1b4b756a 100644 --- a/test/performance/odp_pktio_ordered.c +++ b/test/performance/odp_pktio_ordered.c @@ -1062,6 +1062,7 @@ int main(int argc, char *argv[]) odp_pool_param_t params; odp_shm_t shm; odp_queue_capability_t queue_capa; + odp_schedule_capability_t schedule_capa; odp_pool_capability_t pool_capa; odph_ethaddr_t new_addr; odph_helper_options_t helper_options; @@ -1103,6 +1104,11 @@ int main(int argc, char *argv[]) exit(EXIT_FAILURE); }
+ if (odp_schedule_capability(&schedule_capa)) { + printf("Error: Schedule capa failed.\n"); + return -1; + } + if (odp_pool_capability(&pool_capa)) { LOG_ERR("Error: Pool capa failed\n"); exit(EXIT_FAILURE); @@ -1131,7 +1137,7 @@ int main(int argc, char *argv[])
if (gbl_args->appl.in_mode == SCHED_ORDERED) { /* At least one ordered lock required */ - if (queue_capa.max_ordered_locks < 1) { + if (schedule_capa.max_ordered_locks < 1) { LOG_ERR("Error: Ordered locks not available.\n"); exit(EXIT_FAILURE); } diff --git a/test/validation/api/classification/odp_classification_tests.c b/test/validation/api/classification/odp_classification_tests.c index 41201d4a..4f722140 100644 --- a/test/validation/api/classification/odp_classification_tests.c +++ b/test/validation/api/classification/odp_classification_tests.c @@ -152,16 +152,16 @@ void configure_cls_pmr_chain(void) uint32_t addr; uint32_t mask; odp_pmr_param_t pmr_param; - odp_queue_capability_t queue_capa; + odp_schedule_capability_t schedule_capa;
- CU_ASSERT_FATAL(odp_queue_capability(&queue_capa) == 0); + CU_ASSERT_FATAL(odp_schedule_capability(&schedule_capa) == 0);
odp_queue_param_init(&qparam); qparam.type = ODP_QUEUE_TYPE_SCHED; qparam.sched.prio = odp_schedule_default_prio(); qparam.sched.sync = ODP_SCHED_SYNC_PARALLEL; qparam.sched.group = ODP_SCHED_GROUP_ALL; - qparam.sched.lock_count = queue_capa.max_ordered_locks; + qparam.sched.lock_count = schedule_capa.max_ordered_locks; sprintf(queuename, "%s", "SrcQueue");
queue_list[CLS_PMR_CHAIN_SRC] = odp_queue_create(queuename, &qparam); diff --git a/test/validation/api/queue/queue.c b/test/validation/api/queue/queue.c index cf081a99..99acc4bf 100644 --- a/test/validation/api/queue/queue.c +++ b/test/validation/api/queue/queue.c @@ -133,8 +133,6 @@ static void queue_test_capa(void) CU_ASSERT(odp_queue_capability(&capa) == 0);
CU_ASSERT(capa.max_queues != 0); - CU_ASSERT(capa.max_sched_groups != 0); - CU_ASSERT(capa.sched_prios != 0); CU_ASSERT(capa.plain.max_num != 0); CU_ASSERT(capa.sched.max_num != 0);
@@ -715,6 +713,7 @@ static void queue_test_info(void) odp_queue_info_t info; odp_queue_param_t param; odp_queue_capability_t capability; + odp_schedule_capability_t sched_capa; char q_plain_ctx[] = "test_q_plain context data"; char q_order_ctx[] = "test_q_order context data"; uint32_t lock_count; @@ -729,13 +728,14 @@ static void queue_test_info(void)
memset(&capability, 0, sizeof(odp_queue_capability_t)); CU_ASSERT(odp_queue_capability(&capability) == 0); + CU_ASSERT(odp_schedule_capability(&sched_capa) == 0); /* Create a scheduled ordered queue with explicitly set params */ odp_queue_param_init(¶m); param.type = ODP_QUEUE_TYPE_SCHED; param.sched.prio = odp_schedule_default_prio(); param.sched.sync = ODP_SCHED_SYNC_ORDERED; param.sched.group = ODP_SCHED_GROUP_ALL; - param.sched.lock_count = capability.max_ordered_locks; + param.sched.lock_count = sched_capa.max_ordered_locks; if (param.sched.lock_count == 0) printf("\n Ordered locks NOT supported\n"); param.context = q_order_ctx; diff --git a/test/validation/api/scheduler/scheduler.c b/test/validation/api/scheduler/scheduler.c index 2f66f526..35c38751 100644 --- a/test/validation/api/scheduler/scheduler.c +++ b/test/validation/api/scheduler/scheduler.c @@ -141,6 +141,17 @@ static void release_context(odp_schedule_sync_t sync) odp_schedule_release_ordered(); }
+static void scheduler_test_capa(void) +{ + odp_schedule_capability_t capa; + + memset(&capa, 0, sizeof(odp_schedule_capability_t)); + CU_ASSERT_FATAL(odp_schedule_capability(&capa) == 0); + + CU_ASSERT(capa.max_groups != 0); + CU_ASSERT(capa.max_prios != 0); +} + static void scheduler_test_wait_time(void) { int i; @@ -1643,6 +1654,7 @@ static int create_queues(test_globals_t *globals) { int i, j, prios, rc; odp_queue_capability_t capa; + odp_schedule_capability_t sched_capa; odp_pool_t queue_ctx_pool; odp_pool_param_t params; odp_buffer_t queue_ctx_buf; @@ -1659,11 +1671,16 @@ static int create_queues(test_globals_t *globals) return -1; }
+ if (odp_schedule_capability(&sched_capa) < 0) { + printf("Queue capability query failed\n"); + return -1; + } + /* Limit to test maximum */ - if (capa.max_ordered_locks > MAX_ORDERED_LOCKS) { - capa.max_ordered_locks = MAX_ORDERED_LOCKS; + if (sched_capa.max_ordered_locks > MAX_ORDERED_LOCKS) { + sched_capa.max_ordered_locks = MAX_ORDERED_LOCKS; printf("Testing only %u ordered locks\n", - capa.max_ordered_locks); + sched_capa.max_ordered_locks); }
globals->max_sched_queue_size = BUFS_PER_QUEUE_EXCL; @@ -1764,7 +1781,7 @@ static int create_queues(test_globals_t *globals)
snprintf(name, sizeof(name), "sched_%d_%d_o", i, j); p.sched.sync = ODP_SCHED_SYNC_ORDERED; - p.sched.lock_count = capa.max_ordered_locks; + p.sched.lock_count = sched_capa.max_ordered_locks; p.size = 0; q = odp_queue_create(name, &p);
@@ -1773,12 +1790,12 @@ static int create_queues(test_globals_t *globals) return -1; } if (odp_queue_lock_count(q) != - capa.max_ordered_locks) { + sched_capa.max_ordered_locks) { printf("Queue %" PRIu64 " created with " "%d locks instead of expected %d\n", odp_queue_to_u64(q), odp_queue_lock_count(q), - capa.max_ordered_locks); + sched_capa.max_ordered_locks); return -1; }
@@ -1795,7 +1812,7 @@ static int create_queues(test_globals_t *globals) qctx->sequence = 0;
for (ndx = 0; - ndx < capa.max_ordered_locks; + ndx < sched_capa.max_ordered_locks; ndx++) { qctx->lock_sequence[ndx] = 0; } @@ -1949,6 +1966,7 @@ static int scheduler_suite_term(void) }
odp_testinfo_t scheduler_suite[] = { + ODP_TEST_INFO(scheduler_test_capa), ODP_TEST_INFO(scheduler_test_wait_time), ODP_TEST_INFO(scheduler_test_num_prio), ODP_TEST_INFO(scheduler_test_queue_destroy),
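The same migration applies wherever an application sets sched.lock_count. A minimal sketch of ordered queue creation against the new capability, with the queue name chosen purely for illustration:

#include <odp_api.h>

static odp_queue_t create_ordered_queue(void)
{
	odp_schedule_capability_t sched_capa;
	odp_queue_param_t qparam;

	if (odp_schedule_capability(&sched_capa))
		return ODP_QUEUE_INVALID;

	odp_queue_param_init(&qparam);
	qparam.type = ODP_QUEUE_TYPE_SCHED;
	qparam.sched.prio = odp_schedule_default_prio();
	qparam.sched.sync = ODP_SCHED_SYNC_ORDERED;
	qparam.sched.group = ODP_SCHED_GROUP_ALL;
	/* Previously read from odp_queue_capability_t.max_ordered_locks */
	qparam.sched.lock_count = sched_capa.max_ordered_locks;

	return odp_queue_create("ordered_example", &qparam);
}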
commit 1d8b95b6d776a7f8681ef400a062a67d4d37de56 Author: Dmitry Eremin-Solenikov dmitry.ereminsolenikov@linaro.org Date: Fri Oct 26 03:00:43 2018 +0300
linux-gen: queue, schedule: move scheduler capabilities to scheduler
Signed-off-by: Dmitry Eremin-Solenikov dmitry.ereminsolenikov@linaro.org Signed-off-by: Balasubramanian Manoharan bala.manoharan@linaro.org Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/include/odp_schedule_if.h b/platform/linux-generic/include/odp_schedule_if.h index 5f7f2c4d..88961269 100644 --- a/platform/linux-generic/include/odp_schedule_if.h +++ b/platform/linux-generic/include/odp_schedule_if.h @@ -90,6 +90,7 @@ void sched_cb_pktio_stop_finalize(int pktio_index); /* API functions */ typedef struct { uint64_t (*schedule_wait_time)(uint64_t ns); + int (*schedule_capability)(odp_schedule_capability_t *capa); odp_event_t (*schedule)(odp_queue_t *from, uint64_t wait); int (*schedule_multi)(odp_queue_t *from, uint64_t wait, odp_event_t events[], int num); diff --git a/platform/linux-generic/odp_queue_basic.c b/platform/linux-generic/odp_queue_basic.c index b1f9bd0e..1d66ccc7 100644 --- a/platform/linux-generic/odp_queue_basic.c +++ b/platform/linux-generic/odp_queue_basic.c @@ -47,7 +47,7 @@ static int queue_init(queue_entry_t *queue, const char *name, queue_global_t *queue_glb; extern _odp_queue_inline_offset_t _odp_queue_inline_offset;
-static int queue_capa(odp_queue_capability_t *capa, int sched) +static int queue_capa(odp_queue_capability_t *capa, int sched ODP_UNUSED) { memset(capa, 0, sizeof(odp_queue_capability_t));
@@ -60,11 +60,13 @@ static int queue_capa(odp_queue_capability_t *capa, int sched) capa->sched.max_num = capa->max_queues; capa->sched.max_size = queue_glb->config.max_queue_size;
+#if ODP_DEPRECATED_API if (sched) { capa->max_ordered_locks = sched_fn->max_ordered_locks(); capa->max_sched_groups = sched_fn->num_grps(); capa->sched_prios = odp_schedule_num_prio(); } +#endif
return 0; } diff --git a/platform/linux-generic/odp_queue_scalable.c b/platform/linux-generic/odp_queue_scalable.c index ac85d10a..4d5598a8 100644 --- a/platform/linux-generic/odp_queue_scalable.c +++ b/platform/linux-generic/odp_queue_scalable.c @@ -313,9 +313,11 @@ static int queue_capability(odp_queue_capability_t *capa)
/* Reserve some queues for internal use */ capa->max_queues = ODP_CONFIG_QUEUES - NUM_INTERNAL_QUEUES; +#if ODP_DEPRECATED_API capa->max_ordered_locks = sched_fn->max_ordered_locks(); capa->max_sched_groups = sched_fn->num_grps(); capa->sched_prios = odp_schedule_num_prio(); +#endif capa->plain.max_num = ODP_CONFIG_QUEUES - NUM_INTERNAL_QUEUES; capa->plain.max_size = 0; capa->sched.max_num = ODP_CONFIG_QUEUES - NUM_INTERNAL_QUEUES; diff --git a/platform/linux-generic/odp_schedule_basic.c b/platform/linux-generic/odp_schedule_basic.c index 0b226e4f..85b4142c 100644 --- a/platform/linux-generic/odp_schedule_basic.c +++ b/platform/linux-generic/odp_schedule_basic.c @@ -1550,6 +1550,17 @@ static int schedule_num_grps(void) static void schedule_config(schedule_config_t *config) { *config = *(&sched->config_if); +}; + +static int schedule_capability(odp_schedule_capability_t *capa) +{ + memset(capa, 0, sizeof(odp_schedule_capability_t)); + + capa->max_ordered_locks = schedule_max_ordered_locks(); + capa->max_groups = schedule_num_grps(); + capa->max_prios = schedule_num_prio(); + + return 0; }
/* Fill in scheduler interface */ @@ -1575,6 +1586,7 @@ const schedule_fn_t schedule_basic_fn = { /* Fill in scheduler API calls */ const schedule_api_t schedule_basic_api = { .schedule_wait_time = schedule_wait_time, + .schedule_capability = schedule_capability, .schedule = schedule, .schedule_multi = schedule_multi, .schedule_multi_wait = schedule_multi_wait, diff --git a/platform/linux-generic/odp_schedule_if.c b/platform/linux-generic/odp_schedule_if.c index 4d50a13f..92e0a62f 100644 --- a/platform/linux-generic/odp_schedule_if.c +++ b/platform/linux-generic/odp_schedule_if.c @@ -30,6 +30,11 @@ uint64_t odp_schedule_wait_time(uint64_t ns) return sched_api->schedule_wait_time(ns); }
+int odp_schedule_capability(odp_schedule_capability_t *capa) +{ + return sched_api->schedule_capability(capa); +} + odp_event_t odp_schedule(odp_queue_t *from, uint64_t wait) { return sched_api->schedule(from, wait); diff --git a/platform/linux-generic/odp_schedule_scalable.c b/platform/linux-generic/odp_schedule_scalable.c index 957398ca..2c681647 100644 --- a/platform/linux-generic/odp_schedule_scalable.c +++ b/platform/linux-generic/odp_schedule_scalable.c @@ -2107,6 +2107,17 @@ static uint32_t schedule_max_ordered_locks(void) return CONFIG_QUEUE_MAX_ORD_LOCKS; }
+static int schedule_capability(odp_schedule_capability_t *capa) +{ + memset(capa, 0, sizeof(odp_schedule_capability_t)); + + capa->max_ordered_locks = schedule_max_ordered_locks(); + capa->max_groups = num_grps(); + capa->max_prios = schedule_num_prio(); + + return 0; +} + const schedule_fn_t schedule_scalable_fn = { .pktio_start = pktio_start, .thr_add = thr_add, @@ -2127,6 +2138,7 @@ const schedule_fn_t schedule_scalable_fn = {
const schedule_api_t schedule_scalable_api = { .schedule_wait_time = schedule_wait_time, + .schedule_capability = schedule_capability, .schedule = schedule, .schedule_multi = schedule_multi, .schedule_multi_wait = schedule_multi_wait, diff --git a/platform/linux-generic/odp_schedule_sp.c b/platform/linux-generic/odp_schedule_sp.c index f55e2151..9c477d71 100644 --- a/platform/linux-generic/odp_schedule_sp.c +++ b/platform/linux-generic/odp_schedule_sp.c @@ -925,6 +925,17 @@ static void order_unlock(void) { }
+static int schedule_capability(odp_schedule_capability_t *capa) +{ + memset(capa, 0, sizeof(odp_schedule_capability_t)); + + capa->max_ordered_locks = max_ordered_locks(); + capa->max_groups = num_grps(); + capa->max_prios = schedule_num_prio(); + + return 0; +} + /* Fill in scheduler interface */ const schedule_fn_t schedule_sp_fn = { .pktio_start = pktio_start, @@ -947,6 +958,7 @@ const schedule_fn_t schedule_sp_fn = { /* Fill in scheduler API calls */ const schedule_api_t schedule_sp_api = { .schedule_wait_time = schedule_wait_time, + .schedule_capability = schedule_capability, .schedule = schedule, .schedule_multi = schedule_multi, .schedule_multi_wait = schedule_multi_wait,
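One practical consequence of the #if ODP_DEPRECATED_API guards above: once the deprecated interface is compiled out, odp_queue_capability() no longer reports ordered-lock, group or priority limits, so portable code should read them from the scheduler. A minimal sketch:

#include <odp_api.h>

/* Works whether or not ODP_DEPRECATED_API is enabled, since the scheduler
 * capability is always filled in. */
static uint32_t query_max_ordered_locks(void)
{
	odp_schedule_capability_t capa;

	if (odp_schedule_capability(&capa))
		return 0;

	return capa.max_ordered_locks;
}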
commit 09d8048fc8bff31797f9359db9f43da75fd15c3f Author: Dmitry Eremin-Solenikov dmitry.ereminsolenikov@linaro.org Date: Fri Oct 26 02:50:26 2018 +0300
linux-gen: move NUM_INTERNAL_QUEUES to config
This is really a configuration value that lets one select the number of queues reserved for platform-internal use, so move it to the config header.
Signed-off-by: Dmitry Eremin-Solenikov dmitry.ereminsolenikov@linaro.org Signed-off-by: Balasubramanian Manoharan bala.manoharan@linaro.org Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/include/odp_config_internal.h b/platform/linux-generic/include/odp_config_internal.h index a06e0c97..810576b9 100644 --- a/platform/linux-generic/include/odp_config_internal.h +++ b/platform/linux-generic/include/odp_config_internal.h @@ -26,6 +26,11 @@ extern "C" { */ #define ODP_CONFIG_QUEUES 1024
+/* + * Queues reserved for ODP internal use + */ +#define NUM_INTERNAL_QUEUES 64 + /* * Maximum number of ordered locks per queue */ diff --git a/platform/linux-generic/odp_queue_basic.c b/platform/linux-generic/odp_queue_basic.c index f02a9a32..b1f9bd0e 100644 --- a/platform/linux-generic/odp_queue_basic.c +++ b/platform/linux-generic/odp_queue_basic.c @@ -30,8 +30,6 @@ #include <odp/api/plat/queue_inline_types.h> #include <odp_global_data.h>
-#define NUM_INTERNAL_QUEUES 64 - #include <odp/api/plat/ticketlock_inlines.h> #define LOCK(queue_ptr) odp_ticketlock_lock(&((queue_ptr)->s.lock)) #define UNLOCK(queue_ptr) odp_ticketlock_unlock(&((queue_ptr)->s.lock)) diff --git a/platform/linux-generic/odp_queue_scalable.c b/platform/linux-generic/odp_queue_scalable.c index 5bff1354..ac85d10a 100644 --- a/platform/linux-generic/odp_queue_scalable.c +++ b/platform/linux-generic/odp_queue_scalable.c @@ -33,8 +33,6 @@ #include <string.h> #include <inttypes.h>
-#define NUM_INTERNAL_QUEUES 64 - #define MIN(a, b) \ ({ \ __typeof__(a) tmp_a = (a); \
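For context, the reserved-queue arithmetic that motivates this move now has a single source; a minimal sketch using the values shown in the hunks above:

#include <stdint.h>

/* odp_config_internal.h (values as shown above) */
#define ODP_CONFIG_QUEUES    1024 /* total queues supported by the platform */
#define NUM_INTERNAL_QUEUES  64   /* reserved for ODP internal use */

/* Queue and scheduler capability code can now report the same figure */
static uint32_t app_visible_queues(void)
{
	return ODP_CONFIG_QUEUES - NUM_INTERNAL_QUEUES; /* 960 */
}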
commit c7d5d4005f333f3f125e0582aac7cf2423112ac4 Author: Dmitry Eremin-Solenikov dmitry.ereminsolenikov@linaro.org Date: Wed Oct 24 17:49:29 2018 +0300
api: queue, schedule: move scheduler capabilities to scheduler
Add an odp_schedule_capability() call to query scheduler capabilities. Move the basic scheduler capabilities to the new odp_schedule_capability_t structure.
Signed-off-by: Dmitry Eremin-Solenikov dmitry.ereminsolenikov@linaro.org Signed-off-by: Balasubramanian Manoharan bala.manoharan@linaro.org Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/queue_types.h b/include/odp/api/spec/queue_types.h index be7e79a8..9c7c7de9 100644 --- a/include/odp/api/spec/queue_types.h +++ b/include/odp/api/spec/queue_types.h @@ -19,6 +19,7 @@ extern "C" { #endif
#include <odp/api/schedule_types.h> +#include <odp/api/deprecated.h>
/** @addtogroup odp_queue * @{ @@ -130,14 +131,16 @@ typedef struct odp_queue_capability_t { * types are used simultaneously. */ uint32_t max_queues;
- /** Maximum number of ordered locks per queue */ - uint32_t max_ordered_locks; + /** @deprecated Use max_ordered_locks field of + * odp_schedule_capability_t instead */ + uint32_t ODP_DEPRECATE(max_ordered_locks);
- /** Maximum number of scheduling groups */ - unsigned max_sched_groups; + /** @deprecated Use max_groups field of odp_schedule_capability_t + * instead */ + unsigned int ODP_DEPRECATE(max_sched_groups);
- /** Number of scheduling priorities */ - unsigned sched_prios; + /** @deprecated Use prios field of odp_schedule_capability_t instead */ + unsigned int ODP_DEPRECATE(sched_prios);
/** Plain queue capabilities */ struct { diff --git a/include/odp/api/spec/schedule.h b/include/odp/api/spec/schedule.h index d9b868e3..6538c509 100644 --- a/include/odp/api/spec/schedule.h +++ b/include/odp/api/spec/schedule.h @@ -257,6 +257,18 @@ int odp_schedule_default_prio(void); */ int odp_schedule_num_prio(void);
+/** + * Query scheduler capabilities + * + * Outputs schedule capabilities on success. + * + * @param[out] capa Pointer to capability structure for output + * + * @retval 0 on success + * @retval <0 on failure + */ +int odp_schedule_capability(odp_schedule_capability_t *capa); + /** * Schedule group create * diff --git a/include/odp/api/spec/schedule_types.h b/include/odp/api/spec/schedule_types.h index 76afc6dd..f55e53f3 100644 --- a/include/odp/api/spec/schedule_types.h +++ b/include/odp/api/spec/schedule_types.h @@ -165,6 +165,21 @@ typedef struct odp_schedule_param_t { uint32_t lock_count; } odp_schedule_param_t;
+/** + * Scheduler capabilities + */ +typedef struct odp_schedule_capability_t { + /** Maximum number of ordered locks per queue */ + uint32_t max_ordered_locks; + + /** Maximum number of scheduling groups */ + uint32_t max_groups; + + /** Number of scheduling priorities */ + uint32_t max_prios; + +} odp_schedule_capability_t; + /** * @} */
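The new call follows the same pattern as the other ODP capability queries; a minimal usage sketch printing the three fields defined above:

#include <odp_api.h>
#include <stdio.h>
#include <inttypes.h>

static void print_sched_capa(void)
{
	odp_schedule_capability_t capa;

	if (odp_schedule_capability(&capa)) {
		printf("schedule capability query failed\n");
		return;
	}

	printf("max ordered locks: %" PRIu32 "\n", capa.max_ordered_locks);
	printf("max groups:        %" PRIu32 "\n", capa.max_groups);
	printf("priorities:        %" PRIu32 "\n", capa.max_prios);
}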
commit 9b945554c0a522030de185fe5e2e0724427c8223 Author: Balasubramanian Manoharan bala.manoharan@linaro.org Date: Tue Apr 24 20:09:37 2018 +0530
api: comp: compression specification
ODP Compression specification
Signed-off-by: Balasubramanian Manoharan bala.manoharan@linaro.org Signed-off-by: Shally Verma shally.verma@cavium.com Signed-off-by: Mahipal Challa mahipal.challa@cavium.com Reviewed-by: Petri Savolainen petri.savolainen@linaro.org Reviewed-by: Bogdan Pricope bogdan.pricope@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/abi-default/comp.h b/include/odp/api/abi-default/comp.h new file mode 100644 index 00000000..8a1145af --- /dev/null +++ b/include/odp/api/abi-default/comp.h @@ -0,0 +1,35 @@ +/* Copyright (c) 2018, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#ifndef ODP_ABI_COMP_H_ +#define ODP_ABI_COMP_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> + +/** @internal Dummy type for strong typing */ +typedef struct { char dummy; /**< @internal Dummy */ } _odp_abi_comp_session_t; + +/** @ingroup odp_compression + * @{ + */ + +typedef _odp_abi_comp_session_t *odp_comp_session_t; + +#define ODP_COMP_SESSION_INVALID ((odp_comp_session_t)0) + +/** + * @} + */ + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/include/odp/api/spec/comp.h b/include/odp/api/spec/comp.h new file mode 100644 index 00000000..a5eb5a23 --- /dev/null +++ b/include/odp/api/spec/comp.h @@ -0,0 +1,613 @@ +/* Copyright (c) 2018, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +/** + * @file + * + * ODP Compression + */ + +#ifndef ODP_API_COMP_H_ +#define ODP_API_COMP_H_ + +#include <odp/visibility_begin.h> +#include <odp/api/support.h> +#include <odp/api/packet.h> + +#ifdef __cplusplus +extern "C" { +#endif + +/** @defgroup odp_compression ODP COMP + * Operations for Compression and Decompression API. + * Hash calculation may be combined with de-/compression operations + * + * @{ + */ + +/** + * @def ODP_COMP_SESSION_INVALID + * Invalid session handle + */ + +/** + * @typedef odp_comp_session_t + * Compression/Decompression session handle + */ + +/** + * Compression operation mode + */ +typedef enum { + /** Synchronous Compression operation + * + * Application uses synchronous operation, + * which outputs all results on function return. + * */ + ODP_COMP_OP_MODE_SYNC, + + /** Asynchronous Compression operation + * + * Application uses asynchronous operation, + * which return results via events. + * */ + ODP_COMP_OP_MODE_ASYNC +} odp_comp_op_mode_t; + +/** + * Compression operation type. + */ +typedef enum { + /** Operation type - Compress */ + ODP_COMP_OP_COMPRESS, + + /** Operation type - Decompress */ + ODP_COMP_OP_DECOMPRESS +} odp_comp_op_t; + +/** + * Compression hash algorithms + */ +typedef enum { + /** No hash algorithm selected. */ + ODP_COMP_HASH_ALG_NONE, + + /** SHA-1 hash algorithm. */ + ODP_COMP_HASH_ALG_SHA1, + + /** SHA-2 hash algorithm 256-bit digest length. */ + ODP_COMP_HASH_ALG_SHA256 +} odp_comp_hash_alg_t; + +/** + * Compression algorithms + * + */ +typedef enum { + /** No algorithm specified. Added for testing purpose. 
*/ + ODP_COMP_ALG_NULL, + + /** DEFLATE - RFC1951 */ + ODP_COMP_ALG_DEFLATE, + + /** ZLIB - RFC1950 */ + ODP_COMP_ALG_ZLIB, + + /** LZS */ + ODP_COMP_ALG_LZS +} odp_comp_alg_t; + +/** + * Compression operation status codes + */ +typedef enum { + /** Operation completed successfully*/ + ODP_COMP_STATUS_SUCCESS, + + /** Operation terminated due to insufficient output buffer */ + ODP_COMP_STATUS_OUT_OF_SPACE_TERM, + + /** Operation failure */ + ODP_COMP_STATUS_FAILURE, +} odp_comp_status_t; + +/** + * Hash algorithms in a bit field structure + */ +typedef union odp_comp_hash_algos_t { + /** hash algorithms */ + struct { + /** ODP_COMP_HASH_ALG_NONE */ + uint32_t none : 1, + + /** ODP_COMP_HASH_ALG_SHA1 */ + uint32_t sha1 : 1; + + /** ODP_COMP_HASH_ALG_SHA256 */ + uint32_t sha256 : 1; + + } bit; + + /** All bits of the bit field structure + * + * This field can be used to set/clear all flags, or bitwise + * operations over the entire structure. + */ + uint32_t all_bits; +} odp_comp_hash_algos_t; + +/** + * Compression algorithms in a bit field structure + */ +typedef union odp_comp_algos_t { + /** Compression algorithms */ + struct { + /** ODP_COMP_ALG_NULL */ + uint32_t null : 1; + + /** ODP_COMP_ALG_DEFLATE */ + uint32_t deflate : 1; + + /** ODP_COMP_ALG_ZLIB */ + uint32_t zlib : 1; + + /** ODP_COMP_ALG_LZS */ + uint32_t lzs : 1; + } bit; + + /** All bits of the bit field structure + * This field can be used to set/clear all flags, or bitwise + * operations over the entire structure. + */ + uint32_t all_bits; +} odp_comp_algos_t; + +/** + * Compression Interface Capabilities + */ +typedef struct odp_comp_capability_t { + /** Maximum number of sessions */ + uint32_t max_sessions; + + /** Supported compression algorithms */ + odp_comp_algos_t comp_algos; + + /** Supported hash algorithms */ + odp_comp_hash_algos_t hash_algos; + + /** Synchronous compression mode support (ODP_COMP_OP_MODE_SYNC) */ + odp_support_t sync; + + /** Aynchronous compression mode support (ODP_COMP_OP_MODE_SSYNC) */ + odp_support_t async; +} odp_comp_capability_t; + +/** + * Hash algorithm capabilities + */ +typedef struct odp_comp_hash_alg_capability_t { + /** Digest length in bytes */ + uint32_t digest_len; +} odp_comp_hash_alg_capability_t; + +/** + * Compression algorithm capabilities + */ +typedef struct odp_comp_alg_capability_t { + + /** Maximum compression level supported by implementation of this + * algorithm. Indicates number of compression levels supported by + * implementation. Valid range from (1 ... max_level) + */ + uint32_t max_level; + + /** Supported hash algorithms */ + odp_comp_hash_algos_t hash_algo; + + /** Compression ratio + * Optimal compression operation ratio for this algorithm. + * This is an estimate of maximum compression operation output for this + * algorithm. It is expressed as a percentage of maximum expected + * output data size with respect to input data size. + * i.e a value of 200% denotes the output data is 2x times the input + * data size. This is an optimal/most case estimate and it is possible + * that the percentage of output data produced might be greater + * than this value. + * + * @see odp__percent_t + */ + odp_percent_t compression_ratio; +} odp_comp_alg_capability_t; + +/** + * Compression Huffman type. 
Used by DEFLATE algorithm + */ +typedef enum odp_comp_huffman_code { + /** Fixed Huffman code */ + ODP_COMP_HUFFMAN_FIXED, + + /** Dynamic Huffman code */ + ODP_COMP_HUFFMAN_DYNAMIC, + + /** Default huffman code selected by implementation */ + ODP_COMP_HUFFMAN_DEFAULT, +} odp_comp_huffman_code_t; + +/** + * Compression DEFLATEe algorithm parameters. + * Also initialized by other deflate based algorithms , ex. ZLIB + */ +typedef struct odp_comp_deflate_param { + /** + * Compression level + * + * Valid range is integer between (0 ... max_level) + * level supported by the implementation. + * + * where, + * 0 - implemention default + * + * 1 - fastest compression i.e. output produced at + * best possible speed at the expense of compression quality + * + * max_level - High quality compression + * + * @see 'max_level' in odp_comp_alg_capability_t + */ + uint32_t comp_level; + + /** huffman code to use */ + odp_comp_huffman_code_t huffman_code; +} odp_comp_deflate_param_t; + +/** + * Compression algorithm specific parameters + */ +typedef union odp_comp_alg_param_t { + /** deflate parameter */ + odp_comp_deflate_param_t deflate; + + /** Struct for defining zlib algorithm parameters */ + struct { + /** deflate algo params */ + odp_comp_deflate_param_t deflate; + } zlib; +} odp_comp_alg_param_t; + + /** + * Compression session creation parameters + */ +typedef struct odp_comp_session_param_t { + /** Compression operation type Compress vs Decompress */ + odp_comp_op_t op; + + /** Compression operation mode + * + * Operation mode Synchronous vs Asynchronous + * + * @see odp_comp_op(), odp_comp_op_enq() + */ + odp_comp_op_mode_t mode; + + /** Compression algorithm + * + * @see odp_comp_capability() + */ + odp_comp_alg_t comp_algo; + + /** Hash algorithm + * + * @see odp_comp_alg_capability() + */ + odp_comp_hash_alg_t hash_algo; + + /** parameters specific to compression */ + odp_comp_alg_param_t alg_param; + + /** Session packet enqueue ordering + * Boolean to indicate if packet enqueue ordering is required per + * session. Valid only for Asynchronous operation mode + * (ODP_COMP_OP_MODE_ASYNC). Packet order is always maintained for + * synchronous operation mode (ODP_COMP_OP_MODE_SYNC) + * + * true: packet session enqueue order maintained + * + * false: packet session enqueue order is not maintained + * + * @note: By disabling packet order requirement, performance oriented + * application can leverage HW offered parallelism to increase operation + * performance. + */ + odp_bool_t packet_order; + + /** Destination queue for compression operations result. + * Results are enqueued as ODP_EVENT_PACKET with subtype + * ODP_EVENT_PACKET_COMP + */ + odp_queue_t compl_queue; +} odp_comp_session_param_t; + +/** + * Compression packet operation result + */ +typedef struct odp_comp_packet_result_t { + /** Operation status code */ + odp_comp_status_t status; + + /** Input packet handle */ + odp_packet_t pkt_in; + + /** Output packet data range + * Specifies offset and length of data resulting from compression + * operation. When hashing is configured output_data_range.len equals + * length of output data + 'digest+len' + */ + odp_packet_data_range_t output_data_range; +} odp_comp_packet_result_t; + +/** + * Compression per packet operation parameters + */ +typedef struct odp_comp_packet_op_param_t { + /** Session handle */ + odp_comp_session_t session; + + /** Input data range to process. 
where, + * + * offset - starting offset + * length - length of data for compression operation + * */ + odp_packet_data_range_t in_data_range; + + /** Output packet data range. + * Indicates where processed packet will be written. where, + * + * offset - starting offset + * length - length of buffer available for output + * + * Output packet data is not modified outside of this provided data + * range. If output data length is not sufficient for compression + * operation ODP_COMP_STATUS_OUT_OF_SPACE_TERM error will occur + */ + odp_packet_data_range_t out_data_range; +} odp_comp_packet_op_param_t; + +/** + * Query compression capabilities + * + * Output compression capabilities on success. + * + * @param[out] capa Pointer to capability structure for output + * + * @retval 0 on success + * @retval <0 on failure + */ +int odp_comp_capability(odp_comp_capability_t *capa); + +/** + * Query supported compression algorithm capabilities + * + * Output algorithm capabilities. + * + * @param comp Compression algorithm + * @param[out] capa Compression algorithm capability + * + * @retval 0 on success + * @retval <0 on failure + */ +int odp_comp_alg_capability(odp_comp_alg_t comp, + odp_comp_alg_capability_t *capa); + +/** + * Query supported hash algorithm capabilities + * + * Outputs all supported configuration options for the algorithm. + * + * @param hash Hash algorithm + * @param capa Hash algorithm capability + * + * @retval 0 on success + * @retval <0 on failure + */ +int odp_comp_hash_alg_capability(odp_comp_hash_alg_t hash, + odp_comp_hash_alg_capability_t *capa); + +/** + * Initialize compression session parameters + * + * Initialize an odp_comp_session_param_t to its default values for + * all fields. + * + * @param param Pointer to odp_comp_session_param_t to be initialized + */ +void odp_comp_session_param_init(odp_comp_session_param_t *param); + +/** + * Compression session creation + * + * Create a comp session according to the session parameters. Use + * odp_comp_session_param_init() to initialize parameters into their + * default values. + * + * @param param Session parameters + * + * @retval Comp session handle + * @retval ODP_COMP_SESSION_INVALID on failure + */ +odp_comp_session_t +odp_comp_session_create(const odp_comp_session_param_t *param); + +/** + * Compression session destroy + * + * Destroy an unused session. Result is undefined if session is being used + * (i.e. asynchronous operation is in progress). + * + * @param session Session handle + * + * @retval 0 on success + * @retval <0 on failure + */ +int odp_comp_session_destroy(odp_comp_session_t session); + +/** + * Synchronous packet compression operation + * + * This operation does packet compression in synchronous mode. A successful operation + * returns the number of successfully processed input packets and updates the + * results in the corresponding output packets. Outputted packets contain + * compression results metadata (odp_comp_packet_result_t), which should be + * checked for operation status. Length of outputted data can be got from + * output_data_range.len. + * + * When hashing is configured along with compression operation the + * result is appended at the end of the output data, output_data_range.len + * equals length of output data + 'digest_len'. Processed data length + * can be computed by subtracting 'digest_len' from output_data_range.len where + * 'digest_len' can be queried from odp_comp_hash_alg_capability(). + * Hash is always performed on plain text. 
Hash validation in decompression is + * performed by the application. + * For every input packet entry in 'pkt_in' array, application should pass + * corresponding valid output packet handle. If any error occurs during + * processing of packets, the API returns with number of entries successfully + * processed. + * Output packet metadatas like length or data pointer will not be updated. + * + * @param pkt_in Packets to be processed + * @param pkt_out Packet handle array for resulting packets + * @param num_pkt Number of packets to be processed + * @param param Operation parameters + * + * @return Number of input packets consumed (0 ... num_pkt) + * @retval <0 on failure + * + * @note The 'pkt_in','pkt_out'and 'param' arrays should be of same length, + * Results are undefined if otherwise. + + * @note Same packet handle cannot be used as input and output parameter. + * In-place compression operation is not supported + */ +int odp_comp_op(const odp_packet_t pkt_in[], odp_packet_t pkt_out[], + int num_pkt, const odp_comp_packet_op_param_t param[]); + +/** + * Asynchronous packet compression operation + * + * This operation does packet compression in asynchronous mode. It processes + * packets otherwise identical to odp_comp_op(), but the resulting packets are + * enqueued to 'compl_queue' configured during session (odp_comp_session_t) + * creation. For every input packet entry in in_pkt array, user should pass + * corresponding valid output packet handle. On return, API returns with + * number of entries successfully submitted for operation. + * + * When hashing is configured along with compression operation the + * result is appended at the end of the output data, output_data_range.len + * equals length of output data + 'digest_len'. Processed data length + * can be computed by subtracting 'digest_len' from output_data_range.len where + * 'digest_len' can be queried from odp_comp_hash_alg_capability(). + * Hash is always performed on plain text. Hash validation in decompression is + * performed by the application. + * + * In case of partially accepted array i.e. + * when number of packets returned < num_pkt, application may attempt to + * resubmit subsequent entries via calling any of the operation API. + * + * All the packets successfully enqueued will be submitted to 'compl_queue' + * after compression operation, Application should check 'status' of the + * operation in odp_comp_packet_result_t. + * Output packet metadatas like length or data pointer will not be updated. + * + * Please note it is always recommended that application using async mode, + * provide sufficiently large buffer size to avoid + * ODP_COMP_STATUS_OUT_OF_SPACE_TERM. + * + * @param pkt_in Packets to be processed + * @param pkt_out Packet handle array for resulting packets + * @param num_pkt Number of packets to be processed + * @param param Operation parameters + * + * @return Number of input packets enqueued (0 ... num_pkt) + * @retval <0 on failure + * + * @note The 'pkt_in','pkt_out'and 'param' arrays should be of same length, + * Results are undefined if otherwise. + + * @note Same packet handle cannot be used as input and output parameter. + * In-place compression operation is not supported + + * @see odp_comp_op(), odp_comp_packet_result() + */ +int odp_comp_op_enq(const odp_packet_t pkt_in[], odp_packet_t pkt_out[], + int num_pkt, const odp_comp_packet_op_param_t param[]); + +/** + * Get compression operation results from processed packet. 
+ * + * Successful compression operations of all modes (ODP_COMP_OP_MODE_SYNC and + * ODP_COMP_OP_MODE_ASYNC) produce packets which contain compression result + * metadata. This function copies operation results from compression processed + * packet. Event subtype of this packet is ODP_EVENT_PACKET_COMP. Results are + * undefined if non-compression processed packet is passed as input. + * + * @param[out] result pointer to operation result for output + * @param packet compression processed packet (ODP_EVENT_PACKET_COMP) + * + * @retval 0 On success + * @retval <0 On failure + */ +int odp_comp_result(odp_comp_packet_result_t *result, odp_packet_t packet); + + /** + * Convert compression processed packet event to packet handle + * + * Get packet handle corresponding to processed packet event. Event subtype + * must be ODP_EVENT_PACKET_COMP. Compression operation results can be + * examined with odp_comp_result(). + * + * @param event Event handle + * + * @return Valid Packet handle on success, + * @retval ODP_PACKET_INVALID on failure + * + * @see odp_event_subtype(), odp_comp_result() + * + */ +odp_packet_t odp_comp_packet_from_event(odp_event_t event); + + /** + * Convert processed packet handle to event + * + * The packet handle must be an output of a compression operation + * + * @param pkt Packet handle from compression operation + * @return Event handle + */ +odp_event_t odp_comp_packet_to_event(odp_packet_t pkt); + +/** + * Get printable value for an odp_comp_session_t + * + * @param hdl odp_comp_session_t handle to be printed + * @return uint64_t value that can be used to print/display this + * handle + * + * @note This routine is intended to be used for diagnostic purposes + * to enable applications to generate a printable value that represents + * an odp_comp_session_t handle. + */ +uint64_t odp_comp_session_to_u64(odp_comp_session_t hdl); + +/** + * @} + */ + +#ifdef __cplusplus +} +#endif + +#include <odp/visibility_end.h> +#endif +
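As a usage illustration of the specification above, the sketch below creates a synchronous DEFLATE session and compresses one packet into another. The two packets, their sizes and the error handling are assumptions made for the example, and the data-range fields follow odp_packet_data_range_t (offset/length) as described in the operation parameters above.

#include <odp_api.h>

static int compress_one(odp_packet_t pkt_in, odp_packet_t pkt_out)
{
	odp_comp_session_param_t session_param;
	odp_comp_packet_op_param_t op_param;
	odp_comp_packet_result_t result;
	odp_comp_session_t session;
	int ret = -1;

	odp_comp_session_param_init(&session_param);
	session_param.op = ODP_COMP_OP_COMPRESS;
	session_param.mode = ODP_COMP_OP_MODE_SYNC;
	session_param.comp_algo = ODP_COMP_ALG_DEFLATE;
	session_param.hash_algo = ODP_COMP_HASH_ALG_NONE;

	session = odp_comp_session_create(&session_param);
	if (session == ODP_COMP_SESSION_INVALID)
		return -1;

	op_param.session = session;
	op_param.in_data_range.offset = 0;
	op_param.in_data_range.length = odp_packet_len(pkt_in);
	op_param.out_data_range.offset = 0;
	op_param.out_data_range.length = odp_packet_len(pkt_out);

	/* Synchronous mode: results are available on return */
	if (odp_comp_op(&pkt_in, &pkt_out, 1, &op_param) == 1 &&
	    odp_comp_result(&result, pkt_out) == 0 &&
	    result.status == ODP_COMP_STATUS_SUCCESS) {
		/* Compressed data occupies result.output_data_range */
		ret = 0;
	}

	odp_comp_session_destroy(session);
	return ret;
}

In asynchronous mode the same operation parameters apply, but a compl_queue must be set in the session parameters and the result is read from the completion event with odp_comp_packet_from_event() and odp_comp_result().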
-----------------------------------------------------------------------
Summary of changes: CHANGELOG | 312 +++++++ configure.ac | 4 +- doc/implementers-guide/implementers-guide.adoc | 163 ++++ include/odp/api/abi-default/traffic_mngr.h | 8 +- .../linux-generic/include/odp_config_internal.h | 8 - .../include/odp_traffic_mngr_internal.h | 37 +- platform/linux-generic/odp_pool.c | 29 +- platform/linux-generic/odp_traffic_mngr.c | 958 +++++++++++---------- 8 files changed, 1043 insertions(+), 476 deletions(-)
hooks/post-receive