This is an automated email from the git hooks/post-receive script. It was generated because a ref change was pushed to the repository containing the project "".
The branch, cloud-dev has been updated via 3d3c8f71f39ff139695d6f4b8e5ea17502c5f7cf (commit) via 13322ca632f8ffba292bec058e597719bc54142d (commit) via cbb7f52e28d2e1c20c8eac662aa6135242d072e8 (commit) via 49093654e6faa652387bc192c17b5006af0fc0b4 (commit) via f6f317fc47a0314f1077af2477fc169302953e5c (commit) via d091f2176a28b09503aef6aabbbe7d2433e3b69b (commit) via bfbe4f50a2e8a2532fdcb4d2c16d42a477183c07 (commit) via 6e79d897e3d5010991bf6e6ebf207bfd988f1129 (commit) via c6ab55e8bc2b797af290349b56c829e38839929b (commit) via f2da0136e11df9372ada5f01efdc4cf176680dea (commit) via e344e4364f771f32b86d822d0c447770588fe65d (commit) via f3fa10fa00f5040f5ce2bfd18894227a7ef76c02 (commit) via 4e6dbf3c8151335c2a5361da3f228666c688d8fa (commit) via f172905f96be14da4760653cba92cd1f9c820374 (commit) via 7018fa1c5e446ca8f60490c848a95b27e942783b (commit) via f66f92352bb5d1ed3a19fe5b5b8ca0419e525274 (commit) via e35949df96b46d2e23ca83029f468415efcbf18a (commit) via d248692dfe6ace13138e48eb1c23c1f5e942269c (commit) via 88efcd91e325438d7e2b7c410213ca95a9acefe5 (commit) via f91cde2bb770eefcfe791ce49c67ce3b1f5bf6d2 (commit) via 3e03317f3663abc76708141233b18d6225b2482b (commit) via cc6d4562f16e134299e21ca3e545999c97549ad0 (commit) via 80321bd9880b03929056483be9bc9d8861636b33 (commit) via e1c46f8e296a730ed27141a33189185bb7dfd1b1 (commit) via 054306373ed6aa7a65a160d11ca339b24cf9c662 (commit) via 8c5516bba62ee7250cbf64d1f4b89ee4f0b12824 (commit) via a38ea82cf1ae45b7c807164dde5783d099efd39d (commit) via 32f5f3ff343b516cb0dbe98e81479e6c748f0ebf (commit) via 3c9248f5c98b194d91712fafed4bf3b21327f2a8 (commit) via 91ef9f7e224056af351cbaf99ccfa98ee815460d (commit) via ba97c3174d0de3e08cffaad414bf2a55de8853df (commit) via 69a1179a05344b59961aaf997a1b406698b6840b (commit) via 6efe7d971a3a4b8f7eaa42660b48eee31493924c (commit) via ef4e4d4c4ef1f9f569c236494cb178feb7c90343 (commit) via e4289ab4f73221a9d20ecfb8eac6b79a26df06a3 (commit) via a787abf96b17d9714a6b892091d19c1be2bc5e6c (commit) via 20c75b764af6fd15e5e1d4df969ac33d62525405 (commit) via 04029613fbdc85221d1a20354a49ada912302fc0 (commit) via 88d36e6811de494708c520cb12e9e5f97628e9e4 (commit) via a820d8a84e132652b4cef295756ccf135e3bd54b (commit) via f7c5f3e973814e0fc9211e008a37080b1c7d4a76 (commit) via d445395c66edb38abedb918afdba37ec9f7f95af (commit) via b38117eeed5d7bd5a334ef2387bf83d5cb8b9188 (commit) via d0821c9c43fb89e4c0021b174bbc12bef543ffa0 (commit) via 17e829a3b78d6513d22496734c6edbec955cbfd4 (commit) via eccc78540c05dc71095179b613f014648385f3df (commit) via f8c6360667380f6b3fb917ecbe5b941b23dbadf1 (commit) via 6b6253c30f88c80bf632436ff06c1b000860a2f1 (commit) via c0c20e6ed7d0f86dca02a276a7f991586fa7f33d (commit) via fb3f36cec108ce9c55241d9f0e66d4832a552b8a (commit) via 3b31169bee4f036bacbe823c27c9b199fc35fe75 (commit) via 2cb0edd5820fc7fc14d6f4018a605873fdf47033 (commit) via d6e2cd7830bd474e78980414ad7046443a4a3720 (commit) via e114fba150e07e7f25b86306c30003416324955e (commit) via 316d020fc2e3fee86b955eec4946290d90fb2eb1 (commit) via 4c8e7df6337e79cef937ad3246d61e75b7d2164d (commit) via 01f75d6582aadc1aeb6d41745c3b0a2fdbe7b142 (commit) via 29cc24dc2effa5cc76af7b365bcdafe671f03545 (commit) via b7181a3f13d58d87a561cd06e00fc37a9fc237b3 (commit) via 5581eaa55c9d32429be88b068b149ebc8b235f2c (commit) via 71e078e380f81d972cb82908a8d13dcc155f5cad (commit) via 3be7ab995e5f2c4472b20008f63299d93a3a806c (commit) via 1882b608abc37314f90bdd2de8ef7f0501a8d5d8 (commit) via 863f2ca462a7dd0a17b0828d037d2594767de092 (commit) via de3ad51a88daa12e9b822e6df339c0e10448d6dd (commit) via 
e7cf0a6c24811d768d4df91d2c03b71676f9f783 (commit) via 360c1d34b52a2356619b9290811862b9de41de00 (commit) via 7fa8e2c97ed18f8dd6e95cbc78b7e668ccb98869 (commit) via 53120868fba0d742961817683f01b1bee25f4e4c (commit) via 8d21533c341c0624f693af554341dd389c358238 (commit) via a3ca7d17f74e515f9ab4a738191a6a1da96463ab (commit) via 60d79f29df4ace1eac5ee53fbc4ab153398d36fd (commit) via 43ea7311f98d1602ab29e9eec4ea9c895d73181d (commit) via df29614a6174b03d03d44041e13c0c83199e42c3 (commit) via 77ac5252a71c92e991c3e797c668f30f712ca111 (commit) via 465069926f1eef1f28b64c4380b552251bcd1841 (commit) via 1abaebb5e2af4713c9230c9d5d52aa53b01809f5 (commit) via 9b7b03c4b7983c97ae6bb79df941edb08a60c6b7 (commit) via 4897f9783e623dfeb0d82e552e9961b603ae9077 (commit) via 4eae04e80a634c17ac276bb06bce468cbe28cde0 (commit) via 3b8515fbd81fe4017632e7e48754a5b99f684d2e (commit) via 42184679185ce0c979e065349360167e3fce6ca0 (commit) via 120e914768f731f18083afd950fba6a6793cca45 (commit) via de32602f12e563b2d5ff10b786c6fd506e74776f (commit) via 8a939edfa992620cf7a5cb495ce44dbc15c709c6 (commit) via 40a2663668ce995e4b6b410ca0d3bf3578d02a67 (commit) via 03203ea8b1c3d142b41f5c332527f20ed29c3040 (commit) via 60105f079350405920462a4b0d59c7e78d9a8492 (commit) via 6e02ad50626de86804cbd62ae467104ae7850220 (commit) via da905ec07e1e50b4d34975a81ea289ec96eba503 (commit) via 29139f725a7d6f2bd9e57a60abf1e55f4ac64c97 (commit) via 7508c5ac906bb7cb1d339b4c5e924f3a18e504ca (commit) via 87fbe7fbf2debf8bc44bfffc3d3a2d1827208452 (commit) via a7463a692a4e2dc311c2d383595adafd01433fa4 (commit) via 91c0b58fc87ba0431241818758cea94438cd5498 (commit) via 4c8578a2105e37e645e3606622fd8c6e46b77443 (commit) via 1bc3a176474ccb199caba9a05cbfaf713ed00707 (commit) via d5bd5c2634af91c1088a1de931f7a72666e8993e (commit) via 11fed684507a320fbb79dc86769c8f1755d0276f (commit) via b2270c5b1d2badb93dd7e6dc743191c04c562ad1 (commit) via 0b2b26281c0d1d8d00d69f9829e32c9a99b7af0f (commit) via 1bc1b6fed4c75b3bd9305d2c9e646efc50de5fb3 (commit) via 1e1ee0b03cc481e027b1cb911cce1b16fb2fee9a (commit) via 6a360e61978d03d12dbfff8c34c20cf95170a1c3 (commit) via 933b8cdc4832c05a9f81e748f73d8507673cc370 (commit) via 3f02e970482ca203c8f98c1b20b2a3813312df63 (commit) via 87361c8c9017ccd3d18fdf52b9e7ba845baeb1aa (commit) via 15de2926e800a451edc3cbbe970930fc0e64ee7b (commit) via dcb74c5cce2dd3c383730a29e396b76923f201f3 (commit) via c835e02fc287286d86377a9eb8937f8711a7d3cf (commit) via 820571cc1332e06191c7a75c28eb5d908561a533 (commit) via c6bfc6805796795df8f7a124a146365a11638351 (commit) via bc3b618ab85c8404f131ef071488791b97255166 (commit) via e4fe9119bb8a18f2eab6b1d45e532c8d1c41bcc7 (commit) via 1bcd97bb1f67d96d81e4e49a77089c6b17fba8ca (commit) via 512cfde208241f21b5cdbab848be81f43823810a (commit) via a811ff57407a6b9427b225793a75c03cb386e6c9 (commit) via 7a78aa2f6113789d5f6df0ddaff360f10fc859d7 (commit) via b675c825f9dc84df533381a4018663a4c6997882 (commit) via 9dd1357dc936c3b9e44753ce2373f6bb71629e34 (commit) via ffdd8c7e423503b3e85b7fdfd844ad10692795d5 (commit) via cd17794642638d6ee65b97bed9df5ddcd2cb2520 (commit) via 0a7686e47e40db0f5f6b862d16e8b021da23f90b (commit) via 36a7c389d3e00d4c3987236bd8229c54d812f533 (commit) via 26e1a355c7312e2fcc7196eb82ef49c74232035b (commit) via 6a9971dd8dd1cf982e7ae34ae2b62ccdadaed1c9 (commit) via f637a36cd2a7fc125a2d90ed5a93933007987e95 (commit) via 859293ad9b3c862264bb0fbfe8e7037b5e04d084 (commit) via 307ebe505ceb2de6ea03c29f37de0d3a5db850ed (commit) via 7b835f0e9eac18eba901e42f4054110bae9b78f5 (commit) via 8705e548f330d23173283fcca62f4afb835a6380 
(commit) via eafd83ed1d036a404a18874d80c11d454d2580d3 (commit) via aecb3c7a442b426761f1e6f43308a1e9ea709ef3 (commit) via cb58da98065d255a23b80fe7f00412b1049c3b2c (commit) via 7bb364313bfeed59556e167bfa848e9ea6f52669 (commit) via 0bc12ffcf7898515c74a5567b832dcfc913d831a (commit) via 22ec8f3aefe9b2c92020cd9aa36e9f8e5a380799 (commit) via 2119e4548281ed50fecc5fe9f5bddcaa2adee2de (commit) via 8b435cae63abbf0d44899b5bb87bb0aeb488ca2d (commit) via 63c36bee7658d9dcf7126fe67b1eb9f74cb31d46 (commit) via 90d4ce1b3b25ca18446131906007571cc0ed0191 (commit) via 3547226b19e6982bf74fc8c258b89db2c5f6a39c (commit) via 762372f299b64c8c30c3f5a0ba51fbb48e234e1e (commit) via 65d0fbba8366f68a8fe24426bc0e16ea3cd3cd04 (commit) via edc288690b65167b347a0e8c2c171198e4d2fbe3 (commit) from e89a0ed9c4cd6d7dc947b978ad1dcabc6d5a21a2 (commit)
Those revisions listed above that are new to this repository have not appeared on any other notification email; so we list those revisions in full, below.
- Log -----------------------------------------------------------------
commit 3d3c8f71f39ff139695d6f4b8e5ea17502c5f7cf
Merge: e89a0ed9 13322ca6
Author: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Date:   Fri Sep 29 18:46:21 2017 -0500
Merge branch 'api-next' into cloud-dev
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Signed-off-by: Yi He <yi.he@linaro.org>
Signed-off-by: Brian Brooks <brian.brooks@arm.com>
Signed-off-by: Dmitry Eremin-Solenikov <dmitry.ereminsolenikov@linaro.org>
Signed-off-by: Balakrishna Garapati <balakrishna.garapati@linaro.org>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
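The diffs below pull in the modular framework (frameworks/modular/odp_module.c) and its first users, such as the dpdk_buffer module in platform/linux-dpdk/buffer/dpdk.c. As a rough orientation before reading the hunks, the following minimal sketch shows the registration pattern those files use: a module describes itself in a subsystem-specific module struct and registers from a load-time constructor. It is patterned on buffer/dpdk.c from this merge; the module name "null_buffer", the include list, and the omitted callbacks are illustrative assumptions, not part of the patch set.

/* Hypothetical module registering with the buffer subsystem; a sketch of
 * the pattern used by dpdk_buffer in this merge, not code from the patches. */
#include <odp_module.h>             /* ODP_MODULE_CONSTRUCTOR, registration API */
#include <odp_buffer_subsystem.h>   /* odp_buffer_module_t, "buffer" subsystem */

static odp_buffer_module_t null_buffer = {
	.base = {
		.name = "null_buffer",
		.init_local = NULL,
		.term_local = NULL,
		.init_global = NULL,
		.term_global = NULL,
	},
	/* .buffer_alloc, .buffer_free, ... implementation callbacks omitted */
};

/* Runs when the module is loaded by the program loader or ld.so and links
 * the module into the buffer subsystem's module list. */
ODP_MODULE_CONSTRUCTOR(null_buffer)
{
	odp_module_constructor(&null_buffer);
	odp_subsystem_register_module(buffer, &null_buffer);
}

Statically linked modules take the linker_register_module() path shown in odp_module.c, while DSO-loaded ones go through the loader_start/install/abandon sequence guarded by the registration lock.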
diff --cc .travis.yml index 52339b53,c1626b1a..55871703 --- a/.travis.yml +++ b/.travis.yml @@@ -255,16 -264,9 +265,16 @@@ jobs script: - echo ${TRAVIS_COMMIT_RANGE}; - ODP_PATCHES=`echo ${TRAVIS_COMMIT_RANGE} | sed 's/.//'`; - - if [ -z "${ODP_PATCHES}" ]; then env; exit 1; fi; + - if [ -z "${ODP_PATCHES}" ]; then env; exit 0; fi; - ./scripts/ci-checkpatches.sh ${ODP_PATCHES}; - + - stage: test + env: TEST=linux-dpdk + compiler: gcc + script: + - ./bootstrap + - ./configure --with-platform=linux-dpdk --enable-test-cpp --enable-test-vald --enable-test-helper --enable-test-perf --enable-user-guides --enable-test-perf-proc --enable-test-example --with-sdk-install-path=`pwd`/dpdk/${TARGET} --with-cunit-path=$HOME/cunit-install/$CROSS_ARCH $CONF + - make -j $(nproc) + - sudo LD_LIBRARY_PATH="/usr/local/lib:$LD_LIBRARY_PATH" make check after_failure: - cat config.log - find . -name 'test-suite.log' -execdir grep -il "FAILED" {} ; -exec echo {} ; -exec cat {} ; diff --cc example/Makefile.inc index 55950918,cba385b7..419bebd4 --- a/example/Makefile.inc +++ b/example/Makefile.inc @@@ -1,7 -1,6 +1,6 @@@ - include $(top_srcdir)/platform/@with_platform@/Makefile.inc LIB = $(top_builddir)/lib - LDADD = $(LIB)/lib$(ODP_LIB_STR).la $(LIB)/libodphelper.la $(DPDK_PMDS) $(OPENSSL_LIBS) - AM_CFLAGS += \ -LDADD = $(LIB)/libodp-linux.la $(LIB)/libodphelper.la $(DPDK_PMDS) ++LDADD = $(LIB)/lib$(ODP_LIB_STR).la $(LIB)/libodphelper.la $(DPDK_PMDS) + AM_CFLAGS = \ -I$(srcdir) \ -I$(top_srcdir)/example \ -I$(top_srcdir)/platform/@with_platform@/include \ diff --cc example/ddf_ifs/Makefile.am index aa892acd,00000000..c5f7d628 mode 100644,000000..100644 --- a/example/ddf_ifs/Makefile.am +++ b/example/ddf_ifs/Makefile.am @@@ -1,27 -1,0 +1,27 @@@ +LIB = $(top_builddir)/lib + - AM_CPPFLAGS += -I$(srcdir) \ ++AM_CPPFLAGS = -I$(srcdir) \ + -I$(top_srcdir)/include \ + -I$(top_srcdir)/platform/@with_platform@/include \ + -I$(top_srcdir)/platform/@with_platform@/arch/@ARCH_DIR@ + +lib_LTLIBRARIES = $(LIB)/libddf_ifs.la + +noinst_HEADERS = \ + $(srcdir)/ddf_ifs.h \ + $(srcdir)/ddf_ifs_enumr_class.h \ + $(srcdir)/ddf_ifs_enumr_dpdk.h \ + $(srcdir)/ddf_ifs_enumr_generic.h \ + $(srcdir)/ddf_ifs_dev_dpdk.h \ + $(srcdir)/ddf_ifs_devio_direct.h \ + $(srcdir)/ddf_ifs_driver.h \ + $(srcdir)/ddf_ifs_api.h + +__LIB__libddf_ifs_la_SOURCES = \ + ddf_ifs.c \ + ddf_ifs_enumr_class.c \ + ddf_ifs_enumr_dpdk.c \ + ddf_ifs_enumr_generic.c \ + ddf_ifs_dev_dpdk.c \ + ddf_ifs_devio_direct.c \ + ddf_ifs_driver.c diff --cc frameworks/modular/odp_module.c index 89b7cb0d,00000000..475bcd5e mode 100644,000000..100644 --- a/frameworks/modular/odp_module.c +++ b/frameworks/modular/odp_module.c @@@ -1,179 -1,0 +1,181 @@@ +/* Copyright (c) 2017, ARM Limited. All rights reserved. + * + * Copyright (c) 2017, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + ++#include <config.h> ++ +#include <stdio.h> +#include <errno.h> +#include "odp_module.h" + +#define MODULE_FRAMEWORK_VERSION 0x00010000UL +ODP_SUBSYSTEM_DEFINE(module, "module framework", MODULE_FRAMEWORK_VERSION); + +/* Bootstrap log facility, enable if ODP_DEBUG_PRINT flag is set. */ +#define DBG(format, ...) \ + do { \ + if (ODP_DEBUG_PRINT == 1) \ + fprintf(stderr, format, ##__VA_ARGS__); \ + } while (0) + +/* Keep it simple, allow one registration session at a time. 
*/ +static struct { + odp_rwlock_t lock; + odp_subsystem_t *subsystem; + odp_module_base_t *module; +} registration = { + .lock = ODP_RWLOCK_UNLOCKED, + .subsystem = NULL, + .module = NULL, +}; + +static inline int registration_sanity_check( + odp_subsystem_t *subsystem, odp_module_base_t *module) +{ + if (subsystem == NULL || module == NULL) + return -ENOENT; + + if (!list_node_detached(&module->list)) { + DBG("module %s was already registered.\n", module->name); + return -EAGAIN; + } + + return 0; +} + +/* Module is linked statically or dynamically, and are loaded by + * program loader (execve) or dynamic linker/loader (ld.so) + * + * subsystem_register_module() should complete the whole registration + * session and link the module into subsystem's module array. + */ +static int linker_register_module( + odp_subsystem_t *subsystem, odp_module_base_t *module) +{ + int sanity = registration_sanity_check(subsystem, module); + + if (sanity < 0) /* sanity check errors */ + return sanity; + + /* Allow one registration session at a time */ + odp_rwlock_write_lock(®istration.lock); + + /* Block the subsystem API calls in load new + * implementation modules. */ + odp_rwlock_write_lock(&subsystem->lock); + module->handler = NULL; /* no DSO handler */ + list_add_tail(&subsystem->modules, &module->list); + odp_rwlock_write_unlock(&subsystem->lock); + + odp_rwlock_write_unlock(®istration.lock); + return 0; +} + +static int (*do_register_module)(odp_subsystem_t *, odp_module_base_t *) + = &linker_register_module; + +static int loader_register_module( + odp_subsystem_t *subsystem, odp_module_base_t *module) +{ + int sanity = registration_sanity_check(subsystem, module); + + if (sanity < 0) /* sanity check errors */ + return sanity; + + /* Registration session lock must be held by + * module_loader_start(). 
*/ + if (odp_rwlock_write_trylock(®istration.lock) == 0) { + registration.subsystem = subsystem; + registration.module = module; + return 0; + } + + odp_rwlock_write_unlock(®istration.lock); + return -EACCES; +} + +void odp_module_loader_start(void) +{ + odp_rwlock_write_lock(®istration.lock); + + if (registration.module != NULL || + registration.subsystem != NULL) { + DBG("module loader start warn, A previous " + "registration did not complete yet.\n"); + } + + registration.module = NULL; + registration.subsystem = NULL; + do_register_module = &loader_register_module; +} + +void odp_module_loader_end(void) +{ + if (registration.module != NULL || + registration.subsystem != NULL) { + DBG("module loader end warn, A previous " + "registration did not complete yet.\n"); + } + + registration.module = NULL; + registration.subsystem = NULL; + do_register_module = &linker_register_module; + + odp_rwlock_write_unlock(®istration.lock); +} + +int odp_module_install(void *dso, bool active) +{ + /* Bottom halves of the registration, context exclusion + * is guaranteed by module_loader_start() + */ + if (odp_rwlock_write_trylock(®istration.lock) == 0) { + odp_subsystem_t *subsystem = registration.subsystem; + odp_module_base_t *module = registration.module; + + if (subsystem != NULL && module != NULL) { + odp_rwlock_write_lock(&subsystem->lock); + + module->handler = dso; + list_add_tail(&subsystem->modules, &module->list); + + /* install as active implementation */ + if (active) /* warn: replaceable */ + subsystem->active = module; + + odp_rwlock_write_unlock(&subsystem->lock); + } + + registration.subsystem = NULL; + registration.module = NULL; + return 0; + } + + odp_rwlock_write_unlock(®istration.lock); + return -EACCES; +} + +int odp_module_abandon(void) +{ + /* Bottom halves of the registration, context exclusion + * is guaranteed by module_loader_start() + */ + if (odp_rwlock_write_trylock(®istration.lock) == 0) { + registration.subsystem = NULL; + registration.module = NULL; + return 0; + } + + odp_rwlock_write_unlock(®istration.lock); + return -EACCES; +} + +int __subsystem_register_module( + odp_subsystem_t *subsystem, odp_module_base_t *module) +{ + return do_register_module(subsystem, module); +} diff --cc helper/cuckootable.c index 32800911,4707191d..adce187e --- a/helper/cuckootable.c +++ b/helper/cuckootable.c @@@ -4,6 -4,8 +4,8 @@@ * SPDX-License-Identifier: BSD-3-Clause */
-#include "config.h" ++#include <config.h> + /*- * BSD LICENSE * diff --cc helper/hashtable.c index f26b18b2,b124c2d7..e0761c37 --- a/helper/hashtable.c +++ b/helper/hashtable.c @@@ -3,6 -3,9 +3,9 @@@ * * SPDX-License-Identifier: BSD-3-Clause */ + -#include "config.h" ++#include <config.h> + #include <stdio.h> #include <string.h> #include <malloc.h> diff --cc helper/iplookuptable.c index ac7d0587,7ca68de2..a579fcb5 --- a/helper/iplookuptable.c +++ b/helper/iplookuptable.c @@@ -4,6 -4,8 +4,8 @@@ * SPDX-License-Identifier: BSD-3-Clause */
-#include "config.h" ++#include <config.h> + #include <string.h> #include <stdint.h> #include <errno.h> diff --cc helper/lineartable.c index dd4a5995,831eb11b..112f2e50 --- a/helper/lineartable.c +++ b/helper/lineartable.c @@@ -4,6 -4,8 +4,8 @@@ * SPDX-License-Identifier: BSD-3-Clause */
-#include "config.h" ++#include <config.h> + #include <stdio.h> #include <string.h> #include <malloc.h> diff --cc helper/threads.c index cb747e5b,a83014d4..3b648c34 --- a/helper/threads.c +++ b/helper/threads.c @@@ -4,6 -4,8 +4,8 @@@ * SPDX-License-Identifier: BSD-3-Clause */
-#include "config.h" ++#include <config.h> + #ifndef _GNU_SOURCE #define _GNU_SOURCE #endif diff --cc platform/Makefile.inc index 1621d980,ac5cd765..5acc8bf0 --- a/platform/Makefile.inc +++ b/platform/Makefile.inc @@@ -6,15 -6,13 +6,12 @@@ pkgconfig_DATA = $(top_builddir)/pkgcon .PHONY: pkgconfig/libodp-linux.pc
VPATH = $(srcdir) $(builddir) -lib_LTLIBRARIES = $(LIB)/libodp-linux.la
- AM_LDFLAGS += -version-number '$(ODP_LIBSO_VERSION)' + AM_LDFLAGS = -version-number '$(ODP_LIBSO_VERSION)'
- AM_CFLAGS += "-DGIT_HASH=$(VERSION)" + AM_CFLAGS = "-DGIT_HASH=$(VERSION)" AM_CFLAGS += $(VISIBILITY_CFLAGS)
- #The implementation will need to retain the deprecated implementation - AM_CFLAGS += -Wno-deprecated-declarations - AM_CFLAGS += @PTHREAD_CFLAGS@
odpapispecincludedir= $(includedir)/odp/api/spec diff --cc platform/linux-dpdk/Makefile.am index cb0d722d,3e26aab4..ad3afec0 --- a/platform/linux-dpdk/Makefile.am +++ b/platform/linux-dpdk/Makefile.am @@@ -1,31 -1,20 +1,34 @@@ -# Uncomment this if you need to change the CUSTOM_STR string -#export CUSTOM_STR=https://git.linaro.org/lng/odp.git - include $(top_srcdir)/platform/Makefile.inc +include $(top_srcdir)/platform/@with_platform@/Makefile.inc + +lib_LTLIBRARIES = $(LIB)/libodp-dpdk.la + +PLAT_CFLAGS = +if ARCH_IS_X86 +PLAT_CFLAGS += -msse4.2 +endif + - if DPDK_DEFAULT_DIR - PLAT_CFLAGS += -include /usr/include/dpdk/rte_config.h - else ++if SDK_INSTALL_PATH_ +PLAT_CFLAGS += -include $(SDK_INSTALL_PATH)/include/rte_config.h ++else ++PLAT_CFLAGS += -include /usr/include/dpdk/rte_config.h +endif
- AM_CFLAGS += $(PLAT_CFLAGS) - AM_CFLAGS += -I$(srcdir)/include - AM_CFLAGS += -I$(top_srcdir)/platform/linux-generic/include - AM_CFLAGS += -I$(top_srcdir)/frameworks/modular - AM_CFLAGS += -I$(top_srcdir)/include/odp/arch/@ARCH_ABI@ - AM_CFLAGS += -I$(top_srcdir)/include - AM_CFLAGS += -I$(top_builddir)/include - AM_CFLAGS += -Iinclude - AM_CFLAGS += -DSYSCONFDIR="@sysconfdir@" - AM_CFLAGS += -D_ODP_PKTIO_IPC - -AM_CPPFLAGS = -I$(srcdir)/include ++AM_CPPFLAGS = $(PLAT_CFLAGS) ++AM_CPPFLAGS += -I$(top_srcdir)/platform/linux-dpdk/include ++AM_CPPFLAGS += -I$(top_srcdir)/platform/linux-generic/include ++AM_CPPFLAGS += -I$(srcdir)/include + AM_CPPFLAGS += -I$(top_srcdir)/include ++AM_CPPFLAGS += -I$(top_srcdir)/frameworks/modular + AM_CPPFLAGS += -I$(top_srcdir)/include/odp/arch/@ARCH_ABI@ + AM_CPPFLAGS += -I$(top_builddir)/include + AM_CPPFLAGS += -Iinclude -AM_CPPFLAGS += -I$(top_srcdir)/platform/$(with_platform)/arch/$(ARCH_DIR) -AM_CPPFLAGS += -Iinclude ++AM_CPPFLAGS += -I$(srcdir) ++AM_CPPFLAGS += -I$(top_srcdir)/platform/$(with_platform)/arch/$(ARCH_DIR) + AM_CPPFLAGS += -DSYSCONFDIR="@sysconfdir@" + -AM_CPPFLAGS += $(OPENSSL_CPPFLAGS) + AM_CPPFLAGS += $(DPDK_CPPFLAGS) -AM_CPPFLAGS += $(NETMAP_CPPFLAGS) +AM_CPPFLAGS += $(OPENSSL_CPPFLAGS) AM_CPPFLAGS += $(LIBCONFIG_CFLAGS)
include_HEADERS = \ diff --cc platform/linux-dpdk/buffer/dpdk.c index 346549ec,00000000..704468ee mode 100644,000000..100644 --- a/platform/linux-dpdk/buffer/dpdk.c +++ b/platform/linux-dpdk/buffer/dpdk.c @@@ -1,207 -1,0 +1,209 @@@ +/* Copyright (c) 2013, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + ++#include <config.h> ++ +#include <odp/api/buffer.h> +#include <odp_buffer_internal.h> +#include <odp_buffer_inlines.h> +#include <odp_buffer_subsystem.h> +#include <odp_debug_internal.h> +#include <odp_pool_internal.h> + +#include <string.h> +#include <stdio.h> +#include <inttypes.h> + +static odp_buffer_t buffer_alloc(odp_pool_t pool_hdl) +{ + odp_buffer_t buffer; + pool_entry_dp_t *pool_dp; + + ODP_ASSERT(odp_pool_to_entry_cp(pool_hdl)->params.type == + ODP_POOL_BUFFER || + odp_pool_to_entry_cp(pool_hdl)->params.type == + ODP_POOL_TIMEOUT); + + pool_dp = odp_pool_to_entry_dp(pool_hdl); + + buffer = (odp_buffer_t)rte_ctrlmbuf_alloc(pool_dp->rte_mempool); + + if ((struct rte_mbuf *)buffer == NULL) { + rte_errno = ENOMEM; + return ODP_BUFFER_INVALID; + } + + buf_hdl_to_hdr(buffer)->next = NULL; + return buffer; +} + +static odp_buffer_t dpdk_buffer_alloc(odp_pool_t pool_hdl) +{ + ODP_ASSERT(ODP_POOL_INVALID != pool_hdl); + + return buffer_alloc(pool_hdl); +} + +static int dpdk_buffer_alloc_multi(odp_pool_t pool_hdl, + odp_buffer_t buf[], + int num) +{ + int i; + + ODP_ASSERT(ODP_POOL_INVALID != pool_hdl); + + for (i = 0; i < num; i++) { + buf[i] = buffer_alloc(pool_hdl); + if (buf[i] == ODP_BUFFER_INVALID) + return rte_errno == ENOMEM ? i : -EINVAL; + } + return i; +} + +static void dpdk_buffer_free(odp_buffer_t buf) +{ + struct rte_mbuf *mbuf = (struct rte_mbuf *)buf; + + rte_ctrlmbuf_free(mbuf); +} + +static void dpdk_buffer_free_multi(const odp_buffer_t buf[], int num) +{ + int i; + + for (i = 0; i < num; i++) { + struct rte_mbuf *mbuf = (struct rte_mbuf *)buf[i]; + + rte_ctrlmbuf_free(mbuf); + } +} + +static odp_buffer_t dpdk_buffer_from_event(odp_event_t ev) +{ + return (odp_buffer_t)ev; +} + +static odp_event_t dpdk_buffer_to_event(odp_buffer_t buf) +{ + return (odp_event_t)buf; +} + +static void *dpdk_buffer_addr(odp_buffer_t buf) +{ + odp_buffer_hdr_t *hdr = buf_hdl_to_hdr(buf); + + return hdr->mb.buf_addr; +} + +static uint32_t dpdk_buffer_size(odp_buffer_t buf) +{ + odp_buffer_hdr_t *hdr = buf_hdl_to_hdr(buf); + struct rte_mbuf *mbuf = (struct rte_mbuf *)hdr; + + return mbuf->buf_len; +} + +int _odp_buffer_type(odp_buffer_t buf) +{ + odp_buffer_hdr_t *hdr = buf_hdl_to_hdr(buf); + + return hdr->type; +} + +void _odp_buffer_type_set(odp_buffer_t buf, int type) +{ + odp_buffer_hdr_t *hdr = buf_hdl_to_hdr(buf); + + hdr->type = type; +} + +static int dpdk_buffer_is_valid(odp_buffer_t buf) +{ + /* We could call rte_mbuf_sanity_check, but that panics + * and aborts the program */ + return buf != ODP_BUFFER_INVALID; +} + +int odp_buffer_snprint(char *str, uint32_t n, odp_buffer_t buf) +{ + odp_buffer_hdr_t *hdr; + int len = 0; + + if (!odp_buffer_is_valid(buf)) { + ODP_PRINT("Buffer is not valid.\n"); + return len; + } + + hdr = buf_hdl_to_hdr(buf); + + len += snprintf(&str[len], n - len, + "Buffer\n"); + len += snprintf(&str[len], n - len, + " pool %p\n", hdr->mb.pool); + len += snprintf(&str[len], n - len, + " phy_addr %" PRIu64 "\n", hdr->mb.buf_physaddr); + len += snprintf(&str[len], n - len, + " addr %p\n", hdr->mb.buf_addr); + len += snprintf(&str[len], n - len, + " size %u\n", hdr->mb.buf_len); + len += snprintf(&str[len], n - len, + " 
ref_count %i\n", + rte_mbuf_refcnt_read(&hdr->mb)); + len += snprintf(&str[len], n - len, + " odp type %i\n", hdr->type); + + return len; +} + +static void dpdk_buffer_print(odp_buffer_t buf) +{ + int max_len = 512; + char str[max_len]; + int len; + + len = odp_buffer_snprint(str, max_len - 1, buf); + str[len] = 0; + + ODP_PRINT("\n%s\n", str); +} + +static uint64_t dpdk_buffer_to_u64(odp_buffer_t hdl) +{ + return _odp_pri(hdl); +} + +static odp_pool_t dpdk_buffer_pool(odp_buffer_t buf) +{ + return buf_hdl_to_hdr(buf)->pool_hdl; +} + +odp_buffer_module_t dpdk_buffer = { + .base = { + .name = "dpdk_buffer", + .init_local = NULL, + .term_local = NULL, + .init_global = NULL, + .term_global = NULL, + }, + .buffer_alloc = dpdk_buffer_alloc, + .buffer_alloc_multi = dpdk_buffer_alloc_multi, + .buffer_free = dpdk_buffer_free, + .buffer_free_multi = dpdk_buffer_free_multi, + .buffer_from_event = dpdk_buffer_from_event, + .buffer_to_event = dpdk_buffer_to_event, + .buffer_addr = dpdk_buffer_addr, + .buffer_size = dpdk_buffer_size, + .buffer_is_valid = dpdk_buffer_is_valid, + .buffer_print = dpdk_buffer_print, + .buffer_to_u64 = dpdk_buffer_to_u64, + .buffer_pool = dpdk_buffer_pool, +}; + +ODP_MODULE_CONSTRUCTOR(dpdk_buffer) +{ + odp_module_constructor(&dpdk_buffer); + odp_subsystem_register_module(buffer, &dpdk_buffer); +} diff --cc platform/linux-dpdk/include/odp_packet_io_internal.h index ec1d0c21,00000000..10760ca4 mode 100644,000000..100644 --- a/platform/linux-dpdk/include/odp_packet_io_internal.h +++ b/platform/linux-dpdk/include/odp_packet_io_internal.h @@@ -1,154 -1,0 +1,157 @@@ +/* Copyright (c) 2013, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +/** + * @file + * + * ODP packet IO - implementation internal + */ + +#ifndef ODP_PACKET_IO_INTERNAL_H_ +#define ODP_PACKET_IO_INTERNAL_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <odp/api/spinlock.h> +#include <odp/api/ticketlock.h> +#include <odp_classification_datamodel.h> +#include <odp_align_internal.h> +#include <odp_debug_internal.h> +#include <odp_queue_if.h> + +#include <odp_config_internal.h> +#include <odp/api/hints.h> + +#define PKTIO_MAX_QUEUES 64 +#include <linux/if_ether.h> +#include <pktio/dpdk.h> + +/* Forward declaration */ +typedef union pktio_entry_u pktio_entry_t; +#include <odp_pktio_ops_subsystem.h> + +#define PKTIO_NAME_LEN 256 + +#define PKTIN_INVALID ((odp_pktin_queue_t) {ODP_PKTIO_INVALID, 0}) +#define PKTOUT_INVALID ((odp_pktout_queue_t) {ODP_PKTIO_INVALID, 0}) + +struct pktio_entry { + const pktio_ops_module_t *ops; /**< Implementation specific methods */ + pktio_ops_data_t ops_data; + /* These two locks together lock the whole pktio device */ + odp_ticketlock_t rxl; /**< RX ticketlock */ + odp_ticketlock_t txl; /**< TX ticketlock */ + int cls_enabled; /**< is classifier enabled */ + odp_pktio_t handle; /**< pktio handle */ + enum { + /* Not allocated */ + PKTIO_STATE_FREE = 0, + /* Close pending on scheduler response. Next state after this + * is PKTIO_STATE_FREE. */ + PKTIO_STATE_CLOSE_PENDING, + /* Open in progress. + Marker for all active states following under. 
*/ + PKTIO_STATE_ACTIVE, + /* Open completed */ + PKTIO_STATE_OPENED, + /* Start completed */ + PKTIO_STATE_STARTED, + /* Stop pending on scheduler response */ + PKTIO_STATE_STOP_PENDING, + /* Stop completed */ + PKTIO_STATE_STOPPED + } state; + odp_pktio_config_t config; /**< Device configuration */ + classifier_t cls; /**< classifier linked with this pktio*/ + odp_pktio_stats_t stats; /**< statistic counters for pktio */ + char name[PKTIO_NAME_LEN]; /**< name of pktio provided to + pktio_open() */ + odp_pool_t pool; + odp_pktio_param_t param; + + /* Storage for queue handles + * Multi-queue support is pktio driver specific */ + unsigned num_in_queue; + unsigned num_out_queue; + + struct { + odp_queue_t queue; + queue_t queue_int; + odp_pktin_queue_t pktin; + } in_queue[PKTIO_MAX_QUEUES]; + + struct { + odp_queue_t queue; + odp_pktout_queue_t pktout; + } out_queue[PKTIO_MAX_QUEUES]; +}; + +union pktio_entry_u { + struct pktio_entry s; + uint8_t pad[ROUNDUP_CACHE_LINE(sizeof(struct pktio_entry))]; +}; + +typedef struct { + odp_spinlock_t lock; + pktio_entry_t entries[ODP_CONFIG_PKTIO_ENTRIES]; +} pktio_table_t; + +extern void *pktio_entry_ptr[]; + +static inline int pktio_to_id(odp_pktio_t pktio) +{ + return _odp_typeval(pktio) - 1; +} + +static inline pktio_entry_t *get_pktio_entry(odp_pktio_t pktio) +{ + if (odp_unlikely(pktio == ODP_PKTIO_INVALID)) + return NULL; + + if (odp_unlikely(_odp_typeval(pktio) > ODP_CONFIG_PKTIO_ENTRIES)) { + ODP_DBG("pktio limit %d/%d exceed\n", + _odp_typeval(pktio), ODP_CONFIG_PKTIO_ENTRIES); + return NULL; + } + + return pktio_entry_ptr[pktio_to_id(pktio)]; +} + +static inline int pktio_cls_enabled(pktio_entry_t *entry) +{ + return entry->s.cls_enabled; +} + +static inline void pktio_cls_enabled_set(pktio_entry_t *entry, int ena) +{ + entry->s.cls_enabled = ena; +} + +/* + * Dummy single queue implementations of multi-queue API + */ +int single_input_queues_config(pktio_entry_t *entry, + const odp_pktin_queue_param_t *param); +int single_output_queues_config(pktio_entry_t *entry, + const odp_pktout_queue_param_t *param); +int single_recv_queue(pktio_entry_t *entry, int index, odp_packet_t packets[], + int num); +int single_send_queue(pktio_entry_t *entry, int index, + const odp_packet_t packets[], int num); + ++int pktin_poll_one(int pktio_index, ++ int rx_queue, ++ odp_event_t evt_tbl[]); +int pktin_poll(int pktio_index, int num_queue, int index[]); +void pktio_stop_finalize(int pktio_index); + +#ifdef __cplusplus +} +#endif + +#endif diff --cc platform/linux-dpdk/m4/configure.m4 index 428ecc82,00000000..b08136bc mode 100644,000000..100644 --- a/platform/linux-dpdk/m4/configure.m4 +++ b/platform/linux-dpdk/m4/configure.m4 @@@ -1,161 -1,0 +1,160 @@@ +# Enable -fvisibility=hidden if using a gcc that supports it +OLD_CFLAGS="$CFLAGS" +AC_MSG_CHECKING([whether $CC supports -fvisibility=hidden]) +VISIBILITY_CFLAGS="-fvisibility=hidden" +CFLAGS="$CFLAGS $VISIBILITY_CFLAGS" +AC_LINK_IFELSE([AC_LANG_PROGRAM()], AC_MSG_RESULT([yes]), + [VISIBILITY_CFLAGS=""; AC_MSG_RESULT([no])]); + +AC_SUBST(VISIBILITY_CFLAGS) +# Restore CFLAGS; VISIBILITY_CFLAGS are added to it where needed. 
+CFLAGS=$OLD_CFLAGS + +AC_MSG_CHECKING(for GCC atomic builtins) +AC_LINK_IFELSE( + [AC_LANG_SOURCE( + [[int main() { + int v = 1; + __atomic_fetch_add(&v, 1, __ATOMIC_RELAXED); + __atomic_fetch_sub(&v, 1, __ATOMIC_RELAXED); + __atomic_store_n(&v, 1, __ATOMIC_RELAXED); + __atomic_load_n(&v, __ATOMIC_RELAXED); + return 0; + } + ]])], + AC_MSG_RESULT(yes), + AC_MSG_RESULT(no) + echo "GCC-style __atomic builtins not supported by the compiler." + echo "Use newer version. For gcc > 4.7.0" + exit -1) + +dnl Check for libconfig (required) +PKG_CHECK_MODULES([LIBCONFIG], [libconfig >= 1.3.2]) + +dnl Check whether -latomic is needed +use_libatomic=no + +AC_MSG_CHECKING(whether -latomic is needed for 64-bit atomic built-ins) +AC_LINK_IFELSE( + [AC_LANG_SOURCE([[ + static int loc; + int main(void) + { + int prev = __atomic_exchange_n(&loc, 7, __ATOMIC_RELAXED); + return 0; + } + ]])], + [AC_MSG_RESULT(no)], + [AC_MSG_RESULT(yes) + AC_CHECK_LIB( + [atomic], [__atomic_exchange_8], + [use_libatomic=yes], + [AC_MSG_CHECKING([__atomic_exchange_8 is not available])]) + ]) + +AC_MSG_CHECKING(whether -latomic is needed for 128-bit atomic built-ins) +AC_LINK_IFELSE( + [AC_LANG_SOURCE([[ + static __int128 loc; + int main(void) + { + __int128 prev; + prev = __atomic_exchange_n(&loc, 7, __ATOMIC_RELAXED); + return 0; + } + ]])], + [AC_MSG_RESULT(no)], + [AC_MSG_RESULT(yes) + AC_CHECK_LIB( + [atomic], [__atomic_exchange_16], + [use_libatomic=yes], + [AC_MSG_CHECKING([cannot detect support for 128-bit atomics])]) + ]) + +if test "x$use_libatomic" = "xyes"; then + ATOMIC_LIBS="-latomic" +fi +AC_SUBST([ATOMIC_LIBS]) + +# linux-generic PCAP support is not relevant as the code doesn't use +# linux-generic pktio at all. And DPDK has its own PCAP support anyway +AM_CONDITIONAL([HAVE_PCAP], [false]) +AM_CONDITIONAL([netmap_support], [false]) +AM_CONDITIONAL([PKTIO_DPDK], [false]) +m4_include([platform/linux-dpdk/m4/odp_pthread.m4]) +m4_include([platform/linux-dpdk/m4/odp_timer.m4]) +m4_include([platform/linux-dpdk/m4/odp_openssl.m4]) +m4_include([platform/linux-dpdk/m4/odp_modules.m4]) +m4_include([platform/linux-dpdk/m4/odp_schedule.m4]) + +########################################################################## +# DPDK build variables +########################################################################## +DPDK_DRIVER_DIR=/usr/lib/$(uname -m)-linux-gnu - AS_CASE($host_cpu, [x86_64], [AM_CPPFLAGS="$AM_CPPFLAGS -msse4.2"]) - if test ${DPDK_DEFAULT_DIR} = 1; then - AM_CPPFLAGS="$AM_CPPFLAGS -I/usr/include/dpdk" ++AS_CASE($host_cpu, [x86_64], [DPDK_CPPFLAGS="$DPDK_CPPFLAGS -msse4.2"]) ++if test "x${SDK_INSTALL_PATH}" = "x"; then ++ DPDK_CPPFLAGS="$DPDK_CPPFLAGS -I/usr/include/dpdk" +else + DPDK_DRIVER_DIR=$SDK_INSTALL_PATH/lib - AM_CPPFLAGS="$AM_CPPFLAGS -I$SDK_INSTALL_PATH/include" - AM_LDFLAGS="$AM_LDFLAGS -L$SDK_INSTALL_PATH/lib" ++ DPDK_CPPFLAGS="$DPDK_CPPFLAGS -I$SDK_INSTALL_PATH/include" ++ DPDK_LDFLAGS="$DPDK_CPPFLAGS -L$SDK_INSTALL_PATH/lib" +fi + +# Check if we should link against the static or dynamic DPDK library +AC_ARG_ENABLE([shared-dpdk], + [ --enable-shared-dpdk link against the shared DPDK library], + [if test "x$enableval" = "xyes"; then + shared_dpdk=true + fi]) + +########################################################################## +# Save and set temporary compilation flags +########################################################################## +OLD_LDFLAGS=$LDFLAGS +OLD_CPPFLAGS=$CPPFLAGS - LDFLAGS="$AM_LDFLAGS $LDFLAGS" - CPPFLAGS="$AM_CPPFLAGS $CPPFLAGS -pthread" 
++LDFLAGS="$DPDK_LDFLAGS $LDFLAGS" ++CPPFLAGS="$DPDK_CPPFLAGS $CPPFLAGS -pthread" + +########################################################################## +# Check for DPDK availability +########################################################################## +AC_CHECK_HEADERS([rte_config.h], [], + [AC_MSG_FAILURE(["can't find DPDK headers"])]) + - AC_SEARCH_LIBS([rte_eal_init], [dpdk], [], - [AC_MSG_ERROR([DPDK libraries required])], [-ldl]) - +########################################################################## +# In case of static linking DPDK pmd drivers are not linked unless the +# --whole-archive option is used. No spaces are allowed between the +# --whole-arhive flags. +########################################################################## +if test "x$shared_dpdk" = "xtrue"; then - LIBS="$LIBS -Wl,--no-as-needed,-ldpdk,-as-needed -ldl -lm -lpcap" ++ DPDK_LIBS="-Wl,--no-as-needed,-ldpdk,-as-needed -ldl -lm -lpcap" +else + + AS_VAR_SET([DPDK_PMDS], [-Wl,--whole-archive,]) + for filename in $DPDK_DRIVER_DIR/librte_pmd_*.a; do + cur_driver=`basename "$filename" .a | sed -e 's/^lib//'` + # rte_pmd_nfp has external dependencies which break linking + if test "$cur_driver" = "rte_pmd_nfp"; then + echo "skip linking rte_pmd_nfp" + else + AS_VAR_APPEND([DPDK_PMDS], [-l$cur_driver,]) + fi + done + AS_VAR_APPEND([DPDK_PMDS], [--no-whole-archive]) + + DPDK_LIBS="-L$DPDK_DRIVER_DIR -ldpdk -lpthread -ldl -lm -lpcap" - AC_SUBST([DPDK_CPPFLAGS]) - AC_SUBST([DPDK_LIBS]) + AC_SUBST([DPDK_PMDS]) +fi + +########################################################################## +# Restore old saved variables +########################################################################## +LDFLAGS=$OLD_LDFLAGS +CPPFLAGS=$OLD_CPPFLAGS + ++AC_SUBST([DPDK_CPPFLAGS]) ++AC_SUBST([DPDK_LDFLAGS]) ++AC_SUBST([DPDK_LIBS]) ++ +AC_CONFIG_FILES([platform/linux-dpdk/Makefile + platform/linux-dpdk/include/odp/api/plat/static_inline.h]) diff --cc platform/linux-dpdk/odp_crypto.c index 8235e1bd,00000000..8e0f8a9d mode 100644,000000..100644 --- a/platform/linux-dpdk/odp_crypto.c +++ b/platform/linux-dpdk/odp_crypto.c @@@ -1,1396 -1,0 +1,1398 @@@ +/* Copyright (c) 2017, Linaro Limited + * All rights reserved. 
+ * + * SPDX-License-Identifier: BSD-3-Clause + */ + ++#include "config.h" ++ +#include <odp/api/crypto.h> +#include <odp_internal.h> +#include <odp/api/atomic.h> +#include <odp/api/spinlock.h> +#include <odp/api/sync.h> +#include <odp/api/debug.h> +#include <odp/api/align.h> +#include <odp/api/shared_memory.h> +#include <odp_crypto_internal.h> +#include <odp_debug_internal.h> +#include <odp/api/hints.h> +#include <odp/api/random.h> +#include <odp_packet_internal.h> +#include <rte_crypto.h> +#include <rte_cryptodev.h> + +#include <string.h> +#include <math.h> + +#include <openssl/rand.h> + +/* default number supported by DPDK crypto */ +#define MAX_SESSIONS 2048 +#define NB_MBUF 8192 + +typedef struct crypto_session_entry_s crypto_session_entry_t; +struct crypto_session_entry_s { + struct crypto_session_entry_s *next; + odp_crypto_session_param_t p; + uint64_t rte_session; + odp_bool_t do_cipher_first; + struct rte_crypto_sym_xform cipher_xform; + struct rte_crypto_sym_xform auth_xform; + struct { + uint8_t *data; + uint16_t length; + } iv; +}; + +struct crypto_global_s { + odp_spinlock_t lock; + uint8_t enabled_crypto_devs; + uint8_t enabled_crypto_dev_ids[RTE_CRYPTO_MAX_DEVS]; + crypto_session_entry_t *free; + crypto_session_entry_t sessions[MAX_SESSIONS]; + int is_crypto_dev_initialized; + struct rte_mempool *crypto_op_pool; +}; + +typedef struct crypto_global_s crypto_global_t; +static crypto_global_t *global; +static odp_shm_t crypto_global_shm; + +static inline int is_valid_size(uint16_t length, uint16_t min, + uint16_t max, uint16_t increment) +{ + uint16_t supp_size = min; + + if (length < supp_size) + return -1; + + for (; supp_size <= max; supp_size += increment) { + if (length == supp_size) + return 0; + } + + return -1; +} + +static int cipher_alg_odp_to_rte(odp_cipher_alg_t cipher_alg, + struct rte_crypto_sym_xform *cipher_xform) +{ + int rc = 0; + + switch (cipher_alg) { + case ODP_CIPHER_ALG_NULL: + cipher_xform->cipher.algo = RTE_CRYPTO_CIPHER_NULL; + break; + case ODP_CIPHER_ALG_DES: + case ODP_CIPHER_ALG_3DES_CBC: + cipher_xform->cipher.algo = RTE_CRYPTO_CIPHER_3DES_CBC; + break; + case ODP_CIPHER_ALG_AES_CBC: +#if ODP_DEPRECATED_API + case ODP_CIPHER_ALG_AES128_CBC: +#endif + cipher_xform->cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC; + break; + case ODP_CIPHER_ALG_AES_GCM: +#if ODP_DEPRECATED_API + case ODP_CIPHER_ALG_AES128_GCM: +#endif + cipher_xform->cipher.algo = RTE_CRYPTO_CIPHER_AES_GCM; + break; + default: + rc = -1; + } + + return rc; +} + +static int auth_alg_odp_to_rte(odp_auth_alg_t auth_alg, + struct rte_crypto_sym_xform *auth_xform) +{ + int rc = 0; + + /* Process based on auth */ + switch (auth_alg) { + case ODP_AUTH_ALG_NULL: + auth_xform->auth.algo = RTE_CRYPTO_AUTH_NULL; + break; + case ODP_AUTH_ALG_MD5_HMAC: +#if ODP_DEPRECATED_API + case ODP_AUTH_ALG_MD5_96: +#endif + auth_xform->auth.algo = RTE_CRYPTO_AUTH_MD5_HMAC; + auth_xform->auth.digest_length = 12; + break; + case ODP_AUTH_ALG_SHA256_HMAC: +#if ODP_DEPRECATED_API + case ODP_AUTH_ALG_SHA256_128: +#endif + auth_xform->auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC; + auth_xform->auth.digest_length = 16; + break; + case ODP_AUTH_ALG_SHA1_HMAC: + auth_xform->auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC; + auth_xform->auth.digest_length = 20; + break; + case ODP_AUTH_ALG_SHA512_HMAC: + auth_xform->auth.algo = RTE_CRYPTO_AUTH_SHA512_HMAC; + auth_xform->auth.digest_length = 64; + break; + case ODP_AUTH_ALG_AES_GCM: +#if ODP_DEPRECATED_API + case ODP_AUTH_ALG_AES128_GCM: +#endif + auth_xform->auth.algo = 
RTE_CRYPTO_AUTH_AES_GCM; + auth_xform->auth.digest_length = 16; + break; + default: + rc = -1; + } + + return rc; +} + +static crypto_session_entry_t *alloc_session(void) +{ + crypto_session_entry_t *session = NULL; + + odp_spinlock_lock(&global->lock); + session = global->free; + if (session) { + global->free = session->next; + session->next = NULL; + } + odp_spinlock_unlock(&global->lock); + + return session; +} + +static void free_session(crypto_session_entry_t *session) +{ + odp_spinlock_lock(&global->lock); + session->next = global->free; + global->free = session; + odp_spinlock_unlock(&global->lock); +} + +int odp_crypto_init_global(void) +{ + size_t mem_size; + int idx; + int16_t cdev_id, cdev_count; + int rc = -1; + unsigned cache_size = 0; + unsigned nb_queue_pairs = 0, queue_pair; + + /* Calculate the memory size we need */ + mem_size = sizeof(*global); + mem_size += (MAX_SESSIONS * sizeof(crypto_session_entry_t)); + + /* Allocate our globally shared memory */ + crypto_global_shm = odp_shm_reserve("crypto_pool", mem_size, + ODP_CACHE_LINE_SIZE, 0); + + if (crypto_global_shm != ODP_SHM_INVALID) { + global = odp_shm_addr(crypto_global_shm); + + if (global == NULL) { + ODP_ERR("Failed to find the reserved shm block"); + return -1; + } + } else { + ODP_ERR("Shared memory reserve failed.\n"); + return -1; + } + + /* Clear it out */ + memset(global, 0, mem_size); + + /* Initialize free list and lock */ + for (idx = 0; idx < MAX_SESSIONS; idx++) { + global->sessions[idx].next = global->free; + global->free = &global->sessions[idx]; + } + + global->enabled_crypto_devs = 0; + odp_spinlock_init(&global->lock); + + odp_spinlock_lock(&global->lock); + if (global->is_crypto_dev_initialized) + return 0; + + if (RTE_MEMPOOL_CACHE_MAX_SIZE > 0) { + unsigned j; + + j = ceil((double)NB_MBUF / RTE_MEMPOOL_CACHE_MAX_SIZE); + j = RTE_MAX(j, 2UL); + for (; j <= (NB_MBUF / 2); ++j) + if ((NB_MBUF % j) == 0) { + cache_size = NB_MBUF / j; + break; + } + if (odp_unlikely(cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE || + (uint32_t)cache_size * 1.5 > NB_MBUF)) { + ODP_ERR("cache_size calc failure: %d\n", cache_size); + cache_size = 0; + } + } + + cdev_count = rte_cryptodev_count(); + if (cdev_count == 0) { + printf("No crypto devices available\n"); + return 0; + } + + for (cdev_id = cdev_count - 1; cdev_id >= 0; cdev_id--) { + struct rte_cryptodev_info dev_info; + + rte_cryptodev_info_get(cdev_id, &dev_info); + nb_queue_pairs = odp_cpu_count(); + if (nb_queue_pairs > dev_info.max_nb_queue_pairs) + nb_queue_pairs = dev_info.max_nb_queue_pairs; + + struct rte_cryptodev_qp_conf qp_conf; + + struct rte_cryptodev_config conf = { + .nb_queue_pairs = nb_queue_pairs, + .socket_id = SOCKET_ID_ANY, + .session_mp = { + .nb_objs = NB_MBUF, + .cache_size = cache_size + } + }; + + rc = rte_cryptodev_configure(cdev_id, &conf); + if (rc < 0) { + ODP_ERR("Failed to configure cryptodev %u", cdev_id); + return -1; + } + + qp_conf.nb_descriptors = NB_MBUF; + + for (queue_pair = 0; queue_pair < nb_queue_pairs - 1; + queue_pair++) { + rc = rte_cryptodev_queue_pair_setup(cdev_id, + queue_pair, + &qp_conf, + SOCKET_ID_ANY); + if (rc < 0) { + ODP_ERR("Fail to setup queue pair %u on dev %u", + queue_pair, cdev_id); + return -1; + } + } + + rc = rte_cryptodev_start(cdev_id); + if (rc < 0) { + ODP_ERR("Failed to start device %u: error %d\n", + cdev_id, rc); + return -1; + } + + global->enabled_crypto_devs++; + global->enabled_crypto_dev_ids[ + global->enabled_crypto_devs - 1] = cdev_id; + } + + /* create crypto op pool */ + 
global->crypto_op_pool = rte_crypto_op_pool_create("crypto_op_pool", + RTE_CRYPTO_OP_TYPE_SYMMETRIC, + NB_MBUF, cache_size, 0, + rte_socket_id()); + + if (global->crypto_op_pool == NULL) { + ODP_ERR("Cannot create crypto op pool\n"); + return -1; + } + + global->is_crypto_dev_initialized = 1; + odp_spinlock_unlock(&global->lock); + + return 0; +} + +int odp_crypto_capability(odp_crypto_capability_t *capability) +{ + uint8_t i, cdev_id, cdev_count; + const struct rte_cryptodev_capabilities *cap; + enum rte_crypto_auth_algorithm cap_auth_algo; + enum rte_crypto_cipher_algorithm cap_cipher_algo; + + if (NULL == capability) + return -1; + + /* Initialize crypto capability structure */ + memset(capability, 0, sizeof(odp_crypto_capability_t)); + + cdev_count = rte_cryptodev_count(); + if (cdev_count == 0) { + ODP_ERR("No crypto devices available\n"); + return -1; + } + + for (cdev_id = 0; cdev_id < cdev_count; cdev_id++) { + struct rte_cryptodev_info dev_info; + + rte_cryptodev_info_get(cdev_id, &dev_info); + i = 0; + cap = &dev_info.capabilities[i]; + if ((dev_info.feature_flags & + RTE_CRYPTODEV_FF_HW_ACCELERATED)) { + odp_crypto_cipher_algos_t *hw_ciphers; + + hw_ciphers = &capability->hw_ciphers; + while (cap->op != RTE_CRYPTO_OP_TYPE_UNDEFINED) { + cap_cipher_algo = cap->sym.cipher.algo; + if (cap->sym.xform_type == + RTE_CRYPTO_SYM_XFORM_CIPHER) { + if (cap_cipher_algo == + RTE_CRYPTO_CIPHER_NULL) { + hw_ciphers->bit.null = 1; + } + if (cap_cipher_algo == + RTE_CRYPTO_CIPHER_3DES_CBC) { + hw_ciphers->bit.trides_cbc = 1; + hw_ciphers->bit.des = 1; + } + if (cap_cipher_algo == + RTE_CRYPTO_CIPHER_AES_CBC) { + hw_ciphers->bit.aes_cbc = 1; +#if ODP_DEPRECATED_API + hw_ciphers->bit.aes128_cbc = 1; +#endif + } + if (cap_cipher_algo == + RTE_CRYPTO_CIPHER_AES_GCM) { + hw_ciphers->bit.aes_gcm = 1; +#if ODP_DEPRECATED_API + hw_ciphers->bit.aes128_gcm = 1; +#endif + } + } + + cap_auth_algo = cap->sym.auth.algo; + if (cap->sym.xform_type == + RTE_CRYPTO_SYM_XFORM_AUTH) { + odp_crypto_auth_algos_t *hw_auths; + + hw_auths = &capability->hw_auths; + if (cap_auth_algo == + RTE_CRYPTO_AUTH_NULL) { + hw_auths->bit.null = 1; + } + if (cap_auth_algo == + RTE_CRYPTO_AUTH_AES_GCM) { + hw_auths->bit.aes_gcm = 1; +#if ODP_DEPRECATED_API + hw_auths->bit.aes128_gcm = 1; +#endif + } + if (cap_auth_algo == + RTE_CRYPTO_AUTH_MD5_HMAC) { + hw_auths->bit.md5_hmac = 1; +#if ODP_DEPRECATED_API + hw_auths->bit.md5_96 = 1; +#endif + } + if (cap_auth_algo == + RTE_CRYPTO_AUTH_SHA256_HMAC) { + hw_auths->bit.sha256_hmac = 1; +#if ODP_DEPRECATED_API + hw_auths->bit.sha256_128 = 1; +#endif + } + if (cap_auth_algo == + RTE_CRYPTO_AUTH_SHA1_HMAC) { + hw_auths->bit.sha1_hmac = 1; + } + if (cap_auth_algo == + RTE_CRYPTO_AUTH_SHA512_HMAC) { + hw_auths->bit.sha512_hmac = 1; + } + } + cap = &dev_info.capabilities[++i]; + } + } else { + while (cap->op != RTE_CRYPTO_OP_TYPE_UNDEFINED) { + odp_crypto_cipher_algos_t *ciphers; + + ciphers = &capability->ciphers; + cap_cipher_algo = cap->sym.cipher.algo; + if (cap->sym.xform_type == + RTE_CRYPTO_SYM_XFORM_CIPHER) { + if (cap_cipher_algo == + RTE_CRYPTO_CIPHER_NULL) { + ciphers->bit.null = 1; + } + if (cap_cipher_algo == + RTE_CRYPTO_CIPHER_3DES_CBC) { + ciphers->bit.trides_cbc = 1; + ciphers->bit.des = 1; + } + if (cap_cipher_algo == + RTE_CRYPTO_CIPHER_AES_CBC) { + ciphers->bit.aes_cbc = 1; +#if ODP_DEPRECATED_API + ciphers->bit.aes128_cbc = 1; +#endif + } + if (cap_cipher_algo == + RTE_CRYPTO_CIPHER_AES_GCM) { + ciphers->bit.aes_gcm = 1; +#if ODP_DEPRECATED_API + ciphers->bit.aes128_gcm = 
1; +#endif + } + } + + cap_auth_algo = cap->sym.auth.algo; + if (cap->sym.xform_type == + RTE_CRYPTO_SYM_XFORM_AUTH) { + odp_crypto_auth_algos_t *auths; + + auths = &capability->auths; + if (cap_auth_algo == + RTE_CRYPTO_AUTH_NULL) { + auths->bit.null = 1; + } + if (cap_auth_algo == + RTE_CRYPTO_AUTH_AES_GCM) { + auths->bit.aes_gcm = 1; +#if ODP_DEPRECATED_API + auths->bit.aes128_gcm = 1; +#endif + } + if (cap_auth_algo == + RTE_CRYPTO_AUTH_MD5_HMAC) { + auths->bit.md5_hmac = 1; +#if ODP_DEPRECATED_API + auths->bit.md5_96 = 1; +#endif + } + if (cap_auth_algo == + RTE_CRYPTO_AUTH_SHA256_HMAC) { + auths->bit.sha256_hmac = 1; +#if ODP_DEPRECATED_API + auths->bit.sha256_128 = 1; +#endif + } + if (cap_auth_algo == + RTE_CRYPTO_AUTH_SHA1_HMAC) { + auths->bit.sha1_hmac = 1; + } + if (cap_auth_algo == + RTE_CRYPTO_AUTH_SHA512_HMAC) { + auths->bit.sha512_hmac = 1; + } + } + cap = &dev_info.capabilities[++i]; + } + } + + /* Read from the device with the lowest max_nb_sessions */ + if (capability->max_sessions > dev_info.sym.max_nb_sessions) + capability->max_sessions = dev_info.sym.max_nb_sessions; + + if (capability->max_sessions == 0) + capability->max_sessions = dev_info.sym.max_nb_sessions; + } + + /* Make sure the session count doesn't exceed MAX_SESSIONS */ + if (capability->max_sessions > MAX_SESSIONS) + capability->max_sessions = MAX_SESSIONS; + + return 0; +} + +int odp_crypto_cipher_capability(odp_cipher_alg_t cipher, + odp_crypto_cipher_capability_t dst[], + int num_copy) +{ + odp_crypto_cipher_capability_t src[num_copy]; + int idx = 0, rc = 0; + int size = sizeof(odp_crypto_cipher_capability_t); + + uint8_t i, cdev_id, cdev_count; + const struct rte_cryptodev_capabilities *cap; + enum rte_crypto_cipher_algorithm cap_cipher_algo; + struct rte_crypto_sym_xform cipher_xform; + + rc = cipher_alg_odp_to_rte(cipher, &cipher_xform); + + /* Check result */ + if (rc) + return -1; + + cdev_count = rte_cryptodev_count(); + if (cdev_count == 0) { + ODP_ERR("No crypto devices available\n"); + return -1; + } + + for (cdev_id = 0; cdev_id < cdev_count; cdev_id++) { + struct rte_cryptodev_info dev_info; + + rte_cryptodev_info_get(cdev_id, &dev_info); + i = 0; + cap = &dev_info.capabilities[i]; + while (cap->op != RTE_CRYPTO_OP_TYPE_UNDEFINED) { + cap_cipher_algo = cap->sym.cipher.algo; + if (cap->sym.xform_type == + RTE_CRYPTO_SYM_XFORM_CIPHER) { + if (cap_cipher_algo == cipher_xform.cipher.algo) + break; + } + cap = &dev_info.capabilities[++i]; + } + + if (cap->op == RTE_CRYPTO_OP_TYPE_UNDEFINED) + continue; + + uint32_t key_size_min = cap->sym.cipher.key_size.min; + uint32_t key_size_max = cap->sym.cipher.key_size.max; + uint32_t key_inc = cap->sym.cipher.key_size.increment; + uint32_t iv_size_max = cap->sym.cipher.iv_size.max; + uint32_t iv_size_min = cap->sym.cipher.iv_size.min; + uint32_t iv_inc = cap->sym.cipher.iv_size.increment; + + for (uint32_t key_len = key_size_min; key_len <= key_size_max; + key_len += key_inc) { + for (uint32_t iv_size = iv_size_min; + iv_size <= iv_size_max; iv_size += iv_inc) { + src[idx].key_len = key_len; + src[idx].iv_len = iv_size; + idx++; + if (iv_inc == 0) + break; + } + + if (key_inc == 0) + break; + } + } + + if (idx < num_copy) + num_copy = idx; + + memcpy(dst, src, num_copy * size); + + return idx; +} + +int odp_crypto_auth_capability(odp_auth_alg_t auth, + odp_crypto_auth_capability_t dst[], + int num_copy) +{ + odp_crypto_auth_capability_t src[num_copy]; + int idx = 0, rc = 0; + int size = sizeof(odp_crypto_auth_capability_t); + + uint8_t i, cdev_id, 
cdev_count; + const struct rte_cryptodev_capabilities *cap; + enum rte_crypto_auth_algorithm cap_auth_algo; + struct rte_crypto_sym_xform auth_xform; + + rc = auth_alg_odp_to_rte(auth, &auth_xform); + + /* Check result */ + if (rc) + return -1; + + cdev_count = rte_cryptodev_count(); + if (cdev_count == 0) { + ODP_ERR("No crypto devices available\n"); + return -1; + } + + for (cdev_id = 0; cdev_id < cdev_count; cdev_id++) { + struct rte_cryptodev_info dev_info; + + rte_cryptodev_info_get(cdev_id, &dev_info); + i = 0; + cap = &dev_info.capabilities[i]; + while (cap->op != RTE_CRYPTO_OP_TYPE_UNDEFINED) { + cap_auth_algo = cap->sym.auth.algo; + if (cap->sym.xform_type == + RTE_CRYPTO_SYM_XFORM_CIPHER) { + if (cap_auth_algo == auth_xform.auth.algo) + break; + } + cap = &dev_info.capabilities[++i]; + } + + if (cap->op == RTE_CRYPTO_OP_TYPE_UNDEFINED) + continue; + + uint8_t key_size_min = cap->sym.auth.key_size.min; + uint8_t key_size_max = cap->sym.auth.key_size.max; + uint8_t increment = cap->sym.auth.key_size.increment; + uint8_t digest_size_max = cap->sym.auth.digest_size.max; + + if (key_size_min == key_size_max) { + src[idx].key_len = key_size_min; + src[idx].digest_len = digest_size_max; + src[idx].aad_len.min = cap->sym.auth.aad_size.min; + src[idx].aad_len.max = cap->sym.auth.aad_size.max; + src[idx].aad_len.inc = cap->sym.auth.aad_size.increment; + idx++; + } else { + for (uint8_t key_len = key_size_min; + key_len <= key_size_max; + key_len += increment) { + idx = (key_len - key_size_min) / increment; + src[idx].key_len = key_len; + src[idx].digest_len = digest_size_max; + src[idx].aad_len.min = + cap->sym.auth.aad_size.min; + src[idx].aad_len.max = + cap->sym.auth.aad_size.max; + src[idx].aad_len.inc = + cap->sym.auth.aad_size.increment; + idx++; + } + } + } + + if (idx < num_copy) + num_copy = idx; + + memcpy(dst, src, num_copy * size); + + return idx; +} + +static int get_crypto_dev(struct rte_crypto_sym_xform *cipher_xform, + struct rte_crypto_sym_xform *auth_xform, + uint16_t iv_length, uint8_t *dev_id) +{ + uint8_t cdev_id, id; + const struct rte_cryptodev_capabilities *cap; + enum rte_crypto_cipher_algorithm cap_cipher_algo; + enum rte_crypto_auth_algorithm cap_auth_algo; + enum rte_crypto_cipher_algorithm app_cipher_algo; + enum rte_crypto_auth_algorithm app_auth_algo; + + for (id = 0; id < global->enabled_crypto_devs; id++) { + struct rte_cryptodev_info dev_info; + int i = 0; + + cdev_id = global->enabled_crypto_dev_ids[id]; + rte_cryptodev_info_get(cdev_id, &dev_info); + app_cipher_algo = cipher_xform->cipher.algo; + cap = &dev_info.capabilities[i]; + while (cap->op != RTE_CRYPTO_OP_TYPE_UNDEFINED) { + cap_cipher_algo = cap->sym.cipher.algo; + if (cap->sym.xform_type == + RTE_CRYPTO_SYM_XFORM_CIPHER) { + if (cap_cipher_algo == app_cipher_algo) + break; + } + cap = &dev_info.capabilities[++i]; + } + + if (cap->op == RTE_CRYPTO_OP_TYPE_UNDEFINED) + continue; + + /* Check if key size is supported by the algorithm. */ + if (cipher_xform->cipher.key.length) { + if (is_valid_size(cipher_xform->cipher.key.length, + cap->sym.cipher.key_size.min, + cap->sym.cipher.key_size.max, + cap->sym.cipher.key_size. + increment) != 0) { + ODP_ERR("Unsupported cipher key length\n"); + return -1; + } + /* No size provided, use minimum size. */ + } else + cipher_xform->cipher.key.length = + cap->sym.cipher.key_size.min; + + /* Check if iv length is supported by the algorithm. 
*/ + if (iv_length) { + if (is_valid_size(iv_length, + cap->sym.cipher.iv_size.min, + cap->sym.cipher.iv_size.max, + cap->sym.cipher.iv_size. + increment) != 0) { + ODP_ERR("Unsupported iv length\n"); + return -1; + } + } + + i = 0; + app_auth_algo = auth_xform->auth.algo; + cap = &dev_info.capabilities[i]; + while (cap->op != RTE_CRYPTO_OP_TYPE_UNDEFINED) { + cap_auth_algo = cap->sym.auth.algo; + if ((cap->sym.xform_type == + RTE_CRYPTO_SYM_XFORM_AUTH) & + (cap_auth_algo == app_auth_algo)) { + break; + } + + cap = &dev_info.capabilities[++i]; + } + + if (cap->op == RTE_CRYPTO_OP_TYPE_UNDEFINED) + continue; + + /* Check if key size is supported by the algorithm. */ + if (auth_xform->auth.key.length) { + if (is_valid_size(auth_xform->auth.key.length, + cap->sym.auth.key_size.min, + cap->sym.auth.key_size.max, + cap->sym.auth.key_size. + increment) != 0) { + ODP_ERR("Unsupported auth key length\n"); + return -1; + } + /* No size provided, use minimum size. */ + } else + auth_xform->auth.key.length = + cap->sym.auth.key_size.min; + + /* Check if digest size is supported by the algorithm. */ + if (auth_xform->auth.digest_length) { + if (is_valid_size(auth_xform->auth.digest_length, + cap->sym.auth.digest_size.min, + cap->sym.auth.digest_size.max, + cap->sym.auth.digest_size. + increment) != 0) { + ODP_ERR("Unsupported digest length\n"); + return -1; + } + /* No size provided, use minimum size. */ + } else + auth_xform->auth.digest_length = + cap->sym.auth.digest_size.min; + + memcpy(dev_id, &cdev_id, sizeof(cdev_id)); + return 0; + } + + return -1; +} + +int odp_crypto_session_create(odp_crypto_session_param_t *param, + odp_crypto_session_t *session_out, + odp_crypto_ses_create_err_t *status) +{ + int rc = 0; + uint8_t cdev_id = 0; + struct rte_crypto_sym_xform cipher_xform; + struct rte_crypto_sym_xform auth_xform; + struct rte_crypto_sym_xform *first_xform; + struct rte_cryptodev_sym_session *session; + crypto_session_entry_t *entry; + + *session_out = ODP_CRYPTO_SESSION_INVALID; + + if (rte_cryptodev_count() == 0) { + ODP_ERR("No crypto devices available\n"); + return -1; + } + + /* Allocate memory for this session */ + entry = alloc_session(); + if (entry == NULL) { + ODP_ERR("Failed to allocate a session entry"); + return -1; + } + + /* Copy parameters */ + entry->p = *param; + + /* Default to successful result */ + *status = ODP_CRYPTO_SES_CREATE_ERR_NONE; + + /* Cipher Data */ + cipher_xform.cipher.key.data = rte_malloc("crypto key", + param->cipher_key.length, 0); + if (cipher_xform.cipher.key.data == NULL) { + ODP_ERR("Failed to allocate memory for cipher key\n"); + /* remove the crypto_session_entry_t */ + memset(entry, 0, sizeof(*entry)); + free_session(entry); + return -1; + } + + cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER; + cipher_xform.next = NULL; + cipher_xform.cipher.key.length = param->cipher_key.length; + memcpy(cipher_xform.cipher.key.data, + param->cipher_key.data, + param->cipher_key.length); + + /* Authentication Data */ + auth_xform.auth.key.data = rte_malloc("auth key", + param->auth_key.length, 0); + if (auth_xform.auth.key.data == NULL) { + ODP_ERR("Failed to allocate memory for auth key\n"); + /* remove the crypto_session_entry_t */ + memset(entry, 0, sizeof(*entry)); + free_session(entry); + return -1; + } + auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH; + auth_xform.next = NULL; + auth_xform.auth.key.length = param->auth_key.length; + memcpy(auth_xform.auth.key.data, + param->auth_key.data, + param->auth_key.length); + + /* Derive order */ + if 
(ODP_CRYPTO_OP_ENCODE == param->op) + entry->do_cipher_first = param->auth_cipher_text; + else + entry->do_cipher_first = !param->auth_cipher_text; + + /* Process based on cipher */ + /* Derive order */ + if (entry->do_cipher_first) { + cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT; + auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE; + first_xform = &cipher_xform; + first_xform->next = &auth_xform; + } else { + cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT; + auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY; + first_xform = &auth_xform; + first_xform->next = &cipher_xform; + } + + rc = cipher_alg_odp_to_rte(param->cipher_alg, &cipher_xform); + + /* Check result */ + if (rc) { + *status = ODP_CRYPTO_SES_CREATE_ERR_INV_CIPHER; + /* remove the crypto_session_entry_t */ + memset(entry, 0, sizeof(*entry)); + free_session(entry); + return -1; + } + + rc = auth_alg_odp_to_rte(param->auth_alg, &auth_xform); + + /* Check result */ + if (rc) { + *status = ODP_CRYPTO_SES_CREATE_ERR_INV_AUTH; + /* remove the crypto_session_entry_t */ + memset(entry, 0, sizeof(*entry)); + free_session(entry); + return -1; + } + + rc = get_crypto_dev(&cipher_xform, + &auth_xform, + param->iv.length, + &cdev_id); + + if (rc) { + ODP_ERR("Couldn't find a crypto device"); + /* remove the crypto_session_entry_t */ + memset(entry, 0, sizeof(*entry)); + free_session(entry); + return -1; + } + + /* Setup session */ + session = rte_cryptodev_sym_session_create(cdev_id, first_xform); + + if (session == NULL) { + /* remove the crypto_session_entry_t */ + memset(entry, 0, sizeof(*entry)); + free_session(entry); + return -1; + } + + entry->rte_session = (intptr_t)session; + entry->cipher_xform = cipher_xform; + entry->auth_xform = auth_xform; + entry->iv.length = param->iv.length; + entry->iv.data = param->iv.data; + + /* We're happy */ + *session_out = (intptr_t)entry; + + return 0; +} + +int odp_crypto_session_destroy(odp_crypto_session_t session) +{ + struct rte_cryptodev_sym_session *rte_session = NULL; + crypto_session_entry_t *entry; + + entry = (crypto_session_entry_t *)session; + + rte_session = + (struct rte_cryptodev_sym_session *) + (intptr_t)entry->rte_session; + + rte_session = rte_cryptodev_sym_session_free(rte_session->dev_id, + rte_session); + + if (rte_session != NULL) + return -1; + + /* remove the crypto_session_entry_t */ + memset(entry, 0, sizeof(*entry)); + free_session(entry); + + return 0; +} + +int odp_crypto_operation(odp_crypto_op_param_t *param, + odp_bool_t *posted, + odp_crypto_op_result_t *result) +{ + odp_crypto_packet_op_param_t packet_param; + odp_packet_t out_pkt = param->out_pkt; + odp_crypto_packet_result_t packet_result; + odp_crypto_op_result_t local_result; + int rc; + + packet_param.session = param->session; + packet_param.override_iv_ptr = param->override_iv_ptr; + packet_param.hash_result_offset = param->hash_result_offset; + packet_param.aad.ptr = param->aad.ptr; + packet_param.aad.length = param->aad.length; + packet_param.cipher_range = param->cipher_range; + packet_param.auth_range = param->auth_range; + + rc = odp_crypto_op(&param->pkt, &out_pkt, &packet_param, 1); + if (rc < 0) + return rc; + + rc = odp_crypto_result(&packet_result, out_pkt); + if (rc < 0) + return rc; + + /* Indicate to caller operation was sync */ + *posted = 0; + + _odp_buffer_event_subtype_set(packet_to_buffer(out_pkt), + ODP_EVENT_PACKET_BASIC); + + /* Fill in result */ + local_result.ctx = param->ctx; + local_result.pkt = out_pkt; + local_result.cipher_status = packet_result.cipher_status; + local_result.auth_status = packet_result.auth_status; + local_result.ok = packet_result.ok; + + /* + * Be bug-to-bug
compatible. Return output packet also through params. + */ + param->out_pkt = out_pkt; + + *result = local_result; + + return 0; +} + +int odp_crypto_term_global(void) +{ + int rc = 0; + int ret; + int count = 0; + crypto_session_entry_t *session; + + odp_spinlock_init(&global->lock); + odp_spinlock_lock(&global->lock); + for (session = global->free; session != NULL; session = session->next) + count++; + if (count != MAX_SESSIONS) { + ODP_ERR("crypto sessions still active\n"); + rc = -1; + } + + if (global->crypto_op_pool != NULL) + rte_mempool_free(global->crypto_op_pool); + + odp_spinlock_unlock(&global->lock); + + ret = odp_shm_free(crypto_global_shm); + if (ret < 0) { + ODP_ERR("shm free failed for crypto_pool\n"); + rc = -1; + } + + return rc; +} + +odp_random_kind_t odp_random_max_kind(void) +{ + return ODP_RANDOM_CRYPTO; +} + +int32_t odp_random_data(uint8_t *buf, uint32_t len, odp_random_kind_t kind) +{ + int rc; + + switch (kind) { + case ODP_RANDOM_BASIC: + RAND_pseudo_bytes(buf, len); + return len; + + case ODP_RANDOM_CRYPTO: + rc = RAND_bytes(buf, len); + return (1 == rc) ? (int)len /*success*/: -1 /*failure*/; + + case ODP_RANDOM_TRUE: + default: + return -1; + } +} + +int32_t odp_random_test_data(uint8_t *buf, uint32_t len, uint64_t *seed) +{ + union { + uint32_t rand_word; + uint8_t rand_byte[4]; + } u; + uint32_t i = 0, j; + uint32_t seed32 = (*seed) & 0xffffffff; + + while (i < len) { + u.rand_word = rand_r(&seed32); + + for (j = 0; j < 4 && i < len; j++, i++) + *buf++ = u.rand_byte[j]; + } + + *seed = seed32; + return len; +} + +odp_crypto_compl_t odp_crypto_compl_from_event(odp_event_t ev) +{ + /* This check not mandated by the API specification */ + if (odp_event_type(ev) != ODP_EVENT_CRYPTO_COMPL) + ODP_ABORT("Event not a crypto completion"); + return (odp_crypto_compl_t)ev; +} + +odp_event_t odp_crypto_compl_to_event(odp_crypto_compl_t completion_event) +{ + return (odp_event_t)completion_event; +} + +void odp_crypto_compl_result(odp_crypto_compl_t completion_event, + odp_crypto_op_result_t *result) +{ + (void)completion_event; + (void)result; + + /* We won't get such events anyway, so there can be no result */ + ODP_ASSERT(0); +} + +void odp_crypto_compl_free(odp_crypto_compl_t completion_event) +{ + odp_event_t ev = odp_crypto_compl_to_event(completion_event); + + odp_buffer_free(odp_buffer_from_event(ev)); +} + +void odp_crypto_session_param_init(odp_crypto_session_param_t *param) +{ + memset(param, 0, sizeof(odp_crypto_session_param_t)); +} + +uint64_t odp_crypto_session_to_u64(odp_crypto_session_t hdl) +{ + return (uint64_t)hdl; +} + +uint64_t odp_crypto_compl_to_u64(odp_crypto_compl_t hdl) +{ + return _odp_pri(hdl); +} + +odp_packet_t odp_crypto_packet_from_event(odp_event_t ev) +{ + /* This check not mandated by the API specification */ + ODP_ASSERT(odp_event_type(ev) == ODP_EVENT_PACKET); + ODP_ASSERT(odp_event_subtype(ev) == ODP_EVENT_PACKET_CRYPTO); + + return odp_packet_from_event(ev); +} + +odp_event_t odp_crypto_packet_to_event(odp_packet_t pkt) +{ + return odp_packet_to_event(pkt); +} + +static +odp_crypto_packet_result_t *get_op_result_from_packet(odp_packet_t pkt) +{ + odp_packet_hdr_t *hdr = odp_packet_hdr(pkt); + + return &hdr->crypto_op_result; +} + +int odp_crypto_result(odp_crypto_packet_result_t *result, + odp_packet_t packet) +{ + odp_crypto_packet_result_t *op_result; + + ODP_ASSERT(odp_event_subtype(odp_packet_to_event(packet)) == + ODP_EVENT_PACKET_CRYPTO); + + op_result = get_op_result_from_packet(packet); + + memcpy(result, op_result, 
sizeof(*result)); + + return 0; +} + +static +int odp_crypto_int(odp_packet_t pkt_in, + odp_packet_t *pkt_out, + const odp_crypto_packet_op_param_t *param) +{ + crypto_session_entry_t *entry; + odp_crypto_packet_result_t local_result; + odp_crypto_alg_err_t rc_cipher = ODP_CRYPTO_ALG_ERR_NONE; + odp_crypto_alg_err_t rc_auth = ODP_CRYPTO_ALG_ERR_NONE; + struct rte_crypto_sym_xform cipher_xform; + struct rte_crypto_sym_xform auth_xform; + struct rte_cryptodev_sym_session *rte_session = NULL; + uint8_t *data_addr, *aad_head; + struct rte_crypto_op *op; + uint32_t aad_len; + odp_bool_t allocated = false; + odp_packet_t out_pkt = *pkt_out; + odp_crypto_packet_result_t *op_result; + uint16_t rc; + + entry = (crypto_session_entry_t *)(intptr_t)param->session; + if (entry == NULL) + return -1; + + rte_session = + (struct rte_cryptodev_sym_session *) + (intptr_t)entry->rte_session; + + if (rte_session == NULL) + return -1; + + cipher_xform = entry->cipher_xform; + auth_xform = entry->auth_xform; + + /* Resolve output buffer */ + if (ODP_PACKET_INVALID == out_pkt && + ODP_POOL_INVALID != entry->p.output_pool) { + out_pkt = odp_packet_alloc(entry->p.output_pool, + odp_packet_len(pkt_in)); + allocated = true; + } + + if (pkt_in != out_pkt) { + if (odp_unlikely(ODP_PACKET_INVALID == out_pkt)) + ODP_ABORT(); + int ret; + + ret = odp_packet_copy_from_pkt(out_pkt, + 0, + pkt_in, + 0, + odp_packet_len(pkt_in)); + if (odp_unlikely(ret < 0)) + goto err; + + _odp_packet_copy_md_to_packet(pkt_in, out_pkt); + odp_packet_free(pkt_in); + pkt_in = ODP_PACKET_INVALID; + } + + data_addr = odp_packet_data(out_pkt); + + odp_spinlock_init(&global->lock); + odp_spinlock_lock(&global->lock); + op = rte_crypto_op_alloc(global->crypto_op_pool, + RTE_CRYPTO_OP_TYPE_SYMMETRIC); + if (op == NULL) { + ODP_ERR("Failed to allocate crypto operation"); + goto err; + } + + odp_spinlock_unlock(&global->lock); + + /* Set crypto operation data parameters */ + rte_crypto_op_attach_sym_session(op, rte_session); + op->sym->auth.digest.data = data_addr + param->hash_result_offset; + op->sym->auth.digest.phys_addr = + rte_pktmbuf_mtophys_offset((struct rte_mbuf *)out_pkt, + odp_packet_len(out_pkt) - + auth_xform.auth.digest_length); + op->sym->auth.digest.length = auth_xform.auth.digest_length; + + /* For SNOW3G algorithms, offset/length must be in bits */ + if (auth_xform.auth.algo == RTE_CRYPTO_AUTH_SNOW3G_UIA2) { + op->sym->auth.data.offset = param->auth_range.offset << 3; + op->sym->auth.data.length = param->auth_range.length << 3; + } else { + op->sym->auth.data.offset = param->auth_range.offset; + op->sym->auth.data.length = param->auth_range.length; + } + + aad_head = param->aad.ptr; + aad_len = param->aad.length; + + if (aad_len > 0) { + op->sym->auth.aad.data = rte_malloc("aad", aad_len, 0); + if (op->sym->auth.aad.data == NULL) { + rte_crypto_op_free(op); + ODP_ERR("Failed to allocate memory for AAD"); + goto err; + } + + memcpy(op->sym->auth.aad.data, aad_head, aad_len); + op->sym->auth.aad.phys_addr = + rte_malloc_virt2phy(op->sym->auth.aad.data); + op->sym->auth.aad.length = aad_len; + } + + if (entry->iv.length == 0) { + rte_crypto_op_free(op); + ODP_ERR("Wrong IV length"); + goto err; + } + + op->sym->cipher.iv.data = rte_malloc("iv", entry->iv.length, 0); + if (op->sym->cipher.iv.data == NULL) { + rte_crypto_op_free(op); + ODP_ERR("Failed to allocate memory for IV"); + goto err; + } + + if (param->override_iv_ptr) { + memcpy(op->sym->cipher.iv.data, + param->override_iv_ptr, + entry->iv.length); + } else if (entry->iv.data) 
{ + memcpy(op->sym->cipher.iv.data, + entry->iv.data, + entry->iv.length); + + op->sym->cipher.iv.phys_addr = + rte_malloc_virt2phy(op->sym->cipher.iv.data); + op->sym->cipher.iv.length = entry->iv.length; + } else { + rc_cipher = ODP_CRYPTO_ALG_ERR_IV_INVALID; + } + + /* For SNOW3G algorithms, offset/length must be in bits */ + if (cipher_xform.cipher.algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2) { + op->sym->cipher.data.offset = param->cipher_range.offset << 3; + op->sym->cipher.data.length = param->cipher_range.length << 3; + + } else { + op->sym->cipher.data.offset = param->cipher_range.offset; + op->sym->cipher.data.length = param->cipher_range.length; + } + + if (rc_cipher == ODP_CRYPTO_ALG_ERR_NONE && + rc_auth == ODP_CRYPTO_ALG_ERR_NONE) { + int queue_pair = odp_cpu_id(); + + op->sym->m_src = (struct rte_mbuf *)out_pkt; + rc = rte_cryptodev_enqueue_burst(rte_session->dev_id, + queue_pair, &op, 1); + if (rc == 0) { + rte_crypto_op_free(op); + ODP_ERR("Failed to enqueue packet"); + goto err; + } + + rc = rte_cryptodev_dequeue_burst(rte_session->dev_id, + queue_pair, &op, 1); + + if (rc == 0) { + rte_crypto_op_free(op); + ODP_ERR("Failed to dequeue packet"); + goto err; + } + + out_pkt = (odp_packet_t)op->sym->m_src; + } + + /* Fill in result */ + local_result.cipher_status.alg_err = rc_cipher; + local_result.cipher_status.hw_err = ODP_CRYPTO_HW_ERR_NONE; + local_result.auth_status.alg_err = rc_auth; + local_result.auth_status.hw_err = ODP_CRYPTO_HW_ERR_NONE; + local_result.ok = + (rc_cipher == ODP_CRYPTO_ALG_ERR_NONE) && + (rc_auth == ODP_CRYPTO_ALG_ERR_NONE); + + _odp_buffer_event_subtype_set(packet_to_buffer(out_pkt), + ODP_EVENT_PACKET_BASIC); + op_result = get_op_result_from_packet(out_pkt); + *op_result = local_result; + + rte_crypto_op_free(op); + + /* Synchronous, simply return results */ + *pkt_out = out_pkt; + + return 0; + +err: + if (allocated) { + odp_packet_free(out_pkt); + out_pkt = ODP_PACKET_INVALID; + } + + return -1; +} + +int odp_crypto_op(const odp_packet_t pkt_in[], + odp_packet_t pkt_out[], + const odp_crypto_packet_op_param_t param[], + int num_pkt) +{ + crypto_session_entry_t *entry; + int i, rc; + + entry = (crypto_session_entry_t *)(intptr_t)param->session; + ODP_ASSERT(ODP_CRYPTO_SYNC == entry->p.op_mode); + + for (i = 0; i < num_pkt; i++) { + rc = odp_crypto_int(pkt_in[i], &pkt_out[i], &param[i]); + if (rc < 0) + break; + } + + return i; +} + +int odp_crypto_op_enq(const odp_packet_t pkt_in[], + const odp_packet_t pkt_out[], + const odp_crypto_packet_op_param_t param[], + int num_pkt) +{ + odp_packet_t pkt; + odp_event_t event; + crypto_session_entry_t *entry; + int i, rc; + + entry = (crypto_session_entry_t *)(intptr_t)param->session; + ODP_ASSERT(ODP_CRYPTO_ASYNC == entry->p.op_mode); + ODP_ASSERT(ODP_QUEUE_INVALID != entry->p.compl_queue); + + for (i = 0; i < num_pkt; i++) { + pkt = pkt_out[i]; + rc = odp_crypto_int(pkt_in[i], &pkt, &param[i]); + if (rc < 0) + break; + + event = odp_packet_to_event(pkt); + if (odp_queue_enq(entry->p.compl_queue, event)) { + odp_event_free(event); + break; + } + } + + return i; +} diff --cc platform/linux-dpdk/odp_init.c index 1d1c451a,fdccac7c..81b60898 --- a/platform/linux-dpdk/odp_init.c +++ b/platform/linux-dpdk/odp_init.c @@@ -4,7 -4,8 +4,9 @@@ * SPDX-License-Identifier: BSD-3-Clause */
+ #include "config.h" + +#include <odp_posix_extensions.h> #include <odp/api/init.h> #include <odp_debug_internal.h> #include <odp/api/debug.h> diff --cc platform/linux-dpdk/odp_packet.c index b2860a77,00000000..cdfe2815 mode 100644,000000..100644 --- a/platform/linux-dpdk/odp_packet.c +++ b/platform/linux-dpdk/odp_packet.c @@@ -1,1508 -1,0 +1,1510 @@@ +/* Copyright (c) 2013, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + ++#include <config.h> ++ +#include <odp/api/plat/packet_inlines.h> +#include <odp/api/packet.h> +#include <odp_packet_internal.h> +#include <odp_debug_internal.h> +#include <odp/api/hints.h> +#include <odp/api/byteorder.h> + +#include <protocols/eth.h> +#include <protocols/ip.h> +#include <protocols/tcp.h> +#include <protocols/udp.h> + +#include <string.h> +#include <stdio.h> +#include <stddef.h> +#include <inttypes.h> + +#include <odp/visibility_begin.h> + +/* Fill in packet header field offsets for inline functions */ + +const _odp_packet_inline_offset_t _odp_packet_inline ODP_ALIGNED_CACHE = { + .mb = offsetof(odp_packet_hdr_t, buf_hdr.mb), + .pool = offsetof(odp_packet_hdr_t, buf_hdr.pool_hdl), + .input = offsetof(odp_packet_hdr_t, input), + .user_ptr = offsetof(odp_packet_hdr_t, buf_hdr.buf_ctx), + .timestamp = offsetof(odp_packet_hdr_t, timestamp), + .input_flags = offsetof(odp_packet_hdr_t, p.input_flags), + .buf_addr = offsetof(odp_packet_hdr_t, buf_hdr.mb) + + offsetof(const struct rte_mbuf, buf_addr), + .data = offsetof(odp_packet_hdr_t, buf_hdr.mb) + + offsetof(struct rte_mbuf, data_off), + .pkt_len = offsetof(odp_packet_hdr_t, buf_hdr.mb) + + (size_t)&rte_pktmbuf_pkt_len((struct rte_mbuf *)0), + .seg_len = offsetof(odp_packet_hdr_t, buf_hdr.mb) + + (size_t)&rte_pktmbuf_data_len((struct rte_mbuf *)0), + .nb_segs = offsetof(odp_packet_hdr_t, buf_hdr.mb) + + offsetof(struct rte_mbuf, nb_segs), + .udata_len = offsetof(odp_packet_hdr_t, uarea_size), + .udata = sizeof(odp_packet_hdr_t), + .rss = offsetof(odp_packet_hdr_t, buf_hdr.mb) + + offsetof(struct rte_mbuf, hash.rss), + .ol_flags = offsetof(odp_packet_hdr_t, buf_hdr.mb) + + offsetof(struct rte_mbuf, ol_flags), + .rss_flag = PKT_RX_RSS_HASH +}; + +#include <odp/visibility_end.h> + +struct rte_mbuf dummy; +ODP_STATIC_ASSERT(sizeof(dummy.data_off) == sizeof(uint16_t), + "data_off should be uint16_t"); +ODP_STATIC_ASSERT(sizeof(dummy.pkt_len) == sizeof(uint32_t), + "pkt_len should be uint32_t"); +ODP_STATIC_ASSERT(sizeof(dummy.data_len) == sizeof(uint16_t), + "data_len should be uint16_t"); +ODP_STATIC_ASSERT(sizeof(dummy.hash.rss) == sizeof(uint32_t), + "hash.rss should be uint32_t"); +ODP_STATIC_ASSERT(sizeof(dummy.ol_flags) == sizeof(uint64_t), + "ol_flags should be uint64_t"); +/* + * + * Alloc and free + * ******************************************************** + * + */ + +static inline odp_buffer_t buffer_handle(odp_packet_hdr_t *pkt_hdr) +{ + return pkt_hdr->buf_hdr.handle.handle; +} + +static inline odp_packet_hdr_t *buf_to_packet_hdr(odp_buffer_t buf) +{ + return (odp_packet_hdr_t *)buf_hdl_to_hdr(buf); +} + +static odp_packet_t packet_alloc(odp_pool_t pool_hdl, uint32_t len) +{ + pool_entry_dp_t *pool_dp; + odp_packet_t pkt; + uintmax_t totsize = RTE_PKTMBUF_HEADROOM + len; + odp_packet_hdr_t *pkt_hdr; + struct rte_mbuf *mbuf; + + ODP_ASSERT(odp_pool_to_entry_cp(pool_hdl)->params.type + == ODP_POOL_PACKET); + + pool_dp = odp_pool_to_entry_dp(pool_hdl); + + mbuf = rte_pktmbuf_alloc(pool_dp->rte_mempool); + if (mbuf == NULL) { + rte_errno = ENOMEM; + return 
ODP_PACKET_INVALID; + } + pkt_hdr = (odp_packet_hdr_t *)mbuf; + pkt_hdr->buf_hdr.totsize = mbuf->buf_len; + + if (mbuf->buf_len < totsize) { + intmax_t needed = totsize - mbuf->buf_len; + struct rte_mbuf *curseg = mbuf; + + do { + struct rte_mbuf *nextseg = + rte_pktmbuf_alloc(pool_dp->rte_mempool); + + if (nextseg == NULL) { + rte_pktmbuf_free(mbuf); + return ODP_PACKET_INVALID; + } + + curseg->next = nextseg; + curseg = nextseg; + curseg->data_off = 0; + pkt_hdr->buf_hdr.totsize += curseg->buf_len; + needed -= curseg->buf_len; + } while (needed > 0); + } + + pkt = (odp_packet_t)mbuf; + + if (odp_packet_reset(pkt, len) != 0) + return ODP_PACKET_INVALID; + + return pkt; +} + +odp_packet_t odp_packet_alloc(odp_pool_t pool_hdl, uint32_t len) +{ + return packet_alloc(pool_hdl, len); +} + +int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, + odp_packet_t pkt[], int num) +{ + int i; + + for (i = 0; i < num; i++) { + pkt[i] = packet_alloc(pool_hdl, len); + if (pkt[i] == ODP_PACKET_INVALID) + return rte_errno == ENOMEM ? i : -EINVAL; + } + return i; +} + +void odp_packet_free(odp_packet_t pkt) +{ + struct rte_mbuf *mbuf = (struct rte_mbuf *)pkt; + rte_pktmbuf_free(mbuf); +} + +void odp_packet_free_multi(const odp_packet_t pkt[], int num) +{ + int i; + + for (i = 0; i < num; i++) { + struct rte_mbuf *mbuf = (struct rte_mbuf *)pkt[i]; + + rte_pktmbuf_free(mbuf); + } +} + +int odp_packet_reset(odp_packet_t pkt, uint32_t len) +{ + odp_packet_hdr_t *const pkt_hdr = odp_packet_hdr(pkt); + struct rte_mbuf *ms, *mb = &pkt_hdr->buf_hdr.mb; + uint8_t nb_segs = 0; + int32_t lenleft = len; + + if (RTE_PKTMBUF_HEADROOM + len > odp_packet_buf_len(pkt)) { + ODP_DBG("Not enought head room for that packet %d/%d\n", + RTE_PKTMBUF_HEADROOM + len, + odp_packet_buf_len(pkt)); + return -1; + } + + pkt_hdr->p.input_flags.all = 0; + pkt_hdr->p.output_flags.all = 0; + pkt_hdr->p.error_flags.all = 0; + + pkt_hdr->p.l2_offset = 0; + pkt_hdr->p.l3_offset = ODP_PACKET_OFFSET_INVALID; + pkt_hdr->p.l4_offset = ODP_PACKET_OFFSET_INVALID; + + pkt_hdr->buf_hdr.next = NULL; + + pkt_hdr->input = ODP_PKTIO_INVALID; + pkt_hdr->buf_hdr.event_subtype = ODP_EVENT_PACKET_BASIC; + + mb->port = 0xff; + mb->pkt_len = len; + mb->data_off = RTE_PKTMBUF_HEADROOM; + mb->vlan_tci = 0; + nb_segs = 1; + + if (RTE_PKTMBUF_HEADROOM + lenleft <= mb->buf_len) { + mb->data_len = lenleft; + } else { + mb->data_len = mb->buf_len - RTE_PKTMBUF_HEADROOM; + lenleft -= mb->data_len; + ms = mb->next; + while (lenleft > 0) { + nb_segs++; + ms->data_len = lenleft <= ms->buf_len ? 
+ lenleft : ms->buf_len; + lenleft -= ms->buf_len; + ms = ms->next; + } + } + + mb->nb_segs = nb_segs; + return 0; +} + +odp_packet_t _odp_packet_from_buf_hdr(odp_buffer_hdr_t *buf_hdr) +{ + return (odp_packet_t)buf_hdr; +} + +odp_packet_t odp_packet_from_event(odp_event_t ev) +{ + if (odp_unlikely(ev == ODP_EVENT_INVALID)) + return ODP_PACKET_INVALID; + + return (odp_packet_t)buf_to_packet_hdr((odp_buffer_t)ev); +} + +odp_event_t odp_packet_to_event(odp_packet_t pkt) +{ + if (odp_unlikely(pkt == ODP_PACKET_INVALID)) + return ODP_EVENT_INVALID; + + return (odp_event_t)buffer_handle(odp_packet_hdr(pkt)); +} + +uint32_t odp_packet_buf_len(odp_packet_t pkt) +{ + return odp_packet_hdr(pkt)->buf_hdr.totsize; +} + +void *odp_packet_tail(odp_packet_t pkt) +{ + struct rte_mbuf *mb = &(odp_packet_hdr(pkt)->buf_hdr.mb); + mb = rte_pktmbuf_lastseg(mb); + return (void *)(rte_pktmbuf_mtod(mb, char *) + mb->data_len); +} + +void *odp_packet_push_head(odp_packet_t pkt, uint32_t len) +{ + struct rte_mbuf *mb = &(odp_packet_hdr(pkt)->buf_hdr.mb); + return (void *)rte_pktmbuf_prepend(mb, len); +} + +static void _copy_head_metadata(struct rte_mbuf *newhead, + struct rte_mbuf *oldhead) +{ + odp_packet_t pkt = (odp_packet_t)newhead; + uint32_t saved_index = odp_packet_hdr(pkt)->buf_hdr.index; + + rte_mbuf_refcnt_set(newhead, rte_mbuf_refcnt_read(oldhead)); + newhead->port = oldhead->port; + newhead->ol_flags = oldhead->ol_flags; + newhead->packet_type = oldhead->packet_type; + newhead->vlan_tci = oldhead->vlan_tci; + newhead->hash.rss = 0; + newhead->seqn = oldhead->seqn; + newhead->vlan_tci_outer = oldhead->vlan_tci_outer; + newhead->udata64 = oldhead->udata64; + memcpy(&newhead->tx_offload, &oldhead->tx_offload, + sizeof(odp_packet_hdr_t) - + offsetof(struct rte_mbuf, tx_offload)); + odp_packet_hdr(pkt)->buf_hdr.handle.handle = + (odp_buffer_t)newhead; + odp_packet_hdr(pkt)->buf_hdr.index = saved_index; +} + +int odp_packet_extend_head(odp_packet_t *pkt, uint32_t len, void **data_ptr, + uint32_t *seg_len) +{ + struct rte_mbuf *mb = &(odp_packet_hdr(*pkt)->buf_hdr.mb); + int addheadsize = len - rte_pktmbuf_headroom(mb); + + if (addheadsize > 0) { + struct rte_mbuf *newhead, *t; + uint32_t totsize_change; + int i; + + newhead = rte_pktmbuf_alloc(mb->pool); + if (newhead == NULL) + return -1; + + newhead->data_len = addheadsize % newhead->buf_len; + newhead->pkt_len = addheadsize; + newhead->data_off = newhead->buf_len - newhead->data_len; + newhead->nb_segs = addheadsize / newhead->buf_len + 1; + t = newhead; + + for (i = 0; i < newhead->nb_segs - 1; ++i) { + t->next = rte_pktmbuf_alloc(mb->pool); + + if (t->next == NULL) { + rte_pktmbuf_free(newhead); + return -1; + } + t = t->next; + /* The intermediate segments are fully used */ + t->data_len = t->buf_len; + t->data_off = 0; + } + totsize_change = newhead->nb_segs * newhead->buf_len; + if (rte_pktmbuf_chain(newhead, mb)) { + rte_pktmbuf_free(newhead); + return -1; + } + /* Expand the original head segment*/ + newhead->pkt_len += rte_pktmbuf_headroom(mb); + mb->data_off = 0; + mb->data_len = mb->buf_len; + _copy_head_metadata(newhead, mb); + mb = newhead; + *pkt = (odp_packet_t)newhead; + odp_packet_hdr(*pkt)->buf_hdr.totsize += totsize_change; + } else { + rte_pktmbuf_prepend(mb, len); + } + + if (data_ptr) + *data_ptr = odp_packet_data(*pkt); + if (seg_len) + *seg_len = mb->data_len; + + return 0; +} + +void *odp_packet_pull_head(odp_packet_t pkt, uint32_t len) +{ + struct rte_mbuf *mb = &(odp_packet_hdr(pkt)->buf_hdr.mb); + return (void *)rte_pktmbuf_adj(mb, len); +} +
+int odp_packet_trunc_head(odp_packet_t *pkt, uint32_t len, void **data_ptr, + uint32_t *seg_len) +{ + struct rte_mbuf *mb = &(odp_packet_hdr(*pkt)->buf_hdr.mb); + + if (odp_packet_len(*pkt) < len) + return -1; + + if (len > mb->data_len) { + struct rte_mbuf *newhead = mb, *prev = NULL; + uint32_t left = len; + uint32_t totsize_change = 0; + + while (newhead->next != NULL) { + if (newhead->data_len > left) + break; + left -= newhead->data_len; + totsize_change += newhead->buf_len; + prev = newhead; + newhead = newhead->next; + --mb->nb_segs; + } + newhead->data_off += left; + newhead->nb_segs = mb->nb_segs; + newhead->pkt_len = mb->pkt_len - len; + newhead->data_len -= left; + _copy_head_metadata(newhead, mb); + prev->next = NULL; + rte_pktmbuf_free(mb); + *pkt = (odp_packet_t)newhead; + odp_packet_hdr(*pkt)->buf_hdr.totsize -= totsize_change; + } else { + rte_pktmbuf_adj(mb, len); + } + + if (data_ptr) + *data_ptr = odp_packet_data(*pkt); + if (seg_len) + *seg_len = mb->data_len; + + return 0; +} + +void *odp_packet_push_tail(odp_packet_t pkt, uint32_t len) +{ + struct rte_mbuf *mb = &(odp_packet_hdr(pkt)->buf_hdr.mb); + + return (void *)rte_pktmbuf_append(mb, len); +} + +int odp_packet_extend_tail(odp_packet_t *pkt, uint32_t len, void **data_ptr, + uint32_t *seg_len) +{ + struct rte_mbuf *mb = &(odp_packet_hdr(*pkt)->buf_hdr.mb); + int newtailsize = len - odp_packet_tailroom(*pkt); + uint32_t old_pkt_len = odp_packet_len(*pkt); + + if (data_ptr) + *data_ptr = odp_packet_tail(*pkt); + + if (newtailsize > 0) { + struct rte_mbuf *newtail = rte_pktmbuf_alloc(mb->pool); + struct rte_mbuf *t; + struct rte_mbuf *m_last = rte_pktmbuf_lastseg(mb); + int i; + + if (newtail == NULL) + return -1; + newtail->data_off = 0; + newtail->pkt_len = newtailsize; + if (newtailsize > newtail->buf_len) + newtail->data_len = newtail->buf_len; + else + newtail->data_len = newtailsize; + newtail->nb_segs = newtailsize / newtail->buf_len + 1; + t = newtail; + + for (i = 0; i < newtail->nb_segs - 1; ++i) { + t->next = rte_pktmbuf_alloc(mb->pool); + + if (t->next == NULL) { + rte_pktmbuf_free(newtail); + return -1; + } + t = t->next; + t->data_off = 0; + /* The last segment's size is not trivial*/ + t->data_len = i == newtail->nb_segs - 2 ? 
+ newtailsize % newtail->buf_len : + t->buf_len; + } + if (rte_pktmbuf_chain(mb, newtail)) { + rte_pktmbuf_free(newtail); + return -1; + } + /* Expand the original tail */ + m_last->data_len = m_last->buf_len - m_last->data_off; + mb->pkt_len += len - newtailsize; + odp_packet_hdr(*pkt)->buf_hdr.totsize += + newtail->nb_segs * newtail->buf_len; + } else { + rte_pktmbuf_append(mb, len); + } + + if (seg_len) + odp_packet_offset(*pkt, old_pkt_len, seg_len, NULL); + + return 0; +} + +void *odp_packet_pull_tail(odp_packet_t pkt, uint32_t len) +{ + struct rte_mbuf *mb = &(odp_packet_hdr(pkt)->buf_hdr.mb); + + if (rte_pktmbuf_trim(mb, len)) + return NULL; + else + return odp_packet_tail(pkt); +} + +int odp_packet_trunc_tail(odp_packet_t *pkt, uint32_t len, void **tail_ptr, + uint32_t *tailroom) +{ + struct rte_mbuf *mb = &(odp_packet_hdr(*pkt)->buf_hdr.mb); + + if (odp_packet_len(*pkt) < len) + return -1; + + if (rte_pktmbuf_trim(mb, len)) { + struct rte_mbuf *reverse[mb->nb_segs]; + struct rte_mbuf *t = mb; + int i; + + for (i = 0; i < mb->nb_segs; ++i) { + reverse[i] = t; + t = t->next; + } + for (i = mb->nb_segs - 1; i >= 0 && len > 0; --i) { + t = reverse[i]; + if (len >= t->data_len) { + len -= t->data_len; + mb->pkt_len -= t->data_len; + t->data_len = 0; + if (i > 0) { + rte_pktmbuf_free_seg(t); + --mb->nb_segs; + reverse[i - 1]->next = NULL; + } + } else { + t->data_len -= len; + mb->pkt_len -= len; + len = 0; + } + } + } + + if (tail_ptr) + *tail_ptr = odp_packet_tail(*pkt); + if (tailroom) + *tailroom = odp_packet_tailroom(*pkt); + + return 0; +} + +void *odp_packet_offset(odp_packet_t pkt, uint32_t offset, uint32_t *len, + odp_packet_seg_t *seg) +{ + struct rte_mbuf *mb = &(odp_packet_hdr(pkt)->buf_hdr.mb); + + do { + if (mb->data_len > offset) { + break; + } else { + offset -= mb->data_len; + mb = mb->next; + } + } while (mb); + + if (mb) { + if (len) + *len = mb->data_len - offset; + if (seg) + *seg = (odp_packet_seg_t)(uintptr_t)mb; + return (void *)(rte_pktmbuf_mtod(mb, char *) + offset); + } else { + return NULL; + } +} + +/* + * + * Meta-data + * ******************************************************** + * + */ + +int odp_packet_input_index(odp_packet_t pkt) +{ + return odp_pktio_index(odp_packet_hdr(pkt)->input); +} + +void odp_packet_user_ptr_set(odp_packet_t pkt, const void *ctx) +{ + odp_packet_hdr(pkt)->buf_hdr.buf_cctx = ctx; +} + +static inline void *packet_offset_to_ptr(odp_packet_t pkt, uint32_t *len, + const size_t offset) +{ + if (odp_unlikely(offset == ODP_PACKET_OFFSET_INVALID)) + return NULL; + + if (len) + return odp_packet_offset(pkt, offset, len, NULL); + else + return odp_packet_offset(pkt, offset, NULL, NULL); +} + +void *odp_packet_l2_ptr(odp_packet_t pkt, uint32_t *len) +{ + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + + if (!packet_hdr_has_l2(pkt_hdr)) + return NULL; + return packet_offset_to_ptr(pkt, len, pkt_hdr->p.l2_offset); +} + +uint32_t odp_packet_l2_offset(odp_packet_t pkt) +{ + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + + if (!packet_hdr_has_l2(pkt_hdr)) + return ODP_PACKET_OFFSET_INVALID; + return pkt_hdr->p.l2_offset; +} + +int odp_packet_l2_offset_set(odp_packet_t pkt, uint32_t offset) +{ + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + + if (odp_unlikely(offset >= (odp_packet_len(pkt) - 1))) + return -1; + + packet_hdr_has_l2_set(pkt_hdr, 1); + pkt_hdr->p.l2_offset = offset; + return 0; +} + +void *odp_packet_l3_ptr(odp_packet_t pkt, uint32_t *len) +{ + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + + return 
packet_offset_to_ptr(pkt, len, pkt_hdr->p.l3_offset); +} + +uint32_t odp_packet_l3_offset(odp_packet_t pkt) +{ + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + + return pkt_hdr->p.l3_offset; +} + +int odp_packet_l3_offset_set(odp_packet_t pkt, uint32_t offset) +{ + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + + if (odp_unlikely(offset >= (odp_packet_len(pkt) - 1))) + return -1; + + pkt_hdr->p.l3_offset = offset; + return 0; +} + +void *odp_packet_l4_ptr(odp_packet_t pkt, uint32_t *len) +{ + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + + return packet_offset_to_ptr(pkt, len, pkt_hdr->p.l4_offset); +} + +uint32_t odp_packet_l4_offset(odp_packet_t pkt) +{ + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + + return pkt_hdr->p.l4_offset; +} + +int odp_packet_l4_offset_set(odp_packet_t pkt, uint32_t offset) +{ + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + + if (odp_unlikely(offset >= (odp_packet_len(pkt) - 1))) + return -1; + + pkt_hdr->p.l4_offset = offset; + return 0; +} + +void odp_packet_ts_set(odp_packet_t pkt, odp_time_t timestamp) +{ + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + + pkt_hdr->timestamp = timestamp; + pkt_hdr->p.input_flags.timestamp = 1; +} + +/* + * + * Segment level + * ******************************************************** + * + */ + +void *odp_packet_seg_data(odp_packet_t pkt ODP_UNUSED, odp_packet_seg_t seg) +{ + return odp_packet_data((odp_packet_t)(uintptr_t)seg); +} + +uint32_t odp_packet_seg_data_len(odp_packet_t pkt ODP_UNUSED, + odp_packet_seg_t seg) +{ + return odp_packet_seg_len((odp_packet_t)(uintptr_t)seg); +} + +/* + * + * Manipulation + * ******************************************************** + * + */ + +int odp_packet_add_data(odp_packet_t *pkt_ptr, uint32_t offset, uint32_t len) +{ + odp_packet_t pkt = *pkt_ptr; + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + uint32_t pktlen = odp_packet_len(pkt); + odp_packet_t newpkt; + + if (offset > pktlen) + return -1; + + newpkt = odp_packet_alloc(pkt_hdr->buf_hdr.pool_hdl, pktlen + len); + + if (newpkt == ODP_PACKET_INVALID) + return -1; + + if (odp_packet_copy_from_pkt(newpkt, 0, pkt, 0, offset) != 0 || + odp_packet_copy_from_pkt(newpkt, offset + len, pkt, offset, + pktlen - offset) != 0) { + odp_packet_free(newpkt); + return -1; + } + + _odp_packet_copy_md_to_packet(pkt, newpkt); + odp_packet_free(pkt); + *pkt_ptr = newpkt; + + return 1; +} + +int odp_packet_rem_data(odp_packet_t *pkt_ptr, uint32_t offset, uint32_t len) +{ + odp_packet_t pkt = *pkt_ptr; + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + uint32_t pktlen = odp_packet_len(pkt); + odp_packet_t newpkt; + + if (offset > pktlen || offset + len > pktlen) + return -1; + + newpkt = odp_packet_alloc(pkt_hdr->buf_hdr.pool_hdl, pktlen - len); + + if (newpkt == ODP_PACKET_INVALID) + return -1; + + if (odp_packet_copy_from_pkt(newpkt, 0, pkt, 0, offset) != 0 || + odp_packet_copy_from_pkt(newpkt, offset, pkt, offset + len, + pktlen - offset - len) != 0) { + odp_packet_free(newpkt); + return -1; + } + + _odp_packet_copy_md_to_packet(pkt, newpkt); + odp_packet_free(pkt); + *pkt_ptr = newpkt; + + return 1; +} + +int odp_packet_align(odp_packet_t *pkt, uint32_t offset, uint32_t len, + uint32_t align) +{ + int rc; + uint32_t shift; + uint32_t seglen = 0; /* GCC */ + void *addr = odp_packet_offset(*pkt, offset, &seglen, NULL); + uint64_t uaddr = (uint64_t)(uintptr_t)addr; + uint64_t misalign; + + if (align > ODP_CACHE_LINE_SIZE) + return -1; + + if (seglen >= len) { + misalign = align <= 1 ? 
0 : + ROUNDUP_ALIGN(uaddr, align) - uaddr; + if (misalign == 0) + return 0; + shift = align - misalign; + } else { + if (len > odp_packet_seg_len(*pkt)) + return -1; + shift = len - seglen; + uaddr -= shift; + misalign = align <= 1 ? 0 : + ROUNDUP_ALIGN(uaddr, align) - uaddr; + if (misalign) + shift += align - misalign; + } + + rc = odp_packet_extend_head(pkt, shift, NULL, NULL); + if (rc < 0) + return rc; + + (void)odp_packet_move_data(*pkt, 0, shift, + odp_packet_len(*pkt) - shift); + + (void)odp_packet_trunc_tail(pkt, shift, NULL, NULL); + return 1; +} + +int odp_packet_concat(odp_packet_t *dst, odp_packet_t src) +{ + odp_packet_hdr_t *dst_hdr = odp_packet_hdr(*dst); + odp_packet_hdr_t *src_hdr = odp_packet_hdr(src); + struct rte_mbuf *mb_dst = pkt_to_mbuf(dst_hdr); + struct rte_mbuf *mb_src = pkt_to_mbuf(src_hdr); + odp_packet_t new_dst; + odp_pool_t pool; + uint32_t dst_len; + uint32_t src_len; + + if (odp_likely(!rte_pktmbuf_chain(mb_dst, mb_src))) { + dst_hdr->buf_hdr.totsize += src_hdr->buf_hdr.totsize; + return 0; + } + + /* Fall back to using standard copy operations after maximum number of + * segments has been reached. */ + dst_len = odp_packet_len(*dst); + src_len = odp_packet_len(src); + pool = odp_packet_pool(*dst); + + new_dst = odp_packet_copy(*dst, pool); + if (odp_unlikely(new_dst == ODP_PACKET_INVALID)) + return -1; + + if (odp_packet_extend_tail(&new_dst, src_len, NULL, NULL) >= 0) { + (void)odp_packet_copy_from_pkt(new_dst, dst_len, + src, 0, src_len); + odp_packet_free(*dst); + odp_packet_free(src); + *dst = new_dst; + return 1; + } + + odp_packet_free(new_dst); + return -1; +} + +int odp_packet_split(odp_packet_t *pkt, uint32_t len, odp_packet_t *tail) +{ + uint32_t pktlen = odp_packet_len(*pkt); + + if (len >= pktlen || tail == NULL) + return -1; + + *tail = odp_packet_copy_part(*pkt, len, pktlen - len, + odp_packet_pool(*pkt)); + + if (*tail == ODP_PACKET_INVALID) + return -1; + + return odp_packet_trunc_tail(pkt, pktlen - len, NULL, NULL); +} + +/* + * + * Copy + * ******************************************************** + * + */ + +odp_packet_t odp_packet_copy(odp_packet_t pkt, odp_pool_t pool) +{ + uint32_t pktlen = odp_packet_len(pkt); + odp_packet_t newpkt = odp_packet_alloc(pool, pktlen); + + if (newpkt != ODP_PACKET_INVALID) { + if (_odp_packet_copy_md_to_packet(pkt, newpkt) || + odp_packet_copy_from_pkt(newpkt, 0, pkt, 0, pktlen)) { + odp_packet_free(newpkt); + newpkt = ODP_PACKET_INVALID; + } + } + + return newpkt; +} + +odp_packet_t odp_packet_copy_part(odp_packet_t pkt, uint32_t offset, + uint32_t len, odp_pool_t pool) +{ + uint32_t pktlen = odp_packet_len(pkt); + odp_packet_t newpkt; + + if (offset >= pktlen || offset + len > pktlen) + return ODP_PACKET_INVALID; + + newpkt = odp_packet_alloc(pool, len); + if (newpkt != ODP_PACKET_INVALID) + odp_packet_copy_from_pkt(newpkt, 0, pkt, offset, len); + + return newpkt; +} + +int odp_packet_copy_to_mem(odp_packet_t pkt, uint32_t offset, + uint32_t len, void *dst) +{ + void *mapaddr; + uint32_t seglen = 0; /* GCC */ + uint32_t cpylen; + uint8_t *dstaddr = (uint8_t *)dst; + + if (offset + len > odp_packet_len(pkt)) + return -1; + + while (len > 0) { + mapaddr = odp_packet_offset(pkt, offset, &seglen, NULL); + cpylen = len > seglen ? 
seglen : len; + memcpy(dstaddr, mapaddr, cpylen); + offset += cpylen; + dstaddr += cpylen; + len -= cpylen; + } + + return 0; +} + +int odp_packet_copy_from_mem(odp_packet_t pkt, uint32_t offset, + uint32_t len, const void *src) +{ + void *mapaddr; + uint32_t seglen = 0; /* GCC */ + uint32_t cpylen; + const uint8_t *srcaddr = (const uint8_t *)src; + + if (offset + len > odp_packet_len(pkt)) + return -1; + + while (len > 0) { + mapaddr = odp_packet_offset(pkt, offset, &seglen, NULL); + cpylen = len > seglen ? seglen : len; + memcpy(mapaddr, srcaddr, cpylen); + offset += cpylen; + srcaddr += cpylen; + len -= cpylen; + } + + return 0; +} + +int odp_packet_copy_from_pkt(odp_packet_t dst, uint32_t dst_offset, + odp_packet_t src, uint32_t src_offset, + uint32_t len) +{ + odp_packet_hdr_t *dst_hdr = odp_packet_hdr(dst); + odp_packet_hdr_t *src_hdr = odp_packet_hdr(src); + void *dst_map; + void *src_map; + uint32_t cpylen, minseg; + uint32_t dst_seglen = 0; /* GCC */ + uint32_t src_seglen = 0; /* GCC */ + int overlap; + + if (dst_offset + len > odp_packet_len(dst) || + src_offset + len > odp_packet_len(src)) + return -1; + + overlap = (dst_hdr == src_hdr && + ((dst_offset <= src_offset && + dst_offset + len >= src_offset) || + (src_offset <= dst_offset && + src_offset + len >= dst_offset))); + + if (overlap && src_offset < dst_offset) { + odp_packet_t temp = + odp_packet_copy_part(src, src_offset, len, + odp_packet_pool(src)); + if (temp == ODP_PACKET_INVALID) + return -1; + odp_packet_copy_from_pkt(dst, dst_offset, temp, 0, len); + odp_packet_free(temp); + return 0; + } + + while (len > 0) { + dst_map = odp_packet_offset(dst, dst_offset, &dst_seglen, NULL); + src_map = odp_packet_offset(src, src_offset, &src_seglen, NULL); + + minseg = dst_seglen > src_seglen ? src_seglen : dst_seglen; + cpylen = len > minseg ? 
minseg : len; + + if (overlap) + memmove(dst_map, src_map, cpylen); + else + memcpy(dst_map, src_map, cpylen); + + dst_offset += cpylen; + src_offset += cpylen; + len -= cpylen; + } + + return 0; +} + +int odp_packet_copy_data(odp_packet_t pkt, uint32_t dst_offset, + uint32_t src_offset, uint32_t len) +{ + return odp_packet_copy_from_pkt(pkt, dst_offset, + pkt, src_offset, len); +} + +int odp_packet_move_data(odp_packet_t pkt, uint32_t dst_offset, + uint32_t src_offset, uint32_t len) +{ + return odp_packet_copy_from_pkt(pkt, dst_offset, + pkt, src_offset, len); +} + +/* + * + * Debugging + * ******************************************************** + * + */ + +void odp_packet_print(odp_packet_t pkt) +{ + odp_packet_seg_t seg; + int max_len = 512; + char str[max_len]; + uint8_t *p; + int len = 0; + int n = max_len - 1; + odp_packet_hdr_t *hdr = odp_packet_hdr(pkt); + odp_buffer_t buf = packet_to_buffer(pkt); + + len += snprintf(&str[len], n - len, "Packet "); + len += odp_buffer_snprint(&str[len], n - len, buf); + len += snprintf(&str[len], n - len, " input_flags 0x%" PRIx64 "\n", + hdr->p.input_flags.all); + len += snprintf(&str[len], n - len, " error_flags 0x%" PRIx32 "\n", + hdr->p.error_flags.all); + len += snprintf(&str[len], n - len, " output_flags 0x%" PRIx32 "\n", + hdr->p.output_flags.all); + len += snprintf(&str[len], n - len, + " l2_offset %" PRIu32 "\n", hdr->p.l2_offset); + len += snprintf(&str[len], n - len, + " l3_offset %" PRIu32 "\n", hdr->p.l3_offset); + len += snprintf(&str[len], n - len, + " l4_offset %" PRIu32 "\n", hdr->p.l4_offset); + len += snprintf(&str[len], n - len, + " frame_len %" PRIu32 "\n", + hdr->buf_hdr.mb.pkt_len); + len += snprintf(&str[len], n - len, + " input %" PRIu64 "\n", + odp_pktio_to_u64(hdr->input)); + len += snprintf(&str[len], n - len, + " headroom %" PRIu32 "\n", + odp_packet_headroom(pkt)); + len += snprintf(&str[len], n - len, + " tailroom %" PRIu32 "\n", + odp_packet_tailroom(pkt)); + len += snprintf(&str[len], n - len, + " num_segs %i\n", odp_packet_num_segs(pkt)); + + seg = odp_packet_first_seg(pkt); + + while (seg != ODP_PACKET_SEG_INVALID) { + len += snprintf(&str[len], n - len, + " seg_len %" PRIu32 "\n", + odp_packet_seg_data_len(pkt, seg)); + + seg = odp_packet_next_seg(pkt, seg); + } + + str[len] = '\0'; + + ODP_PRINT("\n%s\n", str); + rte_pktmbuf_dump(stdout, &hdr->buf_hdr.mb, 32); + + p = odp_packet_data(pkt); + ODP_ERR("00000000: %02X %02X %02X %02X %02X %02X %02X %02X\n", + p[0], p[1], p[2], p[3], p[4], p[5], p[6], p[7]); + ODP_ERR("00000008: %02X %02X %02X %02X %02X %02X %02X %02X\n", + p[8], p[9], p[10], p[11], p[12], p[13], p[14], p[15]); +} + +int odp_packet_is_valid(odp_packet_t pkt) +{ + odp_buffer_t buf = packet_to_buffer(pkt); + + return odp_buffer_is_valid(buf); +} + +/* + * + * Internal Use Routines + * ******************************************************** + * + */ + +int _odp_packet_copy_md_to_packet(odp_packet_t srcpkt, odp_packet_t dstpkt) +{ + odp_packet_hdr_t *srchdr = odp_packet_hdr(srcpkt); + odp_packet_hdr_t *dsthdr = odp_packet_hdr(dstpkt); + uint32_t src_size = odp_packet_user_area_size(srcpkt); + uint32_t dst_size = odp_packet_user_area_size(dstpkt); + + dsthdr->input = srchdr->input; + dsthdr->dst_queue = srchdr->dst_queue; + dsthdr->buf_hdr.buf_u64 = srchdr->buf_hdr.buf_u64; + + dsthdr->buf_hdr.mb.port = srchdr->buf_hdr.mb.port; + dsthdr->buf_hdr.mb.ol_flags = srchdr->buf_hdr.mb.ol_flags; + dsthdr->buf_hdr.mb.packet_type = srchdr->buf_hdr.mb.packet_type; + dsthdr->buf_hdr.mb.vlan_tci = 
srchdr->buf_hdr.mb.vlan_tci; + dsthdr->buf_hdr.mb.hash = srchdr->buf_hdr.mb.hash; + dsthdr->buf_hdr.mb.vlan_tci_outer = srchdr->buf_hdr.mb.vlan_tci_outer; + dsthdr->buf_hdr.mb.tx_offload = srchdr->buf_hdr.mb.tx_offload; + + if (dst_size != 0) + memcpy(odp_packet_user_area(dstpkt), + odp_packet_user_area(srcpkt), + dst_size <= src_size ? dst_size : src_size); + + copy_packet_parser_metadata(srchdr, dsthdr); + + /* Metadata copied, but return indication of whether the packet + * user area was truncated in the process. Note this can only + * happen when copying between different pools. + */ + return dst_size < src_size; +} + +/** + * Parser helper function for IPv4 + */ +static inline uint8_t parse_ipv4(packet_parser_t *prs, const uint8_t **parseptr, + uint32_t *offset, uint32_t frame_len) +{ + const _odp_ipv4hdr_t *ipv4 = (const _odp_ipv4hdr_t *)*parseptr; + uint8_t ver = _ODP_IPV4HDR_VER(ipv4->ver_ihl); + uint8_t ihl = _ODP_IPV4HDR_IHL(ipv4->ver_ihl); + uint16_t frag_offset; + uint32_t dstaddr = odp_be_to_cpu_32(ipv4->dst_addr); + uint32_t l3_len = odp_be_to_cpu_16(ipv4->tot_len); + + if (odp_unlikely(ihl < _ODP_IPV4HDR_IHL_MIN) || + odp_unlikely(ver != 4) || + (l3_len > frame_len - *offset)) { + prs->error_flags.ip_err = 1; + return 0; + } + + *offset += ihl * 4; + *parseptr += ihl * 4; + + if (odp_unlikely(ihl > _ODP_IPV4HDR_IHL_MIN)) + prs->input_flags.ipopt = 1; + + /* A packet is a fragment if: + * "more fragments" flag is set (all fragments except the last) + * OR + * "fragment offset" field is nonzero (all fragments except the first) + */ + frag_offset = odp_be_to_cpu_16(ipv4->frag_offset); + if (odp_unlikely(_ODP_IPV4HDR_IS_FRAGMENT(frag_offset))) + prs->input_flags.ipfrag = 1; + + /* Handle IPv4 broadcast / multicast */ + prs->input_flags.ip_bcast = (dstaddr == 0xffffffff); + prs->input_flags.ip_mcast = (dstaddr >> 28) == 0xd; + + return ipv4->proto; +} + +/** + * Parser helper function for IPv6 + */ +static inline uint8_t parse_ipv6(packet_parser_t *prs, const uint8_t **parseptr, + uint32_t *offset, uint32_t frame_len, + uint32_t seg_len) +{ + const _odp_ipv6hdr_t *ipv6 = (const _odp_ipv6hdr_t *)*parseptr; + const _odp_ipv6hdr_ext_t *ipv6ext; + uint32_t dstaddr0 = odp_be_to_cpu_32(ipv6->dst_addr.u8[0]); + uint32_t l3_len = odp_be_to_cpu_16(ipv6->payload_len) + + _ODP_IPV6HDR_LEN; + + /* Basic sanity checks on IPv6 header */ + if ((odp_be_to_cpu_32(ipv6->ver_tc_flow) >> 28) != 6 || + l3_len > frame_len - *offset) { + prs->error_flags.ip_err = 1; + return 0; + } + + /* IPv6 broadcast / multicast flags */ + prs->input_flags.ip_mcast = (dstaddr0 & 0xff000000) == 0xff000000; + prs->input_flags.ip_bcast = 0; + + /* Skip past IPv6 header */ + *offset += sizeof(_odp_ipv6hdr_t); + *parseptr += sizeof(_odp_ipv6hdr_t); + + /* Skip past any IPv6 extension headers */ + if (ipv6->next_hdr == _ODP_IPPROTO_HOPOPTS || + ipv6->next_hdr == _ODP_IPPROTO_ROUTE) { + prs->input_flags.ipopt = 1; + + do { + ipv6ext = (const _odp_ipv6hdr_ext_t *)*parseptr; + uint16_t extlen = 8 + ipv6ext->ext_len * 8; + + *offset += extlen; + *parseptr += extlen; + } while ((ipv6ext->next_hdr == _ODP_IPPROTO_HOPOPTS || + ipv6ext->next_hdr == _ODP_IPPROTO_ROUTE) && + *offset < seg_len); + + if (*offset >= prs->l3_offset + + odp_be_to_cpu_16(ipv6->payload_len)) { + prs->error_flags.ip_err = 1; + return 0; + } + + if (ipv6ext->next_hdr == _ODP_IPPROTO_FRAG) + prs->input_flags.ipfrag = 1; + + return ipv6ext->next_hdr; + } + + if (odp_unlikely(ipv6->next_hdr == _ODP_IPPROTO_FRAG)) { + prs->input_flags.ipopt = 1; + 
prs->input_flags.ipfrag = 1; + } + + return ipv6->next_hdr; +} + +/** + * Parser helper function for TCP + */ +static inline void parse_tcp(packet_parser_t *prs, + const uint8_t **parseptr, uint32_t *offset) +{ + const _odp_tcphdr_t *tcp = (const _odp_tcphdr_t *)*parseptr; + + if (tcp->hl < sizeof(_odp_tcphdr_t) / sizeof(uint32_t)) + prs->error_flags.tcp_err = 1; + else if ((uint32_t)tcp->hl * 4 > sizeof(_odp_tcphdr_t)) + prs->input_flags.tcpopt = 1; + + if (offset) + *offset += (uint32_t)tcp->hl * 4; + *parseptr += (uint32_t)tcp->hl * 4; +} + +/** + * Parser helper function for UDP + */ +static inline void parse_udp(packet_parser_t *prs, + const uint8_t **parseptr, uint32_t *offset) +{ + const _odp_udphdr_t *udp = (const _odp_udphdr_t *)*parseptr; + uint32_t udplen = odp_be_to_cpu_16(udp->length); + + if (odp_unlikely(udplen < sizeof(_odp_udphdr_t))) + prs->error_flags.udp_err = 1; + + if (offset) + *offset += sizeof(_odp_udphdr_t); + *parseptr += sizeof(_odp_udphdr_t); +} + +/** + * Parse common packet headers up to given layer + * + * The function expects at least PACKET_PARSE_SEG_LEN bytes of data to be + * available from the ptr. + */ +int packet_parse_common(packet_parser_t *prs, const uint8_t *ptr, + uint32_t frame_len, uint32_t seg_len, + odp_pktio_parser_layer_t layer) +{ + uint32_t offset; + uint16_t ethtype; + const uint8_t *parseptr; + uint8_t ip_proto; + const _odp_ethhdr_t *eth; + uint16_t macaddr0, macaddr2, macaddr4; + const _odp_vlanhdr_t *vlan; + + if (layer == ODP_PKTIO_PARSER_LAYER_NONE) + return 0; + + /* We only support Ethernet for now */ + prs->input_flags.eth = 1; + /* Assume valid L2 header, no CRC/FCS check in SW */ + prs->input_flags.l2 = 1; + /* Detect jumbo frames */ + if (frame_len > _ODP_ETH_LEN_MAX) + prs->input_flags.jumbo = 1; + + offset = sizeof(_odp_ethhdr_t); + eth = (const _odp_ethhdr_t *)ptr; + + /* Handle Ethernet broadcast/multicast addresses */ + macaddr0 = odp_be_to_cpu_16(*((const uint16_t *)(const void *)eth)); + prs->input_flags.eth_mcast = (macaddr0 & 0x0100) == 0x0100; + + if (macaddr0 == 0xffff) { + macaddr2 = + odp_be_to_cpu_16(*((const uint16_t *) + (const void *)eth + 1)); + macaddr4 = + odp_be_to_cpu_16(*((const uint16_t *) + (const void *)eth + 2)); + prs->input_flags.eth_bcast = + (macaddr2 == 0xffff) && (macaddr4 == 0xffff); + } else { + prs->input_flags.eth_bcast = 0; + } + + /* Get Ethertype */ + ethtype = odp_be_to_cpu_16(eth->type); + parseptr = (const uint8_t *)(eth + 1); + + /* Check for SNAP vs. 
DIX */ + if (ethtype < _ODP_ETH_LEN_MAX) { + prs->input_flags.snap = 1; + if (ethtype > frame_len - offset) { + prs->error_flags.snap_len = 1; + goto parse_exit; + } + ethtype = odp_be_to_cpu_16(*((const uint16_t *)(uintptr_t) + (parseptr + 6))); + offset += 8; + parseptr += 8; + } + + /* Parse the VLAN header(s), if present */ + if (ethtype == _ODP_ETHTYPE_VLAN_OUTER) { + prs->input_flags.vlan_qinq = 1; + prs->input_flags.vlan = 1; + + vlan = (const _odp_vlanhdr_t *)parseptr; + ethtype = odp_be_to_cpu_16(vlan->type); + offset += sizeof(_odp_vlanhdr_t); + parseptr += sizeof(_odp_vlanhdr_t); + } + + if (ethtype == _ODP_ETHTYPE_VLAN) { + prs->input_flags.vlan = 1; + vlan = (const _odp_vlanhdr_t *)parseptr; + ethtype = odp_be_to_cpu_16(vlan->type); + offset += sizeof(_odp_vlanhdr_t); + parseptr += sizeof(_odp_vlanhdr_t); + } + + if (layer == ODP_PKTIO_PARSER_LAYER_L2) + return prs->error_flags.all != 0; + + /* Set l3_offset+flag only for known ethtypes */ + prs->l3_offset = offset; + prs->input_flags.l3 = 1; + + /* Parse Layer 3 headers */ + switch (ethtype) { + case _ODP_ETHTYPE_IPV4: + prs->input_flags.ipv4 = 1; + ip_proto = parse_ipv4(prs, &parseptr, &offset, frame_len); + break; + + case _ODP_ETHTYPE_IPV6: + prs->input_flags.ipv6 = 1; + ip_proto = parse_ipv6(prs, &parseptr, &offset, frame_len, + seg_len); + break; + + case _ODP_ETHTYPE_ARP: + prs->input_flags.arp = 1; + ip_proto = 255; /* Reserved invalid by IANA */ + break; + + default: + prs->input_flags.l3 = 0; + prs->l3_offset = ODP_PACKET_OFFSET_INVALID; + ip_proto = 255; /* Reserved invalid by IANA */ + } + + if (layer == ODP_PKTIO_PARSER_LAYER_L3) + return prs->error_flags.all != 0; + + /* Set l4_offset+flag only for known ip_proto */ + prs->l4_offset = offset; + prs->input_flags.l4 = 1; + + /* Parse Layer 4 headers */ + switch (ip_proto) { + case _ODP_IPPROTO_ICMPv4: + /* Fall through */ + + case _ODP_IPPROTO_ICMPv6: + prs->input_flags.icmp = 1; + break; + + case _ODP_IPPROTO_TCP: + if (odp_unlikely(offset + _ODP_TCPHDR_LEN > seg_len)) + return -1; + prs->input_flags.tcp = 1; + parse_tcp(prs, &parseptr, NULL); + break; + + case _ODP_IPPROTO_UDP: + if (odp_unlikely(offset + _ODP_UDPHDR_LEN > seg_len)) + return -1; + prs->input_flags.udp = 1; + parse_udp(prs, &parseptr, NULL); + break; + + case _ODP_IPPROTO_AH: + prs->input_flags.ipsec = 1; + prs->input_flags.ipsec_ah = 1; + break; + + case _ODP_IPPROTO_ESP: + prs->input_flags.ipsec = 1; + prs->input_flags.ipsec_esp = 1; + break; + + case _ODP_IPPROTO_SCTP: + prs->input_flags.sctp = 1; + break; + + default: + prs->input_flags.l4 = 0; + prs->l4_offset = ODP_PACKET_OFFSET_INVALID; + break; + } +parse_exit: + return prs->error_flags.all != 0; +} +/** + * Simple packet parser + */ +int packet_parse_layer(odp_packet_hdr_t *pkt_hdr, + odp_pktio_parser_layer_t layer) +{ + uint32_t seg_len = odp_packet_seg_len((odp_packet_t)pkt_hdr); + uint32_t len = packet_len(pkt_hdr); + void *base = odp_packet_data((odp_packet_t)pkt_hdr); + + return packet_parse_common(&pkt_hdr->p, base, len, seg_len, layer); +} + +uint64_t odp_packet_to_u64(odp_packet_t hdl) +{ + return _odp_pri(hdl); +} + +uint64_t odp_packet_seg_to_u64(odp_packet_seg_t hdl) +{ + return _odp_pri(hdl); +} + +odp_packet_t odp_packet_ref_static(odp_packet_t pkt) +{ + return odp_packet_copy(pkt, odp_packet_pool(pkt)); +} + +odp_packet_t odp_packet_ref(odp_packet_t pkt, uint32_t offset) +{ + odp_packet_t new; + int ret; + + new = odp_packet_copy(pkt, odp_packet_pool(pkt)); + + if (new == ODP_PACKET_INVALID) { + ODP_ERR("copy failed\n"); + 
return ODP_PACKET_INVALID; + } + + ret = odp_packet_trunc_head(&new, offset, NULL, NULL); + + if (ret < 0) { + ODP_ERR("trunk_head failed\n"); + odp_packet_free(new); + return ODP_PACKET_INVALID; + } + + return new; +} + +odp_packet_t odp_packet_ref_pkt(odp_packet_t pkt, uint32_t offset, + odp_packet_t hdr) +{ + odp_packet_t new; + int ret; + + new = odp_packet_copy(pkt, odp_packet_pool(pkt)); + + if (new == ODP_PACKET_INVALID) { + ODP_ERR("copy failed\n"); + return ODP_PACKET_INVALID; + } + + if (offset) { + ret = odp_packet_trunc_head(&new, offset, NULL, NULL); + + if (ret < 0) { + ODP_ERR("trunk_head failed\n"); + odp_packet_free(new); + return ODP_PACKET_INVALID; + } + } + + ret = odp_packet_concat(&hdr, new); + + if (ret < 0) { + ODP_ERR("concat failed\n"); + odp_packet_free(new); + return ODP_PACKET_INVALID; + } + + return hdr; +} + +int odp_packet_has_ref(odp_packet_t pkt) +{ + (void)pkt; + + return 0; +} + +uint32_t odp_packet_unshared_len(odp_packet_t pkt) +{ + return odp_packet_len(pkt); +} + +/* Include non-inlined versions of API functions */ +#if ODP_ABI_COMPAT == 1 +#include <odp/api/plat/packet_inlines_api.h> +#endif diff --cc platform/linux-dpdk/pktio/dpdk.h index 064c5d8c,00000000..2adf5072 mode 100644,000000..100644 --- a/platform/linux-dpdk/pktio/dpdk.h +++ b/platform/linux-dpdk/pktio/dpdk.h @@@ -1,66 -1,0 +1,68 @@@ +/* Copyright (c) 2013, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#ifndef ODP_PKTIO_OPS_DPDK_H_ +#define ODP_PKTIO_OPS_DPDK_H_ + +#include <stdint.h> +#include <net/if.h> + ++#include <config.h> ++ +#include <protocols/eth.h> +#include <odp/api/align.h> +#include <odp/api/debug.h> +#include <odp/api/packet.h> +#include <odp_packet_internal.h> +#include <odp/api/pool.h> +#include <odp_pool_internal.h> +#include <odp_buffer_internal.h> +#include <odp/api/std_types.h> + +#include <rte_config.h> +#include <rte_memory.h> +#include <rte_memzone.h> +#include <rte_launch.h> +#include <rte_tailq.h> +#include <rte_eal.h> +#include <rte_per_lcore.h> +#include <rte_lcore.h> +#include <rte_branch_prediction.h> +#include <rte_prefetch.h> +#include <rte_cycles.h> +#include <rte_errno.h> +#include <rte_debug.h> +#include <rte_log.h> +#include <rte_byteorder.h> +#include <rte_pci.h> +#include <rte_random.h> +#include <rte_ether.h> +#include <rte_ethdev.h> +#include <rte_hash.h> +#include <rte_jhash.h> +#include <rte_hash_crc.h> + +#define RTE_TEST_RX_DESC_DEFAULT 128 +#define RTE_TEST_TX_DESC_DEFAULT 512 + +/** Packet socket using dpdk mmaped rings for both Rx and Tx */ +typedef struct { + odp_pktio_capability_t capa; /**< interface capabilities */ + + /********************************/ + char ifname[32]; + uint8_t min_rx_burst; + uint8_t portid; + odp_bool_t vdev_sysc_promisc; /**< promiscuous mode defined with + system call */ + odp_pktin_hash_proto_t hash; /**< Packet input hash protocol */ + odp_bool_t lockless_rx; /**< no locking for rx */ + odp_bool_t lockless_tx; /**< no locking for tx */ + odp_ticketlock_t rx_lock[PKTIO_MAX_QUEUES]; /**< RX queue locks */ + odp_ticketlock_t tx_lock[PKTIO_MAX_QUEUES]; /**< TX queue locks */ +} pktio_ops_dpdk_data_t; + +#endif diff --cc platform/linux-dpdk/pktio/subsystem.c index 4ff15c81,00000000..985ae782 mode 100644,000000..100644 --- a/platform/linux-dpdk/pktio/subsystem.c +++ b/platform/linux-dpdk/pktio/subsystem.c @@@ -1,33 -1,0 +1,35 @@@ +/* Copyright (c) 2017, ARM Limited. All rights reserved. + * + * Copyright (c) 2017, Linaro Limited + * All rights reserved. 
+ * + * SPDX-License-Identifier: BSD-3-Clause + */ + ++#include <config.h> ++ +#include <odp_debug_internal.h> +#include <odp_packet_io_internal.h> + +#define SUBSYSTEM_VERSION 0x00010000UL +ODP_SUBSYSTEM_DEFINE(pktio_ops, "packet IO operations", SUBSYSTEM_VERSION); + +/* Instantiate init and term functions */ +ODP_SUBSYSTEM_FOREACH_TEMPLATE(pktio_ops, init_global, ODP_ERR) +ODP_SUBSYSTEM_FOREACH_TEMPLATE(pktio_ops, init_local, ODP_ERR) +ODP_SUBSYSTEM_FOREACH_TEMPLATE(pktio_ops, term_global, ODP_ABORT) + +/* Temporary variable to enable link modules, + * will remove in Makefile scheme changes. + */ +extern int enable_link_dpdk_pktio_ops; +extern int enable_link_loopback_pktio_ops; + +ODP_SUBSYSTEM_CONSTRUCTOR(pktio_ops) +{ + odp_subsystem_constructor(pktio_ops); + + /* Further initialization per subsystem */ + enable_link_dpdk_pktio_ops = 1; + enable_link_loopback_pktio_ops = 1; +} diff --cc platform/linux-dpdk/pool/dpdk.c index a9eca2f1,00000000..3f085620 mode 100644,000000..100644 --- a/platform/linux-dpdk/pool/dpdk.c +++ b/platform/linux-dpdk/pool/dpdk.c @@@ -1,623 -1,0 +1,624 @@@ +/* Copyright (c) 2013, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + ++#include <config.h> +#include <odp/api/std_types.h> +#include <odp/api/pool.h> +#include <odp_pool_internal.h> +#include <odp_buffer_internal.h> +#include <odp_packet_internal.h> +#include <odp_timer_internal.h> +#include <odp_align_internal.h> +#include <odp/api/shared_memory.h> +#include <odp/api/align.h> +#include <odp_internal.h> +#include <odp_config_internal.h> +#include <odp/api/hints.h> +#include <odp/api/debug.h> +#include <odp_debug_internal.h> +#include <odp/api/cpumask.h> + +#include <string.h> +#include <stdlib.h> +#include <math.h> +#include <inttypes.h> + +/* for DPDK */ +#include <odp_packet_io_internal.h> + +#ifdef POOL_USE_TICKETLOCK +#include <odp/api/ticketlock.h> +#define LOCK(a) odp_ticketlock_lock(a) +#define UNLOCK(a) odp_ticketlock_unlock(a) +#define LOCK_INIT(a) odp_ticketlock_init(a) +#else +#include <odp/api/spinlock.h> +#define LOCK(a) odp_spinlock_lock(a) +#define UNLOCK(a) odp_spinlock_unlock(a) +#define LOCK_INIT(a) odp_spinlock_init(a) +#endif + +/* Define a practical limit for contiguous memory allocations */ +#define MAX_SIZE (10 * 1024 * 1024) + +/* The pool table ptr - resides in shared memory */ +pool_table_cp_t *pool_tbl_cp; +pool_table_dp_t *pool_tbl_dp; + +static int dpdk_pool_init_global(void) +{ + uint32_t i; + odp_shm_t shm; + + shm = odp_shm_reserve("odp_pools_cp", + sizeof(pool_table_cp_t), + sizeof(pool_entry_cp_t), 0); + if (shm == ODP_SHM_INVALID) + return -1; + + pool_tbl_cp = odp_shm_addr(shm); + if (pool_tbl_cp == NULL) + return -1; + + memset(pool_tbl_cp, 0, sizeof(pool_table_cp_t)); + pool_tbl_cp->shm_cp = shm; + + shm = odp_shm_reserve("odp_pools_dp", + sizeof(pool_table_dp_t), + ODP_CACHE_LINE_SIZE, 0); + if (shm == ODP_SHM_INVALID) + return -1; + + pool_tbl_dp = odp_shm_addr(shm); + if (pool_tbl_dp == NULL) + goto dp_tbl_alloc_failed; + + memset(pool_tbl_dp, 0, sizeof(pool_table_dp_t)); + pool_tbl_cp->shm_dp = shm; + + for (i = 0; i < ODP_CONFIG_POOLS; i++) { + /* init locks */ + pool_entry_cp_t *pool_cp = &pool_tbl_cp->pool[i]; + + LOCK_INIT(&pool_cp->lock); + pool_cp->pool_hdl = pool_index_to_handle(i); + } + + ODP_DBG("\nPool init global\n"); + ODP_DBG(" pool_entry_cp_t size %zu\n", sizeof(pool_entry_cp_t)); + ODP_DBG(" pool_entry_dp_t size %zu\n", sizeof(pool_entry_dp_t)); + ODP_DBG(" pool_table_cp_t size %zu\n", 
sizeof(pool_table_cp_t)); + ODP_DBG(" pool_table_dp_t size %zu\n", sizeof(pool_table_dp_t)); + ODP_DBG(" odp_buffer_hdr_t size %zu\n", sizeof(odp_buffer_hdr_t)); + ODP_DBG("\n"); + + return 0; + +dp_tbl_alloc_failed: + odp_shm_free(pool_tbl_cp->shm_cp); + return -1; +} + +static int dpdk_pool_init_local(void) +{ + return 0; +} + +static int dpdk_pool_term_global(void) +{ + int ret; + + ret = odp_shm_free(pool_tbl_cp->shm_dp); + if (ret < 0) + ODP_ERR("Pool DP shm free failed\n"); + + ret = odp_shm_free(pool_tbl_cp->shm_cp); + if (ret < 0) + ODP_ERR("Pool CP shm free failed\n"); + + return ret; +} + +static int dpdk_pool_term_local(void) +{ + return 0; +} + +static int dpdk_pool_capability(odp_pool_capability_t *capa) +{ + memset(capa, 0, sizeof(odp_pool_capability_t)); + + capa->max_pools = ODP_CONFIG_POOLS; + + /* Buffer pools */ + capa->buf.max_pools = ODP_CONFIG_POOLS; + capa->buf.max_align = ODP_CONFIG_BUFFER_ALIGN_MAX; + capa->buf.max_size = MAX_SIZE; + capa->buf.max_num = CONFIG_POOL_MAX_NUM; + + /* Packet pools */ + capa->pkt.max_pools = ODP_CONFIG_POOLS; + capa->pkt.max_len = 0; + capa->pkt.max_num = CONFIG_POOL_MAX_NUM; + capa->pkt.min_headroom = CONFIG_PACKET_HEADROOM; + capa->pkt.min_tailroom = CONFIG_PACKET_TAILROOM; + capa->pkt.max_segs_per_pkt = CONFIG_PACKET_MAX_SEGS; + capa->pkt.min_seg_len = CONFIG_PACKET_SEG_LEN_MIN; + capa->pkt.max_seg_len = CONFIG_PACKET_SEG_LEN_MAX; + capa->pkt.max_uarea_size = MAX_SIZE; + + /* Timeout pools */ + capa->tmo.max_pools = ODP_CONFIG_POOLS; + capa->tmo.max_num = CONFIG_POOL_MAX_NUM; + + return 0; +} + +struct mbuf_ctor_arg { + uint16_t seg_buf_offset; /* To skip the ODP buf/pkt/tmo header */ + uint16_t seg_buf_size; /* size of user data */ + int type; + int pkt_uarea_size; /* size of user area in bytes */ +}; + +struct mbuf_pool_ctor_arg { + /* This has to be the first member */ + struct rte_pktmbuf_pool_private pkt; + odp_pool_t pool_hdl; +}; + +static void +odp_dpdk_mbuf_pool_ctor(struct rte_mempool *mp, + void *opaque_arg) +{ + struct mbuf_pool_ctor_arg *mbp_priv; + + if (mp->private_data_size < sizeof(struct mbuf_pool_ctor_arg)) { + ODP_ERR("(%s) private_data_size %d < %d", + mp->name, (int)mp->private_data_size, + (int)sizeof(struct mbuf_pool_ctor_arg)); + return; + } + mbp_priv = rte_mempool_get_priv(mp); + *mbp_priv = *((struct mbuf_pool_ctor_arg *)opaque_arg); +} + +/* ODP DPDK mbuf constructor. 
+ * This is a combination of rte_pktmbuf_init in rte_mbuf.c + * and testpmd_mbuf_ctor in testpmd.c + */ +static void +odp_dpdk_mbuf_ctor(struct rte_mempool *mp, + void *opaque_arg, + void *raw_mbuf, + unsigned i) +{ + struct mbuf_ctor_arg *mb_ctor_arg; + struct rte_mbuf *mb = raw_mbuf; + struct odp_buffer_hdr_t *buf_hdr; + struct mbuf_pool_ctor_arg *mbp_ctor_arg = rte_mempool_get_priv(mp); + + /* The rte_mbuf is at the beginning in all cases */ + mb_ctor_arg = (struct mbuf_ctor_arg *)opaque_arg; + mb = (struct rte_mbuf *)raw_mbuf; + + RTE_ASSERT(mp->elt_size >= sizeof(struct rte_mbuf)); + + memset(mb, 0, mp->elt_size); + + /* Start of buffer is just after the ODP type specific header + * which contains in the very beginning the rte_mbuf struct */ + mb->buf_addr = (char *)mb + mb_ctor_arg->seg_buf_offset; + mb->buf_physaddr = rte_mempool_virt2phy(mp, mb) + + mb_ctor_arg->seg_buf_offset; + mb->buf_len = mb_ctor_arg->seg_buf_size; + mb->priv_size = rte_pktmbuf_priv_size(mp); + + /* keep some headroom between start of buffer and data */ + if (mb_ctor_arg->type == ODP_POOL_PACKET) { + odp_packet_hdr_t *pkt_hdr; + + mb->data_off = RTE_PKTMBUF_HEADROOM; + mb->nb_segs = 1; + mb->port = 0xff; + mb->vlan_tci = 0; + pkt_hdr = (odp_packet_hdr_t *)raw_mbuf; + pkt_hdr->uarea_size = mb_ctor_arg->pkt_uarea_size; + } else { + mb->data_off = 0; + } + + /* init some constant fields */ + mb->pool = mp; + mb->ol_flags = 0; + + /* Save index, might be useful for debugging purposes */ + buf_hdr = (struct odp_buffer_hdr_t *)raw_mbuf; + buf_hdr->index = i; + buf_hdr->handle.handle = (odp_buffer_t)buf_hdr; + buf_hdr->pool_hdl = mbp_ctor_arg->pool_hdl; + buf_hdr->type = mb_ctor_arg->type; + buf_hdr->event_type = mb_ctor_arg->type; + buf_hdr->event_subtype = ODP_EVENT_NO_SUBTYPE; +} + +#define CHECK_U16_OVERFLOW(X) do { \ + if (odp_unlikely(X > UINT16_MAX)) { \ + ODP_ERR("Invalid size: %d", X); \ + UNLOCK(&pool_cp->lock); \ + return ODP_POOL_INVALID; \ + } \ +} while (0) + +static int check_params(odp_pool_param_t *params) +{ + odp_pool_capability_t capa; + + if (odp_pool_capability(&capa) < 0) + return -1; + + switch (params->type) { + case ODP_POOL_BUFFER: + if (params->buf.num > capa.buf.max_num) { + printf("buf.num too large %u\n", params->buf.num); + return -1; + } + + if (params->buf.size > capa.buf.max_size) { + printf("buf.size too large %u\n", params->buf.size); + return -1; + } + + if (params->buf.align > capa.buf.max_align) { + printf("buf.align too large %u\n", params->buf.align); + return -1; + } + + break; + + case ODP_POOL_PACKET: + if (params->pkt.num > capa.pkt.max_num) { + printf("pkt.num too large %u\n", params->pkt.num); + + return -1; + } + + if (params->pkt.seg_len > capa.pkt.max_seg_len) { + printf("pkt.seg_len too large %u\n", + params->pkt.seg_len); + return -1; + } + + if (params->pkt.uarea_size > capa.pkt.max_uarea_size) { + printf("pkt.uarea_size too large %u\n", + params->pkt.uarea_size); + return -1; + } + + break; + + case ODP_POOL_TIMEOUT: + if (params->tmo.num > capa.tmo.max_num) { + printf("tmo.num too large %u\n", params->tmo.num); + return -1; + } + break; + + default: + printf("bad pool type %i\n", params->type); + return -1; + } + + return 0; +} + +static odp_pool_t dpdk_pool_create(const char *name, + odp_pool_param_t *params) +{ + struct mbuf_pool_ctor_arg mbp_ctor_arg; + struct mbuf_ctor_arg mb_ctor_arg; + odp_pool_t pool_hdl = ODP_POOL_INVALID; + unsigned mb_size, i, cache_size; + size_t hdr_size; + pool_entry_cp_t *pool_cp; + pool_entry_dp_t *pool_dp; + uint32_t buf_align,
blk_size, headroom, tailroom, min_seg_len; + uint32_t max_len, min_align; + char pool_name[ODP_POOL_NAME_LEN]; + char *rte_name = NULL; +#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0 + unsigned j; +#endif + + if (check_params(params)) + return ODP_POOL_INVALID; + + if (name == NULL) { + pool_name[0] = 0; + } else { + strncpy(pool_name, name, ODP_POOL_NAME_LEN - 1); + pool_name[ODP_POOL_NAME_LEN - 1] = 0; + } + + /* Find an unused buffer pool slot and initialize it as requested */ + for (i = 0; i < ODP_CONFIG_POOLS; i++) { + uint32_t num; + struct rte_mempool *mp; + + pool_cp = get_pool_entry_cp(i); + pool_dp = get_pool_entry_dp(i); + + LOCK(&pool_cp->lock); + if (pool_dp->rte_mempool != NULL) { + UNLOCK(&pool_cp->lock); + continue; + } + + switch (params->type) { + case ODP_POOL_BUFFER: + buf_align = params->buf.align; + blk_size = params->buf.size; + + /* Validate requested buffer alignment */ + if (buf_align > ODP_CONFIG_BUFFER_ALIGN_MAX || + buf_align != + ROUNDDOWN_POWER2(buf_align, buf_align)) { + UNLOCK(&pool_cp->lock); + return ODP_POOL_INVALID; + } + + /* Set correct alignment based on input request */ + if (buf_align == 0) + buf_align = ODP_CACHE_LINE_SIZE; + else if (buf_align < ODP_CONFIG_BUFFER_ALIGN_MIN) + buf_align = ODP_CONFIG_BUFFER_ALIGN_MIN; + + if (params->buf.align != 0) + blk_size = ROUNDUP_ALIGN(blk_size, + buf_align); + + hdr_size = sizeof(odp_buffer_hdr_t); + CHECK_U16_OVERFLOW(blk_size); + mbp_ctor_arg.pkt.mbuf_data_room_size = blk_size; + num = params->buf.num; + ODP_DBG("type: buffer name: %s num: " + "%u size: %u align: %u\n", pool_name, num, + params->buf.size, params->buf.align); + break; + case ODP_POOL_PACKET: + headroom = CONFIG_PACKET_HEADROOM; + tailroom = CONFIG_PACKET_TAILROOM; + min_seg_len = CONFIG_PACKET_SEG_LEN_MIN; + min_align = ODP_CONFIG_BUFFER_ALIGN_MIN; + + blk_size = min_seg_len; + if (params->pkt.seg_len > blk_size) + blk_size = params->pkt.seg_len; + if (params->pkt.len > blk_size) + blk_size = params->pkt.len; + /* Make sure at least one max len packet fits in the + * pool. + */ + max_len = 0; + if (params->pkt.max_len != 0) + max_len = params->pkt.max_len; + if ((max_len + blk_size) / blk_size > params->pkt.num) + blk_size = (max_len + params->pkt.num) / + params->pkt.num; + blk_size = ROUNDUP_ALIGN(headroom + blk_size + + tailroom, min_align); + /* Segment size minus headroom might be rounded down by + * the driver to the nearest multiple of 1024. Round it + * up here to make sure the requested size still going + * to fit there without segmentation. 
+ */ + blk_size = ROUNDUP_ALIGN(blk_size - headroom, + min_seg_len) + headroom; + + hdr_size = sizeof(odp_packet_hdr_t) + + params->pkt.uarea_size; + mb_ctor_arg.pkt_uarea_size = params->pkt.uarea_size; + CHECK_U16_OVERFLOW(blk_size); + mbp_ctor_arg.pkt.mbuf_data_room_size = blk_size; + num = params->pkt.num; + + ODP_DBG("type: packet, name: %s, " + "num: %u, len: %u, blk_size: %u, " + "uarea_size %d, hdr_size %d\n", + pool_name, num, params->pkt.len, blk_size, + params->pkt.uarea_size, hdr_size); + break; + case ODP_POOL_TIMEOUT: + hdr_size = sizeof(odp_timeout_hdr_t); + mbp_ctor_arg.pkt.mbuf_data_room_size = 0; + num = params->tmo.num; + ODP_DBG("type: tmo name: %s num: %u\n", + pool_name, num); + break; + default: + ODP_ERR("Bad type %i\n", + params->type); + UNLOCK(&pool_cp->lock); + return ODP_POOL_INVALID; + } + + mb_ctor_arg.seg_buf_offset = + (uint16_t)ROUNDUP_CACHE_LINE(hdr_size); + mb_ctor_arg.seg_buf_size = mbp_ctor_arg.pkt.mbuf_data_room_size; + mb_ctor_arg.type = params->type; + mb_size = mb_ctor_arg.seg_buf_offset + mb_ctor_arg.seg_buf_size; + mbp_ctor_arg.pool_hdl = pool_cp->pool_hdl; + mbp_ctor_arg.pkt.mbuf_priv_size = mb_ctor_arg.seg_buf_offset - + sizeof(struct rte_mbuf); + + ODP_DBG("Metadata size: %u, mb_size %d\n", + mb_ctor_arg.seg_buf_offset, mb_size); + cache_size = 0; +#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0 + j = ceil((double)num / RTE_MEMPOOL_CACHE_MAX_SIZE); + j = RTE_MAX(j, 2UL); + for (; j <= (num / 2); ++j) + if ((num % j) == 0) { + cache_size = num / j; + break; + } + if (odp_unlikely(cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE || + (uint32_t)cache_size * 1.5 > num)) { + ODP_ERR("cache_size calc failure: %d\n", cache_size); + cache_size = 0; + } +#endif + ODP_DBG("cache_size %d\n", cache_size); + + if (strlen(pool_name) > RTE_MEMPOOL_NAMESIZE - 1) { + ODP_ERR("Max pool name size: %u. Trimming %u long, name collision might happen!\n", + RTE_MEMPOOL_NAMESIZE - 1, strlen(pool_name)); + rte_name = malloc(RTE_MEMPOOL_NAMESIZE); + snprintf(rte_name, RTE_MEMPOOL_NAMESIZE - 1, "%s", + pool_name); + } + + pool_dp->rte_mempool = + rte_mempool_create(rte_name ? 
rte_name : pool_name, + num, + mb_size, + cache_size, + sizeof(struct mbuf_pool_ctor_arg), + odp_dpdk_mbuf_pool_ctor, + &mbp_ctor_arg, + odp_dpdk_mbuf_ctor, + &mb_ctor_arg, + rte_socket_id(), + 0); + free(rte_name); + if (pool_dp->rte_mempool == NULL) { + ODP_ERR("Cannot init DPDK mbuf pool: %s\n", + rte_strerror(rte_errno)); + UNLOCK(&pool_cp->lock); + return ODP_POOL_INVALID; + } + /* found free pool */ + if (name == NULL) { + pool_cp->name[0] = 0; + } else { + strncpy(pool_cp->name, name, + ODP_POOL_NAME_LEN - 1); + pool_cp->name[ODP_POOL_NAME_LEN - 1] = 0; + } + + pool_cp->params = *params; + mp = pool_dp->rte_mempool; + ODP_DBG("Header/element/trailer size: %u/%u/%u, " + "total pool size: %lu\n", + mp->header_size, mp->elt_size, mp->trailer_size, + (unsigned long)((mp->header_size + mp->elt_size + + mp->trailer_size) * num)); + UNLOCK(&pool_cp->lock); + pool_hdl = pool_cp->pool_hdl; + break; + } + + return pool_hdl; +} + +static odp_pool_t dpdk_pool_lookup(const char *name) +{ + struct rte_mempool *mp = NULL; + odp_pool_t pool_hdl = ODP_POOL_INVALID; + int i; + + mp = rte_mempool_lookup(name); + if (mp == NULL) + return ODP_POOL_INVALID; + + for (i = 0; i < ODP_CONFIG_POOLS; i++) { + pool_entry_cp_t *pool_cp = get_pool_entry_cp(i); + pool_entry_dp_t *pool_dp = get_pool_entry_dp(i); + + LOCK(&pool_cp->lock); + if (pool_dp->rte_mempool != mp) { + UNLOCK(&pool_cp->lock); + continue; + } + UNLOCK(&pool_cp->lock); + pool_hdl = pool_cp->pool_hdl; + break; + } + return pool_hdl; +} + +static void dpdk_pool_print(odp_pool_t pool_hdl) +{ + pool_entry_dp_t *pool_dp = odp_pool_to_entry_dp(pool_hdl); + + rte_mempool_dump(stdout, pool_dp->rte_mempool); +} + +static int dpdk_pool_info(odp_pool_t pool_hdl, odp_pool_info_t *info) +{ + pool_entry_cp_t *pool_cp = odp_pool_to_entry_cp(pool_hdl); + + if (pool_cp == NULL || info == NULL) + return -1; + + info->name = pool_cp->name; + info->params = pool_cp->params; + + return 0; +} + +/* + * DPDK doesn't support pool destroy at the moment. Instead we should improve + * dpdk_pool_create() to try to reuse pools + */ +static int dpdk_pool_destroy(odp_pool_t pool_hdl) +{ + pool_entry_dp_t *pool_dp = odp_pool_to_entry_dp(pool_hdl); + + if (pool_dp->rte_mempool == NULL) { + ODP_ERR("Can't find pool!\n"); + return -1; + } + + rte_mempool_free(pool_dp->rte_mempool); + pool_dp->rte_mempool = NULL; + /* The pktio supposed to be closed by now */ + return 0; +} + +static void dpdk_pool_param_init(odp_pool_param_t *params) +{ + memset(params, 0, sizeof(odp_pool_param_t)); +} + +static uint64_t dpdk_pool_to_u64(odp_pool_t hdl) +{ + return _odp_pri(hdl); +} + +pool_module_t dpdk_pool = { + .base = { + .name = "dpdk_pool", + .init_local = dpdk_pool_init_local, + .term_local = dpdk_pool_term_local, + .init_global = dpdk_pool_init_global, + .term_global = dpdk_pool_term_global, + }, + .capability = dpdk_pool_capability, + .create = dpdk_pool_create, + .destroy = dpdk_pool_destroy, + .lookup = dpdk_pool_lookup, + .info = dpdk_pool_info, + .print = dpdk_pool_print, + .to_u64 = dpdk_pool_to_u64, + .param_init = dpdk_pool_param_init, +}; + +ODP_MODULE_CONSTRUCTOR(dpdk_pool) +{ + odp_module_constructor(&dpdk_pool); + odp_subsystem_register_module(pool, &dpdk_pool); +} diff --cc platform/linux-generic/Makefile.am index 7d1066f3,3e26aab4..539342df --- a/platform/linux-generic/Makefile.am +++ b/platform/linux-generic/Makefile.am @@@ -2,19 -2,15 +2,19 @@@ #export CUSTOM_STR=https://git.linaro.org/lng/odp.git
include $(top_srcdir)/platform/Makefile.inc - include $(top_srcdir)/platform/@with_platform@/Makefile.inc
+lib_LTLIBRARIES = $(LIB)/libodp-linux.la + - AM_CFLAGS += -I$(srcdir)/include - AM_CFLAGS += -I$(top_srcdir)/include - AM_CFLAGS += -I$(top_srcdir)/frameworks/modular - AM_CFLAGS += -I$(top_srcdir)/include/odp/arch/@ARCH_ABI@ - AM_CFLAGS += -I$(top_builddir)/include - AM_CFLAGS += -I$(top_srcdir)/arch/@ARCH_DIR@ - AM_CFLAGS += -Iinclude - AM_CFLAGS += -DSYSCONFDIR="@sysconfdir@" - AM_CFLAGS += -D_ODP_PKTIO_IPC + AM_CPPFLAGS = -I$(srcdir)/include + AM_CPPFLAGS += -I$(top_srcdir)/include ++AM_CPPFLAGS += -I$(top_srcdir)/frameworks/modular + AM_CPPFLAGS += -I$(top_srcdir)/include/odp/arch/@ARCH_ABI@ + AM_CPPFLAGS += -I$(top_builddir)/include + AM_CPPFLAGS += -Iinclude + AM_CPPFLAGS += -I$(top_srcdir)/platform/$(with_platform)/arch/$(ARCH_DIR) ++AM_CPPFLAGS += -I$(top_srcdir)/platform/$(with_platform) + AM_CPPFLAGS += -Iinclude + AM_CPPFLAGS += -DSYSCONFDIR="@sysconfdir@"
AM_CPPFLAGS += $(OPENSSL_CPPFLAGS) AM_CPPFLAGS += $(DPDK_CPPFLAGS) diff --cc platform/linux-generic/buffer/generic.c index c896e3d0,00000000..3281119b mode 100644,000000..100644 --- a/platform/linux-generic/buffer/generic.c +++ b/platform/linux-generic/buffer/generic.c @@@ -1,360 -1,0 +1,362 @@@ +/* Copyright (c) 2013, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + ++#include <config.h> ++ +#include <odp/api/buffer.h> +#include <odp_pool_internal.h> +#include <odp_buffer_internal.h> +#include <odp_buffer_inlines.h> +#include <odp_debug_internal.h> +#include <odp_buffer_subsystem.h> + +#include <string.h> +#include <stdio.h> +#include <inttypes.h> + +static odp_buffer_t generic_buffer_from_event(odp_event_t ev) +{ + return (odp_buffer_t)ev; +} + +static odp_event_t generic_buffer_to_event(odp_buffer_t buf) +{ + return (odp_event_t)buf; +} + +static void *generic_buffer_addr(odp_buffer_t buf) +{ + odp_buffer_hdr_t *hdr = buf_hdl_to_hdr(buf); + + return hdr->seg[0].data; +} + +static uint32_t generic_buffer_size(odp_buffer_t buf) +{ + odp_buffer_hdr_t *hdr = buf_hdl_to_hdr(buf); + + return hdr->size; +} + +int odp_buffer_snprint(char *str, uint32_t n, odp_buffer_t buf) +{ + odp_buffer_hdr_t *hdr; + pool_t *pool; + int len = 0; + + if (!odp_buffer_is_valid(buf)) { + ODP_PRINT("Buffer is not valid.\n"); + return len; + } + + hdr = buf_hdl_to_hdr(buf); + pool = hdr->pool_ptr; + + len += snprintf(&str[len], n - len, + "Buffer\n"); + len += snprintf(&str[len], n - len, + " pool %" PRIu64 "\n", + odp_pool_to_u64(pool->pool_hdl)); + len += snprintf(&str[len], n - len, + " addr %p\n", hdr->seg[0].data); + len += snprintf(&str[len], n - len, + " size %" PRIu32 "\n", hdr->size); + len += snprintf(&str[len], n - len, + " type %i\n", hdr->type); + + return len; +} + +static void generic_buffer_print(odp_buffer_t buf) +{ + int max_len = 512; + char str[max_len]; + int len; + + len = odp_buffer_snprint(str, max_len - 1, buf); + str[len] = 0; + + ODP_PRINT("\n%s\n", str); +} + +static uint64_t generic_buffer_to_u64(odp_buffer_t hdl) +{ + return _odp_pri(hdl); +} + +odp_event_type_t _odp_buffer_event_type(odp_buffer_t buf) +{ + return buf_hdl_to_hdr(buf)->event_type; +} + +void _odp_buffer_event_type_set(odp_buffer_t buf, int ev) +{ + buf_hdl_to_hdr(buf)->event_type = ev; +} + +odp_event_subtype_t _odp_buffer_event_subtype(odp_buffer_t buf) +{ + return buf_hdl_to_hdr(buf)->event_subtype; +} + +void _odp_buffer_event_subtype_set(odp_buffer_t buf, int ev) +{ + buf_hdl_to_hdr(buf)->event_subtype = ev; +} + +int buffer_alloc_multi(pool_t *pool, odp_buffer_hdr_t *buf_hdr[], int max_num) +{ + ring_t *ring; + uint32_t mask, i; + pool_cache_t *cache; + uint32_t cache_num, num_ch, num_deq, burst; + odp_buffer_hdr_t *hdr; + + cache = local.cache[pool->pool_idx]; + + cache_num = cache->num; + num_ch = max_num; + num_deq = 0; + burst = CACHE_BURST; + + if (odp_unlikely(cache_num < (uint32_t)max_num)) { + /* Cache does not have enough buffers */ + num_ch = cache_num; + num_deq = max_num - cache_num; + + if (odp_unlikely(num_deq > CACHE_BURST)) + burst = num_deq; + } + + /* Get buffers from the cache */ + for (i = 0; i < num_ch; i++) { + uint32_t j = cache_num - num_ch + i; + + buf_hdr[i] = buf_hdr_from_index(pool, cache->buf_index[j]); + } + + /* If needed, get more from the global pool */ + if (odp_unlikely(num_deq)) { + /* Temporary copy needed since odp_buffer_t is uintptr_t + * and not uint32_t. 
*/ + uint32_t data[burst]; + + ring = &pool->ring->hdr; + mask = pool->ring_mask; + burst = ring_deq_multi(ring, mask, data, burst); + cache_num = burst - num_deq; + + if (odp_unlikely(burst < num_deq)) { + num_deq = burst; + cache_num = 0; + } + + for (i = 0; i < num_deq; i++) { + uint32_t idx = num_ch + i; + + hdr = buf_hdr_from_index(pool, data[i]); + odp_prefetch(hdr); + buf_hdr[idx] = hdr; + } + + /* Cache extra buffers. Cache is currently empty. */ + for (i = 0; i < cache_num; i++) + cache->buf_index[i] = data[num_deq + i]; + + cache->num = cache_num; + } else { + cache->num = cache_num - num_ch; + } + + return num_ch + num_deq; +} + +static inline void buffer_free_to_pool(pool_t *pool, + odp_buffer_hdr_t *buf_hdr[], int num) +{ + int i; + ring_t *ring; + uint32_t mask; + pool_cache_t *cache; + uint32_t cache_num; + + cache = local.cache[pool->pool_idx]; + + /* Special case of a very large free. Move directly to + * the global pool. */ + if (odp_unlikely(num > CONFIG_POOL_CACHE_SIZE)) { + uint32_t buf_index[num]; + + ring = &pool->ring->hdr; + mask = pool->ring_mask; + for (i = 0; i < num; i++) + buf_index[i] = buf_hdr[i]->index; + + ring_enq_multi(ring, mask, buf_index, num); + + return; + } + + /* Make room into local cache if needed. Do at least burst size + * transfer. */ + cache_num = cache->num; + + if (odp_unlikely((int)(CONFIG_POOL_CACHE_SIZE - cache_num) < num)) { + uint32_t index; + int burst = CACHE_BURST; + + ring = &pool->ring->hdr; + mask = pool->ring_mask; + + if (odp_unlikely(num > CACHE_BURST)) + burst = num; + if (odp_unlikely((uint32_t)num > cache_num)) + burst = cache_num; + + { + /* Temporary copy needed since odp_buffer_t is + * uintptr_t and not uint32_t. */ + uint32_t data[burst]; + + index = cache_num - burst; + + for (i = 0; i < burst; i++) + data[i] = cache->buf_index[index + i]; + + ring_enq_multi(ring, mask, data, burst); + } + + cache_num -= burst; + } + + for (i = 0; i < num; i++) + cache->buf_index[cache_num + i] = buf_hdr[i]->index; + + cache->num = cache_num + num; +} + +void buffer_free_multi(odp_buffer_hdr_t *buf_hdr[], int num_total) +{ + pool_t *pool; + int num; + int i; + int first = 0; + + while (1) { + num = 1; + i = 1; + pool = buf_hdr[first]->pool_ptr; + + /* 'num' buffers are from the same pool */ + if (num_total > 1) { + for (i = first; i < num_total; i++) + if (pool != buf_hdr[i]->pool_ptr) + break; + + num = i - first; + } + + buffer_free_to_pool(pool, &buf_hdr[first], num); + + if (i == num_total) + return; + + first = i; + } +} + +static odp_buffer_t generic_buffer_alloc(odp_pool_t pool_hdl) +{ + odp_buffer_t buf; + pool_t *pool; + int ret; + + ODP_ASSERT(ODP_POOL_INVALID != pool_hdl); + + pool = pool_entry_from_hdl(pool_hdl); + ret = buffer_alloc_multi(pool, (odp_buffer_hdr_t **)&buf, 1); + + if (odp_likely(ret == 1)) + return buf; + + return ODP_BUFFER_INVALID; +} + +static int generic_buffer_alloc_multi(odp_pool_t pool_hdl, + odp_buffer_t buf[], int num) +{ + pool_t *pool; + + ODP_ASSERT(ODP_POOL_INVALID != pool_hdl); + + pool = pool_entry_from_hdl(pool_hdl); + + return buffer_alloc_multi(pool, (odp_buffer_hdr_t **)buf, num); +} + +static void generic_buffer_free(odp_buffer_t buf) +{ + buffer_free_multi((odp_buffer_hdr_t **)&buf, 1); +} + +static void generic_buffer_free_multi(const odp_buffer_t buf[], int num) +{ + buffer_free_multi((odp_buffer_hdr_t **)(uintptr_t)buf, num); +} + +static odp_pool_t generic_buffer_pool(odp_buffer_t buf) +{ + pool_t *pool = pool_from_buf(buf); + + return pool->pool_hdl; +} + +static int 
generic_buffer_is_valid(odp_buffer_t buf) +{ + pool_t *pool; + + if (buf == ODP_BUFFER_INVALID) + return 0; + + pool = pool_from_buf(buf); + + if (pool->pool_idx >= ODP_CONFIG_POOLS) + return 0; + + if (pool->reserved == 0) + return 0; + + return 1; +} + +odp_buffer_module_t generic_buffer = { + .base = { + .name = "generic_buffer", + .init_local = NULL, + .term_local = NULL, + .init_global = NULL, + .term_global = NULL, + }, + .buffer_from_event = generic_buffer_from_event, + .buffer_to_event = generic_buffer_to_event, + .buffer_addr = generic_buffer_addr, + .buffer_alloc_multi = generic_buffer_alloc_multi, + .buffer_free_multi = generic_buffer_free_multi, + .buffer_alloc = generic_buffer_alloc, + .buffer_free = generic_buffer_free, + .buffer_size = generic_buffer_size, + .buffer_is_valid = generic_buffer_is_valid, + .buffer_pool = generic_buffer_pool, + .buffer_print = generic_buffer_print, + .buffer_to_u64 = generic_buffer_to_u64, +}; + +ODP_MODULE_CONSTRUCTOR(generic_buffer) +{ + odp_module_constructor(&generic_buffer); + odp_subsystem_register_module(buffer, &generic_buffer); +} + diff --cc platform/linux-generic/drv_driver.c index 3c918def,529da48f..ecf75ab9 --- a/platform/linux-generic/drv_driver.c +++ b/platform/linux-generic/drv_driver.c @@@ -4,10 -4,7 +4,12 @@@ * SPDX-License-Identifier: BSD-3-Clause */
++#include <config.h> ++ +#include <string.h> + #include <odp_config_internal.h> +#include <_ishmpool_internal.h>
#include <odp/api/std_types.h> #include <odp/api/debug.h> diff --cc platform/linux-generic/include/odp/api/plat/packet_inlines.h index 06b049fc,be7e18ec..1804fa6f --- a/platform/linux-generic/include/odp/api/plat/packet_inlines.h +++ b/platform/linux-generic/include/odp/api/plat/packet_inlines.h @@@ -147,6 -157,11 +157,12 @@@ static inline void _odp_packet_prefetch (void)pkt; (void)offset; (void)len; }
++/** @internal Convert a packet handle to a buffer handle @param pkt Packet handle @return Buffer handle */ + static inline odp_buffer_t packet_to_buffer(odp_packet_t pkt) + { + return (odp_buffer_t)pkt; + } + /* Include inlined versions of API functions */ #include <odp/api/plat/static_inline.h> #if ODP_ABI_COMPAT == 0 diff --cc platform/linux-generic/include/odp_packet_io_internal.h index 9fe9aeaa,1a4e345f..dacea47c --- a/platform/linux-generic/include/odp_packet_io_internal.h +++ b/platform/linux-generic/include/odp_packet_io_internal.h @@@ -18,6 -18,6 +18,8 @@@ extern "C" { #endif
++#include <config.h> ++ #include <odp/api/spinlock.h> #include <odp/api/ticketlock.h> #include <odp_classification_datamodel.h> @@@ -151,9 -271,6 +153,12 @@@ int sock_stats_fd(pktio_entry_t *pktio_ int fd); int sock_stats_reset_fd(pktio_entry_t *pktio_entry, int fd);
++int pktin_poll_one(int pktio_index, ++ int rx_queue, ++ odp_event_t evt_tbl[]); +int pktin_poll(int pktio_index, int num_queue, int index[]); +void pktio_stop_finalize(int pktio_index); + #ifdef __cplusplus } #endif diff --cc platform/linux-generic/include/odp_queue_subsystem.h index 2c62af37,00000000..601a254e mode 100644,000000..100644 --- a/platform/linux-generic/include/odp_queue_subsystem.h +++ b/platform/linux-generic/include/odp_queue_subsystem.h @@@ -1,77 -1,0 +1,77 @@@ +/* Copyright (c) 2017, ARM Limited. All rights reserved. + * + * Copyright (c) 2017, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#ifndef ODP_QUEUE_SUBSYSTEM_H +#define ODP_QUEUE_SUBSYSTEM_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include <odp_module.h> +#include <odp/api/queue.h> + +#define QUEUE_SUBSYSTEM_VERSION 0x00010000UL + +/* ODP queue public APIs subsystem */ +ODP_SUBSYSTEM_DECLARE(queue); + +/* Subsystem APIs declarations */ +ODP_SUBSYSTEM_API(queue, odp_queue_t, create, const char *name, + const odp_queue_param_t *param); +ODP_SUBSYSTEM_API(queue, int, destroy, odp_queue_t queue); +ODP_SUBSYSTEM_API(queue, odp_queue_t, lookup, const char *name); +ODP_SUBSYSTEM_API(queue, int, capability, odp_queue_capability_t *capa); +ODP_SUBSYSTEM_API(queue, int, context_set, odp_queue_t queue, + void *context, uint32_t len); +ODP_SUBSYSTEM_API(queue, void *, context, odp_queue_t queue); +ODP_SUBSYSTEM_API(queue, int, enq, odp_queue_t queue, odp_event_t ev); +ODP_SUBSYSTEM_API(queue, int, enq_multi, odp_queue_t queue, + const odp_event_t events[], int num); +ODP_SUBSYSTEM_API(queue, odp_event_t, deq, odp_queue_t queue); +ODP_SUBSYSTEM_API(queue, int, deq_multi, odp_queue_t queue, + odp_event_t events[], int num); +ODP_SUBSYSTEM_API(queue, odp_queue_type_t, type, odp_queue_t queue); +ODP_SUBSYSTEM_API(queue, odp_schedule_sync_t, sched_type, odp_queue_t queue); +ODP_SUBSYSTEM_API(queue, odp_schedule_prio_t, sched_prio, odp_queue_t queue); +ODP_SUBSYSTEM_API(queue, odp_schedule_group_t, sched_group, + odp_queue_t queue); - ODP_SUBSYSTEM_API(queue, int, lock_count, odp_queue_t queue); ++ODP_SUBSYSTEM_API(queue, uint32_t, lock_count, odp_queue_t queue); +ODP_SUBSYSTEM_API(queue, uint64_t, to_u64, odp_queue_t hdl); +ODP_SUBSYSTEM_API(queue, void, param_init, odp_queue_param_t *param); +ODP_SUBSYSTEM_API(queue, int, info, odp_queue_t queue, + odp_queue_info_t *info); + +typedef ODP_MODULE_CLASS(queue) { + odp_module_base_t base; + + odp_api_proto(queue, enq_multi) enq_multi; + odp_api_proto(queue, deq_multi) deq_multi; + odp_api_proto(queue, enq) enq; + odp_api_proto(queue, deq) deq; + odp_api_proto(queue, context) context; + odp_api_proto(queue, sched_type) sched_type; + odp_api_proto(queue, sched_prio) sched_prio; + odp_api_proto(queue, sched_group) sched_group; + odp_api_proto(queue, create) create; + odp_api_proto(queue, destroy) destroy; + odp_api_proto(queue, lookup) lookup; + odp_api_proto(queue, capability) capability; + odp_api_proto(queue, context_set) context_set; + odp_api_proto(queue, type) type; + odp_api_proto(queue, lock_count) lock_count; + odp_api_proto(queue, to_u64) to_u64; + odp_api_proto(queue, param_init) param_init; + odp_api_proto(queue, info) info; +} odp_queue_module_t; + +#ifdef __cplusplus +} +#endif + +#endif diff --cc platform/linux-generic/include/odp_schedule_if.h index 8f39eec1,06a70bdd..c7c5194c --- a/platform/linux-generic/include/odp_schedule_if.h +++ b/platform/linux-generic/include/odp_schedule_if.h @@@ -10,28 -10,106 +10,28 @@@ 
#include <odp/api/queue.h> #include <odp_queue_if.h> #include <odp/api/schedule.h> -#include <odp_forward_typedefs_internal.h> - -/* Number of ordered locks per queue */ -#define SCHEDULE_ORDERED_LOCKS_PER_QUEUE 2 - -typedef void (*schedule_pktio_start_fn_t)(int pktio_index, - int num_in_queue, - int in_queue_idx[], - odp_queue_t odpq[]); -typedef int (*schedule_thr_add_fn_t)(odp_schedule_group_t group, int thr); -typedef int (*schedule_thr_rem_fn_t)(odp_schedule_group_t group, int thr); -typedef int (*schedule_num_grps_fn_t)(void); -typedef int (*schedule_init_queue_fn_t)(uint32_t queue_index, - const odp_schedule_param_t *sched_param - ); -typedef void (*schedule_destroy_queue_fn_t)(uint32_t queue_index); -typedef int (*schedule_sched_queue_fn_t)(uint32_t queue_index); -typedef int (*schedule_unsched_queue_fn_t)(uint32_t queue_index); -typedef int (*schedule_ord_enq_multi_fn_t)(queue_t q_int, - void *buf_hdr[], int num, int *ret); -typedef int (*schedule_init_global_fn_t)(void); -typedef int (*schedule_term_global_fn_t)(void); -typedef int (*schedule_init_local_fn_t)(void); -typedef int (*schedule_term_local_fn_t)(void); -typedef void (*schedule_order_lock_fn_t)(void); -typedef void (*schedule_order_unlock_fn_t)(void); -typedef void (*schedule_order_unlock_lock_fn_t)(void); -typedef uint32_t (*schedule_max_ordered_locks_fn_t)(void); -typedef void (*schedule_save_context_fn_t)(uint32_t queue_index);
typedef struct schedule_fn_t { - int status_sync; - schedule_pktio_start_fn_t pktio_start; - schedule_thr_add_fn_t thr_add; - schedule_thr_rem_fn_t thr_rem; - schedule_num_grps_fn_t num_grps; - schedule_init_queue_fn_t init_queue; - schedule_destroy_queue_fn_t destroy_queue; - schedule_sched_queue_fn_t sched_queue; - schedule_ord_enq_multi_fn_t ord_enq_multi; - schedule_init_global_fn_t init_global; - schedule_term_global_fn_t term_global; - schedule_init_local_fn_t init_local; - schedule_term_local_fn_t term_local; - schedule_order_lock_fn_t order_lock; - schedule_order_unlock_fn_t order_unlock; - schedule_order_unlock_lock_fn_t order_unlock_lock; - schedule_max_ordered_locks_fn_t max_ordered_locks; + int status_sync; + void (*pktio_start)(int pktio_index, int num_in_queue, - int in_queue_idx[]); ++ int in_queue_idx[], odp_queue_t odpq[]); + int (*thr_add)(odp_schedule_group_t group, int thr); + int (*thr_rem)(odp_schedule_group_t group, int thr); + int (*num_grps)(void); + int (*init_queue)(uint32_t queue_index, + const odp_schedule_param_t *sched_param); + void (*destroy_queue)(uint32_t queue_index); + int (*sched_queue)(uint32_t queue_index); + int (*ord_enq_multi)(queue_t q_int, void *buf_hdr[], int num, int *ret); + void (*order_lock)(void); + void (*order_unlock)(void); + unsigned (*max_ordered_locks)(void);
/* Called only when status_sync is set */ - schedule_unsched_queue_fn_t unsched_queue; - schedule_save_context_fn_t save_context; - + int (*unsched_queue)(uint32_t queue_index); + void (*save_context)(uint32_t queue_index); } schedule_fn_t;
-/* Interface towards the scheduler */ extern const schedule_fn_t *sched_fn;
-/* Interface for the scheduler */ -int sched_cb_pktin_poll(int pktio_index, int num_queue, int index[]); -int sched_cb_pktin_poll_one(int pktio_index, int rx_queue, odp_event_t evts[]); -void sched_cb_pktio_stop_finalize(int pktio_index); -odp_queue_t sched_cb_queue_handle(uint32_t queue_index); -void sched_cb_queue_destroy_finalize(uint32_t queue_index); -int sched_cb_queue_deq_multi(uint32_t queue_index, odp_event_t ev[], int num); -int sched_cb_queue_empty(uint32_t queue_index); - -/* API functions */ -typedef struct { - uint64_t (*schedule_wait_time)(uint64_t); - odp_event_t (*schedule)(odp_queue_t *, uint64_t); - int (*schedule_multi)(odp_queue_t *, uint64_t, odp_event_t [], int); - void (*schedule_pause)(void); - void (*schedule_resume)(void); - void (*schedule_release_atomic)(void); - void (*schedule_release_ordered)(void); - void (*schedule_prefetch)(int); - int (*schedule_num_prio)(void); - odp_schedule_group_t (*schedule_group_create)(const char *, - const odp_thrmask_t *); - int (*schedule_group_destroy)(odp_schedule_group_t); - odp_schedule_group_t (*schedule_group_lookup)(const char *); - int (*schedule_group_join)(odp_schedule_group_t, const odp_thrmask_t *); - int (*schedule_group_leave)(odp_schedule_group_t, - const odp_thrmask_t *); - int (*schedule_group_thrmask)(odp_schedule_group_t, odp_thrmask_t *); - int (*schedule_group_info)(odp_schedule_group_t, - odp_schedule_group_info_t *); - void (*schedule_order_lock)(uint32_t); - void (*schedule_order_unlock)(uint32_t); - void (*schedule_order_unlock_lock)(uint32_t, uint32_t); - -} schedule_api_t; - -#ifdef __cplusplus -} -#endif - #endif diff --cc platform/linux-generic/include/odp_schedule_subsystem.h index c3edef63,00000000..4b2f2958 mode 100644,000000..100644 --- a/platform/linux-generic/include/odp_schedule_subsystem.h +++ b/platform/linux-generic/include/odp_schedule_subsystem.h @@@ -1,77 -1,0 +1,80 @@@ +/* Copyright (c) 2017, ARM Limited. All rights reserved. + * + * Copyright (c) 2017, Linaro Limited + * All rights reserved. 
+ * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#ifndef ODP_SCHEDULE_SUBSYSTEM_H_ +#define ODP_SCHEDULE_SUBSYSTEM_H_ + +/* API header files */ +#include <odp/api/align.h> +#include <odp/api/schedule.h> + +/* Internal header files */ +#include <odp_module.h> + +#define SCHEDULE_SUBSYSTEM_VERSION 0x00010000UL + +ODP_SUBSYSTEM_DECLARE(schedule); + +ODP_SUBSYSTEM_API(schedule, uint64_t, wait_time, uint64_t ns); +ODP_SUBSYSTEM_API(schedule, odp_event_t, schedule, odp_queue_t *from, + uint64_t wait); +ODP_SUBSYSTEM_API(schedule, int, schedule_multi, odp_queue_t *from, + uint64_t wait, odp_event_t events[], int num); +ODP_SUBSYSTEM_API(schedule, void, schedule_pause, void); +ODP_SUBSYSTEM_API(schedule, void, schedule_resume, void); +ODP_SUBSYSTEM_API(schedule, void, schedule_release_atomic, void); +ODP_SUBSYSTEM_API(schedule, void, schedule_release_ordered, void); +ODP_SUBSYSTEM_API(schedule, void, schedule_prefetch, int num); +ODP_SUBSYSTEM_API(schedule, int, schedule_num_prio, void); +ODP_SUBSYSTEM_API(schedule, odp_schedule_group_t, schedule_group_create, + const char *name, const odp_thrmask_t *mask); +ODP_SUBSYSTEM_API(schedule, int, schedule_group_destroy, + odp_schedule_group_t group); +ODP_SUBSYSTEM_API(schedule, odp_schedule_group_t, schedule_group_lookup, + const char *name); +ODP_SUBSYSTEM_API(schedule, int, schedule_group_join, + odp_schedule_group_t group, const odp_thrmask_t *mask); +ODP_SUBSYSTEM_API(schedule, int, schedule_group_leave, + odp_schedule_group_t group, const odp_thrmask_t *mask); +ODP_SUBSYSTEM_API(schedule, int, schedule_group_thrmask, + odp_schedule_group_t group, odp_thrmask_t *thrmask); +ODP_SUBSYSTEM_API(schedule, int, schedule_group_info, + odp_schedule_group_t group, odp_schedule_group_info_t *info); +ODP_SUBSYSTEM_API(schedule, void, schedule_order_lock, unsigned lock_index); +ODP_SUBSYSTEM_API(schedule, void, schedule_order_unlock, unsigned lock_index); ++ODP_SUBSYSTEM_API(schedule, void, schedule_order_unlock_lock, ++ uint32_t unlock_index, uint32_t lock_index); + +typedef ODP_MODULE_CLASS(schedule) { + odp_module_base_t base; + /* Called from CP threads */ + odp_api_proto(schedule, schedule_group_create) schedule_group_create; + odp_api_proto(schedule, schedule_group_destroy) schedule_group_destroy; + odp_api_proto(schedule, schedule_group_lookup) schedule_group_lookup; + odp_api_proto(schedule, schedule_group_join) schedule_group_join; + odp_api_proto(schedule, schedule_group_leave) schedule_group_leave; + odp_api_proto(schedule, schedule_group_thrmask) schedule_group_thrmask; + odp_api_proto(schedule, schedule_group_info) schedule_group_info; + odp_api_proto(schedule, schedule_num_prio) schedule_num_prio; + /* Called from DP threads */ + odp_api_proto(schedule, schedule) schedule ODP_ALIGNED_CACHE; + odp_api_proto(schedule, schedule_multi) schedule_multi; + odp_api_proto(schedule, schedule_prefetch) schedule_prefetch; + odp_api_proto(schedule, schedule_order_lock) schedule_order_lock; + odp_api_proto(schedule, schedule_order_unlock) schedule_order_unlock; ++ odp_api_proto(schedule, schedule_order_unlock_lock) schedule_order_unlock_lock; + odp_api_proto(schedule, schedule_release_atomic) + schedule_release_atomic; + odp_api_proto(schedule, schedule_release_ordered) + schedule_release_ordered; + odp_api_proto(schedule, wait_time) wait_time; + odp_api_proto(schedule, schedule_pause) schedule_pause; + odp_api_proto(schedule, schedule_resume) schedule_resume; +} odp_schedule_module_t; + +#endif /* ODP_SCHEDULE_SUBSYSTEM_H_ */ diff --cc 
platform/linux-generic/m4/odp_schedule.m4 index d862b8b2,087cff87..9c09d6c4 --- a/platform/linux-generic/m4/odp_schedule.m4 +++ b/platform/linux-generic/m4/odp_schedule.m4 @@@ -1,44 -1,23 +1,26 @@@ - # Checks for --enable-schedule-sp and defines ODP_SCHEDULE_SP and adds - # -DODP_SCHEDULE_SP to CFLAGS. - AC_ARG_ENABLE( - [schedule_sp], - [AC_HELP_STRING([--enable-schedule-sp], - [enable strict priority scheduler])], - [if test "x$enableval" = xyes; then - schedule_sp=true - ODP_CFLAGS="$ODP_CFLAGS -DODP_SCHEDULE_SP" - else - schedule_sp=false - fi], - [schedule_sp=false]) - AM_CONDITIONAL([ODP_SCHEDULE_SP], [test x$schedule_sp = xtrue]) + AC_ARG_ENABLE([schedule-sp], + [ --enable-schedule-sp enable strict priority scheduler], + [if test x$enableval = xyes; then + schedule_sp_enabled=yes + AC_DEFINE([ODP_SCHEDULE_SP], [1], + [Define to 1 to enable strict priority scheduler]) + fi]) ++AM_CONDITIONAL([ODP_SCHEDULE_SP], [test x$schedule_sp_enabled = xyes])
- # Checks for --enable-schedule-iquery and defines ODP_SCHEDULE_IQUERY and adds - # -DODP_SCHEDULE_IQUERY to CFLAGS. - AC_ARG_ENABLE( - [schedule_iquery], - [AC_HELP_STRING([--enable-schedule-iquery], - [enable interests query (sparse bitmap) scheduler])], - [if test "x$enableval" = xyes; then - schedule_iquery=true - ODP_CFLAGS="$ODP_CFLAGS -DODP_SCHEDULE_IQUERY" - else - schedule_iquery=false - fi], - [schedule_iquery=false]) - AM_CONDITIONAL([ODP_SCHEDULE_IQUERY], [test x$schedule_iquery = xtrue]) + AC_ARG_ENABLE([schedule-iquery], + [ --enable-schedule-iquery enable interests query (sparse bitmap) scheduler], + [if test x$enableval = xyes; then + schedule_iquery_enabled=yes + AC_DEFINE([ODP_SCHEDULE_IQUERY], [1], + [Define to 1 to enable interests query scheduler]) + fi]) ++AM_CONDITIONAL([ODP_SCHEDULE_IQUERY], [test x$schedule_iquery_enabled = xyes])
- # Checks for --enable-schedule-scalable and defines ODP_SCHEDULE_SCALABLE and - # adds -DODP_SCHEDULE_SCALABLE to CFLAGS. - AC_ARG_ENABLE( - [schedule_scalable], - [AC_HELP_STRING([--enable-schedule-scalable], - [enable scalable scheduler])], - [if test "x$enableval" = xyes; then - schedule_scalable=true - ODP_CFLAGS="$ODP_CFLAGS -DODP_SCHEDULE_SCALABLE" - else - schedule_scalable=false - fi], - [schedule_scalable=false]) - AM_CONDITIONAL([ODP_SCHEDULE_SCALABLE], [test x$schedule_scalable = xtrue]) + AC_ARG_ENABLE([schedule_scalable], + [ --enable-schedule-scalable enable scalable scheduler], + [if test x$enableval = xyes; then + schedule_scalable_enabled=yes + AC_DEFINE([ODP_SCHEDULE_SCALABLE], [1], + [Define to 1 to enable scalable scheduler]) + fi]) ++AM_CONDITIONAL([ODP_SCHEDULE_SCALABLE], [test x$schedule_scalable_enabled = xyes]) diff --cc platform/linux-generic/odp_packet_io.c index e023df13,64ec1f67..9269f33d --- a/platform/linux-generic/odp_packet_io.c +++ b/platform/linux-generic/odp_packet_io.c @@@ -640,7 -670,53 +645,53 @@@ static int pktin_deq_multi(queue_t q_in return nbr; }
-int sched_cb_pktin_poll_one(int pktio_index, - int rx_queue, - odp_event_t evt_tbl[QUEUE_MULTI_MAX]) ++int pktin_poll_one(int pktio_index, ++ int rx_queue, ++ odp_event_t evt_tbl[QUEUE_MULTI_MAX]) + { + int num_rx, num_pkts, i; + pktio_entry_t *entry = pktio_entry_by_index(pktio_index); + odp_packet_t pkt; + odp_packet_hdr_t *pkt_hdr; + odp_buffer_hdr_t *buf_hdr; + odp_packet_t packets[QUEUE_MULTI_MAX]; + queue_t queue; + + if (odp_unlikely(entry->s.state != PKTIO_STATE_STARTED)) { + if (entry->s.state < PKTIO_STATE_ACTIVE || + entry->s.state == PKTIO_STATE_STOP_PENDING) + return -1; + + ODP_DBG("interface not started\n"); + return 0; + } + + ODP_ASSERT((unsigned)rx_queue < entry->s.num_in_queue); + num_pkts = entry->s.ops->recv(entry, rx_queue, + packets, QUEUE_MULTI_MAX); + + num_rx = 0; + for (i = 0; i < num_pkts; i++) { + pkt = packets[i]; + pkt_hdr = odp_packet_hdr(pkt); + if (odp_unlikely(pkt_hdr->p.input_flags.dst_queue)) { + queue = pkt_hdr->dst_queue; + buf_hdr = packet_to_buf_hdr(pkt); + if (queue_fn->enq_multi(queue, &buf_hdr, 1) < 0) { + /* Queue full? */ + odp_packet_free(pkt); + __atomic_fetch_add(&entry->s.stats.in_discards, + 1, + __ATOMIC_RELAXED); + } + } else { + evt_tbl[num_rx++] = odp_packet_to_event(pkt); + } + } + return num_rx; + } + -int sched_cb_pktin_poll(int pktio_index, int num_queue, int index[]) +int pktin_poll(int pktio_index, int num_queue, int index[]) { odp_buffer_hdr_t *hdr_tbl[QUEUE_MULTI_MAX]; int num, idx; diff --cc platform/linux-generic/odp_queue_if.c index f77405a9,969b0d3c..8137be5a --- a/platform/linux-generic/odp_queue_if.c +++ b/platform/linux-generic/odp_queue_if.c @@@ -1,11 -1,17 +1,14 @@@ /* Copyright (c) 2017, ARM Limited * All rights reserved. * - * SPDX-License-Identifier: BSD-3-Clause + * SPDX-License-Identifier: BSD-3-Clause */ + + #include "config.h" + #include <odp_queue_if.h>
-extern const queue_api_t queue_scalable_api; extern const queue_fn_t queue_scalable_fn; - -extern const queue_api_t queue_default_api; extern const queue_fn_t queue_default_fn;
#ifdef ODP_SCHEDULE_SCALABLE diff --cc platform/linux-generic/pktio/dpdk.c index 511c7778,26ca0d6b..bbe8ddda --- a/platform/linux-generic/pktio/dpdk.c +++ b/platform/linux-generic/pktio/dpdk.c @@@ -30,20 -32,12 +32,24 @@@ #include <rte_mbuf.h> #include <rte_mempool.h> #include <rte_ethdev.h> + #include <rte_ip.h> + #include <rte_ip_frag.h> + #include <rte_udp.h> + #include <rte_tcp.h> #include <rte_string_fns.h>
+static inline pktio_ops_dpdk_data_t * + __retrieve_op_data(pktio_entry_t *pktio) +{ + return (pktio_ops_dpdk_data_t *)(pktio->ops_data(dpdk)); +} + +static inline void __release_op_data(pktio_entry_t *pktio) +{ + free(pktio->ops_data(dpdk)); + pktio->ops_data(dpdk) = NULL; +} + #if ODP_DPDK_ZERO_COPY ODP_STATIC_ASSERT(CONFIG_PACKET_HEADROOM == RTE_PKTMBUF_HEADROOM, "ODP and DPDK headroom sizes not matching!"); @@@ -322,10 -358,11 +370,11 @@@ static inline int mbuf_to_pkt(pktio_ent int i, j; int nb_pkts = 0; int alloc_len, num; - odp_pool_t pool = pktio_entry->s.pkt_dpdk.pool; + odp_pool_t pool = __retrieve_op_data(pktio_entry)->pool; + odp_pktin_config_opt_t *pktin_cfg = &pktio_entry->s.config.pktin;
/* Allocate maximum sized packets */ - alloc_len = pktio_entry->s.pkt_dpdk.data_room; + alloc_len = __retrieve_op_data(pktio_entry)->data_room;
num = packet_alloc_multi(pool, alloc_len, pkt_table, mbuf_num); if (num != mbuf_num) { @@@ -443,7 -582,8 +595,8 @@@ static inline int mbuf_to_pkt_zero(pkti void *data; int i; int nb_pkts = 0; - odp_pool_t pool = pktio_entry->s.pkt_dpdk.pool; + odp_pool_t pool = __retrieve_op_data(pktio_entry)->pool; + odp_pktin_config_opt_t *pktin_cfg = &pktio_entry->s.config.pktin;
for (i = 0; i < mbuf_num; i++) { odp_packet_hdr_t parsed_hdr; @@@ -498,13 -645,12 +658,13 @@@ static inline int pkt_to_mbuf_zero(pktio_entry_t *pktio_entry, struct rte_mbuf *mbuf_table[], const odp_packet_t pkt_table[], uint16_t num, - uint16_t *seg_count) + uint16_t *copy_count) { - pkt_dpdk_t *pkt_dpdk = &pktio_entry->s.pkt_dpdk; + pktio_ops_dpdk_data_t *pkt_dpdk = + __retrieve_op_data(pktio_entry); + odp_pktout_config_opt_t *pktout_cfg = &pktio_entry->s.config.pktout; int i; - - *seg_count = 0; + *copy_count = 0;
for (i = 0; i < num; i++) { odp_packet_t pkt = pkt_table[i]; @@@ -670,9 -829,9 +844,10 @@@ static void rss_conf_to_hash_proto(stru static int dpdk_setup_port(pktio_entry_t *pktio_entry) { int ret; - pkt_dpdk_t *pkt_dpdk = &pktio_entry->s.pkt_dpdk; + pktio_ops_dpdk_data_t *pkt_dpdk = + __retrieve_op_data(pktio_entry); struct rte_eth_rss_conf rss_conf; + uint16_t hw_ip_checksum = 0;
/* Always set some hash functions to enable DPDK RSS hash calculation */ if (pkt_dpdk->hash.all_bits == 0) { @@@ -905,9 -1066,13 +1085,14 @@@ static int dpdk_output_queues_config(pk static void dpdk_init_capability(pktio_entry_t *pktio_entry, struct rte_eth_dev_info *dev_info) { - pkt_dpdk_t *pkt_dpdk = &pktio_entry->s.pkt_dpdk; + pktio_ops_dpdk_data_t *pkt_dpdk = + __retrieve_op_data(pktio_entry); odp_pktio_capability_t *capa = &pkt_dpdk->capa; + int ptype_cnt; + int ptype_l3_ipv4 = 0; + int ptype_l4_tcp = 0; + int ptype_l4_udp = 0; + uint32_t ptype_mask = RTE_PTYPE_L3_MASK | RTE_PTYPE_L4_MASK;
memset(dev_info, 0, sizeof(struct rte_eth_dev_info)); memset(capa, 0, sizeof(odp_pktio_capability_t)); @@@ -1180,9 -1381,8 +1414,9 @@@ static int dpdk_send(pktio_entry_t *pkt const odp_packet_t pkt_table[], int num) { struct rte_mbuf *tx_mbufs[num]; - pkt_dpdk_t *pkt_dpdk = &pktio_entry->s.pkt_dpdk; + pktio_ops_dpdk_data_t *pkt_dpdk = + __retrieve_op_data(pktio_entry); - uint16_t seg_count = 0; + uint16_t copy_count = 0; int tx_pkts; int i; int mbufs; diff --cc platform/linux-generic/pktio/ipc.c index 14cd86eb,a7f346ae..5211d1e5 --- a/platform/linux-generic/pktio/ipc.c +++ b/platform/linux-generic/pktio/ipc.c @@@ -3,6 -3,10 +3,9 @@@ * * SPDX-License-Identifier: BSD-3-Clause */ + + #include "config.h" + -#include <odp_packet_io_ipc_internal.h> #include <odp_debug_internal.h> #include <odp_packet_io_internal.h> #include <odp/api/system_info.h> diff --cc platform/linux-generic/pool/generic.c index c628eb1b,c9aca7d6..520d8a6f --- a/platform/linux-generic/pool/generic.c +++ b/platform/linux-generic/pool/generic.c @@@ -37,10 -41,45 +40,13 @@@ ODP_STATIC_ASSERT(CONFIG_POOL_CACHE_SIZ ODP_STATIC_ASSERT(CONFIG_PACKET_SEG_LEN_MIN >= 256, "ODP Segment size must be a minimum of 256 bytes");
+ ODP_STATIC_ASSERT(CONFIG_PACKET_SEG_SIZE < 0xffff, + "Segment size must be less than 64k (16 bit offsets)"); + -/* Thread local variables */ -typedef struct pool_local_t { - pool_cache_t *cache[ODP_CONFIG_POOLS]; - int thr_id; -} pool_local_t; - pool_table_t *pool_tbl; -static __thread pool_local_t local; +__thread pool_local_t local;
-static inline odp_pool_t pool_index_to_handle(uint32_t pool_idx) -{ - return _odp_cast_scalar(odp_pool_t, pool_idx); -} - -static inline pool_t *pool_from_buf(odp_buffer_t buf) -{ - odp_buffer_hdr_t *buf_hdr = buf_hdl_to_hdr(buf); - - return buf_hdr->pool_ptr; -} - -static inline odp_buffer_hdr_t *buf_hdr_from_index(pool_t *pool, - uint32_t buffer_idx) -{ - uint32_t block_offset; - odp_buffer_hdr_t *buf_hdr; - - block_offset = buffer_idx * pool->block_size; - - /* clang requires cast to uintptr_t */ - buf_hdr = (odp_buffer_hdr_t *)(uintptr_t)&pool->base_addr[block_offset]; - - return buf_hdr; -} - -int odp_pool_init_global(void) +static int generic_pool_init_global(void) { uint32_t i; odp_shm_t shm; @@@ -633,12 -916,20 +665,13 @@@ static void generic_pool_print(odp_pool printf("\n"); }
-odp_pool_t odp_buffer_pool(odp_buffer_t buf) -{ - pool_t *pool = pool_from_buf(buf); - - return pool->pool_hdl; -} - -void odp_pool_param_init(odp_pool_param_t *params) +static void generic_pool_param_init(odp_pool_param_t *params) { memset(params, 0, sizeof(odp_pool_param_t)); + params->pkt.headroom = CONFIG_PACKET_HEADROOM; }
-uint64_t odp_pool_to_u64(odp_pool_t hdl) +static uint64_t generic_pool_to_u64(odp_pool_t hdl) { return _odp_pri(hdl); } diff --cc platform/linux-generic/queue/generic.c index 37f13d03,3f355e69..ab2b9704 --- a/platform/linux-generic/queue/generic.c +++ b/platform/linux-generic/queue/generic.c @@@ -1,9 -1,11 +1,11 @@@ /* Copyright (c) 2013, Linaro Limited * All rights reserved. * - * SPDX-License-Identifier: BSD-3-Clause + * SPDX-License-Identifier: BSD-3-Clause */
+ #include "config.h" + #include <odp/api/queue.h> #include <odp_queue_internal.h> #include <odp_queue_if.h> @@@ -175,16 -175,16 +177,16 @@@ static odp_schedule_group_t generic_que return handle_to_qentry(handle)->s.param.sched.group; }
- static int generic_queue_lock_count(odp_queue_t handle) -static uint32_t queue_lock_count(odp_queue_t handle) ++static uint32_t generic_queue_lock_count(odp_queue_t handle) { queue_entry_t *queue = handle_to_qentry(handle);
return queue->s.param.sched.sync == ODP_SCHED_SYNC_ORDERED ? - (int)queue->s.param.sched.lock_count : -1; + queue->s.param.sched.lock_count : 0; }
-static odp_queue_t queue_create(const char *name, - const odp_queue_param_t *param) +static odp_queue_t generic_queue_create(const char *name, + const odp_queue_param_t *param) { uint32_t i; queue_entry_t *queue; diff --cc platform/linux-generic/queue/scalable.c index 020b790d,07201ce7..00cd8da6 --- a/platform/linux-generic/queue/scalable.c +++ b/platform/linux-generic/queue/scalable.c @@@ -3,8 -3,9 +3,9 @@@ * Copyright (c) 2017, Linaro Limited * All rights reserved. * - * SPDX-License-Identifier: BSD-3-Clause + * SPDX-License-Identifier: BSD-3-Clause */ + #include <config.h>
#include <odp/api/hints.h> #include <odp/api/plat/ticketlock_inlines.h> @@@ -333,16 -341,16 +342,16 @@@ static odp_schedule_group_t scalable_qu return qentry_from_int(queue_from_ext(handle))->s.param.sched.group; }
- static int scalable_queue_lock_count(odp_queue_t handle) -static uint32_t queue_lock_count(odp_queue_t handle) ++static uint32_t scalable_queue_lock_count(odp_queue_t handle) { queue_entry_t *queue = qentry_from_int(queue_from_ext(handle));
return queue->s.param.sched.sync == ODP_SCHED_SYNC_ORDERED ? - (int)queue->s.param.sched.lock_count : -1; + queue->s.param.sched.lock_count : 0; }
-static odp_queue_t queue_create(const char *name, - const odp_queue_param_t *param) +static odp_queue_t scalable_queue_create(const char *name, + const odp_queue_param_t *param) { int queue_idx; odp_queue_t handle = ODP_QUEUE_INVALID; diff --cc platform/linux-generic/queue/subsystem.c index e4c66a2b,00000000..5a88b2df mode 100644,000000..100644 --- a/platform/linux-generic/queue/subsystem.c +++ b/platform/linux-generic/queue/subsystem.c @@@ -1,264 -1,0 +1,267 @@@ +/* Copyright (c) 2017, ARM Limited. All rights reserved. + * + * Copyright (c) 2017, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ ++ ++#include <config.h> ++ +#include <odp/api/queue.h> +#include <odp_internal.h> +#include <odp_debug_internal.h> +#include <odp_queue_subsystem.h> +#include <odp_module.h> + +ODP_SUBSYSTEM_DEFINE(queue, "queue public APIs", QUEUE_SUBSYSTEM_VERSION); + +ODP_SUBSYSTEM_CONSTRUCTOR(queue) +{ + odp_subsystem_constructor(queue); +} + +int odp_queue_init_global(void) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->base.init_global); + + return mod->base.init_global(); +} + +int odp_queue_term_global(void) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->base.term_global); + + return mod->base.term_global(); +} + +int odp_queue_init_local(void) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->base.init_local); + + return mod->base.init_local(); +} + +int odp_queue_term_local(void) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->base.term_local); + + return mod->base.term_local(); +} + +odp_queue_t odp_queue_create(const char *name, + const odp_queue_param_t *param) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->create); + + return mod->create(name, param); +} + +int odp_queue_destroy(odp_queue_t queue_hdl) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->destroy); + + return mod->destroy(queue_hdl); +} + +odp_queue_t odp_queue_lookup(const char *name) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->lookup); + + return mod->lookup(name); +} + +int odp_queue_capability(odp_queue_capability_t *capa) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->capability); + + return mod->capability(capa); +} + +int odp_queue_context_set(odp_queue_t queue_hdl, void *context, + uint32_t len ODP_UNUSED) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->context_set); + + return mod->context_set(queue_hdl, context, len); +} + +void *odp_queue_context(odp_queue_t queue_hdl) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->context); + + return mod->context(queue_hdl); +} + +int odp_queue_enq(odp_queue_t queue_hdl, odp_event_t ev) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->enq); + + return mod->enq(queue_hdl, ev); +} + +int odp_queue_enq_multi(odp_queue_t queue_hdl, + const odp_event_t events[], int num) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + 
ODP_ASSERT(mod); + ODP_ASSERT(mod->enq_multi); + + return mod->enq_multi(queue_hdl, events, num); +} + +odp_event_t odp_queue_deq(odp_queue_t queue_hdl) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->deq); + + return mod->deq(queue_hdl); +} + +int odp_queue_deq_multi(odp_queue_t queue_hdl, odp_event_t events[], int num) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->deq_multi); + + return mod->deq_multi(queue_hdl, events, num); +} + +odp_queue_type_t odp_queue_type(odp_queue_t queue_hdl) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->type); + + return mod->type(queue_hdl); +} + +odp_schedule_sync_t odp_queue_sched_type(odp_queue_t queue_hdl) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->sched_type); + + return mod->sched_type(queue_hdl); +} + +odp_schedule_prio_t odp_queue_sched_prio(odp_queue_t queue_hdl) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->sched_prio); + + return mod->sched_prio(queue_hdl); +} + +odp_schedule_group_t odp_queue_sched_group(odp_queue_t queue_hdl) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->sched_group); + + return mod->sched_group(queue_hdl); +} + - int odp_queue_lock_count(odp_queue_t queue_hdl) ++uint32_t odp_queue_lock_count(odp_queue_t queue_hdl) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->lock_count); + + return mod->lock_count(queue_hdl); +} + +uint64_t odp_queue_to_u64(odp_queue_t queue_hdl) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->to_u64); + + return mod->to_u64(queue_hdl); +} + +void odp_queue_param_init(odp_queue_param_t *params) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->param_init); + + return mod->param_init(params); +} + +int odp_queue_info(odp_queue_t queue_hdl, odp_queue_info_t *info) +{ + odp_queue_module_t *mod = + odp_subsystem_active_module(queue, mod); + + ODP_ASSERT(mod); + ODP_ASSERT(mod->info); + + return mod->info(queue_hdl, info); +} diff --cc platform/linux-generic/schedule/generic.c index 36ed857b,59d924a5..73fef40c --- a/platform/linux-generic/schedule/generic.c +++ b/platform/linux-generic/schedule/generic.c @@@ -1402,36 -1418,24 +1411,37 @@@ const schedule_fn_t schedule_default_f };
/* Fill in scheduler API calls */ -const schedule_api_t schedule_default_api = { - .schedule_wait_time = schedule_wait_time, - .schedule = schedule, - .schedule_multi = schedule_multi, - .schedule_pause = schedule_pause, - .schedule_resume = schedule_resume, - .schedule_release_atomic = schedule_release_atomic, - .schedule_release_ordered = schedule_release_ordered, - .schedule_prefetch = schedule_prefetch, - .schedule_num_prio = schedule_num_prio, - .schedule_group_create = schedule_group_create, - .schedule_group_destroy = schedule_group_destroy, - .schedule_group_lookup = schedule_group_lookup, - .schedule_group_join = schedule_group_join, - .schedule_group_leave = schedule_group_leave, - .schedule_group_thrmask = schedule_group_thrmask, - .schedule_group_info = schedule_group_info, - .schedule_order_lock = schedule_order_lock, - .schedule_order_unlock = schedule_order_unlock, - .schedule_order_unlock_lock = schedule_order_unlock_lock +odp_schedule_module_t schedule_generic = { + .base = { + .name = "schedule_generic", + .init_global = schedule_init_global, + .term_global = schedule_term_global, + .init_local = schedule_init_local, + .term_local = schedule_term_local, + }, - .wait_time = schedule_wait_time, - .schedule = schedule, - .schedule_multi = schedule_multi, - .schedule_pause = schedule_pause, - .schedule_resume = schedule_resume, - .schedule_release_atomic = schedule_release_atomic, - .schedule_release_ordered = schedule_release_ordered, - .schedule_prefetch = schedule_prefetch, - .schedule_num_prio = schedule_num_prio, - .schedule_group_create = schedule_group_create, - .schedule_group_destroy = schedule_group_destroy, - .schedule_group_lookup = schedule_group_lookup, - .schedule_group_join = schedule_group_join, - .schedule_group_leave = schedule_group_leave, - .schedule_group_thrmask = schedule_group_thrmask, - .schedule_group_info = schedule_group_info, - .schedule_order_lock = schedule_order_lock, - .schedule_order_unlock = schedule_order_unlock ++ .wait_time = schedule_wait_time, ++ .schedule = schedule, ++ .schedule_multi = schedule_multi, ++ .schedule_pause = schedule_pause, ++ .schedule_resume = schedule_resume, ++ .schedule_release_atomic = schedule_release_atomic, ++ .schedule_release_ordered = schedule_release_ordered, ++ .schedule_prefetch = schedule_prefetch, ++ .schedule_num_prio = schedule_num_prio, ++ .schedule_group_create = schedule_group_create, ++ .schedule_group_destroy = schedule_group_destroy, ++ .schedule_group_lookup = schedule_group_lookup, ++ .schedule_group_join = schedule_group_join, ++ .schedule_group_leave = schedule_group_leave, ++ .schedule_group_thrmask = schedule_group_thrmask, ++ .schedule_group_info = schedule_group_info, ++ .schedule_order_lock = schedule_order_lock, ++ .schedule_order_unlock = schedule_order_unlock, ++ .schedule_order_unlock_lock = schedule_order_unlock_lock }; + +ODP_MODULE_CONSTRUCTOR(schedule_generic) +{ + odp_module_constructor(&schedule_generic); + odp_subsystem_register_module(schedule, &schedule_generic); +} diff --cc platform/linux-generic/schedule/iquery.c index ac0d0981,1ad918a4..5de22983 --- a/platform/linux-generic/schedule/iquery.c +++ b/platform/linux-generic/schedule/iquery.c @@@ -1371,15 -1378,10 +1383,16 @@@ odp_schedule_module_t schedule_iquery .schedule_group_thrmask = schedule_group_thrmask, .schedule_group_info = schedule_group_info, .schedule_order_lock = schedule_order_lock, - .schedule_order_unlock = schedule_order_unlock + .schedule_order_unlock = schedule_order_unlock, + 
.schedule_order_unlock_lock = schedule_order_unlock_lock };
+ODP_MODULE_CONSTRUCTOR(schedule_iquery) +{ + odp_module_constructor(&schedule_iquery); + odp_subsystem_register_module(schedule, &schedule_iquery); +} + static void thread_set_interest(sched_thread_local_t *thread, unsigned int queue_index, int prio) { diff --cc platform/linux-generic/schedule/scalable.c index 0fba38a7,642e7ee7..2786573f --- a/platform/linux-generic/schedule/scalable.c +++ b/platform/linux-generic/schedule/scalable.c @@@ -592,8 -601,8 +602,9 @@@ void sched_queue_rem(odp_schedule_group
sgi = grp; sg = sg_vec[sgi]; + x = __atomic_sub_fetch(&sg->xcount[prio], 1, __ATOMIC_RELAXED);
+ x = __atomic_sub_fetch(&sg->xcount[prio], 1, __ATOMIC_RELAXED); if (x == 0) { /* Last ODP queue for this priority * Notify all threads in sg->thr_wanted that they @@@ -693,56 -708,174 +710,174 @@@ static inline void _schedule_release_or ts->rctx = NULL; }
- static void poll_pktin(sched_scalable_thread_state_t *ts) + static uint16_t poll_count[ODP_CONFIG_PKTIO_ENTRIES]; + + static void pktio_start(int pktio_idx, + int num_in_queue, + int in_queue_idx[], + odp_queue_t odpq[]) { - uint32_t i, tag, hi, npolls = 0; - int pktio_index, queue_index; + int i, rxq; + queue_entry_t *qentry; + sched_elem_t *elem;
- hi = __atomic_load_n(&pktin_hi, __ATOMIC_RELAXED); - if (hi == 0) - return; + ODP_ASSERT(pktio_idx < ODP_CONFIG_PKTIO_ENTRIES); + for (i = 0; i < num_in_queue; i++) { + rxq = in_queue_idx[i]; + ODP_ASSERT(rxq < PKTIO_MAX_QUEUES); + __atomic_fetch_add(&poll_count[pktio_idx], 1, __ATOMIC_RELAXED); + qentry = qentry_from_ext(odpq[i]); + elem = &qentry->s.sched_elem; + elem->cons_type |= FLAG_PKTIN; /* Set pktin queue flag */ + elem->pktio_idx = pktio_idx; + elem->rx_queue = rxq; + elem->xoffset = sched_pktin_add(elem->sched_grp, + elem->sched_prio); + ODP_ASSERT(elem->schedq != NULL); + schedq_push(elem->schedq, elem); + } + }
- for (i = ts->pktin_next; npolls != hi; i = (i + 1) % hi, npolls++) { - tag = __atomic_load_n(&pktin_tags[i], __ATOMIC_RELAXED); - if (!TAG_IS_READY(tag)) - continue; - if (!__atomic_compare_exchange_n(&pktin_tags[i], &tag, - tag | TAG_BUSY, - true, - __ATOMIC_ACQUIRE, - __ATOMIC_RELAXED)) - continue; - /* Tag grabbed */ - pktio_index = TAG_2_PKTIO(tag); - queue_index = TAG_2_QUEUE(tag); - if (odp_unlikely(pktin_poll(pktio_index, - 1, &queue_index))) { - /* Pktio stopped or closed - * Remove tag from pktin_tags - */ - __atomic_store_n(&pktin_tags[i], - TAG_EMPTY, __ATOMIC_RELAXED); - __atomic_fetch_sub(&pktin_num, - 1, __ATOMIC_RELEASE); - /* Call stop_finalize when all queues - * of the pktio have been removed - */ - if (__atomic_sub_fetch(&pktin_count[pktio_index], 1, - __ATOMIC_RELAXED) == 0) - pktio_stop_finalize(pktio_index); - } else { - /* We don't know whether any packets were found and enqueued - * Write back original tag value to release pktin queue - */ - __atomic_store_n(&pktin_tags[i], tag, __ATOMIC_RELAXED); - /* Do not iterate through all pktin queues every time */ - if ((ts->pktin_poll_cnts & 0xf) != 0) - break; + static void pktio_stop(sched_elem_t *elem) + { + elem->cons_type &= ~FLAG_PKTIN; /* Clear pktin queue flag */ + sched_pktin_rem(elem->sched_grp); + if (__atomic_sub_fetch(&poll_count[elem->pktio_idx], + 1, __ATOMIC_RELAXED) == 0) { + /* Call stop_finalize when all queues + * of the pktio have been removed */ - sched_cb_pktio_stop_finalize(elem->pktio_idx); ++ pktio_stop_finalize(elem->pktio_idx); + } + } + + static bool have_reorder_ctx(sched_scalable_thread_state_t *ts) + { + if (odp_unlikely(bitset_is_null(ts->priv_rvec_free))) { + ts->priv_rvec_free = atom_bitset_xchg(&ts->rvec_free, 0, + __ATOMIC_RELAXED); + if (odp_unlikely(bitset_is_null(ts->priv_rvec_free))) { + /* No free reorder contexts for this thread */ + return false; } } - ODP_ASSERT(i < hi); - ts->pktin_poll_cnts++; - ts->pktin_next = i; + return true; + } + + static inline bool is_pktin(sched_elem_t *elem) + { + return (elem->cons_type & FLAG_PKTIN) != 0; + } + + static inline bool is_atomic(sched_elem_t *elem) + { + return elem->cons_type == (ODP_SCHED_SYNC_ATOMIC | FLAG_PKTIN); + } + + static inline bool is_ordered(sched_elem_t *elem) + { + return elem->cons_type == (ODP_SCHED_SYNC_ORDERED | FLAG_PKTIN); + } + + static int poll_pktin(sched_elem_t *elem, odp_event_t ev[], int num_evts) + { + sched_scalable_thread_state_t *ts = sched_ts; + int num, i; + /* For ordered queues only */ + reorder_context_t *rctx; + reorder_window_t *rwin = NULL; + uint32_t sn; + uint32_t idx; + + if (is_ordered(elem)) { + /* Need reorder context and slot in reorder window */ + rwin = queue_get_rwin((queue_entry_t *)elem); + ODP_ASSERT(rwin != NULL); + if (odp_unlikely(!have_reorder_ctx(ts) || + !rwin_reserve_sc(rwin, &sn))) { + /* Put back queue on source schedq */ + schedq_push(ts->src_schedq, elem); + return 0; + } + /* Slot in reorder window reserved! */ + } + + /* Try to dequeue events from the ingress queue itself */ + num = _odp_queue_deq_sc(elem, ev, num_evts); + if (odp_likely(num > 0)) { + events_dequeued: + if (is_atomic(elem)) { + ts->atomq = elem; /* Remember */ + ts->dequeued += num; + /* Don't push atomic queue on schedq */ + } else /* Parallel or ordered */ { + if (is_ordered(elem)) { + /* Find and initialise an unused reorder + * context. 
*/ + idx = bitset_ffs(ts->priv_rvec_free) - 1; + ts->priv_rvec_free = + bitset_clr(ts->priv_rvec_free, idx); + rctx = &ts->rvec[idx]; + rctx_init(rctx, idx, rwin, sn); + /* Are we in-order or out-of-order? */ + ts->out_of_order = sn != rwin->hc.head; + ts->rctx = rctx; + } + schedq_push(elem->schedq, elem); + } + return num; + } + + /* Ingress queue empty => poll pktio RX queue */ + odp_event_t rx_evts[QUEUE_MULTI_MAX]; - int num_rx = sched_cb_pktin_poll_one(elem->pktio_idx, ++ int num_rx = pktin_poll_one(elem->pktio_idx, + elem->rx_queue, + rx_evts); + if (odp_likely(num_rx > 0)) { + num = num_rx < num_evts ? num_rx : num_evts; + for (i = 0; i < num; i++) { + /* Return events directly to caller */ + ev[i] = rx_evts[i]; + } + if (num_rx > num) { + /* Events remain, enqueue them */ + odp_buffer_hdr_t *bufs[QUEUE_MULTI_MAX]; + + for (i = num; i < num_rx; i++) + bufs[i] = + (odp_buffer_hdr_t *)(void *)rx_evts[i]; + i = _odp_queue_enq_sp(elem, &bufs[num], num_rx - num); + /* Enqueue must succeed as the queue was empty */ + ODP_ASSERT(i == num_rx - num); + } + goto events_dequeued; + } + /* No packets received, reset state and undo side effects */ + if (is_atomic(elem)) + ts->atomq = NULL; + else if (is_ordered(elem)) + rwin_unreserve_sc(rwin, sn); + + if (odp_likely(num_rx == 0)) { + /* RX queue empty, push it to pktin priority schedq */ + sched_queue_t *schedq = ts->src_schedq; + /* Check if queue came from the designated schedq */ + if (schedq == elem->schedq) { + /* Yes, add offset to the pktin priority level + * in order to get alternate schedq */ + schedq += elem->xoffset; + } + /* Else no, queue must have come from alternate schedq */ + schedq_push(schedq, elem); + } else /* num_rx < 0 => pktio stopped or closed */ { + /* Remove queue */ + pktio_stop(elem); + /* Don't push queue to schedq */ + } + + ODP_ASSERT(ts->atomq == NULL); + ODP_ASSERT(!ts->out_of_order); + ODP_ASSERT(ts->rctx == NULL); + return 0; }
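/*
 * Summary of the pktin handling above (annotation, based on the code shown):
 * pktio_start() turns each RX queue into a normally scheduled queue
 * (FLAG_PKTIN set, queue pushed onto a schedq). poll_pktin() first drains
 * events already enqueued on the ingress queue; only when that queue is
 * empty does it poll the pktio RX queue, returning up to num_evts events
 * directly to the caller and enqueuing any surplus back onto the queue.
 * Ordered queues reserve a reorder-window slot before polling and release
 * it again if nothing was received. A negative poll result means the pktio
 * was stopped or closed, so the queue is removed via pktio_stop().
 */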
static int _schedule(odp_queue_t *from, odp_event_t ev[], int num_evts) @@@ -1981,10 -2095,5 +2100,11 @@@ odp_schedule_module_t schedule_scalabl .schedule_group_info = schedule_group_info, .schedule_order_lock = schedule_order_lock, .schedule_order_unlock = schedule_order_unlock, + .schedule_order_unlock_lock = schedule_order_unlock_lock, }; + +ODP_MODULE_CONSTRUCTOR(schedule_scalable) +{ + odp_module_constructor(&schedule_scalable); + odp_subsystem_register_module(schedule, &schedule_scalable); +} diff --cc platform/linux-generic/schedule/sp.c index a40d42d8,7f0404b1..ea7b8342 --- a/platform/linux-generic/schedule/sp.c +++ b/platform/linux-generic/schedule/sp.c @@@ -872,11 -878,6 +884,12 @@@ odp_schedule_module_t schedule_sp = .schedule_group_thrmask = schedule_group_thrmask, .schedule_group_info = schedule_group_info, .schedule_order_lock = schedule_order_lock, - .schedule_order_unlock = schedule_order_unlock + .schedule_order_unlock = schedule_order_unlock, + .schedule_order_unlock_lock = schedule_order_unlock_lock }; + +ODP_MODULE_CONSTRUCTOR(schedule_sp) +{ + odp_module_constructor(&schedule_sp); + odp_subsystem_register_module(schedule, &schedule_sp); +} diff --cc platform/linux-generic/schedule/subsystem.c index 6ca6459e,00000000..ba9a095f mode 100644,000000..100644 --- a/platform/linux-generic/schedule/subsystem.c +++ b/platform/linux-generic/schedule/subsystem.c @@@ -1,272 -1,0 +1,274 @@@ +/* Copyright (c) 2017, ARM Limited. All rights reserved. + * + * Copyright (c) 2017, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + ++#include <config.h> ++ +/* API header files */ +#include <odp.h> + +/* Internal header files */ +#include <odp_debug_internal.h> +#include <odp_internal.h> +#include <odp_module.h> +#include <odp_schedule_subsystem.h> + +ODP_SUBSYSTEM_DEFINE(schedule, "schedule public APIs", + SCHEDULE_SUBSYSTEM_VERSION); + +ODP_SUBSYSTEM_CONSTRUCTOR(schedule) +{ + odp_subsystem_constructor(schedule); +} + +int odp_schedule_init_global(void) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->base.init_global); + + return module->base.init_global(); +} + +int odp_schedule_term_global(void) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->base.term_global); + + return module->base.term_global(); +} + +int odp_schedule_init_local(void) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->base.init_local); + + return module->base.init_local(); +} + +int odp_schedule_term_local(void) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->base.term_local); + + return module->base.term_local(); +} + +uint64_t odp_schedule_wait_time(uint64_t ns) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->wait_time); + + return module->wait_time(ns); +} + +odp_event_t odp_schedule(odp_queue_t *from, uint64_t wait) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule); + + return module->schedule(from, wait); +} + +int odp_schedule_multi(odp_queue_t *from, uint64_t wait, odp_event_t events[], + int num) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, 
module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_multi); + + return module->schedule_multi(from, wait, events, num); +} + +void odp_schedule_pause(void) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_pause); + + return module->schedule_pause(); +} + +void odp_schedule_resume(void) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_resume); + + return module->schedule_resume(); +} + +void odp_schedule_release_atomic(void) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_release_atomic); + + return module->schedule_release_atomic(); +} + +void odp_schedule_release_ordered(void) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_release_ordered); + + return module->schedule_release_ordered(); +} + +void odp_schedule_prefetch(int num) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_prefetch); + + return module->schedule_prefetch(num); +} + +int odp_schedule_num_prio(void) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_num_prio); + + return module->schedule_num_prio(); +} + +odp_schedule_group_t odp_schedule_group_create(const char *name, + const odp_thrmask_t *mask) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_group_create); + + return module->schedule_group_create(name, mask); +} + +int odp_schedule_group_destroy(odp_schedule_group_t group) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_group_destroy); + + return module->schedule_group_destroy(group); +} + +odp_schedule_group_t odp_schedule_group_lookup(const char *name) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_group_lookup); + + return module->schedule_group_lookup(name); +} + +int odp_schedule_group_join(odp_schedule_group_t group, + const odp_thrmask_t *mask) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_group_join); + + return module->schedule_group_join(group, mask); +} + +int odp_schedule_group_leave(odp_schedule_group_t group, + const odp_thrmask_t *mask) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_group_leave); + + return module->schedule_group_leave(group, mask); +} + +int odp_schedule_group_thrmask(odp_schedule_group_t group, + odp_thrmask_t *thrmask) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_group_thrmask); + + return module->schedule_group_thrmask(group, thrmask); +} + +int odp_schedule_group_info(odp_schedule_group_t group, + odp_schedule_group_info_t *info) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_group_info); + + 
return module->schedule_group_info(group, info); +} + +void odp_schedule_order_lock(unsigned lock_index) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_order_lock); + + return module->schedule_order_lock(lock_index); +} + +void odp_schedule_order_unlock(unsigned lock_index) +{ + odp_schedule_module_t *module = + odp_subsystem_active_module(schedule, module); + + ODP_ASSERT(module); + ODP_ASSERT(module->schedule_order_unlock); + + return module->schedule_order_unlock(lock_index); +} diff --cc test/linux-dpdk/m4/configure.m4 index ff6caf97,00000000..84fd72b8 mode 100644,000000..100644 --- a/test/linux-dpdk/m4/configure.m4 +++ b/test/linux-dpdk/m4/configure.m4 @@@ -1,2 -1,0 +1,4 @@@ ++m4_include([test/linux-generic/m4/performance.m4]) ++ +AC_CONFIG_FILES([test/linux-dpdk/Makefile + test/linux-dpdk/validation/api/pktio/Makefile])
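For reviewers unfamiliar with the modular framework used throughout this series: the new queue and schedule subsystem.c files, the per-implementation module structs (schedule_generic, schedule_iquery, schedule_scalable, schedule_sp) and the ODP_MODULE_CONSTRUCTOR() registrations all follow the same dispatch pattern. The self-contained sketch below illustrates that pattern only; the sched_module_t/subsystem_t types, register_module() and api_schedule_wait_time() are simplified stand-ins, not the actual ODP_SUBSYSTEM_DEFINE()/ODP_MODULE_CONSTRUCTOR() implementations from frameworks/modular.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins: a module is a named table of function pointers,
 * a subsystem holds the registered modules and the one currently active. */
typedef struct sched_module {
	const char *name;
	uint64_t (*wait_time)(uint64_t ns);
	struct sched_module *next;
} sched_module_t;

typedef struct {
	sched_module_t *modules;
	sched_module_t *active;
} subsystem_t;

static subsystem_t schedule_subsystem;

/* Stand-in for module registration: the first registered module becomes
 * the active one (real selection in the framework is more involved). */
static void register_module(subsystem_t *ss, sched_module_t *mod)
{
	mod->next = ss->modules;
	ss->modules = mod;
	if (ss->active == NULL)
		ss->active = mod;
}

/* Public API wrapper in the style of schedule/subsystem.c: look up the
 * active module, assert it implements the operation, and forward. */
static uint64_t api_schedule_wait_time(uint64_t ns)
{
	sched_module_t *mod = schedule_subsystem.active;

	assert(mod);
	assert(mod->wait_time);
	return mod->wait_time(ns);
}

/* One concrete module, analogous to schedule_generic/schedule_sp/... */
static uint64_t generic_wait_time(uint64_t ns)
{
	return ns; /* trivial implementation for the sketch */
}

static sched_module_t schedule_generic_sketch = {
	.name = "schedule_generic",
	.wait_time = generic_wait_time,
};

int main(void)
{
	/* In the patch this happens from ODP_MODULE_CONSTRUCTOR() at load
	 * time; here it is done explicitly. */
	register_module(&schedule_subsystem, &schedule_generic_sketch);

	printf("%s -> %llu\n", schedule_subsystem.active->name,
	       (unsigned long long)api_schedule_wait_time(100));
	return 0;
}

The wrappers in the patch differ mainly in using ODP_ASSERT() and odp_subsystem_active_module() for the lookup, and in registering each module from ODP_MODULE_CONSTRUCTOR() so it is added to its subsystem automatically when the library is loaded.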
-----------------------------------------------------------------------
Summary of changes: .gitignore | 3 + .travis.yml | 166 ++- DEPENDENCIES | 31 +- Makefile.am | 10 +- configure.ac | 149 +- doc/images/segment.svg | 181 +-- doc/users-guide/users-guide-crypto.adoc | 120 +- example/Makefile.inc | 8 +- example/ddf_ifs/Makefile.am | 2 +- example/generator/odp_generator.c | 304 ++-- example/m4/configure.m4 | 11 +- frameworks/modular/odp_module.c | 2 + helper/.gitignore | 3 - helper/Makefile.am | 21 +- helper/chksum.c | 2 + helper/cuckootable.c | 2 + helper/eth.c | 2 + helper/hashtable.c | 3 + helper/include/odp/helper/chksum.h | 2 +- helper/include/odp/helper/ip.h | 92 +- helper/{ => include}/odph_debug.h | 0 helper/{ => include}/odph_list_internal.h | 22 +- helper/ip.c | 2 + helper/iplookuptable.c | 2 + helper/lineartable.c | 2 + helper/linux/thread.c | 2 + helper/m4/configure.m4 | 9 +- helper/test/Makefile.am | 23 +- helper/test/chksum.c | 14 +- helper/test/cuckootable.c | 2 + helper/test/iplookuptable.c | 2 + helper/test/linux/process.c | 9 +- helper/test/linux/pthread.c | 9 +- helper/test/odpthreads.c | 9 +- helper/test/parse.c | 2 + helper/test/table.c | 2 + helper/threads.c | 2 + include/odp/api/spec/crypto.h | 9 +- include/odp/api/spec/event.h | 4 + include/odp/api/spec/ipsec.h | 18 +- include/odp/api/spec/packet.h | 52 + include/odp/api/spec/packet_io.h | 59 +- include/odp/api/spec/pool.h | 14 + include/odp/api/spec/queue.h | 9 +- include/odp/api/spec/schedule.h | 28 +- include/odp/api/spec/schedule_types.h | 2 +- include/odp/api/spec/timer.h | 36 +- include/odp/arch/default/api/abi/packet.h | 2 +- m4/ax_check_compile_flag.m4 | 12 +- platform/Makefile.inc | 7 +- platform/linux-dpdk/Makefile.am | 29 +- platform/linux-dpdk/buffer/dpdk.c | 2 + .../linux-dpdk/include/odp_packet_io_internal.h | 3 + platform/linux-dpdk/m4/configure.m4 | 25 +- platform/linux-dpdk/odp_crypto.c | 2 + platform/linux-dpdk/odp_init.c | 2 + platform/linux-dpdk/odp_packet.c | 2 + platform/linux-dpdk/pktio/dpdk.h | 2 + platform/linux-dpdk/pktio/subsystem.c | 2 + platform/linux-dpdk/pool/dpdk.c | 1 + platform/linux-generic/Makefile.am | 23 +- platform/linux-generic/Makefile.inc | 2 - platform/linux-generic/_fdserver.c | 2 + platform/linux-generic/_ishm.c | 2 + platform/linux-generic/_ishmphy.c | 2 + platform/linux-generic/_ishmpool.c | 1 + platform/linux-generic/_modules.c | 1 + platform/linux-generic/arch/arm/odp_cpu_arch.c | 2 + .../linux-generic/arch/arm/odp_sysinfo_parse.c | 7 +- platform/linux-generic/arch/default/odp_cpu_arch.c | 2 + .../linux-generic/arch/default/odp_sysinfo_parse.c | 7 +- platform/linux-generic/arch/mips64/odp_cpu_arch.c | 2 + .../linux-generic/arch/mips64/odp_sysinfo_parse.c | 7 +- platform/linux-generic/arch/powerpc/odp_cpu_arch.c | 2 + .../linux-generic/arch/powerpc/odp_sysinfo_parse.c | 7 +- platform/linux-generic/arch/x86/cpu_flags.c | 4 +- platform/linux-generic/arch/x86/odp_cpu_arch.c | 2 + .../linux-generic/arch/x86/odp_sysinfo_parse.c | 40 +- platform/linux-generic/buffer/generic.c | 2 + platform/linux-generic/drv_driver.c | 2 + .../include/odp/api/plat/packet_inlines.h | 50 +- .../include/odp/api/plat/packet_inlines_api.h | 16 + .../include/odp/api/plat/packet_types.h | 38 +- .../linux-generic/include/odp_buffer_internal.h | 74 +- .../include/odp_classification_internal.h | 7 - platform/linux-generic/include/odp_internal.h | 1 + .../linux-generic/include/odp_packet_internal.h | 133 +- .../linux-generic/include/odp_packet_io_internal.h | 5 + platform/linux-generic/include/odp_pool_internal.h | 1 + .../include/odp_queue_scalable_internal.h | 2 + 
.../linux-generic/include/odp_queue_subsystem.h | 2 +- platform/linux-generic/include/odp_schedule_if.h | 2 +- .../linux-generic/include/odp_schedule_scalable.h | 25 +- .../include/odp_schedule_scalable_ordered.h | 13 +- .../linux-generic/include/odp_schedule_subsystem.h | 3 + .../linux-generic/include/odp_timer_internal.h | 3 - platform/linux-generic/m4/odp_dpdk.m4 | 6 +- platform/linux-generic/m4/odp_netmap.m4 | 3 +- platform/linux-generic/m4/odp_pcap.m4 | 2 +- platform/linux-generic/m4/odp_schedule.m4 | 66 +- platform/linux-generic/odp_atomic.c | 2 + platform/linux-generic/odp_barrier.c | 2 + platform/linux-generic/odp_bitmap.c | 2 + platform/linux-generic/odp_byteorder.c | 2 + platform/linux-generic/odp_classification.c | 17 +- platform/linux-generic/odp_cpu.c | 2 + platform/linux-generic/odp_cpumask.c | 2 + platform/linux-generic/odp_cpumask_task.c | 2 + platform/linux-generic/odp_crypto.c | 73 +- platform/linux-generic/odp_errno.c | 2 + platform/linux-generic/odp_event.c | 2 + platform/linux-generic/odp_hash.c | 2 + platform/linux-generic/odp_impl.c | 2 + platform/linux-generic/odp_init.c | 3 + platform/linux-generic/odp_ipsec.c | 2 +- platform/linux-generic/odp_name_table.c | 2 + platform/linux-generic/odp_packet.c | 1516 +++++++++----------- platform/linux-generic/odp_packet_flags.c | 2 + platform/linux-generic/odp_packet_io.c | 53 +- platform/linux-generic/odp_pkt_queue.c | 2 + platform/linux-generic/odp_queue_if.c | 3 + platform/linux-generic/odp_rwlock.c | 2 + platform/linux-generic/odp_rwlock_recursive.c | 2 + platform/linux-generic/odp_schedule_if.c | 2 + platform/linux-generic/odp_shared_memory.c | 2 + platform/linux-generic/odp_sorted_list.c | 2 + platform/linux-generic/odp_spinlock.c | 2 + platform/linux-generic/odp_spinlock_recursive.c | 2 + platform/linux-generic/odp_std_clib.c | 2 + platform/linux-generic/odp_sync.c | 2 + platform/linux-generic/odp_system_info.c | 39 + platform/linux-generic/odp_thread.c | 2 + platform/linux-generic/odp_thrmask.c | 2 + platform/linux-generic/odp_ticketlock.c | 2 + platform/linux-generic/odp_time.c | 2 + platform/linux-generic/odp_timer.c | 15 +- platform/linux-generic/odp_timer_wheel.c | 2 + platform/linux-generic/odp_traffic_mngr.c | 4 + platform/linux-generic/odp_version.c | 2 + platform/linux-generic/odp_weak.c | 2 + platform/linux-generic/pktio/common.c | 2 + platform/linux-generic/pktio/dpdk.c | 280 +++- platform/linux-generic/pktio/ethtool.c | 14 +- platform/linux-generic/pktio/ipc.c | 3 + platform/linux-generic/pktio/loopback.c | 22 +- platform/linux-generic/pktio/netmap.c | 2 + platform/linux-generic/pktio/pcap.c | 2 + platform/linux-generic/pktio/ring.c | 6 + platform/linux-generic/pktio/socket.c | 4 +- platform/linux-generic/pktio/socket_mmap.c | 2 + platform/linux-generic/pktio/sysfs.c | 2 + platform/linux-generic/pktio/tap.c | 2 + platform/linux-generic/pool/generic.c | 37 +- platform/linux-generic/queue/generic.c | 6 +- platform/linux-generic/queue/scalable.c | 39 +- platform/linux-generic/queue/subsystem.c | 5 +- platform/linux-generic/schedule/generic.c | 58 +- platform/linux-generic/schedule/iquery.c | 25 +- platform/linux-generic/schedule/scalable.c | 556 ++++--- platform/linux-generic/schedule/scalable_ordered.c | 48 +- platform/linux-generic/schedule/sp.c | 23 +- platform/linux-generic/schedule/subsystem.c | 2 + test/Makefile.inc | 9 +- test/common_plat/common/Makefile.am | 9 +- test/common_plat/common/mask_common.c | 2 + test/common_plat/common/odp_cunit_common.c | 4 +- test/common_plat/m4/miscellaneous.m4 | 9 +- 
test/common_plat/m4/performance.m4 | 9 +- test/common_plat/m4/validation.m4 | 71 +- test/common_plat/miscellaneous/Makefile.am | 6 +- test/common_plat/performance/Makefile.am | 34 +- test/common_plat/performance/odp_bench_packet.c | 2 + test/common_plat/performance/odp_crypto.c | 22 +- test/common_plat/performance/odp_l2fwd.c | 89 +- test/common_plat/performance/odp_pktio_ordered.c | 4 +- test/common_plat/performance/odp_pktio_perf.c | 3 + test/common_plat/performance/odp_sched_latency.c | 2 + test/common_plat/performance/odp_scheduling.c | 2 + test/common_plat/validation/Makefile.am | 2 +- test/common_plat/validation/api/Makefile.am | 3 - test/common_plat/validation/api/Makefile.inc | 8 +- test/common_plat/validation/api/atomic/Makefile.am | 9 +- test/common_plat/validation/api/atomic/atomic.c | 2 + .../validation/api/atomic/atomic_main.c | 2 + .../common_plat/validation/api/barrier/Makefile.am | 9 +- test/common_plat/validation/api/barrier/barrier.c | 2 + .../validation/api/barrier/barrier_main.c | 2 + test/common_plat/validation/api/buffer/Makefile.am | 9 +- test/common_plat/validation/api/buffer/buffer.c | 2 + .../validation/api/buffer/buffer_main.c | 3 + .../validation/api/classification/Makefile.am | 20 +- .../validation/api/classification/classification.c | 2 + .../api/classification/classification_main.c | 2 + .../api/classification/odp_classification_basic.c | 2 + .../api/classification/odp_classification_common.c | 5 +- .../classification/odp_classification_test_pmr.c | 6 +- .../api/classification/odp_classification_tests.c | 11 +- .../common_plat/validation/api/cpumask/Makefile.am | 10 +- test/common_plat/validation/api/cpumask/cpumask.c | 2 + .../validation/api/cpumask/cpumask_main.c | 3 + test/common_plat/validation/api/crypto/Makefile.am | 16 +- test/common_plat/validation/api/crypto/crypto.c | 2 + test/common_plat/validation/api/crypto/crypto.h | 16 +- .../validation/api/crypto/crypto_main.c | 2 + .../validation/api/crypto/odp_crypto_test_inp.c | 1118 +++------------ .../validation/api/crypto/test_vectors.h | 984 +++++++------ .../validation/api/crypto/test_vectors_len.h | 45 +- test/common_plat/validation/api/errno/Makefile.am | 9 +- test/common_plat/validation/api/errno/errno.c | 2 + test/common_plat/validation/api/errno/errno_main.c | 2 + test/common_plat/validation/api/hash/Makefile.am | 9 +- test/common_plat/validation/api/hash/hash.c | 2 + test/common_plat/validation/api/hash/hash_main.c | 2 + test/common_plat/validation/api/init/Makefile.am | 16 +- test/common_plat/validation/api/init/init.c | 2 + .../validation/api/init/init_main_abort.c | 3 + .../validation/api/init/init_main_log.c | 3 + .../common_plat/validation/api/init/init_main_ok.c | 3 + test/common_plat/validation/api/lock/Makefile.am | 9 +- test/common_plat/validation/api/lock/lock.c | 2 + test/common_plat/validation/api/lock/lock_main.c | 2 + test/common_plat/validation/api/packet/Makefile.am | 9 +- test/common_plat/validation/api/packet/packet.c | 111 +- .../validation/api/packet/packet_main.c | 2 + test/common_plat/validation/api/pktio/Makefile.am | 9 +- test/common_plat/validation/api/pktio/parser.c | 3 + test/common_plat/validation/api/pktio/pktio.c | 3 + test/common_plat/validation/api/pktio/pktio_main.c | 2 + test/common_plat/validation/api/pool/Makefile.am | 9 +- test/common_plat/validation/api/pool/pool.c | 2 + test/common_plat/validation/api/pool/pool_main.c | 2 + test/common_plat/validation/api/queue/Makefile.am | 9 +- test/common_plat/validation/api/queue/queue.c | 10 +- 
test/common_plat/validation/api/queue/queue_main.c | 2 + test/common_plat/validation/api/random/Makefile.am | 9 +- test/common_plat/validation/api/random/random.c | 2 + .../validation/api/random/random_main.c | 2 + .../validation/api/scheduler/Makefile.am | 9 +- .../validation/api/scheduler/scheduler.c | 22 +- .../validation/api/scheduler/scheduler_main.c | 2 + test/common_plat/validation/api/shmem/Makefile.am | 9 +- test/common_plat/validation/api/shmem/shmem.c | 2 + test/common_plat/validation/api/shmem/shmem_main.c | 2 + .../validation/api/std_clib/Makefile.am | 9 +- .../common_plat/validation/api/std_clib/std_clib.c | 2 + .../validation/api/std_clib/std_clib_main.c | 2 + test/common_plat/validation/api/system/Makefile.am | 9 +- test/common_plat/validation/api/system/system.c | 2 + .../validation/api/system/system_main.c | 2 + test/common_plat/validation/api/thread/Makefile.am | 12 +- test/common_plat/validation/api/thread/thread.c | 2 + .../validation/api/thread/thread_main.c | 2 + test/common_plat/validation/api/time/Makefile.am | 8 +- test/common_plat/validation/api/time/time.c | 2 + test/common_plat/validation/api/time/time_main.c | 2 + test/common_plat/validation/api/timer/Makefile.am | 9 +- test/common_plat/validation/api/timer/timer.c | 2 + test/common_plat/validation/api/timer/timer_main.c | 2 + .../validation/api/traffic_mngr/Makefile.am | 8 +- .../validation/api/traffic_mngr/traffic_mngr.c | 4 + .../api/traffic_mngr/traffic_mngr_main.c | 2 + test/common_plat/validation/drv/Makefile.am | 3 - test/linux-dpdk/m4/configure.m4 | 2 + test/linux-generic/Makefile.inc | 4 +- test/linux-generic/m4/performance.m4 | 9 +- test/linux-generic/mmap_vlan_ins/Makefile.am | 9 +- test/linux-generic/mmap_vlan_ins/mmap_vlan_ins.c | 2 + test/linux-generic/performance/Makefile.am | 2 +- test/linux-generic/pktio_ipc/Makefile.am | 13 +- test/linux-generic/pktio_ipc/ipc_common.c | 14 +- test/linux-generic/pktio_ipc/ipc_common.h | 4 +- test/linux-generic/pktio_ipc/pktio_ipc1.c | 42 +- test/linux-generic/pktio_ipc/pktio_ipc2.c | 38 +- test/linux-generic/ring/Makefile.am | 19 +- test/linux-generic/ring/ring_basic.c | 2 + test/linux-generic/ring/ring_main.c | 2 + test/linux-generic/ring/ring_stress.c | 4 + test/linux-generic/ring/ring_suites.c | 2 + .../linux-generic/validation/api/shmem/Makefile.am | 16 +- test/linux-generic/validation/api/shmem/shmem.h | 21 - .../validation/api/shmem/shmem_linux.c | 2 + .../validation/api/shmem/shmem_odp1.c | 2 + .../validation/api/shmem/shmem_odp2.c | 2 + 283 files changed, 4401 insertions(+), 3813 deletions(-) delete mode 100644 helper/.gitignore rename helper/{ => include}/odph_debug.h (100%) rename helper/{ => include}/odph_list_internal.h (73%) delete mode 100644 platform/linux-generic/Makefile.inc delete mode 100644 test/linux-generic/validation/api/shmem/shmem.h
hooks/post-receive